path: root/Carpet/CarpetIOHDF5
* CarpetIOHDF5: Close checkpoint files in meta mode (Erik Schnetter, 2005-11-19)
  darcs-hash:20051119212845-dae7b-2ca10604822dbe488f621a9dcf039fe055c6d8dc.gz
* CarpetIOHDF5: Put unique build ID into output files (Erik Schnetter, 2005-11-19)
  darcs-hash:20051119212808-dae7b-023ea8552306cda54ab6204ee338809a55228a3b.gz
* CarpetIOHDF5: Remove unused local variable in Input.cc (Erik Schnetter, 2005-11-19)
  darcs-hash:20051119212959-dae7b-d50e2cc4c8a980720b44cfafd9504eb201e3aa8b.gz
* CarpetIOHDF5: Change #include "hdf5.h" to <hdf5.h> (Erik Schnetter, 2005-11-19)
  darcs-hash:20051119212924-dae7b-3447198d7a1d4090ffc6cff4cde12ccf037c5e8f.gz
* CarpetIOHDF5: bugfix in recovery code (Thomas Radke, 2005-11-20)
  Before reading a variable from a dataset, also check that its timelevel is
  valid. This fixes problems when recovering from a checkpoint (created with
  'Carpet::enable_all_storage = true') while this boolean parameter is now set
  to 'false'.
  darcs-hash:20051120134642-776a0-4fe21611ca733ecb42f8e2a82bfa1fe51a5d9e81.gz
* CarpetIOHDF5: bugfix in checkpointing code (Thomas Radke, 2005-11-16)
  Don't remove an initial data checkpoint file if IO::checkpoint_keep is set
  to a value larger than 0.
  darcs-hash:20051116133326-776a0-5fa5bd333cd26434609e920cf49434551db9ff2e.gz
* CarpetIOHDF5: fixed typo in checkpointing example in thorn documentation (Thomas Radke, 2005-11-16)
  darcs-hash:20051116132635-776a0-1ea49bd1b181bc7a44b9cfe2638326e1937677dd.gz
* CarpetIOHDF5: implement "out_unchunked" option for individual variables to be output (Thomas Radke, 2005-10-05)
  Apart from setting the parameter IO::out_unchunked to choose the output mode
  for all variables, this can be overridden for individual variables in an
  option string appended to the variable's name in the IOHDF5::out_vars
  parameter.
  darcs-hash:20051005100152-776a0-9f6f2e4b691a46b12aefab555440625f39836aaf.gz
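  A parameter-file sketch of this per-variable override (the variable names
  and option values here are illustrative, not taken from the commit; check
  the thorn documentation for the exact option-string syntax):

```
# Default output mode for all variables
IO::out_unchunked = "no"

# Override for one variable via an option string appended to its name
# (variable names and the option value are illustrative)
IOHDF5::out_vars = "wavetoy::phi{out_unchunked = yes} wavetoy::psi"
```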
* CarpetIOHDF5: respect setting of out_unchunked parameter also in single-processor runs (Thomas Radke, 2005-09-18)
  For single-processor runs, CarpetIOHDF5 unconditionally wrote HDF5 output
  files in unchunked format. As for multi-processor runs, the user can now
  choose between chunked and unchunked output through the out_unchunked
  parameter.
  darcs-hash:20050918214401-776a0-882e8b1e6dcee4d25330bc11d4b6973e297f1a52.gz
* CarpetIOHDF5: don't checkpoint variables tagged as 'checkpoint="no"' (Thomas Radke, 2005-09-13)
  darcs-hash:20050913162936-776a0-7b3fa7d3f08c37321b6ea836178168131fa98964.gz
* CarpetIOHDF5: update thorn documentation (Thomas Radke, 2005-09-13)
  Document that the CarpetIOHDF5 output method, when invoked via the flesh
  API, must be called in level mode. Also document how to trigger output of
  the same variable at intermediate timesteps.
  darcs-hash:20050913162656-776a0-bdc0dda2138176f9aea3baee6586070455e2dbc5.gz
* CarpetIOHDF5: fixed a problem with IOHDF5::use_reflevels_from_checkpoint = "yes" during recovery (Thomas Radke, 2005-09-06)
  When IOHDF5::use_reflevels_from_checkpoint is set, the parameter
  CarpetLib::refinement_levels is steered to take the number of levels found
  in the checkpoint. This steering used to happen during parameter recovery,
  where it didn't have any effect if the parameter had already been set in
  the parfile. Now it's done in a separate routine scheduled at STARTUP.
  darcs-hash:20050906140808-776a0-bae608c103b161ac67690da2a8803bdff84cf2f4.gz
* CarpetIOHDF5: bugfix for outputting the same variable into multiple files (Thomas Radke, 2005-08-23)
  Before a variable is output, it is checked whether it has already been
  output during the current iteration (e.g. due to triggers). This check was
  only variable-based and therefore caused problems when the same variable
  was to be output to multiple files (using different alias names). Now the
  check also takes the output filenames into account.
  darcs-hash:20050823135345-776a0-1555987b4aee34bb646e67f491375dbcc44dddad.gz
* Carpet*: generalise the comm_state class for collective buffer communications (Thomas Radke, 2005-08-15)
  CarpetLib's comm_state class (actually, it's still just a struct) has been
  extended to handle collective buffer communications for all possible C
  datatypes at the same time. This makes it unnecessary for the higher-level
  communication routines to loop over each individual datatype separately.
  darcs-hash:20050815150023-776a0-dddc1aca7ccaebae872f9f451b2c3595cd951fed.gz
* CarpetIOHDF5: optimise I/O accesses for the case when all processors recover from a single chunked checkpoint file (Thomas Radke, 2005-07-28)
  The list of HDF5 datasets to process is now reordered so that
  processor-local components are processed first. When processing the list of
  datasets, only those will be reopened from which more data is to be read.
  The check whether a variable of a given timelevel has been fully recovered
  was improved. This closes http://bugs.carpetcode.org/show_bug.cgi?id=87
  "Single-file, many-cpu recovery very slow".
  darcs-hash:20050728122546-776a0-21dfceef87e12e72b8a0ccb0911c76066521e192.gz
* Carpet{IOScalar,IOHDF5,IOStreamedHDF5}: use C++ strings (rather than Util_asprintf()) to construct C output strings (Thomas Radke, 2005-07-26)
  There was a small memory leak in using Util_asprintf() to continuously
  append to an allocated string buffer. The code has now been rewritten to
  use C++ string class objects, which are destroyed automatically. This
  closes http://bugs.carpetcode.org/show_bug.cgi?id=89.
  darcs-hash:20050726122331-776a0-874ccd0d5766b85b1110fcd6f501a7e39c35e965.gz
* CarpetIOHDF5: Documentation update (Erik Schnetter, 2005-07-26)
  Use \textless etc. instead of $<$.
  darcs-hash:20050726101551-891bb-b6e8fb5f3fb540bf449626fa3650b4870bb444de.gz
* CarpetIOHDF5: remove an old checkpoint only if the current one has been written successfully by all output processors (Thomas Radke, 2005-07-25)
  darcs-hash:20050725150549-776a0-fe03ace195af6a723af91ca7d0a63eaeae25b050.gz
* Fixed critical bug in Checkpointing / Zero-Variable groups (scott, 2005-07-13)
  Apparently there are groups with 0 variables. For them,
  CCTK_FirstVarIndexI(group) returns -2. Since IOUtil does not check the
  validity of the varindex for which it creates an IO request, this can
  potentially lead to memory corruption, and in fact does so on the Itanium-2
  architecture. Fix: the checkpointing routine now does nothing for variable
  groups with 0 variables.
  darcs-hash:20050713140545-34d71-d9966cc8d510dd7a85a42ef7dc683491b2d2c895.gz
* CarpetIOHDF5: brief note on how to visualise parallel HDF5 output data (Thomas Radke, 2005-06-28)
  darcs-hash:20050628153314-776a0-d7649e76b4fcf801a37bbfae4eb68bbf7b0532f5.gz
* CarpetIOHDF5: add "cctk_bbox" and "cctk_nghostzones" attributes to each dataset (Thomas Radke, 2005-06-28)
  darcs-hash:20050628113206-776a0-3ed3eae73dcc785de93273c16df556c3c6531de3.gz
* CarpetIOHDF5: provide include header "CarpetIOHDF5.hh" (Thomas Radke, 2005-06-25)
  darcs-hash:20050625173934-776a0-b0c78122d773961cafbb3e962e9b7252a85ae74a.gz
* CarpetIOHDF5: implement parallel I/O (Thomas Radke, 2005-06-24)
  Like CactusPUGHIO/IOHDF5, CarpetIOHDF5 now also provides parallel I/O for
  data and checkpointing/recovery. The I/O mode is set via IOUtil's
  parameters IO::out_mode and IO::out_unchunked, with parallel output to
  chunked files (one per processor) being the default. The recovery and
  filereader interface can read any type of CarpetIOHDF5 data file
  transparently, regardless of how it was created (serially or in parallel,
  or on a different number of processors). See the updated thorn
  documentation for details.
  darcs-hash:20050624123924-776a0-5639aee9677f0362fc94c80c534b47fd1b07ae74.gz
* CarpetIOHDF5: fixed filereader testsuite parfile (Thomas Radke, 2005-06-23)
  Filereader files are found in IO::filereader_ID_dir and not in
  IO::recover_dir.
  darcs-hash:20050623155818-776a0-cc24468227060880e5cb1a7c6259ffceb7536fad.gz
* CarpetIOHDF5: fix testsuite output files (Thomas Radke, 2005-06-14)
  The CarpetIOASCII 1D output files now have two additional comment lines:
    # 1D ASCII output created by CarpetIOASCII
    #
  This caused the testsuite to break.
  darcs-hash:20050614154107-776a0-c530d6ce356996d8c6f14b63c7d4ec3dd5c56b8e.gz
* CarpetIOHDF5: added logic to keep specific checkpoints around (Thomas Radke, 2005-06-10)
  The parameter IO::checkpoint_keep is now steerable at any time (after
  you've updated CactusBase/IOUtil/param.ccl) so that you can keep specific
  checkpoints around. Please see the thorn documentation of
  CactusBase/IOUtil for details.
  darcs-hash:20050610091144-776a0-b5e90353851eb1d7871f16b05d1b47748599d27a.gz
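  A minimal parameter-file sketch of how this might be used (the numeric
  values and directory name are illustrative, not taken from the commit;
  IO::checkpoint_every and IO::checkpoint_dir are standard IOUtil
  parameters):

```
# Write a checkpoint every 256 iterations (values illustrative)
IO::checkpoint_every = 256
IO::checkpoint_dir   = "checkpoints"

# Keep the three most recent checkpoints instead of only the last one;
# with the updated IOUtil this is steerable at any time
IO::checkpoint_keep = 3
```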
* CarpetIOHDF5: resolve conflict (Erik Schnetter, 2005-06-06)
  darcs-hash:20050606164745-891bb-bfdba5217c624e406550ea12d38969eed76c51ed.gz
* CarpetIOHDF5: API for H5Sselect_hyperslab() has changed in HDF5 1.6.4 (Thomas Radke, 2005-05-12)
  The second argument to H5Sselect_hyperslab must be a 'const hsize_t
  start[]' in the latest release, 1.6.4. It used to be 'const hssize_t
  start[]' in all previous releases.
  darcs-hash:20050512101748-776a0-068b805f7e8c6399e96c38d8689d0e246b708cf9.gz
* CarpetIOHDF5: update (chunked) ASCII output files for 4-processor recovery testsuite (Thomas Radke, 2005-06-06)
  The processor decomposition has changed recently, so the chunked output
  looks different now (but still has the same results).
  darcs-hash:20050606115735-776a0-c1d26041bb4cbb73b76908f0a42b086772fd131b.gz
* CarpetIOHDF5: renewed checkpoint datafile (Thomas Radke, 2005-06-04)
  The old checkpoint file contained additional variables from my local
  version of WaveToyC.
  darcs-hash:20050604153525-776a0-ee642d7a86461be70c9aa64a9f614b166794fc34.gz
* CarpetIOHDF5: Put unique simulation ID into each file (Erik Schnetter, 2005-06-05)
  Add the unique simulation ID as an attribute to each dataset.
  darcs-hash:20050605221351-891bb-05a025dbdefc60c7dc476e4b7b50ff608bdacd61.gz
* CarpetIOHDF5: implement parallel recovery (Thomas Radke, 2005-05-27)
  All processors open the checkpoint file and recover their portions from it
  in parallel. No MPI communication is needed anymore.
  darcs-hash:20050527124239-776a0-25d4fa77b50ea22fb2b25c87e399d95090c7eaf2.gz
* CarpetIOHDF5: don't use buffer zones for the wavetoy testsuite (Thomas Radke, 2005-05-26)
  The wavetoy checkpoint parfile set CarpetLib::buffer_width = 6, which is
  not necessary for wavetoy. It even caused slight differences on 16
  processors because there apparently were more buffer zones than real
  gridpoints on a processor. Carpet should test for this case.
  darcs-hash:20050526161905-776a0-843b778a8175c27e966ae7d237c46d843b7dc75b.gz
* CarpetIOHDF5: code cleanup (Thomas Radke, 2005-05-21)
  Removed old code which was used to synchronise variables after recovery.
  darcs-hash:20050521171927-776a0-fe5e3bf11fd5f2a7ddd8b055fd3a3ade3a9ff625.gz
* CarpetIOHDF5: API for H5Sselect_hyperslab() has changed in HDF5 1.6.4 (Thomas Radke, 2005-05-12)
  The second argument to H5Sselect_hyperslab must be a 'const hsize_t
  start[]' in the latest release, 1.6.4. It used to be 'const hssize_t
  start[]' in all previous releases.
  darcs-hash:20050512101740-776a0-3581a3be23f057105585cf57b384a166f30aec29.gz
* global: Add varying refinement factors (Erik Schnetter, 2005-05-01)
  Add support for varying refinement factors. The spatial refinement factors
  can be different in different directions, can be different from the time
  refinement factor, and can be different on each level. (However, the
  underlying spatial transport operators currently do not handle any factors
  other than two.)
  darcs-hash:20050501205010-891bb-8d3a74abaad55ee6c77ef18d51fca2a2b69740de.gz
* CarpetIOHDF5: bugfix for checkpoint/recovery (Thomas Radke, 2005-04-12)
  CarpetIOHDF5 used to output unchunked data only, i.e. all ghostzones and
  boundary zones were cut off from the bboxes to be output. This caused
  problems after recovery: uninitialised ghostzones led to wrong results.
  The obvious solution, calling CCTK_SyncGroup() for all groups after
  recovery, was also problematic because (1) it synchronised only the
  current timelevel and (2) boundary prolongation was done in a scheduling
  order different from the regular order used during checkpointing.
  The solution now implemented by this patch is to always write checkpoint
  files in chunked mode (which includes all ghostzones and boundary zones).
  This also makes synchronisation of all groups after recovery unnecessary.
  Regular HDF5 output files can also be written in chunked mode, but the
  default is (still) unchunked. A new boolean parameter
  IOHDF5::out_unchunked (with default value "yes") was introduced to toggle
  this option. Note that this parameter has the same meaning as
  IO::out_unchunked but an opposite default value; this is the only reason
  why IOHDF5::out_unchunked was introduced.
  darcs-hash:20050412161430-776a0-d5efd21ecdbe41ad9a804014b816acad0cd71b2c.gz
* CarpetIOHDF5: Use CCTK_REAL instead of double for meta data (Erik Schnetter, 2005-04-11)
  Use the type CCTK_REAL instead of double for storing meta data in the HDF5
  files. This is necessary if CCTK_REAL has more precision than double.
  darcs-hash:20050411170627-891bb-374e4c2581155d825f9a1925b1d4319051bc36d6.gz
* CarpetIOHDF5: use CCTK_ActiveTimeLevelsGI() instead of CCTK_GroupStorageIncrease() to find out the number of timelevels to checkpoint (Thomas Radke, 2005-04-11)
  darcs-hash:20050411121428-776a0-13b4d0626e749b2e20079d8101e7a5e9e57e18e1.gz
* CarpetIOHDF5: optimisation of syncing all variables after recovery (Thomas Radke, 2005-04-07)
  Synchronise all variables of the same vartype at once by calling
  Carpet::SyncProlongateGroups().
  darcs-hash:20050407153843-776a0-e567718c6ba858f4c074c5ec65dd0fc5cb373526.gz
* CarpetIOHDF5: pass vartype in comm_state constructor to make use of collective communication buffers (Thomas Radke, 2005-03-31)
  darcs-hash:20050331080034-776a0-629822f876800af1b76d5d43ca131f5373e991a4.gz
* CarpetIOHDF5: fix for IOHDF5::use_reflevels_from_checkpoint = "yes" for the case when CarpetRegrid::refinement_levels was also set in the recovery parfile (Thomas Radke, 2005-03-21)
  darcs-hash:20050321110931-776a0-6fd09edfbd764f2b4d3f296a3f8c429f1000e407.gz
* CarpetIOHDF5: write a "carpet_version" integer attribute to tag data output and checkpoint files (Thomas Radke, 2005-02-14)
  darcs-hash:20050214163413-776a0-77171dd6e4746b5d889bfcbe515c0d6f59c6ba10.gz
* CarpetIOHDF5: read the 'group_timelevel' attribute as a positive integer to retain backwards compatibility with older checkpoints (Thomas Radke, 2005-02-14)
  darcs-hash:20050214145219-776a0-ddc4e8af96f31b02aaff3ace9208b6b1d8dbc96c.gz
* CarpetIOHDF5: Use positive timelevels (Erik Schnetter, 2005-02-07)
  darcs-hash:20050207131924-891bb-0dbd85d6ac494fcac9aef96ad00c37025a4891e1.gz
* CarpetIOHDF5: document the 'refinement_levels' I/O parameter option (Thomas Radke, 2005-02-04)
  darcs-hash:20050204181953-776a0-786609f50f6b6b1526ceeb94d8aea32bb2dd903b.gz
* CarpetIOHDF5: output individual refinement levels if requested (Thomas Radke, 2005-02-04)
  After updating CactusBase/IOUtil, one can now choose refinement levels for
  individual grid functions to be output, simply by using an options string,
  e.g.:
    IOHDF5::out_vars = "wavetoy::phi{refinement_levels = {1 2}}"
  If no such option is given, output defaults to all refinement levels. Note
  that the parsing routine (in IOUtil) does not check for invalid refinement
  levels (>= max_refinement_levels).
  darcs-hash:20050204181016-776a0-4d1d74a64c2869ffc4a16846146e1a0b7fd98638.gz
* global: Change the way in which the grid hierarchy is stored (Erik Schnetter, 2005-02-01)
  Change the way in which the grid hierarchy is stored. The new hierarchy is
    map mglevel reflevel component timelevel
  i.e., mglevel moved from the bottom to almost the top. This is because
  mglevel used to be a true multigrid level, but is now meant to be a
  convergence level.
  Do not allocate all storage all the time. Allow storage to be switched on
  and off per refinement level (and for a single mglevel, which prompted the
  change above).
  Handle storage management with CCTK_{In,De}creaseGroupStorage instead of
  CCTK_{En,Dis}ableGroupStorage.
  darcs-hash:20050201225827-891bb-eae3b6bd092ae8d6b5e49be84c6f09f0e882933e.gz
* global: Turn CarpetLib templates into classes (Erik Schnetter, 2005-01-01)
  Turn most of the templates in CarpetLib, which used to have the form
    template<int D> class XXX
  into classes, i.e. into something like
    class XXX
  by setting D to the new global integer constant dim, which in turn is set
  to 3. The templates gf and data, which used to be of the form
    template<typename T, int D> class XXX
  are now of the form
    template<typename T> class XXX
  The templates vect, bbox, and bboxset remain templates. This change
  simplifies the code somewhat.
  darcs-hash:20050101182234-891bb-c3063528841f0d078b12cc506309ea27d8ce730d.gz
* CarpetIOHDF5/src/Output.cc: fix calculation of "origin" attribute; don't output "cctk_bbox" and "cctk_nghostzones" attributes for unchunked data (Thomas Radke, 2005-01-10)
  darcs-hash:20050110105925-776a0-610cbdb983ac67dcb5a28bf558cba4937d20fe60.gz