path: root/Carpet/CarpetIOHDF5/src/Output.cc
Commit message (Author, Date)
* CarpetIOHDF5: Remove unused variables, initialise other variables to avoid compiler warnings (Erik Schnetter, 2013-01-18)
* Allow padding in transport operators (Erik Schnetter, 2012-11-22)
    Rewrite padding infrastructure. Add padded array extents to transport operator APIs.
* CarpetIOHDF5: close all HDF5 objects when output file is closed (Roland Haas, 2012-10-24)
    Also garbage collect HDF5 at each H5close.
* CarpetIOHDF5: make index files sparse files with full datasets (Roland Haas, 2012-09-11)
    Rather than datasets of extent [1,1,1]. It so happens that this patch removes the separate shape arrays and separate dataspaces for the datasets in index files and creates the index file datasets with the same dataspace as the "heavy" file datasets. It retains the h5shape attribute that was originally introduced for index files even though it is now redundant (since one could call H5Sget_simple_extent_dims on the datasets in the index files).
* CarpetIOHDF5: support indices in sequential chunked output (Roland Haas, 2012-09-11)
    This happens to be the output method used for single-process runs.
* CarpetLib: Change API to obtain pointer to grid function data (Erik Schnetter, 2012-09-11)
    Change the API to obtain a pointer to grid function data:
    - Use a function "typed_data_pointer" instead of overloading the () operator (because this looks nicer)
    - Don't use a virtual function (because this isn't needed)
    - Update all uses
* CarpetIOHDF5: Update to new gdata::copy_from API (Erik Schnetter, 2012-09-11)
* CarpetIOHDF5: Store cell centering offset with grid function attributes (Erik Schnetter, 2011-12-14)
* CarpetIOHDF5: Check that MPI datatypes are defined before using them (Erik Schnetter, 2011-12-14)
* CarpetIOHDF5: Index file support (Ian Hinder, 2011-12-14)
    Scanning the attributes of a large CarpetIOHDF5 output file, as is necessary in the visitCarpetHDF5 plugin, can be very time-consuming. This commit adds support for writing an "index" HDF5 file at the same time as the data file, conditional on a parameter "CarpetIOHDF5::output_index". The index file is the same as the data file except that it contains null datasets, and hence is very small. The attributes can be read from this index file instead of the data file, greatly increasing performance. The datasets will have size 1 in the index file, so an additional attribute (h5space) is added to each dataset to specify the correct dataset dimensions.
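A minimal parameter-file fragment enabling the index files described above might look like the following. Only `CarpetIOHDF5::output_index` comes from this commit; the output cadence and variable selection are illustrative placeholders following the usual IOUtil conventions:

```
# Write a lightweight index file alongside each HDF5 data file
IOHDF5::out_every          = 256
IOHDF5::out_vars           = "admbase::lapse"
CarpetIOHDF5::output_index = yes
```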
* CarpetIOHDF5: Correct error in outputting distributed grid arrays (Erik Schnetter, 2011-12-14)
* CarpetIOHDF5: Update to new dh classes (Erik Schnetter, 2011-12-14)
* CarpetIOHDF5: Remove explicit bboxset normalizations (Erik Schnetter, 2011-12-14)
* Combine CarpetLib's INSTANTIATE and Carpet's TYPECASE mechanism into a single mechanism provided by CarpetLib (Erik Schnetter, 2011-12-14)
    Use this mechanism everywhere.
* Import Carpet (Erik Schnetter, 2011-12-14)
    Ignore-this: 309b4dd613f4af2b84aa5d6743fdb6b3
* Fixed "origin" and "delta" output attributes to work for generalized grids (Christian Reisswig, 2008-06-11)
* CarpetIOHDF5: Make code safer (Erik Schnetter, 2008-02-19)
    Change sprintf to snprintf. Add assert statements.
    darcs-hash:20080219052249-dae7b-abbdbb9df6c099cdd62ebaac135b654062659619.gz
* Bugfix to MapIsCartesian attribute (reisswig, 2008-01-31)
    Without this bugfix, whenever there is an aliased function "Multipatch_MapIsCartesian", the corresponding write call of this attribute fails because the attribute's scalar dataspace was already closed.
    darcs-hash:20080131151816-79e7e-3678879958c776fc1240e8b93f054732c64b62c5.gz
* CarpetIOHDF5: Use CCTK_REAL instead of long long for timing measurements (Erik Schnetter, 2008-01-30)
    darcs-hash:20080130222154-dae7b-113029d4e40be633fca3253f5e2d47f656ae41ac.gz
* CarpetIOHDF5: Add attribute specifying whether coordinate system is Cartesian (Erik Schnetter, 2008-01-28)
    Add an HDF5 attribute "MapIsCartesian" specifying whether the coordinate system is Cartesian. This attribute is added if there is a thorn providing this information.
    darcs-hash:20080128155820-dae7b-759cc1608ba8a7c9d34203b114ceb3836db0f64d.gz
* CarpetIOHDF5: #include <cstring> (Erik Schnetter, 2007-11-02)
    darcs-hash:20071102224421-dae7b-79e89575a8e23c46fea47ef043d28ab7331da985.gz
* CarpetIOHDF5: Record time spent in I/O (Erik Schnetter, 2007-10-03)
    darcs-hash:20071003194857-dae7b-d8bb68e9c4ee52559fea874b3f80a57eebf9650f.gz
* CarpetIOHDF5: Correct error in setting up gdata communication object (Erik Schnetter, 2007-05-01)
    Set up the gdata object correctly before copying it between processors.
    darcs-hash:20070501163757-dae7b-55cb3575d707f88806bff70f4cfc7153de879c1f.gz
* CarpetIOHDF5: Checkpoint and restore each level's current time (Erik Schnetter, 2007-04-19)
    darcs-hash:20070419021113-dae7b-baa8e7a012bddab40246f9485d5b3987fd7dc587.gz
* CarpetIOHDF5: Do not use "map" as local variable name (Erik Schnetter, 2007-01-12)
    Use "m" instead of "map" as local variable name. Remove the "Carpet::" qualifier in front of the variable "maps".
    darcs-hash:20070112224022-dae7b-0c5241b73c1f4a8ff4722e04bc70ed047d6158da.gz
* CarpetIOHDF5: support per-variable output compression levels (Thomas Radke, 2006-11-17)
    Implement the variable-specific output request option 'compression_level' so that users can specify e.g.

        IOHDF5::compression_level = 1
        IOHDF5::out_vars = "admbase::metric admconstraints::hamiltonian admbase::lapse{ compression_level = 0 }"

    to request HDF5 dataset compression for every output variable except for the lapse. This modification also requires an update of thorn CactusBase/IOUtil.
    darcs-hash:20061117132206-776a0-0e1d07a85cf206fa262a94fd0dd63c6f27e50fa2.gz
* CarpetIOHDF5: support automatic gzip dataset compression (Thomas Radke, 2006-11-17)
    With the new steerable integer parameter IOHDF5::compression_level, users can now request gzip dataset compression while writing HDF5 files: levels from 1-9 specify a specific compression rate; 0 (which is the default) disables dataset compression.
    darcs-hash:20061117115153-776a0-7aaead5d2a0216841a27e091fddb9b6c4f40eed4.gz
* CarpetIOHDF5: use aliased function Coord_GroupSystem() only optionally (Thomas Radke, 2006-10-04)
    If no thorn provides this aliased function, CarpetIOHDF5 assumes that no coordinate information is available.
    darcs-hash:20061004144616-776a0-00da12cca7d6b6ad1ae0a38a96923f771239de79.gz
* CarpetIOHDF5: fix the HDF5 library version test when determining the datatype for H5Sselect_hyperslab() arguments (Thomas Radke, 2006-05-11)
    This patch lets you compile CarpetIOHDF5 also with HDF5-1.8.x (and future versions).
    darcs-hash:20060511172957-776a0-acbc1bd6b8d92223c0b52a43babf394c0ab9b0f4.gz
* CarpetIOHDF5: fix small memory leak in HDF5 output (Thomas Radke, 2006-02-12)
    While outputting dataset attributes, an HDF5 dataspace wasn't closed properly.
    darcs-hash:20060212143008-776a0-41e46c61bce2dc22fbfc7093d2ad776bfae00687.gz
* CarpetIOHDF5: bugfix for writing checkpoints (Thomas Radke, 2006-02-06)
    Accumulate any low-level errors returned by HDF5 library calls and check them after writing a checkpoint. Do not remove an existing checkpoint if there were any low-level errors in generating the previous one.
    darcs-hash:20060206183846-776a0-549e715d7a3fceafe70678aaf1329052dce724bb.gz
* Carpet*: generalise the comm_state class for collective buffer communications (Thomas Radke, 2005-08-15)
    CarpetLib's comm_state class (actually, it's still just a struct) has been extended to handle collective buffer communications for all possible C datatypes at the same time. This makes it unnecessary for the higher-level communication routines to loop over each individual datatype separately.
    darcs-hash:20050815150023-776a0-dddc1aca7ccaebae872f9f451b2c3595cd951fed.gz
* CarpetIOHDF5: add "cctk_bbox" and "cctk_nghostzones" attributes to each dataset (Thomas Radke, 2005-06-28)
    darcs-hash:20050628113206-776a0-3ed3eae73dcc785de93273c16df556c3c6531de3.gz
* CarpetIOHDF5: implement parallel I/O (Thomas Radke, 2005-06-24)
    Like CactusPUGHIO/IOHDF5, CarpetIOHDF5 now also provides parallel I/O for data and checkpointing/recovery. The I/O mode is set via IOUtil's parameters IO::out_mode and IO::out_unchunked, with parallel output to chunked files (one per processor) being the default. The recovery and filereader interface can read any type of CarpetIOHDF5 data files transparently, regardless of how they were created (serial/parallel, or on a different number of processors). See the updated thorn documentation for details.
    darcs-hash:20050624123924-776a0-5639aee9677f0362fc94c80c534b47fd1b07ae74.gz
* CarpetIOHDF5: added logic to keep specific checkpoints around (Thomas Radke, 2005-06-10)
    The parameter IO::checkpoint_keep is steerable at any time now (after you've updated CactusBase/IOUtil/param.ccl) so that you can keep specific checkpoints around. Please see the thorn documentation of CactusBase/IOUtil for details.
    darcs-hash:20050610091144-776a0-b5e90353851eb1d7871f16b05d1b47748599d27a.gz
* CarpetIOHDF5: resolve conflict (Erik Schnetter, 2005-06-06)
    darcs-hash:20050606164745-891bb-bfdba5217c624e406550ea12d38969eed76c51ed.gz
* CarpetIOHDF5: API for H5Sselect_hyperslab() has changed in HDF5 1.6.4 (Thomas Radke, 2005-05-12)
    The second argument to H5Sselect_hyperslab must be a 'const hsize_t start[]' in the latest release 1.6.4. It used to be 'const hssize_t start[]' in all previous releases.
    darcs-hash:20050512101748-776a0-068b805f7e8c6399e96c38d8689d0e246b708cf9.gz
* CarpetIOHDF5: Put unique simulation ID into each file (Erik Schnetter, 2005-06-05)
    Add the unique simulation ID as an attribute to each dataset.
    darcs-hash:20050605221351-891bb-05a025dbdefc60c7dc476e4b7b50ff608bdacd61.gz
* CarpetIOHDF5: implement parallel recovery (Thomas Radke, 2005-05-27)
    All processors open the checkpoint file and recover their portions from it in parallel. No MPI communication is needed anymore.
    darcs-hash:20050527124239-776a0-25d4fa77b50ea22fb2b25c87e399d95090c7eaf2.gz
* CarpetIOHDF5: API for H5Sselect_hyperslab() has changed in HDF5 1.6.4 (Thomas Radke, 2005-05-12)
    The second argument to H5Sselect_hyperslab must be a 'const hsize_t start[]' in the latest release 1.6.4. It used to be 'const hssize_t start[]' in all previous releases.
    darcs-hash:20050512101740-776a0-3581a3be23f057105585cf57b384a166f30aec29.gz
* CarpetIOHDF5: bugfix for checkpoint/recovery (Thomas Radke, 2005-04-12)
    CarpetIOHDF5 used to output unchunked data only, i.e. all ghostzones and boundary zones were cut off from the bboxes to be output. This caused problems after recovery: uninitialized ghostzones led to wrong results. The obvious solution, calling CCTK_SyncGroup() for all groups after recovery, was also problematic because that (1) synchronised only the current timelevel and (2) boundary prolongation was done in a scheduling order different to the regular order used during checkpointing.
    The solution implemented now by this patch is to write checkpoint files always in chunked mode (which includes all ghostzones and boundary zones). This also makes synchronisation of all groups after recovery unnecessary. Regular HDF5 output files can also be written in chunked mode, but the default (still) is unchunked. A new boolean parameter IOHDF5::out_unchunked (with default value "yes") was introduced to toggle this option. Note that this parameter has the same meaning as IO::out_unchunked but an opposite default value. This is the only reason why IOHDF5::out_unchunked was introduced.
    darcs-hash:20050412161430-776a0-d5efd21ecdbe41ad9a804014b816acad0cd71b2c.gz
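In parameter-file terms, the behaviour that commit describes can be sketched as follows; the value shown is the default named in the commit message, and checkpoint files ignore this setting because they are always chunked:

```
# Regular HDF5 output stays unchunked (ghost/boundary zones stripped)
# unless this is set to "no"; checkpoints are always written chunked.
IOHDF5::out_unchunked = "yes"
```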
* CarpetIOHDF5: Use CCTK_REAL instead of double for meta data (Erik Schnetter, 2005-04-11)
    Use the type CCTK_REAL instead of double for storing meta data in the HDF5 files. This is necessary if CCTK_REAL has more precision than double.
    darcs-hash:20050411170627-891bb-374e4c2581155d825f9a1925b1d4319051bc36d6.gz
* CarpetIOHDF5: pass vartype in comm_state constructor to make use of collective communication buffers (Thomas Radke, 2005-03-31)
    darcs-hash:20050331080034-776a0-629822f876800af1b76d5d43ca131f5373e991a4.gz
* CarpetIOHDF5: write a "carpet_version" integer attribute to tag data output and checkpoint files (Thomas Radke, 2005-02-14)
    darcs-hash:20050214163413-776a0-77171dd6e4746b5d889bfcbe515c0d6f59c6ba10.gz
* CarpetIOHDF5: Use positive timelevels (Erik Schnetter, 2005-02-07)
    darcs-hash:20050207131924-891bb-0dbd85d6ac494fcac9aef96ad00c37025a4891e1.gz
* CarpetIOHDF5: output individual refinement levels if requested (Thomas Radke, 2005-02-04)
    After updating CactusBase/IOUtil, one can now choose refinement levels for individual grid functions to be output, simply by using an options string, e.g.:

        IOHDF5::out_vars = "wavetoy::phi{refinement_levels = {1 2}}"

    If no such option is given, output defaults to all refinement levels. Note that the parsing routine (in IOUtil) does not check for invalid refinement levels (>= max_refinement_levels).
    darcs-hash:20050204181016-776a0-4d1d74a64c2869ffc4a16846146e1a0b7fd98638.gz
* global: Change the way in which the grid hierarchy is stored (Erik Schnetter, 2005-02-01)
    Change the way in which the grid hierarchy is stored. The new hierarchy is

        map mglevel reflevel component timelevel

    i.e., mglevel moved from the bottom to almost the top. This is because mglevel used to be a true multigrid level, but is now meant to be a convergence level.
    Do not allocate all storage all the time. Allow storage to be switched on and off per refinement level (and for a single mglevel, which prompted the change above). Handle storage management with CCTK_{In,De}creaseGroupStorage instead of CCTK_{En,Dis}ableGroupStorage.
    darcs-hash:20050201225827-891bb-eae3b6bd092ae8d6b5e49be84c6f09f0e882933e.gz
* global: Turn CarpetLib templates into classes (Erik Schnetter, 2005-01-01)
    Turn most of the templates in CarpetLib, which used to have the form

        template<int D> class XXX

    into classes, i.e., into something like

        class XXX

    by setting D to the new global integer constant dim, which in turn is set to 3. The templates gf and data, which used to be of the form

        template<typename T, int D> class XXX

    are now of the form

        template<typename T> class XXX

    The templates vect, bbox, and bboxset remain templates. This change simplifies the code somewhat.
    darcs-hash:20050101182234-891bb-c3063528841f0d078b12cc506309ea27d8ce730d.gz
* CarpetIOHDF5/src/Output.cc: fix calculation of "origin" attribute; don't output "cctk_bbox" and "cctk_nghostzones" attributes for unchunked data (Thomas Radke, 2005-01-10)
    darcs-hash:20050110105925-776a0-610cbdb983ac67dcb5a28bf558cba4937d20fe60.gz
* CarpetIOHDF5/src/Output.cc: write bbox attributes only if a variable has a coordinate system associated with it (Thomas Radke, 2005-01-03)
    darcs-hash:20050103174917-776a0-29c425b306db7d85ff60d91496bd4db5895a0a0f.gz