Use "m" instead of "map" as the local variable name.
Remove the "Carpet::" qualifier in front of the variable "maps".
darcs-hash:20070112224022-dae7b-0c5241b73c1f4a8ff4722e04bc70ed047d6158da.gz
Implement the variable-specific output request option 'compression_level' so
that users can specify, e.g.,
IOHDF5::compression_level = 1
IOHDF5::out_vars = "admbase::metric
                    admconstraints::hamiltonian
                    admbase::lapse{ compression_level = 0 }"
to request HDF5 dataset compression for every output variable except the
lapse.
This modification also requires an update of thorn CactusBase/IOUtil.
darcs-hash:20061117132206-776a0-0e1d07a85cf206fa262a94fd0dd63c6f27e50fa2.gz
With the new steerable integer parameter IOHDF5::compression_level users can
now request gzip dataset compression while writing HDF5 files: levels 1-9
select increasing compression, and 0 (the default) disables dataset
compression.
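A minimal parameter-file sketch of this option (the output variable is
illustrative, not part of this commit):

```
# enable gzip compression for all HDF5 output datasets
IOHDF5::compression_level = 9
IOHDF5::out_vars          = "admbase::metric"
```

Since the parameter is steerable, the level can also be changed while the
simulation is running.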
darcs-hash:20061117115153-776a0-7aaead5d2a0216841a27e091fddb9b6c4f40eed4.gz
If no thorn provides this aliased function, CarpetIOHDF5 assumes that no
coordinate information is available.
darcs-hash:20061004144616-776a0-00da12cca7d6b6ad1ae0a38a96923f771239de79.gz
datatype for H5Sselect_hyperslab() arguments
This patch also lets you compile CarpetIOHDF5 with HDF5 1.8.x (and future
versions).
darcs-hash:20060511172957-776a0-acbc1bd6b8d92223c0b52a43babf394c0ab9b0f4.gz
While outputting dataset attributes, an HDF5 dataspace wasn't closed properly.
darcs-hash:20060212143008-776a0-41e46c61bce2dc22fbfc7093d2ad776bfae00687.gz
Accumulate any low-level errors returned by HDF5 library calls and check them
after writing a checkpoint. Do not remove an existing checkpoint if there were
any low-level errors in generating the previous one.
darcs-hash:20060206183846-776a0-549e715d7a3fceafe70678aaf1329052dce724bb.gz
CarpetLib's comm_state class (actually, it's still just a struct) has been
extended to handle collective buffer communications for all possible C datatypes
at the same time. This makes it unnecessary for the higher-level communication
routines to loop over each individual datatype separately.
darcs-hash:20050815150023-776a0-dddc1aca7ccaebae872f9f451b2c3595cd951fed.gz
darcs-hash:20050628113206-776a0-3ed3eae73dcc785de93273c16df556c3c6531de3.gz
Like CactusPUGHIO/IOHDF5, CarpetIOHDF5 now also provides parallel I/O for
data and checkpointing/recovery.
The I/O mode is set via IOUtil's parameters IO::out_mode and IO::out_unchunked,
with parallel output to chunked files (one per processor) being the default.
The recovery and filereader interface can read any type of CarpetIOHDF5 data
file transparently, regardless of how it was created (serially, in parallel,
or on a different number of processors).
See the updated thorn documentation for details.
darcs-hash:20050624123924-776a0-5639aee9677f0362fc94c80c534b47fd1b07ae74.gz
The parameter IO::checkpoint_keep is steerable at any time now (after you've
updated CactusBase/IOUtil/param.ccl) so that you can keep specific checkpoints
around. Please see the thorn documentation of CactusBase/IOUtil for details.
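A hedged parfile sketch of how this might be used (the parameter values are
illustrative):

```
IO::checkpoint_every = 1024
IO::checkpoint_keep  = 3    # keep the three most recent checkpoints
```

Because IO::checkpoint_keep is now steerable at any time, it can be raised
during a run to preserve a checkpoint of particular interest.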
darcs-hash:20050610091144-776a0-b5e90353851eb1d7871f16b05d1b47748599d27a.gz
darcs-hash:20050606164745-891bb-bfdba5217c624e406550ea12d38969eed76c51ed.gz
The second argument to H5Sselect_hyperslab must be a 'const hsize_t start[]' in the latest release 1.6.4.
It used to be 'const hssize_t start[]' in all previous releases.
darcs-hash:20050512101748-776a0-068b805f7e8c6399e96c38d8689d0e246b708cf9.gz
Add the unique simulation ID as attribute to each dataset.
darcs-hash:20050605221351-891bb-05a025dbdefc60c7dc476e4b7b50ff608bdacd61.gz
All processors open the checkpoint file and recover their portions from it
in parallel. No MPI communication is needed anymore.
darcs-hash:20050527124239-776a0-25d4fa77b50ea22fb2b25c87e399d95090c7eaf2.gz
The second argument to H5Sselect_hyperslab must be a 'const hsize_t start[]' in the latest release 1.6.4.
It used to be 'const hssize_t start[]' in all previous releases.
darcs-hash:20050512101740-776a0-3581a3be23f057105585cf57b384a166f30aec29.gz
CarpetIOHDF5 used to output unchunked data only, i.e. all ghostzones and
boundary zones were cut off from the bboxes to be output.
This caused problems after recovery: uninitialized ghostzones led to wrong
results. The obvious solution, calling CCTK_SyncGroup() for all groups after
recovery, was also problematic because it (1) synchronised only the current
timelevel and (2) performed boundary prolongation in a scheduling order
different from the regular order used during checkpointing.
The solution implemented by this patch is to always write checkpoint files
in chunked mode (which includes all ghostzones and boundary zones). This also
makes synchronisation of all groups after recovery unnecessary.
Regular HDF5 output files can also be written in chunked mode, but the default
(still) is unchunked. A new boolean parameter IOHDF5::out_unchunked (with
default value "yes") was introduced to toggle this option.
Note that this parameter has the same meaning as IO::out_unchunked but an
opposite default value; this is the only reason why IOHDF5::out_unchunked
was introduced.
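For example, to write regular HDF5 output files in chunked mode one might set
(a sketch; checkpoint files are always chunked regardless of this parameter):

```
IOHDF5::out_unchunked = "no"   # write chunked regular output files
```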
darcs-hash:20050412161430-776a0-d5efd21ecdbe41ad9a804014b816acad0cd71b2c.gz
Use the type CCTK_REAL instead of double for storing meta data in the
HDF5 files. This is necessary if CCTK_REAL has more precision than
double.
darcs-hash:20050411170627-891bb-374e4c2581155d825f9a1925b1d4319051bc36d6.gz
collective communication buffers
darcs-hash:20050331080034-776a0-629822f876800af1b76d5d43ca131f5373e991a4.gz
and checkpoint files
darcs-hash:20050214163413-776a0-77171dd6e4746b5d889bfcbe515c0d6f59c6ba10.gz
darcs-hash:20050207131924-891bb-0dbd85d6ac494fcac9aef96ad00c37025a4891e1.gz
After updating CactusBase/IOUtil, one can now choose the refinement levels on
which individual grid functions are output, simply by using an options string,
e.g.:
IOHDF5::out_vars = "wavetoy::phi{refinement_levels = {1 2}}"
If no such option is given, output defaults to all refinement levels.
Note that the parsing routine (in IOUtil) does not check for invalid
refinement levels (>= max_refinement_levels).
darcs-hash:20050204181016-776a0-4d1d74a64c2869ffc4a16846146e1a0b7fd98638.gz
Change the way in which the grid hierarchy is stored. The new hierarchy is
map
mglevel
reflevel
component
timelevel
i.e., mglevel moved from the bottom to almost the top. This is
because mglevel used to be a true multigrid level, but is now meant to
be a convergence level.
Do not allocate all storage all the time. Allow storage to be
switched on and off per refinement level (and for a single mglevel,
which prompted the change above). Handle storage management with
CCTK_{In,De}creaseGroupStorage instead of
CCTK_{En,Dis}ableGroupStorage.
darcs-hash:20050201225827-891bb-eae3b6bd092ae8d6b5e49be84c6f09f0e882933e.gz
Turn most of the templates in CarpetLib, which used to have the form
template<int D> class XXX
into classes, i.e., into something like
class XXX
by setting D to the new global integer constant dim, which in turn is set to 3.
The templates gf and data, which used to be of the form
template<typename T, int D> class XXX
are now of the form
template<typename T> class XXX
The templates vect, bbox, and bboxset remain templates.
This change simplifies the code somewhat.
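The change can be illustrated with a minimal sketch (using the generic XXX
placeholder from the message; this is not the actual CarpetLib code):

```cpp
#include <cassert>

// The former template parameter D is fixed to a global constant.
const int dim = 3;

// Before: template<int D> class XXX { ... };
// After: a plain class that uses the global constant dim instead of D.
class XXX {
public:
  int dimension() const { return dim; }
};

// gf and data: template<typename T, int D> class ...
// become single-parameter templates over the value type only.
template <typename T> class data {
  T values[dim]; // extent now fixed by the global constant
public:
  T &operator[](int i) { return values[i]; }
};
```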
darcs-hash:20050101182234-891bb-c3063528841f0d078b12cc506309ea27d8ce730d.gz
output "cctk_bbox" and "cctk_nghostzones" attributes for unchunked data
darcs-hash:20050110105925-776a0-610cbdb983ac67dcb5a28bf558cba4937d20fe60.gz
coordinate system associated with it
darcs-hash:20050103174917-776a0-29c425b306db7d85ff60d91496bd4db5895a0a0f.gz
darcs-hash:20050101162121-891bb-ac9d070faecc19f91b4b57389d3507bfc6c6e5ee.gz
darcs-hash:20041225152539-891bb-24fd38d3792883217dbd102df99bda9255c9c0e2.gz
darcs-hash:20041225152403-891bb-65b7df424b36f544260bdd82db2f7a3f2353e911.gz
darcs-hash:20041225145321-891bb-28ca9adbd7601709dc0c75158eab8013ee863ef9.gz
darcs-hash:20041225145224-891bb-03520cc5f5fa997d7bb6aeccebf3b1b32bfcfdf1.gz
from 1 to 2; don't sleep(3) after such warnings
darcs-hash:20041207163828-3fd61-dac602f5c3a0285ed05e79046d3fde3c57e22c33.gz
darcs-hash:20041206152516-3fd61-1dec08523b8094b4decabed463b37188d85914a6.gz
DISTRIB=CONSTANT grid variables (including scalars) are assumed to have the
same values across all processors. Therefore only the portion on processor 0
will be output. The portions from other processors are compared against this
processor's portion, and a level-1 warning will be issued if they don't match.
During recovery, the DISTRIB=CONSTANT array portion from the checkpoint file is
distributed across all processors (independently of how many there are).
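For reference, a DISTRIB=CONSTANT grid array might be declared in a thorn's
interface.ccl like this (a hedged sketch; the group and variable names are
made up, not part of this commit):

```
# interface.ccl fragment: an array replicated identically on all processors
CCTK_INT constants TYPE=array DIM=1 SIZE=10 DISTRIB=CONSTANT
{
  my_constant_array
} "array assumed to have the same values on every processor"
```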
darcs-hash:20041206151054-3fd61-5a47be7936480027c1e6f4419a4051d223e10c7d.gz
parfile, wait 5 seconds after the warning(s) before scrolling on
darcs-hash:20041203155935-3fd61-8bf992a86bf39e6cbc1abe21351c571815730f7e.gz
file
After recovery, it may happen that HDF5 output is requested at the same
iteration when it had been done before.
Therefore the code must check for this case, and remove an already existing
dataset from the output file before it can be created anew.
darcs-hash:20041203152858-3fd61-7703c468af4f84b6a737f9fa9a5272f4d2313724.gz
CarpetIOHDF5 can output grid variables of any dimension, not only 3D.
Therefore parameters with '3D' in their names have been marked as deprecated
and should not be used anymore. They are still valid, but you will get a
level-1 warning if you use them.
At some point in the future these deprecated parameters will be removed,
so you should eventually fix your parameter files to replace their occurrences
with the newly introduced counterparts (parameters of the same name but without
the '3D').
CarpetIOHDF5/src/util/ contains a small Perl script which can be applied to
parfiles to automatically substitute old parameter names:
~/cactus/arrangements/Carpet/CarpetIOHDF5/src/util> ./SubstituteDeprecatedParameters.pl
This Perl script automatically substitutes deprecated parameter names in a parfile.
Usage: ./SubstituteDeprecatedParameters.pl <parameter file>
darcs-hash:20041203134032-3fd61-5d49fdff6c13f19772c6b441d5d558708dd88c71.gz
some better info output for IO::verbose = "full"
darcs-hash:20041201113424-3fd61-188206cd3e0ad315a9219fbc1b123af8ab5bff62.gz
darcs-hash:20041130150933-3fd61-b07a8e91c055082ff3ddebccf11a07d368c7b47c.gz