| Commit message | Author | Age |
| |
out_group_separator selects the string that separates the thorn name from the group name in output file names. The default is "::" for backward compatibility. This parameter only affects output where CarpetIO*::one_file_per_group is set; otherwise, the thorn name does not appear in the file name.
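As an illustrative parameter-file fragment (the one_file_per_group spelling and the resulting file names are plausible examples, not taken from the thorn documentation):

```
CarpetIOHDF5::one_file_per_group  = yes
CarpetIOHDF5::out_group_separator = "-"   # default is "::"
```

With the default separator a group file might be named like admbase::metric.h5; with "-" it would be admbase-metric.h5.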
| |
to checkpoint when cctk_iteration % checkpoint_every_divisor == 0 rather
than whenever checkpoint_every iterations have passed since the last
checkpoint.
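The new criterion amounts to a simple divisibility test; a minimal Python sketch (the function name is hypothetical, not Carpet code):

```python
def want_checkpoint(cctk_iteration, checkpoint_every_divisor):
    """Checkpoint whenever the iteration number is an exact multiple
    of the divisor, regardless of when the last checkpoint was made."""
    return (checkpoint_every_divisor > 0
            and cctk_iteration % checkpoint_every_divisor == 0)
```

Unlike a "checkpoint_every iterations since the last checkpoint" rule, a forced intermediate checkpoint does not shift the schedule: with a divisor of 10, checkpoints still fall on iterations 0, 10, 20, and so on.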
| |
Introduce a new parameter skip_recover_variables that skips recovery
for a given set of variables.
| |
rather than always occurring together with the old-style 3D output.
| |
This is implemented by re-computing the allactive bboxset that dh::regrid
computes and outputting only the part of each component that intersects
allactive. This changes the number of components, since the intersection
need not be a single rectangular box.
| |
Scanning the attributes of a large CarpetIOHDF5 output file, as is
necessary in the visitCarpetHDF5 plugin, can be very time-consuming.
This commit adds support for writing an "index" HDF5 file at the same
time as the data file, conditional on a parameter
"CarpetIOHDF5::output_index". The index file is identical to the data
file except that it contains null datasets, and hence is very small. The
attributes can be read from this index file instead of the data file,
greatly improving performance. The datasets have size 1 in the
index file, so an additional attribute (h5space) is added to each
dataset to record the correct dataset dimensions.
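A parameter-file fragment enabling the index files might look like this (a sketch; only output_index itself is named in the commit message):

```
CarpetIOHDF5::output_index = yes
```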
| |
Rename out3D_ghosts to output_ghost_points, and rename out3D_outer_ghosts
to output_boundary_points. Keep the old parameter names for
compatibility.
| |
during recovery
Several users reported running out of memory when recovering
from multiple chunked checkpoint files. It turned out that the HDF5 library
itself requires a considerable amount of memory for each open HDF5 file.
When all chunked files of a checkpoint are opened at the same time during
recovery (which is the default), this may cause the simulation to abort with an
'out of memory' error in extreme cases.
This patch introduces a new steerable boolean parameter
IOHDF5::open_one_input_file_at_a_time
which, if set to "yes", will tell the recovery code to open/read/close chunked
files one after another for each refinement level, thus avoiding excessive
HDF5-internal memory requirements due to multiple open files.
The default behaviour is (as before) to keep all input files open until all
refinement levels are recovered.
darcs-hash:20071019091424-3fd61-834471be8da361b235d0a4cbf3d6f16ae0b653f0.gz
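For memory-constrained recoveries, a parameter file could therefore set (parameter name taken from the commit message; as a steerable parameter it can also be changed later):

```
IOHDF5::open_one_input_file_at_a_time = "yes"
```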
| |
Make CarpetIOHDF5::use_grid_structure_from_checkpoint=yes the default
setting.
darcs-hash:20070523204057-dae7b-80c06a4a883db327ce7768f3363ebccfbc3b0dd6.gz
| |
A new steerable boolean parameter IOHDF5::out_one_file_per_group was added
which, if set to "true", causes Cactus to write all variables of a
group into a single HDF5 file (useful for reducing the total number of output files).
darcs-hash:20070430162902-3fd61-f8c3e4cd641c40e8afe859933e611cda50c52efe.gz
| |
By setting the new steerable boolean parameter IO::abort_on_io_errors to true
in a parfile, the user can now tell the simulation to abort in case of any
I/O errors while writing HDF5 output/checkpoint files. The default is to only
warn about such errors and continue the simulation.
This patch requires an up-to-date CVS version of thorn CactusBase/IOUtil
from which the parameter IO::abort_on_io_errors is inherited.
darcs-hash:20070418155052-776a0-554152ad445c5215daac96e8fa2b55f06318d0c1.gz
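For example, to make a run fail fast on I/O problems rather than continue with possibly incomplete files (the parameter name is taken from the commit message):

```
IO::abort_on_io_errors = yes
```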
| |
With the new steerable integer parameter IOHDF5::compression_level users can
now request gzip dataset compression when writing HDF5 files: levels 1-9
select increasing compression, while 0 (the default) disables dataset
compression.
darcs-hash:20061117115153-776a0-7aaead5d2a0216841a27e091fddb9b6c4f40eed4.gz
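For example, to request strong gzip compression of all HDF5 datasets (level semantics as described in the commit message):

```
IOHDF5::compression_level = 9   # 1-9 = gzip level, 0 = off (default)
```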
| |
Add a parameter "use_grid_structure_from_checkpoint" that reads the
grid structure from the checkpoint file, and sets up the Carpet grid
hierarchy accordingly.
The Carpet grid hierarchy is written unconditionally to all checkpoint
files.
darcs-hash:20060413202124-dae7b-f97e6aac2267ebc5f5e3867cbf78ca52bbd33016.gz
| |
darcs-hash:20060413201901-dae7b-89acfa2956a3a82957ec6a4282a5051685ee5252.gz
| |
This patch finally closes the long-standing issue of keeping both old and new
CarpetIOHDF5 parameters around. The old parameters have now been removed;
only the new ones can be used from now on.
If you still have old-style parameter files, you must convert them now.
The Perl script CarpetIOHDF5/srcutil/SubstituteDeprecatedParameters.pl does
the conversion automatically.
darcs-hash:20060206184519-776a0-29d9d612e011dda4bf2b6054cee73546beae373a.gz
| |
Like CactusPUGHIO/IOHDF5, CarpetIOHDF5 now also provides parallel I/O for
data and checkpointing/recovery.
The I/O mode is set via IOUtils' parameters IO::out_mode and IO::out_unchunked,
with parallel output to chunked files (one per processor) being the default.
The recovery and filereader interface can read any type of CarpetIOHDF5 data
file transparently, regardless of how it was created (serially or in parallel,
or on a different number of processors).
See the updated thorn documentation for details.
darcs-hash:20050624123924-776a0-5639aee9677f0362fc94c80c534b47fd1b07ae74.gz
| |
CarpetIOHDF5 used to output unchunked data only, i.e. all ghostzones and
boundary zones were cut off from the bboxes to be output.
This caused problems after recovery: uninitialized ghostzones led to wrong
results. The obvious solution, calling CCTK_SyncGroup() for all groups after
recovery, was also problematic because it (1) synchronised only the current
timelevel and (2) performed boundary prolongation in a scheduling order
different from the regular order used during checkpointing.
The solution implemented by this patch is to always write checkpoint files
in chunked mode (which includes all ghostzones and boundary zones). This also
makes synchronisation of all groups after recovery unnecessary.
Regular HDF5 output files can also be written in chunked mode but the default
(still) is unchunked. A new boolean parameter IOHDF5::out_unchunked (with
default value "yes") was introduced to toggle this option.
Note that this parameter has the same meaning as IO::out_unchunked but an
opposite default value. This is the only reason why IOHDF5::out_unchunked
was introduced.
darcs-hash:20050412161430-776a0-d5efd21ecdbe41ad9a804014b816acad0cd71b2c.gz
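As a sketch, regular HDF5 output can be switched to chunked mode by overriding the IOHDF5-specific default (note the inverted default relative to IO::out_unchunked):

```
IOHDF5::out_unchunked = "no"   # default "yes" keeps unchunked output
```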
| |
darcs-hash:20050101162121-891bb-ac9d070faecc19f91b4b57389d3507bfc6c6e5ee.gz
| |
darcs-hash:20041227213659-891bb-1131e2598c2d8fecb7f080f4d0746a8d3fc6cd0e.gz
| |
CarpetIOHDF5 writes output for grid variables of any dimension, not only 3D.
Therefore parameters with '3D' in their names are now deprecated
and should no longer be used. They are still valid, but you will get
a level-1 warning if you use them.
At some point in the future these deprecated parameters will be removed.
So you should eventually fix your parameter files to substitute their occurrences
with their newly introduced counterparts (parameters of the same name but without
the '3D').
CarpetIOHDF5/src/util/ contains a small Perl script which can be applied to
parfiles to substitute old parameter names automatically:
~/cactus/arrangements/Carpet/CarpetIOHDF5/src/util> ./SubstituteDeprecatedParameters.pl
This perl script automatically substitutes deprecated parameter names in a parfile.
Usage: ./SubstituteDeprecatedParameters.pl <parameter file>
darcs-hash:20041203134032-3fd61-5d49fdff6c13f19772c6b441d5d558708dd88c71.gz
| |
CarpetIOHDF5::verbose parameter
darcs-hash:20041201113125-3fd61-81d942b3c8f1434ba56fee4f9160bc0f52d3cbae.gz
| |
darcs-hash:20041115210227-891bb-284966387a3805be01668cab9b81b2409b32eeea.gz
| |
Replace all CVS header tags with the standard "$Header:$".
darcs-hash:20040918132147-891bb-dea889bdd94a479ec412d14d08e9efca63e5c24d.gz
| |
Indent the code a bit more consistently.
Move some variable declarations closer to their use, and declare each
variable only once.
Use C++ string functions to calculate the file name.
darcs-hash:20040625105430-07bb3-02fe212e57dd9723ab0705ea89b96336a08fcd08.gz
| |
darcs-hash:20040622093012-1d9bf-92fd7a789157281ac2b5c8bfd8d2acec4706915d.gz
| |
darcs-hash:20040614073305-1d9bf-c42059693a8d87aae8fe1e1df172716d3376a433.gz
| |
darcs-hash:20040604081756-5b8bb-9b67f6d827ffd5f5cf152c1ff815a0d4c73b3431.gz
| |
file instead of from the parameter file. This helps with queued runs.
darcs-hash:20040602120608-5b8bb-ae803b785ffc6b06a5740f2f91562f65e792f993.gz
| |
Add out_dt output criterion.
Correct return types.
darcs-hash:20040521161134-07bb3-605eedc5f4e23ea1d7022f6c26c6af7303e11055.gz
| |
Use "." instead of "" for the default input directory, because the
latter translates to the root directory.
darcs-hash:20040409163214-07bb3-ff94a25308c9978c3546ed8f2643c7694f584dbd.gz
| |
darcs-hash:20040403134237-07bb3-8bb947389abf79bfad644fdf6ed22b8861a9e7f2.gz
| |
Do not use CCTK_QueryParameterTimesSet to find out whether to use a
parameter value from this thorn, or from IO. Use special parameter
values for that instead, and make these the default.
Remove now-unnecessary Get*Parameter functions.
darcs-hash:20040403104021-07bb3-88addd2629255577d851436003f854791670ac7a.gz
| |
darcs-hash:20040320144337-19929-adcadaa186d5083edd435519a14ee81eec48e633.gz
| |
name clash.
darcs-hash:20040315224524-19929-724f3e63dac71ecc7ee8e12e520636229a180f30.gz
| |
darcs-hash:20040314141400-19929-aa1ab601329b6e11b78f554a7aba66bae838723b.gz
| |
Recovery sort of working - does not crash, but does the strangest things... I have to fix this tomorrow with Erik's help.
Good night.
darcs-hash:20040311231325-19929-996452fe821c4ec4feaa66088c9bd994b02f0f3b.gz
| |
darcs-hash:20040310202349-19929-86629501cc132260e69de3be2a9d0ea6417cc732.gz
| |
darcs-hash:20040303084459-07bb3-4aed4fedaa19c030886252e05b792452f0c92671.gz
darcs-hash:20010301114010-f6438-12fb8a9ffcc80e86c0a97e37b5b0dae0dbc59b79.gz