Commit messages (newest first)
Introduce a new API to checkpoint only a subset of groups, via the
aliased function IO_SetCheckpointGroups. This can be used for
simulation spawning, i.e. off-loading certain calculations (e.g.
analysis) outside the main simulation.
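Below is a hedged sketch of how a thorn might use this API before spawning
an analysis run. The prototype of IO_SetCheckpointGroups is an assumption
(an array of Cactus group indices plus a count), the group names are
placeholders, and the real interface should be taken from the thorn's
interface.ccl.

    #include "cctk.h"

    /* Assumed prototype of the aliased function; the actual argument
       list may differ -- consult the thorn's interface.ccl. */
    CCTK_INT IO_SetCheckpointGroups (const CCTK_INT *groups,
                                     CCTK_INT ngroups);

    /* Hypothetical helper: restrict checkpointing to the groups that a
       spawned analysis run needs. */
    void select_spawn_checkpoint_groups (void)
    {
      CCTK_INT groups[2];
      groups[0] = CCTK_GroupIndex ("ADMBase::metric");  /* placeholder groups */
      groups[1] = CCTK_GroupIndex ("ADMBase::curv");

      if (CCTK_IsFunctionAliased ("IO_SetCheckpointGroups")) {
        IO_SetCheckpointGroups (groups, 2);
      }
    }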
* add some attributes to the metadata to match what the old code wrote
* only tag object names with components if there are more than one
This is implemented by re-computing the allactive bboxset that dh::regrid
computes and outputting only the part of each component that intersects
allactive. This changes the number of output components, since the
intersection need not be a single rectangular box.
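A schematic illustration of the approach, using plain stand-in structs
rather than CarpetLib's actual bbox/bboxset classes: each component box is
clipped against every box of the active set, and every non-empty
intersection becomes its own output piece.

    /* Illustrative only: plain structs instead of CarpetLib's bbox/bboxset. */
    typedef struct { int lo[3], hi[3]; } box_t;

    /* Intersect two axis-aligned boxes; returns 0 if the result is empty. */
    static int intersect (const box_t *a, const box_t *b, box_t *out)
    {
      for (int d = 0; d < 3; ++d) {
        out->lo[d] = a->lo[d] > b->lo[d] ? a->lo[d] : b->lo[d];
        out->hi[d] = a->hi[d] < b->hi[d] ? a->hi[d] : b->hi[d];
        if (out->lo[d] > out->hi[d]) return 0;
      }
      return 1;
    }

    /* Emit one output piece per non-empty intersection of a component with
       the active boxes; a component may split into several pieces, which is
       why the number of output components can change. */
    static void output_active_parts (const box_t *component,
                                     const box_t *active, int nactive)
    {
      for (int i = 0; i < nactive; ++i) {
        box_t piece;
        if (intersect (component, &active[i], &piece)) {
          /* write_piece (&piece);  -- hypothetical output routine */
        }
      }
    }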
Add code to honor IO::ioproc_every when outputting data unchunked (for 3D
output only).
Scanning the attributes of a large CarpetIOHDF5 output file, as is
necessary in the visitCarpetHDF5 plugin, can be very time-consuming.
This commit adds support for writing an "index" HDF5 file at the same
time as the data file, conditional on a parameter
"CarpetIOHDF5::output_index". The index file is the same as the data
file except that it contains null datasets, and hence is very small. The
attributes can be read from this index file instead of the data file,
greatly increasing performance. The datasets have size 1 in the index
file, so an additional attribute (h5space) is added to each dataset to
specify its correct dimensions.
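A minimal sketch of how a reader might use such an index file, assuming the
h5space attribute stores the true extents as an integer array (the message
does not specify the exact storage type):

    #include <hdf5.h>

    /* Read the true extents of a dataset from the index file; returns the
       rank, or -1 on error.  Error handling is kept minimal for brevity. */
    static int read_true_dims (const char *indexfile, const char *datasetname,
                               long long dims[], int maxdims)
    {
      hid_t file = H5Fopen (indexfile, H5F_ACC_RDONLY, H5P_DEFAULT);
      if (file < 0) return -1;
      hid_t dset  = H5Dopen (file, datasetname, H5P_DEFAULT);
      hid_t attr  = H5Aopen (dset, "h5space", H5P_DEFAULT);
      hid_t space = H5Aget_space (attr);
      int rank = (int) H5Sget_simple_extent_npoints (space);
      if (rank < 0 || rank > maxdims) {
        rank = -1;                       /* unexpected attribute size */
      } else {
        H5Aread (attr, H5T_NATIVE_LLONG, dims);
      }
      H5Sclose (space); H5Aclose (attr); H5Dclose (dset); H5Fclose (file);
      return rank;
    }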
Store the current Cactus time (and not a fake Carpet time) in th, the
"time hierarchy". This removes the now redundant "leveltimes" data
structure in Carpet.
Add past time levels to th, so that it can store the time for past
time levels instead of assuming the time step size is constant. This
allows changing the time step size during evolution.
Share the time hierarchy between all maps, instead of having one time
hierarchy per map.
Simplify the time level cycling and time stepping code used during
evolution.
Improve the structure of the code that loops over time levels for
certain schedule bins. Introduce a new Carpet variable "timelevel",
similar to "reflevel".
This also makes it possible to avoid time interpolation for the past
time levels during regridding. The past time levels of the fine grid
then remain aligned (in time) with the past time levels of the coarse
grid. This is controlled by a new parameter
"time_interpolation_during_regridding", which defaults to "yes" for
backwards compatibility.
Simplify the three-time-level initialisation. Instead of initialising
all three time levels by taking altogether three time steps (forwards
and backwards), initialise only one past time level by taking one time
step backwards. The remaining time level is initialised during the
first time step of the evolution, which begins by cycling time levels,
which drops the non-initialised last time level anyway.
Update Carpet and the mode handling correspondingly.
Update the CarpetIOHDF5 checkpoint format correspondingly.
Update CarpetInterp, CarpetReduce, and CarpetRegrid2 correspondingly.
Update CarpetJacobi and CarpetMG correspondingly.
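As an illustration of the time-level cycling mentioned above (not Carpet's
actual code): cycling rotates the storage so that the oldest past level is
reused for the new current level, which is why only one past time level
needs to be filled before the first evolution step.

    /* Illustrative sketch of cycling time levels for one grid function.
       tl[0] is the current level; tl[1], tl[2], ... are past levels. */
    static void cycle_timelevels (double *tl[], int ntimelevels)
    {
      double *oldest = tl[ntimelevels - 1];
      for (int t = ntimelevels - 1; t > 0; --t) {
        tl[t] = tl[t - 1];      /* each level becomes one step older */
      }
      tl[0] = oldest;           /* reused; overwritten by the new step */
    }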
Allow different numbers of ghost zones and different spatial
prolongation orders on different refinement levels.
This causes incompatible changes to the checkpoint file format.
Ignore-this: 309b4dd613f4af2b84aa5d6743fdb6b3
variables also in the POST_RECOVER_VARIABLES bin so that the
last checkpoint iteration counter starts counting from the recovered
iteration number.
(see also discussion thread starting at
http://lists.carpetcode.org/archives/developers/2008-August/002309.html)
darcs-hash:20080130222154-dae7b-113029d4e40be633fca3253f5e2d47f656ae41ac.gz
darcs-hash:20080130222322-dae7b-223b30e93a6f7860c1234ed181453c77b1ee056e.gz
Enclose the macro "HDF5_ERROR" in a do { ... } while (0) pair to make
it safe to use with a trailing semicolon.
darcs-hash:20080111111512-dae7b-b65c7b375ee6ac882b59414db2810f06fcc3d799.gz
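For reference, a minimal sketch of the idiom; the actual HDF5_ERROR macro
does more, and the error handling below is only a placeholder. Wrapping the
body in do { ... } while (0) makes the macro expand to a single statement,
so 'if (cond) HDF5_ERROR(call); else ...' compiles as intended with the
trailing semicolon.

    #include "cctk.h"

    /* Minimal sketch only; the real macro's error handling differs. */
    #define HDF5_ERROR(fn_call)                                          \
      do {                                                               \
        const int _errcode = (fn_call);                                  \
        if (_errcode < 0) {                                              \
          CCTK_VWarn (1, __LINE__, __FILE__, CCTK_THORNSTRING,           \
                      "HDF5 call '%s' returned error code %d",           \
                      #fn_call, _errcode);                               \
        }                                                                \
      } while (0)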
darcs-hash:20071130052144-fff0f-57258dcf6536be5b2d8a14f2a5ef5be56d6f038f.gz
darcs-hash:20071003194857-dae7b-d8bb68e9c4ee52559fea874b3f80a57eebf9650f.gz
If IO::recover_and_remove is set, the recovery file will also be removed after
IO::checkpoint_keep successful checkpoints have been written.
darcs-hash:20060607102431-776a0-92edd93f6dc004ab824b237fbd03ee732f7a3841.gz
Correct errors in the handling of the parameter
"use_grid_structure_from_checkpoint".
darcs-hash:20060508193609-dae7b-c5cf907171eb31e8298669cf4bd4aa03f2c79429.gz
Add a parameter "use_grid_structure_from_checkpoint" that reads the
grid structure from the checkpoint file, and sets up the Carpet grid
hierarchy accordingly.
The Carpet grid hierarchy is written unconditionally to all checkpoint
files.
darcs-hash:20060413202124-dae7b-f97e6aac2267ebc5f5e3867cbf78ca52bbd33016.gz
The scheduled routine CarpetIOHDF5_CloseFiles() was declared to return an int
and take no arguments. Instead it must be declared to take a 'const cGH* const'
argument. It should also return void.
See http://www.cactuscode.org/old/pipermail/developers/2006-February/001656.html.
This patch also fixes a couple of g++ warnings about signed/unsigned integer
comparisons.
darcs-hash:20060209165534-776a0-24101ebd8c09cea0a9af04acc48f8e2aa2961e34.gz
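The corrected prototype, as described above (a sketch; the include is
assumed):

    #include "cctk.h"

    /* before (wrong): int CarpetIOHDF5_CloseFiles (void); */
    void CarpetIOHDF5_CloseFiles (const cGH* const cctkGH);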
Accumulate any low-level errors returned by HDF5 library calls and check them
after writing a checkpoint. Do not remove an existing checkpoint if there were
any low-level errors in generating the previous one.
darcs-hash:20060206183846-776a0-549e715d7a3fceafe70678aaf1329052dce724bb.gz
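A minimal sketch of the accumulate-then-check pattern (the counter, macro,
and helper names are placeholders, not the thorn's actual identifiers);
HDF5 calls signal failure with a negative return value:

    #include <stdio.h>

    static int accumulated_hdf5_errors = 0;   /* placeholder name */

    /* Count, rather than abort on, low-level HDF5 failures. */
    #define CHECK_HDF5(fn_call)                                  \
      do {                                                       \
        if ((fn_call) < 0) ++accumulated_hdf5_errors;            \
      } while (0)

    /* After writing a checkpoint: remove the previous checkpoint file
       only if no low-level error occurred. */
    static void maybe_remove_old_checkpoint (const char *oldfile)
    {
      if (accumulated_hdf5_errors == 0) {
        remove (oldfile);
      }
    }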
darcs-hash:20051119212924-dae7b-3447198d7a1d4090ffc6cff4cde12ccf037c5e8f.gz
"yes" during recovery
When IOHDF5::use_reflevels_from_checkpoint is set, the parameter CarpetLib::refinement_levels is steered to take the number of levels found in the checkpoint.
This steering used to happen during parameter recovery where it didn't have
any effect if the parameter had been set in the parfile already.
Now it's done in a separate routine scheduled at STARTUP.
darcs-hash:20050906140808-776a0-bae608c103b161ac67690da2a8803bdff84cf2f4.gz
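A hedged sketch of such a STARTUP routine; the routine name and the
hard-coded value are placeholders (the real code takes the number of levels
from the checkpoint), and CCTK_ParameterSet is the flesh API used for the
steering:

    #include "cctk.h"

    /* Placeholder routine name; schedule it at STARTUP via schedule.ccl. */
    int CarpetIOHDF5_SetNumRefinementLevels (void)
    {
      /* "3" stands in for the number of levels found in the checkpoint. */
      const int ierr =
        CCTK_ParameterSet ("refinement_levels", "CarpetLib", "3");
      if (ierr) {
        CCTK_WARN (1, "Could not steer CarpetLib::refinement_levels");
      }
      return 0;
    }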
Before a variable is output, it is checked whether it has been output already
during the current iteration (e.g. due to triggers).
This check was only variable-based and therefore caused problems when the
same variable was to be output to multiple files (using different alias names).
Now the check has been extended to also take the output filenames into account.
darcs-hash:20050823135345-776a0-1555987b4aee34bb646e67f491375dbcc44dddad.gz
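An illustrative sketch of the extended check with stand-in data structures
(not the thorn's actual bookkeeping): the "already output" record is keyed
on the pair (variable index, output filename) instead of the variable index
alone.

    #include <string.h>

    /* Stand-in record of what has been written this iteration. */
    typedef struct { int vindex; char filename[256]; } output_record_t;

    static output_record_t records[1024];
    static int nrecords = 0;

    /* Return 1 if this (variable, filename) pair was already output. */
    static int already_output (int vindex, const char *filename)
    {
      for (int i = 0; i < nrecords; ++i) {
        if (records[i].vindex == vindex &&
            strcmp (records[i].filename, filename) == 0) {
          return 1;
        }
      }
      return 0;
    }

    static void mark_output (int vindex, const char *filename)
    {
      if (nrecords < 1024) {
        records[nrecords].vindex = vindex;
        strncpy (records[nrecords].filename, filename, 255);
        records[nrecords].filename[255] = '\0';
        ++nrecords;
      }
    }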
Like CactusPUGHIO/IOHDF5, CarpetIOHDF5 now also provides parallel I/O for
data and checkpointing/recovery.
The I/O mode is set via IOUtil's parameters IO::out_mode and IO::out_unchunked,
with parallel output to chunked files (one per processor) being the default.
The recovery and filereader interface can read any type of CarpetIOHDF5 data
file transparently, regardless of how it was created (serially, in parallel,
or on a different number of processors).
See the updated thorn documentation for details.
darcs-hash:20050624123924-776a0-5639aee9677f0362fc94c80c534b47fd1b07ae74.gz
The parameter IO::checkpoint_keep is steerable at any time now (after you've
updated CactusBase/IOUtil/param.ccl) so that you can keep specific checkpoints
around. Please see the thorn documentation of CactusBase/IOUtil for details.
darcs-hash:20050610091144-776a0-b5e90353851eb1d7871f16b05d1b47748599d27a.gz
All processors open the checkpoint file and recover their portions from it
in parallel. No MPI communication is needed anymore.
darcs-hash:20050527124239-776a0-25d4fa77b50ea22fb2b25c87e399d95090c7eaf2.gz
CarpetIOHDF5 used to output unchunked data only, i.e. all ghostzones and
boundary zones were cut off from the bboxes to be output.
This caused problems after recovery: uninitialized ghostzones led to wrong
results. The obvious solution, calling CCTK_SyncGroup() for all groups after
recovery, was also problematic because it (1) synchronised only the current
timelevel and (2) performed boundary prolongation in a scheduling order
different from the regular order used during checkpointing.
The solution now implemented by this patch is to always write checkpoint files
in chunked mode (which includes all ghostzones and boundary zones). This also
makes synchronisation of all groups after recovery unnecessary.
Regular HDF5 output files can also be written in chunked mode, but the default
(still) is unchunked. A new boolean parameter IOHDF5::out_unchunked (with
default value "yes") was introduced to toggle this option.
Note that this parameter has the same meaning as IO::out_unchunked but an
opposite default value. This is the only reason why IOHDF5::out_unchunked
was introduced.
darcs-hash:20050412161430-776a0-d5efd21ecdbe41ad9a804014b816acad0cd71b2c.gz
Use the type CCTK_REAL instead of double for storing meta data in the
HDF5 files. This is necessary if CCTK_REAL has more precision than
double.
darcs-hash:20050411170627-891bb-374e4c2581155d825f9a1925b1d4319051bc36d6.gz
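A small sketch of selecting an HDF5 memory type that matches CCTK_REAL,
assuming CCTK_REAL is configured as float, double, or long double (the
actual thorn may select the type differently):

    #include <hdf5.h>
    #include "cctk.h"

    /* Choose an HDF5 native type whose size matches CCTK_REAL. */
    static hid_t hdf5_type_for_cctk_real (void)
    {
      if (sizeof (CCTK_REAL) == sizeof (float))       return H5T_NATIVE_FLOAT;
      if (sizeof (CCTK_REAL) == sizeof (double))      return H5T_NATIVE_DOUBLE;
      if (sizeof (CCTK_REAL) == sizeof (long double)) return H5T_NATIVE_LDOUBLE;
      return H5T_NATIVE_DOUBLE;   /* fallback; should not be reached */
    }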
and checkpoint files
darcs-hash:20050214163413-776a0-77171dd6e4746b5d889bfcbe515c0d6f59c6ba10.gz
darcs-hash:20050101162121-891bb-ac9d070faecc19f91b4b57389d3507bfc6c6e5ee.gz
some better info output for IO::verbose = "full"
darcs-hash:20041201113424-3fd61-188206cd3e0ad315a9219fbc1b123af8ab5bff62.gz
darcs-hash:20041130150933-3fd61-b07a8e91c055082ff3ddebccf11a07d368c7b47c.gz