| Commit message | Author | Age |
|
* add some attributes to metadata to match what the old code wrote
* only tag object names with components if there are more than one
|
Register new-style output code as an IO method with IOUtil for 3d output.
The new code is not quite as capable as the old code, since it does not
include Ian Hinder's indexing facility and so far outputs all data on the
root processor. It does, however, support the new-style
output_symmetry_points etc. options.
|
Scanning the attributes of a large CarpetIOHDF5 output file, as is
necessary in the visitCarpetHDF5 plugin, can be very time consuming.
This commit adds support for writing an "index" HDF5 file at the same
time as the data file, conditional on a parameter
"CarpetIOHDF5::output_index". The index file is the same as the data
file except it contains null datasets, and hence is very small. The
attributes can be read from this index file instead of the data file,
greatly increasing performance. The datasets will have size 1 in the
index file, so an additional attribute (h5space) is added to the
dataset to specify the correct dataset dimensions.
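A minimal parameter-file sketch of enabling this feature (the parameter name CarpetIOHDF5::output_index is taken from the message above):

```
# Write a small companion index file (null datasets, full attributes)
# alongside each HDF5 data file, for fast attribute scanning
CarpetIOHDF5::output_index = yes
```

A reader such as the visitCarpetHDF5 plugin can then scan the attributes in the index file and take the true dataset dimensions from the h5space attribute.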
|
When checkpoints of initial data are disabled, but termination
checkpoints are enabled, then do checkpoint the initial data.
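A hedged parameter-file sketch of the situation this commit addresses; IO::checkpoint_ID and IO::checkpoint_on_terminate are the usual IOUtil checkpoint parameters (their names are not stated in the message above and are an assumption here):

```
# Checkpoints of initial data are disabled ...
IO::checkpoint_ID           = no
# ... but termination checkpoints are enabled
IO::checkpoint_on_terminate = yes
```

With this commit, a run that terminates before taking a time step still writes a checkpoint containing the initial data.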
|
Store the current Cactus time (and not a fake Carpet time) in the th
"time hierarchy". This removes the now redundant "leveltimes" data
structure in Carpet.
Add past time levels to th, so that it can store the time for past
time levels instead of assuming the time step size is constant. This
allows changing the time step size during evolution.
Share the time hierarchy between all maps, instead of having one time
hierarchy per map.
Simplify the time level cycling and time stepping code used during
evolution.
Improve structure of the code that loops over time levels for certain
schedule bins. Introduce a new Carpet variable "timelevel", similar
to "reflevel".
This also makes it possible to avoid time interpolation for the past
time levels during regridding. The past time levels of the fine grid
then remain aligned (in time) with the past time levels of the coarse
grid. This is controlled by a new parameter
"time_interpolation_during_regridding", which defaults to "yes" for
backwards compatibility.
Simplify the three-time-level initialisation. Instead of initialising
all three time levels by taking three time steps in total (forwards
and backwards), initialise only one past time level by taking one time
step backwards. The remaining time level is initialised during the
first time step of the evolution, which begins by cycling time levels,
which drops the non-initialised last time level anyway.
Update Carpet and the mode handling correspondingly.
Update the CarpetIOHDF5 checkpoint format correspondingly.
Update CarpetInterp, CarpetReduce, and CarpetRegrid2 correspondingly.
Update CarpetJacobi and CarpetMG correspondingly.
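A parameter-file sketch of the new option (the message above names the parameter; assigning it to the Carpet thorn is an assumption):

```
# Do not interpolate past time levels in time during regridding;
# past levels of the fine grid stay aligned with the coarse grid.
# The default is "yes" for backwards compatibility.
Carpet::time_interpolation_during_regridding = no
```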
|
Allow different numbers of ghost zones and different spatial
prolongation orders on different refinement levels.
This causes incompatible changes to the checkpoint file format.
|
Serialise the time hierarchy with 17 digits of accuracy, instead of
the standard 6 digits. This is required for correctly recovering the
time hierarchy.
|
Ignore-this: e7d15a216ec4fbb524f1ebe1fdeff905
|
Ignore-this: 309b4dd613f4af2b84aa5d6743fdb6b3
|
writing string attributes, another in querying the "Checkpoint" tag of CCTK groups

function CarpetIOHDF5_InitCheckpointingIntervals
Calling the scheduled function CarpetIOHDF5_InitCheckpointingIntervals
leads to namespace problems with the PGI compiler.

Conflicts:
Carpet/CarpetWeb/index.html

variables also in the POST_RECOVER_VARIABLES bin so that the
last checkpoint iteration counter starts counting from the recovered
iteration number.
(see also discussion thread starting at
http://lists.carpetcode.org/archives/developers/2008-August/002309.html)

(<last-checkpoint-iteration> + IO::checkpoint_every)"
This reverts commit 30f1c46a7f94d423bda65e04220015e0296e1347.
|
(<last-checkpoint-iteration> + IO::checkpoint_every)

Broadcast the result of checkpoint_every_walltime_hours, since different
processors may come to different decisions.
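A parameter-file sketch (the parameter name is taken from the message above; the value is an arbitrary example):

```
# Checkpoint every three wall-clock hours; with this commit the decision
# is broadcast so that all processors checkpoint at the same iteration
IO::checkpoint_every_walltime_hours = 3.0
```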

Correct a multi-processor synchronisation problem when using IO::out_dt.
|
The CactusBase/IOUtil routine IOUtil_ParseVarsForOutput now accepts an
additional parameter out_dt_default.
|
Correct error in determining checkpointing interval after restarting.
|
integer-type grid variables
|
Change sprintf to snprintf. Add assert statements.
darcs-hash:20080219052249-dae7b-abbdbb9df6c099cdd62ebaac135b654062659619.gz
|
Add functions which write a single HDF5 attribute. Call these
functions to write the attributes.
darcs-hash:20080219052032-dae7b-dbb3e2ee3aad27cce26735639b9065c86cc9cef3.gz
|
Write attributes containing the configuration, source tree, and run
ids into the output files.
darcs-hash:20080203190955-dae7b-e22cbe007b26040e7131b1f93307382dc0cc5882.gz
|
Fix creative change of argument order to a call to WriteMetaData which resulted
in nioprocs=-1. This should fix checkpoint/recovery.
darcs-hash:20080203024904-fff0f-5d3514b62c80cd2920ac45a98758fff8c00135c1.gz
|
darcs-hash:20080130222154-dae7b-113029d4e40be633fca3253f5e2d47f656ae41ac.gz
|
Output an additional attribute "Datasets" into the "Parameters and
Global Attributes" group. This attribute is an array of strings and
contains the full variable names of all variables in this file.
darcs-hash:20080111111324-dae7b-895d9c619e1f2126ea367cb93b77f407d50f15b4.gz
|
darcs-hash:20071004024754-dae7b-2096582f0b63bd0521d41e3eea01e74f7962bf79.gz
|
darcs-hash:20071003194857-dae7b-d8bb68e9c4ee52559fea874b3f80a57eebf9650f.gz
|
darcs-hash:20071003194834-dae7b-f9a5ba7a3c9c3b0542a9cc316d5854d961d15baa.gz
|
darcs-hash:20070825060808-dae7b-9ea6675a36f9230332067acdb705d48e75d12fd2.gz
|
A new steerable boolean parameter IOHDF5::out_one_file_per_group was added
which, if set to "true", causes Cactus to write all variables of a group
into a single HDF5 file (useful for reducing the total number of output files).
darcs-hash:20070430162902-3fd61-f8c3e4cd641c40e8afe859933e611cda50c52efe.gz
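A parameter-file sketch (the parameter name is taken from the message above):

```
# One HDF5 file per group instead of one file per variable,
# reducing the total number of output files
IOHDF5::out_one_file_per_group = yes
```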
|
darcs-hash:20070419021113-dae7b-baa8e7a012bddab40246f9485d5b3987fd7dc587.gz
|
darcs-hash:20070419021042-dae7b-fa34acf3bf956f5b07a74e882f1cb0663f2826dd.gz
|
By setting the new steerable boolean parameter IO::abort_on_io_errors to true
in a parfile, the user can now tell the simulation to abort in case of any
I/O errors while writing HDF5 output/checkpoint files. The default is to only
warn about such errors and continue the simulation.
This patch requires an up-to-date CVS version of thorn CactusBase/IOUtil
from which the parameter IO::abort_on_io_errors is inherited.
darcs-hash:20070418155052-776a0-554152ad445c5215daac96e8fa2b55f06318d0c1.gz
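A parameter-file sketch (the parameter name is taken from the message above):

```
# Abort the simulation on HDF5 output/checkpoint errors
# instead of the default behaviour of warning and continuing
IO::abort_on_io_errors = yes
```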
|
Adapt to region_t changes. Use the type region_t instead of
gridstructure_t. This is an incompatible change to the format of HDF5
files.
darcs-hash:20070112223732-dae7b-9f2527492cffa6f929a9dd32604713267621d7fb.gz