| Commit message | Author | Age |

single mechanism provided by CarpetLib.
Use this mechanism everywhere.
Allow different numbers of ghost zones and different spatial
prolongation orders on different refinement levels.
This causes incompatible changes to the checkpoint file format.
Serialise the time hierarchy with 17 digits of accuracy, instead of
the standard 6 digits. This is required for correctly recovering the
time hierarchy.
Rename out3D_ghosts to output_ghost_points. Rename out3D_outer_ghosts
to output_boundary_points. Keep the old parameter name for
compatibility.
Output a warning message if multiple input files need to be read by
one MPI process, since this is usually very slow. When files are read
by the same number of processes that wrote them, each process should
need to open only one file.
Due to a wrong upper range in the time hierarchy initialisation loop, only maps on the coarsest refinement level were initialised. This caused an assertion failure when recovering multiple refinement levels which weren't aligned.
simulation with a proper error message (rather than just an assertion failure)
needed by OpenDX and VisIt
Carpet HDF5 output data. The slice data are output in HDF5 again.
writing string attributes, another in querying the "Checkpoint" tag of CCTK groups
function CarpetIOHDF5_InitCheckpointingIntervals
Calling the scheduled function CarpetIOHDF5_InitCheckpointingIntervals
leads to namespace problems with the PGI compiler.
Conflicts:
Carpet/CarpetWeb/index.html
variables also in the POST_RECOVER_VARIABLES bin so that the
last checkpoint iteration counter starts counting from the recovered
iteration number.
(see also discussion thread starting at
http://lists.carpetcode.org/archives/developers/2008-August/002309.html)
(<last-checkpoint-iteration> + IO::checkpoint_every)"
This reverts commit 30f1c46a7f94d423bda65e04220015e0296e1347.
(<last-checkpoint-iteration> + IO::checkpoint_every)
Broadcast the result of checkpoint_every_walltime_hours, since different
processors may come to different decisions.
Correct a multi-processor synchronisation problem when using IO::out_dt.
The CactusBase/IOUtil routine IOUtil_ParseVarsForOutput now accepts an
additional parameter out_dt_default.
Introduce a tree data structure "fulltree", which decomposes a single,
rectangular region into a tree of non-overlapping, rectangular sub-regions.
Move the processor decomposition from the regridding thorns into Carpet.
Create such trees during processor decomposition.
Store these trees with the grid hierarchy.
for generalized grids.
Correct error in determining checkpointing interval after restarting.
integer-type grid variables
Change some CCTK_REAL variables to double, because they are read by
HDF5 routines as "native double".
darcs-hash:20080219052326-dae7b-0bca03938f51c0ed598e671f5278659e6e051827.gz
Change sprintf to snprintf. Add assert statements.
darcs-hash:20080219052249-dae7b-abbdbb9df6c099cdd62ebaac135b654062659619.gz
Add functions which write a single HDF5 attribute. Call these
functions to write the attributes.
darcs-hash:20080219052032-dae7b-dbb3e2ee3aad27cce26735639b9065c86cc9cef3.gz
Write attributes containing the configuration, source tree, and run
ids into the output files.
darcs-hash:20080203190955-dae7b-e22cbe007b26040e7131b1f93307382dc0cc5882.gz
Fix creative change of argument order to a call to WriteMetaData which resulted
in nioprocs=-1. This should fix checkpoint/recovery.
darcs-hash:20080203024904-fff0f-5d3514b62c80cd2920ac45a98758fff8c00135c1.gz
Without this bugfix, whenever there is an aliased function
"Multipatch_MapIsCartesian", the corresponding write call for this
attribute fails because the attribute's scalar dataspace was already closed.
darcs-hash:20080131151816-79e7e-3678879958c776fc1240e8b93f054732c64b62c5.gz