Recovery happens level-by-level in Cactus. When recovering the
refinement level times and the global time, set them correctly
according to the current refinement level.

Initialise the times of all time levels of grid arrays while
recovering.

Store the current Cactus time (and not a fake Carpet time) in th, the
"time hierarchy". This removes the now redundant "leveltimes" data
structure in Carpet.
Add past time levels to th, so that it can store the time for past
time levels instead of assuming the time step size is constant. This
allows changing the time step size during evolution.
Share the time hierarchy between all maps, instead of having one time
hierarchy per map.
Simplify the time level cycling and time stepping code used during
evolution.
Improve the structure of the code that loops over time levels for
certain schedule bins. Introduce a new Carpet variable "timelevel",
similar to "reflevel".
This also makes it possible to avoid time interpolation for the past
time levels during regridding. The past time levels of the fine grid
then remain aligned (in time) with the past time levels of the coarse
grid. This is controlled by a new parameter
"time_interpolation_during_regridding", which defaults to "yes" for
backwards compatibility.
Simplify the three-time-level initialisation. Instead of initialising
all three time levels by taking altogether three time steps (forwards
and backwards), initialise only one past time level by taking one time
step backwards. The remaining time level is initialised during the
first time step of the evolution, which begins by cycling time levels,
which drops the non-initialised last time level anyway.
Update Carpet and the mode handling correspondingly.
Update the CarpetIOHDF5 checkpoint format correspondingly.
Update CarpetInterp, CarpetReduce, and CarpetRegrid2 correspondingly.
Update CarpetJacobi and CarpetMG correspondingly.
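
For illustration, a minimal parameter-file sketch of how the new behaviour
could be selected (the parameter name is quoted from this message; the thorn
prefix and the value shown are assumptions):

    # skip time interpolation of past time levels during regridding,
    # keeping fine-grid past levels aligned in time with the coarse grid
    Carpet::time_interpolation_during_regridding = "no"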

Allow different numbers of ghost zones and different spatial
prolongation orders on different refinement levels.
This causes incompatible changes to the checkpoint file format.

Output a warning message if multiple input files need to be read from
one MPI process, since this is usually very slow. When files are read
by the same number of processes that wrote them, each process should
only need to open one file.

Due to a wrong upper range in the time hierarchy initialisation loop,
only maps on the coarsest refinement level were initialised. This caused
an assertion failure when recovering multiple refinement levels which
weren't aligned.

simulation with a proper error message (rather than just an assertion failure)

Introduce a tree data structure "fulltree", which decomposes a single,
rectangular region into a tree of non-overlapping, rectangular sub-regions.
Move the processor decomposition from the regridding thorns into Carpet.
Create such trees during processor decomposition.
Store these trees with the grid hierarchy.
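
As a rough illustration of the kind of structure described above (a hedged
sketch only; the type and member names here are invented and are not
Carpet's actual fulltree interface):

    // Sketch: a binary tree that splits one rectangular region into
    // non-overlapping rectangular sub-regions along coordinate planes.
    #include <memory>
    #include <vector>

    struct region_sketch {          // hypothetical axis-aligned box
      int lo[3], hi[3];             // inclusive lower / exclusive upper corner
    };

    struct fulltree_sketch {
      region_sketch extent;                           // region covered by this node
      int split_dim = -1;                             // -1 marks a leaf
      std::unique_ptr<fulltree_sketch> lower, upper;  // children partition "extent"

      bool is_leaf () const { return split_dim < 0; }

      // Collect the leaf regions; together they cover "extent" exactly and
      // without overlap -- e.g. one leaf per processor component.
      void leaves (std::vector<region_sketch>& out) const {
        if (is_leaf ()) { out.push_back (extent); return; }
        lower->leaves (out);
        upper->leaves (out);
      }
    };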

darcs-hash:20080220003219-dae7b-0505c565d4989163b001ee53b1ff6577211649f1.gz

Change some CCTK_REAL variables to double, because they are read by
HDF5 routines as "native double".
darcs-hash:20080219052326-dae7b-0bca03938f51c0ed598e671f5278659e6e051827.gz

darcs-hash:20080130222154-dae7b-113029d4e40be633fca3253f5e2d47f656ae41ac.gz

darcs-hash:20080130221846-dae7b-08cd77d33269fc3ec8d20db87731b2b2097d5d38.gz

during recovery
Various people had reported problems with running out of memory when
recovering from multiple chunked checkpoint files. It turned out that the
HDF5 library itself requires a considerable amount of memory for each opened
HDF5 file. When all chunked files of a checkpoint are opened at the same time
during recovery (which is the default), this may cause the simulation to
abort with an 'out of memory' error in extreme cases.
This patch introduces a new steerable boolean parameter
IOHDF5::open_one_input_file_at_a_time
which, if set to "yes", tells the recovery code to open/read/close chunked
files one after another for each refinement level, thus avoiding excessive
HDF5-internal memory requirements due to multiple open files.
The default behaviour is (as before) to keep all input files open until all
refinement levels are recovered.
darcs-hash:20071019091424-3fd61-834471be8da361b235d0a4cbf3d6f16ae0b653f0.gz
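
A hedged parameter-file sketch of how this option can be enabled (the
parameter name and value are quoted from the message above):

    # open, read, and close chunked input files one after another per level,
    # trading recovery speed for lower HDF5-internal memory use
    IOHDF5::open_one_input_file_at_a_time = "yes"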

The new code to collect I/O timing statistics introduced a memory leak
while accumulating the number of bytes transferred.
darcs-hash:20071018115734-3fd61-e087a4ad1c8fdcf8a59320b71f90b92e9fd850de.gz

darcs-hash:20071004024754-dae7b-2096582f0b63bd0521d41e3eea01e74f7962bf79.gz

darcs-hash:20071003194857-dae7b-d8bb68e9c4ee52559fea874b3f80a57eebf9650f.gz

Correctly initialise the list of variables which need to be synchronised
after recovery.
darcs-hash:20070823210422-dae7b-31da5366355d1bfbea2e4181b8639c5b4e6caf0c.gz

Synchronise the recovered grid functions. This removes the need for
calling the postregrid bin after recovering.
darcs-hash:20070608201848-dae7b-4d2044344f7e8e9a30ca60780199f2906a58d957.gz

darcs-hash:20070523204447-dae7b-364638404dec31fbf3f0db103d930fc60c13ad65.gz

When no files can be found while reading initial data from files, output
all file names that were tried. Since the file names are constructed
dynamically, this makes it easier to find errors in parameter files.
darcs-hash:20070523204234-dae7b-676f945408731a162d2795ab2559b012eaf4fcaf.gz

darcs-hash:20070419021113-dae7b-baa8e7a012bddab40246f9485d5b3987fd7dc587.gz

Adapt to region_t changes. Use the type region_t instead of
gridstructure_t. This is an incompatible change to the format of HDF5
files.
darcs-hash:20070112223732-dae7b-9f2527492cffa6f929a9dd32604713267621d7fb.gz

Use "m" instead of "map" as local variable name.
Remove "Carpet::" qualifier in front of variable "maps".
darcs-hash:20070112224022-dae7b-0c5241b73c1f4a8ff4722e04bc70ed047d6158da.gz

darcs-hash:20060925220348-dae7b-303594fd2b999c93d2b816a9d3f11d0d97e391c2.gz

darcs-hash:20060925220323-dae7b-040ddfb0afc83c15cd4802fe26fe4822826a2e8a.gz

Due to a bug in my previous patch, the logic for removing the checkpoint file
after successful recovery was wrong: it was removed if IO::recover_and_remove
was set to "false".
This patch fixes this bug by reversing the logic.
Thanks to Ian Hinder for noticing this and presenting the fix.
darcs-hash:20060626162548-776a0-8d3ebc0c43a74cb3faa892aa2a410e13bb37825e.gz

If IO::recover_and_remove is set, the recovery file will also be removed after
IO::checkpoint_keep successful checkpoints have been written.
darcs-hash:20060607102431-776a0-92edd93f6dc004ab824b237fbd03ee732f7a3841.gz
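
A hedged parameter-file sketch of the behaviour described above (both
parameter names appear in the message; the value of IO::checkpoint_keep is
only an example):

    IO::recover_and_remove = "yes"   # also remove the recovery file
    IO::checkpoint_keep    = 3       # once 3 newer checkpoints exist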

points (zero size)
darcs-hash:20060512115514-776a0-dba29d6e31a12d4cff6772e69bd1ef54e3aa2d8b.gz

datatype for H5Sselect_hyperslab() arguments
This patch lets you compile CarpetIOHDF5 also with HDF5-1.8.x (and future
versions).
darcs-hash:20060511172957-776a0-acbc1bd6b8d92223c0b52a43babf394c0ab9b0f4.gz
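
A hedged illustration (not the actual patch) of the argument types that the
HDF5 1.8.x prototype of H5Sselect_hyperslab() expects; the helper function
and variable names are invented:

    #include <hdf5.h>

    /* With HDF5 1.8.x the offset/count arguments of H5Sselect_hyperslab()
       are hsize_t arrays; older code sometimes declared them as hssize_t. */
    static herr_t select_component (hid_t filespace,
                                    hsize_t nx, hsize_t ny, hsize_t nz)
    {
      hsize_t start[3] = {0, 0, 0};
      hsize_t count[3] = {nx, ny, nz};
      return H5Sselect_hyperslab (filespace, H5S_SELECT_SET,
                                  start, NULL, count, NULL);
    }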

Correct errors in the handling of the parameter
"use_grid_structure_from_checkpoint".
darcs-hash:20060508193609-dae7b-c5cf907171eb31e8298669cf4bd4aa03f2c79429.gz

Add a parameter "use_grid_structure_from_checkpoint" that, when set, causes
the grid structure to be read from the checkpoint file and the Carpet grid
hierarchy to be set up accordingly.
The Carpet grid hierarchy is written unconditionally to all checkpoint
files.
darcs-hash:20060413202124-dae7b-f97e6aac2267ebc5f5e3867cbf78ca52bbd33016.gz
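
A hedged parameter-file sketch (the parameter name is quoted from the
message; the IOHDF5:: prefix is assumed by analogy with the other
CarpetIOHDF5 parameters in this log):

    # set up the Carpet grid hierarchy from the checkpoint being recovered
    IOHDF5::use_grid_structure_from_checkpoint = "yes"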

When recovering from a checkpoint, each processor now continuously reads through
all chunked files until all grid variables on this processor have been fully
recovered. This should always minimise the number of individual checkpoint files
that need to be opened on each processor.
darcs-hash:20060217160928-776a0-28c076749861c0b26d1c41a6f4ef3bdb00c23274.gz

This patch introduces an optimisation for the case of recovering with the
same number of processors as used during the checkpoint: each processor
opens only its own chunked file and reads its metadata, skipping all others.
darcs-hash:20060212200032-776a0-3dd501d20b8efb66faa715b401038218bb388b4f.gz

The scheduled routine CarpetIOHDF5_CloseFiles() was declared to return an int
and take no arguments. Instead it must be declared to take a 'const cGH* const'
argument. It should also return void.
See http://www.cactuscode.org/old/pipermail/developers/2006-February/001656.html.
This patch also fixes a couple of g++ warnings about signed-unsigned integer
comparisons.
darcs-hash:20060209165534-776a0-24101ebd8c09cea0a9af04acc48f8e2aa2961e34.gz
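
For clarity, the corrected prototype implied by the message above would look
roughly as follows (the argument name is an assumption, following the usual
Cactus convention):

    #include "cctk.h"

    /* scheduled routine: takes the grid hierarchy pointer, returns void */
    void CarpetIOHDF5_CloseFiles (const cGH* const cctkGH);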

Accumulate any low-level errors returned by HDF5 library calls and check them
after writing a checkpoint. Do not remove an existing checkpoint if there were
any low-level errors in generating the previous one.
darcs-hash:20060206183846-776a0-549e715d7a3fceafe70678aaf1329052dce724bb.gz

When the filereader was used, CarpetIOHDF5 still checked for all grid
variables whether they had been read completely from a data file, even for
those which weren't specified in the IO::filereader_ID_vars parameter.
darcs-hash:20060201174945-776a0-faa9fe295ef273ffd38308bbda7fde092503513c.gz

The recovery code didn't properly recover grid functions with multiple maps:
all maps were initialised with the data from map 0.
This patch fixes the problem so that checkpointing/recovery should now also
work for multipatch applications.
The patch only affects the recovery code, meaning it will also work with older
checkpoint files.
darcs-hash:20060120164515-776a0-68f93cb5fb197f805beedfdc176fd8da9b7bfc49.gz

darcs-hash:20051119212959-dae7b-d50e2cc4c8a980720b44cfafd9504eb201e3aa8b.gz

Before reading a variable from a dataset, also check that its timelevel is valid.
This fixes problems when recovering from a checkpoint (created with
'Carpet::enable_all_storage = true') while this boolean parameter is now set to 'false'.
darcs-hash:20051120134642-776a0-4fe21611ca733ecb42f8e2a82bfa1fe51a5d9e81.gz

"yes" during recovery
When IOHDF5::use_reflevels_from_checkpoint is set, the parameter
CarpetLib::refinement_levels is steered to take the number of levels found
in the checkpoint. This steering used to happen during parameter recovery,
where it had no effect if the parameter had already been set in the parfile.
Now it is done in a separate routine scheduled at STARTUP.
darcs-hash:20050906140808-776a0-bae608c103b161ac67690da2a8803bdff84cf2f4.gz
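
A hedged parameter-file sketch of the option discussed above (parameter name
quoted from the message; the value is the case this fix addresses):

    # steer CarpetLib::refinement_levels to the number of levels
    # stored in the checkpoint being recovered
    IOHDF5::use_reflevels_from_checkpoint = "yes"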

from a single chunked checkpoint file
The list of HDF5 datasets to process is now reordered so that processor-local
components are processed first. When processing the list of datasets, only
those from which more data is to be read will be reopened. The check for
whether a variable of a given timelevel has been fully recovered was improved.
This closes http://bugs.carpetcode.org/show_bug.cgi?id=87 "Single-file,
many-cpu recovery very slow".
darcs-hash:20050728122546-776a0-21dfceef87e12e72b8a0ccb0911c76066521e192.gz

Like CactusPUGHIO/IOHDF5, CarpetIOHDF5 now also provides parallel I/O for
data and checkpointing/recovery.
The I/O mode is set via IOUtil's parameters IO::out_mode and IO::out_unchunked,
with parallel output to chunked files (one per processor) being the default.
The recovery and filereader interface can read any type of CarpetIOHDF5 data
file transparently, regardless of how it was created (serially, in parallel,
or on a different number of processors).
See the updated thorn documentation for details.
darcs-hash:20050624123924-776a0-5639aee9677f0362fc94c80c534b47fd1b07ae74.gz
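
A hedged parameter-file sketch of the I/O mode selection described above (the
parameter names are quoted from the message; the values shown are assumptions
about typical IOUtil settings for per-processor chunked output):

    IO::out_mode      = "proc"   # one chunked output file per processor
    IO::out_unchunked = "no"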