| Commit message | Author | Age |

Introduce a tree data structure "fulltree", which decomposes a single,
rectangular region into a tree of non-overlapping, rectangular sub-regions.
Move the processor decomposition from the regridding thorns into Carpet.
Create such trees during processor decomposition.
Store these trees with the grid hierarchy.
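The idea described above can be sketched as a small C++ tree; note that all names here (`Region`, `FullTree`, `split_x`) are invented for illustration and are not Carpet's actual fulltree interface:

```cpp
#include <cassert>
#include <memory>
#include <vector>

// Illustrative sketch only, not Carpet's fulltree: a node either is a
// leaf holding one rectangular region, or splits its region into
// non-overlapping rectangular children.
struct Region {            // half-open 2D box [xlo,xhi) x [ylo,yhi)
  int xlo, xhi, ylo, yhi;
};

struct FullTree {
  Region region;                               // region covered by this node
  std::vector<std::unique_ptr<FullTree>> kids; // empty => leaf

  explicit FullTree(Region r) : region(r) {}

  // Split along x into n roughly equal, non-overlapping sub-regions.
  void split_x(int n) {
    for (int i = 0; i < n; ++i) {
      Region r = region;
      r.xlo = region.xlo + (region.xhi - region.xlo) * i / n;
      r.xhi = region.xlo + (region.xhi - region.xlo) * (i + 1) / n;
      kids.push_back(std::make_unique<FullTree>(r));
    }
  }

  // Collect the leaf regions, i.e. the actual decomposition.
  void leaves(std::vector<Region>& out) const {
    if (kids.empty()) { out.push_back(region); return; }
    for (auto const& k : kids) k->leaves(out);
  }
};
```

The leaves of such a tree then correspond to the per-processor sub-regions, and adjacent leaves share faces but never overlap.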

for generalized grids.

Correct error in determining checkpointing interval after restarting.

integer-type grid variables

This isn't necessary, but it is very convenient for generating complete
thornlists with the MakeThornList script.

initialisation scheme (i.e. explicit initialisation of all three
timelevels, resulting in slightly different output data)

changed

- don't output sum norms (not suitable for testsuites)

darcs-hash:20080220003219-dae7b-0505c565d4989163b001ee53b1ff6577211649f1.gz

Change some CCTK_REAL variables to double, because they are read by
HDF5 routines as "native double".
darcs-hash:20080219052326-dae7b-0bca03938f51c0ed598e671f5278659e6e051827.gz

Change sprintf to snprintf. Add assert statements.
darcs-hash:20080219052249-dae7b-abbdbb9df6c099cdd62ebaac135b654062659619.gz
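The motivation for this kind of change can be shown in plain C (an illustration only, not the thorn's actual code): unlike sprintf, snprintf never writes past the destination buffer and NUL-terminates on truncation.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Illustration only: building a dataset-style name into a fixed-size
 * buffer.  sprintf would overflow "buf" for long variable names;
 * snprintf truncates safely and always NUL-terminates. */
int format_name(char *buf, size_t len, const char *var, int it) {
  int n = snprintf(buf, len, "%s it=%d", var, it);
  assert(n >= 0);       /* negative only on encoding errors */
  return n < (int)len;  /* 0 if the name was truncated */
}
```

The return value of snprintf is the length the full string would have had, which is what makes the truncation check above possible.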

Add functions which write a single HDF5 attribute. Call these
functions to write the attributes.
darcs-hash:20080219052032-dae7b-dbb3e2ee3aad27cce26735639b9065c86cc9cef3.gz

Write attributes containing the configuration, source tree, and run
ids into the output files.
darcs-hash:20080203190955-dae7b-e22cbe007b26040e7131b1f93307382dc0cc5882.gz

Fix creative change of argument order to a call to WriteMetaData which resulted
in nioprocs=-1. This should fix checkpoint/recovery.
darcs-hash:20080203024904-fff0f-5d3514b62c80cd2920ac45a98758fff8c00135c1.gz

Without this bugfix, whenever the aliased function "Multipatch_MapIsCartesian"
is provided, the corresponding write call for this attribute fails because the
attribute's scalar dataspace was already closed.
darcs-hash:20080131151816-79e7e-3678879958c776fc1240e8b93f054732c64b62c5.gz

darcs-hash:20080130222154-dae7b-113029d4e40be633fca3253f5e2d47f656ae41ac.gz

darcs-hash:20080130222322-dae7b-223b30e93a6f7860c1234ed181453c77b1ee056e.gz

darcs-hash:20080130221846-dae7b-08cd77d33269fc3ec8d20db87731b2b2097d5d38.gz

Add an HDF5 attribute "MapisCartesian" specifying whether the
coordinate system is Cartesian. This attribute is added if there is a
thorn providing this information.
darcs-hash:20080128155820-dae7b-759cc1608ba8a7c9d34203b114ceb3836db0f64d.gz

darcs-hash:20080123132645-3fd61-ed195918c2c55889623aaf2195237e2ff7bd6e17.gz

Enclose the macro "HDF5_ERROR" in a do { ... } while (0) pair to make
it safe to use with a trailing semicolon.
darcs-hash:20080111111512-dae7b-b65c7b375ee6ac882b59414db2810f06fcc3d799.gz
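The do { ... } while (0) idiom the patch refers to can be demonstrated with a generic stand-alone macro (a sketch, not the thorn's actual HDF5_ERROR definition):

```c
#include <assert.h>

/* Sketch of the idiom (not the real HDF5_ERROR): wrapping a
 * multi-statement body in do { ... } while (0) makes CHECK(x);
 * behave like a single statement, so it composes correctly with
 * if/else and a trailing semicolon. */
static int error_count = 0;

#define CHECK(expr)      \
  do {                   \
    if ((expr) < 0) {    \
      ++error_count;     \
    }                    \
  } while (0)

static int maybe_check(int flag, int code) {
  if (flag)
    CHECK(code);   /* a plain { ... } block here would break the if/else */
  else
    CHECK(-code);
  return error_count;
}
```

Without the do/while wrapper, the trailing semicolon after `CHECK(code);` would terminate the if-branch early and the `else` would no longer parse.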

Output an additional attribute "Datasets" into the "Parameters and
Global Attributes" group. This attribute is an array of strings and
contains the full variable names of all variables in this file.
darcs-hash:20080111111324-dae7b-895d9c619e1f2126ea367cb93b77f407d50f15b4.gz

darcs-hash:20071130052144-fff0f-57258dcf6536be5b2d8a14f2a5ef5be56d6f038f.gz

darcs-hash:20071102224421-dae7b-79e89575a8e23c46fea47ef043d28ab7331da985.gz

during recovery
Various people had reported problems of running out of memory when recovering
from multiple chunked checkpoint files. It turned out that the HDF5 library
itself requires a considerable amount of memory for each opened HDF5 file.
When all chunked files of a checkpoint are opened at the same time during
recovery (which is the default) this may cause the simulation to abort with an
'out of memory' error in extreme cases.
This patch introduces a new steerable boolean parameter
IOHDF5::open_one_input_file_at_a_time
which, if set to "yes", will tell the recovery code to open/read/close chunked
files one after another for each refinement level, thus avoiding excessive
HDF5-internal memory requirements due to multiple open files.
The default behaviour is (as before) to keep all input files open until all
refinement levels are recovered.
darcs-hash:20071019091424-3fd61-834471be8da361b235d0a4cbf3d6f16ae0b653f0.gz
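In a parameter file the new option would be enabled like this (a sketch; the IOHDF5 parameter name is taken from the patch, the surrounding recovery setting is illustrative):

```
# recover from chunked checkpoint files, opening them one at a time
IO::recover                           = "auto"
IOHDF5::open_one_input_file_at_a_time = "yes"
```

Since the parameter is steerable, it can also be changed at recovery time rather than fixed at the start of the original run.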

The new code to collect I/O timing statistics introduced a memory leak
while accumulating the number of bytes transferred.
darcs-hash:20071018115734-3fd61-e087a4ad1c8fdcf8a59320b71f90b92e9fd850de.gz

darcs-hash:20071004024754-dae7b-2096582f0b63bd0521d41e3eea01e74f7962bf79.gz

darcs-hash:20071003194857-dae7b-d8bb68e9c4ee52559fea874b3f80a57eebf9650f.gz

darcs-hash:20071003194834-dae7b-f9a5ba7a3c9c3b0542a9cc316d5854d961d15baa.gz

darcs-hash:20070825060808-dae7b-9ea6675a36f9230332067acdb705d48e75d12fd2.gz

Correctly initialise the list of variables which need to be
synchronised after recovery.
darcs-hash:20070823210422-dae7b-31da5366355d1bfbea2e4181b8639c5b4e6caf0c.gz

Synchronise the recovered grid functions. This removes the need for
calling the postregrid bin after recovering.
darcs-hash:20070608201848-dae7b-4d2044344f7e8e9a30ca60780199f2906a58d957.gz

The HDF5-to-ASCII converter was accidentally removed.
darcs-hash:20070530133215-dae7b-b1da3480a9a963ad78bc8cf02f5150389e19d286.gz

Remove the make.configuration.defn file. Configuration dependencies
are now handled by the configuration.ccl file.
darcs-hash:20060808151859-dae7b-525b7e54c2f3c771bba2acdb7b11c9fd0c1dc8c6.gz

Require the capability HDF5 instead of testing whether HDF5 was
configured in.
darcs-hash:20051119212528-dae7b-5600109d2e5fc5c89477fd1b4a978d39f3aed555.gz

darcs-hash:20070523204447-dae7b-364638404dec31fbf3f0db103d930fc60c13ad65.gz

When no files can be found while reading initial data from files,
output all file names that were tried. Since the file names are
constructed dynamically, this makes it easier to find errors in
parameter files.
darcs-hash:20070523204234-dae7b-676f945408731a162d2795ab2559b012eaf4fcaf.gz

Make CarpetIOHDF5::use_grid_structure_from_checkpoint=yes the default
setting.
darcs-hash:20070523204057-dae7b-80c06a4a883db327ce7768f3363ebccfbc3b0dd6.gz

A new steerable boolean parameter IOHDF5::out_one_file_per_group was added
which, if set to "true", causes Cactus to write all variables of a group
into a single HDF5 file (useful to reduce the total number of output files).
darcs-hash:20070430162902-3fd61-f8c3e4cd641c40e8afe859933e611cda50c52efe.gz
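As a usage sketch (the IOHDF5 parameter names are from the patch; the variable selection is illustrative):

```
# write all variables of a group into one HDF5 file per group
IOHDF5::out_one_file_per_group = "yes"
IOHDF5::out_vars               = "ADMBase::metric"
```

With the parameter set, each output file then holds the whole group rather than one file per variable.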

Set up gdata object correctly before copying it between processors.
darcs-hash:20070501163757-dae7b-55cb3575d707f88806bff70f4cfc7153de879c1f.gz

darcs-hash:20070419021113-dae7b-baa8e7a012bddab40246f9485d5b3987fd7dc587.gz

darcs-hash:20070419021042-dae7b-fa34acf3bf956f5b07a74e882f1cb0663f2826dd.gz

By setting the new steerable boolean parameter IO::abort_on_io_errors to true
in a parfile, the user can now tell the simulation to abort in case of any
I/O errors while writing HDF5 output/checkpoint files. The default is to only
warn about such errors and continue the simulation.
This patch requires an up-to-date CVS version of thorn CactusBase/IOUtil
from which the parameter IO::abort_on_io_errors is inherited.
darcs-hash:20070418155052-776a0-554152ad445c5215daac96e8fa2b55f06318d0c1.gz

In order to make the recovery testsuites pass, one would have to specify
a rather large absolute tolerance for the sum norms. Since this isn't
possible for an individual norm, I decided to omit the output of sum norms
entirely.
darcs-hash:20070223165809-776a0-83dbbb59855996cec51bd6b9da945d4bf467bd4e.gz