| Commit message | Author | Age |
darcs-hash:20070825060808-dae7b-9ea6675a36f9230332067acdb705d48e75d12fd2.gz
Correctly initialise the list of variables that need to be synchronised
after recovery.
darcs-hash:20070823210422-dae7b-31da5366355d1bfbea2e4181b8639c5b4e6caf0c.gz
Synchronise the recovered grid functions. This removes the need for
calling the postregrid bin after recovering.
darcs-hash:20070608201848-dae7b-4d2044344f7e8e9a30ca60780199f2906a58d957.gz
The HDF5-to-ASCII converter was accidentally removed.
darcs-hash:20070530133215-dae7b-b1da3480a9a963ad78bc8cf02f5150389e19d286.gz
Remove the make.configuration.defn file. Configuration dependencies
are now handled by the configuration.ccl file.
darcs-hash:20060808151859-dae7b-525b7e54c2f3c771bba2acdb7b11c9fd0c1dc8c6.gz
Require the capability HDF5 instead of testing whether HDF5 was
configured in.
darcs-hash:20051119212528-dae7b-5600109d2e5fc5c89477fd1b4a978d39f3aed555.gz
darcs-hash:20070523204447-dae7b-364638404dec31fbf3f0db103d930fc60c13ad65.gz
When no files can be found while reading initial data from files,
output all file names that were tried. Since the file names are
constructed dynamically, this makes it easier to find errors in
parameter files.
darcs-hash:20070523204234-dae7b-676f945408731a162d2795ab2559b012eaf4fcaf.gz
Make CarpetIOHDF5::use_grid_structure_from_checkpoint=yes the default
setting.
darcs-hash:20070523204057-dae7b-80c06a4a883db327ce7768f3363ebccfbc3b0dd6.gz
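For reference, a parameter-file fragment reverting to the previous behaviour might look like the sketch below (the parameter name is taken from the message above; the boolean syntax assumes standard Cactus parfile conventions):

```
# Rebuild the grid structure from the regridding parameters
# instead of taking it from the checkpoint file:
CarpetIOHDF5::use_grid_structure_from_checkpoint = "no"
```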
A new steerable boolean parameter IOHDF5::out_one_file_per_group was added
which, if set to "true", causes Cactus to write all variables of a group
into a single HDF5 file (useful for reducing the total number of output files).
darcs-hash:20070430162902-3fd61-f8c3e4cd641c40e8afe859933e611cda50c52efe.gz
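A parameter-file fragment using this option might look like the following sketch (the variable names below are placeholders, not taken from this patch):

```
# Write all variables of a group into one HDF5 file per group:
IOHDF5::out_one_file_per_group = "true"
IOHDF5::out_vars = "admbase::metric admbase::lapse"
```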
Set up gdata object correctly before copying it between processors.
darcs-hash:20070501163757-dae7b-55cb3575d707f88806bff70f4cfc7153de879c1f.gz
darcs-hash:20070419021113-dae7b-baa8e7a012bddab40246f9485d5b3987fd7dc587.gz
darcs-hash:20070419021042-dae7b-fa34acf3bf956f5b07a74e882f1cb0663f2826dd.gz
By setting the new steerable boolean parameter IO::abort_on_io_errors to true
in a parfile, the user can now tell the simulation to abort in case of any
I/O errors while writing HDF5 output/checkpoint files. The default is to only
warn about such errors and continue the simulation.
This patch requires an up-to-date CVS version of thorn CactusBase/IOUtil
from which the parameter IO::abort_on_io_errors is inherited.
darcs-hash:20070418155052-776a0-554152ad445c5215daac96e8fa2b55f06318d0c1.gz
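Assuming the standard IOUtil parfile syntax, enabling this behaviour is a one-line change:

```
# Abort the run on any error while writing HDF5 output or
# checkpoint files (the default is to warn and continue):
IO::abort_on_io_errors = "true"
```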
In order to make the recovery testsuites pass, one would have to specify
a rather large absolute tolerance for the sum norms. Since this isn't
possible for an individual norm, I decided to omit the output of sum norms
entirely.
darcs-hash:20070223165809-776a0-83dbbb59855996cec51bd6b9da945d4bf467bd4e.gz
In order to compile the hdf5toascii_slicer utility program on an IBM SP5 using
xlC, the source code also needs to include <string>.
darcs-hash:20070221174117-776a0-c791309492972a552d19541054d5e7e74a15052e.gz
darcs-hash:20070204212255-dae7b-3fb713d97921718ff67bedd7382f49ca10c40751.gz
Some of CarpetIOHDF5's testsuites produce CarpetIOASCII output files
whose format depends on the number of processors used.
The newly added test.ccl configuration file explicitly specifies this
number so that the testsuites run successfully.
darcs-hash:20070118100106-776a0-1317671e489dd815dafd7c4e376f9d5dcc45c552.gz
Adapt to region_t changes. Use the type region_t instead of
gridstructure_t. This is an incompatible change to the format of HDF5
files.
darcs-hash:20070112223732-dae7b-9f2527492cffa6f929a9dd32604713267621d7fb.gz
Use "m" instead of "map" as local variable name.
Remove "Carpet::" qualifier in front of variable "maps".
darcs-hash:20070112224022-dae7b-0c5241b73c1f4a8ff4722e04bc70ed047d6158da.gz
Implement the variable-specific output request option 'compression_level'
so that users can specify, e.g.,
  IOHDF5::compression_level = 1
  IOHDF5::out_vars = "admbase::metric
                      admconstraints::hamiltonian
                      admbase::lapse{ compression_level = 0 }"
to request HDF5 dataset compression for every output variable except the
lapse.
This modification also requires an update of thorn CactusBase/IOUtil.
darcs-hash:20061117132206-776a0-0e1d07a85cf206fa262a94fd0dd63c6f27e50fa2.gz
With the new steerable integer parameter IOHDF5::compression_level, users can
now request gzip dataset compression when writing HDF5 files: levels 1-9
select the compression level, and 0 (the default) disables dataset
compression.
darcs-hash:20061117115153-776a0-7aaead5d2a0216841a27e091fddb9b6c4f40eed4.gz
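A minimal parfile fragment enabling maximum compression for all HDF5 output might look like this sketch:

```
# gzip-compress all HDF5 datasets; 1 = fastest, 9 = smallest,
# 0 (default) = no compression:
IOHDF5::compression_level = 9
```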
darcs-hash:20061005133135-776a0-1550cde9db6e0a375b661b2ab6b0ed9c762bbe9d.gz
If no thorn provided this aliased function, CarpetIOHDF5 assumes that no
coordinate information is available.
darcs-hash:20061004144616-776a0-00da12cca7d6b6ad1ae0a38a96923f771239de79.gz
darcs-hash:20060925220348-dae7b-303594fd2b999c93d2b816a9d3f11d0d97e391c2.gz
darcs-hash:20060925220323-dae7b-040ddfb0afc83c15cd4802fe26fe4822826a2e8a.gz
Save grid structure in the output files whenever the parameters are
saved, even when it is not a checkpoint file.
darcs-hash:20060925220235-dae7b-09e23cc6ec48e20df0b560356f19648a67e955dd.gz
darcs-hash:20060925220415-dae7b-2b72aef51ae3d8b056b18221acd9989668a2864d.gz
Update the filereader testsuite so that it passes again, also on multiple
processors. Added output of more norms for comparison.
The testsuite was broken because its norms output files had been created by
CactusBase/IOBasic. Temporarily going back to that I/O thorn verified
that the testsuite still gave the same results.
After updating the parfile to use Carpet's scalar output method, both the
output filenames changed (hence the removal of old output files and the
addition of new ones) and their data contents - the latter because
CarpetIOScalar calls CCTK_Reduce() in global mode whereas CactusBase/IOBasic
calls it in level mode.
darcs-hash:20060915115726-776a0-dbf8ce75815a6e302b90dbd799c1e0e56274da43.gz
Doubled Driver::global_nsize in order to get the grids properly nested.
This requires an update of the checkpoint and all output files.
While both CarpetWaveToyRecover_test_[14]proc testsuites continue to work,
the CarpetWaveToyNewRecover_test_1proc testsuite is still broken, although
it should give exactly the same results as CarpetWaveToyRecover_test_1proc.
It still needs to be investigated whether IOHDF5::use_reflevels_from_checkpoint
really works as expected.
darcs-hash:20060911164811-776a0-70a5d06de9506fa4ea68672ed5776c0b236546d0.gz
again
This patch partially undoes the patch 'CarpetIOHDF5: Add test case for grid
structure recovery' (recorded Mon May 8 21:46:21 CEST 2006) by
* renewing the checkpoint file (containing the same data as before the
  above patch, now with the grid structure added)
* fixing the 1D output files (by removing comment lines which are no longer
  output by CarpetIOASCII)
It leaves the CarpetWaveToyNewRecover_test_1proc testsuite unchanged,
i.e. broken as it was from the beginning.
darcs-hash:20060911160325-776a0-5339d1271436f39e6d9c2fc55e136bcbf9f5fe4e.gz
../exe/wave/hdf5toascii_slicer [--match <regex string>]
...
where
[--match <regex string>] selects HDF5 datasets by their names
matching a regex string using POSIX
Extended Regular Expression syntax
darcs-hash:20060901085302-776a0-3a9cfb71f9008b1a7bf93d9857195f92b67f1e25.gz
darcs-hash:20060831174548-776a0-bf352f210d8736f48a666654309f0ad3443cf3a3.gz
files
darcs-hash:20060828173429-776a0-b231124b73645983b8ed56efa35d5ec2c9354add.gz
darcs-hash:20060822160245-b0a3f-a52dbe5f5276995324eade2d0648af4e1388c964.gz
and unchunked HDF5 output means
darcs-hash:20060822153832-776a0-73ab7e8afc9dd02bfeea6269e7d21f2cdb0c7357.gz
darcs-hash:20060822144417-b0a3f-a1053a60b369e646b2dba3d0470cc4196d64de0e.gz
darcs-hash:20060822143002-b0a3f-d2a21d30f28261ad7248cf44a5928b2a38be9b68.gz
Clarify an ambiguity in the --help output: timestep selection is by
cctk_time value, not by cctk_iteration
darcs-hash:20060822142353-b0a3f-e0ad1ea0e2f4eca7038fc2d77b0ff3d724b97aab.gz
Adds a missing #include <cmath> so std::fabs() can be used
darcs-hash:20060822114437-b0a3f-f85a976d02c36aa6311c7b6916aad107747954b2.gz
The new command line parameter option '--timestep <timestep>' allows users to
select an individual timestep in an HDF5 file.
darcs-hash:20060821200601-776a0-8e977e93014eded2d6f6cab376209ce7d4293b94.gz
While Intel C++ (9.0) had no problems compiling hdf5toascii_slicer.cc at all,
GNU C++ generates an error when assigning -424242424242 to a long int.
Also fixed some other things that g++ warns about.
darcs-hash:20060821094951-776a0-84d68b511de0bbd65b212755d699a415779c9674.gz
is used
darcs-hash:20060817173012-dae7b-857f1313b12144e1ba997d029990a0464bb36df5.gz
darcs-hash:20060814142824-776a0-1553d2404adc099fea75b546d4169f184dc5d3ab.gz
darcs-hash:20060814121029-776a0-279fb0650475bb7a3f53bc74124a42c951c2aa10.gz
darcs-hash:20060808143907-776a0-965984d0fe8688c51fe0a5d9bf8d1dfd51a3eb8a.gz
This utility program extracts 2D slices from CarpetIOHDF5 output files and
prints them to stdout in CarpetIOASCII format.
darcs-hash:20060808123300-776a0-4088993ec64ad291510977ee5c10d521863ef2ac.gz
Due to a bug in my previous patch, the logic for removing the checkpoint file
after successful recovery was wrong: the file was removed if
IO::recover_and_remove was set to "false".
This patch fixes the bug by reversing the logic.
Thanks to Ian Hinder for noticing this and providing the fix.
darcs-hash:20060626162548-776a0-8d3ebc0c43a74cb3faa892aa2a410e13bb37825e.gz
darcs-hash:20060613171412-dae7b-cf8a7c6112d6c364bd7f4e7568e0df2c683c01f3.gz
If IO::recover_and_remove is set, the recovery file will also be removed after
IO::checkpoint_keep successful checkpoints have been written.
darcs-hash:20060607102431-776a0-92edd93f6dc004ab824b237fbd03ee732f7a3841.gz
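A parfile fragment combining these parameters might look like the sketch below (the IO::recover value and the count of 3 are illustrative, assuming standard IOUtil parameter syntax):

```
# Recover from the latest checkpoint, remove the recovery file once
# enough new checkpoints have been written, and keep at most three
# checkpoints on disk:
IO::recover            = "autoprobe"
IO::recover_and_remove = "yes"
IO::checkpoint_keep    = 3
```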