Adapt to region_t changes. Use the type region_t instead of
gridstructure_t. This is an incompatible change to the format of HDF5
files.
darcs-hash:20070112223732-dae7b-9f2527492cffa6f929a9dd32604713267621d7fb.gz
Use "m" instead of "map" as local variable name.
Remove "Carpet::" qualifier in front of variable "maps".
darcs-hash:20070112224022-dae7b-0c5241b73c1f4a8ff4722e04bc70ed047d6158da.gz
Implement the variable-specific output request option 'compression_level'
so that users can specify, e.g.,
IOHDF5::compression_level = 1
IOHDF5::out_vars = "admbase::metric
                    admconstraints::hamiltonian
                    admbase::lapse{ compression_level = 0 }"
to request HDF5 dataset compression for every output variable except the
lapse.
This modification also requires an update of thorn CactusBase/IOUtil.
darcs-hash:20061117132206-776a0-0e1d07a85cf206fa262a94fd0dd63c6f27e50fa2.gz
With the new steerable integer parameter IOHDF5::compression_level, users
can now request gzip dataset compression while writing HDF5 files: levels
1-9 select the compression rate; 0 (the default) disables dataset
compression.
darcs-hash:20061117115153-776a0-7aaead5d2a0216841a27e091fddb9b6c4f40eed4.gz
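Taken together with the variable-specific option above, the two compression
commits describe parfile usage along these lines (a sketch only; the
variable selection is illustrative):

```
# Request gzip compression at the maximum level for all HDF5 output ...
IOHDF5::compression_level = 9
# ... but disable it for the lapse via a per-variable override:
IOHDF5::out_vars = "admbase::metric
                    admbase::lapse{ compression_level = 0 }"
```

Since the parameter is steerable, it can also be changed at run time.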
darcs-hash:20061005133135-776a0-1550cde9db6e0a375b661b2ab6b0ed9c762bbe9d.gz
If no thorn provides this aliased function, CarpetIOHDF5 assumes that no
coordinate information is available.
darcs-hash:20061004144616-776a0-00da12cca7d6b6ad1ae0a38a96923f771239de79.gz
darcs-hash:20060925220348-dae7b-303594fd2b999c93d2b816a9d3f11d0d97e391c2.gz
darcs-hash:20060925220323-dae7b-040ddfb0afc83c15cd4802fe26fe4822826a2e8a.gz
Save grid structure in the output files whenever the parameters are
saved, even when it is not a checkpoint file.
darcs-hash:20060925220235-dae7b-09e23cc6ec48e20df0b560356f19648a67e955dd.gz
darcs-hash:20060925220415-dae7b-2b72aef51ae3d8b056b18221acd9989668a2864d.gz
Update the filereader testsuite so that it passes again, also on multiple
processors, and add output of more norms for comparison.
The testsuite was broken because its norms output files had been created by
CactusBase/IOBasic. Temporarily going back to using that I/O thorn verified
that the testsuite still gave the same results.
After updating the parfile to use Carpet's scalar output method, both the
output filenames changed (hence the removal of old output files and the
addition of new ones) and their data contents - the latter because
CarpetIOScalar calls CCTK_Reduce() in global mode whereas CactusBase/IOBasic
calls it in level mode.
darcs-hash:20060915115726-776a0-dbf8ce75815a6e302b90dbd799c1e0e56274da43.gz
Doubled Driver::global_nsize in order to get the grids properly nested.
This requires an update of the checkpoint and all output files.
While both CarpetWaveToyRecover_test_[14]proc testsuites continue to work,
the CarpetWaveToyNewRecover_test_1proc testsuite is still broken, although
it should give exactly the same results as CarpetWaveToyRecover_test_1proc.
It remains to be investigated whether IOHDF5::use_reflevels_from_checkpoint
really works as expected.
darcs-hash:20060911164811-776a0-70a5d06de9506fa4ea68672ed5776c0b236546d0.gz
again
This patch partially undoes the patch 'CarpetIOHDF5: Add test case for grid
structure recovery' (recorded Mon May 8 21:46:21 CEST 2006) by
* renewing the checkpoint file (containing the same data as before the
  above patch, now with the grid structure added)
* fixing the 1D output files (by removing comment lines which are no longer
  output by CarpetIOASCII)
It leaves the CarpetWaveToyNewRecover_test_1proc testsuite unchanged,
i.e. broken as it was from the beginning.
darcs-hash:20060911160325-776a0-5339d1271436f39e6d9c2fc55e136bcbf9f5fe4e.gz
../exe/wave/hdf5toascii_slicer [--match <regex string>]
...
where
[--match <regex string>] selects HDF5 datasets by their names
matching a regex string using POSIX
Extended Regular Expression syntax
darcs-hash:20060901085302-776a0-3a9cfb71f9008b1a7bf93d9857195f92b67f1e25.gz
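The usage text above can be turned into a concrete invocation; this is a
sketch only (the HDF5 file name and the regex are made-up illustrations,
and it is assumed that the tool takes the input file as a trailing
argument, as the elided '...' suggests):

```
../exe/wave/hdf5toascii_slicer --match 'ADMBASE::gxx.*' wavetoy.h5
```

The pattern is interpreted using POSIX Extended Regular Expression syntax,
so 'ADMBASE::gxx.*' would select every dataset whose name starts with
'ADMBASE::gxx'.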
darcs-hash:20060831174548-776a0-bf352f210d8736f48a666654309f0ad3443cf3a3.gz
files
darcs-hash:20060828173429-776a0-b231124b73645983b8ed56efa35d5ec2c9354add.gz
darcs-hash:20060822160245-b0a3f-a52dbe5f5276995324eade2d0648af4e1388c964.gz
and unchunked HDF5 output means
darcs-hash:20060822153832-776a0-73ab7e8afc9dd02bfeea6269e7d21f2cdb0c7357.gz
darcs-hash:20060822144417-b0a3f-a1053a60b369e646b2dba3d0470cc4196d64de0e.gz
darcs-hash:20060822143002-b0a3f-d2a21d30f28261ad7248cf44a5928b2a38be9b68.gz
Clarify an ambiguity in the --help output: timestep selection is by
cctk_time value, not by cctk_iteration.
darcs-hash:20060822142353-b0a3f-e0ad1ea0e2f4eca7038fc2d77b0ff3d724b97aab.gz
Adds a missing #include <cmath> so that std::fabs() can be used.
darcs-hash:20060822114437-b0a3f-f85a976d02c36aa6311c7b6916aad107747954b2.gz
The new command line parameter option '--timestep <timestep>' allows users to
select an individual timestep in an HDF5 file.
darcs-hash:20060821200601-776a0-8e977e93014eded2d6f6cab376209ce7d4293b94.gz
While Intel C++ (9.0) had no problems compiling hdf5toascii_slicer.cc,
GNU C++ generated an error when assigning -424242424242 to a long int.
Also fixed some other things that g++ warns about.
darcs-hash:20060821094951-776a0-84d68b511de0bbd65b212755d699a415779c9674.gz
is used
darcs-hash:20060817173012-dae7b-857f1313b12144e1ba997d029990a0464bb36df5.gz
darcs-hash:20060814142824-776a0-1553d2404adc099fea75b546d4169f184dc5d3ab.gz
darcs-hash:20060814121029-776a0-279fb0650475bb7a3f53bc74124a42c951c2aa10.gz
darcs-hash:20060808143907-776a0-965984d0fe8688c51fe0a5d9bf8d1dfd51a3eb8a.gz
This utility program extracts 2D slices from CarpetIOHDF5 output files and
prints them to stdout in CarpetIOASCII format.
darcs-hash:20060808123300-776a0-4088993ec64ad291510977ee5c10d521863ef2ac.gz
Due to a bug in my previous patch, the logic for removing the checkpoint
file after successful recovery was inverted: the file was removed if
IO::recover_and_remove was set to "false".
This patch fixes the bug by reversing the logic.
Thanks to Ian Hinder for noticing this and providing the fix.
darcs-hash:20060626162548-776a0-8d3ebc0c43a74cb3faa892aa2a410e13bb37825e.gz
darcs-hash:20060613171412-dae7b-cf8a7c6112d6c364bd7f4e7568e0df2c683c01f3.gz
If IO::recover_and_remove is set, the recovery file will also be removed after
IO::checkpoint_keep successful checkpoints have been written.
darcs-hash:20060607102431-776a0-92edd93f6dc004ab824b237fbd03ee732f7a3841.gz
points (zero size)
darcs-hash:20060512115514-776a0-dba29d6e31a12d4cff6772e69bd1ef54e3aa2d8b.gz
datatype for H5Sselect_hyperslab() arguments
This patch lets you compile CarpetIOHDF5 also with HDF5-1.8.x (and future
versions).
darcs-hash:20060511172957-776a0-acbc1bd6b8d92223c0b52a43babf394c0ab9b0f4.gz
Check whether a group has storage only after checking whether it
should be output.
darcs-hash:20060511203215-dae7b-20604fda3117034cccf38998561b7e3bed1e6873.gz
darcs-hash:20060508194621-dae7b-3094a05a1414c3ba19a0661a03d78102417a918b.gz
Correct errors in the handling of the parameter
"use_grid_structure_from_checkpoint".
darcs-hash:20060508193609-dae7b-c5cf907171eb31e8298669cf4bd4aa03f2c79429.gz
Add a parameter "use_grid_structure_from_checkpoint" that reads the
grid structure from the checkpoint file, and sets up the Carpet grid
hierarchy accordingly.
The Carpet grid hierarchy is written unconditionally to all checkpoint
files.
darcs-hash:20060413202124-dae7b-f97e6aac2267ebc5f5e3867cbf78ca52bbd33016.gz
darcs-hash:20060413201901-dae7b-89acfa2956a3a82957ec6a4282a5051685ee5252.gz
When recovering from a checkpoint, each processor now continuously reads
through all chunked files until all grid variables on that processor have
been fully recovered. This should minimise the number of individual
checkpoint files that must be opened on each processor.
darcs-hash:20060217160928-776a0-28c076749861c0b26d1c41a6f4ef3bdb00c23274.gz
This patch introduces some optimisation for the case when recovering with the
same number of processors as used during the checkpoint: each processor
opens only its own chunked file and reads its metadata, skipping all others.
darcs-hash:20060212200032-776a0-3dd501d20b8efb66faa715b401038218bb388b4f.gz
While outputting dataset attributes, an HDF5 dataspace wasn't closed properly.
darcs-hash:20060212143008-776a0-41e46c61bce2dc22fbfc7093d2ad776bfae00687.gz
The scheduled routine CarpetIOHDF5_CloseFiles() was declared to return an
int and take no arguments. Instead it must be declared to take a
'const cGH* const' argument, and it should return void.
See http://www.cactuscode.org/old/pipermail/developers/2006-February/001656.html.
This patch also fixes a couple of g++ warnings about signed/unsigned
integer comparisons.
darcs-hash:20060209165534-776a0-24101ebd8c09cea0a9af04acc48f8e2aa2961e34.gz
The checkpoint/recovery testsuite now uses a tarball which no longer
contains the old I/O parameters.
All parameter files now use CarpetIOBasic and CarpetIOScalar as a
replacement for CactusBase/IOBasic. This modification also required an
update of various output files.
darcs-hash:20060206190059-776a0-1c88d51f696442a15fd4c3182af23f9c9a5d5048.gz
This patch finally closes the long-standing issue of keeping both old and
new CarpetIOHDF5 parameters around. The old parameters have now been
removed; from now on, only the new ones can be used.
If you still have old-style parameter files, you must convert them now.
The Perl script CarpetIOHDF5/srcutil/SubstituteDeprecatedParameters.pl
does that for you automatically.
darcs-hash:20060206184519-776a0-29d9d612e011dda4bf2b6054cee73546beae373a.gz
Accumulate any low-level errors returned by HDF5 library calls and check them
after writing a checkpoint. Do not remove an existing checkpoint if there were
any low-level errors in generating the previous one.
darcs-hash:20060206183846-776a0-549e715d7a3fceafe70678aaf1329052dce724bb.gz
When the filereader was used, CarpetIOHDF5 still checked for all grid
variables whether they had been read completely from a datafile, even for
those which weren't specified in the IO::filereader_ID_vars parameter.
darcs-hash:20060201174945-776a0-faa9fe295ef273ffd38308bbda7fde092503513c.gz
warnings about the use of deprecated I/O parameters
darcs-hash:20060127164814-776a0-89f59f04f6118191ba7a965cf72e3c6c548c817d.gz
The recovery code didn't properly recover grid functions with multiple
maps: all maps were initialised with the data from map 0.
This patch fixes the problem so that checkpointing/recovery should now
also work for multipatch applications.
The patch only affects the recovery code, meaning it will also work with
older checkpoint files.
darcs-hash:20060120164515-776a0-68f93cb5fb197f805beedfdc176fd8da9b7bfc49.gz
darcs-hash:20051119212845-dae7b-2ca10604822dbe488f621a9dcf039fe055c6d8dc.gz