| Commit message | Author | Age |

CarpetIOHDF5: check whether map exists before accessing it

this allows data files from multipatch runs to be read in with the file
reader into cartesian runs if only the inner cartesian patch is
required.
The last commit to CarpetIOHDF5 broke this.
This commit also updates the test suite data so that it actually tests
the file format. This commit also adds a level 2 warning if no grid
structure is found in the file.
storage size is literally the amount of space used on disk, so if e.g.
compression is used, this is much smaller than the amount of space
required to hold the data in memory.
Also change the type of the data read in to match the memory dataspace
rather than the dataspace on disk. This way HDF5 actually converts from
the on-disk representation to the in-memory one.
this allows the reader to read a dataset into a different variable than
the one it was written from, e.g. GRHydro::dens into PPAnalysis::dens.
this means BrowseDataSets is only called from one location
out_group_separator chooses the string by which thorn name and group name are separated in file names. The default is "::" for backward compatibility. This parameter only affects output where CarpetIO*::one_file_per_group is set; otherwise, the thorn name does not appear in the file name.
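A minimal parameter-file fragment illustrating the parameter described above (the thorn prefix and group name are illustrative assumptions, not taken from the commit):

```
# assumed setup: one file per group, with "-" chosen as the separator
CarpetIOHDF5::one_file_per_group = yes
CarpetIOHDF5::out_group_separator = "-"
# output files would then be named like admbase-metric.h5
# instead of the backward-compatible admbase::metric.h5
```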
Map CCTK_COMPLEX to "double complex" in C, and "complex<double>" in
C++. (It is already mapped to "double complex" in Fortran.)
Update type definitions.
Re-implement Cactus complex number math functions by calling the
respective C functions.
Update thorns that access real and imaginary parts of complex numbers
to use standard-conforming methods instead.
compiler warnings
Index HDF5 datafiles were not handled correctly in the case of sliced
data. The files were created and initialized correctly at the first
iteration, but every subsequent access failed with an HDF5 error.
Patch by David Radice.
also commented
Rewrite padding infrastructure.
Add padded array extents to transport operator APIs.
only check datasets whose variables were actually requested to be read
also garbage collect HDF5 at each H5close
rather than datasets of extent [1,1,1]. It so happens that this patch removes
the separate shape arrays and separate dataspaces for the datasets in index
files and creates the index file datasets with the same dataspace as the
"heavy" file datasets. It retains the h5shape attribute that was originally
introduced for index files even though it is now redundant (since one could
call H5Sget_simple_extent_dims on the datasets in the index files).
garbage collect HDF5 objects when closing files
We try index files when opening the first file; if this does not
succeed, we do not try opening index files again. This reduces the
number of file system accesses when no index files are present.
dataset
which happens to be the output method used for single process runs.
open_one_input_file_at_a_time
with open_one_input_file_at_a_time
to checkpoint when cctk_iteration % checkpoint_every_divisor == 0 rather
than whenever checkpoint_every iterations have passed since the last
checkpoint
once per variable, refinement level, and time level, that is. So still
about 3000 warnings in a typical simulation.
Move MPI support from flesh to thorn ExternalLibraries/MPI. This also
requires thorns that call MPI directly to declare this in their
configuration.ccl. Existing configurations using MPI need to include
ExternalLibraries/MPI into their thorn list.
Right now there is unfortunately no facility to actually use this test.
NOTE: this assumes (like other parts of CarpetIOHDF5) that the number of
symmetry points is the number of ghost points.
NOTE: it likely outputs too many points when RotatingSymmetry is used and only
buffer points touch the symmetry boundary.
to disk
There are still systems with only version 1.6 installed, or broken installs
of version 1.8 (Debian system packages at least up to squeeze). These
systems benefit from an hdf5_recombiner that compiles, while version 1.8 is
not really required here.
Change the API to obtain a pointer to grid function data:
- Use a function "typed_data_pointer" instead of overloading the ()
operator (because this looks nicer)
- Don't use a virtual function (because this isn't needed)
- Update all uses
computing coordinates of points
this is the same issue (just seen from the other side of the output) as in
"CarpetIOHDF5: Correct iorigin attribute for 2D output", namely that iorigin
is stored in multiples of the stride for the given refinement level
---
Carpet/CarpetIOHDF5/src/util/hdf5toascii_slicer.cc | 6 +++---
1 files changed, 3 insertions(+), 3 deletions(-)
file descriptors
---
Carpet/CarpetIOHDF5/src/util/hdf5_slicer.cc | 13 +++++--------
1 files changed, 5 insertions(+), 8 deletions(-)
is used by VisIt
|
Introduce a new API to checkpoint only a subset of groups, via an
aliased function IO_SetCheckpointGroups. This can be used for
simulation spawning, i.e. off-loading certain calculations (e.g.
analysis) outside of the main simulation.