| Commit message | Author | Age |
There are still systems with only version 1.6 installed, or with broken
installs of version 1.8 (Debian system packages at least up to squeeze).
These systems benefit from hdf5_recombiner compiling at all, since
version 1.8 is not really required here.
Change the API used to obtain a pointer to grid function data:
- Use a function "typed_data_pointer" instead of overloading the ()
  operator (because this reads more clearly)
- Don't use a virtual function (because it isn't needed)
- Update all uses
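The described shape of the accessor could look roughly like this; only the name "typed_data_pointer" comes from the commit message, and the class and its storage are assumptions for illustration:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical stand-in for a grid-function data holder.
class gdata_sketch {
  std::vector<double> storage_;   // stand-in for the real grid data
public:
  explicit gdata_sketch(std::size_t n) : storage_(n, 0.0) {}

  // A plain (non-virtual) named accessor replacing an overloaded
  // operator(): returns the underlying buffer as a typed pointer.
  template <typename T>
  T* typed_data_pointer() {
    return reinterpret_cast<T*>(storage_.data());
  }
};
```

A named function also makes call sites self-describing, which an overloaded () does not.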
computing coordinates of points
this is the same issue (just seen from the other side of the output) as in
"CarpetIOHDF5: Correct iorigin attribute for 2D output", namely that iorigin
is stored in multiples of the stride for the given refinement level
---
Carpet/CarpetIOHDF5/src/util/hdf5toascii_slicer.cc | 6 +++---
1 files changed, 3 insertions(+), 3 deletions(-)
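Under that reading, converting between the stored attribute and base-grid index units is a multiplication or division by the level stride; a small arithmetic sketch (function names are hypothetical):

```cpp
#include <cassert>

// Assumed unit convention from the message above: "iorigin" counts in
// multiples of the refinement level's stride, so the origin in
// base-grid index units is iorigin * stride, and vice versa.
int iorigin_to_global(int iorigin, int stride) { return iorigin * stride; }
int global_to_iorigin(int global, int stride) { return global / stride; }
```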
file descriptors
---
Carpet/CarpetIOHDF5/src/util/hdf5_slicer.cc | 13 +++++--------
1 files changed, 5 insertions(+), 8 deletions(-)
is used by VisIt
Introduce a new API to checkpoint only a subset of groups, via an
aliased function IO_SetCheckpointGroups. This can be used for
simulation spawning, i.e. off-loading certain calculations (e.g.
analysis) outside of the main simulation.
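A minimal sketch of the group-subset idea, assuming a stored set of group names and an "empty set means all groups" default; these helpers are illustrations, not the real IOUtil API:

```cpp
#include <cassert>
#include <set>
#include <string>
#include <vector>

// Hypothetical storage for the subset of groups to checkpoint.
static std::set<std::string> checkpoint_groups;

void set_checkpoint_groups(const std::vector<std::string>& groups) {
  checkpoint_groups = std::set<std::string>(groups.begin(), groups.end());
}

bool should_checkpoint(const std::string& group) {
  // Empty set means "checkpoint everything" (assumed default).
  return checkpoint_groups.empty() || checkpoint_groups.count(group) > 0;
}
```

A spawned analysis run would then restrict checkpoints to just the groups its calculations need.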
IOUtil_DefaultIORequest returns a fresh copy of the default IO request,
rather than just a pointer to it. The copy has to be freed afterwards.
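The caller-frees contract could be sketched like this; "io_request_t" and both function names are placeholders, not the real IOUtil declarations:

```cpp
#include <cassert>
#include <cstdlib>
#include <cstring>

// Hypothetical request structure and shared default.
struct io_request_t { int out_every; int refinement_levels; };

static const io_request_t default_request = { 1, -1 };

// Returns a heap-allocated copy of the default; mutating the copy
// leaves the shared default untouched.
io_request_t* default_io_request_copy(void) {
  io_request_t* req =
      static_cast<io_request_t*>(std::malloc(sizeof *req));
  std::memcpy(req, &default_request, sizeof *req);
  return req;   // caller owns the copy and must free() it
}
```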
current request does not exist for one_file_per_group output
this is re #410:
--8<-- by Erik Schnetter --8<--
The corresponding code in CarpetIOHDF5.cc, which outputs data that are not
slices, uses the same algorithm. However, it contains an additional check
ensuring that a default request is used if the corresponding request does not
exist. Look for calls to IOUtil_DefaultIORequest to find this code. I believe
that an equivalent logic would correct this problem in OutputSlice.cc.
--8<-- by Erik Schnetter --8<--
---
Carpet/CarpetIOASCII/doc/documentation.tex | 2 +-
Carpet/CarpetIOHDF5/doc/documentation.tex | 2 +-
Carpet/CarpetInterp/doc/documentation.tex | 2 +-
Carpet/CarpetReduce/doc/documentation.tex | 2 +-
4 files changed, 4 insertions(+), 4 deletions(-)
Currently CarpetIOHDF5 uses the C compiler options with the C++ compiler
to build the utilities. This can lead to warnings about flags that are
supported only by C, not by C++.
This patch lets it use the C++ compiler options instead.
CarpetIOHDF5 already prints the iteration and time for periodic
checkpoints. This patch adds the same output for initial-data
checkpoints (also after restart) and for termination checkpoints.
map number
This hopefully fixes ticket #446.
* added new cp/recovery par files for cell-centered case
Introduce a new parameter skip_recover_variables that skips recovery
for a given set of variables.
Specifically, remove any hierarchy information that has been added to
the name of timers, as well as any code for creating timers
dynamically, as these are now unnecessary. Additionally, time some
previously-untimed parts of the code and make timer names in some
places more consistent.
Use hg::baseextent instead of Carpet::maxspacereflevelfact to
determine the stride of a refinement level, because this works
independently of the stride on the finest level.
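Assuming the usual convention that each finer level divides the coarsest level's stride by the refinement factor, the stride of a level depends only on base-grid information; a sketch of that arithmetic (names are hypothetical):

```cpp
#include <cassert>

// Assumption: level 0 carries the base stride, and every finer level
// divides it by the spatial refinement factor, independent of how many
// levels exist below the one being queried.
int level_stride(int base_stride, int reffact, int level) {
  int stride = base_stride;
  for (int l = 0; l < level; ++l) stride /= reffact;
  return stride;
}
```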
in an H5E_BEGIN_TRY / H5E_END_TRY environment. This suppresses the
spurious HDF5 error messages those calls would otherwise print.
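H5E_BEGIN_TRY / H5E_END_TRY save the HDF5 error auto-reporting state, disable it around the enclosed calls, and restore it afterwards. A standalone sketch of that save/disable/restore pattern, using a plain flag instead of the real HDF5 error stack:

```cpp
#include <cassert>

// Stand-in for HDF5's automatic error printing state.
static bool error_printing_enabled = true;

struct suppress_errors {             // RAII analogue of the macro pair
  bool saved;
  suppress_errors() : saved(error_printing_enabled) {
    error_printing_enabled = false;  // like H5E_BEGIN_TRY
  }
  ~suppress_errors() {
    error_printing_enabled = saved;  // like H5E_END_TRY
  }
};
```

An RAII guard also restores the state on early returns, which the macro pair requires careful placement to achieve.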
rather than always occurring together with the old-style 3D output.
Right now there is no support for HDF5 files in test suites, so users will
have to manually use 'h5diff -v' and 'h5dump -r' to compare and test the
output when changes are made to the code.
To compute the hyperslab one needs to base its location on the extent of
the data in memory, not on the extent that we want to output.
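Under that description, the slab offset is measured from the lower bound of the data in memory; a minimal arithmetic sketch per dimension (the function name is hypothetical):

```cpp
#include <cassert>

// The offset of the hyperslab inside the in-memory array: distance from
// the memory lower bound (not the output lower bound), in grid points
// of the level, i.e. divided by the stride.
int hyperslab_offset(int output_lower, int memory_lower, int stride) {
  return (output_lower - memory_lower) / stride;
}
```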
* add some attributes to the metadata to match what the old code wrote
* only tag object names with components if there is more than one
This is implemented by re-computing the allactive bboxset that dh::regrid
computes and outputting only the part of each component that intersects
allactive. This changes the number of components, since the intersection
might not be a rectangular box.
Add code to honor IO::ioproc_every to output data unchunked (for 3D
output only).
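One common grouping convention for IO::ioproc_every (assumed here, not verified against the Carpet sources) is that processors form blocks of ioproc_every, and the first processor of each block collects and writes the data for its block:

```cpp
#include <cassert>

// Hypothetical helper: which processor performs I/O for "myproc" when
// every ioproc_every-th processor writes unchunked data.
int io_processor(int myproc, int ioproc_every) {
  return (myproc / ioproc_every) * ioproc_every;
}
```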
Register the new-style output code as an IO method with IOUtil for 3D output.
The new code is not quite as capable as the old code, since it does not
include Ian Hinder's indexing facility and so far outputs all data on the
root processor. It does, however, support the new-style
output_symmetry_points etc. options.
Recovery happens level-by-level in Cactus. When recovering the
refinement level times and the global time, set them correctly
according to the current refinement level.
Scanning the attributes of a large CarpetIOHDF5 output file, as is
necessary in the visitCarpetHDF5 plugin, can be very time consuming.
This commit adds support for writing an "index" HDF5 file at the same
time as the data file, conditional on a parameter
"CarpetIOHDF5::output_index". The index file is the same as the data
file except it contains null datasets, and hence is very small. The
attributes can be read from this index file instead of the data file,
greatly increasing performance. The datasets will have size 1 in the
index file, so an additional attribute (h5space) is added to the
dataset to specify the correct dataset dimensions.
Initialise the times of all time levels of grid arrays while
recovering.
When checkpoints of initial data are disabled but termination
checkpoints are enabled, do checkpoint the initial data.
Store the current Cactus time (and not a fake Carpet time) in the th
"time hierarchy". This removes the now redundant "leveltimes" data
structure in Carpet.

Add past time levels to th, so that it can store the time for past
time levels instead of assuming the time step size is constant. This
allows changing the time step size during evolution.

Share the time hierarchy between all maps, instead of having one time
hierarchy per map.

Simplify the time level cycling and time stepping code used during
evolution.

Improve the structure of the code that loops over time levels for
certain schedule bins. Introduce a new Carpet variable "timelevel",
similar to "reflevel".

This also makes it possible to avoid time interpolation for the past
time levels during regridding. The past time levels of the fine grid
then remain aligned (in time) with the past time levels of the coarse
grid. This is controlled by a new parameter
"time_interpolation_during_regridding", which defaults to "yes" for
backwards compatibility.

Simplify the three-time-level initialisation. Instead of initialising
all three time levels by taking altogether three time steps (forwards
and backwards), initialise only one past time level by taking one time
step backwards. The remaining time level is initialised during the
first time step of the evolution, which begins by cycling time levels
and thus drops the non-initialised last time level anyway.

Update Carpet and the mode handling correspondingly.
Update the CarpetIOHDF5 checkpoint format correspondingly.
Update CarpetInterp, CarpetReduce, and CarpetRegrid2 correspondingly.
Update CarpetJacobi and CarpetMG correspondingly.
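The per-(refinement level, time level) storage described above might be sketched as follows; the class layout and names are assumptions, not Carpet's actual th implementation:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Store an explicit time for every (reflevel, timelevel) pair instead
// of deriving past times from a constant step size; shared by all maps.
class time_hierarchy {
  std::vector<std::vector<double>> times_;  // [reflevel][timelevel]
public:
  time_hierarchy(int reflevels, int timelevels)
    : times_(reflevels, std::vector<double>(timelevels, 0.0)) {}

  double get(int rl, int tl) const { return times_[rl][tl]; }
  void set(int rl, int tl, double t) { times_[rl][tl] = t; }

  // Cycling shifts times toward the past, dropping the oldest level,
  // and records the new current time at time level 0.
  void cycle(int rl, double new_time) {
    std::vector<double>& ts = times_[rl];
    for (std::size_t i = ts.size() - 1; i > 0; --i) ts[i] = ts[i - 1];
    ts[0] = new_time;
  }
};
```

Because each past level carries its own time, the time step size may change between cycles without invalidating the stored history.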