Commit messages

compiler warnings

Rewrite padding infrastructure.
Add padded array extents to transport operator APIs.

also garbage collect HDF5 at each H5close

rather than datasets of extent [1,1,1]. This patch removes the separate
shape arrays and separate dataspaces for the datasets in index files and
creates the index file datasets with the same dataspace as the "heavy"
file datasets. It retains the h5shape attribute that was originally
introduced for index files even though it is now redundant (one could
instead call H5Sget_simple_extent_dims on the datasets in the index
files).

which happens to be the output method used for single-process runs.

Change the API to obtain a pointer to grid function data:
- Use a function "typed_data_pointer" instead of overloading the ()
  operator (because this looks nicer)
- Don't use a virtual function (because this isn't needed)
- Update all uses

Scanning the attributes of a large CarpetIOHDF5 output file, as is
necessary in the visitCarpetHDF5 plugin, can be very time consuming.
This commit adds support for writing an "index" HDF5 file at the same
time as the data file, controlled by the parameter
"CarpetIOHDF5::output_index". The index file is the same as the data
file except that it contains null datasets, and hence is very small.
The attributes can be read from this index file instead of the data
file, greatly improving performance. The datasets have size 1 in the
index file, so an additional attribute (h5space) is added to each
dataset to specify the correct dataset dimensions.
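A minimal parameter-file sketch enabling this feature (the parameter name is taken from the commit message above):

```
# Write a small "index" HDF5 file alongside each data file
CarpetIOHDF5::output_index = "yes"
```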

single mechanism provided by CarpetLib.
Use this mechanism everywhere.

for generalized grids.

Change sprintf to snprintf. Add assert statements.
darcs-hash:20080219052249-dae7b-abbdbb9df6c099cdd62ebaac135b654062659619.gz

Without this bugfix, whenever the aliased function
"Multipatch_MapIsCartesian" is provided, the corresponding write call
for this attribute fails because the attribute's scalar dataspace was
already closed.
darcs-hash:20080131151816-79e7e-3678879958c776fc1240e8b93f054732c64b62c5.gz

darcs-hash:20080130222154-dae7b-113029d4e40be633fca3253f5e2d47f656ae41ac.gz

Add an HDF5 attribute "MapisCartesian" specifying whether the
coordinate system is Cartesian. This attribute is added if there is a
thorn providing this information.
darcs-hash:20080128155820-dae7b-759cc1608ba8a7c9d34203b114ceb3836db0f64d.gz

darcs-hash:20071102224421-dae7b-79e89575a8e23c46fea47ef043d28ab7331da985.gz

darcs-hash:20071003194857-dae7b-d8bb68e9c4ee52559fea874b3f80a57eebf9650f.gz

Set up gdata object correctly before copying it between processors.
darcs-hash:20070501163757-dae7b-55cb3575d707f88806bff70f4cfc7153de879c1f.gz

darcs-hash:20070419021113-dae7b-baa8e7a012bddab40246f9485d5b3987fd7dc587.gz

Use "m" instead of "map" as local variable name.
Remove "Carpet::" qualifier in front of variable "maps".
darcs-hash:20070112224022-dae7b-0c5241b73c1f4a8ff4722e04bc70ed047d6158da.gz

Implement the variable-specific output request option
'compression_level' so that users can specify e.g.
  IOHDF5::compression_level = 1
  IOHDF5::out_vars = "admbase::metric
                      admconstraints::hamiltonian
                      admbase::lapse{ compression_level = 0 }"
to request HDF5 dataset compression for every output variable except
the lapse.
This modification also requires an update of thorn CactusBase/IOUtil.
darcs-hash:20061117132206-776a0-0e1d07a85cf206fa262a94fd0dd63c6f27e50fa2.gz

With the new steerable integer parameter IOHDF5::compression_level,
users can now request gzip dataset compression while writing HDF5
files: levels 1-9 select a specific compression rate; 0 (the default)
disables dataset compression.
darcs-hash:20061117115153-776a0-7aaead5d2a0216841a27e091fddb9b6c4f40eed4.gz
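A minimal parameter-file fragment; the level 4 here is an arbitrary illustration within the 1-9 range described above:

```
# gzip-compress all HDF5 output datasets; steerable at runtime
IOHDF5::compression_level = 4
```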

If no thorn provides this aliased function, CarpetIOHDF5 assumes that
no coordinate information is available.
darcs-hash:20061004144616-776a0-00da12cca7d6b6ad1ae0a38a96923f771239de79.gz

datatype for H5Sselect_hyperslab() arguments
This patch also lets you compile CarpetIOHDF5 with HDF5 1.8.x (and
future versions).
darcs-hash:20060511172957-776a0-acbc1bd6b8d92223c0b52a43babf394c0ab9b0f4.gz

While outputting dataset attributes, an HDF5 dataspace wasn't closed properly.
darcs-hash:20060212143008-776a0-41e46c61bce2dc22fbfc7093d2ad776bfae00687.gz

Accumulate any low-level errors returned by HDF5 library calls and
check them after writing a checkpoint. Do not remove an existing
checkpoint if there were any low-level errors in generating the
previous one.
darcs-hash:20060206183846-776a0-549e715d7a3fceafe70678aaf1329052dce724bb.gz

CarpetLib's comm_state class (actually, it's still just a struct) has
been extended to handle collective buffer communications for all
possible C datatypes at the same time. This makes it unnecessary for
the higher-level communication routines to loop over each individual
datatype separately.
darcs-hash:20050815150023-776a0-dddc1aca7ccaebae872f9f451b2c3595cd951fed.gz

darcs-hash:20050628113206-776a0-3ed3eae73dcc785de93273c16df556c3c6531de3.gz

Like CactusPUGHIO/IOHDF5, CarpetIOHDF5 now also provides parallel I/O
for data and checkpointing/recovery.
The I/O mode is set via IOUtil's parameters IO::out_mode and
IO::out_unchunked, with parallel output to chunked files (one per
processor) being the default.
The recovery and filereader interface can read any CarpetIOHDF5 data
file transparently, regardless of how it was created (serially or in
parallel, or on a different number of processors).
See the updated thorn documentation for details.
darcs-hash:20050624123924-776a0-5639aee9677f0362fc94c80c534b47fd1b07ae74.gz
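A hedged parameter-file sketch of the serial alternative to the chunked default; the "onefile" value is an assumption about IOUtil's out_mode keywords, so check the thorn documentation:

```
# Single unchunked output file instead of one chunked file per processor
IO::out_mode      = "onefile"
IO::out_unchunked = "yes"
```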

The parameter IO::checkpoint_keep is steerable at any time now (after
you've updated CactusBase/IOUtil/param.ccl) so that you can keep
specific checkpoints around. Please see the thorn documentation of
CactusBase/IOUtil for details.
darcs-hash:20050610091144-776a0-b5e90353851eb1d7871f16b05d1b47748599d27a.gz
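For example (the value 3 is an arbitrary illustration; see the CactusBase/IOUtil documentation for the parameter's exact semantics):

```
# Keep the three most recent checkpoints instead of only the last one
IO::checkpoint_keep = 3
```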

darcs-hash:20050606164745-891bb-bfdba5217c624e406550ea12d38969eed76c51ed.gz

The second argument to H5Sselect_hyperslab must be a 'const hsize_t
start[]' in the latest release 1.6.4. It used to be 'const hssize_t
start[]' in all previous releases.
darcs-hash:20050512101748-776a0-068b805f7e8c6399e96c38d8689d0e246b708cf9.gz

Add the unique simulation ID as attribute to each dataset.
darcs-hash:20050605221351-891bb-05a025dbdefc60c7dc476e4b7b50ff608bdacd61.gz

All processors open the checkpoint file and recover their portions
from it in parallel. No MPI communication is needed anymore.
darcs-hash:20050527124239-776a0-25d4fa77b50ea22fb2b25c87e399d95090c7eaf2.gz

The second argument to H5Sselect_hyperslab must be a 'const hsize_t
start[]' in the latest release 1.6.4. It used to be 'const hssize_t
start[]' in all previous releases.
darcs-hash:20050512101740-776a0-3581a3be23f057105585cf57b384a166f30aec29.gz

CarpetIOHDF5 used to output unchunked data only, i.e. all ghostzones
and boundary zones were cut off from the bboxes to be output.
This caused problems after recovery: uninitialized ghostzones led to
wrong results. The obvious solution, calling CCTK_SyncGroup() for all
groups after recovery, was also problematic because it (1) synchronised
only the current timelevel and (2) did boundary prolongation in a
scheduling order different from the regular order used during
checkpointing.
The solution implemented by this patch is to always write checkpoint
files in chunked mode (which includes all ghostzones and boundary
zones). This also makes synchronisation of all groups after recovery
unnecessary.
Regular HDF5 output files can also be written in chunked mode, but the
default (still) is unchunked. A new boolean parameter
IOHDF5::out_unchunked (with default value "yes") was introduced to
toggle this option.
Note that this parameter has the same meaning as IO::out_unchunked but
the opposite default value; this is the only reason why
IOHDF5::out_unchunked was introduced.
darcs-hash:20050412161430-776a0-d5efd21ecdbe41ad9a804014b816acad0cd71b2c.gz
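A minimal parameter-file fragment selecting chunked regular output (checkpoints are always chunked after this patch), based on the parameter described above:

```
# Write regular HDF5 output in chunked mode, including ghost zones
IOHDF5::out_unchunked = "no"
```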

Use the type CCTK_REAL instead of double for storing metadata in the
HDF5 files. This is necessary if CCTK_REAL has more precision than
double.
darcs-hash:20050411170627-891bb-374e4c2581155d825f9a1925b1d4319051bc36d6.gz

collective communication buffers
darcs-hash:20050331080034-776a0-629822f876800af1b76d5d43ca131f5373e991a4.gz

and checkpoint files
darcs-hash:20050214163413-776a0-77171dd6e4746b5d889bfcbe515c0d6f59c6ba10.gz

darcs-hash:20050207131924-891bb-0dbd85d6ac494fcac9aef96ad00c37025a4891e1.gz

After updating CactusBase/IOUtil, one can now choose the refinement
levels to be output for individual grid functions, simply by using an
options string, e.g.:
  IOHDF5::out_vars = "wavetoy::phi{refinement_levels = {1 2}}"
If no such option is given, output defaults to all refinement levels.
Note that the parsing routine (in IOUtil) does not check for invalid
refinement levels (>= max_refinement_levels).
darcs-hash:20050204181016-776a0-4d1d74a64c2869ffc4a16846146e1a0b7fd98638.gz

Change the way in which the grid hierarchy is stored. The new
hierarchy is
  map
    mglevel
      reflevel
        component
          timelevel
i.e., mglevel moved from the bottom to almost the top. This is because
mglevel used to be a true multigrid level, but is now meant to be a
convergence level.
Do not allocate all storage all the time. Allow storage to be switched
on and off per refinement level (and for a single mglevel, which
prompted the change above). Handle storage management with
CCTK_{In,De}creaseGroupStorage instead of
CCTK_{En,Dis}ableGroupStorage.
darcs-hash:20050201225827-891bb-eae3b6bd092ae8d6b5e49be84c6f09f0e882933e.gz

Turn most of the templates in CarpetLib, which used to have the form
  template<int D> class XXX
into classes, i.e., into something like
  class XXX
by setting D to the new global integer constant dim, which in turn is
set to 3.
The templates gf and data, which used to be of the form
  template<typename T, int D> class XXX
are now of the form
  template<typename T> class XXX
The templates vect, bbox, and bboxset remain templates.
This change simplifies the code somewhat.
darcs-hash:20050101182234-891bb-c3063528841f0d078b12cc506309ea27d8ce730d.gz

output "cctk_bbox" and "cctk_nghostzones" attributes for unchunked data
darcs-hash:20050110105925-776a0-610cbdb983ac67dcb5a28bf558cba4937d20fe60.gz

coordinate system associated with it
darcs-hash:20050103174917-776a0-29c425b306db7d85ff60d91496bd4db5895a0a0f.gz