From b395f6ae55057b23f8f9e5dc8648188d9ecfb2e9 Mon Sep 17 00:00:00 2001
From: tradke
Date: Mon, 8 Apr 2002 09:26:47 +0000
Subject: Fixed some typos and formatting things David found out.

git-svn-id: http://svn.cactuscode.org/arrangements/CactusPUGHIO/IOHDF5/trunk@113 4825ed28-b72c-4eae-9704-e50c059e567d
---
 doc/documentation.tex | 20 ++++++++------------
 1 file changed, 8 insertions(+), 12 deletions(-)

(limited to 'doc')

diff --git a/doc/documentation.tex b/doc/documentation.tex
index 9aeb647..a6b14d9 100644
--- a/doc/documentation.tex
+++ b/doc/documentation.tex
@@ -30,7 +30,7 @@ You obtain output by either
   IOHDF5::out_vars = "wavetoy::phi"
 \end{verbatim}
 \item calling one the flesh's I/O interface routines in your thorn's
-  code, eg. 
+  code, eg.
 \begin{verbatim}
   CCTK_OutputVarByMethod (cctkGH, "wavetoy::phi", "IOHDF5");
 \end{verbatim}
@@ -80,7 +80,7 @@ Thorn IOHDF5 can also be used for creating HDF5 checkpoint files and
 recovering from such files later on.\\
 Checkpoint routines are scheduled at several timebins so that you can save
-the current state of your simulation atfer the initial data phase,
+the current state of your simulation after the initial data phase,
 during evolution, or at termination.
 A recovery routine is registered with thorn IOUtil in order to restart
 a new simulation from a given HDF5 checkpoint.
@@ -120,9 +120,9 @@ template for building your own data converter program.\\
     pattern to determine the Cactus variable to restore, along with its
     timelevel. The iteration number is just informative and not needed
     here.
-  \item The type of your data as well as its dimensions are already 
+  \item The type of your data as well as its dimensions are already
    inherited by a dataset itself as metainformation. But this is not
-    enough for IOHDF5 to savely match it against a specific Cactus variable.
+    enough for IOHDF5 to safely match it against a specific Cactus variable.
     For that reason, the variable's groupname, its grouptype, and the total
     number of timelevels must be attached to every dataset as attribute
     information.
@@ -134,9 +134,9 @@ template for building your own data converter program.\\
     \item How many processors were used to produce the data ?
     \item How many I/O processors were used to write the data ?
   \end{itemize}
-  Such information is put into as attributes into a group named\\ 
+  Such information is put into as attributes into a group named
   {\tt "Global Attributes"}. Since we assume unchunked data here
-  the processor information isn't relevant -- unchunked data can
+  the processor information isn't relevant --- unchunked data can
   be fed back into a Cactus simulation running on an arbitrary number
   of processors.
 \end{enumerate}
@@ -171,16 +171,12 @@ some other utilities which can be build the same way:
   \item {\tt hdf5\_merge.c}\\
     Merges a list of HDF5 input files into a single HDF5 output file.
     This can be used to concatenate HDF5 output data created as one file per
-    timestep. 
+    timestep.
   \item {\tt hdf5\_extract.c}\\
     Extracts a given list of named objects (groups or datasets) from
    an HDF5 input file and writes them into a new HDF5 output file.
     This is the reverse operation to what {\tt hdf5\_merge.c} does.
     Useful eg. for extracting individual timesteps from a time series HDF5 datafile.
-  \item {\tt hdf5\_bitant\_to\_fullmode.c}\\
-    Converts all datasets in a given HDF5 file from bitant into full mode data.
-    This is accomplished by reflecting the data in z-direction with a given
-    stencil width.
 \end{itemize}
 %
 All utility programs are self-explaining -- just call them without arguments
@@ -189,7 +185,7 @@ to get a short usage info.
 If any of these utility programs is called without arguments it will print
 a usage message.
 %
-% Automatically created from the ccl files 
+% Automatically created from the ccl files
 % Do not worry for now.
 \include{interface}
 \include{param}
-- 
cgit v1.2.3
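
The checkpointing and recovery behaviour touched by the hunk at line 80 is driven entirely by parameter-file settings handled by thorns IOUtil and IOHDF5. The fragment below is a minimal illustrative sketch only: the parameter names (`IOHDF5::checkpoint`, `IO::checkpoint_every`, `IO::checkpoint_dir`, `IO::recover`, `IO::recover_dir`) are assumed from the IOUtil/IOHDF5 `param.ccl` files of that era and should be verified against the parameter reference that `\include{param}` pulls into this document.

```
# Illustrative Cactus parameter-file fragment (assumed parameter names --
# verify against the thorn's generated parameter documentation).
IOHDF5::checkpoint   = "yes"     # let IOHDF5 write HDF5 checkpoint files
IO::checkpoint_every = 100       # checkpoint every 100 iterations
IO::checkpoint_dir   = "chkpt"   # directory to write checkpoint files into

# To restart a new simulation from an existing HDF5 checkpoint:
IO::recover          = "auto"    # recover from the latest checkpoint found
IO::recover_dir      = "chkpt"   # directory searched for checkpoint files
```

With settings along these lines, the scheduled checkpoint routines fire after initial data, during evolution, and at termination, and the recovery routine registered with IOUtil restores the simulation state at startup.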