author    tradke <tradke@1c20744c-e24a-42ec-9533-f5004cb800e5>  2003-02-01 16:46:13 +0000
committer tradke <tradke@1c20744c-e24a-42ec-9533-f5004cb800e5>  2003-02-01 16:46:13 +0000
commit    d05788a57ae1d662dfde8de594d71129cc6ad025 (patch)
tree      1eb9acd61e874e0de63a28c6dd22bbb89dbbc43b
parent    8315ec47fde1410657d7f6bc4029d84774faf3e2 (diff)
Update PUGHInterp's documentation for the ThornGuide.
Now it thoroughly describes PUGHInterp's implementation for CCTK_InterpGridArrays() and only briefly mentions the old interpolation API's implementation. git-svn-id: http://svn.cactuscode.org/arrangements/CactusPUGH/PUGHInterp/trunk@48 1c20744c-e24a-42ec-9533-f5004cb800e5
-rw-r--r--  doc/documentation.tex  190
1 files changed, 173 insertions, 17 deletions
diff --git a/doc/documentation.tex b/doc/documentation.tex
index d2d0d8b..f776d8c 100644
--- a/doc/documentation.tex
+++ b/doc/documentation.tex
@@ -1,14 +1,17 @@
\documentclass{article}
% Use the Cactus ThornGuide style file
-% (Automatically used from Cactus distribution, if you have a
+% (Automatically used from Cactus distribution, if you have a
% thorn without the Cactus Flesh download this from the Cactus
% homepage at www.cactuscode.org)
\usepackage{../../../../doc/ThornGuide/cactus}
\begin{document}
-\title{PUGHInterp}
+\newcommand{\InterpGridArrays}{{\tt CCTK\_\-Interp\-Grid\-Arrays()}}
+\newcommand{\PUGHInterp}{{\it PUGH\-Interp}}
+
+\title{\PUGHInterp}
\author{Paul Walker, Thomas Radke, Erik Schnetter}
\date{$ $Date$ $}
@@ -18,24 +21,177 @@
% START CACTUS THORNGUIDE
\begin{abstract}
-Thorn PUGHInterp provides interpolation of arrays at arbitrary points.
+Thorn \PUGHInterp\ implements the Cactus interpolation API \InterpGridArrays\
+for the interpolation of CCTK grid arrays at arbitrary points.
\end{abstract}
-\section{Purpose}
-Thorn PUGHInterp implements interpolation operators for CCTK grid
-variables (grid functions and arrays distributed over all processors).%%%
-\footnote{%%%
- See the LocalInterp thorn for interpolation
- of processor-local arrays.
- }%%%
-
-The interpolation operators are invoked via the flesh interpolation API
-{\tt CCTK\_InterpGV()}. They are registered with the flesh under the
-name {\tt "uniform cartesian"} prepended by the interpolation order,
+\section{Introduction}
+Thorn \PUGHInterp\ provides an implementation of the Cactus interpolation API
+\InterpGridArrays\ for the interpolation of CCTK grid arrays at arbitrary
+points.
+
+This function interpolates a list of CCTK grid arrays (in a multiprocessor run
+these are generally distributed over processors) at a list of interpolation
+points. The grid topology and coordinates are implicitly specified via a Cactus
+coordinate system.
+The interpolation points may be anywhere in the global Cactus grid.
+In a multiprocessor run they may vary from processor to processor;
+each processor will get whatever interpolated data it asks for.
+
+The routine \InterpGridArrays\ does not do the actual interpolation
+itself but rather takes care of whatever interprocessor communication may be
+necessary, and -- for each processor's local patch of the domain-decomposed grid
+arrays -- calls {\tt CCTK\_InterpLocalUniform()} to invoke an external
+local interpolation operator (as identified by an interpolation handle).
+It is advantageous to interpolate a list of grid arrays at once (for the same
+list of interpolation points) rather than calling \InterpGridArrays\ several
+times with a single grid array. This way, not only can \PUGHInterp's
+implementation of \InterpGridArrays\ aggregate communications for multiple grid
+arrays into one (resulting in less communication overhead), but also
+{\tt CCTK\_InterpLocalUniform()} may compute interpolation coefficients once
+and reuse them for all grid arrays.
+
+Please refer to the {\it Cactus UsersGuide} for a complete function description
+of \InterpGridArrays\ and {\tt CCTK\_InterpLocalUniform()}.\\
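+
+As an illustration, here is a minimal sketch (in C) of a typical call from
+within a thorn routine (so that {\tt cctkGH} is available). The operator and
+coordinate system names, the table options, the grid variable
+{\tt wavetoy::phi}, and the caller-supplied variables {\tt coord\_x},
+{\tt coord\_y}, {\tt coord\_z}, {\tt phi\_at\_points}, and {\tt N\_points} are
+placeholders only; please consult the {\it Cactus UsersGuide} and the
+documentation of the local interpolation thorn for the authoritative prototype
+and operator names.
+\begin{verbatim}
+  #include "cctk.h"
+  #include "util_Table.h"
+
+  /* handles for the local interpolation operator, the options table,
+     and the coordinate system */
+  int operator_handle = CCTK_InterpHandle("Lagrange polynomial interpolation");
+  int table_handle    = Util_TableCreateFromString("order = 2");
+  int coord_handle    = CCTK_CoordSystemHandle("cart3d");
+
+  /* coordinates of the N_points interpolation points
+     (coord_x/y/z are caller-supplied CCTK_REAL arrays) */
+  const void *interp_coords[3] = { coord_x, coord_y, coord_z };
+
+  /* one input grid variable, one output array of length N_points */
+  CCTK_INT input_array_indices[1] = { CCTK_VarIndex("wavetoy::phi") };
+  CCTK_INT output_array_types[1]  = { CCTK_VARIABLE_REAL };
+  void *output_arrays[1]          = { phi_at_points };
+
+  int ierr = CCTK_InterpGridArrays(cctkGH, 3,
+                                   operator_handle, table_handle, coord_handle,
+                                   N_points, CCTK_VARIABLE_REAL, interp_coords,
+                                   1, input_array_indices,
+                                   1, output_array_types, output_arrays);
+\end{verbatim}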
+
+\PUGHInterp\ also implements the old Cactus interpolation API {\tt
+CCTK\_InterpGV()}. Application thorns should no longer use this deprecated API
+but rather switch to the new, more general API \InterpGridArrays.
+A brief description of \PUGHInterp's implementation of {\tt CCTK\_InterpGV()} is
+given in section \ref{PUGHInterp_old_API}.
+
+
+\section{\PUGHInterp's Implementation of \InterpGridArrays}
+
+If thorn \PUGHInterp\ is activated in the {\tt ActiveThorns} list of a
+parameter file for a Cactus run, it overloads the flesh-provided dummy function
+for \InterpGridArrays\ with its own routine at startup. This routine is then
+invoked in all subsequent calls to \InterpGridArrays.
+
+\PUGHInterp's routine for the interpolation of grid arrays provides exactly the
+same semantics as \InterpGridArrays, which is thoroughly described in the
+{\it Function Reference} chapter of the {\it Cactus UsersGuide}.
+In the following, only user-relevant details about its implementation, such as
+specific error codes and the evaluation of parameter options table entries, are
+explained.
+
+\subsection{Implementation Notes}
+
+First, \InterpGridArrays\ checks its function arguments for invalid values
+passed by the caller. In case of an error, the routine will issue an error
+message and return with an error code of either {\tt UTIL\_\-ERROR\_\-BAD\_\-HANDLE} for
+an invalid coordinate system and/or parameter options table, or
+{\tt UTIL\_ERROR\_BAD\_INPUT} otherwise.
+Currently there is the restriction that only {\tt CCTK\_VARIABLE\_REAL} is
+accepted as the CCTK data type for the interpolation point coordinates.
+
+Then the parameter options table is parsed and evaluated for additional
+information about the interpolation call (see section \ref{PUGHInterp_PTable}
+for details).
+
+In the single-processor case, \InterpGridArrays\ now invokes the local
+interpolation operator (as specified by its handle) via a call to
+{\tt CCTK\_InterpLocalUniform()} to perform the actual interpolation. The return
+code from this call is then passed back to the caller.
+
+For the multi-processor case, \PUGHInterp\ does a query call to the local
+interpolator first to find out whether it can deal with the number of
+interprocessor ghostzones available. For that purpose it sets up an array of
+two interpolation points which denote the extremes of the physical coordinates
+on a processor: the lower-left and upper-right point of the processor-local
+grid's bounding box\footnote{
+Note that because the query is done with these extreme interpolation point
+coordinates, the interpolation call may fail even if all the user-supplied
+interpolation points are well within each processor's local patch.
+The reason for this implementation behaviour is that we want to reliably catch
+all errors caused by a ghostzone size that is too small.}.
+The query gets passed the same user-supplied function arguments as for the
+real interpolation call, apart from the interpolation point coordinates (which
+now describe a processor's physical bounding box coordinates) and the output
+array pointers (which are all set to NULL in order to indicate that this is a
+query call only). Any error code returned by the local interpolator for this
+query (e.g.\ {\tt CCTK\_ERROR\_INTERP\_POINT\_OUTSIDE} if the local interpolator
+potentially requires values from grid points which are outside of the available
+processor-local patch of the global grid) causes \InterpGridArrays\ to return
+immediately with that error code.
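+
+As a hedged illustration (continuing the sketch from the Introduction, where
+{\tt ierr} holds the return code of \InterpGridArrays), a caller might react to
+such a failure as follows:
+\begin{verbatim}
+  if (ierr == CCTK_ERROR_INTERP_POINT_OUTSIDE)
+  {
+    /* the local interpolator would need grid points outside the
+       processor-local patch; most likely the ghostzone size is too
+       small for the requested interpolation order */
+    CCTK_WARN(1, "interpolation failed: too few ghostzones for the "
+                 "requested interpolation order");
+  }
+  else if (ierr < 0)
+  {
+    CCTK_WARN(1, "CCTK_InterpGridArrays() returned an error");
+  }
+\end{verbatim}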
+
+Otherwise the \InterpGridArrays\ routine will continue and map the user-supplied
+interpolation points onto the processors which own these points. In a subsequent
+global communication all processors receive ``their'' interpolation point
+coordinates and call {\tt CCTK\_InterpLocalUniform()} with those. The
+interpolation results are then sent back to the processors which originally
+requested the interpolation points.
+
+Like the {\it PUGH} driver thorn, \PUGHInterp\ uses MPI for the necessary
+interprocessor communication. Note that the {\tt MPI\_Alltoall()} and
+{\tt MPI\_Alltoallv()} calls for the distribution of interpolation point
+coordinates to their owning processors and
+the back transfer of the interpolation results to the requesting processors
+are collective communication operations. So in the multi-processor case you
+{\em must\/} call \InterpGridArrays\ in parallel on {\em each\/} processor
+(even if a processor does not request any points to interpolate); otherwise
+the program will run into a deadlock.
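+
+As a hedged illustration (reusing the handles and arrays from the sketch in
+the Introduction, which are assumed to still be in scope), a processor without
+any interpolation requests of its own would still take part in the collective
+call, simply passing a zero point count:
+\begin{verbatim}
+  /* this processor requests no points, but must still join the
+     collective call; it passes a point count of zero */
+  ierr = CCTK_InterpGridArrays(cctkGH, 3,
+                               operator_handle, table_handle, coord_handle,
+                               0, CCTK_VARIABLE_REAL, interp_coords,
+                               1, input_array_indices,
+                               1, output_array_types, output_arrays);
+\end{verbatim}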
+
+
+\subsection{Passing Additional Information via the Parameter Table}
+\label{PUGHInterp_PTable}
+
+One of the function arguments to \InterpGridArrays\ is an integer handle
+which refers to a key/value options table. Such a table can be used to pass
+additional information (such as the interpolation order) to the interpolation
+routines (i.e.\ to both \InterpGridArrays\ and the local interpolator as invoked
+via {\tt CCTK\_InterpLocalUniform()}). The table may also be modified by these
+routines, e.g.\ to exchange internal information between the local and global
+interpolator, and/or to pass back arbitrary information to the user.
+
+The only table option currently evaluated by \PUGHInterp's implementation
+of \InterpGridArrays\ is:
+\begin{verbatim}
+ CCTK_INT input_array_time_levels[N_input_arrays];
+\end{verbatim}
+which lets you choose the timelevels for the individual grid arrays to
+interpolate. If no such table option is given, then the current timelevel (0)
+will be taken as the default.
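+
+For example, a hedged sketch (assuming the {\tt table\_handle} from the sketch
+in the Introduction) of requesting the previous timelevel for the second of two
+input grid arrays:
+\begin{verbatim}
+  CCTK_INT timelevels[2] = { 0, 1 };   /* current and previous timelevel */
+
+  Util_TableSetIntArray(table_handle, 2, timelevels,
+                        "input_array_time_levels");
+\end{verbatim}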
+
+The following table options are meant for the user to specify how the
+local interpolator should deal with interpolation points near grid boundaries:
+\begin{verbatim}
+ CCTK_INT N_boundary_points_to_omit[2 * N_dims];
+ CCTK_REAL boundary_off_centering_tolerance[2 * N_dims];
+ CCTK_REAL boundary_extrapolation_tolerance[2 * N_dims];
+\end{verbatim}
+In the multi-processor case, \InterpGridArrays\ will modify these arrays in
+a user-supplied options table in order to specify the handling of interpolation
+points near interprocessor boundaries (ghostzones) for the local interpolator;
+corresponding elements in the options arrays are set to zero for all ghostzone
+faces, i.e.\ no points should be omitted, and no off-centering or extrapolation
+is allowed at those boundaries. Array elements for physical grid boundaries are
+left unchanged by \InterpGridArrays.
+
+If any of the above three boundary handling table options is missing in the
+user-supplied table, \InterpGridArrays\ will create and add it to the table
+with appropriate defaults. For the default values, as well as a comprehensive
+discussion of grid boundary handling options, please refer to the documentation
+of the thorn(s) providing local interpolator(s) (e.g.\ thorn {\it LocalInterp} in
+the {\it Cactus ThornGuide}).
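+
+As a hedged example (a sketch only; the actual defaults are defined by the
+thorn providing the local interpolator), a caller of a 3-D interpolation could
+preset these options in its table (again assuming the {\tt table\_handle} from
+the sketch in the Introduction) to forbid any omission, off-centering, or
+extrapolation at the physical grid boundaries as well:
+\begin{verbatim}
+  CCTK_INT  omit[6]        = { 0, 0, 0, 0, 0, 0 };
+  CCTK_REAL off_center[6]  = { 0.0, 0.0, 0.0, 0.0, 0.0, 0.0 };
+  CCTK_REAL extrapolate[6] = { 0.0, 0.0, 0.0, 0.0, 0.0, 0.0 };
+
+  Util_TableSetIntArray (table_handle, 6, omit,
+                         "N_boundary_points_to_omit");
+  Util_TableSetRealArray(table_handle, 6, off_center,
+                         "boundary_off_centering_tolerance");
+  Util_TableSetRealArray(table_handle, 6, extrapolate,
+                         "boundary_extrapolation_tolerance");
+\end{verbatim}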
+
+
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+
+\section{\PUGHInterp\ and the old Cactus Interpolation API}
+\label{PUGHInterp_old_API}
+
+Currently, \PUGHInterp\ also implements the old Cactus interpolation API
+for grid arrays {\tt CCTK\_InterpGV()}.
+For that purpose, it registers different {\tt CCTK\_InterpGV()} operators
+under the name {\tt "uniform cartesian"} prepended by the interpolation order,
e.g.\ {\tt "second-order uniform cartesian"}. Currently
first-, second-, and third-order interpolation are implemented.
-\section{Implementation Notes}
+Each operator takes care of the necessary interprocessor communication, and
+also does the actual interpolation itself (as opposed to the new API
+\InterpGridArrays\ where an external local interpolator is invoked).
+
+\subsection{Implementation Notes}
The interpolation operators registered for different orders are mapped
via wrappers (in {\tt Startup.c}) onto a single routine (in {\tt Operator.c})
just passing the order as an additional argument.
@@ -43,7 +199,7 @@ just passing the order as an additional argument.
The routine for distributed arrays will then map all points to be interpolated
to the processors which own those points, and communicate the points'
coordinates and corresponding input arrays ({\tt MPI\_Alltoall()} is used
-for this).
+for this global communication).
Then the interpolation takes place in parallel on every processor, calling
a core interpolation routine (located in {\tt Interpolate.c}). This one
@@ -52,7 +208,7 @@ list of output arrays (one output value per interpolation point).
Again, for distributed arrays, the interpolation results for remote points
are sent back to the requesting processors.
-\section{Implementation Restrictions}
+\subsection{Implementation Restrictions}
Current limitations of the core interpolation routine's implementation are:
\begin{itemize}
\item only arrays of up to three ({\bf MAXDIM}) dimensions can be handled