From 7efd66162332aaa0c68bdd8c8e07b1d969bc9040 Mon Sep 17 00:00:00 2001
From: tradke
Date: Wed, 29 May 2002 13:37:03 +0000
Subject: Updated saying that the local interpolation stuff has been cloned in
 thorn LocalInterp.

git-svn-id: http://svn.cactuscode.org/arrangements/CactusPUGH/PUGHInterp/trunk@26 1c20744c-e24a-42ec-9533-f5004cb800e5
---
 doc/documentation.tex | 43 +++++++++++++++++++++++--------------------
 1 file changed, 23 insertions(+), 20 deletions(-)

diff --git a/doc/documentation.tex b/doc/documentation.tex
index b59a7a3..75033d2 100644
--- a/doc/documentation.tex
+++ b/doc/documentation.tex
@@ -11,34 +11,38 @@ at arbitrary points.}
 
 \section{Purpose}
 
-Thorn PUGHInterp implements interpolation operators working on regular,
-uniform cartesian grids. They can be applied to interpolate both CCTK grid
-variables (grid functions and arrays distributed over all processors), and
-processor-local arrays at arbitrary points.\\
-The operators for distributed/local arrays are invoked via the flesh
-interpolation API routines {\tt CCTK\_InterpGV()} and {\tt CCTK\_InterpLocal()}
-resp. They are registered with the flesh under the name {\it ''uniform
-cartesian''} prepended by the interpolation order (eg. {\it ''second-order
-uniform cartesian''}). Currently there is first, second, and third-order
-interpolation implemented.\\
+Thorn PUGHInterp implements interpolation operators for CCTK grid
+variables (grid functions and arrays distributed over all processors).%%%
+\footnote{%%%
+   See the LocalInterp thorn for interpolation
+   of processor-local arrays.
+  }%%%
+
+The interpolation operators are invoked via the flesh interpolation API
+{\tt CCTK\_InterpGV()}. They are registered with the flesh under the
+name {\tt "uniform cartesian"}, prefixed with the interpolation order
+(e.g.\ {\tt "second-order uniform cartesian"}). Currently, first-,
+second-, and third-order interpolation is implemented.
 
 \section{Implementation Notes}
 
 The interpolation operators registered for different orders are mapped via
 wrappers (in {\tt Startup.c}) onto a single routine (in {\tt Operator.c})
-just passing the order as an additional argument.\\
+which is simply passed the order as an additional argument.
+
 The routine for distributed arrays then maps each interpolation point
 to the processor which owns it, and communicates the points'
 coordinates and corresponding input arrays ({\tt MPI\_Alltoall()} is used
-for this).\\
+for this).
+
 Then the interpolation takes place in parallel on every processor, calling
 a core interpolation routine (located in {\tt Interpolate.c}).
 This routine takes a list of input arrays and points and interpolates these
-list of output arrays (one output value per interpolation point).\\
+to a list of output arrays (one output value per interpolation point).
 Again, for distributed arrays, the interpolation results for remote points
-are sent back to the requesting processors.\\[2ex]
-%
+are sent back to the requesting processors.
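+
+The following schematic C code illustrates this communication pattern
+with {\tt MPI\_Alltoall()} and {\tt MPI\_Alltoallv()}.  It is a sketch
+only, not the thorn's actual code: the routine name
+{\tt exchange\_points()} and all variable names are invented for
+illustration, and error checking is omitted.
+\begin{verbatim}
+#include <stdlib.h>
+#include <mpi.h>
+
+/* Exchange interpolation-point coordinates so that each processor
+   ends up with the points lying on its own part of the grid.
+   nsend[i] is the number of points this processor needs from
+   processor i; coords holds their (x,y,z) coordinates, grouped by
+   owning processor.                                                */
+void exchange_points (MPI_Comm comm, int nprocs,
+                      const int *nsend, const double *coords,
+                      double **recv_coords, int *nrecv_total)
+{
+  int i;
+  int *nrecv  = malloc (nprocs * sizeof (int));
+  int *scount = malloc (nprocs * sizeof (int)),
+      *rcount = malloc (nprocs * sizeof (int));
+  int *sdispl = malloc (nprocs * sizeof (int)),
+      *rdispl = malloc (nprocs * sizeof (int));
+
+  /* step 1: tell each processor how many points we need from it */
+  MPI_Alltoall ((void *) nsend, 1, MPI_INT, nrecv, 1, MPI_INT, comm);
+
+  /* step 2: exchange the points' coordinates (3 doubles per point) */
+  for (i = 0; i < nprocs; i++)
+  {
+    scount[i] = 3 * nsend[i];
+    rcount[i] = 3 * nrecv[i];
+    sdispl[i] = i > 0 ? sdispl[i-1] + scount[i-1] : 0;
+    rdispl[i] = i > 0 ? rdispl[i-1] + rcount[i-1] : 0;
+  }
+  *nrecv_total = (rdispl[nprocs-1] + rcount[nprocs-1]) / 3;
+  *recv_coords = malloc (3 * *nrecv_total * sizeof (double));
+  MPI_Alltoallv ((void *) coords, scount, sdispl, MPI_DOUBLE,
+                 *recv_coords,   rcount, rdispl, MPI_DOUBLE, comm);
+
+  /* each processor now interpolates at the points it received, and
+     the results travel back with the reverse MPI_Alltoallv pattern */
+
+  free (nrecv);  free (scount);  free (rcount);
+  free (sdispl); free (rdispl);
+}
+\end{verbatim}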
+
+\section{Implementation Restrictions}
 Current limitations of the core interpolation routine's implementation are:
-%
 \begin{itemize}
 \item only arrays of up to three ({\bf MAXDIM}) dimensions can be handled
 \item only interpolation orders up to three ({\bf MAXORDER}) are supported
@@ -46,14 +50,13 @@ Current limitations of the core interpolation routine's implementation are:
 \item input and output array types must be the same (no type casting of
 interpolation results is supported)
 \end{itemize}
-%
+
 Despite these limitations, the code was written in a fairly generic way,
 in that it can easily be extended to support higher-dimensional arrays
-or more interpolation orders.\\
-Please see the NOTES in this source file for details.
+or more interpolation orders. Please see the NOTES in {\tt Interpolate.c}
+for details.
 
 \section{Comments}
-%
 For more information on how to invoke interpolation operators, please refer
 to the flesh documentation.
 %
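+
+As a quick illustration, a call of one of this thorn's operators might
+look as sketched below.  This is a hypothetical example, not taken from
+the flesh or thorn sources: the grid function {\tt "wavetoy::phi"} and
+all variable names are made up, the coordinate system {\tt "cart3d"}
+must have been registered by a coordinate thorn, and the exact
+{\tt CCTK\_InterpGV()} argument list should be verified against the
+flesh reference manual.
+\begin{verbatim}
+#include "cctk.h"
+
+/* Interpolate the grid function "wavetoy::phi" at npoints points with
+   coordinates interp_x/y/z (a sketch; see the flesh documentation for
+   the authoritative CCTK_InterpGV() calling sequence).               */
+void interp_example (cGH *GH, int npoints,
+                     const CCTK_REAL *interp_x,
+                     const CCTK_REAL *interp_y,
+                     const CCTK_REAL *interp_z,
+                     CCTK_REAL *phi_at_points)
+{
+  int operator_handle, coord_system_handle;
+
+  /* handle for an operator registered by this thorn, and for a
+     3D cartesian coordinate system                             */
+  operator_handle     = CCTK_InterpHandle ("second-order uniform cartesian");
+  coord_system_handle = CCTK_CoordSystemHandle ("cart3d");
+  if (operator_handle < 0 || coord_system_handle < 0)
+  {
+    CCTK_WARN (0, "failed to get interpolation operator or "
+                  "coordinate system handle");
+  }
+
+  /* one input grid function, one output array
+     (assumed varargs layout: coordinate arrays, input variable
+      indices, output arrays, output types; please verify)      */
+  if (CCTK_InterpGV (GH, operator_handle, coord_system_handle,
+                     npoints, 1, 1,
+                     interp_x, interp_y, interp_z,
+                     CCTK_VarIndex ("wavetoy::phi"),
+                     phi_at_points, CCTK_VARIABLE_REAL) < 0)
+  {
+    CCTK_WARN (1, "interpolation failed");
+  }
+}
+\end{verbatim}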