From 6d2aca99e8b9ff712ee47ebe23af38486e1b7d76 Mon Sep 17 00:00:00 2001
From: tradke
Date: Wed, 14 Feb 2001 14:40:43 +0000
Subject: This commit was generated by cvs2svn to compensate for changes in
 r2, which included commits to RCS files with non-trunk default branches.

git-svn-id: http://svn.cactuscode.org/arrangements/CactusPUGH/PUGHInterp/trunk@3 1c20744c-e24a-42ec-9533-f5004cb800e5
---
 doc/documentation.tex | 66 +++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 66 insertions(+)
 create mode 100644 doc/documentation.tex

diff --git a/doc/documentation.tex b/doc/documentation.tex
new file mode 100644
index 0000000..b59a7a3
--- /dev/null
+++ b/doc/documentation.tex
@@ -0,0 +1,66 @@
+%version $Header$
+\documentclass{article}
+\begin{document}
+
+\title{PUGHInterp}
+\author{Paul Walker, Thomas Radke, Erik Schnetter}
+\date{1997-2001}
+\maketitle
+
+\abstract{Thorn PUGHInterp provides interpolation of arrays
+  at arbitrary points.}
+
+\section{Purpose}
+Thorn PUGHInterp implements interpolation operators working on regular,
+uniform Cartesian grids. They can be applied to interpolate both CCTK grid
+variables (grid functions and arrays distributed over all processors) and
+processor-local arrays at arbitrary points.\\
+The operators for distributed/local arrays are invoked via the flesh
+interpolation API routines {\tt CCTK\_InterpGV()} and {\tt CCTK\_InterpLocal()},
+respectively. They are registered with the flesh under the name {\it ``uniform
+cartesian''}, with the interpolation order prepended (e.g.\ {\it ``second-order
+uniform cartesian''}). Currently first-, second-, and third-order
+interpolation is implemented.\\
+
+\section{Implementation Notes}
+The interpolation operators registered for the different orders are mapped
+via wrappers (in {\tt Startup.c}) onto a single routine (in {\tt Operator.c}),
+which simply receives the order as an additional argument.\\
+The routine for distributed arrays then maps each point to be interpolated
+onto the processor which owns it, and communicates the points'
+coordinates and the corresponding input arrays ({\tt MPI\_Alltoall()} is used
+for this).\\
+The interpolation then takes place in parallel on every processor, calling
+a core interpolation routine (located in {\tt Interpolate.c}). This routine
+takes a list of input arrays and points and interpolates these to a
+list of output arrays (one output value per interpolation point).\\
+Finally, for distributed arrays, the interpolation results for remote points
+are sent back to the requesting processors.\\[2ex]
+%
+Current limitations of the core interpolation routine's implementation are:
+%
+\begin{itemize}
+  \item only arrays of up to three ({\bf MAXDIM}) dimensions can be handled
+  \item only interpolation orders up to three ({\bf MAXORDER}) are supported
+  \item coordinates must be given as {\bf CCTK\_REAL} types
+  \item input and output array types must be the same
+        (no type casting of interpolation results is supported)
+\end{itemize}
+%
+Despite these limitations, the code was written in a fairly generic way
+so that it can easily be extended to support higher-dimensional arrays
+or further interpolation orders.\\
+Please see the NOTES in {\tt Interpolate.c} for details.
+
+\section{Comments}
+%
+For more information on how to invoke interpolation operators, please refer
+to the flesh documentation.
+%
+% Automatically created from the ccl files
+% Do not worry for now.
+\include{interface}
+\include{param}
+\include{schedule}
+
+\end{document}
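
A note on what the operators described above actually compute: an "Nth-order
uniform cartesian" operator evaluates, along each dimension, an (N+1)-point
Lagrange stencil on the uniform grid. The sketch below is an illustration
only, not the code from Interpolate.c; the function name interp_1d_quadratic,
the stencil centering, and the clamping at the grid edges are assumptions
made for this note.

/* Sketch: second-order ("quadratic") interpolation on a uniform 1-D grid.
   Illustration only -- not the PUGHInterp implementation. */
#include <math.h>
#include <stdio.h>

/* data[i] holds values at x = origin + i*delta, for 0 <= i < npoints */
static double interp_1d_quadratic(const double *data, int npoints,
                                  double origin, double delta, double x)
{
  /* index of the grid point closest to x; centre the 3-point stencil there */
  int centre = (int)floor((x - origin) / delta + 0.5);

  if (centre < 1)           centre = 1;
  if (centre > npoints - 2) centre = npoints - 2;

  /* offset of x from the stencil centre, in units of the grid spacing */
  double t = (x - (origin + centre * delta)) / delta;

  /* 3-point Lagrange weights for the points centre-1, centre, centre+1 */
  double w_m = 0.5 * t * (t - 1.0);
  double w_0 = 1.0 - t * t;
  double w_p = 0.5 * t * (t + 1.0);

  return w_m * data[centre - 1] + w_0 * data[centre] + w_p * data[centre + 1];
}

int main(void)
{
  /* sample f(x) = x*x on a uniform grid and interpolate at an off-grid point;
     a quadratic stencil reproduces a quadratic function exactly */
  double data[11];
  for (int i = 0; i <= 10; i++) {
    double x = 0.1 * i;
    data[i] = x * x;
  }
  printf("%g (exact: %g)\n",
         interp_1d_quadratic(data, 11, 0.0, 0.1, 0.537), 0.537 * 0.537);
  return 0;
}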
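
The Implementation Notes state that, for distributed arrays, each point is
first handed to the processor that owns it and that MPI_Alltoall() is used
for the exchange. The sketch below shows one common way such an exchange can
be structured (counts via MPI_Alltoall(), coordinates via MPI_Alltoallv());
it is a guess at the general pattern rather than PUGHInterp's actual code,
and owner_of() is a placeholder for the real PUGH domain decomposition.

/* Sketch of a point-exchange pattern: every rank tells every other rank how
   many interpolation points it needs from it, then ships the coordinates.
   Illustration only -- not the PUGHInterp code. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

/* placeholder: which rank owns the subdomain containing coordinate x? */
static int owner_of(double x, int nprocs)
{
  int owner = (int)(x * nprocs);        /* assume unit domain, equal slabs */
  return owner < nprocs ? owner : nprocs - 1;
}

int main(int argc, char **argv)
{
  MPI_Init(&argc, &argv);

  int rank, nprocs;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

  /* each rank wants to interpolate at a few points scattered over the domain */
  const int npoints = 4;
  double coords[4];
  for (int i = 0; i < npoints; i++)
    coords[i] = (rank + 0.5 + i) / (nprocs + npoints);

  /* count how many of my points each rank owns */
  int *sendcnt = calloc(nprocs, sizeof *sendcnt);
  int *recvcnt = calloc(nprocs, sizeof *recvcnt);
  for (int i = 0; i < npoints; i++)
    sendcnt[owner_of(coords[i], nprocs)]++;

  /* exchange the counts: afterwards recvcnt[p] says how many points
     rank p will send me to interpolate on its behalf */
  MPI_Alltoall(sendcnt, 1, MPI_INT, recvcnt, 1, MPI_INT, MPI_COMM_WORLD);

  /* build displacements, then exchange the coordinates themselves */
  int *sdispl = calloc(nprocs, sizeof *sdispl);
  int *rdispl = calloc(nprocs, sizeof *rdispl);
  int nrecv = recvcnt[0];
  for (int p = 1; p < nprocs; p++) {
    sdispl[p] = sdispl[p - 1] + sendcnt[p - 1];
    rdispl[p] = rdispl[p - 1] + recvcnt[p - 1];
    nrecv += recvcnt[p];
  }

  /* pack coordinates grouped by destination rank */
  double *sendbuf = malloc(npoints * sizeof *sendbuf);
  double *recvbuf = malloc((nrecv > 0 ? nrecv : 1) * sizeof *recvbuf);
  int *fill = calloc(nprocs, sizeof *fill);
  for (int i = 0; i < npoints; i++) {
    int p = owner_of(coords[i], nprocs);
    sendbuf[sdispl[p] + fill[p]++] = coords[i];
  }

  MPI_Alltoallv(sendbuf, sendcnt, sdispl, MPI_DOUBLE,
                recvbuf, recvcnt, rdispl, MPI_DOUBLE, MPI_COMM_WORLD);

  /* recvbuf now holds the coordinates this rank interpolates locally;
     the results would be returned with a second, mirrored exchange */
  printf("rank %d received %d points to interpolate\n", rank, nrecv);

  free(sendcnt); free(recvcnt); free(sdispl); free(rdispl);
  free(sendbuf); free(recvbuf); free(fill);
  MPI_Finalize();
  return 0;
}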