%version $Header$ \documentclass{article} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{document} \title{LocalInterp} \author{Jonathan Thornburg, incorporating code from Thomas Radke} % % We want CVS to expand the Id keyword on the next line, but we don't % want TeX to go into math mode to typeset the expansion (because that % wouldn't look as nice in the output), so we use the "$ $" construct % to get TeX out of math mode again when typesetting the expansion. % \date{$ $Id$ $} \maketitle \abstract{ This thorn provides processor-local interpolation of 1-D, 2-D, and 3-D data arrays. It provides interpolators for both the older \verb|CCTK_InterpLocal()| API, and the newer and more generic \verb|CCTK_InterpLocalUniform()| APIs. } %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Introduction} % replicate abstract since it disappears from Thorn Guide This thorn provides processor-local interpolation of 1-D, 2-D, and 3-D data arrays. It provides interpolators for both the older \verb|CCTK_InterpLocal()| API, and the newer and more generic \verb|CCTK_InterpLocalUniform()| APIs. At present there are 2~interpolators provided (we hope to add other interpolators in the future): \begin{description} \item[Uniform Cartesian] This is the local interpolator which used to live in the PUGHInterp thorn. It was written in C by Thomas Radke in early 2001, drawing on earlier Fortran code by Paul Walker. It supports the \verb|CCTK_InterpLocal()| API. It provides Lagrange polynomial interpolation for all combinations of~1, 2, and 3~dimensions, and orders~1, 2, and~3. (It would be easy to add additional dimensions and/or orders if desired.) \item[Generalized Polynomial] This interpolator was written in C (plus Maple to generate the coefficients) by Jonathan Thornburg in winter 2001-2002. It supports the \verb|CCTK_InterpLocalUniform()| API. It provides Lagrange polynomial interpolation in 1~dimension (orders~1-6) and~2 and 3~dimensions (orders~1-4). (Again, it would be easy to add additional dimensions and/or orders.) It offers a number of options, described below. \end{description} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Terminology} Within Cactus, each interpolator has a character-string name; this is mapped to a Cactus {\bf interpolator handle} by \verb|CCTK_InterpHandle()|. For any given interpolator handle, there may be a separate interpolator defined for each of the interpolation APIs (both the processor-local ones provided by this thorn, and the global ones provided by driver-specific interpolator thorns such as PUGHInterp). Terminology for interpolation seems to differ a bit from one author to another. Here we refer to the {\bf point-centered} interpolation of a set of {\bf input arrays} (defining data on a uniformly or nonuniformly spaced {\bf grid} of {\bf data points}) to a set of {\bf interpolation points} (specified by a corresponding set of {\bf interpolation coordinates}), with the results being stored in a corresponding set of {\bf output arrays}. Alternatively, we may refer to {\bf cell-centered} interpolation, using a grid of {\bf data cells} and a set of {\bf interpolation cells}. (This latter terminology is common in hydrodynamics interpolators.) 
At present all the interpolators do polynomial interpolation, and allow the interpolation of multiple input arrays (to the same set of interpolation points) in a single interpolator call, using the basic algorithm: \begin{verbatim} for each interpolation point { choose a finite difference molecule position somewhere near the interpolation point for each input array { compute an interpolating polynomial which approximates the input data at the molecule points output = polynomial(interpolation point) } } \end{verbatim} In the future, we may add other interpolators where the choice of molecule is data-dependent (and thus may vary from one input array to the next), and/or where the entire input grid is used in each interpolation. We define the {\bf order} of the interpolation to be the order of the fitted polynomial. That is, in our terminology, linear interpolation is order~1, quadratic is order~2, cubic is order~3, etc. An order~$n$ interpolator thus has $O(\Delta x^{n+1})$ errors for smooth input data. However, because the interpolating polynomial generally changes if the interpolation point moves from one grid cell to another, unless something special is done the interpolating function isn't smooth, i.e.\ its 1st~derivative is generically {\em discontinuous\/}, with $O(\Delta x^n)$ jump discontinuities each time the interpolating polynomial changes. Correspondingly, the interpolation error is generically a ``bump function'' which is zero at each grid point and rises to a local maximum in each grid cell. There are interpolation algorithms (e.g.\ Hermite polynomial and spline interpolation) which give better smoothness, but none of the present interpolators implement them. :( As given in the Function Reference section of the Cactus User's Guide, \verb|interp_coords|, \verb|input_arrays|, and \verb|output_arrays| are actually all pointers to arrays of \verb|void *| pointers, since we support a number of different Cactus data types. Internally, the interpolator casts these \verb|void *| pointers to \verb|CCTK_REAL *| or whatever the correct Cactus data types are. But when describing how the interpolator accesses the various arrays, for simplicity we usually gloss over this casting, i.e.\ we pretend that \verb|interp_coords|, \verb|input_arrays|, and \verb|output_arrays| are pointers to arrays of \verb|CCTK_REAL *| pointers. (This may become clearer once you read the next example.) We use \verb|pt|, \verb|in|, and \verb|out| as generic 0-origin integer subscripts into the arrays of interpolation points, input arrays, and output arrays respectively. We use \verb|(i,j,k)| as a generic \verb|N_dims|-vector of integer subscripts into the input array \verb|input_arrays[in]|. (Thus {\bf {\tt (i,j,k)} space} refers to the grid of data points.) We usually only write array subscripting expressions for the 3-D case; the restrictions/generalizations to other dimensions should be obvious. For example, for 3-D interpolation, the (x,y,z) coordinates of a typical interpolation point are given by \begin{verbatim} x = interp_coords[0][pt] y = interp_coords[1][pt] z = interp_coords[2][pt] \end{verbatim} (Notice here that as described above, we've implicitly taken \verb|interp_coords| to have the C~type \verb|const CCTK_REAL* interp_coords[]|, glossing over the casting from its actual C~type of \verb|const void* interp_coords[]|.) We take \verb|axis| to be an integer specifying a dimension, i.e.\ 0~for~$x$, 1~for~$y$, 2~for~$z$, \dots. 
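To make the \verb|void *| casting convention described above more concrete,
here is a schematic C~fragment (in the same spirit as the full examples
later in this document; the array names here are purely illustrative and
not part of any API) showing both sides of the convention for 3-D
\verb|CCTK_REAL| interpolation coordinates: the caller packages the
coordinate arrays as \verb|void *| pointers, and the interpolator casts
them back to \verb|CCTK_REAL *| before subscripting by \verb|pt|:
\begin{verbatim}
#define N_DIMS          3
#define N_INTERP_POINTS 100     /* illustrative only */

{
/* caller's side: package the coordinate arrays as void* pointers */
CCTK_REAL my_x[N_INTERP_POINTS], my_y[N_INTERP_POINTS], my_z[N_INTERP_POINTS];
const void* interp_coords[N_DIMS];
interp_coords[0] = (const void *) my_x;
interp_coords[1] = (const void *) my_y;
interp_coords[2] = (const void *) my_z;

  {
  /* interpolator's side: cast back to CCTK_REAL*                   */
  /* (this assumes the coordinates were declared by the caller to   */
  /*  be of type CCTK_VARIABLE_REAL), then subscript by pt          */
  const CCTK_REAL *const x_coords = (const CCTK_REAL *) interp_coords[0];
  const CCTK_REAL *const y_coords = (const CCTK_REAL *) interp_coords[1];
  const CCTK_REAL *const z_coords = (const CCTK_REAL *) interp_coords[2];
  int pt;
    for (pt = 0 ; pt < N_INTERP_POINTS ; ++pt)
    {
    const CCTK_REAL x = x_coords[pt];
    const CCTK_REAL y = y_coords[pt];
    const CCTK_REAL z = z_coords[pt];
    /* ... interpolate at (x,y,z) ... */
    }
  }
}
\end{verbatim}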
When describing Jacobians and domains of dependence, it's useful to
introduce the notion of {\bf molecule position}, a nominal reference
point for the interpolation molecule in \verb|(i,j,k)| coordinate space.
(For example, the molecule position might just be the \verb|(i,j,k)|
coordinates of the molecule's center.)  We also introduce {\bf molecule
coordinates} \verb|(mi,mj,mk)|, which are just \verb|(i,j,k)| coordinates
relative to the molecule position.  We use \verb|m| as a generic molecule
coordinate.  Thus (in notation which should be obvious) a generic molecule
operation can be written
\begin{equation}
\verb|output| = \sum_{\tt m} \verb|coeff[posn+m]| \times \verb|input[posn+m]|
\end{equation}
Note that there is no requirement that the output be semantically located
at the position \verb|posn|!
(This may become clearer once you look at the example in
section~\ref{sect-generic-options/Jacobian/fixed-sized-hypercube}.)
However, by convention we do assume that $\verb|m|=0$ is always a valid
\verb|m| coordinate; this simplifies pointer arithmetic.

When describing various entries in the parameter table in
section~\ref{sect-generic-options}, we use \verb|const| qualifiers on
table entries to indicate that the interpolator treats them as \verb|const|
variables/arrays, i.e.\ it promises not to change them.  In contrast,
table entries which are not shown as \verb|const| are considered
read-write by the interpolator; it typically uses them to return
outputs back to the caller.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\section{Generic Interpolator Options}
\label{sect-generic-options}

The newer \verb|CCTK_InterpLocalUniform()| and
\verb|CCTK_InterpLocalNonUniform()| APIs specify a {\bf parameter table}
(a Cactus key-value table, manipulated by the \verb|Util_Table*()| APIs)
as a generic mechanism for passing optional or mandatory input/output
to/from the interpolator.  Different interpolators support different
options; in this section we describe some options which are, or plausibly
will be, common to multiple interpolators.

Note that except as described in section~\ref{sect-generic-options/caching}
(``Caching''), all interpolator arguments and parameters apply only to the
current interpolator call: there is no visible state kept inside the
interpolator from one call to another.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\subsection{Interpolation Order}

The uniform Cartesian interpolator encodes the order in the operator name,
but other interpolators should use a (mandatory) parameter-table parameter
\begin{verbatim}
const CCTK_INT order;
\end{verbatim}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\subsection{Handling of Out-of-Range Interpolation Points}

By default, interpolators should consider it an error if any interpolation
point lies outside the grid, i.e.\ if the ``interpolation'' is actually an
extrapolation.  (Some interpolators may apply a small ``fuzz'' to this test
to help avoid floating-point rounding problems.)  Some interpolators may
allow this behavior to be changed by the optional parameter
\begin{verbatim}
const CCTK_REAL out_of_range_tolerance[N_dims];
\end{verbatim}
The semantics of this are as follows:  For each \verb|axis|,
\begin{description}
\item[\rm If $\hbox{\tt out\_of\_range\_tolerance[axis]} \ge 0.0$,]
        then an interpolation point is considered to be ``out of range''
        if and only if the interpolation point is
        $> \verb|out_of_range_tolerance[axis]| \times \verb|coord_delta[axis]|$
        outside the grid in this coordinate direction.
\item[\rm If $\hbox{\tt out\_of\_range\_tolerance[axis]} = -1.0$,]%%%
\footnote{%%%
         Note that this is an {\em exact\/} floating-point equality test!
         Although such tests are normally very dangerous, this one is ok,
         because -1.0 can be exactly represented in any reasonable
         floating-point format.
         }%%%
{}
        then an interpolation point is considered to be ``out of range''
        if and only if a centered molecule (or more generally, a molecule
        whose centering is chosen pretending that the grid is of infinite
        extent) would require data from outside the grid.
\end{description}
Other values of \verb|out_of_range_tolerance[axis]| are reserved for
future use.

To provide the ``fuzz'' noted above, \verb|out_of_range_tolerance[]|
should default to having all elements set to a small positive value,
say $10^4 \epsilon$, where $\epsilon$ is the ``machine epsilon''.%%%
\footnote{%%%
         {\bf machine epsilon} $\epsilon$ is defined to be the smallest
         positive floating-point number such that $1.0 + \epsilon$ compares
         ``not equal to'' 1.0 in floating-point arithmetic.  Machine epsilon
         values can be found in the standard header file {\tt float.h};
         for IEEE single and double precision they are about
         $1.19{\times}10^{-7}$ and $2.22{\times}10^{-16}$ respectively.
         }%%%
{}
However, at present all interpolators actually set the default value to
$10^4 \epsilon_d$, where $\epsilon_d$ is the machine epsilon for
C \verb|double| values, even though \verb|CCTK_REAL| may actually be a
different data type.  (This is a bug, and may get fixed some day\dots)

If any interpolation points are out of range (as determined by the
\verb|out_of_range_tolerance[]| criteria described above), the interpolator
should return the error code \verb|CCTK_ERROR_INTERP_POINT_X_RANGE|, and
report further details about the error by setting the following
parameter-table entries:
\begin{verbatim}
/* which interpolation point is out of range? */
/* (value is pt for out-of-range point) */
CCTK_INT out_of_range_pt;

/* in which coordinate axis is the point out of range? */
/* (value is axis for out-of-range point) */
CCTK_INT out_of_range_axis;

/* on which end of the grid is the point out of range? */
/* (value is -1 for min, +1 for max) */
CCTK_INT out_of_range_end;
\end{verbatim}
Note that if multiple points and/or axes are out of range, different
interpolators may vary in which out-of-range error they report.  That is,
it is {\em not\/} necessarily the case that the ``first'' such error will
be the one reported.%%%
\footnote{%%%
         This is to make life simpler for interpolators which work
         in parallel over multiple processors.
         }%%%

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\subsection{Molecule Family}
\label{sect-generic-options/molecule-family}

An interpolator may support different families/shapes of interpolation
molecules.  Hypercube-shaped molecules are the simplest and most common
case, but one could also imagine (say) octagon-shaped molecules in 2-D,
or some generalization of this in higher numbers of dimensions.
The parameter
\begin{verbatim}
const char molecule_family[];   /* set with Util_TableSetString() */
\end{verbatim}
may be used to query or change the strategy for choosing the molecules.
The semantics are that if this key is present with the value \verb|""|
(a 0-length ``empty'' string), this queries the default molecule family:
the interpolator will (re)set the value to a string describing the default
molecule family (\verb|"cube"| for hypercube-shaped molecules%%%
\footnote{%%%
         Strictly speaking, these should be called
         ``parallelepiped-shaped'' molecules, but this term is so clumsy
         (and hard to spell!) that we just call them hypercube-shaped
         instead.
         }%%%
).  If this key is present with a value which is a non-empty string,
this sets the molecule family/shape based on that string.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\subsection{Non-Contiguous Input Arrays}
\label{sect-generic-options/non-contiguous-inputs}

Sometimes the input ``arrays'' used by the interpolator might not be
contiguous in memory.  For example, we might want to do 2-D interpolation
within a plane of a 3-D grid array, but the plane might or might not be
contiguous in memory.  (Another example would be that the input arrays
might be members of a compact group.)

One way to do this would be to use the hyperslab API to explicitly compute
the input data on an appropriate hyperslab, then interpolate within that.
However, some interpolators may support accessing the appropriate hyperslab
of the input grid directly inside the interpolator.  If this is supported,
it's probably considerably cheaper than explicitly computing the hyperslab.

If an interpolator supports this, it should use the following (optional)
parameter-table entries to specify non-contiguous inputs:
\begin{verbatim}
const CCTK_INT input_array_offsets[N_input_arrays];
/* the next 3 table entries are shared by all input arrays */
const CCTK_INT input_array_strides       [N_dims];
const CCTK_INT input_array_min_subscripts[N_dims];
const CCTK_INT input_array_max_subscripts[N_dims];
\end{verbatim}
Then the interpolator would access the input using the generic
subscripting expression
\begin{verbatim}
input_array[in][offset + i*stride_i + j*stride_j + k*stride_k]
\end{verbatim}
where
\begin{verbatim}
offset = input_array_offsets[in]
(stride_i,stride_j,stride_k) = input_array_strides[]
\end{verbatim}
and where \verb|(i,j,k)| run from \verb|input_array_min_subscripts[]|
to \verb|input_array_max_subscripts[]| inclusive
(n.b.~this is an {\em inclusive\/} range,
i.e.\ $\verb|min| \le \verb|(i,j,k)| \le \verb|max|$).

The defaults are that each input array is contiguous in memory, i.e.\
\verb|input_array_offsets[]| = 0, \verb|stride| determined from
\verb|input_array_dims[]| in the usual Fortran manner,
\verb|input_array_min_subscripts[]| = all 0s, and
\verb|input_array_max_subscripts[]| = \verb|input_array_dims[]|-1.
If the stride and max subscript are both specified explicitly, then the
explicit \verb|input_array_dims[]| argument to
\verb|CCTK_InterpLocalUniform()| is ignored.

At present all the interpolators take the output arrays to be
contiguous 1-D arrays.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\subsection{Derivatives}
\label{sect-generic-options/derivatives}

Some interpolators can optionally take derivatives as part of the
interpolation:  if we view the input data as being samples of a smooth
function, then instead of (or in addition to) estimating values of that
function at the interpolation points, the interpolator can estimate
values of various derivatives of that function at the interpolation points.
One could also imagine interpolating more general molecule operations,
such as Laplacians or running integrals, though we don't currently
implement any of these.

To specify such operations, an interpolator should use the
parameter-table entries
\begin{verbatim}
const CCTK_INT operand_indices[N_output_arrays];
const CCTK_INT operation_codes[N_output_arrays];
\end{verbatim}
The semantics here are that
\begin{verbatim}
output_array[out] = op(input_array[operand_indices[out]])
\end{verbatim}
where \verb|op| is an operator specified by the \verb|operation_codes[out]|
value as described below.  Note that \verb|operand_indices[]| doesn't
directly name the inputs, but rather gives their (0-origin) subscripts
in the list of inputs.  This allows for a more efficient implementation
in the (common) case where some of the input arrays have many different
operations applied to them.

Negative \verb|operation_codes[out]| values are reserved for future use.
An \verb|operation_codes[out]| value which is $\ge 0$ is taken as a decimal
integer encoding a coordinate partial derivative:  each decimal digit means
to take the coordinate partial derivative along that (1-origin) axis;
the order of the digits in a number is ignored.  For example:
\begin{flushleft}
\begin{tabular}{cl}
\verb|operation_codes[out]|     & What it means  \\
\hline  %-----------------------------------------------------------------------
0       & no derivative, ``just'' interpolate the input array  \\
1       & interpolate $\partial \big/ \partial x^1$ of the input array  \\
2       & interpolate $\partial \big/ \partial x^2$ of the input array  \\
12 or 21        & interpolate $\partial^2 \big/ \partial x^1 \partial x^2$
                  of the input array  \\
33      & interpolate $\partial^2 \big/ \partial (x^3)^2$ of the input array  %%%\\
\end{tabular}
\end{flushleft}

At present we do {\em not\/} have any \verb|#define|s for the operation
codes in the Cactus header files.  However, you can avoid most of the
software-engineering problems of having ``magic numbers'' for the
operation codes by using the macro
\begin{verbatim}
#define DERIV(op)       op
\end{verbatim}
to mark all such \verb|operation_codes[]| values in your code.  There's an
example of this in section~\ref{sect-example-derivatives}.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\subsection{Jacobian and Domain of Dependence}
\label{sect-generic-options/Jacobian}

Sometimes we want to know the Jacobian of the interpolator,
\begin{equation}
\frac{\partial \hbox{\tt output\_array[out][pt]}}
     {\partial \hbox{\tt input\_array[in][(i,j,k)]}}
\label{eqn-Jacobian}
\end{equation}
and/or ``just'' the set of \verb|(i,j,k)| for which this is nonzero
(i.e.\ the sparsity structure of the Jacobian, equivalently the domain
of dependence of the output result, or equivalently the interpolation
molecule size and shape) for a given \verb|out|, \verb|in|, and \verb|pt|.%%%
\footnote{%%%
         For something like a spline interpolator the Jacobian is
         generally nonzero for all {\tt (i,j,k)}, but exponentially
         small for most of them; in this case the Jacobian-query API
         would probably specify a cutoff to return an approximate
         Jacobian with reasonable sparsity.
         }%%%

The complexity of doing this depends (strongly!) on the structure of the
Jacobian, and in particular on the answers to the following questions:
\begin{itemize}
\item   What molecule family is being used?
\item   Does the interpolation molecule size and/or shape depend on
        where the interpolation points are in the grid?%%%
\footnote{%%%
         We can always make the interpolation molecules be the
         ``same size and shape'' by padding them with zero entries
         to bring them all up to the size of the largest molecule
         used anywhere in the grid, but this would be very inefficient
         if many molecules were padded in this way.
         }%%%
\item   If this interpolator supports computing derivatives as described
        in section~\ref{sect-generic-options/derivatives}, does the
        interpolation molecule size and/or shape depend on
        \verb|operation_codes[]|?
\item   Does the interpolation molecule size and/or shape depend on the
        actual floating-point values being interpolated?  (Examples of
        this might include ENO (essentially nonoscillatory) and/or
        TVD (total-variation diminishing) interpolators for hydrodynamics
        calculations.)
\item   Do the actual floating-point values of the Jacobian depend on the
        actual floating-point values being interpolated?  Equivalently,
        is the interpolation nonlinear?
\end{itemize}
Because the different cases differ so much in their complexity, we define
several distinct APIs for obtaining the interpolator's Jacobian and/or
domain of dependence.

%%%%%%%%%%%%%%%%%%%%

\subsubsection{Determining the Jacobian's Structure}
\label{sect-generic-options/Jacobian/structure}

To allow generic code to determine which of the different Jacobian-structure
cases applies (and thus which APIs to use), an interpolator which supports
Jacobian operations should report its Jacobian structure using the following
parameter-table entries:

The parameter \verb|molecule_family| may be used to query what molecule
family is being used.  This is described in detail in
section~\ref{sect-generic-options/molecule-family}.

If the interpolation molecule size and/or shape vary with the interpolation
coordinates, the interpolator should set the parameter
\begin{verbatim}
CCTK_INT MSS_is_fn_of_interp_coords;
\end{verbatim}
to 1.  Otherwise (i.e.\ if the interpolation molecule size and shape are
independent of the interpolation coordinates) it should set this
parameter to 0.

If the interpolator supports computing derivatives as described in
section~\ref{sect-generic-options/derivatives}, and if the interpolation
molecule's size and/or shape varies with \verb|operation_codes[]|,
the interpolator should set the parameter
\begin{verbatim}
CCTK_INT MSS_is_fn_of_which_operation;
\end{verbatim}
to 1.  Otherwise (i.e.\ if the interpolator doesn't support computing
derivatives, or if the interpolator does support computing derivatives
but the interpolation molecule size and shape are independent of the
\verb|operation_codes[]| values), it should set this parameter to 0.
Note that this query tests whether the molecule size and/or shape depend
on \verb|operation_codes[]| in general, independent of whether there are
in fact any distinct values (or even any values at all) passed in
\verb|operation_codes[]| in this particular interpolator call.  In other
words, this is a query about the basic design of the interpolator, not
about this particular call.

If the interpolation molecule's size and/or shape varies with the actual
floating-point values of the input arrays, the interpolator should set
the parameter
\begin{verbatim}
CCTK_INT MSS_is_fn_of_input_array_values;
\end{verbatim}
to 1.
Otherwise (i.e.\ if the interpolation molecule size and shape are
independent of the input array values; this is a necessary, but not
sufficient, condition for the interpolation to be linear), it should
set this parameter to 0.

If the actual floating-point values of the Jacobian~$(\ref{eqn-Jacobian})$
(for a given \verb|out|, \verb|in|, and \verb|pt|) depend on the actual
floating-point values of the input arrays (i.e.\ if the interpolation
is nonlinear), the interpolator should set the parameter
\begin{verbatim}
CCTK_INT Jacobian_is_fn_of_input_array_values;
\end{verbatim}
to 1.  Otherwise (i.e.\ if the interpolation is linear) it should set
this parameter to 0.

%%%%%%%%%%%%%%%%%%%%

\subsubsection{Fixed-Size Hypercube-Shaped Molecules}
\label{sect-generic-options/Jacobian/fixed-sized-hypercube}

The simplest case (and the only one for which we have defined an API at
present) is when the molecules are hypercube-shaped and of (typically
small) fixed size, independent of the interpolation coordinates and the
actual floating-point values in the input arrays (though presumably
depending on the interpolation order and on the \verb|operation_codes[]|
value).  In other words, this case applies if (and only if) the Jacobian
structure information described in
section~\ref{sect-generic-options/Jacobian/structure} returns
\begin{verbatim}
MSS_is_fn_of_interp_coords = 0
MSS_is_fn_of_which_operation = 0
MSS_is_fn_of_input_array_values = 0
Jacobian_is_fn_of_input_array_values = 0
\end{verbatim}

The following parameters may be used to query the molecule size:
\begin{verbatim}
CCTK_INT const molecule_min_m[N_dims];
CCTK_INT const molecule_max_m[N_dims];
\end{verbatim}
The semantics of these are that if both of these keys are present
(the values don't matter), then the interpolator will (re)set the values
to give the (inclusive) minimum and maximum \verb|m|~molecule coordinates.
(Note that either both of these keys should be present, or neither should
be present.  This simplifies the logic in the interpolator slightly.)

The following parameter may be used to query the molecule positions:
\begin{verbatim}
CCTK_INT *const molecule_positions[N_dims];
\end{verbatim}
The semantics of this are that the caller should set
\verb|molecule_positions[]| to an array of \verb|N_dims| pointers to
(caller-supplied) arrays of \verb|N_interp_points| \verb|CCTK_INT|s each.
If this key exists, then the interpolator will store the molecule
positions in the pointed-to arrays.

The following parameters may be used to query the
Jacobian~$(\ref{eqn-Jacobian})$ itself:
\begin{verbatim}
CCTK_REAL* const Jacobian_pointer[N_output_arrays];
const CCTK_INT   Jacobian_offset [N_output_arrays];    /* optional */
/* the next 3 table entries are shared by all Jacobians */
const CCTK_INT Jacobian_interp_point_stride;
const CCTK_INT Jacobian_m_strides[N_dims];
const CCTK_INT Jacobian_part_stride;                   /* optional */
\end{verbatim}
All the optional entries default to 0 if omitted.  Then for each \verb|out|
where \verb|Jacobian_pointer[out] != NULL|,%%%
\footnote{%%%
         That is, setting {\tt Jacobian\_pointer[out] = NULL} suppresses
         the Jacobian query.  If (as is often the case) the Jacobian is
         independent of {\tt out}, then it's a good idea to avoid
         unnecessary computations by suppressing the queries in this
         manner for any remaining output arrays.
         }%%%
the interpolator would store the Jacobian~$(\ref{eqn-Jacobian})$ in
\begin{verbatim}
Jacobian_pointer[out][offset + pt*Jacobian_interp_point_stride
                             + mi*stride_i + mj*stride_j + mk*stride_k
                             + part*Jacobian_part_stride]
\end{verbatim}
where
\begin{verbatim}
offset = Jacobian_offset[out]
(stride_i,stride_j,stride_k) = Jacobian_m_strides[]
\end{verbatim}
and where \verb|part| is 0 for real values and the real parts of complex
values, and 1 for the imaginary parts of complex values.  By appropriately
setting the various stride parameters, this allows a fairly wide variety
of possible storage layouts for the Jacobian.

An example may help to clarify this:  Suppose we have a 1-D grid with
11~grid points, with integer subscripts~0 through~10 inclusive, and
interpolation coordinates given by \verb|coord_origin = 0.0| and
\verb|coord_delta = 0.1|.  Suppose further that we're doing quadratic
interpolation, using 3-point (vertex-centered) molecules, and that the
interpolator centers the interpolation molecules as close to the
interpolation point as possible subject to the constraint that the
molecules never require data from outside the grid.  Finally, suppose
that we query the Jacobian molecule positions for the
\verb|N_interp_points=14| interpolation points
0.0, 0.04, 0.06, 0.10, 0.14, 0.16, 0.20,
0.80, 0.84, 0.86, 0.90, 0.94, 0.96, 1.00.
Then the queries might return
\begin{verbatim}
interp_molecule_min_m = -1
interp_molecule_max_m = +1

/* interp_x molecule */
interp_molecule_positions[0][ 0] = 1    /* 0.00 [0.0, 0.1, 0.2] */
interp_molecule_positions[0][ 1] = 1    /* 0.04 [0.0, 0.1, 0.2] */
interp_molecule_positions[0][ 2] = 1    /* 0.06 [0.0, 0.1, 0.2] */
interp_molecule_positions[0][ 3] = 1    /* 0.10 [0.0, 0.1, 0.2] */
interp_molecule_positions[0][ 4] = 1    /* 0.14 [0.0, 0.1, 0.2] */
interp_molecule_positions[0][ 5] = 2    /* 0.16 [0.1, 0.2, 0.3] */
interp_molecule_positions[0][ 6] = 2    /* 0.20 [0.1, 0.2, 0.3] */
interp_molecule_positions[0][ 7] = 8    /* 0.80 [0.7, 0.8, 0.9] */
interp_molecule_positions[0][ 8] = 8    /* 0.84 [0.7, 0.8, 0.9] */
interp_molecule_positions[0][ 9] = 9    /* 0.86 [0.8, 0.9, 1.0] */
interp_molecule_positions[0][10] = 9    /* 0.90 [0.8, 0.9, 1.0] */
interp_molecule_positions[0][11] = 9    /* 0.94 [0.8, 0.9, 1.0] */
interp_molecule_positions[0][12] = 9    /* 0.96 [0.8, 0.9, 1.0] */
interp_molecule_positions[0][13] = 9    /* 1.00 [0.8, 0.9, 1.0] */
\end{verbatim}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\subsection{Caching}
\label{sect-generic-options/caching}

Some interpolators may support special ``caching'' optimizations to speed
repeated interpolations where some or all of the interpolator arguments
and/or parameters are the same.  For example, when interpolating a
tabulated equation of state, the number of dimensions, the coordinate
origin and grid spacing, and the input arrays (the tabulated equation of
state data) will probably all be the same from one interpolator call
to another.
If an interpolator supports caching, the following parameters should be used to control this: \begin{verbatim} const char cache_type[]; /* set with Util_TableSetString() */ const char cache_control[]; /* set with Util_TableSetString() */ CCTK_INT cache_handle; \end{verbatim} There are three basic operations supported: \begin{description} \item[Create a Cache] To set up caching, call the interpolator with \verb|cache_type| set to describe what arguments and/or parameters will remain the same in future interpolator calls, \verb|cache_control| set to the string \verb|"create"|, and \verb|cache_handle| {\em not\/} in the parameter table. The interpolator will then do extra (possibly quite time-consuming) work to set up cached information. The interpolator will delete the key \verb|cache_control|, and return a handle to the cached information in \verb|cache_handle|; this allows multiple caches to be active concurrently. \item[Use a Cache] To use a cache (i.e.\ to make an interpolation with the hoped-for speedup), just call the interpolator with \verb|cache_handle| set to the value returned when the cache was created. Note that you still have to provide all the ``will be the same'' interpolator arguments and/or parameters; providing a cache handle is essentially just a promise that these will be the same as in the cache-create interpolator call. The details of what information is cached, and if/how the ``will be the same'' arguments are still used, are up to the interpolator. \item[Destroy a Cache] To destroy a cache (i.e.\ free any memory allocated when the cache was created), call the interpolator with \verb|cache_handle| set to the value returned when the cache was created, and \verb|cache_control| set to the string \verb|"destroy"|. The interpolator will delete the keys \verb|cache_handle| and \verb|cache_control|, and destroy the cache. \end{description} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{The Uniform Cartesian Interpolator} The uniform Cartesian interpolator is the one which used to live in the PUGHInterp thorn. It was written in C by Thomas Radke in early 2001, drawing on earlier Fortran code by Paul Walker. It supports the \verb|CCTK_InterpLocal()| API, and thus doesn't use any parameter table at all. It provides Lagrange polynomial interpolation of orders 1, 2, and 3, registered under the operator names \verb|"first-order uniform Cartesian"|, \verb|"second-order uniform Cartesian"|, and \verb|"third-order uniform Cartesian"| respectively. Each of these supports 1, 2, and 3-D arrays. The code is hard-wired for hypercube-shaped interpolation molecules, but it would be easy to add additional orders and/or dimensions if desired. Although the \verb|CCTK_InterpLocal()| API supports both uniform and nonuniform grids for the input data, the present implementation assumes a uniform grid (and silently gives wrong results for a nonuniform grid). See the Cactus User's Guide ``Full Description of Functions'' appendix for a full description of the \verb|CCTK_InterpLocal()| API, and some examples of how to use it. %%%%%%%%%%%%%%%%%%%% \subsection{Implementation} Internally, this interpolator always does 3-D interpolation, inserting zero coordinates as appropriate for lower dimensionalities. The interpolation is done by successive 1-D interpolations along each axis. See the \verb|README| file in the source code directory \verb|LocalInterp/src/UniformCartesian/| for further details. 
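To illustrate the basic idea of one such 1-D pass (this is only a schematic
sketch, not the thorn's actual source code, and the function name here is
hypothetical), a single order~3 (cubic) 1-D Lagrange interpolation from
4~uniformly spaced data values can be written as
\begin{verbatim}
/* Schematic sketch only (not the actual thorn code):
 * a single order-3 (cubic) 1-D Lagrange interpolation.
 * f[0..3] are the data values at the 4 molecule points, and x is the
 * interpolation point's coordinate in units of the grid spacing,
 * measured from the first molecule point (so 0.0 <= x <= 3.0 if the
 * interpolation point lies inside the molecule).
 */
static CCTK_REAL cubic_interp_1d(const CCTK_REAL f[4], CCTK_REAL x)
{
  const CCTK_REAL c0 = - (x-1.0)*(x-2.0)*(x-3.0) / 6.0;
  const CCTK_REAL c1 = +  x     *(x-2.0)*(x-3.0) / 2.0;
  const CCTK_REAL c2 = -  x*(x-1.0)     *(x-3.0) / 2.0;
  const CCTK_REAL c3 = +  x*(x-1.0)*(x-2.0)      / 6.0;
  return c0*f[0] + c1*f[1] + c2*f[2] + c3*f[3];
}
\end{verbatim}
For third-order 3-D interpolation on a $4 \times 4 \times 4$ molecule,
such a 1-D operation would (roughly speaking) be applied first along the
$x$~axis for each of the 16 molecule lines, then along $y$ for each of
the 4 remaining columns, then once along $z$.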
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\section{The Generalized Polynomial Interpolator}
\label{sect-generalized-polynomial-interp}

The generalized polynomial interpolator was written in C (plus Maple to
generate the coefficients) by Jonathan Thornburg in winter 2001-2002.
It provides Lagrange polynomial interpolation of orders 1-6 for 1-D arrays,
and 1-4 for 2- and 3-D arrays, all registered under the operator name
\verb|"generalized polynomial interpolation"|.  (Again, it would be easy
to add additional orders and/or dimensions if desired.)  The code allows
arbitrarily-shaped interpolation molecules, but at the moment only
hypercube-shaped molecules are implemented.

This interpolator supports a number of the features described in
section~\ref{sect-generic-options}:
\begin{itemize}
\item   interpolation order (this is a mandatory parameter)
\item   handling of out-of-range interpolation points
        (if there are multiple out-of-range points/axes, the one reported
        will be the first, i.e.\ the out-of-range point with the smallest
        \verb|pt|, and of that point's out-of-range axes, the one with
        the smallest \verb|axis|)
\item   non-contiguous input arrays
\item   derivatives (when taking derivatives with this interpolator,
        it's most efficient to group all operations on a given input array
        together in the \verb|operand_indices| and \verb|operation_codes|
        arrays, as in the example in section~\ref{sect-example-derivatives})
\item   querying the interpolation's Jacobian and/or domain of dependence
\end{itemize}
It also supports one additional feature:
\begin{itemize}
\item   Savitzky-Golay smoothing of the input data as part of the
        interpolation process (described in
        section~\ref{sect-generalized-polynomial-interp/smoothing})
\end{itemize}
The interpolation order is a mandatory parameter which must be present in
the parameter table when the interpolator is called; all the other
parameter-table parameters are optional.  This interpolator does not
support any caching features; at present (May 2002) we have no plans to
add these.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\subsection{Smoothing}
\label{sect-generalized-polynomial-interp/smoothing}

The way the generalized polynomial interpolator is implemented, it's easy
to also do Savitzky-Golay smoothing.%%%
\footnote{%%%
         See section 14.8 of Numerical Recipes 2nd edition for a general
         discussion of Savitzky-Golay smoothing.
         }%%%
{}
This is best described by way of an example:  Suppose we're doing 1-D
cubic interpolation, i.e.\ at each interpolation point we're using a
cubic polynomial fitted to 4~surrounding data points.  For Savitzky-Golay
smoothing, we would instead {\em least-squares fit\/} a cubic polynomial
to some {\em larger\/} number of surrounding data points.  This combines
interpolation with smoothing, so there's less amplification of noise in
the input data in the interpolation outputs.

The optional input
\begin{verbatim}
const CCTK_INT smoothing;
\end{verbatim}
specifies how much (how many points) to enlarge the interpolation molecule
for this.  The default is 0 (no smoothing).  1 would mean to enlarge the
molecule by 1 point, e.g.\ to use a 5-point molecule instead of the usual
4-point one for cubic interpolation.  2 would mean to enlarge by 2 points,
e.g.\ to use a 6-point molecule for cubic interpolation.  And so on.
Note that in $>1$~dimension, the default hypercube-shaped molecules
already use more data points than the number of free parameters in the
interpolating polynomials, i.e.\ they already do some Savitzky-Golay
smoothing.
For example, in 2-D a generic cubic polynomial $f(x,y) = \sum_{i+j \leq 3} c_{ij} x^i y^j$ has 10 free parameters $c_{ij}$, which we least-squares fit to the 16 data points in the $4 \times 4$ molecule. Savitzky-Golay smoothing is basically free apart from the increase in the molecule size, e.g. a \verb|smoothing|=2 cubic interpolation has exactly the same cost as any other 6-point--molecule interpolation. Alas, at the moment only the trivial case \verb|smoothing|=0 is implemented, but the framework is all there for more general cases. %%%%%%%%%%%%%%%%%%%% \subsection{Implementation} This interpolator's basic design is to use separate specialized code for each combination of $(\verb|N_dims|, \verb|molecule_family|, \verb|order|, \verb|smoothing|)$, i.e.\ in practice for each distinct choice of interpolation molecule. Maple is used to generate all the interpolation coefficients. The C preprocessor is then used to generate all the specialized code from a single master ``template'' file. The template code uses \verb|#ifdef|s to handle lower dimensions with no extra overhead, e.g.\ 1-D/2-D interpolation is basically just as efficient as in a purpose-built 1-D/2-D interpolator, with no extra overhead imposed by the interpolator also supporting higher-dimensional interpolation. The Maple code which generates the interpolation coefficients is quite general-purpose, and can handle an arbitrary dimensionality and molecule size/shape. Generating new coefficients can be rather time-consuming, though, e.g.\ the current coefficients for 3-D for orders~1-4 take about 8~cpu minutes to generate using Maple~7 on a 1.7~GHz~P4. See the \verb|README| file in the source code directory \verb|LocalInterp/src/UniformCartesian/| for further details on the implementation. Note that when compiling the code in this directory, you may get compiler warnings about casts discarding \verb|const| qualifiers from pointers in \verb|template.c|. Don't worry -- the code is actually ok, the problem is that C's \verb|const|-qualifier rules are overly strict in some cases. See question 11.10 in the (excellent!) \verb|comp.lang.c| FAQ list (\verb|http://www.eskimo.com/~scs/C-faq/top.html|) for details. (I think \hbox{C\hspace{-.05em}\raisebox{.4ex}{\tiny\bf ++}} has at least partially fixed the \verb|const|-qualifier rules.) %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \subsection{A Simple Example of {\tt CCTK\_InterpLocalUniform} Usage} Here's a simple example in C, of interpolating a \verb|CCTK_REAL| and a \verb|CCTK_COMPLEX| $10 \times 20$ 2-D array, at 5 interpolation points, using cubic interpolation. Note that since C allows arrays to be initialized only if the initializer values are compile-time constants, we have to declare the \verb|interp_coords[]|, \verb|input_arrays[]|, and \verb|output_arrays[]| arrays as non-\verb|const|, and set their values with ordinary (run-time) assignment statements. In \hbox{C\hspace{-.05em}\raisebox{.4ex}{\tiny\bf ++}}, there's no restriction on initializer values, so we could declare the arrays \verb|const| and initialize them as part of their declarations. \begin{verbatim} #define N_DIMS 2 #define N_INTERP_POINTS 5 #define N_INPUT_ARRAYS 2 #define N_OUTPUT_ARRAYS 2 /* (x,y) coordinates of data grid points */ #define X_ORIGIN ... #define X_DELTA ... #define Y_ORIGIN ... #define Y_DELTA ... 
const CCTK_REAL origin[N_DIMS] = { X_ORIGIN, Y_ORIGIN }; const CCTK_REAL delta [N_DIMS] = { X_DELTA, Y_DELTA }; /* (x,y) coordinates of interpolation points */ const CCTK_REAL interp_x[N_INTERP_POINTS]; const CCTK_REAL interp_y[N_INTERP_POINTS]; const void* interp_coords[N_DIMS]; /* see note above */ /* input arrays */ /* ... note Cactus uses Fortran storage ordering, i.e. X is contiguous */ #define NX 10 #define NY 20 const CCTK_REAL input_real [NY][NX]; const CCTK_COMPLEX input_complex[NY][NX]; const CCTK_INT input_array_dims[N_DIMS] = { NX, NY }; const CCTK_INT input_array_type_codes[N_INPUT_ARRAYS] = { CCTK_VARIABLE_REAL, CCTK_VARIABLE_COMPLEX }; const void* input_arrays[N_INPUT_ARRAYS]; /* see note above */ /* output arrays */ CCTK_REAL output_real [N_INTERP_POINTS]; CCTK_COMPLEX output_complex[N_INTERP_POINTS]; const CCTK_INT output_array_type_codes[N_OUTPUT_ARRAYS] = { CCTK_VARIABLE_REAL, CCTK_VARIABLE_COMPLEX }; void* output_arrays[N_OUTPUT_ARRAYS]; /* see note above */ int operator_handle, param_table_handle; operator_handle = CCTK_InterpHandle("my interpolation operator"); if (operator_handle < 0) CCTK_WARN(-1, "can't get interpolation handle!"); param_table_handle = Util_TableCreateFromString("order=3"); if (param_table_handle < 0) CCTK_WARN(-1, "can't create parameter table!"); /* initialize the rest of the parameter arrays */ interp_coords[0] = (const void *) interp_x; interp_coords[1] = (const void *) interp_y; input_arrays [0] = (const void *) input_real; input_arrays [1] = (const void *) input_complex; output_arrays[0] = ( void *) output_real; output_arrays[1] = ( void *) output_complex; /* do the actual interpolation, and check for error returns */ if (CCTK_InterpLocalUniform(N_DIMS, operator_handle, param_table_handle, origin, delta, N_INTERP_POINTS, CCTK_VARIABLE_REAL, interp_coords, N_INPUT_ARRAYS, input_array_dims, input_array_type_codes, input_arrays, N_OUTPUT_ARRAYS, output_array_type_codes, output_arrays) < 0) CCTK_WARN(-1, "error return from interpolator!"); \end{verbatim} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \subsection{An Example of Interpolating Derivatives} \label{sect-example-derivatives} Consider the problem described earlier: the computation of all the 2nd~derivatives of the 3-metric at a set of interpolation points on a 2-sphere. This example shows how we could do this in C. [Since we're not using \hbox{C\hspace{-.05em}\raisebox{.4ex}{\tiny\bf ++}}, we again declare the pointer arrays non-\verb|const|, and ``initialize'' them with ordinary (run-time) assignment statements. However, in this example, for greater clarity we place these assignment statements right after the declarations. Since C allows declarations only at the start of a \verb|{ }| block, not in the middle of a block, we nest the rest of the program in extra blocks (with the \verb|{ }| indented 2 spaces to distinguish them from ``normal'' \verb|{ }| pairs) to allow for further declarations. The reader will have to decide whether this style is more or less ugly than separating the declarations and initializations of the pointer arrays.] 
\begin{verbatim} #define N_DIMS 3 /* interpolation points */ #define N_INTERP_POINTS 1000 const CCTK_REAL interp_x[N_INTERP_POINTS], interp_y[N_INTERP_POINTS], interp_z[N_INTERP_POINTS]; const void* interp_coords[N_DIMS]; /* see note above */ interp_coords[0] = (const void *) interp_x; interp_coords[1] = (const void *) interp_y; interp_coords[2] = (const void *) interp_z; { /* dimensions of the data grid */ #define NX 30 #define NY 40 #define NZ 50 /* input arrays */ /* n.b. we use Fortran storage order: X is contiguous, Z least so */ #define N_INPUT_ARRAYS 6 const CCTK_REAL gxx[NZ][NY][NX], gxy[NZ][NY][NX], gxz[NZ][NY][NX], gyy[NZ][NY][NX], gyz[NZ][NY][NX], gzz[NZ][NY][NX]; const CCTK_INT input_array_dims[N_DIMS] = {NX, NY, NZ}; const CCTK_INT input_array_type_codes[N_INPUT_ARRAYS] = { CCTK_VARIABLE_REAL, CCTK_VARIABLE_REAL, CCTK_VARIABLE_REAL, CCTK_VARIABLE_REAL, CCTK_VARIABLE_REAL, CCTK_VARIABLE_REAL }; const void* input_arrays[N_INPUT_ARRAYS]; /* see note above */ input_arrays[0] = (const void *) gxx; input_arrays[1] = (const void *) gxy; input_arrays[2] = (const void *) gxz; input_arrays[3] = (const void *) gyy; input_arrays[4] = (const void *) gyz; input_arrays[5] = (const void *) gzz; { /* output arrays */ #define N_OUTPUT_ARRAYS 36 CCTK_REAL dxx_gxx[N_INTERP_POINTS], dxy_gxx[N_INTERP_POINTS], dxz_gxx[N_INTERP_POINTS], dyy_gxx[N_INTERP_POINTS], dyz_gxx[N_INTERP_POINTS], dzz_gxx[N_INTERP_POINTS], dxx_gxy[N_INTERP_POINTS], dxy_gxy[N_INTERP_POINTS], dxz_gxy[N_INTERP_POINTS], dyy_gxy[N_INTERP_POINTS], dyz_gxy[N_INTERP_POINTS], dzz_gxy[N_INTERP_POINTS], dxx_gxz[N_INTERP_POINTS], dxy_gxz[N_INTERP_POINTS], dxz_gxz[N_INTERP_POINTS], dyy_gxz[N_INTERP_POINTS], dyz_gxz[N_INTERP_POINTS], dzz_gxz[N_INTERP_POINTS], dxx_gyy[N_INTERP_POINTS], dxy_gyy[N_INTERP_POINTS], dxz_gyy[N_INTERP_POINTS], dyy_gyy[N_INTERP_POINTS], dyz_gyy[N_INTERP_POINTS], dzz_gyy[N_INTERP_POINTS], dxx_gyz[N_INTERP_POINTS], dxy_gyz[N_INTERP_POINTS], dxz_gyz[N_INTERP_POINTS], dyy_gyz[N_INTERP_POINTS], dyz_gyz[N_INTERP_POINTS], dzz_gyz[N_INTERP_POINTS], dxx_gzz[N_INTERP_POINTS], dxy_gzz[N_INTERP_POINTS], dxz_gzz[N_INTERP_POINTS], dyy_gzz[N_INTERP_POINTS], dyz_gzz[N_INTERP_POINTS], dzz_gzz[N_INTERP_POINTS]; const CCTK_INT output_array_type_codes[N_OUTPUT_ARRAYS] = { CCTK_VARIABLE_REAL, CCTK_VARIABLE_REAL, CCTK_VARIABLE_REAL, CCTK_VARIABLE_REAL, CCTK_VARIABLE_REAL, CCTK_VARIABLE_REAL, CCTK_VARIABLE_REAL, CCTK_VARIABLE_REAL, CCTK_VARIABLE_REAL, CCTK_VARIABLE_REAL, CCTK_VARIABLE_REAL, CCTK_VARIABLE_REAL, CCTK_VARIABLE_REAL, CCTK_VARIABLE_REAL, CCTK_VARIABLE_REAL, CCTK_VARIABLE_REAL, CCTK_VARIABLE_REAL, CCTK_VARIABLE_REAL, CCTK_VARIABLE_REAL, CCTK_VARIABLE_REAL, CCTK_VARIABLE_REAL, CCTK_VARIABLE_REAL, CCTK_VARIABLE_REAL, CCTK_VARIABLE_REAL, CCTK_VARIABLE_REAL, CCTK_VARIABLE_REAL, CCTK_VARIABLE_REAL, CCTK_VARIABLE_REAL, CCTK_VARIABLE_REAL, CCTK_VARIABLE_REAL, CCTK_VARIABLE_REAL, CCTK_VARIABLE_REAL, CCTK_VARIABLE_REAL, CCTK_VARIABLE_REAL, CCTK_VARIABLE_REAL, CCTK_VARIABLE_REAL }; void* output_arrays[N_OUTPUT_ARRAYS]; /* see note above */ output_arrays[ 0] = (void *) dxx_gxx; output_arrays[ 1] = (void *) dxy_gxx; output_arrays[ 2] = (void *) dxz_gxx; output_arrays[ 3] = (void *) dyy_gxx; output_arrays[ 4] = (void *) dyz_gxx; output_arrays[ 5] = (void *) dzz_gxx; output_arrays[ 6] = (void *) dxx_gxy; output_arrays[ 7] = (void *) dxy_gxy; output_arrays[ 8] = (void *) dxz_gxy; output_arrays[ 9] = (void *) dyy_gxy; output_arrays[10] = (void *) dyz_gxy; output_arrays[11] = (void *) dzz_gxy; output_arrays[12] = (void *) dxx_gxz; 
output_arrays[13] = (void *) dxy_gxz; output_arrays[14] = (void *) dxz_gxz; output_arrays[15] = (void *) dyy_gxz; output_arrays[16] = (void *) dyz_gxz; output_arrays[17] = (void *) dzz_gxz; output_arrays[18] = (void *) dxx_gyy; output_arrays[19] = (void *) dxy_gyy; output_arrays[20] = (void *) dxz_gyy; output_arrays[21] = (void *) dyy_gyy; output_arrays[22] = (void *) dyz_gyy; output_arrays[23] = (void *) dzz_gyy; output_arrays[24] = (void *) dxx_gyz; output_arrays[25] = (void *) dxy_gyz; output_arrays[26] = (void *) dxz_gyz; output_arrays[27] = (void *) dyy_gyz; output_arrays[28] = (void *) dyz_gyz; output_arrays[29] = (void *) dzz_gyz; output_arrays[30] = (void *) dxx_gzz; output_arrays[31] = (void *) dxy_gzz; output_arrays[32] = (void *) dxz_gzz; output_arrays[33] = (void *) dyy_gzz; output_arrays[34] = (void *) dyz_gzz; output_arrays[35] = (void *) dzz_gzz; { /* integer codes to specify the derivatives */ /* (for best efficiency we group all operations on a given input together) */ const CCTK_INT operand_indices[N_OUTPUT_ARRAYS] = { 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 4, 4, 4, 4, 4, 4, 5, 5, 5, 5, 5, 5 }; #define DERIV(x) x const CCTK_INT operation_codes[N_OUTPUT_ARRAYS] = { DERIV(11), DERIV(12), DERIV(13), DERIV(22), DERIV(23), DERIV(33), DERIV(11), DERIV(12), DERIV(13), DERIV(22), DERIV(23), DERIV(33), DERIV(11), DERIV(12), DERIV(13), DERIV(22), DERIV(23), DERIV(33), DERIV(11), DERIV(12), DERIV(13), DERIV(22), DERIV(23), DERIV(33), DERIV(11), DERIV(12), DERIV(13), DERIV(22), DERIV(23), DERIV(33), DERIV(11), DERIV(12), DERIV(13), DERIV(22), DERIV(23), DERIV(33) }; int operator_handle, param_table_handle; operator_handle = CCTK_InterpHandle("my interpolation operator"); if (operator_handle < 0) CCTK_WARN(-1, "can't get interpolation handle!"); param_table_handle = Util_TableCreate(UTIL_TABLE_FLAGS_DEFAULT); if (param_table_handle < 0) CCTK_WARN(-1, "can't create parameter table!"); if (Util_TableSetInt(param_table_handle, 3, "order") < 0) CCTK_WARN(-1, "can't set order in parameter table!"); if (Util_TableSetIntArray(param_table_handle, N_OUTPUT_ARRAYS, operand_indices, "operand_indices") < 0) CCTK_WARN(-1, "can't set operand_indices array in parameter table!"); if (Util_TableSetIntArray(param_table_handle, N_OUTPUT_ARRAYS, operation_codes, "operation_codes") < 0) CCTK_WARN(-1, "can't set operation_codes array in parameter table!"); if (CCTK_InterpLocalUniform(N_DIMS, operator_handle, param_table_handle, origin, delta, N_INTERP_POINTS, CCTK_VARIABLE_REAL, interp_coords, N_INPUT_ARRAYS, input_array_dims, input_array_type_codes, input_arrays, N_OUTPUT_ARRAYS, output_array_type_codes, output_arrays) < 0) CCTK_WARN(-1, "error return from interpolator!"); } } } \end{verbatim} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Acknowledgments} Thanks to Ian Hawke for helpful comments on the documentation, to Tom Goodale and Thomas Radke for many useful design discussions, and to all the Cactus crew for a great infrastructure! %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % % Automatically created from the ccl files % Do not worry for now. % \include{interface} \include{param} \include{schedule} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \end{document}