author     rideout <rideout@b61c5cb5-eaca-4651-9a7a-d64986f99364>  2002-04-04 16:19:29 +0000
committer  rideout <rideout@b61c5cb5-eaca-4651-9a7a-d64986f99364>  2002-04-04 16:19:29 +0000
commit     554f49fe8631ccef219d19471c9f4098b4befb13 (patch)
tree       a8ae6860a57273445b8a24310a49c804ebcd7d10 /doc
parent     75cc8f682ed1965c8bd1f235a5242beba39daa14 (diff)
Fixed some typos / minor 'bugs'.
Removed empty "Load Balancing" section (this documentation is provided by the "Processor Decomposition" section).

git-svn-id: http://svn.cactuscode.org/arrangements/CactusPUGH/PUGH/trunk@372 b61c5cb5-eaca-4651-9a7a-d64986f99364
Diffstat (limited to 'doc')
-rw-r--r--  doc/documentation.tex  39
1 file changed, 20 insertions, 19 deletions
diff --git a/doc/documentation.tex b/doc/documentation.tex
index 47d7926..7912058 100644
--- a/doc/documentation.tex
+++ b/doc/documentation.tex
@@ -95,17 +95,16 @@ pugh::local_ny = 20
}
-
\section{Periodic Boundary Conditions}
PUGH can implement periodic boundary conditions during the synchronization
-of grid functions. Although this may a first seem a little confusing, and
+of grid functions. Although this may at first seem a little confusing, and
unlike the usual use of boundary conditions which are directly called from
evolution routines, it is the most efficient and natural place for periodic
boundary conditions.
-PUGH applied periodic conditions by simply communicating the appropriate
-ghostzones between "end" processors. For example, for a 1D domain with two
+PUGH applies periodic conditions by simply communicating the appropriate
+ghostzones between ``end'' processors. For example, for a 1D domain with two
ghostzones, split across two processors, Figure~\ref{pugh::fig1} shows the implementation of periodic boundary conditions.
\begin{figure}[ht]
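A minimal parameter-file sketch of the fully periodic case described above, using the driver::periodic switch that appears in the next hunk's context (the driver::ghost_size setting for the two-ghostzone example is an assumption):

    driver::ghost_size = 2
    driver::periodic   = "yes"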
@@ -132,8 +131,8 @@ driver::periodic = "yes"
\end{verbatim}
}
-To apply periodic boundary conditions in just the x- and y- directions or
-a 3 dimension domain, use
+To apply periodic boundary conditions in just the x- and y- directions in
+a 3 dimensional domain, use
{\tt
\begin{verbatim}
@@ -142,10 +141,10 @@ driver::periodic_z = "no"
\end{verbatim}
}
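A sketch of the per-direction switches for the x/y-only case above, assuming driver::periodic_x and driver::periodic_y alongside the driver::periodic_z shown in the hunk header above:

    driver::periodic_x = "yes"
    driver::periodic_y = "yes"
    driver::periodic_z = "no"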
-\section{Processor Decomposition}
-\section{Load Balancing}
+\section{Processor Decomposition}
+%\section{Load Balancing}
By default PUGH will distribute the computational grid evenly across
all processors (as in Figure~\ref{pugh::fig2}a). This may not be
efficient if there is a different computational load on different
@@ -169,9 +168,9 @@ as in Figure~\ref{pugh::fig2}b. (Note that the type of partitioning
shown in Figure~\ref{pugh::fig2}c is not possible with {\tt PUGH}).
The computational grid can be manually distributed using the
-parameters {\tt
-partition[\_1d\_x|\_2d\_x|\_2d\_y|\_3d\_x|\_3d\_y|\_3d\_z]}. To manual
-specify the load distribution, set {\tt pugh::partition = ``manual''}
+parameters\\ {\tt
+partition[\_1d\_x|\_2d\_x|\_2d\_y|\_3d\_x|\_3d\_y|\_3d\_z]}. To manually
+specify the load distribution, set {\tt pugh::partition = "manual"}
and then, depending on the grid dimension, set the remaining
parameters to distribute the load in each direction. Note that for
this you need to know apriori the processor decomposition.
@@ -189,20 +188,22 @@ use the parameters
{\tt
\begin{verbatim}
-pugh::partition=''manual''
-pugh::partition_2d_x=''20:10''
-pugh::partition_2d_y=''15:15''
+pugh::partition="manual"
+pugh::partition_2d_x="20:10"
+pugh::partition_2d_y="15:15"
\end{verbatim}
}
+Note that an empty string for a direction will apply the automatic
+distribution.
+
-Note that an empty string for a direction will apply the automatic distribution.
\section{Understanding PUGH Output}
\label{pugh_understanding}
PUGH reports information about the processor decomposition to standard output
-at the start of a job. This section described how to interpret that output.
+at the start of a job. This section describes how to interpret that output.
\vskip .3cm
@@ -268,8 +269,8 @@ optimisation:
\item[{\tt pugh::enable\_all\_storage}]
Enables storage for all grid variables (that is, not only
- those set in a thorns {\tt schedule.ccl} file). Try this parameter
- if you are getting segmentation faults, if enabling all storage
+ those set in a thorn's {\tt schedule.ccl} file). Try this parameter
+ if you are getting segmentation faults. If enabling all storage
removes the problem, it most likely means that you are accessing
a grid variable (probably in a Fortran thorn) for which storage
has not been set.
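A usage sketch for the storage-debugging parameter just described, assuming it is a boolean switch that is off by default:

    pugh::enable_all_storage = "yes"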
@@ -297,7 +298,7 @@ optimisation:
size of the storage allocated by Cactus. Note that this total
does not include storage allocated independently in thorns.
-\item[{\tt timer\_output}]
+\item[{\tt pugh::timer\_output}]
This parameter can be set to provide the time spent communicating
variables between processors.
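A usage sketch for the communication-timing parameter above, assuming it is a boolean switch that is off by default:

    pugh::timer_output = "yes"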