Old News


March 30, 2009: We have ported Carpet to the BlueGene/P architecture, using the Surveyor system at the ALCF. The graph to the right shows preliminary performance and scaling results, comparing different compilers and options (gcc, IBM's XL compilers without OpenMP, and IBM's XL compilers with OpenMP, which required reducing the optimisation level). For these benchmarks, the problem size was reduced to about one eighth of the standard size, using 13³ grid points per core. The results show that Carpet scales well up to the size of the full machine (4k cores), but further work on compiler options is required.

[Figure: AMR benchmark results]

March 20, 2009: Carpet can now perform performance experiments by artificially increasing the size or the number of MPI messages exchanged between processes. This helps determine whether communication bandwidth or communication latency is the bottleneck of a particular simulation. The figure to the right shows results for the standard McLachlan AMR benchmark run on the Cray XT4 Kraken, using 25³ grid points per core. These results indicate that the additional latency from increasing the number of messages has no significant effect; hence the benchmark is bandwidth limited at this problem size.

[Figure: AMR benchmark results]
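As an illustration of the idea, here is a minimal sketch (hypothetical, not Carpet's actual code) of such an experiment in plain MPI: a fixed payload is exchanged either as one large message or split into many fragments, so the cost of the additional per-message latency can be isolated. If the run time is insensitive to the fragment count, latency is negligible and the exchange is bandwidth limited.

    // Hypothetical sketch: exchange a fixed payload as either one large
    // message or many small fragments, and time the difference.
    #include <mpi.h>
    #include <cstdio>
    #include <vector>

    static double exchange(int rank, int nfrag, std::vector<double>& buf) {
      const int chunk = buf.size() / nfrag;
      MPI_Barrier(MPI_COMM_WORLD);
      const double t0 = MPI_Wtime();
      for (int i = 0; i < nfrag; ++i) {
        double* p = &buf[i * chunk];
        if (rank == 0)
          MPI_Send(p, chunk, MPI_DOUBLE, 1, i, MPI_COMM_WORLD);
        else if (rank == 1)
          MPI_Recv(p, chunk, MPI_DOUBLE, 0, i, MPI_COMM_WORLD,
                   MPI_STATUS_IGNORE);
      }
      MPI_Barrier(MPI_COMM_WORLD);
      return MPI_Wtime() - t0;
    }

    int main(int argc, char** argv) {
      MPI_Init(&argc, &argv);
      int rank;
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      std::vector<double> buf(1 << 20);        // 8 MB payload
      for (int nfrag : {1, 16, 256, 4096}) {   // vary the message count
        const double t = exchange(rank, nfrag, buf);
        if (rank == 0) std::printf("%5d messages: %.6f s\n", nfrag, t);
      }
      MPI_Finalize();
      return 0;
    }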

March 16, 2009: Erik Schnetter and Steve Brandt published a white paper, "Relativistic Astrophysics on the SiCortex Architecture". The paper expands on a webinar by Erik and Steve hosted by SiCortex.

The graph at the right shows Carpet's parallel scalability for the McLachlan code with nine levels of AMR on a set of current HPC systems. The results have been rescaled to each architecture's theoretical single-core peak performance, which makes it possible to compare Carpet's scalability across architectures. (The systems' absolute performance cannot be compared in this figure.)

[Figure: AMR benchmark results]


November 9, 2008: In the context of the XiRel project, we re-designed Carpet's communication layer to eliminate many operations with a cost of O(N), growing linearly with the number of MPI processes. Such costs are generally not acceptable when running on several thousand cores and must be reduced, e.g. to O(log N). Carpet now stores the communication schedule (mostly) in a distributed manner, increasing performance and reducing its memory requirements. These improvements are currently being tested; preliminary scaling results are shown in the figure to the right.

[Figure: AMR benchmark results]
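The core of the change can be sketched as follows (the data structures are hypothetical, not Carpet's actual classes): rather than every process holding the complete schedule for all N processes, each process keeps only the entries in which it participates, so its memory use no longer grows with the total process count.

    // Hypothetical sketch of a distributed communication schedule: each
    // process keeps only the send/receive entries it participates in,
    // instead of the full O(N^2) global schedule.
    #include <vector>

    struct ScheduleEntry {
      int sender, receiver;   // MPI ranks
      int nbytes;             // message size
    };

    // Reduce a global schedule to the local portion.  (In practice the
    // local entries would be constructed directly, without ever
    // materialising the global list on every process.)
    std::vector<ScheduleEntry>
    local_schedule(const std::vector<ScheduleEntry>& global, int myrank) {
      std::vector<ScheduleEntry> local;
      for (const ScheduleEntry& e : global)
        if (e.sender == myrank || e.receiver == myrank)
          local.push_back(e);
      return local;
    }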

June 25, 2008: We are happy to announce the Simulation Factory, a tool that helps access remote HPC systems, manage source trees, and submit and control simulations. The Simulation Factory provides a set of abstractions of the tasks necessary to set up and successfully finish numerical simulations with the Cactus framework. These abstractions hide tedious low-level management tasks, capture "best practices" of experienced users, and create a log trail ensuring repeatable and well-documented scientific results. They thereby prevent many kinds of potentially disastrous user errors and allow different supercomputers to be used in a uniform manner.

March 29, 2008: We have benchmarked McLachlan, a new BSSN-type vacuum Einstein code, using Carpet for unigrid and AMR calculations. We compare several current large machines: Franklin (NERSC), Queen Bee (LONI), and Ranger (TACC).

[Figure: Unigrid benchmark results]

[Figure: AMR benchmark results]

March 1, 2008: Carpet has a logo! This logo is a Sierpiński carpet, which is a fractal pattern with a Hausdorff dimension of 1.89279.

[Figure: Carpet logo (a Sierpiński carpet)]

March 1, 2008: We have improved the development version of Carpet significantly.

More details can be found here. These improvements are largely due to Erik Schnetter (LSU), Thomas Radke (AEI), and Christian D. Ott (UA). Special thanks go to Christian Reisswig and Luca Baiotti.

March 1, 2008: The development version of Carpet is now maintained with git instead of darcs. Git offers a feature set very similar to darcs's, most importantly support for decentralised development. Git also has a much larger user community than darcs, which we hope will make it easier to use. The download instructions contain details on obtaining Carpet with git and point to further information. (The darcs repository for the development version will see no further changes.)

March 1, 2008: The repository for the development version of Carpet moved to a new server today. The stable versions of Carpet continue to be served from the old server for the time being. We plan to move all of carpetcode.org to the new server in the future. The new server is provided courtesy of Christian D. Ott.

January 14, 2008: Carpet's communication infrastructure has been improved significantly, making Carpet scale to at least 4,000 processors, including mesh refinement. Using "friendly user time" on Ranger, the new 60,000-core TeraGrid supercomputer at TACC, we measured the benchmark results below for a numerical relativity kernel solving the BSSN equations. These benchmarks employ a hybrid communication scheme combining MPI and OpenMP, using the shared memory capabilities of Ranger's nodes to reduce the memory overhead of parallelisation. We are grateful for the help we received from Ranger's support team.

The graph below shows weak scaling tests for both unigrid and mesh refinement benchmarks. The problem size per core was kept fixed, with 4 OpenMP threads per MPI process and 1 MPI process per socket. For comparison, the benchmark was also run with the PUGH driver at certain core counts. As the graph shows, this benchmark scales nearly perfectly for unigrid and shows only small variations in run time with nine levels of mesh refinement.

[Figure: Scaling graph for Ranger]
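The hybrid scheme can be illustrated with a minimal sketch (hypothetical, not the benchmark code itself): MPI is initialised for funneled threading, each process spans one socket, and the OpenMP threads within a process share its work and its memory.

    // Hypothetical sketch of a hybrid MPI+OpenMP set-up: one MPI process
    // per socket, several OpenMP threads per process.  Threads share the
    // process's memory, which reduces the memory overhead of
    // parallelisation compared with one MPI process per core.
    #include <mpi.h>
    #include <omp.h>
    #include <cstdio>

    int main(int argc, char** argv) {
      // MPI_THREAD_FUNNELED: only the main thread makes MPI calls, which
      // suffices when communication happens between parallel compute
      // phases.
      int provided;
      MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

      int rank, nprocs;
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

      #pragma omp parallel
      #pragma omp master
      std::printf("process %d of %d runs %d threads\n",
                  rank, nprocs, omp_get_num_threads());

      // Compute phases use '#pragma omp parallel for' over the local
      // grid; ghost-zone exchanges are done by the main thread via MPI.

      MPI_Finalize();
      return 0;
    }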


October 3, 2007: Carpet's timing infrastructure has been extended to automatically measure both the time spent computing and the time spent in I/O. The performance of large simulations depends not only on computational efficiency and communication latency, but also on the throughput to the file servers. These new statistics give a real-time overview and can point out performance problems. They are collected in the existing Carpet::timing variables.
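The mechanism can be pictured with a small sketch (the class below is hypothetical, not Carpet's actual timing API): wall-clock time is accumulated into named bins, e.g. one for computation and one for I/O, and the totals are reported periodically.

    // Hypothetical sketch of per-category timing: accumulate wall-clock
    // time into named bins such as "compute" and "io", and report totals.
    #include <mpi.h>
    #include <cstdio>
    #include <map>
    #include <string>

    class TimerSet {
      std::map<std::string, double> total_;
      std::string current_;
      double start_ = 0.0;
    public:
      void start(const std::string& bin) {   // switch to a new bin
        stop();
        current_ = bin;
        start_ = MPI_Wtime();
      }
      void stop() {                          // close the current bin
        if (!current_.empty()) total_[current_] += MPI_Wtime() - start_;
        current_.clear();
      }
      void report() const {
        for (const auto& kv : total_)
          std::printf("%-10s %10.3f s\n", kv.first.c_str(), kv.second);
      }
    };

    // Usage inside the evolution loop:
    //   timers.start("compute");  evolve_one_step();
    //   timers.start("io");       write_output();
    //   timers.stop();            timers.report();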

August 30, 2007: So far this year, ten publications from three research groups examining the dynamics of binary black hole systems have been based on simulations performed with Cactus and Carpet:
          Astrophys. J. 661, 430-436 (2007) (arXiv:gr-qc/0701143)
          Phys. Rev. Lett. 99, 041102 (2007) (arXiv:gr-qc/0701163)
          Astrophys. J. 659, L5-L8 (2007) (arXiv:gr-qc/0701164)
          Phys. Rev. Lett. 98, 231102 (2007) (arXiv:gr-qc/0702133)
          Class. Quantum Grav. 24, 3911-3918 (2007) (arXiv:gr-qc/0701038)
          arXiv:0705.3829 [gr-qc]
          arXiv:0706.2541 [gr-qc]
          arXiv:0707.2559 [gr-qc]
          arXiv:0708.3999 [gr-qc]
          arXiv:0708.4048 [gr-qc]
These publications mainly examine the spin dynamics and the gravitational wave recoil in BBH systems. Since not all research groups use Cactus and Carpet, this represents only part of the published work on this subject.

August 26, 2007: In experiments with hybrid communication schemes combining MPI and OpenMP, we found a 20% speed improvement on a single node of Abe at NCSA, and a substantial scaling improvement on 1024 and more CPUs. (Abe has 8 CPUs per node.) These experiments included cache optimisations for traversing the 3D arrays. The tests were performed with a modified version of the Cactus WaveToy example application, without I/O or analysis methods.

[Figure: Scaling graph for Abe]
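The cache optimisation amounts to traversing the 3D arrays in memory order. A minimal sketch (hypothetical, not the modified WaveToy code) of such a traversal with OpenMP:

    // Hypothetical sketch of a cache-friendly traversal of a contiguous
    // 3D array: the innermost loop runs over the unit-stride index i, so
    // successive iterations touch adjacent memory; OpenMP threads split
    // the slowest-varying index k.
    #include <vector>

    void smooth(const std::vector<double>& u, std::vector<double>& v,
                int n) {
      #pragma omp parallel for
      for (int k = 1; k < n - 1; ++k)        // slowest-varying index
        for (int j = 1; j < n - 1; ++j)
          for (int i = 1; i < n - 1; ++i) {  // unit stride: cache friendly
            const int idx = (k * n + j) * n + i;
            v[idx] = (u[idx - 1] + u[idx + 1] + u[idx - n] + u[idx + n] +
                      u[idx - n * n] + u[idx + n * n]) / 6.0;
          }
    }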

August 15, 2007: We are happy to hear that our proposal ALPACA: Cactus tools for Application Level Profiling And Correctness Analysis will be funded by NSF's SDCI programme for three years. The ALPACA project aims to develop tools for complex, collaborative scientific applications appropriate for highly scalable hardware architectures: tools providing fault tolerance, advanced debugging, and transparency against new developments in communication, programming, and execution models. Such tools are especially rare at the application level, where they are most critically needed.

July 31, 2007: We are happy to hear that our proposal XiRel: Cyberinfrastructure for Numerical Relativity will be funded by NSF's PIF programme for three years. XiRel is a collaborative proposal by LSU, PSU, and UTB (now RIT). The central goal of XiRel is the development of a highly scalable, efficient, and accurate adaptive mesh refinement layer based on the current Carpet driver, fully integrated and supported in Cactus and optimised for numerical relativity.

February 26, 2007: The thorn LSUPETSc implements a generic elliptic solver for Carpet's multi-patch infrastructure, based on PETSc. It assumes touching (not overlapping) patches, and uses inter-patch interface conditions very similar to those developed by Harald Pfeiffer. LSUPETSc can solve "arbitrary" systems of coupled, non-linear elliptic equations. It does not support mesh refinement.

January 12, 2007: In order to restructure some of Carpet's internals without disturbing ongoing production simulations, we have created an experimental version. Its main goals are to improve performance on many (>100) processors and to re-arrange some internal details to simplify future development. Few new features are planned, but some of the changes may be incompatible.


December 15, 2006: The AEI hosted a small workshop to improve the performance of the AEI/LSU CCATIE code for binary black hole simulations, which uses Carpet as its AMR driver. We examined in particular the effect of various grid structures on accuracy and speed, and sped up the wave extraction routine. Overall, we improved the performance of the code by a factor of six for a benchmark problem simulating a QC-0 configuration.

September 26, 2006: We are preparing a new release of Carpet, version 3. Among other things, this version makes it easier to use dynamic grid structures, shows better scaling behaviour than version 2, and has better support for multiple patches. A detailed list of changes is here. The downloading instructions for Carpet explain how to access this version.

February 26, 2006: We have started to collect a list of publications and theses that use Carpet. Please tell us if you have written a publication or a thesis using Carpet.

February 25, 2006: Christian Ott has contributed code to Carpet that makes the refined regions track apparent horizon centroids, merging and un-merging refined regions as necessary. (Movie: animated GIF, 730 kB.) After Burkhard Zink's mechanism, which tracks the density maximum in a star, this is the second production-level adaptive mesh refinement criterion implemented in Carpet.
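In outline, such a tracking criterion might look as follows (a hypothetical sketch; the interfaces are not Carpet's): each tracked centroid gets a fixed-size refined box centred on it, and boxes that come to overlap are merged into one.

    // Hypothetical sketch of a tracking refinement criterion: centre a
    // box of half-width 'radius' on each tracked centroid (e.g. an
    // apparent horizon), and merge boxes whose extents overlap.
    #include <algorithm>
    #include <array>
    #include <vector>

    struct Box { double lo[3], hi[3]; };

    static bool overlap(const Box& a, const Box& b) {
      for (int d = 0; d < 3; ++d)
        if (a.hi[d] < b.lo[d] || b.hi[d] < a.lo[d]) return false;
      return true;
    }

    static Box merge(const Box& a, const Box& b) {
      Box m;
      for (int d = 0; d < 3; ++d) {
        m.lo[d] = std::min(a.lo[d], b.lo[d]);
        m.hi[d] = std::max(a.hi[d], b.hi[d]);
      }
      return m;
    }

    // Boxes merge when centroids approach, and "un-merge" automatically
    // on a later regrid once the centroids move apart again.
    std::vector<Box>
    regrid(const std::vector<std::array<double, 3> >& centroids,
           double radius) {
      std::vector<Box> boxes;
      for (const auto& c : centroids) {
        Box b;
        for (int d = 0; d < 3; ++d) {
          b.lo[d] = c[d] - radius;
          b.hi[d] = c[d] + radius;
        }
        bool merged = false;
        for (Box& other : boxes)
          if (overlap(other, b)) {
            other = merge(other, b);
            merged = true;
            break;
          }
        if (!merged) boxes.push_back(b);
      }
      return boxes;
    }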

February 25, 2006: The official Cactus benchmarks now include benchmarks with Carpet. You can assess Carpet's scaling and compare its performance on different machines by generating graphs from the benchmark result database on these pages.


July 15, 2005: We now have a page that links to all past monthly status reports.

June 6, 2005: We have updated the downloading instructions for Carpet.

June 6, 2005: Version 1.0.3 of the pre-compiled darcs binary is now available.

April 13, 2005: Thomas Radke has implemented a new communication scheme in Carpet. Instead of sending many small messages in an interleaved manner, Carpet now collects all messages into an internal buffer and sends only one big message with MPI. This circumvents certain problems with internal limitations of MPICH, and it also greatly improves performance.
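In outline (a hypothetical sketch, not the actual implementation), the scheme packs every piece destined for a given process into one contiguous buffer and posts a single send:

    // Hypothetical sketch of message aggregation: instead of one MPI
    // message per ghost-zone piece, pack all pieces for a given peer into
    // one buffer and post a single send.  The receiver derives the same
    // piece sizes from the communication schedule and unpacks accordingly.
    #include <mpi.h>
    #include <vector>

    void send_aggregated(const std::vector<std::vector<double> >& pieces,
                         int dest, MPI_Comm comm) {
      std::vector<double> buffer;                 // one contiguous buffer
      for (const auto& p : pieces)
        buffer.insert(buffer.end(), p.begin(), p.end());
      MPI_Send(buffer.data(), (int)buffer.size(), MPI_DOUBLE,
               dest, /* tag */ 0, comm);
    }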

March 9, 2005: We have started to move towards a new stable version of Carpet.


December 7, 2004: Jonathan Thornburg is organising a Carpet Design Walkthrough, which will take place December 13 to 15 at the AEI and will be broadcast via AccessGrid and/or telephone.

September 18, 2004: There is now a new repository for the development version of Carpet. This repository is managed by darcs instead of CVS. Darcs has a number of advantages, such as being usable offline and allowing you to keep some changes to yourself while developing. This development version is publicly available, and we encourage you to contribute. Note that the stable version of Carpet is still distributed via CVS.

August 24, 2004: The version of Carpet in the CVS repository is now stable. This means that it will see no substantial further development. One of its main goals is to not change, so that parameter files continue to work unchanged with identical results. We will continue to correct errors that we find in this version of Carpet; however, if a correction would require major changes and a work-around exists, we may forgo it in the interest of stability.

July 31, 2004: Carpet seems to have reached a point where it is stable enough to be useful for at least some projects. Consequently, people have expressed the wish for a version of Carpet that is stable and sees no disruptive development. The idea is to have two "branches" of Carpet: a stable version for production use, and a development version which might not be as stable. We plan to make the split in about three weeks. The discussion is being held on the mailing list; your input is welcome.

April 7, 2004: Up to now, all Carpet thorns have lived in a single arrangement for Cactus. This caused problems because stable thorns, development thorns, and outdated thorns sat next to each other, confusing newcomers. We have moved the Carpet arrangement to a new repository and split it into four. Access to the old Carpet arrangement has been disabled.

March 3, 2004: We have recently had trouble with I/O through the FlexIO library. We suspect that it has a bug that causes HDF5 output to fail under certain, random conditions. We have written a new thorn, CarpetIOHDF5, which uses the HDF5 library directly while remaining compatible with the FlexIO file format. Please test this thorn and report any problems or incompatibilities you find.
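Using HDF5 directly means calling its C API instead of going through FlexIO. A minimal sketch of writing one 3D grid array (hypothetical; this is not the actual CarpetIOHDF5 code):

    // Hypothetical sketch of writing a 3D array with the HDF5 C API.
    #include <hdf5.h>

    void write_grid_array(const char* filename, const char* varname,
                          const double* data,
                          hsize_t nx, hsize_t ny, hsize_t nz) {
      const hsize_t dims[3] = {nz, ny, nx};   // slowest-varying index first
      hid_t file  = H5Fcreate(filename, H5F_ACC_TRUNC,
                              H5P_DEFAULT, H5P_DEFAULT);
      hid_t space = H5Screate_simple(3, dims, NULL);
      hid_t dset  = H5Dcreate(file, varname, H5T_NATIVE_DOUBLE, space,
                              H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);
      H5Dwrite(dset, H5T_NATIVE_DOUBLE, H5S_ALL, H5S_ALL,
               H5P_DEFAULT, data);
      H5Dclose(dset);
      H5Sclose(space);
      H5Fclose(file);
    }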

In January 2004, Daniel Kobras set up Bugzilla for Carpet. Bugzilla is a bug-tracking system that will, so we hope, help us remember what is missing or broken in Carpet.


In October 2003, Erik Schnetter, Scott H. Hawley, and Ian Hawke published the preprint "Evolutions in 3D numerical relativity using fixed mesh refinement" as gr-qc/0310042. Its main point is to present tests of Carpet with the BSSN code (AEI's spacetime evolution code), and to show that mesh refinement does not introduce instabilities.

In August 2003, these web pages were created.

May 2003 has informally been termed "Carpet month". In a flurry of activity, bugs were fixed and some features added. The BSSN code of the numerical relativity group at the AEI now works together with Carpet.



Erik Schnetter

Last modified: Feb 15 2011