Carpet Developer Workshop Announcement

Dear Colleagues,

We would like to invite you to attend a Carpet developer workshop, to be hosted by the Center for Computational Relativity and Gravitation at the Rochester Institute of Technology on August 23rd, 24th, 25th, and 26th, 2010. This workshop is aimed at experienced Carpet users who want to gain further expertise in the internal workings of the Carpet driver and thereby become able to contribute to the infrastructure development. The workshop will be led by Erik Schnetter, the main architect of the Carpet AMR infrastructure, and will also include the participation of an HPC specialist. One important goal of this workshop is to discuss and implement ways to improve the performance of Cactus-based applications on current and future high-end petascale architectures. Due to space limitations, we encourage you to contact the organizers as soon as possible to register.

Sincerely, the organizers:

  • Manuela Campanelli (manuela@astro.rit.edu)
  • Bruno C. Mundim (bcmsma@astro.rit.edu)
  • Erik Schnetter (schnetter@cct.lsu.edu)

Program

The first two days of the workshop will be dedicated to the theory of adaptive mesh refinement (AMR) and related algorithms. The last two days are reserved for hands-on work on the development of the Carpet infrastructure. The details of the program will be agreed upon during the first day of the workshop.

We will break for lunch around 12 PM EDT.

Cookies and coffee will be served at 9:30 AM each day, and the workshop will most likely start at around 10 AM.


First Day:

  • Workshop commences at 10 AM.
  • Talk: Erik Schnetter, "Introduction to Carpet/AMR" (1:30 PM)
  • Talk: Justin Luitjens, "Introduction to Uintah/AMR" (2:30 PM; Uintah_Framework.pdf)
  • Talk: Erik Schnetter, "Carpet Details" (3:30 PM)


Second Day:

Third Day:
Hands-on Carpet work:

  • 10 AM: Round table where each participant reported what they had been working on so far.


Fourth Day:
Hands-on Carpet work.

Discussions

Please vote on the items you want and are willing to spend time helping to implement, by adding your name after each such item.

Wish list:

  • Carpet Mercurial
    • Buffer zone mask (ICH, CDO, FL, BCM) (done).
    • Shadow hierarchy.
    • Support for a high number of refinement levels. (FL) (problem identified but fix postponed)
    • Simple AMR scheme that uses CarpetRegrid2 and just enlarges/shrinks refined regions based on some criterion.
    • Cell-centered algorithm. (SCN)
      • Refluxing. (SCN)
    • Better diagnostics:
      • System and library memory use: Ian has a freely available thorn SystemStatistics in AEIThorns for monitoring memory usage (ICH).
      • More user friendly error messages (RH).
      • Mask for interpolation errors.
      • Mask telling which is the finest level that contains a given coordinate (there is a new function in the Mercurial version: locate_position).
      • Better error reporting: output stderr from all processors to the same file, prefixed with the MPI rank of the process (a small sketch of rank-prefixed output follows this wish list). Make sure there is no stderr output unless there is a really serious problem that needs to be addressed.
      • Output a backtrace on crash.
  • Scalability
    • More efficient regridding (recomposing?) (BCM).
    • Load balance (ICH, PD).
    • Performance model for communication in Carpet (ICH, PD).
  • Efficient I/O
    • Write an API for F5 file format (ES, CR).
    • Faster recovery; Processor decomposition on recovery (ICH).
    • Use a single file for all variables.
    • Add an option to keep the HDF5 file open longer (ICH).
    • Omit buffer zones when outputting (optionally) (ICH, RH).
    • Better documentation - both for users and for developers of Carpet (ICH)
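
Regarding the rank-prefixed error output item above, a minimal sketch of what such a helper could look like; the function name rank_warn is hypothetical, not an existing Cactus or Carpet API:

    #include <cstdarg>
    #include <cstdio>
    #include <mpi.h>

    // Hypothetical helper: write a warning to stderr with the MPI rank
    // prepended, so that lines from different processes stay attributable
    // even when all stderr streams are redirected into a single file.
    void rank_warn(const char* fmt, ...) {
      int initialized = 0, rank = 0;
      MPI_Initialized(&initialized);
      if (initialized) MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      fprintf(stderr, "[rank %d] ", rank);
      va_list ap;
      va_start(ap, fmt);
      vfprintf(stderr, fmt, ap);
      va_end(ap);
      fputc('\n', stderr);
    }

    int main(int argc, char** argv) {
      MPI_Init(&argc, &argv);
      rank_warn("example: %s", "something needs attention");
      MPI_Finalize();
      return 0;
    }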

Ideas

  • Replace malloc/new in order to poison freshly allocated memory (see the sketch below)
  • See if there is a compiler option to poison the stack
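
As a rough illustration of the first idea, here is a minimal sketch (my own, not Carpet code) that intercepts the global C++ allocation operators; intercepting malloc itself would need linker-level tricks such as LD_PRELOAD:

    #include <cstdlib>
    #include <cstring>
    #include <new>

    // Sketch only: poison freshly allocated and freed heap memory so that
    // reads of uninitialised or dangling data show a recognisable pattern
    // instead of harmless zeros.
    namespace {
      const unsigned char poison_fresh = 0xAB; // pattern for new memory
      const unsigned char poison_freed = 0xEF; // pattern for freed memory
      struct header_t { std::size_t size; };   // remembers the payload size
    }

    void* operator new(std::size_t size) throw (std::bad_alloc) {
      // Allocate room for a small header in front of the payload.
      header_t* h = static_cast<header_t*>(std::malloc(sizeof(header_t) + size));
      if (!h) throw std::bad_alloc();
      h->size = size;
      void* payload = h + 1;
      std::memset(payload, poison_fresh, size); // poison fresh memory
      return payload;
    }

    void operator delete(void* payload) throw () {
      if (!payload) return;
      header_t* h = static_cast<header_t*>(payload) - 1;
      std::memset(payload, poison_freed, h->size); // poison before freeing
      std::free(h);
    }

The array forms (operator new[]/delete[]) would need the same treatment; over-aligned types are ignored here for simplicity.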

Carpet Overview

MONDAY

Erik presented the main Carpet data classes:

  • defs.hh declares templates for (small) vector, bounding box, set of bounding boxes and tree classes (a toy bounding-box sketch follows this list).
  • region.hh declares grid regions and operators over them. What Carpet calls a region was previously called component. It is also known in the literature as patch.
  • gh.hh declares the refinement hierarchy (no ghost zones included). It describes the owned region, so buffer and outer boundary regions are included.
  • dh.hh declares the data hierarchy (grid hierarchy plus ghost zones) class. It extends gh class to take care of communication.
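
To give a feel for these classes, here is a toy sketch of an integer bounding box with a stride; names and members are illustrative assumptions, not Carpet's actual declarations:

    #include <algorithm>

    // Toy illustration only: Carpet's real classes in CarpetLib are
    // templated over dimension and index type and offer far more.
    struct ibbox_sketch {
      int lo[3], up[3], str[3]; // inclusive bounds and stride

      bool empty() const {
        for (int d = 0; d < 3; ++d)
          if (lo[d] > up[d]) return true;
        return false;
      }

      // Intersection with a box on the same refinement level.
      ibbox_sketch intersect(const ibbox_sketch& o) const {
        ibbox_sketch r = *this;
        for (int d = 0; d < 3; ++d) {
          r.lo[d] = std::max(lo[d], o.lo[d]);
          r.up[d] = std::min(up[d], o.up[d]);
        }
        return r;
      }

      // Number of grid points contained in the box.
      long long npoints() const {
        if (empty()) return 0;
        long long n = 1;
        for (int d = 0; d < 3; ++d)
          n *= (up[d] - lo[d]) / str[d] + 1;
        return n;
      }
    };

Roughly speaking, sets of such boxes (bboxsets) describe where grid points exist on a level, and regions layer ownership information on top.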
TUESDAY
  • tt is a "time hierarchy" which keeps track of which level has which time
  • Ordering of scheduled functions is respected on a given level but, because of the ordering of levels in Berger-Oliger time stepping, not between levels
  • Carpet_Regrid(Maps) is the function that you provide to determine the new grid structure. It is called automatically by Carpet in the CCTK_REGRID bin. The existing grid structure is passed in, and you modify it to change the grid structure. You also need to do the processor decomposition in this function.
  • It would be nice to have a function that took a bboxset on each processor and returned the (normalised) bboxsets of all processors, though this would have scaling problems
  • In order to determine mesh refinement boxes locally, you currently need something like the above. What we would like is for each process to know only about its own local bboxes. You could then locally construct the refined regions you need and hand them to Carpet, and Carpet would do whatever is necessary (see the MPI_Allgatherv sketch at the end of this list).
  • Modes in Carpet:
    • Modes are a hack introduced because Cactus scheduler doesn't know about mesh refinement
    • They are a way of telling Carpet how you want a routine to be called (on all levels? just once? etc)
    • Meta mode: Called once per process; this is not used currently so is equivalent to Global mode for our purposes.
    • Global mode: Called once per "resolution". The "time" is the time of the finest existing grid.
    • Level mode: Called once per level
    • Singlemap mode: Called once per patch
    • Local mode: Called once per component. Only here do you have access to the grid function data.
    • In any mode, you can switch modes, both up and down. There are three sets of macros for modes: ENTER_X_MODE, BEGIN_Y_LOOP, and a variant that allows you to select just a single Z (component, level, etc.). You also need to call the corresponding looping macros. Moving up needs thought, but is perfectly fine and can be useful for some things.
  • You can tell Carpet the minimum number of points per process; empty components will be created
  • Recompose.cc does the load balancing - there is a function called cost which computes the cost of a region with a particular shape. At the moment the estimated cost is based only on the total number of active grid points.
  • Carpet performs 3 general operations:
    • Initialization
    • Evolve
    • Shutdown
  • For now assume "mg" = 1 (number of multigrid levels)
  • Evolve operation:
    • 1) a user routine that defines the regridding algorithm (Carpet_Regrid, Carpet_RegridMaps);
    • 2) Regrid (in Carpet) (handles interface between that user routine and Carpet)
    • 3) Recompose (handles grid function initialization on grid structure and load balancing, Recompose.cc)
  • Definitions:
    • component = 3D volume of points that live on a processor
    • patch/map = collection of components that follow "arbitrary" (e.g. cubed sphere) coordinate systems
    • level = set of all components/patches at a given spatial discretization
    • simulation/grid-hierarchy (global view) = set of levels
  • Vertex-centered vs. Cell-centered AMR:
    • There are routines for restricting and prolonging face-centered, vertex-centered and cell-centered functions, but no general interface
    • It is up to the user (right now) to interpret where in the cell a grid function is located, so the user must choose how each function is restricted/prolonged and how the evolution scheme interprets a particular element's meaning
  • MASK:
    • In order to create a mask that flags those cells which need to be "fixed" after a failed update, we need to locate the coarse points that lie sufficiently far inside refined regions, and hence outside the sphere of influence of the rest of the coarse domain; these coarse points form the "shrunk" domain and can safely be ignored when an error occurs while they are updated.
    • The final mask will include all the problematic points minus those "causally disconnected" points.
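
The "gather everyone's boxes" helper discussed above could, in principle, be built on MPI_Allgatherv. Below is a minimal sketch under assumed types (a hypothetical flat box_t struct); Carpet's actual bboxset class and the normalisation step are ignored:

    #include <mpi.h>
    #include <vector>

    // Hypothetical flat representation of one box: lower and upper corner
    // in global integer coordinates (6 ints, no padding).
    struct box_t { int lo[3], up[3]; };

    // Gather every process's local boxes so that afterwards all processes
    // hold the complete global list.  This is the easy-but-unscalable
    // building block discussed above.
    std::vector<box_t> allgather_boxes(const std::vector<box_t>& local,
                                       MPI_Comm comm)
    {
      int nprocs;
      MPI_Comm_size(comm, &nprocs);

      // 1. Everyone announces how many ints it will contribute.
      int mycount = 6 * static_cast<int>(local.size());
      std::vector<int> counts(nprocs);
      MPI_Allgather(&mycount, 1, MPI_INT, &counts[0], 1, MPI_INT, comm);

      // 2. Turn the counts into displacements into the receive buffer.
      std::vector<int> displs(nprocs);
      int total = 0;
      for (int p = 0; p < nprocs; ++p) {
        displs[p] = total;
        total += counts[p];
      }

      // 3. Exchange the boxes themselves.
      std::vector<box_t> global(total / 6);
      MPI_Allgatherv(local.empty() ? NULL : (void*)&local[0], mycount, MPI_INT,
                     global.empty() ? NULL : &global[0], &counts[0], &displs[0],
                     MPI_INT, comm);
      return global;
    }

As noted above, this all-to-all exchange is exactly the part with scaling problems, so a production design would avoid making every process know every box.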
WEDNESDAY
  • Participants logged their work in the "Work" section below. Everyone is working individually on their own tasks today.
  • Erik provided a brief description of how to use Mercurial (hg). Please see Mercurial for notes on "hg".
THURSDAY

A few words about the Carpet grid structure statistics. The following is the output of a short pedagogical run on a laptop using two MPI processes:

INFO (Carpet): Grid structure statistics:
INFO (Carpet): GF: rhs: 25k active, 33k owned (+30%), 74k total (+124%), 6 steps/time
INFO (Carpet): GF: vars: 69, pts: 11M active, 14M owned (+30%), 31M total (+124%), 1.0 comp/proc
INFO (Carpet): GA: vars: 271, pts: 0M active, 0M total (+0%)

Let's go through the output:

  • First note that GF stands for grid functions and GA for grid arrays.
  • The first output line shows the number of grid points used by the active (in the sense of being used) grid functions entering the right-hand side of the MoL thorn. The example above shows that 25,000 (25k) active grid points are used for the whole grid hierarchy. 33k grid points are owned (active + buffer) by the set of all MPI processes (i.e. by the components/regions assigned to them), an overhead of 30% with respect to the number of active grid points. Adding the ghost regions to the active and buffer regions gives a total of 74k grid points, an overhead of 124% with respect to the owned region. The last piece of information says that the time integrator (in this case a fourth-order Runge-Kutta integrator) performed 6 iteration steps per time step per delta_x, i.e. the Courant factor is preserved.
  • The second and third lines of the output refer to the storage used by the grid functions and grid arrays, respectively. vars is the total number of active grid functions times their number of time levels, and pts stands for grid points. In the second line, the active grid functions use approximately 11 million (11M) grid points; the owned ones use 14M, again an overhead of 30% with respect to the active ones; the total is 31M, an overhead of 124%, i.e. including the ghost zones costs 124% more storage than the owned region alone, which is what is used to evolve and prolongate. Finally, the line says that this run had only one component (region) per process on average.
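
For concreteness, the quoted overhead percentages follow directly from the point counts; as a quick check:

    \frac{\mathrm{owned}}{\mathrm{active}} = \frac{33\,\mathrm{k}}{25\,\mathrm{k}} \approx 1.3 \;\Rightarrow\; +30\%,
    \qquad
    \frac{\mathrm{total}}{\mathrm{owned}} = \frac{74\,\mathrm{k}}{33\,\mathrm{k}} \approx 2.24 \;\Rightarrow\; +124\%

The counts in the second line (11M, 14M, 31M) give approximately the same ratios.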

It may also be useful to comment on the Carpet group and variable statistics output:

INFO (Carpet): Group and variable statistics:
INFO (Carpet):    There are 358 grid functions in 51 groups
INFO (Carpet):    There are 233 grid scalars in 52 groups
INFO (Carpet):    There are 40 1-dimensional grid arrays in 9 groups
INFO (Carpet):    There are 1 2-dimensional grid arrays in 2 groups
INFO (Carpet):    There are 3 3-dimensional grid arrays in 1 groups
INFO (Carpet):    (The number of variables counts all time levels)
  • The 358 grid functions in 51 groups reported above correspond to all scheduled grid functions, including all their time levels.
  • Note that this differs from the value 69 in the grid structure output above, where only the active grid functions are taken into account.

Work

Please add a paragraph here to describe what you have been working on and what you plan to work on in the coming days.

Bruno C. Mundim (BCM)
  • I am looking at the data structure behind the bboxset class. At the moment it is just a list of bounding boxes, distributed over all processors. This data structure may be responsible for the regridding bottleneck identified in the strong scaling tests. Perhaps the best approach would be to change it to a binary tree data structure (a sketch of the idea follows this list). It would also be interesting for each process to receive (or generate) only a local tree, saving the overhead of sending this data structure across thousands of processes.
  • Discussions on how to implement the buffer mask. The main complications come from the symmetry boundaries.
  • I would also like to help with documenting the discussions: summaries stating each problem, how it is currently handled in Carpet, figures to help readers understand the current implementation, and any new ideas that came out of the discussion.
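
To make the binary-tree idea concrete, here is a rough sketch (hypothetical types, my own code, not a proposal for Carpet's actual interface). Space is split recursively along one axis; boxes straddling a cut stay at that node, and a point query descends a single path instead of scanning the entire list:

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    struct box_t { int lo[3], up[3]; };

    static bool box_contains(const box_t& b, const int p[3]) {
      for (int d = 0; d < 3; ++d)
        if (p[d] < b.lo[d] || p[d] > b.up[d]) return false;
      return true;
    }

    struct bbox_tree {
      int axis, cut;
      std::vector<box_t> straddling; // boxes crossing the cut plane
      bbox_tree *below, *above;      // children; NULL when absent

      bbox_tree(std::vector<box_t> boxes, int depth = 0)
        : axis(depth % 3), cut(0), below(NULL), above(NULL)
      {
        if (boxes.empty()) return;
        // Split at the median of the lower bounds along this axis.
        std::vector<int> los(boxes.size());
        for (std::size_t i = 0; i < boxes.size(); ++i) los[i] = boxes[i].lo[axis];
        std::nth_element(los.begin(), los.begin() + los.size() / 2, los.end());
        cut = los[los.size() / 2];

        std::vector<box_t> lo_part, up_part;
        for (std::size_t i = 0; i < boxes.size(); ++i) {
          if      (boxes[i].up[axis] < cut) lo_part.push_back(boxes[i]);
          else if (boxes[i].lo[axis] > cut) up_part.push_back(boxes[i]);
          else straddling.push_back(boxes[i]);
        }
        // Recurse only while splitting still pays off.
        if (depth < 20 && lo_part.size() > 4) below = new bbox_tree(lo_part, depth + 1);
        else straddling.insert(straddling.end(), lo_part.begin(), lo_part.end());
        if (depth < 20 && up_part.size() > 4) above = new bbox_tree(up_part, depth + 1);
        else straddling.insert(straddling.end(), up_part.begin(), up_part.end());
      }

      // Does any box in the tree contain point p?
      bool contains(const int p[3]) const {
        for (std::size_t i = 0; i < straddling.size(); ++i)
          if (box_contains(straddling[i], p)) return true;
        if (p[axis] < cut) return below && below->contains(p);
        if (p[axis] > cut) return above && above->contains(p);
        return false; // p lies exactly on the cut: only straddlers qualify
      }
    };

A destructor freeing the children is omitted for brevity; the set operations Carpet actually needs (union, intersection, normalisation) are the harder part of the design.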
Ian Hinder (ICH)
  • Have generated doxygen documentation for Carpet. This describes all the Carpet files, functions and classes and their relationships based on the C++ source. It uses comments in a particular format in the code. There is a search box and a tree view for the classes, as well as links to HTML versions of the source files. See current status at http://www.aei.mpg.de/~ianhin/carpet-doc/html/index.html. To-do: Decide if this should go on the Carpet web page and be automatically generated each night.
  • Designed and implemented a mechanism for arranging the timer information in a tree. Currently the timer information is stored in a flat list of timers, so it is very difficult to see the hierarchical relationship between the timers. The tree is written out in XML format so that existing tools can present it in tree form easily. Justin has coded up the timer tree class and sent it to us, saying we are free to use it. Working prototype implementation and some Mathematica examples of parsing it into an expandable/collapsible tree view and also a nested pie chart. To-do: Tidy up the code and think about whether we need to use separate timers, clarify the licensing of the contributed code, check performance impact, commit. See Media:TimerPlots.pdf for a rough example of the timers. Output file is Media:timers.0.xml and notebook is Media:timers.nb
  • Have checked out latest Carpet and built it. Tried to run BBH test-case but it was throwing a std::out_of_range exception. Core dump did not have a valid address. This used to work in March so something must have changed somewhere. Am working on methods for debugging Carpet. Have added a parameter to wait on startup for the existence of a particular file. This allows you time to attach a debugger process before the run dies. Unfortunately, the attached gdb was just as useless as the core dump for getting a backtrace. Have got Carpet to print a backtrace when it receives a signal. Updated Carpet after Erik committed a lot of stuff today and now it runs OK. I now get a warning "CarpetInterp2: The Carpet grid structure was changed since this fasterp_setup was created" but this might be harmless.
  • Have written some quick-and-dirty proof-of-concept code for Carpet to print a backtrace when it receives a signal (e.g. after an assertion failure). This means you can get more information about a problem without needing a core dump. Justin gave us some of his code for doing this, including C++ name demangling. To-do: Tidy up the code, sort out the licensing questions, and commit it to Carpet. (A minimal standalone sketch of the mechanism follows the to-do items below.)
  • To-do: Output information about how much time is spent on each refinement level. Maybe how much time is spent in each mode as well.
  • To-do: Make it easy to look at actual load imbalance in RHS functions
  • To-do: Try to solve the "processor decomposition changing on recovery" bug.
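
For reference, a minimal standalone version of the backtrace-on-signal idea (my own sketch using the glibc execinfo API, not the actual code Justin contributed):

    #include <csignal>
    #include <cstdio>
    #include <cstdlib>
    #include <execinfo.h>
    #include <unistd.h>

    // Print a raw backtrace of the current thread to stderr.
    // backtrace_symbols_fd is used instead of backtrace_symbols because it
    // does not call malloc, which is unsafe inside a signal handler.
    static void print_backtrace(void) {
      void* frames[64];
      int n = backtrace(frames, 64);
      backtrace_symbols_fd(frames, n, STDERR_FILENO);
    }

    static void crash_handler(int sig) {
      fprintf(stderr, "Caught signal %d; backtrace follows:\n", sig);
      print_backtrace();
      _exit(1); // skip atexit handlers; the process is in an undefined state
    }

    int main(void) {
      signal(SIGSEGV, crash_handler);
      signal(SIGABRT, crash_handler); // also catches failed assert()s
      abort();                        // demonstration: triggers the handler
      return 0;
    }

Compile with -g -rdynamic so that function names appear in the trace; C++ names come out mangled unless run through a demangler such as abi::__cxa_demangle, which is what the contributed demangling code is for.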
Scott Noble (SCN)
  • Contributing to notes
  • Will help test cell-centered AMR and refluxing
Frank Löffler (FL)
  • Made runs with multiple threads work again by reallocating more memory if needed.
  • Implemented an 'unused points mask'. Setting
    Carpet::use_unusedpoints_mask = "yes"
    causes
    Carpet::carpet_unusedpoints_mask
    to be set to 1 at points which will later be overwritten by a finer level and which do not influence the evolution on that level through the time integrator sub-steps, and to 0 everywhere else. If you are not happy with the names of the parameter and the mask, go ahead and suggest better ones; I would be happy to rename them. The scheme could be improved further by resetting the mask at every time integrator sub-step: the mask would be larger at later sub-steps and could therefore mask more problems there. It is not clear, however, whether this is worth the effort.
  • Looked into higher numbers of refinement levels: the code uses 'int' for sizes, which is 32 bits even on 64-bit machines. Possible solutions are:
    • changing ibbox to use at least long, if not long long;
    • storing the bounds of boxes not as global indices but as local indices, divided by the stride or a part of the stride. Note: storing the stride as is (global, int) will overflow at about 30 refinement levels (see the small check at the end of this section).
  • Added some checks (e.g. around malloc) here and there, and some documentation here and there.
  • Learned a lot about Mercurial, for better or for worse.
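
To make the overflow bound concrete, here is a small self-contained check (my own sketch, not Carpet code). The count it prints is consistent with the "about 30 refinement levels" quoted above; any additional factors carried in the stride lower the bound further:

    #include <climits>
    #include <cstdio>

    int main(void) {
      // The finest level has stride 1; with refinement factor 2 each coarser
      // level doubles it.  Count how many levels fit before the global
      // stride no longer fits into a 32-bit signed int.
      long long stride = 1;
      int levels = 1;
      while (stride <= INT_MAX) {
        stride *= 2;
        ++levels;
      }
      printf("a 32-bit global stride overflows with %d levels (stride 2^%d)\n",
             levels, levels - 1);
      return 0;
    }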
Peter Diener (PD)
  • Trying to understand the load balancing/processor decomposition routine in Carpet. Interested in helping with improving this.
  • Interested in developing a performance model of the Carpet communication.
Roland Haas (RH)
  • Optionally omit buffer and ghost zones from 3D output (i.e. honour out3d_ghosts).
  • More user-friendly error messages.
Christian Reisswig (CR)
  • Implement a new high-level library for reading/writing simulation data to HDF5 files, very similar to Werner Benger's F5 format. The reason for reimplementing this is that Werner's light++ license is problematic for our purposes and restricts the use of F5 such that we would be unable to use it in, e.g., VisIt.
  • First, I will design an API for dumping and reading datasets (including coordinates and other metadata).
  • Second, this API can be used directly within Carpet and VisIt to write or read data in a very simple manner (the sketch below shows the kind of low-level HDF5 calls it would wrap).
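
For context, a minimal sketch of the raw HDF5 C-API calls such a library would hide behind its API; the file, dataset and attribute names here are made-up placeholders, not the planned F5-like layout:

    #include <hdf5.h>
    #include <vector>

    // Write one 3D double dataset plus an origin attribute: roughly the
    // per-patch information a simulation-data API has to record.
    int main(void) {
      const hsize_t dims[3] = {16, 16, 16};
      std::vector<double> data(16 * 16 * 16, 0.0);

      hid_t file  = H5Fcreate("patch.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
      hid_t space = H5Screate_simple(3, dims, NULL);
      hid_t dset  = H5Dcreate2(file, "/rho", H5T_NATIVE_DOUBLE, space,
                               H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);
      H5Dwrite(dset, H5T_NATIVE_DOUBLE, H5S_ALL, H5S_ALL, H5P_DEFAULT, &data[0]);

      // Coordinate metadata stored as a small attribute on the dataset.
      const double origin[3] = {0.0, 0.0, 0.0};
      const hsize_t three = 3;
      hid_t aspace = H5Screate_simple(1, &three, NULL);
      hid_t attr   = H5Acreate2(dset, "origin", H5T_NATIVE_DOUBLE, aspace,
                                H5P_DEFAULT, H5P_DEFAULT);
      H5Awrite(attr, H5T_NATIVE_DOUBLE, origin);

      H5Aclose(attr); H5Sclose(aspace);
      H5Dclose(dset); H5Sclose(space); H5Fclose(file);
      return 0;
    }

The point of the planned library is exactly to bundle these repetitive calls behind a few dataset-level functions.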
Erik Schnetter (ES)
  • Committed some outstanding changes to Carpet, mostly dealing with parallelisation.
  • Made refluxing work (unfortunately only on a single process so far).
Christian Ott (CO)
  • Got commit messages for Carpet Mercurial to work.
  • Working on a poor man's AMR thorn for single-star simulations.
Josh Faber (JF)

Extended Program

After the Carpet workshop, on Friday, August 27th, the CCRG is hosting two talks for a general audience:

References

We list below the main references on adaptive/guided/fixed mesh refinement:

  • Adaptive Mesh Refinement for Hyperbolic Partial Differential Equations, by Marsha J. Berger and Joseph Oliger, Journal of Computational Physics 53, March 1984, pp. 484-512 (SPIRES). For your convenience, the current citations for this pioneering paper are grouped here.
  • Evolutions in 3D numerical relativity using fixed mesh refinement, by Erik Schnetter, Scott H. Hawley and Ian Hawke, Class. Quant. Grav. 21:1465-1488, 2004, e-Print: gr-qc/0310042 (SPIRES) (arXiv).
  • Adaptive mesh refinement for characteristic codes, by Frans Pretorius and Luis Lehner, J. Comput. Phys. 198:10-34, 2004, gr-qc/0302003 (SPIRES) (arXiv).
  • Adaptive mesh refinement for coupled elliptic-hyperbolic systems, by Frans Pretorius and Matthew W. Choptuik, J. Comput. Phys. 218:246-274, 2006, gr-qc/0508110 (SPIRES) (arXiv).
  • AMR, stability and higher accuracy, by Luis Lehner, Steven L. Liebling and Oscar Reula, Class. Quant. Grav. 23:S421-S446, 2006, gr-qc/0510111 (SPIRES) (arXiv).


Internal Links

Some useful information to help you navigate around RIT/Rochester:

External Links

Some useful links about AMR codes and their applications:


Some development/debugging/profiling tools and libraries of possible interest:

Participants

RIT

Visitors

Via EVO teleconference

Group photo focused on working...

From right to left: Marcelo Ponce (RIT), Christian Reisswig (Caltech), Christian D. Ott (Caltech), Justin Luitjens (Utah), Ian Hinder (Albert Einstein Institute), Roland Haas (GA Tech), Joshua Faber (RIT), Yosef Zlochower (RIT), Hiroyuki Nakano (RIT), Prabath Pereis (RIT), Frank Löffler (LSU), Erik Schnetter (LSU), Peter Diener (LSU), Bruno Mundim (RIT), Scott Noble (RIT), Manuela Campanelli's laptop (RIT).

Whiteboard discussion photo


Sponsors

The workshop is co-sponsored by:

  • The Center for Computational Relativity and Gravitation at RIT ([1])
  • NSF PRAC award OCI-0941653 [2]
  • NSF PRAC award OCI-0832606 [3]