Chaco: Algorithms and Software for Partitioning Meshes


Description of Project:

Developing parallel software is difficult for a number of reasons. Invoking the right communication calls at the right points in a program is challenging enough without having to contend with the idiosyncrasies of a particular machine. The scientist or engineer would rather focus on the physics of an application than on arcane topics like how to divide the workload among processors. Unfortunately, the performance of the resulting code depends critically on this division of labor, and near-optimal decompositions are difficult to find.

To address this problem, we have developed Chaco, a suite of algorithms for decomposing grids and assigning the pieces to processors. High performance requires that the pieces be equally sized and that the boundary between them be as small as possible. Performance is further improved by assigning pieces to processors in a way that minimizes contention for communication channels in the parallel machine.
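The two criteria above, equally sized pieces and a small boundary, can be made concrete with a tiny scoring function. The following is an illustrative sketch, not Chaco's code; the function name and graph representation are our own choices for the example.

```python
# Illustrative sketch (not Chaco code): scoring a two-way partition of a
# mesh graph by the two criteria above -- balanced piece sizes and a
# small boundary (edge cut).
def partition_quality(edges, assignment):
    """edges: list of (u, v) pairs; assignment: dict vertex -> 0 or 1."""
    # Boundary size: number of edges whose endpoints land in different pieces.
    cut = sum(1 for u, v in edges if assignment[u] != assignment[v])
    # Balance: ratio of the larger piece to the ideal size (1.0 is perfect).
    sizes = [0, 0]
    for part in assignment.values():
        sizes[part] += 1
    imbalance = max(sizes) / (sum(sizes) / 2.0)
    return cut, imbalance

# A 4-vertex path graph 0-1-2-3 split down the middle:
edges = [(0, 1), (1, 2), (2, 3)]
assignment = {0: 0, 1: 0, 2: 1, 3: 1}
print(partition_quality(edges, assignment))  # (1, 1.0): one cut edge, balanced
```

In a real run, the cut count approximates communication volume between processors, while the imbalance bounds how long the most heavily loaded processor must compute.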

Chaco contains a wide variety of algorithms and options, many of which were invented by the authors. Some of the algorithms exploit the geometry of the mesh, others its local connectivity or its global structure as captured by eigenvectors of a related matrix. These methods can be mixed and matched in several ways, and combinations often prove more effective than any single technique in isolation. All these algorithms are accessed via a simple user interface or via calls from other software. Innovations in Chaco include:

  • Development of multilevel graph partitioning. This widely imitated approach has become the premier algorithm, combining very high quality with short calculation times.
  • Extension of spectral partitioning to enable the use of 2 or 3 Laplacian eigenvectors to quadrisect or octasect a graph.
  • Generalization of the Kernighan-Lin/Fiduccia-Mattheyses algorithm to handle weighted graphs, an arbitrary number of sets, and lazy initiation.
  • Development of terminal propagation to improve the mapping of a graph onto a target parallel architecture.
  • Highly efficient and robust eigensolvers for use with spectral graph algorithms.
  • Various post-processing options to improve partitions in a number of ways.
Chaco has also been used for several other partitioning problems, including the design of space-efficient circuit layouts and the reordering of sparse matrices to reduce computation. The code can also be used more generally to identify data locality; this capability has been applied to problems ranging from genomic sequencing to database design.
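To illustrate the spectral approach mentioned above, here is a hedged sketch of spectral bisection with a single Laplacian eigenvector, the simplest case of the family Chaco extends to 2 or 3 eigenvectors. This is our own minimal example, not Chaco's implementation, and it ignores the efficient eigensolvers and refinement passes a production code would use.

```python
# Sketch of spectral bisection (one of the techniques described above),
# not Chaco's own implementation: split a graph by a median threshold on
# the Fiedler vector -- the eigenvector of the graph Laplacian associated
# with the second-smallest eigenvalue.
import numpy as np

def spectral_bisect(n, edges):
    """n: number of vertices; edges: list of (u, v) pairs. Returns 0/1 labels."""
    # Build the Laplacian L = D - A (degree matrix minus adjacency matrix).
    L = np.zeros((n, n))
    for u, v in edges:
        L[u, u] += 1
        L[v, v] += 1
        L[u, v] -= 1
        L[v, u] -= 1
    eigvals, eigvecs = np.linalg.eigh(L)   # eigenvalues in ascending order
    fiedler = eigvecs[:, 1]                # second-smallest eigenvalue's vector
    # Splitting at the median yields two equally sized pieces.
    return (fiedler > np.median(fiedler)).astype(int)

# Two triangles joined by a single edge: the cut should fall on that edge.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
labels = spectral_bisect(6, edges)
print(labels[:3], labels[3:])  # vertices 0-2 in one part, 3-5 in the other
```

A dense eigensolver like `numpy.linalg.eigh` is fine for a toy graph; on large meshes one would use an iterative Lanczos-type solver, which is precisely the role of the efficient eigensolvers listed among Chaco's innovations.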

Chaco-1.0 was released in 1993, and the substantially enhanced Chaco-2.0 appeared in October 1995. Chaco is in use at most of the parallel computing sites in the country to assist with a wide variety of parallel applications. The code is available under research or commercial license, and enquiries should be directed to the authors.

The authors of Chaco are:

Related projects:


Last Updated: February 12, 1996
WWW Administration (www-admin@www.cs.sandia.gov) Bruce A. Hendrickson (bah@cs.sandia.gov)

