Chaco: Software for Partitioning Graphs


A decomposition of a finite element part prior to a parallel calculation. The different colored regions will be assigned to different processors. For high performance, the regions must be roughly equal in size, and the interface between them (shown in gray) must be small.

Before a calculation can be performed on a parallel computer, it must first be decomposed into tasks that are assigned to different processors. Efficient use of the machine requires that each processor have about the same amount of work to do and that the amount of interprocessor communication be kept small. Finding an optimal decomposition is provably hard, but because of its practical importance, a great deal of effort has been devoted to developing heuristics for this problem.

The decomposition problem can be addressed in terms of graph partitioning. Rob Leland and I have developed a variety of algorithms for graph partitioning and implemented them in a package we call Chaco. The code is being used at most of the major parallel computing centers around the world to simplify the development of parallel applications and to ensure that high performance is obtained. Chaco has contributed to a wide variety of computational studies, including investigation of the molecular structure of liquid crystals, evaluation of the design of a chemical vapor deposition reactor, and modeling of automobile collisions.

The algorithms developed for Chaco have also been successfully applied to a number of problems which have nothing to do with parallel computing. These include the determination of genomic sequences (a critical part of the Human Genome Project), the design of space-efficient circuit placements, organization of databases for efficient retrieval, and ordering of sparse matrices for efficient factorization. The code can also be used more generally to identify data locality.

Chaco contains a wide variety of algorithms and options, many of which were invented by the authors. Some of the algorithms exploit the geometry of the mesh, others its local connectivity, and still others its global structure as captured by eigenvectors of a related matrix. These methods can be mixed and matched in several ways, and combinations often prove more effective than any single technique in isolation. All of these algorithms are accessed via a simple user interface or called directly from other software. Innovations in Chaco include:

  • Development of multilevel graph partitioning. This widely imitated approach has become the premier algorithm, combining very high quality with short calculation times.
  • Extension of spectral partitioning to enable the use of 2 or 3 Laplacian eigenvectors to quadrisect or octasect a graph.
  • Highly efficient and robust eigensolvers for use with spectral graph algorithms.
  • Generalization of the Kernighan-Lin/Fiduccia-Mattheyses algorithm to handle weighted graphs, an arbitrary number of sets, and lazy initiation.
  • Development of skewed partitioning to improve the mapping of a graph onto a target parallel architecture.
  • Various post-processing options to improve partitions in a number of ways.

We have written a number of papers discussing some of the algorithms in Chaco. The Chaco user's guide may be of particular interest.
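To give a flavor of the spectral methods mentioned above, here is a minimal sketch of spectral bisection: split a graph by the sign of the Fiedler vector, the eigenvector of the graph Laplacian associated with the second-smallest eigenvalue. This is a hypothetical standalone illustration using NumPy's dense eigensolver, not Chaco's own code.

```python
import numpy as np

def spectral_bisect(adj):
    """Split a graph into two sets using the Fiedler vector.

    adj: symmetric 0/1 adjacency matrix as a NumPy array.
    Returns a 0/1 set assignment for each vertex.
    """
    degrees = adj.sum(axis=1)
    laplacian = np.diag(degrees) - adj
    # eigh returns eigenvalues in ascending order; column 1 holds the
    # eigenvector for the second-smallest eigenvalue (the Fiedler vector).
    _, vecs = np.linalg.eigh(laplacian)
    fiedler = vecs[:, 1]
    # Split at the median so the two sets are balanced in size.
    return (fiedler > np.median(fiedler)).astype(int)

# Two triangles joined by a single edge: the natural bisection cuts that edge.
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
adj = np.zeros((6, 6))
for i, j in edges:
    adj[i, j] = adj[j, i] = 1

parts = spectral_bisect(adj)
```

On this example the split recovers the two triangles, cutting only the bridging edge. Chaco's spectral code relies on specialized Lanczos-based eigensolvers rather than a dense eigendecomposition, which is what makes the approach practical on large meshes.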

Chaco-1.0 was released in 1993 and the substantially enhanced Chaco-2.0 appeared in October 1995. The current version of the code is available under a GNU open source license, and can be obtained here. After gunzipping and untarring the code, you will find three subdirectories. The first subdirectory, "doc", contains several PostScript files, including the important "Users_Guide.ps". We encourage you to print out and read this guide (or at least the quick overview section and the introduction) before making a serious effort to use the code. The second subdirectory, "code", contains the program makefile and various subdirectories with the actual source code. The makefile places the executable (called "chaco") in the third subdirectory, "exec". The "exec" subdirectory also contains several sample input files to get you started.

You might also be interested in the closely related Zoltan tool for dynamic load balancing.


Other Graph Partitioning Software

  • METIS (Karypis and Kumar)
  • PARTY (Preis)
  • JOSTLE (Walshaw)
  • SCOTCH (Pellegrini)