
Scalable computing is presently dominated by applications that use a single program, multiple data (SPMD) programming model based on distributed memory and the Message Passing Interface (MPI) or a close equivalent. For a brief time in the mid-1990s it appeared that MPI-only might not be sufficient for systems built with shared memory parallel (SMP) nodes. Instead, we thought we would need a hybrid programming model in which MPI provided parallelism across nodes and a shared memory model was used within each SMP node. However, the development of SMP-aware MPI libraries made hybrid models unnecessary for all but a small fraction of scalable parallel applications, allowing MPI-only to continue its dominance as the parallel programming model for scalable applications through the present day.
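As a concrete illustration of the hybrid model described above, the following minimal sketch combines MPI across nodes with OpenMP threads within an SMP node. It is illustrative only; the array, its size, and the reduction are made-up examples, not drawn from any particular application.

    /* Hybrid MPI + OpenMP sketch: MPI ranks span nodes, OpenMP threads share
       memory within a node. Illustrative only; names and sizes are made up. */
    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    #define N 1000000

    int main(int argc, char **argv) {
        int provided;
        /* Request thread support so OpenMP threads can coexist with MPI. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

        int rank, nranks;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nranks);

        static double x[N];
        double local_sum = 0.0, global_sum = 0.0;

        /* Shared-memory parallelism on the node: threads split the local array. */
        #pragma omp parallel for reduction(+:local_sum)
        for (int i = 0; i < N; i++) {
            x[i] = (double)(rank + i);
            local_sum += x[i];
        }

        /* Distributed-memory parallelism across nodes: combine per-rank results. */
        MPI_Allreduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

        if (rank == 0)
            printf("global sum = %g (from %d ranks x %d threads)\n",
                   global_sum, nranks, omp_get_max_threads());

        MPI_Finalize();
        return 0;
    }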
With the advent of multicore processors, graphics processing units (GPUs), and commodity co-processors, we once again face the possibility (probability?) that MPI-only will be insufficient to achieve acceptable performance on scalable computer systems. For any given application, three critical questions emerge:
Although multicore-based scalable systems are already available and future systems will be multicore-based, initial performance studies indicate that the MPI-only model might be sufficient for many applications even through the year 2013. At the same time, we must take advantage of the next few years to develop algorithms and programming models that will support scalable applications well into the future. Furthermore, GPUs and other co-processors increasingly promise substantial performance gains for certain applications, especially as native double-precision support comes online over the next few years.
Although architecture and programming models are critical aspects of scalable applications, algorithms are equally important. Fundamental algorithm development will be essential for full exploitation of new scalable systems. Fine-grain parallel algorithms that exploit shared memory, and mixed-precision algorithms that take advantage of the superior performance characteristics of single-precision floating point, are but two classes of algorithms that become attractive.
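As one illustration of the mixed-precision idea, the sketch below shows classical iterative refinement: the bulk of the work (the inner solves) runs in single precision, while residuals and solution updates are carried in double precision. The matrix, the Jacobi inner solver, and all numbers are assumptions for illustration; a production code would typically reuse a cached single-precision factorization instead.

    /* Mixed-precision iterative refinement sketch (illustrative only). */
    #include <stdio.h>

    #define N 4

    /* Single-precision inner solve via a few Jacobi sweeps (assumes a
       diagonally dominant matrix; a real code would reuse a cached LU). */
    static void solve_single(float A[N][N], float b[N], float x[N]) {
        for (int i = 0; i < N; i++) x[i] = 0.0f;
        for (int sweep = 0; sweep < 25; sweep++) {
            float xn[N];
            for (int i = 0; i < N; i++) {
                float s = b[i];
                for (int j = 0; j < N; j++)
                    if (j != i) s -= A[i][j] * x[j];
                xn[i] = s / A[i][i];
            }
            for (int i = 0; i < N; i++) x[i] = xn[i];
        }
    }

    /* Iterative refinement: cheap single-precision correction solves with
       double-precision residuals, so accuracy approaches double precision. */
    static void refine(double A[N][N], double b[N], double x[N], int iters) {
        float As[N][N], rs[N], ds[N];
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                As[i][j] = (float)A[i][j];      /* single-precision copy of A */

        for (int k = 0; k < iters; k++) {
            for (int i = 0; i < N; i++) {       /* r = b - A*x in double */
                double r = b[i];
                for (int j = 0; j < N; j++)
                    r -= A[i][j] * x[j];
                rs[i] = (float)r;
            }
            solve_single(As, rs, ds);           /* correction solve in single */
            for (int i = 0; i < N; i++)
                x[i] += (double)ds[i];          /* update solution in double */
        }
    }

    int main(void) {
        /* A small diagonally dominant test system (made-up numbers). */
        double A[N][N] = {{4,1,0,0},{1,4,1,0},{0,1,4,1},{0,0,1,4}};
        double b[N]    = {1, 2, 3, 4};
        double x[N]    = {0, 0, 0, 0};
        refine(A, b, x, 3);
        for (int i = 0; i < N; i++)
            printf("x[%d] = %.12f\n", i, x[i]);
        return 0;
    }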
Our workshop will aim to stimulate an open discussion about the future of scalable applications, specifically addressing the three questions listed above. We will have a diverse group of attendees with expertise in future scalable computer systems, scalable application development, scalable algorithms, and future compiler and library technologies.
The goals of this workshop are to characterize and describe scalable application development for petascale and exascale systems and beyond. We will specifically identify the most promising hybrid and alternative programming models to complement or replace MPI, and describe the conditions under which MPI-only will be insufficient. We will also identify promising classes of algorithms and the compiler and library capabilities that are essential for success. The expected outcome of the workshop is a clear set of directions for scalable application development for the coming decade and beyond.