2009 Newsnotes
Patent Awarded for Compute-Process-Allocator Algorithms Licensed to Cray for the XT3
Patent number US 7,565,657 B1 was awarded to Leung et al. on July 21, 2009, for "Allocating Application to Group of Consecutive Processors in Fault-Tolerant Deadlock-Free Routing Path Defined by Routers Obeying Same Rules for Path Selection." The patent covers some of the processor allocation algorithms that were licensed to Cray for the XT3/4 beginning in 2005. These algorithms won an R&D 100 Award in 2006, were released under an open-source license, and one of them was incorporated into SLURM in 2009. The key breakthrough in the algorithms was reducing the computational complexity of the allocation problem from the full dimensionality of the machine to a one-dimensional problem while maintaining essentially all the fidelity of the full-dimensional solution.
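The patented algorithms themselves are not reproduced here, but the general idea of the dimension reduction can be sketched: order the machine's nodes along a curve so that nearby indices tend to be physically close, then give each job a contiguous run of free nodes in that one-dimensional ordering. The snake ordering below is a simplified illustration, not the patented method:

```python
# Sketch (not the patented algorithm): reduce 3D processor allocation to a 1D
# problem by ordering nodes along a snake-like curve, then allocating each job
# a contiguous run of free nodes in that ordering.

def snake_order(nx, ny, nz):
    """Order the nodes of an nx x ny x nz mesh along a boustrophedon curve,
    so that consecutive indices tend to be physically close."""
    order = []
    for z in range(nz):
        ys = range(ny) if z % 2 == 0 else reversed(range(ny))
        for j, y in enumerate(ys):
            xs = range(nx) if (z + j) % 2 == 0 else reversed(range(nx))
            for x in xs:
                order.append((x, y, z))
    return order

def allocate(free, order, size):
    """Find the first run of `size` consecutive free nodes in curve order
    and mark them allocated; return None if no such run exists."""
    run = []
    for node in order:
        if node in free:
            run.append(node)
            if len(run) == size:
                for n in run:
                    free.remove(n)
                return run
        else:
            run = []
    return None

order = snake_order(4, 4, 2)
free = set(order)
job = allocate(free, order, 6)   # six nodes, contiguous in curve order
```

Because the allocator only scans one linear ordering, its cost no longer depends on the dimensionality of the machine.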

(Contact: Vitus Leung)
November 2009
2009-7375P
Large-Scale Simulations of Dynamic Brittle Fracture in Glass
The problem of simulating dynamic brittle fracture has been among the most important in the field of materials fracture and failure. Under certain conditions, straight cracks branch. Correctly resolving the crack patterns in dynamic brittle fracture is critical in simulating the phenomenon of crack propagation and obtaining the correct arresting time and energy consumed by the propagation process.
Peridynamics, a formulation of continuum mechanics oriented toward discontinuous deformations and fracture, has been implemented in PDLAMMPS (http://www.sandia.gov/~mlparks/software.html), which builds on the massively parallel molecular dynamics code LAMMPS (http://lammps.sandia.gov). On LLNL's Dawn machine (https://asc.llnl.gov/computing_resources/sequoia/), PDLAMMPS was used to simulate dynamic brittle fracture in glass on up to 65k cores, the largest peridynamic simulations performed to date. Simulation results show qualitative agreement with physical experiment.
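The full peridynamic model is implemented in PDLAMMPS; the toy one-dimensional, bond-based sketch below (with assumed parameter names, unrelated to the PDLAMMPS input syntax) illustrates the core mechanism that produces fracture: nonlocal bond forces within a horizon, with bonds breaking irreversibly once overstretched.

```python
import numpy as np

# Toy 1D bond-based peridynamics (illustrative only, not the PDLAMMPS model):
# each node interacts with all neighbors within a horizon `delta`; a bond
# breaks permanently once its stretch exceeds `s_crit`, modeling damage.

def peridynamic_step(x, u, v, broken, delta, c, s_crit, rho, dt):
    """Advance displacements u and velocities v one explicit time step."""
    n = len(x)
    f = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if i == j or broken[i, j]:
                continue
            xi = x[j] - x[i]                 # reference bond vector
            if abs(xi) > delta:
                continue                     # outside the horizon
            eta = u[j] - u[i]                # relative displacement
            stretch = (abs(xi + eta) - abs(xi)) / abs(xi)
            if stretch > s_crit:
                broken[i, j] = broken[j, i] = True   # irreversible failure
                continue
            f[i] += c * stretch * np.sign(xi + eta)  # pairwise bond force
    v += dt * f / rho
    u += dt * v
    return u, v

x = np.linspace(0, 1, 5)                     # nodes of a small 1D bar
u = np.zeros(5); v = np.zeros(5)
broken = np.zeros((5, 5), dtype=bool)
u, v = peridynamic_step(x, u, v, broken,
                        delta=0.3, c=1.0, s_crit=0.1, rho=1.0, dt=0.01)
# an undeformed bar feels no force and no bonds break
```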
Figure 1: Replication of an experiment described in Bowden et al., Nature, 1967. A thin glass plate is notched at the top middle and loaded from its left and right ends until it fractures.
(Contacts: Michael Parks, Stewart Silling, Florin Bobaru - Univ. of Nebraska)
July 2009
2009-5230P
High Accuracy ab initio Molecular Dynamics with Coupled-Cluster Theory
Reactions important to hydrocarbon combustion are still not well-understood, including the initial stages of the formation of polycyclic aromatic hydrocarbons, a significant component of soot. Chemical reactions are dynamic processes and it is necessary to use dynamical simulations to understand the branching between different pathways of a complex reaction. For reactions where the electronic structure changes substantially, quantum mechanical energetics are required, leading to the "ab initio molecular dynamics" (AIMD) approach, where quantum mechanical forces drive the classical simulation of nuclear motion. Conventional AIMD uses density functional theory (DFT) forces, which can be computed relatively quickly and are accurate in many cases. However, for reactions with multiple, open-shell states, high energies and multiple competing pathways, such as combustion processes, the accuracy of DFT is insufficient for definitive simulations.
To address this deficiency, in recent work, Andrew Taube (von Neumann Fellow, 1435) has developed a novel theoretical and computational methodology, coupled-cluster molecular dynamics (CC-MD), to shed light on these types of problems. Coupled-cluster theory (CC) is known as the "gold standard" for static quantum mechanical calculations of moderately-sized molecules. Unfortunately, conventional CC calculations are far too computationally expensive for on-the-fly dynamics simulations. Andrew developed a modified form of coupled-cluster theory -- regularized linearized coupled-cluster theory (regularized LinCC) -- that substantially reduces the computational resources necessary to perform a coupled-cluster calculation with minimal loss of accuracy across a potential energy surface. Using regularized LinCC, direct dynamics using coupled-cluster forces becomes feasible, leading to the coupled-cluster molecular dynamics approach.
He implemented LinCC in its single- and double-excitation variant into the massively parallel, open source ACES III program suite (http://www.qtp.ufl.edu/ACES/), which allows scaling of the calculations to thousands of processors. Furthermore, he has implemented a direct dynamics capability, which uniquely allows the code to generate highly accurate trajectories for the dynamics of small molecules. Andrew is currently applying the method to small molecule reactions involved in combustion, both to understand the chemistry and provide a benchmark for less accurate, but faster, DFT-MD calculations.
(Contact: Andrew Taube)
July 2009
2009-4693P
Klein Bottle Discovered in Molecular Conformation Data
Sandia and the University of New Mexico recently applied methods from dimension reduction and computational topology to the analysis of molecular conformation data. The data described the conformation of a simple cyclo-octane ring and was initially selected to provide a test bed for the algorithms. This "simple" dataset was discovered to have a surprisingly rich mathematical structure.
The dataset consisted of ~1M points in 72 dimensions, with each point corresponding to a molecular conformation of cyclo-octane (see Figure 1). This dataset was previously thought to have a manifold structure, locally equivalent to a smooth surface. However, we discovered that it in fact had an algebraic structure, smooth almost everywhere, but with singularities in the form of self-intersections. By understanding these self-intersections, we were able to split the algebraic structure into two manifold components (see Figure 2). We used methods from computational topology to identify the components. The first was a sphere, and the second (much to our surprise) was a Klein bottle.
A Klein bottle is a surface with only one side. It is a mathematical object that can only be realized without self-intersection in four dimensions -- any three-dimensional visualization will have certain artificial singularities (as can be seen in Figure 2). We are currently investigating the physical meaning of this Klein bottle in terms of the molecular conformations.
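One way such components are told apart is by topological invariants. As a minimal sketch, the Euler characteristic V - E + F of a triangulated surface distinguishes a sphere (value 2) from a Klein bottle (value 0); separating a Klein bottle from a torus additionally requires orientability or homology computations of the kind used in this work.

```python
# Sketch: the Euler characteristic V - E + F is a basic invariant used to
# tell surfaces apart (sphere: 2; torus and Klein bottle: 0). The actual
# analysis used computational topology (homology) on the conformation data.

def euler_characteristic(triangles):
    """Compute V - E + F for a surface given as a list of vertex triples."""
    vertices = set()
    edges = set()
    for a, b, c in triangles:
        vertices.update((a, b, c))
        for e in ((a, b), (b, c), (a, c)):
            edges.add(tuple(sorted(e)))
    return len(vertices) - len(edges) + len(triangles)

# The octahedron triangulates the sphere: 6 vertices, 12 edges, 8 faces.
octahedron = [
    (0, 2, 4), (2, 1, 4), (1, 3, 4), (3, 0, 4),
    (2, 0, 5), (1, 2, 5), (3, 1, 5), (0, 3, 5),
]
```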
This is joint work with Shawn Martin (1415), W. Michael Brown (1412), Evangelos Coutsias (UNM), and Jean-Paul Watson (1412).
Figure 1. Conformation Space of Cyclo-Octane. The set of conformations of cyclo-octane can be represented as a surface in a high-dimensional space. On the left, we show various conformations of cyclo-octane. In the center, these conformations are represented by the 3D coordinates of their atoms. On the right, a dimension reduction algorithm is used to obtain a lower-dimensional visualization of the data.
Figure 2. Decomposing Cyclo-Octane. The cyclo-octane conformation space has an interesting decomposition. The local geometry of a self-intersection consists of a cylinder (top left) and a Mobius strip (top right), while the self-intersection is a ring traversing the middle of each object (shown in red). Globally, cyclo-octane conformations can be separated into a sphere (bottom left) and a Klein bottle (bottom right).
(Contact: Shawn Martin)
June 2009
2009-4109P
New Sensitivity Analysis and Mesh Refinement Capabilities in SIERRA for Thermal/Fluid Simulations
In a joint Center 1500/1400 project, the thermal/fluid computational simulation capabilities of the SIERRA toolkit have been enhanced to perform internal sensitivity analysis calculations, along with automatic adaptive refinement of a SIERRA finite element mesh. These new capabilities provide several important payoffs: (1) the adaptive mesh refinement reduces SIERRA’s computational expense by 1-2 orders of magnitude versus conventional (uniform mesh refinement) calculations, (2) these methods provide automatic error estimation on the quantities of interest computed by SIERRA which is key information needed for quantifying margins and uncertainties (QMU), and (3) they enable SIERRA to be used with highly efficient “intrusive” design optimization algorithms that require detailed knowledge of the mathematical structure and physics parameter values within SIERRA during its execution. These new SIERRA capabilities were made possible by leveraging prior Sandia investment in developing the Trilinos and Encore software tools.
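SIERRA's refinement machinery is not reproduced here; a minimal one-dimensional sketch of the underlying idea (an error indicator drives selective bisection, so resolution concentrates where the solution varies rapidly) might look like:

```python
import numpy as np

# Sketch of adaptive refinement (not SIERRA's algorithm): estimate a local
# error indicator per cell and bisect only the cells whose indicator exceeds
# a fraction of the maximum, concentrating resolution where it is needed.

def refine(nodes, f, frac=0.5):
    """One pass of adaptive bisection driven by interpolation error of f."""
    new_nodes = list(nodes)
    mids = 0.5 * (nodes[:-1] + nodes[1:])
    # indicator: mismatch between f at the midpoint and the linear interpolant
    err = np.abs(f(mids) - 0.5 * (f(nodes[:-1]) + f(nodes[1:])))
    for m, e in zip(mids, err):
        if e > frac * err.max():
            new_nodes.append(m)          # bisect this cell
    return np.sort(np.array(new_nodes))

nodes = np.linspace(0.0, 1.0, 9)
f = lambda x: np.tanh(40 * (x - 0.5))    # sharp internal layer at x = 0.5
refined = refine(nodes, f)
# only the cells straddling the layer are bisected; smooth regions are untouched
```

Iterating this loop until the indicator falls below a tolerance is what yields the order-of-magnitude savings over uniform refinement noted above.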
A recent demonstration of these capabilities was made using SIERRA and Encore on a design optimization problem involving a chemically reacting flow through a serpentine channel. This is relevant to many MEMS applications such as those involving micro-scale chemical detection and characterization devices. In this study, SIERRA and Encore computed the optimal surface reaction rates along the walls of the channel in order to match a user-prescribed reactant concentration profile in the channel. That is, SIERRA/Encore performed automatic determination of the channel wall reactant rates while SIERRA was simultaneously computing the fluid flow properties in the entire channel. This new capability will enable the design of future MEMS-based environmental sensing devices using high-fidelity physics simulation tools in a computationally affordable manner. (POCs: R. Bartlett, Org. 01411, B. Carnes, Org. 01544)
Figure Description: A simulation of chemically reacting fluid flow through a serpentine channel that shows (left) an adaptive finite element model that automatically changes to allow for higher resolution in areas of high flow and/or reaction complexity, and (right) a color map showing contours of reactant concentration level in the channel (red = highest concentration). The flow in the channel is left-to-right, entering at the upper left edge of the channel.

(Contacts: James Stewart & Anthony Giunta)
May 2009
2009-2756P
Demonstration of Mesh Independent Material Strength Modeling in ALEGRA
The materials research community has struggled for many years to capture size effects in their modeling and simulation work. These effects, which usually result in a weaker material response as samples get larger, have traditionally been accounted for by applying rules of thumb to scale the material properties. The U.S. Army Research Laboratory (ARL) recently ran a suite of simulations with a model developed jointly by ARL and Sandia that incorporates size effects by applying a size-dependent statistical distribution to the material strengths. These new results demonstrated, for the first time for a ceramic material, that a model calibrated with tests at one scale could be used, without modification, to successfully predict results at another scale. In addition, the results exhibited computational mesh convergence on the key experimental metric, which is also a first for production simulations involving damage to this brittle material.
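The ARL/Sandia model itself is not reproduced here, but the standard statistical route to size effects is weakest-link (Weibull) strength statistics, in which larger volumes sample more flaws and are therefore weaker. A minimal sketch, with invented parameter values:

```python
import math
import random

# Sketch (not the ARL/Sandia model): Weibull weakest-link statistics give a
# size-dependent strength. For a sample of volume V, the survival probability
# at stress s is exp(-(V/V0) * (s/s0)^m), so larger samples are statistically
# weaker. Parameters v0, s0, m below are illustrative, not calibrated values.

def sample_strength(volume, v0=1.0, s0=100.0, m=10.0, rng=random):
    """Draw one failure strength from the size-scaled Weibull distribution
    by inverting its cumulative distribution function."""
    u = rng.random()
    return s0 * (-math.log(1.0 - u) * v0 / volume) ** (1.0 / m)

random.seed(0)
small = [sample_strength(1.0) for _ in range(2000)]
large = [sample_strength(100.0) for _ in range(2000)]
# mean strength decreases with volume: the classic size effect
```

In such a model the mean strength scales as V^(-1/m), so calibrating the distribution at one sample size fixes the prediction at every other size, which is the behavior demonstrated in the ARL simulations.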

Size dependence of ceramic specimens is determined through experimental testing, and the size parameters in the model are calibrated through direct numerical simulation of the experiments.
(Contact: Erik Strack)
April 2009
2009-2376P
Zia receives CD0 approval
The New Mexico Alliance for Computing at Extreme Scale (ACES) is an alliance between Sandia National Laboratories and Los Alamos National Laboratory formed to provide the high performance capability computing assets required by NNSA's stockpile stewardship mission. ACES is responsible for deploying Zia, the ASC Program's next-generation capability computing platform. Zia is intended to support production Tri-Lab Capability Computing Campaigns and will replace functionality now provided by the ASC Purple and Red Storm platforms. Zia will be located at Los Alamos National Laboratory; the system is expected to be operational in Q4FY10 and production ready in Q2FY11.
ACES has accomplished a key milestone in its process to deploy Zia. On December 15, 2008, ACES received DOE approval for Zia's Mission Need Package, Critical Decision 0. This is the first step in satisfying the NNSA's Project Execution Model. This achievement allowed ACES to move forward on January 29, 2009 and issue a draft version of the Zia Request For Proposal (RFP) to prospective vendors and technology providers for comment.
The key technical design goal for the Zia system is a balanced computing resource that is 6 to 8 times more powerful than ASC Purple on the ASC integrated weapons codes. These codes encompass a wide variety of applications involving shock physics, radiation transport, materials aging and design, computational fluid dynamics (CFD), and solid mechanics. Computationally, these codes utilize structured and unstructured grids, explicit solvers, and sparse matrix solvers, and must scale to the full size of the system. This performance improvement is intended to be representative of individual large, integrated physics and engineering simulations running across the full system (or a large fraction of it) with total run times in excess of one week, and is intended to encompass total time to solution, including node performance, node-to-node communications, I/O time for periodic check-pointing, and time spent monitoring and restarting jobs due to processor, node, or system failures.
(Contact: Doug Doerfler)
March 2009
2009-1486P
The 2008 International Workshop on RSPt and the Full Potential Linear Muffin-Tin Orbital Method
From August 25-29, 2008, researchers from SNL, LANL, UNM, Sweden, and the Netherlands gathered in downtown Albuquerque at the Hyatt Regency for the "2008 International Workshop on RSPt and the Full Potential Linear Muffin-Tin Orbital (FP LMTO) Method", sponsored by the Sandia National Laboratories' Computer Science Research Institute (CSRI) and organized by Ann E Mattsson (1435). This conference was the successor to the successful FP LMTO workshop held in Belem, Brazil in November 2007.
The goal of the workshop was to bring together researchers developing and applying the FP-LMTO method to (1) discuss the development of the open-source RSPt code and its possible evolution; (2) present the formalism and technical details of the most recent implementations; (3) highlight recent advanced use of RSPt; and (4) identify new needs and, especially, enhance international collaborations on method and software development.
RSPt is used at Sandia for generating high compression data for Density Functional Theory (DFT) based equation of state (EOS) development, mainly in support of High Energy Density Physics modeling and simulation at the Z-machine. Another application of RSPt is to verify the pseudopotentials used in other DFT codes at Sandia such as VASP, SeqQuest, and Socorro. RSPt has the potential to also be important for developing improved models of EOS and strength for complicated materials within focused tri-lab projects such as the Dynamic Plutonium Experiments and the National Boost Initiative, and to be helpful in investigating properties of materials within the Advanced Nuclear Fuels initiative. Extensive early development of RSPt was performed by John Wills (LANL) and the code is used at LANL and LLNL for EOS development.
The RSPt code is further described at its web site http://www.rspt.net, and the web pages for the workshop are located at http://dft.sandia.gov/FPLMTOworkshop2008/index.html.
This was the third annual RSPt workshop; the next workshop will be held at Università dell’Aquila, Italy.
(Contact: Ann Mattsson)
February 2009
2009-0988P
Joint SNL/UT Healthcare Policy Evaluator Demo
Sandia and the University of Texas recently completed a prototype study that demonstrated the feasibility of modeling the US health care system via agent-based modeling. This collaboration involved teams of computer scientists at Sandia paired with subject matter experts in various aspects of health policy modeling at the University of Texas. A prototype was developed with the subject of diabetes mellitus (Type 2) progression in mind and used simulated data that mirrored the detailed demographics taken from the Brownsville, TX metropolitan area. The result was a detailed assessment of the net health effects of different hypothetical health policies as a function of time, at the individual level. More importantly, the prototype demonstrated that significantly more comprehensive models of the US health care system are computationally feasible, able to model effects at the level of the individual and accounting for various physiological, sociological, and economic variability in a population. See figures attached.
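As a purely hypothetical illustration of the agent-based approach (not the Sandia/UT prototype, and with invented rates), such a model can carry per-agent risk attributes, apply a policy as a modifier on the annual progression probability, and report cumulative cases over time:

```python
import random

# Toy agent-based sketch (not the Sandia/UT prototype; all rates invented):
# each agent carries an individual annual risk of progressing to diabetes;
# a policy intervention scales that risk down, and we compare cumulative
# case counts over a multi-year horizon.

def simulate(n_agents, years, policy_effect, seed=1):
    """Return cumulative diabetes cases per year under a given policy."""
    rng = random.Random(seed)
    agents = [{"risk": rng.uniform(0.01, 0.05), "diabetic": False}
              for _ in range(n_agents)]
    cases = []
    for _ in range(years):
        for a in agents:
            if not a["diabetic"] and rng.random() < a["risk"] * (1 - policy_effect):
                a["diabetic"] = True
        cases.append(sum(a["diabetic"] for a in agents))
    return cases

baseline = simulate(5000, 20, policy_effect=0.0)
with_policy = simulate(5000, 20, policy_effect=0.5)
# the policy run accumulates fewer cases over the 20-year horizon
```

The prototype's value lies in replacing these invented rates with agent attributes drawn from real demographic and physiological data, which is what makes individual-level policy assessment possible.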
(Contact: Danny Rintoul)
February 2009
Dynamic neurological modeling constraints
To understand how complex neurological systems work, one must appreciate how the fundamental component neurons behave and how they are connected together. Any complex neurological system model will contain hundreds to millions of neurons. Thus, the soundness of the full system model will depend on how well the neuron components are modeled. Much like electrical component modeling, one must have good model parameters to accurately model individual neuron behavior.
We have developed computational and statistical tools that enable uncertainty quantification of neural simulations by extending Fourier-based techniques to use experimental data in the refinement of neuron simulations. The neuron simulations are posed as neuron circuits and solved with Sandia's parallel circuit simulator, Xyce. Uncertainty quantification of the neuron model parameters is orchestrated by Sandia's optimization package, Dakota, which calls on Xyce to do the computational work. Because both Xyce and Dakota are designed for large-scale computing, gigabytes of experimental data can be used to refine the simulations, which is critical because the experimental signals are inherently very noisy.
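The Xyce/Dakota workflow is not reproduced here; the sketch below (with assumed details) shows why a Fourier-based comparison helps with noisy data: matching magnitude spectra scores candidate simulations robustly even when the time-domain traces are dominated by noise.

```python
import numpy as np

# Sketch of a Fourier-based comparison (assumed details, not the Xyce/Dakota
# workflow): noisy time-domain traces are compared through their magnitude
# spectra, which is far more robust to broadband noise and phase jitter than
# a point-by-point time-domain misfit.

def spectral_misfit(sim, exp):
    """Relative difference between magnitude spectra of two traces."""
    S = np.abs(np.fft.rfft(sim))
    E = np.abs(np.fft.rfft(exp))
    return np.linalg.norm(S - E) / np.linalg.norm(E)

rng = np.random.default_rng(0)
t = np.arange(0.0, 1.0, 1e-3)
# "experimental" trace: an 8 Hz oscillation buried in heavy noise
exp_trace = np.sin(2 * np.pi * 8 * t) + 0.5 * rng.standard_normal(t.size)
good_sim = np.sin(2 * np.pi * 8 * t)      # correct oscillation frequency
bad_sim = np.sin(2 * np.pi * 20 * t)      # wrong oscillation frequency
# the misfit ranks the correct model parameters ahead of the wrong ones
```

An optimizer such as Dakota can then minimize this misfit over the neuron-circuit parameters, with each evaluation delegated to the circuit simulator.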
Posing and solving this problem required the combined efforts of staff from Cognitive Systems (6341 & 6443) and Electrical & Microsystem Modeling (1437). The result of this collaboration is a unique capability that allows neuroscientists to fully utilize large experimental data sets and gain insight into the accuracy limits of neural simulations. Additionally, we have opened new opportunities for Sandia to engage the neuroscience community at the Society for Neuroscience annual meeting (Washington, DC, Nov. 2008).
Figure: Comparing the frequency response of a neuron cell culture with simulated neuron circuits to estimate and constrain the model parameters.
(Contact: Richard Schiek)
January 2009
Sandia's scalable informatics toolkit - Titan - is now operational on Red Storm
One of the goals of the Network LDRD Grand Challenge is to harness HPC for informatics. We want to take the scalable computing capability we apply to engineering problems and bring it to bear on analyses of large, complex datasets. A recent technical achievement by a diverse team of Sandia researchers has brought together foundational work on the Titan informatics toolkit, HPC systems development and advanced database architectures to demonstrate this new capability.
We have demonstrated an end-to-end system that uses the parallel Netezza database architecture for efficient storage and retrieval of very large datasets and the lightweight kernel on Red Storm for optimized execution of parallel statistical analysis algorithms. This sort of distributed analysis allows us to assign each part of the task to a hardware architecture that is specialized for exactly that sort of computation.
The Titan toolkit's modular architecture made this work possible. Its component-oriented design allows us to connect, swap and optimize pieces of a system to bring the most appropriate algorithms, architectures, and interactive analysis tools to bear on any given problem. In this case, we added Titan components to integrate Sandia's Portals and LWFS libraries. This allowed the Red Storm service partition to act as a bridge between the remotely located Netezza database and the lightweight kernel running on Red Storm's compute nodes.
This demonstration of Titan running natively on Red Storm paves the way for HPC-powered analysis of informatics data sets of unprecedented scale and complexity arising from problems of critical national importance. It is a crucial step toward building an end-to-end system where an interactive, visual tool running on a desktop PC can exploit the full power of HPC platforms and high-speed parallel databases.
(Contact: David Rogers)
January 2009
Networks Grand Challenge LDRD External Advisory Board Meeting
The Networks Grand Challenge Laboratory Directed Research and Development project focuses on research in informatics analysis for large data sets with complex relationships -- specifically for use in cyber security and nonproliferation areas with national security mission impact. Researchers and analysts from six vice-presidencies, across Sandia technology and mission organizations, are partnering and combining their talents to develop transformational capabilities for the future.
The project team met for a second time with the External Advisory Board (EAB) -- a carefully selected group of academics, industrial scientists, and members of the intelligence community -- to present research advancements and to demonstrate a first Networks prototype of integrated informatics capabilities. In the outbrief and a subsequent report on the LDRD research, the EAB unanimously reiterated its view of the significance of the work for national problems and of Sandia's position to make a meaningful contribution to the field. The EAB recommended that the Networks GC team focus on the interconnections among the technical research areas, with aggressive use of collaborations and partnerships. The EAB also noted the need for advances in the state of the art in uncertainty quantification and prediction. We agree with the EAB's assessment and have restructured the team to move forward into a more challenging second year. Our new technical leadership is very strong and well-equipped to carry the Networks GC forward.
In a further tie between Sandia research and mission organizations, Networks GC managers are collaborating on Sandia cyber initiatives to leverage the informatics research for new opportunities in the national interest.
(Contact: Suzanne Rountree)
January 2009