

2006 Newsnotes


Advanced Computational Technology for MicroSystems

Several years of collaborative research and development by staff in the Electrical and MicroSystems Modeling Department have culminated in the release of four new advanced software tools for microsystem fabrication process analysis and Micro-Electro-Mechanical Systems (MEMS) design.

ChISELS 1.0 models the detailed surface chemistry and concomitant surface evolution occurring during microsystem fabrication processes in 2D or 3D. Examples of modeled processes are low pressure chemical vapor deposition (LPCVD), plasma enhanced chemical vapor deposition (PECVD), and reactive ion etching. ChISELS employs a ballistic transport and reaction model coupled with CHEMKIN software [1] to model the interacting physics and chemistry that drive the surface evolution. The dynamically evolving surface is captured by the level-set method, a flexible and robust method for capturing large changes in surface topography. Designed for efficient use on both single-processor workstations and massively parallel computers, ChISELS 1.0 leverages many recent advances in dynamic mesh refinement, load balancing, and scalable solution algorithms.
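
The level-set machinery mentioned above can be pictured with a minimal first-order sketch: an implicit surface phi(x,t) is advanced under a normal speed F by solving phi_t + F|grad phi| = 0 with an upwind (Godunov) update. The grid, speed, and time step below are illustrative assumptions only, not ChISELS code.

    // Minimal first-order level-set update for phi_t + F*|grad(phi)| = 0 on a
    // uniform 2D grid with a constant positive normal speed F (illustration only).
    #include <algorithm>
    #include <cmath>
    #include <vector>

    int main() {
      const int nx = 128, ny = 128;
      const double h = 1.0 / (nx - 1);   // grid spacing
      const double F = 1.0;              // normal speed (e.g., a deposition rate)
      const double dt = 0.5 * h / F;     // CFL-limited time step
      std::vector<double> phi(nx * ny);
      auto at = [&](int i, int j) -> double& { return phi[j * nx + i]; };

      // Initialize phi as the signed distance to a circle (the starting surface).
      for (int j = 0; j < ny; ++j)
        for (int i = 0; i < nx; ++i) {
          double x = i * h - 0.5, y = j * h - 0.5;
          at(i, j) = std::sqrt(x * x + y * y) - 0.25;
        }

      // One Godunov upwind step; a real solver repeats this many times and
      // periodically reinitializes phi to a signed distance function.
      std::vector<double> phi_new = phi;
      for (int j = 1; j < ny - 1; ++j)
        for (int i = 1; i < nx - 1; ++i) {
          double dxm = (at(i, j) - at(i - 1, j)) / h;  // backward difference in x
          double dxp = (at(i + 1, j) - at(i, j)) / h;  // forward difference in x
          double dym = (at(i, j) - at(i, j - 1)) / h;
          double dyp = (at(i, j + 1) - at(i, j)) / h;
          double grad = std::sqrt(std::pow(std::max(dxm, 0.0), 2) +
                                  std::pow(std::min(dxp, 0.0), 2) +
                                  std::pow(std::max(dym, 0.0), 2) +
                                  std::pow(std::min(dyp, 0.0), 2));
          phi_new[j * nx + i] = at(i, j) - dt * F * grad;
        }
      phi.swap(phi_new);  // the zero contour of phi is the evolved surface
      return 0;
    }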

ChISELS 1.0 has been released under the open-source GNU Lesser General Public License.
(Contact: Lawrence Musson)
Website: http://www.cs.sandia.gov/~wchisels

1. R. J. Kee et al., CHEMKIN, Release 4.0.2, Reaction Design, San Diego, CA, 2005.

SummitView 1.0 is a computational tool designed to quickly generate a 3D solid model, amenable to visualization and meshing, of the end state of a microsystem fabrication process. This capability has become critical to designers because of the very complex 3D MEMS device designs that can now be created with advanced multi-level micro-fabrication technologies. Because SummitView is based on 2D instead of 3D data structures and operations, it has significant speed and robustness advantages over previous tools. Tests comparing SummitView 1.0 with Sandia's first-generation 3D geometry modeler demonstrated a consistent speedup of approximately two orders of magnitude.

SummitView 1.0 will be commercially licensed by Sandia as part of the Sandia MEMS Design Tools.
(Contact: Rod Schmidt)
 
GBL-2D 1.0 is a 2D geometric Boolean library consisting of a set of C++ classes that represent geometric data and relationships, together with routines implementing algorithms for geometric Boolean operations and utility functions. Although developed as the modeling engine for SummitView 1.0, the library is designed as a robust and efficient general-purpose software library for performing a variety of standard Boolean operations (e.g., union (OR), XOR, intersection, and difference) on 2D data objects (e.g., lines, arcs, edge uses, and loops).
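
As a much-simplified illustration of the kind of Boolean operation such a library provides, the snippet below intersects two axis-aligned rectangles; the Rect type and function names are hypothetical stand-ins, not GBL-2D's actual API, which operates on general loops of lines and arcs.

    // Boolean intersection (AND) of two axis-aligned rectangles; a toy stand-in
    // for the general 2D Boolean operations described above.
    #include <algorithm>
    #include <iostream>
    #include <optional>

    struct Rect { double xmin, ymin, xmax, ymax; };

    // Returns the overlap of a and b, or an empty result if they are disjoint.
    std::optional<Rect> intersect(const Rect& a, const Rect& b) {
      Rect r{std::max(a.xmin, b.xmin), std::max(a.ymin, b.ymin),
             std::min(a.xmax, b.xmax), std::min(a.ymax, b.ymax)};
      if (r.xmin >= r.xmax || r.ymin >= r.ymax) return std::nullopt;
      return r;
    }

    int main() {
      Rect layer{0, 0, 10, 10};    // outline of a deposited layer
      Rect opening{4, 4, 12, 8};   // mask opening that overhangs the layer edge
      if (auto cut = intersect(layer, opening))
        std::cout << "intersection: [" << cut->xmin << ", " << cut->xmax
                  << "] x [" << cut->ymin << ", " << cut->ymax << "]\n";
      return 0;
    }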

GBL-2D is being released under the open-source GNU Lesser General Public License.
(Contact: Rod Schmidt)

Faethm 1.0 is a novel design tool that, when given a three-dimensional object, can infer from the object's topology the two-dimensional masks needed to produce that object with surface micromachining. Faethm implements an algorithm, recently developed and copyrighted at Sandia, that performs essentially the inverse of the operation performed by SummitView 1.0. The masks produced by Faethm can be generic, process-independent masks or, if given process constraints, specific to a target process, allowing 3D designs to be carried across multiple processes. With the release of Faethm 1.0, a fundamentally new design paradigm has been made available for MEMS designers to explore. Figure 2 compares the standard design path utilizing SummitView with the new design path utilizing Faethm.

Faethm will be commercially licensed by Sandia as part of the Sandia MEMS Design Tools.

Figure 1. ChISELS simulation of a 15-cycle Deep Reactive Ion Etching (DRIE) experiment.

 

Figure 2. Illustration of MEMS design paths using SummitView and Faethm.

(Contact: Richard Schiek)
December 2006


Optimization for Integrated Stockpile Evaluation

The Integrated Stockpile Evaluation (ISE) program has been organized to deal with the challenge of meeting the data needs of weapons system evaluation in an environment of shrinking budgets. Departments 1415 and 8962 are supporting this program by developing models to optimize resource allocation for ISE. These models can be used to experiment with different testing plans, and can be customized to address various objectives. For example, given a specific collection of samples, the optimization model can find an allocation of testing resources that maximizes the coverage of data needs. If cost information is available, then the model could incorporate a budget limit and re-evaluate the maximum possible coverage. On the other hand, given a hard limit on the coverage of data needs, the same basic model can determine the minimum number of samples required. 
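
A generic maximum-coverage formulation of the kind described (the symbols below are illustrative and not the actual ISE model) can be written as an integer program:

\[
\max \sum_{j} w_j y_j
\quad \text{subject to} \quad
y_j \le \sum_{i \in S_j} x_i \;\;\forall j, \qquad
\sum_{i} c_i x_i \le B, \qquad
x_i,\, y_j \in \{0,1\},
\]

where x_i = 1 if test i is performed, y_j = 1 if data need j is covered, S_j is the set of tests that can satisfy need j, w_j weights the importance of need j, c_i is the cost of test i, and B is the budget. Dropping the budget constraint and instead minimizing the total number of samples, sum_i x_i, subject to a required coverage level gives the minimum-sample variant described above.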

This research has resulted in a prototype optimization capability that runs on synthetic data. Large synthetic instances have been solved using the PICO integer programming solver developed by 1415. The next steps are to pilot this basic optimization capability on real ISE data and to extend the current resource allocation models in order to take into account temporal information. 

(Contact: Jon Berry)
December 2006


R&D 100 Award for Compute Process Allocator

Vitus Leung (1415), Michael Bender (SUNYSB), David Bunde (UIUC/Knox), Kevin Pedretti (1423), and Cindy Phillips (1415) won a prestigious 2006 R&D 100 award for the Compute Process Allocator (CPA). Vitus received the plaque for the CPA at R&D Magazine's gala awards banquet in Chicago on October 19. Rick Stulen, Sandia's Chief Technology Officer and Vice President of Science, Technology and Engineering, will host a celebration to honor this achievement at the Steve Schiff Auditorium Lobby on December 14 at 10 a.m. A ceremonial hanging of the plaque will take place. Below is the editorial on the CPA in the September 2006 awards issue of R&D Magazine.

Optimizing Resource Allocation


Parallel processing on supercomputers gives rise to the problem of resource allocation.  To address this issue, researchers at Sandia National Laboratories, Albuquerque, N.M., collaborated with researchers from the State Univ. of New York, Stony Brook, and the Univ. of Illinois, Urbana, to develop the Compute Process Allocator (CPA).  CPA is the first allocator to balance individual job allocation with future allocation over 10,000 processors, allowing jobs to be processed faster and more efficiently.

In simulations and experiments, CPA increased the locality and throughput on a parallel computer by 23% over simpler one-dimensional allocators.  In simulations, CPA increased the locality on a parallel computer by 1% over more time-consuming higher-dimensional allocators.  CPA is distributed and scales to over 10,000 nodes, while non-distributed allocators have been scaled to only 4,096 nodes.

(Contact: Vitus Leung)
November 2006


Solution Verification and Uncertainty Quantification Coupled

Sandia has demonstrated a coupling of solution verification (through error estimation) with uncertainty quantification. This approach has benefits in terms of accuracy, computational expense, computational reliability, and convenience. In terms of accuracy, controlling or correcting for errors leads to higher confidence in the uncertainty analysis and probabilistic design recommendations. In terms of computational expense, the use of error-correction on coarse meshes instead of non-error-corrected but fully converged fine meshes results in a 10x speedup, while still maintaining the same solution accuracy. In terms of computational reliability, the parameter-adaptive nature of this approach avoids the pitfall of legacy approaches where computational model results that are converged for one set of parameters are blindly assumed to be converged for another set of parameters. And in terms of convenience, this approach may eliminate the need for manual convergence studies, significantly reducing the overhead for analysts and designers.

                 no verification   simulation-level off-line   study-level off-line   on-line EE/Adapt
Accuracy               L                      M                         H                     H
Efficiency             H                      M                         L                     H
Reliability            L                      M                         H                     H
Convenience            H                      M                         L                     H

Figure 1. Comparison of solution verification strategies: H = high, M = medium, L = low.

This project pulled together a cross-disciplinary team from Centers 1400, 1500, and 1700. The key contributions were uncertainty analysis and probabilistic design capabilities from DAKOTA, global norm and quantity of interest error estimates from Coda, nonlinear mechanics analysis from Aria, data structures and h-refinement algorithms from SIERRA, and MEMS model development from MESA. Many new capabilities were developed within these codes for this demonstration milestone.

 

Figure 2. Alternate paths to compute approximate and reference force-displacement curves.

The nonlinear structural analysis of MEMS systems was demonstrated using both global-norm and quantity-of-interest error estimators. Two approaches for uncertainty quantification were developed: an error-corrected approach, in which simulation results are directly corrected for discretization errors, and an error-controlled approach, in which error-estimators drive adaptive h-refinement of the mesh. The former requires error estimates that are quantitatively accurate, whereas the latter can employ any estimator that is qualitatively accurate. Each of these techniques treats solution verification and uncertainty analysis as a coupled problem, recognizing that the simulation errors may be influenced by, for example, conditions present in the tails of input probability distributions. Combinations of these approaches were also explored. The most effective and affordable of these approaches were used in design studies for a robust and reliable bistable MEMS device.
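
In the error-corrected approach described above, the coarse-mesh quantity of interest is adjusted by its discretization-error estimate before entering the uncertainty analysis; schematically (in notation of our own choosing, not the project's),

\[
Q_{\mathrm{corrected}}(\theta) \approx Q_h(\theta) + E_h(\theta),
\]

where theta denotes the uncertain input parameters, Q_h is the quantity of interest computed on a mesh of size h, and E_h is the quantity-of-interest error estimate; the uncertainty analysis then samples theta and propagates Q_corrected rather than an expensive fully converged fine-mesh solution.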

(Contact: Mike Eldred or Jim Stewart)
November 2006


Unconstrained Plastering: All-Hexahedral Mesh Generation via Advancing Front Geometry Decomposition

The search for a reliable all-hexahedral mesh generation algorithm has been a constant area of international research for more than two decades. Many researchers have abandoned the search in favor of the widely available and highly robust tetrahedral mesh generation algorithms. However, analysts searching for highly accurate solutions still prefer all-hexahedral meshes for many applications. The Cubit mesh generation software team at Sandia National Laboratories is currently researching a new algorithm, called Unconstrained Plastering, which has the goal of generating high-quality all-hexahedral meshes on any arbitrary geometry assembly. This research leverages more than a decade of hexahedral mesh generation research at Sandia National Laboratories.

Hexahedral mesh generation is constrained by (1) strict global topology requirements of hexahedral elements and (2) geometric features of the model being meshed. If either of these is not fully considered, the result is either algorithm failure or poor-quality elements, both of which compromise the ability to perform accurate computational analysis. Through the years, dozens of algorithms for hexahedral mesh generation have been published. However, none has adequately considered both of these constraints, and as a result none has provided a robust solution for generating all-hexahedral meshes on arbitrary geometry.

Unconstrained Plastering is the next generation of hexahedral mesh generation research, building on the advantages of numerous previously published algorithms. Unconstrained Plastering is a geometry decomposition method that advances a front inward from an unmeshed volume boundary. Each front advancement partitions from the volume what will eventually become a topological sheet of hexahedra. Hexahedral elements have, by definition, three degrees of freedom. Previously published advancing-front hexahedral meshing algorithms constrain all three degrees of freedom with each front advancement. In contrast, Unconstrained Plastering constrains only the single degree of freedom in the direction of the front advancement. The remaining two degrees of freedom are left unconstrained until they are either constrained by subsequent nearby front advancements or until adjacent geometry decompositions are recognized as meshable with one of several well-known primitive meshing algorithms. By delaying the definition of degrees of freedom, Unconstrained Plastering is better able to conform to the previously described constraints of global hexahedral element topology and geometric feature conformity.

Research on Unconstrained Plastering is in its second year of funding. The example mesh image below illustrates that Unconstrained Plastering is able to generate high quality all-hexahedral meshes on non-trivial non-planar concave models. Research on Unconstrained Plastering is currently focused on complex front interactions encountered in complex geometric models and on creating conformal meshes on assembly models. If successful, this focus will allow Unconstrained Plastering to generate high-quality all-hexahedral meshes on increasingly complex geometries.

(Contact: Matt Staten)
October 2006


 

LDRDView v1.0 Released and Development of TITAN Technology

The Data Analysis and Visualization Department has released advanced software for information visualization and is advancing development of its TITAN technology – an infrastructure for the development of parallel information visualization.

Advanced Information Visualization
LDRDView v1.0, released at the end of FY06, is an advanced information visualization application that directly supports Sandia's LDRD program office and builds on previous work in Sandia's VxInsight tool. Many of the features added late in development are the direct result of interacting with the end users, making LDRDView a responsive, effective application for investigating abstract data.

LDRDView's purpose is to help users gain insight into the structure and relationships present within their data. It can analyze any set of documents such as LDRD proposals, email messages, news stories, scientific literature, problem reports, or patent filings. These documents are grouped by conceptual similarity and laid out in a three-dimensional landscape. Peaks within this landscape correspond to clusters of documents that are similar to one another (see Figure 1). This application supports keyword queries, concept queries, link highlighting, and other intuitive interactions with the data – all in support of investigating the structures and relationships hidden in the data.

Figure 1: Screen capture of the current LDRDView application, showing the landscape view, and the many detailed information views that all work in concert.

TITAN is Department 1424's scalable information visualization infrastructure and forms the core of our information visualization technology. Building on our expertise in scalable scientific visualization, we are adding components – visualization algorithms, data filters, and ways of looking at data – to a common core. This allows components to be swapped in and out, per customer needs. For example, we currently support the STANLEY text analysis engine, but customers may substitute their own, either by linking directly to the TITAN infrastructure or by creating files that can be imported by an application like LDRDView. Development on TITAN is proceeding rapidly; we have delivered our first prototype TITAN application to an external customer.
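
The swappable-component design described above can be pictured with a minimal sketch; the interface and class names below are hypothetical illustrations, not TITAN's actual API.

    // Hypothetical plug-in interface showing how a text-analysis engine could be
    // swapped behind a common visualization core; not TITAN's actual API.
    #include <cmath>
    #include <memory>
    #include <string>
    #include <vector>

    // Pairwise similarity scores between documents, consumed by a landscape view.
    struct SimilarityMatrix { std::vector<std::vector<double>> scores; };

    // Abstract engine interface; STANLEY or a customer-supplied engine would sit
    // behind an interface like this.
    class TextAnalysisEngine {
     public:
      virtual ~TextAnalysisEngine() = default;
      virtual SimilarityMatrix compare(const std::vector<std::string>& docs) = 0;
    };

    // Trivial stand-in engine that scores documents by length alone (illustration).
    class LengthEngine : public TextAnalysisEngine {
     public:
      SimilarityMatrix compare(const std::vector<std::string>& docs) override {
        SimilarityMatrix m;
        m.scores.assign(docs.size(), std::vector<double>(docs.size(), 0.0));
        for (size_t i = 0; i < docs.size(); ++i)
          for (size_t j = 0; j < docs.size(); ++j)
            m.scores[i][j] = 1.0 / (1.0 + std::fabs(double(docs[i].size()) -
                                                    double(docs[j].size())));
        return m;
      }
    };

    // The application holds only the abstract interface, so engines can be
    // substituted without touching the visualization components.
    SimilarityMatrix buildLandscapeInput(TextAnalysisEngine& engine,
                                         const std::vector<std::string>& docs) {
      return engine.compare(docs);
    }

    int main() {
      std::unique_ptr<TextAnalysisEngine> engine = std::make_unique<LengthEngine>();
      auto sim = buildLandscapeInput(*engine, {"short doc", "a somewhat longer doc"});
      return sim.scores.empty() ? 1 : 0;
    }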

Figure 2: End-user application shown in context of the TITAN infrastructure. A common visualization component, incorporated into the end-user application, provides a landscape view of data coming from text comparison technologies – for example the STANLEY engine. TITAN supports swapping these elements, so customers can substitute their own particular text analysis engine. This promotes flexible use and exchange of a variety of technologies.

(Contact: David Rogers)
October 2006


Simulation Study of Head Impact Leading to Traumatic Brain Injury

Traumatic brain injury, or TBI, is an unfortunate consequence of many civilian accident and military-related scenarios. Examples include head impact sustained in sports activities and automobile accidents as well as blast wave loading from improvised explosive devices (IEDs). Depending on the extent of the damage, TBI is associated with a loss of the functional capability of the brain to perform cognitive and memory tasks and to process information, as well as with a variety of motor and coordination problems. In many instances, the person involved in the event will not experience the full loss of brain function until days or weeks after the event has occurred. This suggests the existence of threshold levels and/or conditions of mechanical stress experienced by the brain that, if exceeded, lead to evolving symptoms of TBI in the days or weeks following an accident.

To avoid a trial-and-error approach involving large-scale medical testing of laboratory animals to study various scenarios leading to TBI, we have developed numerical simulation models of the human head to study various impact and blast wave conditions that lead to the onset of TBI. To accomplish this task, we have recently established a collaborative effort between Sandia researchers and the Mental Illness and Neuroscience Discovery (MIND) Institute, at the University of New Mexico. This collaboration permits us to create accurate models of the various tissues and geometries of the human head as well as to conduct simulations of head impact in order to establish a correlation between the incipient levels and durations of stress and strain energy experienced by the brain and the onset of TBI.

In this note, we present the results of a small study that simulates the early-time wave interactions occurring within the human head as a result of impact of an unrestrained person with the windshield of an automobile in a 30 mph head-on collision into a stationary barrier. We have conducted various simulations of this scenario over the past few years; however, the current work has been carried out with a higher fidelity head model over an extended simulation timescale.

Our 3-D head model was developed by importing a segmented interpretation (displaying distinct biological materials of bone, brain, and fluid) of a CT scan from a healthy female head into the shock physics hydrocode CTH. Specific material models were created for the skull, brain, cerebral spinal fluid (CSF), and windshield glass. The simulations were run on a parallel architecture computer employing 64 processors for each simulation run.

The results of the simulations demonstrate the complexities of the wave interactions that occur between the skull, brain, and CSF as a result of a frontal impact with the glass windshield. These interactions lead to focused regions within the brain that experience significant levels of pressure and deviatoric (shearing) stress. In particular, the pressure waves focus roughly 30 bars of compression in the brain at the impact site (Fig. 1) and 5 bars of tension at the site opposite (contra-coup) the impact point (Fig. 2). Furthermore, our simulations predict up to 30 bars of deviatoric (shearing) stress at the interface between the brain and the ventricles that conduct the CSF within the brain (Figs. 3 & 4). This interaction can tear brain tissue if the stress level is sufficiently high. The geometric complexities of the skull interior are such that there are a variety of sites that experience stress focusing, which can readily be seen in computer animations of the simulations. The significance of these results is that they occur on a time scale of roughly 1-2 ms and capture the early-time wave interactions that are potentially damaging to the brain, before any coarse body motion has begun; such motion can lead to additional damage resulting from a “sloshing” motion of the head. The results of this study have been summarized in an article to appear in the proceedings of the 25th Army Science Conference, which will convene in November 2006.

An immediate goal of this collaborative effort is to establish a quantitative correlation between specific levels of stress/strain energy and the onset of TBI under a variety of accident conditions. This effort involves studying the conditions under which accident victims develop TBI, as diagnosed by medical tools such as structural and functional MRI, and conducting accurate simulations of those events. Once such correlations exist, this approach can be used to investigate mitigating strategies that minimize the conditions under which TBI occurs. Future studies are planned to investigate the occurrence of TBI as experienced by blast victims of improvised explosive devices (IEDs). This is a significant topic of concern for the U.S. Army, and consequently we are pursuing funding support through the Department of Defense to address this problem.

Figures 1&2. Distribution of compressive & tensile pressures in the sagittal plane
(side view)
 

Figures 3&4. Distribution of deviatoric (shearing) stress in the sagittal and axial planes
(side & top views)

October 27th Sandia Lab News Article

(Contact: Paul Taylor, with Dr. Corey Ford, Department of Neurology and MIND Imaging Center, University of New Mexico Health Sciences Center, NM 87131)
October 2006


(Left) Libyan Desert Glass is found in an area spanning 6500 km2, in the Great Sand Sea of the Western Desert of Egypt, near the border with Libya. In 1998, an Italian mineralogist showed that a carved scarab in King Tut’s breastplate was made out of this glass.

High Performance Computing Provides Clues to Scientific Mystery
Enigmatic silica glass in the Sahara desert has survived nearly 30 million years. How did it form?

Most natural glasses are volcanic in origin and have chemical compositions consistent with equilibrium fractional melting. The rare exceptions are tektites formed by shock melting associated with the hypervelocity impact of a comet or asteroid. Libyan Desert Glass does not fall into either category, and has baffled scientists since its discovery by British explorers in 1932. The 1994 collision of Comet Shoemaker-Levy 9 with Jupiter provided Sandia with a unique opportunity to model a hypervelocity atmospheric impact. Insights gained from those simulations and astronomical observations of the actual event have led to a deeper understanding of the geologic process of impacts on Earth and presented a likely scenario for the formation of Libyan Desert Glass.

High-resolution hydrocode simulations, requiring huge amounts of memory and processing power, support the hypothesis that the glass was formed by radiative heating and ablation of sandstone and alluvium near ground zero of a 100 Megaton or larger explosion resulting from the breakup of a comet or asteroid.

Using Sandia's Red Storm supercomputer, we ran CTH shock-physics simulations to show how a 120-meter asteroid entering the atmosphere at 20 km/s (effective yield of about 110 Megatons) breaks up just before hitting the ground. This generates a fireball that remains in contact with the Earth's surface at temperatures exceeding the melting temperature of quartz for more than 20 seconds. Moreover, the air speed behind the blast wave exceeds several hundred meters per second during this time. These conditions are consistent with melting and ablation of the surface followed by rapid quenching to form the Libyan Desert Glass. These simulations require the massively parallel processing power provided by Red Storm.
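
As a rough consistency check on the quoted yield (assuming a stony density of about 2600 kg/m^3), the kinetic energy of a 120-meter asteroid at 20 km/s is

\[
E = \tfrac{1}{2}\rho\,\tfrac{4}{3}\pi r^{3} v^{2}
\approx \tfrac{1}{2}(2600\ \mathrm{kg/m^3})\,\tfrac{4}{3}\pi (60\ \mathrm{m})^{3} (2\times10^{4}\ \mathrm{m/s})^{2}
\approx 4.7\times10^{17}\ \mathrm{J} \approx 110\ \mathrm{Mt},
\]

using 1 Mt TNT equal to about 4.2 x 10^15 J.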

The risk to humans from such impacts is small but not negligible. Because of the low frequency of these events, the probability and consequences are both difficult to determine. The most likely scenario that would cause damage and casualties would not be a crater-forming impact, but a large aerial burst similar to the one that created this unusual natural glass. This research is forcing risk assessments to recognize and account for the process of large aerial bursts.

Red Storm was used to simulate the airburst and impact of a 120-meter diameter stony asteroid. Ablated meteoritic vapor mixes with the atmosphere to form an opaque fireball with a temperature of thousands of degrees. The hot vapor cloud expands to a diameter of 10 km within seconds, remaining in contact with the surface, with velocities of several hundred meters per second. Simulations suggest strong coupling of thermal radiation to the ground, and efficient ablation of the resulting melt by the high-velocity shear flow.

 

 

National Geographic Documentary

In February, 2006, Mark Boslough participated in an expedition to the site of the Libyan Desert Glass (LDG). The glass has a fission-track age of about 29.5 Ma. There is little doubt that the glass is the product of an impact event, but the precise mechanism for its formation is still a matter of debate. This lively discussion was a featured element of the documentary.

Evidence for a direct impact includes the presence of shocked quartz grains and meteoritic material within the glass. However, the vast expanse of the glass and lack of an impact structure suggests the possibility of radiative/convective heating from an aerial burst.

“Ancient Asteroid” will be shown on the National Geographic Channel on Sept. 21, 2006.

Camp was set up in “corridor B” in the southern part of the Great Sand Sea, within the area of LDG concentration. Corridors consist of Quaternary gravel and alluvium and are separated by linear dunes. The lower photograph is looking southeast. The geologic setting is shown by the inset map. LDG sits on silica-rich weathered remains of Upper Cretaceous Nubia-Group sandstones. The main area of concentration is 20 km across.

Left: A 120-meter asteroid explodes over the Egyptian desert in the 2006 National Geographic documentary Ancient Asteroid.

Right: Documentary animators used Red Storm simulation to visualize the effect of an asteroid explosion in the atmosphere above the city of London.

View this article as PDF

September 15th Sandia Lab News Article

(Contact: Mark Boslough)
September 2006


Red Storm’s impact on the High Performance Computing community is due in part to Sandia’s focus on interconnect performance for MPP systems

For over a decade, Sandia has pushed to increase the performance of the high-speed interconnect fabric of massively parallel processor (MPP) systems for scientific computing. Sandia's Red Storm system reflects the priority we place on interconnect performance with its custom SeaStar network interface/router chip. The success of Cray Inc. with its XT3 product, the commercial version of Red Storm, is testament to the soundness of our technical priorities. Cray has sold at least nine XT3 systems in the last year, including the 10 TF Pittsburgh Supercomputing Center system and the 40 TF Atomic Weapons Establishment (UK) system. Recent XT3 wins in competitive procurements by the DOE Office of Advanced Scientific Computing Research include the ORNL Petascale contract and the LBNL-NERSC 100 TF contract, the only two major acquisition awards made in the last 6 months.

The traditional measures of interconnect network performance are bandwidth and latency. However, a high-performance interconnect network is not the goal; rather, it is a means to an end. That end goal is high application performance and scalability. For MPP systems like Red Storm, this means scientific applications written to use the Message Passing Interface (MPI) standard. The ability to overlap computation with communication is an important performance characteristic that directly impacts an MPI application's scalability. Recent work by Doug Doerfler (1422) and Ron Brightwell (1423) raised the visibility of host processor overhead as another important measure of interconnect performance that, while widely recognized, has been difficult to quantify. Doug and Ron published and presented a paper summarizing their results at a recent conference, and the work was cited in an HPCwire article in the August 18 issue - http://www.hpcwire.com/hpc/815242.html. They have also established a website where the Sandia MPI Micro-Benchmark Suite (SMB) can be downloaded - http://www.cs.sandia.gov/smb.
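
The computation/communication overlap mentioned above amounts to posting nonblocking MPI operations, performing useful work while the message is in flight, and only then waiting for completion. A generic sketch of the pattern (not the SMB benchmark itself) follows.

    // Generic illustration of overlapping computation with communication using
    // nonblocking MPI calls; this is not the SMB benchmark code.
    #include <mpi.h>
    #include <vector>

    int main(int argc, char** argv) {
      MPI_Init(&argc, &argv);
      int rank, size;
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &size);

      const int n = 1 << 20;
      std::vector<double> send(n, rank), recv(n, 0.0), work(n, 1.0);
      int dest = (rank + 1) % size;         // simple ring exchange
      int src  = (rank - 1 + size) % size;

      MPI_Request reqs[2];
      MPI_Irecv(recv.data(), n, MPI_DOUBLE, src, 0, MPI_COMM_WORLD, &reqs[0]);
      MPI_Isend(send.data(), n, MPI_DOUBLE, dest, 0, MPI_COMM_WORLD, &reqs[1]);

      // Independent computation proceeds while the network (ideally with little
      // host-processor involvement) moves the message.
      double sum = 0.0;
      for (int i = 0; i < n; ++i) sum += work[i] * work[i];

      MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);  // recv is safe to use only now
      sum += recv[0];

      MPI_Finalize();
      return sum < 0.0 ? 1 : 0;
    }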

(Contact: James A. Ang)
September 2006


A Tool for Testing Supercomputer Software

Software testing and debugging has long been a challenge for parallel and distributed systems. Testing is a tedious and time-consuming task, often requiring as much as two-thirds of the overall cost of software production. On today's DOE supercomputers with tens of thousands of processors, testing and debugging is further complicated by the number of places a fault can occur.

Sandia National Laboratories researchers have developed a software tool called APITest to evaluate systems software components for Teraflop-scale supercomputers. APITest is unique among testing frameworks because it was designed to meet the specific needs of large-scale systems. For example, APITest is capable of “isolation testing” – allowing the developer to evaluate a component without worrying about the correctness of other components. APITest also allows the user to provide arbitrary definitions of success, and it can pass conditional tests based on statistical results. For example, the user can declare a test successful if 70% of sub-tests succeed.
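
The statistical pass criterion can be pictured as a simple aggregation over sub-test results; the snippet below is a generic illustration of that idea, not APITest's actual syntax.

    // Generic illustration of a statistical pass criterion over sub-tests
    // ("pass if at least 70% of sub-tests succeed"); not APITest syntax.
    #include <functional>
    #include <iostream>
    #include <vector>

    bool passesByRate(const std::vector<std::function<bool()>>& subtests,
                      double requiredPassRate) {
      if (subtests.empty()) return true;  // vacuously successful
      int passed = 0;
      for (const auto& t : subtests)
        if (t()) ++passed;
      return double(passed) / double(subtests.size()) >= requiredPassRate;
    }

    int main() {
      std::vector<std::function<bool()>> subtests = {
          [] { return true; }, [] { return true; },
          [] { return false; }, [] { return true; }};  // stand-ins for real checks
      std::cout << (passesByRate(subtests, 0.70) ? "PASS" : "FAIL") << "\n";  // 3/4 = 75%
      return 0;
    }
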
APITest was developed as part of a five-year project (ending in FY2006) to develop Scalable Systems Software (SSS) for Teraflop-scale systems. The SSS project was funded by the Office of Science under the Scientific Discovery through Advanced Computing (SciDAC) initiative and was a collaboration among eight DOE laboratories.

Although APITest was designed primarily to evaluate systems software, it is useful for the validation and testing of a wide range of software. For example, APITest is the primary test software for Cluster Resources Inc. – developer of the Moab cluster suite that includes software for scheduling, managing, and allocating resources for production computing clusters. APITest is also under consideration by MPICH, the premier open-source implementation of the Message Passing Interface (MPI) standard, made available by Argonne National Laboratory.

APITest represents a significant success of Sandia’s efforts in the SciDAC project. We expect the use of APITest to grow as we evolve towards Petaflop systems.


Figure 1: Supercomputer components and their relationships. Components in the gray cloud are accessed by all components.


(Contact: Neil Pundit)
August 2006


Mesh Tying Discretizations

Mesh-Tying
Predictive simulations of certain multiphysics problems require high-fidelity finite element discretizations for independently meshed computational domains. The motivating application for “mesh-tying” methods is coupling dissimilarly meshed subdomains in one complicated physical model, such as contact problems and structures with components from different labs simulated using Sandia's Adagio and Salinas software packages. Other applications come from multiphysics codes with distinct physical models and parallel mesh generation. Even on simple problems with flat interfaces, existing methods cannot predict the response in energy norms. The finite element “patch test” was developed in the 1960s to characterize the limitations of nonconforming finite element methods. Figure 1 shows a traditional mesh-tying method applied to a flat interface and the undesirable oscillatory error caused by the interface. We have developed computationally efficient mesh-tying algorithms that prevent these errors, providing solutions on tied meshes of the same quality as those obtained on a single conforming mesh. These methods will be provided through Trilinos in the Moertel package.

Mortar Methods and the Trilinos Package Moertel
Moertel supplies the tools that application codes need in order to implement nonconforming mesh-tying and contact formulations with mortar methods in complex three-dimensional geometries, as shown in Figure 2. Mortar methods couple different physics, discretizations, or triangulations along interior surfaces. Weak continuity at interfaces is imposed using dual Lagrange multipliers. These Lagrange multipliers are locally eliminated in a way that leads to linear systems suitable for multilevel preconditioners such as those in the Trilinos package ML. Mortar methods are optimal for flat interfaces only. Curved domain boundaries have regions of gap (void) and overlap (penetration) occurring as a consequence of the dissimilarly meshed grids; see Figure 3(a). We have developed two methods applicable to problems with curved interfaces.
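
Schematically, the weak continuity condition imposed by a mortar formulation requires the jump in the discrete solution across an interface Gamma to vanish against every multiplier test function:

\[
\int_{\Gamma} \mu_h \left( u_{1,h} - u_{2,h} \right) \, ds = 0
\qquad \text{for all } \mu_h \in M_h ,
\]

where u_{1,h} and u_{2,h} are the finite element solutions on the two sides of the interface and M_h is the Lagrange multiplier space. With dual (biorthogonal) multiplier bases, each multiplier couples to a single interface degree of freedom, which is what permits the local elimination and yields linear systems amenable to multilevel preconditioners such as those in ML.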

Least-Squares Finite Element Method
By replacing the Galerkin variational form with a least-squares variational form and using overlapping meshes, we have developed a new method [1] that handles homogeneous materials automatically. Least-squares methods are a simple and theoretically complete option for future simulation codes, and this approach leads to mesh-tying methods that satisfy patch tests of arbitrary order.

Generalized Lagrange-Multiplier Method
We have also developed a dual domain decomposition algorithm that is compatible with existing Galerkin discretizations. A necessary condition for a Galerkin method to pass a first-order patch test is that the signed areas of gaps and overlaps sum to zero. Initially, nodes along the contact boundary are perturbed to enforce the zero-area condition. Then the Galerkin formulation is extended to also compute a matching generalized flux along the non-coincident contact interface. The method achieves the same errors that would arise from a conforming mesh and also produces exact solutions for patch test problems [2]. Figure 3(b) shows a characteristic patch-test solution.

References
[1] P. B. BOCHEV AND D. M. DAY, A least-squares method for consistent mesh tying, International Journal of Numerical Analysis and Modeling, 65 (2006).

[2] P. B. BOCHEV, M. L. PARKS, AND L. A. ROMERO, A generalized-Lagrange multiplier method for mesh tying. In preparation, 2006.


(a) Example problem with flat interface

(b) Oscillatory solution error

Figure 1: Limitations of existing mesh-tying methods


(a) Overall Geometry and Mesh

(b) Junction closeup with contours of displacement
Figure 2: Mesh tying in complex three-dimensional geometries using Moertel


(a) Voids and overlaps

(b) Sample patch test solution
Figure 3: Mesh showing overlap region and sample solution that passes the patch test.

(Contact: Pavel Bochev)
July 2006


2006 CIS External Review Highly Successful

A highly successful review of the technical program of the Computer and Information Sciences (CIS) ST&E Research Foundation was held on June 7 – 9, 2006 in Albuquerque. This was the first CIS External Review conducted under the direction of the University of Texas (UT), in the person of David Watson, Associate Vice Chancellor for Sandia Operations, and his assistant, Ms. Dawne Settercerri. UT took the lead on handling the logistics, while the CIS Council, comprising Center 1400 and Group 8960, managed all of the technical content of the review. The review panel of external experts was again chaired by Michael Levine, Director of the Pittsburgh Supercomputing Center and Professor of Physics at Carnegie Mellon University. He was joined by:

  • Bill Blake, Senior VP for Product Development, Netezza Corp.
  • Steven Castillo, Dean of Engineering, NM State University
  • John Hopson, Director, Advanced Simulation and Computing Program, LANL
  • Peter Kogge, Computer Science & Engineering, Notre Dame U., Associate Dean of Engineering for Research, and the Ted H. McCourtney Professorship
  • A. B. (Barney) Maccabe, Professor of Computer Science, U. New Mexico and Director of UNM Center for High Performance Computing
  • Marc Snir, Head of Computer Science, U. Illinois, Urbana-Champaign and the Michael Faiman and Saburo Muroga Professorship
  • Danny Sorensen, Chair of Computation and Applied Mathematics, Rice University and the Noah G. Harding Professorship

The review was successful both technically and operationally. The UT and CIS organizers worked well together, managed the complications of transitioning to a new process, and produced a well-organized, efficient review. The CIS Research Foundation presented a set of well-developed and effective talks on a broad sampling of its technical R&D efforts. The panel was complimentary of both of these aspects of the review.

The committee was impressed with the generally excellent level of work, and evidence of good integration within CIS and collaboration with customers. It applauded the vertical integration of the groups, which pursue complementary, synergistic projects, and noted that the CIS program continues to play an exceptional, leading role in demonstrating the growing dependence of technological developments on high performance computing and simulation. It made particular note of outstanding examples of world-class research and leadership in:

  • Computer Systems – making Red Storm a success. This is “a project of national importance.”
  • Enabling Technologies – Current emphasis and future themes for ASC Verification and Validation.
  • Applications Development – ALEGRA-HEPD Simulations for Z-Machine Applications. Excellent example of experiment-simulation collaboration on this important project for SNL and the U.S.A.

For more information, contact Bill Camp (1400) and Len Napolitano (8900).

(Contact: Bill Camp and Len Napolitano)
June 2006


Converged Simulations of a TeraHertz Oscillator

A collaboration between researchers at NC State and Sandia’s CCIM center 1400 has led to a unique computational capability for modeling a Resonant Tunneling Diode (RTD). An RTD is a nanoscale electronic device (see Fig. 1) that is believed to autonomously oscillate under certain conditions by quantum tunneling effects. The TeraHertz frequency of the oscillations makes the RTD of interest for numerous applications, including medical imaging, sensing, chemical analysis, and applications of interest to DoD.

The Wigner-Poisson equations govern the behavior of the electrons in the device, where even a one-dimensional geometric model results in a two-dimensional calculation over space and momentum. The equations have both integral and differential components that lead to a dense interaction matrix between all nodes. A typical solution of the model is an entire current-voltage (I-V) curve, which can include hysteresis effects and regions of oscillations. Before this collaboration, the state of the art was a 6,000-node model in which oscillatory states were observed by long time integration.

Under a CSRI-funded collaboration, the Wigner-Poisson code was interfaced to the Trilinos solver framework to make use of the continuation, bifurcation, eigensolver, linear solver, nonlinear solver, and parallel data structure packages. A parallelization strategy was devised and implemented (Fig. 2) to increase the problem sizes that could be tackled, and an eigensolver was used to detect the onset of oscillations via a Hopf bifurcation.

Parallel computations on 48 processors of the ICC cluster were sufficient to perform mesh refinement studies for the entire I-V curve up to 2 million unknowns and demonstrate mesh convergence of the model for the first time. An analysis found that the algorithm was 95% parallel and 5% sequential. This scalability is adequate for this model (with ~60% parallel efficiency on 16 processors), but it shows that the algorithm would not scale well if convergence required the resources of Red Storm.
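
The quoted efficiency is consistent with Amdahl's law for a 5% sequential fraction:

\[
S(16) = \frac{1}{0.05 + 0.95/16} \approx 9.1,
\qquad
\frac{S(16)}{16} \approx 57\%,
\]

close to the observed ~60% on 16 processors; the same formula caps the achievable speedup at 1/0.05 = 20 regardless of processor count, which is why the algorithm would not benefit from Red Storm-scale resources without reducing the sequential fraction.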

The collaboration has led to three journal publications and has spawned a new collaboration under NECIS funding.

Fig 1: Schematic of Resonant Tunneling Diode

Fig 2: Parallel Computation Strategy


Fig 3: Change in current-voltage diagrams with mesh refinement from 6,000 nodes to a converged solution of 2 million nodes

(Contact: Andy Salinger 505-845-3523, with Prof. C.T. Kelley and Matthew Lasater of NC State)
June 2006


Sandia's Maps of Science at the New York Public Library

Kevin Boyack (1412), a founding member of the Advisory Board of Places & Spaces, has contributed three of the maps currently being shown at the Science, Industry and Business Library (SIBL) of the New York Public Library, located at Madison and 34th in Manhattan (http://www.nypl.org/research/calendar/exhib/sibl/celistsibl.cfm). All three maps were done in collaboration with Richard Klavans, SciTech Strategies, Berwyn, PA, and were enabled by work performed under an LDRD project whose purpose was to generate large-scale maps of science with associated indicators at very fine-grained levels.

The first of the Boyack/Klavans maps is entitled “The Structure of Science” and was based on papers indexed in 2002. In it, clusters of papers are overlaid like a galaxy on clusters of journals. The second map, “Map of Scientific Paradigms,” differed from the first in that it was based on papers indexed in 2003, and was generated from papers only without regard to journals.

Their third entry is an interactive illuminated diagram showing the “Map of Scientific Paradigms” together with a world map and projectors casting light patterns onto the wall-mounted posters, all of which is computer controlled with a touchscreen. The touchscreen allows the selection of topic areas (e.g., nanotechnology) and key scientists (e.g., Einstein, Watson & Crick), after which the areas of science and the geographic locations in the world associated with that science are simultaneously illuminated. There is also a default mode that sweeps like a radar through the scientific paradigms and their associated geographic locations. The Places & Spaces exhibit at SIBL will be shown through August 31, 2006.

(Contact: Kevin Boyack)
June 2006


New Xyce capability demonstrated: integrated electro-mechanical system simulation

View PDF file for details.

(Contact: Scott Hutchinson and Eric Keiter)
June 2006


Designing Contaminant Warning Systems

As part of the Sandia Water Initiative, Sandia National Laboratories is partnering with the Environmental Protection Agency's (EPA's) National Homeland Security Research Center, within the EPA's Threat Assessment Vulnerability Program, to develop contaminant warning systems for protection of our nation's water distribution systems. These warning systems use real-time water sensors to monitor water quality and provide early detection of chemical or biological contaminants. A central design challenge is to determine the locations of real-time water sensors so as to maximally protect human life and minimize detection delays. Sandia's discrete mathematics group (1415) has recently developed computational algorithms that can quickly find sensor placements and can analyze the optimality of a solution by exploiting the mathematical structure of this problem. Sandia's sensor placement solvers are actively being used to support the EPA's Water Sentinel Program. These methods have been effectively applied to water distribution systems that are 500 times larger than the water networks considered by other sensor placement methods, and they are being used to design contaminant warning systems that will be deployed during 2006.
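
A representative formulation of this sensor placement problem (the notation is illustrative, not the exact model used by the Sandia solvers) minimizes the expected impact of a set of contamination scenarios given a budget of p sensors:

\[
\min \sum_{a} \alpha_a \sum_{i \in L_a} d_{ai} \, x_{ai}
\quad \text{subject to} \quad
\sum_{i \in L_a} x_{ai} = 1 \;\;\forall a, \qquad
x_{ai} \le s_i , \qquad
\sum_{i} s_i \le p, \qquad
s_i,\, x_{ai} \in \{0,1\},
\]

where s_i = 1 if a sensor is placed at network location i, x_{ai} = 1 if scenario a is first detected at location i, L_a is the set of locations able to detect scenario a (including a dummy "not detected" option), d_{ai} is the resulting impact (for example, population exposed or detection delay), and alpha_a is the scenario weight or probability.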

(Contacts: Bill Hart, Ray Finley)
June 2006


DAKOTA 4.0 Deploys Optimization for Robust Designs

Version 4.0 of the DAKOTA Optimization and Uncertainty Quantification Toolkit was released in May 2006. DAKOTA is used throughout the Tri-Labs for design, surety, and validation studies. DAKOTA performs a multitude of iterative system analyses: it closes the design and analysis loop by using simulation codes as virtual prototypes to explore design scenarios and the effect of uncertainties.

DAKOTA 4.0 provides a significant step forward in usability with a Java-based graphical user interface, support for algebraically defined problems with AMPL, a library mode for seamless simulation integration, and upgrades to documentation and configuration management. DAKOTA is both an ASC production code and a framework for innovative research. DAKOTA 4.0 deploys many new algorithms, such as the latest in probabilistic design capabilities, which are being applied to enable microsystem designs that perform well regardless of manufacturing variations. DAKOTA furthers Sandia mission areas in DP, QASPR, MESA, HEDP, NISAC, and others. DAKOTA is used with Sandia's high performance simulation codes such as Aria, Alegra, Xyce, and SIERRA. DAKOTA is open source and has 3000 registered installations from sites all over the world. DAKOTA is led by Department 1411 and has contributors from across Centers 1400, 1500, 6600, and 8900. See http://endo.sandia.gov/DAKOTA/software.html

(Contacts: Scott Mitchell and Mike Eldred)
May 2006


CUBIT 10.1 introduces major improvements in geometry preparation for simulation

The CUBIT Team is pleased to announce Version 10.1 of the CUBIT Geometry and Mesh Generation Toolkit. Geometry preparation continues to be the major bottleneck in developing models for simulation at Sandia. CUBIT 10.1 introduces several new operations in geometry improvement and simplification to address these needs.

Modeling of complex assemblies using CAD geometry is an important aspect of weapons design. The imprint operation is a geometric operator that facilitates the joining of components of an assembly so that a continuous domain can be represented. This vital operation, traditionally handled by commercial third-party geometry libraries, is notoriously susceptible to geometric tolerance problems and often results in sliver surfaces and extraneous details that can take hours or days to resolve manually. CUBIT 10.1 introduces a new tolerant imprint operation that overcomes most of the issues common to third-party geometry kernels, reducing what was once a tedious and cumbersome process of hours to an automatic process of seconds.

CAD models developed using commercial modeling systems can also frequently contain extraneous features such as small edges or surfaces that are not needed for simulation. These features, typically discovered through tedious trial and error, can often have detrimental effects on the finite element mesh and the resulting simulation. CUBIT 10.1 introduces new capability for identifying and cleaning up poor geometry. Expanding its existing virtual geometry capability, new operations that allow collapsing edges and surfaces provide a powerful solution to this problem.

CUBIT continues to be the focus of state-of-the-art geometry and mesh generation research and development at Sandia National Laboratories. Sandia has long recognized the vital role geometry and meshing play in computational engineering simulation. CUBIT represents a significant investment in improving tools needed to reduce the time to analysis. The geometry repair and simplification tools introduced in CUBIT 10.1 are an example of how the CUBIT team is helping to facilitate science-based engineering transformation at Sandia through improved modeling and simulation tools.

 
Tolerant imprinting used to avoid creation of sliver surfaces.
Resulting mesh shown after tolerant imprint.

(Contact: Steve Owen)
April 2006


POP Science Runs
A series of runs were recently made using LANL's Parallel Ocean Program (POP) model. Mark Taylor (1433), in collaboration with Mat Maltrud (LANL), performed two 10-year simulations on 5000 processors of Red Storm. The model resolution was 1/10th of a degree (3600 x 2400 x 40), which resulted in roughly 350 million grid points and 1.5 TB of output data. These runs showed improved representations of the Gulf Stream separation, the NW corner, Agulhas rings, and the Kuroshio current.

(Figure panels: N Atlantic and NW Pacific)

A series of benchmark runs were also made to compare the performance of POP on Red Storm (RS), Blue Gene Light (BGL), and the Earth Simulator (ES). Some of the resulting data are shown in Table 1, which gives the number of processors required on each machine to achieve particular real-time to simulation-time ratios (simulation rates). It is interesting to note that ES requires roughly one-fourth the number of processors that RS needs for simulation rates of 1 yr/day and 3 yr/day. On the other hand, only RS can reach a simulation rate of 6 yr/day; BGL and ES have already been pushed past their peak performance at that rate, so adding more processors actually increases run times.

Table 1. Number of CPUs required on Red Storm, Blue Gene Light, and the Earth Simulator to achieve various simulation rates.

(Contacts: Mark Taylor and Jim Strickland)
April 2006


ALEGRA-HEDP

The ML solver team has resolved an extremely challenging technical issue with solution of singular and ill-conditioned H(curl) matrices for Z-pinch simulations. In a key verification and validation test of the ALEGRA high energy density physics code, noise-level values for the z-component of the magnetic field generated by the algebraic multigrid solver triggered spurious magnetic Rayleigh-Taylor instabilities in the overall solution of a liner implosion problem. Resolution of this issue required a sustained focus over several months, a high level of expertise in the mathematics of the solution and the details of the numerical implementation, innovative thinking about the underlying physics and numerical issues, and contributions from team members in several areas.

As a result of the team's efforts, this simulation recently and for the first time ran through the peak of the main power pulse without exhibiting magnetic Rayleigh-Taylor instability induced by background noise. The intended symmetric magnetic field solution was produced, and the resulting current and inductance histories are correct and in agreement with similar quasi-1D simulations and with experiment. This achievement is an important step toward modeling and simulation of Z-pinch phenomena.

This particular simulation used an imposed symmetry to isolate possible causes of experimental measurements of asymmetric axial power. Subsequent simulations will allow analysts to determine the effects of slots and gaps on system behavior, which will be relevant in determining how to redesign the load to eliminate undesirable features in experiments on the Z-machine. The solver enhancements have been incorporated in the Trilinos package and represent a substantial advance in the solvability of H(curl) systems arising during Z-pinch simulations.

(Contact: Randy Summers)
March 2006


ParaView

The Data Analysis and Visualization Department has made advances in visualization technology in recent months and has been able to showcase these to several high-profile external visitors, including Ambassador Linton Brooks. The new advances were made in visualization capabilities supporting Red Storm science calculations and in information visualization directly supporting Sandia's LDRD program office. During successful completion of a Level II milestone for the NNSA's Advanced Simulation and Computing (ASC) Program, the ParaView (www.paraview.org) software, to which Sandia has contributed the scalable parallel rendering algorithms, broke world rendering records by achieving rates of 8 billion triangles per second. The importance of this record is demonstrated by ParaView being used to visualize the largest simulations generated on Red Storm. Several images from ParaView are shown in Figure 1.

Figure 1. Visualizations in ParaView of billion-cell calculations of the destruction of the asteroid Golevka (left) and the breakdown of the polar vortex (right). Rendering these results required extreme scalability: ParaView utilized over 100 visualization computers, and the results were delivered through the corporate network at interactive rates to a standard Windows workstation, allowing analysts to interact with the data in real time to facilitate understanding and science.

These results were part of a joint press release by NVIDIA Corp. (a leading graphics card vendor) and Sandia (http://www.nvidia.com/object/IO_27539.html). The press release was picked up by other press agencies around the world. Unlimited-release movies of these simulation results are available on an internal website (www.ran.sandia.gov/viz).

A recent press conference held in 1424's visualization facility in building 899 (JCEL) also featured these results. The press conference centered on the accomplishments of Red Storm and included speeches from Sandia's President Tom Hunter and Ambassador Linton Brooks of NNSA. Local news media were on hand, and some of 1424's visualizations and facilities were featured in local broadcasts. These broadcasts are available on an internal website (http://www.ran.sandia.gov/viz/vizrnd/redstormPress.html).

Recent high-level visitors from ExxonMobil were shown Sandia’s advanced visualization capabilities including demonstrations of ParaView and a new information visualization tool that is being built for the LDRD office.

The new information visualization tool is based on the previous Sandia tool VxInsight™. The new tool provides Sandia's LDRD office the ability to view past and present LDRD projects in order to better perform overall portfolio management of the program. Besides ExxonMobil, companies and agencies such as Goodyear and NIH are interested in it for business management. It is also intended for use in homeland security applications, biology, and stockpile stewardship. A screenshot of an early prototype is shown in Figure 2.



Figure 2. Early prototype of the information visualization tool for the LDRD office.


(Contact: David R. White)
March 2006


Support Grows for dual-core Opteron Nodes on Red Storm

A Sandia development team led by Sue Kelly and John Van Dyke is modifying our Catamount lightweight kernel technology to make efficient use of dual-core Opteron processors. These processors are socket compatible with the current single-core Opteron processors in Red Storm. The team is also working with Cray Inc. to integrate this dual-core support into Cray's operating system software release v1.4. Outside of Sandia's development testing, the first place where this system software will be used at scale is likely to be in England.

On January 24, 2006, a public announcement was issued that the UK's Atomic Weapons Establishment had selected a Cray XT3 system with dual-core Opteron processors. The XT3 is Cray's commercial version of our Red Storm system. This selection represents a departure from AWE's recent use of IBM systems that were similar to the ASC-funded systems at LLNL. It also represents a strategic win for both Cray and Sandia, as it provides an important independent validation of the value of Red Storm's system architecture and the priority it places on interconnect performance. Finally, this selection strengthens Sandia's argument for a Red Storm upgrade to replace the original 2.0 GHz single-core processors with 2.4 GHz dual-core processors.

(Contact: James Ang)
February 2006


Red Storm, Z-pinch Simulations, Testing Scalable Supercomputing Systems, V&V, etc.

  1. Red Storm – Sandia's leadership is a major enabler of the Cray XT3's popularity worldwide. Red Storm balances computation and communication in a way that allows applications to scale well. While other machines rank higher than Red Storm on the TOP500 list, our analysis shows that a more representative benchmark, one reflecting large real applications, would improve Red Storm's ranking.
  2. Red Storm Capability Visualization Milestone – Not only has the ASC Level II milestone been met successfully by a team comprising Centers 4300 and 1400, but a new world record in visualization has been set: 8 billion triangles per second. A groundbreaking I/O record of 15 GB/s has also been set.
  3. Z-Pinch Simulations – A technical team from Centers 1400 and 1600 has developed an innovative algorithm that overcomes the instability problems in Z-pinch simulations. This has enabled a key verification and validation test.
  4. Breakthrough in Analyzing Intelligence Problems – Many intelligence problems consist of billions of relationships between individuals, places, and events. A common approach is to analyze these relationships as a graph-theoretic network to assess probable implication or exoneration (see the sketch following this list). A technical team led by Center 1400, in collaboration with LLNL, earned recognition for this breakthrough as a Gordon Bell Prize finalist at the recent Supercomputing Conference.
  5. Testing Scalable Supercomputing Systems Software – A technical team in Center 1400 has developed a test suite for testing the interactions between components of a supercomputing system. Through successive refinements, the suite has been adopted by all eight participating DOE laboratories. Further use is expected in many large DOE codes, whose components can likewise be tested for successful interaction.
  6. Systems Software Verification and Validation – Nuclear stockpile stewardship under the test ban relies on complex simulations on supercomputers. It is commonly assumed that the underlying systems software does not affect the results. We have begun to examine this assumption by undertaking a limited effort in verification and validation of systems software. Early progress has enabled improvements in regression testing, and we expect this work to enhance confidence in simulation results.
  7. Software Productivity – The current state of the art in software development recognizes a need for greater productivity. One effort in this direction is to devise a higher-level language based on global address space programming, an approach that carries an inherent adoption risk. We have made progress in carving out an incremental migration path whereby our investments in existing codes can be leveraged toward productivity gains. These results are summarized in two recent SAND reports.
  8. Utilizing FPGAs in Supercomputing – Field Programmable Gate Arrays can be selectively deployed for performance gains in supercomputing, but floating-point arithmetic on FPGAs has been a limitation. Our team has significantly enhanced FPGA floating-point capability and attracted commercial interest.
  9. Quantum Computing – Through an LDRD effort we are developing awareness of, interest in, and the capability to apply quantum computing in limited domains. A multi-center team has started simulating molecular properties using quantum computing.
  10. Future Supercomputers – A principal obstacle to multi-petaflops supercomputing is interconnect technology: orders-of-magnitude improvements are needed in the bandwidth and latency of communication among large numbers of processors. We have started on a path to address these challenges beyond the limits of the current Red Storm, and we have begun an effort to design future supercomputers through computer simulation. These efforts are receiving the attention of our peers and raising Sandia's visibility.
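
To illustrate the graph-theoretic approach described in item 4, the Python sketch below builds a toy relationship graph and uses breadth-first search to recover the shortest chain of links connecting two entities. All entity names and relationship records here are hypothetical placeholders; the actual analyses operate on graphs with billions of edges on massively parallel machines.

from collections import deque

# Hypothetical relationship records: (entity, entity) pairs such as
# person-to-person, person-to-place, or person-to-event links.
relationships = [
    ("person_A", "event_X"),
    ("person_B", "event_X"),
    ("person_B", "place_Y"),
    ("person_C", "place_Y"),
]

# Build an undirected adjacency list from the relationship records.
graph = {}
for a, b in relationships:
    graph.setdefault(a, set()).add(b)
    graph.setdefault(b, set()).add(a)

def connecting_chain(graph, start, goal):
    """Breadth-first search for the shortest chain of relationships
    linking two entities; returns None if no chain exists."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbor in graph.get(node, ()):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return None

print(connecting_chain(graph, "person_A", "person_C"))
# -> ['person_A', 'event_X', 'person_B', 'place_Y', 'person_C']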

(Contact: Neil Pundit)
February 2006

Atomistic-to-Continuum (AtC) Multiscale Analysis

Sandians Pavel Bochev and Rich Lehoucq received a three-year DOE award, "A Mathematical Analysis of Atomistic-to-Continuum (AtC) Coupling Methods," starting in fiscal year 2006. The award is a collaborative proposal with Don Estep (CSU), Jacob Fish and Mark Shephard (RPI), and Max Gunzburger (FSU). The research is funded under the Office of Science's Multiscale Mathematics program, which addresses science problems that span many time scales, from femtoseconds to years, and many length scales, from the atomic level to the macroscopic.

Materials, and in particular nanostructured materials, are governed by processes that are often controlled by the coupling of structures and dynamics spanning many length and time scales. Theoretical and computational treatment of multiscale phenomena, including the development of predictive capabilities, is therefore of fundamental importance. Synthesizing, or coupling, atomistic and continuum descriptions of physical phenomena is an attempt to ameliorate the overwhelming computational costs associated with an all-atomistic simulation.

Atomistic-to-Continuum (AtC) coupling enables a continuum calculation to be performed over the majority of a domain of interest while limiting the more expensive atomistic simulation to a subset of the domain. Combining the two is challenging, however, because atomistic calculations are based on individual non-local force interactions between atoms or molecules, while continuum calculations deal with bulk quantities that represent the average behavior of millions of atoms or molecules.
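
As a minimal, purely illustrative sketch of this domain-partitioning idea (not the coupling methods under study in this project), the Python script below treats a 1D harmonic chain under a uniform stretch: bonds inside a central window are kept explicitly atomistic, while the remainder of the bar is represented by a matching continuum energy density. All parameters are hypothetical, and the harmonic, uniform-strain setup is chosen so the two descriptions agree exactly.

import numpy as np

# --- Toy 1D setup (all values are illustrative placeholders) ---
k = 1.0          # harmonic bond stiffness
a = 1.0          # equilibrium atomic spacing
n_atoms = 101    # atoms at x = 0, a, 2a, ..., 100a
length = (n_atoms - 1) * a
strain = 0.01    # uniform applied strain

# Atomistic subregion: only the central 20% of the bar keeps explicit atoms;
# the rest is represented by a continuum with modulus E = k * a, chosen so the
# continuum energy density matches the harmonic chain.
atomistic_lo, atomistic_hi = 0.4 * length, 0.6 * length
E = k * a

def bond_energy(stretch):
    """Energy of one harmonic bond stretched by 'stretch' beyond equilibrium."""
    return 0.5 * k * stretch**2

# Full atomistic reference: every bond treated explicitly.
full_atomistic = (n_atoms - 1) * bond_energy(strain * a)

# Coupled AtC-style bookkeeping: explicit bonds inside the atomistic window,
# continuum strain energy (0.5 * E * strain^2 per unit length) outside it.
bond_centers = (np.arange(n_atoms - 1) + 0.5) * a
in_window = (bond_centers >= atomistic_lo) & (bond_centers <= atomistic_hi)
atomistic_part = in_window.sum() * bond_energy(strain * a)
continuum_length = length - in_window.sum() * a
continuum_part = 0.5 * E * strain**2 * continuum_length

print("full atomistic energy :", full_atomistic)
print("coupled AtC energy    :", atomistic_part + continuum_part)
print("explicit atoms kept   :", in_window.sum() + 1, "of", n_atoms)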

Past research in AtC models and algorithm development has paid off in the formulation of procedures that address specific applications, and it has begun to lead to some degree of generalization. However, much less effort has been directed at the fundamental mechanics and mathematical theory of AtC methods. For example, a rigorous mechanical formulation of coupled atomistic and continuum models, with error, stability, and convergence analysis and with uncertainty quantification, is lacking. A mathematical and mechanical framework that provides a unified theoretical foundation for the formulation, analysis, and implementation of AtC coupling methods is therefore needed. The goal of our research is to understand and quantify the limits of AtC coupling methods and the resulting impact on multiscale simulations.

The intended impact of our research is to improve the efficiency and fidelity of multiscale simulation efforts. For example, carbon nanotubes (CNTs) possess unique properties arising from their structure, but unavoidable defects have a major influence on their behavior. The need to understand the multiscale behavior of nanotubes prompted the development of equivalent continuum theories, which are valid only for perfect structures. Various methods couple molecular and continuum models in an attempt to account for defects in CNTs. Our research examines these AtC techniques and places them within a rigorous mathematical framework.

(Contacts: Rich Lehoucq and Pavel Bochev)
February 2006


Automated Force Field Fitting: Proof of Principle Achieved

Empirical force fields (FFs), also known as inter-atomic potentials, are essential for classical atomistic simulation methods like molecular dynamics (MD) and Monte Carlo (MC), which are widely used for computational materials research at Sandia. The FF determines the accuracy of a simulation and controls its ability to be predictive. However, identifying the functional form for a FF and determining parameter values to describe a particular material is a continual challenge and weakness of classical simulations. The value of an objective, automated method for performing these tasks is generally recognized, but ideas for achieving it have been lacking. Under a CSRF project, we have been pursuing the possibility of using a computer program capable of "learning" to create and fit a FF to a set of inter-atomic potential data on the fly. Specifically, we are applying a general artificial intelligence method called Genetic Programming (GP) to the FF fitting problem, as well as to other agent-based problems in security and sociology.

A preliminary milestone was achieved when our program successfully created a GP-generated FF that accurately interpolated discrete inter-atomic potential data provided as input; a simple Lennard-Jones function was used to generate the synthetic data. This proof of principle is encouraging, but much work remains to reach the overall goal of a robust, accurate, automated FF fitting capability. Next steps include interpretation of the resulting GP trees (much larger and more complicated versions of the example at the right), acceleration of the genetic algorithm (GA) search, and the use of more accurate electronic structure methods to generate discrete atomic potential data.
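
As a much simplified, hypothetical stand-in for that workflow, the Python sketch below generates synthetic Lennard-Jones data and then recovers the two Lennard-Jones parameters with a basic genetic algorithm. The actual CSRF effort uses genetic programming to evolve the functional form of the FF itself, not merely its parameters; all names, ranges, and settings here are illustrative only.

import random

# Lennard-Jones potential used to generate synthetic "target" data.
def lennard_jones(r, eps, sigma):
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6**2 - sr6)

# Synthetic discrete potential data (the input the fitter sees).
EPS_TRUE, SIGMA_TRUE = 1.0, 1.0
r_values = [0.9 + 0.05 * i for i in range(40)]          # r from 0.9 to 2.85
targets = [lennard_jones(r, EPS_TRUE, SIGMA_TRUE) for r in r_values]

def fitness(params):
    """Sum of squared errors between a candidate FF and the target data."""
    eps, sigma = params
    return sum((lennard_jones(r, eps, sigma) - t) ** 2
               for r, t in zip(r_values, targets))

def mutate(params, scale=0.1):
    """Gaussian perturbation, kept positive to stay physically meaningful."""
    return tuple(max(1e-3, p + random.gauss(0.0, scale)) for p in params)

# Basic generational GA: keep the best half, refill with mutated copies.
random.seed(0)
population = [(random.uniform(0.1, 3.0), random.uniform(0.5, 2.0))
              for _ in range(40)]
for generation in range(200):
    population.sort(key=fitness)
    survivors = population[:20]
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(20)]

best = min(population, key=fitness)
print("recovered (eps, sigma):", best, " error:", fitness(best))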



(Contact: Alexander Slepoy)
January 2006

