Advanced Computational Technology for MicroSystems
Several years of collaborative research and development by staff in the Electrical and MicroSystems Modeling Department have led to the development and release of four new advanced software tools for microsystem fabrication process analysis and Micro-Electro-Mechanical Systems (MEMS) design.

ChISELS 1.0 models the detailed surface chemistry and concomitant surface evolution occurring during microsystem fabrication processes in 2D or 3D. Examples of modeled processes are low-pressure chemical vapor deposition (LPCVD), plasma-enhanced chemical vapor deposition (PECVD), and reactive ion etching. ChISELS employs a ballistic transport and reaction model coupled with the CHEMKIN software [1] to model the interacting physics and chemistry that drive the surface evolution. The dynamically evolving surface is captured by the level-set method, a flexible and robust technique for tracking large changes in surface topography. Designed for efficient use on both single-processor workstations and massively parallel computers, ChISELS 1.0 leverages many recent advances in dynamic mesh refinement, load balancing, and scalable solution algorithms. ChISELS 1.0 has been released under an open-source GNU Lesser General Public License.

SummitView 1.0 is a computational tool designed to quickly generate a 3D solid model, amenable to visualization and meshing, of the end state of a microsystem fabrication process. This capability has become critical to designers because of the very complex 3D MEMS device designs that can now be created with advanced multi-level micro-fabrication technologies. Because SummitView is based on 2D rather than 3D data structures and operations, it has significant speed and robustness advantages over previous tools. Tests comparing SummitView 1.0 with Sandia's first-generation 3D geometry modeler demonstrated a consistent speedup of approximately two orders of magnitude. SummitView 1.0 will be commercially licensed by Sandia as part of the Sandia MEMS Design Tools. Its supporting 2D geometry library, GBL-2D, is being released under an open-source GNU Lesser General Public License.

Faethm 1.0 is a novel design tool that, given a three-dimensional object, can infer from the object's topology the two-dimensional masks needed to produce that object with surface micromachining. Faethm implements a recently developed, Sandia-copyrighted algorithm that performs essentially the inverse of the operation performed by SummitView 1.0. The masks produced by Faethm can be generic, process-independent masks or, given process constraints, masks specific to a target process, allowing 3D designs to be carried across multiple processes. With the release of Faethm 1.0, a fundamentally new design paradigm is available for MEMS designers to explore. Figure 2 compares the standard design path utilizing SummitView with the new design path utilizing Faethm. Faethm will be commercially licensed by Sandia as part of the Sandia MEMS Design Tools.
(Contact: Richard Schiek)
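To give a feel for the level-set surface tracking that ChISELS uses, here is a minimal sketch (not ChISELS code): a 2D signed-distance field is advanced outward at a uniform "deposition" speed with a first-order upwind scheme. The grid size, speed, time step, and initial circular seed are arbitrary illustrative choices.

```python
import numpy as np

# Minimal 2D level-set sketch: the surface is the zero contour of phi,
# advanced outward at uniform speed F by solving phi_t + F|grad phi| = 0.
# Illustrative only -- not the ChISELS implementation.
n = 128
h, F, dt, steps = 1.0 / n, 1.0, 0.5 / n, 50
x = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")
# Initial surface: a circular seed (signed distance, negative inside).
phi = np.sqrt((X - 0.5) ** 2 + (Y - 0.5) ** 2) - 0.1

for _ in range(steps):
    # One-sided differences (periodic via roll; the front stays interior).
    dxm = (phi - np.roll(phi, 1, axis=0)) / h
    dxp = (np.roll(phi, -1, axis=0) - phi) / h
    dym = (phi - np.roll(phi, 1, axis=1)) / h
    dyp = (np.roll(phi, -1, axis=1) - phi) / h
    # Godunov upwind gradient norm for outward motion (F > 0).
    grad = np.sqrt(np.maximum(dxm, 0) ** 2 + np.minimum(dxp, 0) ** 2 +
                   np.maximum(dym, 0) ** 2 + np.minimum(dyp, 0) ** 2)
    phi -= dt * F * grad  # advance the front

print("deposited area fraction:", float((phi < 0).mean()))
```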
Optimization for Integrated Stockpile Evaluation
The Integrated Stockpile Evaluation (ISE) program has been organized to meet the data needs of weapons system evaluation in an environment of shrinking budgets. Departments 1415 and 8962 are supporting this program by developing models to optimize resource allocation for ISE. These models can be used to experiment with different testing plans and can be customized to address various objectives. For example, given a specific collection of samples, the optimization model can find an allocation of testing resources that maximizes the coverage of data needs. If cost information is available, the model can incorporate a budget limit and re-evaluate the maximum possible coverage. Conversely, given a hard limit on the coverage of data needs, the same basic model can determine the minimum number of samples required. This research has produced a prototype optimization capability that runs on synthetic data. Large synthetic instances have been solved using the PICO integer programming solver developed by 1415. The next steps are to pilot this basic optimization capability on real ISE data and to extend the current resource allocation models to take temporal information into account.
(Contact: Jon Berry)
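A toy version of the coverage model described above can convey its structure; the test names, data-need sets, and capacity below are hypothetical, and a greedy heuristic stands in for the integer programming approach the project actually uses (PICO).

```python
# Toy max-coverage sketch of the ISE resource-allocation model (hypothetical
# data): each test covers a set of data needs; choose tests within a capacity
# limit to maximize coverage.  Greedy stand-in for the PICO ILP approach.
tests = {
    "radiographic_A": {"need1", "need2"},
    "thermal_B":      {"need2", "need3", "need4"},
    "electrical_C":   {"need4", "need5"},
    "vibration_D":    {"need1", "need5", "need6"},
}
capacity = 2  # number of tests the budget allows

chosen, covered = [], set()
for _ in range(capacity):
    # Pick the test covering the most still-uncovered needs.
    best = max((t for t in tests if t not in chosen),
               key=lambda t: len(tests[t] - covered))
    if not tests[best] - covered:
        break  # nothing new can be covered
    chosen.append(best)
    covered |= tests[best]

print("selected tests:", chosen)
print("covered", len(covered), "of", len(set().union(*tests.values())), "data needs")
```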
R&D 100 Award for Compute Process Allocator
Vitus Leung (1415), Michael Bender (SUNYSB), David Bunde (UIUC/Knox), Kevin Pedretti (1423), and Cindy Phillips (1415) won a prestigious 2006 R&D 100 award for the Compute Process Allocator (CPA). Vitus received the plaque for the CPA at R&D Magazine's gala awards banquet in Chicago on October 19. Rick Stulen, Sandia's Chief Technology Officer and Vice President of Science, Technology and Engineering, will host a celebration to honor this achievement at the Steve Schiff Auditorium Lobby on December 14 at 10 a.m., where the plaque will be ceremonially hung. Below is the editorial on the CPA from the September 2006 awards issue of R&D Magazine.

Parallel processing on supercomputers gives rise to the problem of resource allocation. To address this issue, researchers at Sandia National Laboratories, Albuquerque, N.M., collaborated with researchers from the State Univ. of New York, Stony Brook, and the Univ. of Illinois, Urbana, to develop the Compute Process Allocator (CPA). CPA is the first allocator to balance individual job allocation with future allocation over 10,000 processors, allowing jobs to be processed faster and more efficiently. In simulations and experiments, CPA increased the locality and throughput on a parallel computer by 23% over simpler one-dimensional allocators. In simulations, CPA increased the locality on a parallel computer by 1% over more time-consuming higher-dimensional allocators. CPA is distributed and scales to over 10,000 nodes, while non-distributed allocators have been scaled to only 4,096 nodes.
(Contact: Vitus Leung)
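The "locality" an allocator tries to maximize can be pictured with a simple metric: the average pairwise Manhattan distance among the mesh nodes assigned to a job. This is a generic illustration of the quantity at stake, not the CPA algorithm itself.

```python
from itertools import combinations

# Average pairwise Manhattan (L1) distance among a job's nodes on a 3D mesh:
# lower means better locality and less message contention.  A generic metric
# sketch, not the CPA algorithm.
def avg_pairwise_l1(nodes):
    pairs = list(combinations(nodes, 2))
    total = sum(sum(abs(a - b) for a, b in zip(p, q)) for p, q in pairs)
    return total / len(pairs)

compact   = [(x, y, z) for x in range(2) for y in range(2) for z in range(2)]
scattered = [(4 * i, 0, 0) for i in range(8)]
print("compact 2x2x2 block  :", avg_pairwise_l1(compact))
print("scattered along a row:", avg_pairwise_l1(scattered))
```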
Solution Verification and Uncertainty Quantification Coupled
This project pulled together a cross-disciplinary team from Centers 1400, 1500, and 1700. The key contributions were uncertainty analysis and probabilistic design capabilities from DAKOTA, global-norm and quantity-of-interest error estimates from Coda, nonlinear mechanics analysis from Aria, data structures and h-refinement algorithms from SIERRA, and MEMS model development from MESA. Many new capabilities were developed within these codes for this demonstration milestone.
The nonlinear structural analysis of MEMS devices was demonstrated using both global-norm and quantity-of-interest error estimators. Two approaches to uncertainty quantification were developed: an error-corrected approach, in which simulation results are directly corrected for discretization errors, and an error-controlled approach, in which error estimators drive adaptive h-refinement of the mesh. The former requires error estimates that are quantitatively accurate, whereas the latter can employ any estimator that is qualitatively accurate. Each of these techniques treats solution verification and uncertainty analysis as a coupled problem, recognizing that simulation errors may be influenced by, for example, conditions present in the tails of input probability distributions. Combinations of these approaches were also explored, and the most effective and affordable of them were used in design studies for a robust and reliable bistable MEMS device.
(Contact: Mike Eldred or Jim Stewart)
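A minimal sketch can show the error-corrected idea: estimate the discretization error of each sample by Richardson extrapolation from two grids and subtract it before accumulating statistics. The model problem, grid sizes, and input distribution below are invented for illustration; this is not the milestone codes' implementation.

```python
import random

# Sketch of error-corrected UQ (invented toy model, not DAKOTA/Coda/Aria):
# each Monte Carlo sample is corrected for discretization error estimated by
# Richardson extrapolation from a coarse and a fine grid solve.
def solve(k, h):
    # Stand-in "simulation": exact answer 1/k plus an O(h^2) grid error.
    return 1.0 / k + 2.0 * h ** 2

def corrected_sample(k, h_coarse=0.1, ratio=2.0, order=2):
    q_c = solve(k, h_coarse)
    q_f = solve(k, h_coarse / ratio)
    err_f = (q_c - q_f) / (ratio ** order - 1.0)  # estimated fine-grid error
    return q_f - err_f                            # error-corrected QoI

random.seed(0)
samples = [corrected_sample(random.gauss(2.0, 0.2)) for _ in range(10000)]
print("error-corrected mean QoI:", round(sum(samples) / len(samples), 4))
```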
Unconstrained Plastering: All-Hexahedral Mesh Generation via Advancing Front Geometry Decomposition
The search for a reliable all-hexahedral mesh generation algorithm has been an active area of international research for more than two decades. Many researchers have abandoned the search in favor of the widely available and highly robust tetrahedral mesh generation algorithms. However, analysts seeking highly accurate solutions still prefer all-hexahedral meshes for many applications. The Cubit mesh generation software team at Sandia National Laboratories is currently researching a new algorithm, called Unconstrained Plastering, whose goal is to generate high-quality all-hexahedral meshes on arbitrary geometry assemblies. This research leverages more than a decade of hexahedral mesh generation research at Sandia.

Hexahedral mesh generation is constrained by (1) the strict global topology requirements of hexahedral elements and (2) the geometric features of the model being meshed. If either is not fully considered, the result is algorithm failure or poor-quality elements, both of which compromise the ability to perform accurate computational analysis. Through the years, dozens of algorithms for hexahedral mesh generation have been published; however, none have adequately considered both constraints, and as a result none have provided a robust solution for generating all-hexahedral meshes on arbitrary geometry.

Unconstrained Plastering is the next generation of hexahedral mesh generation research, drawing on the advantages of numerous previously published algorithms. It is a geometry decomposition method that advances a front from an unmeshed volume boundary. Each front advancement partitions from the volume what will eventually become a topological sheet of hexahedra. Hexahedral elements have, by definition, three degrees of freedom. Previously published advancing-front hexahedral meshing algorithms constrain all three degrees of freedom with each front advancement. In contrast, Unconstrained Plastering constrains only the single degree of freedom in the direction of the front advancement. The remaining two degrees of freedom are left unconstrained until they are either constrained by subsequent nearby front advancements or the adjacent geometry decompositions are recognized as meshable with one of several well-known primitive meshing algorithms.

By delaying the definition of degrees of freedom, Unconstrained Plastering is better able to conform to the constraints of global hexahedral element topology and geometric feature conformity described above. Research on Unconstrained Plastering is in its second year of funding. The example mesh image below illustrates that Unconstrained Plastering can generate high-quality all-hexahedral meshes on non-trivial, non-planar, concave models. Current research focuses on the complex front interactions encountered in complex geometric models and on creating conformal meshes on assembly models. If successful, this work will allow Unconstrained Plastering to generate high-quality all-hexahedral meshes on increasingly complex geometries.
(Contact: Matt Staten)
LDRDView v1.0 Released and Development of TITAN Technology
The Data Analysis and Visualization Department has released advanced software for information visualization and is advancing development of its TITAN technology, an infrastructure for the development of parallel information visualization.
Advanced Information Visualization
LDRDView's purpose is to help users gain insight into the structure and relationships present within their data. It can analyze any set of documents, such as LDRD proposals, email messages, news stories, scientific literature, problem reports, or patent filings. These documents are grouped by conceptual similarity and laid out in a three-dimensional landscape. Peaks within this landscape correspond to clusters of documents that are similar to one another (see Figure 1). The application supports keyword queries, concept queries, link highlighting, and other intuitive interactions with the data, all in support of investigating the structures and relationships hidden in the data.
TITAN is Department 1424's scalable information visualization infrastructure and forms the core of our information visualization technology. Building on our expertise in scalable scientific visualization, we are adding components (visualization algorithms, data filters, and ways of looking at data) to a common core. This allows components to be swapped in and out per customer needs. For example, we currently support the STANLEY text analysis engine, but customers may substitute their own, either by linking directly to the TITAN infrastructure or by creating files that can be imported by an application like LDRDView. Development is proceeding rapidly on TITAN; we have delivered our first prototype TITAN application to an external customer.
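The "grouped by conceptual similarity" step can be approximated with off-the-shelf tools. The sketch below is an assumption-laden stand-in, not LDRDView or the STANLEY engine: it embeds a handful of toy documents as TF-IDF vectors and clusters them with k-means; each cluster would become a "peak" in a landscape view.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Toy stand-in for conceptual grouping (not LDRDView/STANLEY): TF-IDF
# features plus k-means clustering of a few invented documents.
docs = [
    "MEMS fabrication and surface micromachining masks",
    "hexahedral mesh generation for finite element analysis",
    "sensor placement optimization for water networks",
    "level set models of deposition and etching",
    "integer programming for sensor network design",
    "all-hex meshing via advancing fronts",
]
X = TfidfVectorizer(stop_words="english").fit_transform(docs)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
for lbl, doc in sorted(zip(labels, docs)):
    print(lbl, "|", doc)
```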
(Contact: David Rogers)
Simulation Study of Head Impact Leading to Traumatic Brain Injury
Traumatic brain injury, or TBI, is an unfortunate consequence of many civilian accident and military scenarios. Examples include head impacts sustained in sports activities and automobile accidents, as well as blast-wave loading from improvised explosive devices (IEDs). Depending on the extent of the damage, TBI is associated with a loss of the brain's functional capability to perform cognitive and memory tasks and to process information, as well as with a variety of motor and coordination problems. In many instances, the person involved will not experience the full loss of brain function until days or weeks after the event has occurred. This suggests the existence of threshold levels and/or conditions of mechanical stress experienced by the brain that, if exceeded, lead to evolving symptoms of TBI in the days or weeks following an accident.

To avoid a trial-and-error approach involving large-scale medical testing of laboratory animals, we have developed numerical simulation models of the human head to study the impact and blast-wave conditions that lead to the onset of TBI. To accomplish this task, we recently established a collaboration between Sandia researchers and the Mental Illness and Neuroscience Discovery (MIND) Institute at the University of New Mexico. This collaboration permits us to create accurate models of the various tissues and geometries of the human head and to conduct simulations of head impact in order to establish a correlation between the incipient levels and durations of stress and strain energy experienced by the brain and the onset of TBI.

In this note, we present the results of a small study that simulates the early-time wave interactions occurring within the human head when an unrestrained person strikes the windshield of an automobile in a 30 mph head-on collision with a stationary barrier. We have conducted various simulations of this scenario over the past few years; the current work, however, was carried out with a higher-fidelity head model over an extended simulation timescale. Our 3-D head model was developed by importing a segmented interpretation (distinguishing the biological materials of bone, brain, and fluid) of a CT scan of a healthy female head into the shock-physics hydrocode CTH. Specific material models were created for the skull, brain, cerebrospinal fluid (CSF), and windshield glass. The simulations were run on a parallel-architecture computer employing 64 processors per run.

The results demonstrate the complexity of the wave interactions that occur among the skull, brain, and CSF as a result of a frontal impact with the glass windshield. These interactions lead to focused regions within the brain that experience significant levels of pressure and deviatoric (shearing) stress. In particular, the pressure waves focus roughly 30 bars of compression in the brain at the impact site (Fig. 1) and 5 bars of tension at the site opposite the impact point (the contrecoup, Fig. 2). Furthermore, our simulations predict up to 30 bars of deviatoric stress at the interface between the brain and the ventricles that conduct the CSF within the brain (Figs. 3 & 4). If the stress level is sufficiently high, this interaction produces a tearing effect on the brain tissue.
The geometric complexities of the skull interior are such that a variety of sites experience stress focusing, which can readily be seen in computer animations of the simulations. The significance of these results is that they occur on a time scale of roughly 1-2 ms, capturing the early-time wave interactions that are potentially damaging to the brain before any coarse body motion has begun; that later motion can lead to additional damage from a “sloshing” of the head. The results of this study are summarized in an article to appear in the proceedings of the 25th Army Science Conference, which convenes in November 2006.

An immediate goal of this collaborative effort is to establish a quantitative correlation between specific levels of stress/strain energy and the onset of TBI under a variety of accident conditions. This involves studying the conditions under which accident victims develop TBI, as diagnosed by medical tools such as structural and functional MRI, and conducting accurate simulations of the events. Once such correlations exist, this approach can be used to investigate mitigation strategies that minimize the conditions under which TBI occurs. Future studies are planned to investigate TBI as experienced by blast victims of improvised explosive devices (IEDs). This is a significant topic of concern for the U.S. Army, and consequently we are pursuing funding support through the Department of Defense to address this problem.
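A back-of-the-envelope calculation suggests why waves reflect and focus at tissue interfaces. The sketch below uses the 1D pressure reflection coefficient with textbook-order densities and sound speeds; these are illustrative values, not the material models used in the CTH study.

```python
# Pressure reflection at a 1D interface, R = (Z2 - Z1) / (Z2 + Z1), where
# Z = rho * c is acoustic impedance.  Illustrative textbook-order tissue
# properties -- NOT the CTH material models used in the study.
materials = {                      # (density kg/m^3, sound speed m/s)
    "skull": (1900.0, 2900.0),
    "brain": (1040.0, 1540.0),
    "CSF":   (1000.0, 1500.0),
}

def impedance(name):
    rho, c = materials[name]
    return rho * c

for a, b in [("skull", "brain"), ("brain", "CSF")]:
    z1, z2 = impedance(a), impedance(b)
    r = (z2 - z1) / (z2 + z1)      # pressure reflection coefficient
    print(f"{a} -> {b}: R = {r:+.2f} (|R| near 0 transmits, near 1 reflects)")
```

The large impedance mismatch at the skull-brain interface reflects a substantial fraction of the wave, consistent with the focusing behavior described above.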
October 27th Sandia Lab News Article
High Performance Computing Provides Clues to Scientific Mystery
Most natural glasses are volcanic in origin and have chemical compositions consistent with equilibrium fractional melting. The rare exceptions are tektites, formed by shock melting associated with the hypervelocity impact of a comet or asteroid. Libyan Desert Glass falls into neither category and has baffled scientists since its discovery by British explorers in 1932. The 1994 collision of Comet Shoemaker-Levy 9 with Jupiter provided Sandia with a unique opportunity to model a hypervelocity atmospheric impact. Insights gained from those simulations, together with astronomical observations of the actual event, have led to a deeper understanding of the geologic process of impacts on Earth and presented a likely scenario for the formation of Libyan Desert Glass.

High-resolution hydrocode simulations, requiring huge amounts of memory and processing power, support the hypothesis that the glass was formed by radiative heating and ablation of sandstone and alluvium near ground zero of a 100-Megaton or larger explosion resulting from the breakup of a comet or asteroid. Using Sandia's Red Storm supercomputer, we ran CTH shock-physics simulations to show how a 120-meter asteroid entering the atmosphere at 20 km/s (effective yield of about 110 Megatons) breaks up just before hitting the ground. This generates a fireball that remains in contact with the Earth's surface, at temperatures exceeding the melting temperature of quartz, for more than 20 seconds. Moreover, the air speed behind the blast wave exceeds several hundred meters per second during this time. These conditions are consistent with melting and ablation of the surface followed by rapid quenching to form the Libyan Desert Glass. These simulations require the massive parallel processing power provided by Red Storm.

National Geographic Documentary
Evidence for a direct impact includes the presence of shocked quartz grains and meteoritic material within the glass. However, the vast expanse of the glass and the lack of an impact structure suggest the possibility of radiative/convective heating from an aerial burst.
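As a sanity check on the quoted effective yield, the kinetic energy of a 120-meter object at 20 km/s can be computed directly. The bulk density below is an assumed typical stony-asteroid value, not a figure from the article.

```python
import math

# Back-of-the-envelope check of the ~110 Megaton effective yield quoted
# above.  The density is an assumed stony-asteroid value.
diameter_m = 120.0
speed_m_s  = 20.0e3
density    = 2600.0                    # kg/m^3 (assumed)
mt_tnt_j   = 4.184e15                  # joules per megaton of TNT

volume = (math.pi / 6.0) * diameter_m ** 3   # sphere volume
mass   = density * volume
energy = 0.5 * mass * speed_m_s ** 2
print(f"kinetic energy ~ {energy / mt_tnt_j:.0f} Mt TNT")  # ~112 Mt
```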
September 15th Sandia Lab News Article
Red Storm's impact on the High Performance Computing community is due in part to Sandia's focus on interconnect performance for MPP systems.
A Tool for Testing Supercomputer Software
Software testing and debugging has long been a challenge for parallel and distributed systems. Testing is a tedious and time-consuming task, often requiring as much as two-thirds of the overall cost of software production. On today's DOE supercomputers with tens of thousands of processors, testing and debugging is further complicated by the number of places a fault can occur.
Mesh-Tying Mortar Methods and the Trilinos Package Moertel
Least-Squares Finite Element Method
Generalized Lagrange-Multiplier Method
References
[2] P. B. Bochev, M. L. Parks, and L. A. Romero, A generalized Lagrange multiplier method for mesh tying, in preparation, 2006.
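The section's details are not reproduced here, but the generic mortar/Lagrange-multiplier mesh-tying system has the familiar saddle-point form below. This is the standard textbook statement, not necessarily the generalized formulation of [2].

```latex
% Generic mortar mesh-tying saddle-point system (textbook form, not
% necessarily the generalized method of [2]).  K_i are subdomain stiffness
% matrices, M_i are mortar coupling matrices enforcing weak interface
% continuity u_1 = u_2, and \lambda is the interface traction multiplier.
\[
\begin{bmatrix}
  K_1 & 0    & M_1^{T}  \\
  0   & K_2  & -M_2^{T} \\
  M_1 & -M_2 & 0
\end{bmatrix}
\begin{bmatrix} u_1 \\ u_2 \\ \lambda \end{bmatrix}
=
\begin{bmatrix} f_1 \\ f_2 \\ 0 \end{bmatrix}
\]
```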
(Contact: Pavel Bochev)
2006 CIS External Review Highly Successful
A highly successful review of the technical program of the Computer and Information Sciences (CIS) ST&E Research Foundation was held June 7-9, 2006, in Albuquerque. This was the first CIS External Review conducted under the direction of the University of Texas (UT), in the person of David Watson, Associate Vice Chancellor for Sandia Operations, and his assistant, Ms. Dawne Settercerri. UT took the lead on handling the logistics, while the CIS Council, comprising Center 1400 and Group 8960, managed all of the technical content of the review. The review panel of external experts was again chaired by Michael Levine, Director of the Pittsburgh Supercomputing Center and Professor of Physics at Carnegie Mellon University, joined by a panel of additional external experts.
The review was successful both technically and operationally. The UT and CIS organizers worked well together, managed the complications of transitioning to a new process, and produced a well-organized, efficient review. The CIS Research Foundation presented a set of well-developed and effective talks on a broad sampling of its technical R&D efforts. The panel was complimentary about both of these aspects of the review.
For more information, contact Bill Camp (1400) and Len Napolitano (8900).
Converged Simulations of a TeraHertz Oscillator
A collaboration between researchers at NC State and Sandia's CCIM Center 1400 has led to a unique computational capability for modeling a Resonant Tunneling Diode (RTD). An RTD is a nanoscale electronic device (see Fig. 1) that is believed to oscillate autonomously under certain conditions through quantum tunneling effects. The terahertz frequency of the oscillations makes the RTD of interest for numerous applications, including medical imaging, sensing, chemical analysis, and applications of interest to the DoD. The Wigner-Poisson equations govern the behavior of the electrons in the device; even a one-dimensional geometric model results in a two-dimensional calculation over space and momentum. The equations have both integral and differential components that lead to a dense interaction matrix between all nodes. A typical solution of the model is an entire current-voltage (I-V) curve, which can include hysteresis effects and regions of oscillation. Before this collaboration, the state of the art was a 6,000-node model in which oscillatory states were observed by long time integration.

Under a CSRI-funded collaboration, the Wigner-Poisson code was interfaced to the Trilinos solver framework to make use of the continuation, bifurcation, eigensolver, linear solver, nonlinear solver, and parallel data structure packages. A parallelization strategy was devised and implemented (Fig. 2) to increase the problem sizes that could be tackled, and an eigensolver was used to detect the onset of oscillations via a Hopf bifurcation. Parallel computations on 48 processors of the ICC cluster were sufficient to perform mesh refinement studies for the entire I-V curve with up to 2 million unknowns, demonstrating mesh convergence of the model for the first time. An analysis found that the algorithm was 95% parallel and 5% sequential. This scalability is adequate for this model (with ~60% parallel efficiency on 16 processors) but shows that the algorithm would not scale well if convergence required the resources of Red Storm. The collaboration has led to three journal publications and has spawned a new collaboration under NECIS funding.
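The quoted ~60% efficiency on 16 processors is consistent with Amdahl's law for a 95%-parallel algorithm, as this quick check shows; it also makes plain why the method would struggle at Red Storm scale.

```python
# Amdahl's-law check of the efficiency figure quoted above: a 5% serial
# fraction gives ~57% parallel efficiency on 16 processors (quoted as ~60%)
# and collapses at large processor counts.
def efficiency(serial_fraction, p):
    speedup = 1.0 / (serial_fraction + (1.0 - serial_fraction) / p)
    return speedup / p

for p in (16, 48, 1000):
    print(f"{p:5d} processors: efficiency = {efficiency(0.05, p):.0%}")
```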
(Contact: Andy Salinger, 505-845-3523, with Prof. C. T. Kelley and Matthew Lasater of NC State)
Sandia's Maps of Science at the New York Public Library
Kevin Boyack (1412), a founding member of the Advisory Board of Places & Spaces, has contributed three of the maps currently being shown at the Science, Industry and Business Library (SIBL) of the New York Public Library, located at Madison and 34th in Manhattan (http://www.nypl.org/research/calendar/exhib/sibl/celistsibl.cfm). All three maps were done in collaboration with Richard Klavans, SciTech Strategies, Berwyn, PA, and were enabled by work performed under an LDRD project whose purpose was to generate large-scale maps of science with associated indicators at very fine-grained levels.
The first of the Boyack/Klavans maps is entitled “The Structure of Science” and was based on papers indexed in 2002. In it, clusters of papers are overlaid like a galaxy on clusters of journals. The second map, “Map of Scientific Paradigms,” differed from the first in that it was based on papers indexed in 2003, and was generated from papers only without regard to journals.
Their third entry is an interactive illuminated diagram showing the “Map of Scientific Paradigms” together with a world map, with projectors casting light patterns onto the wall-mounted posters, all computer controlled through a touchscreen. The touchscreen allows the selection of topic areas (e.g., nanotechnology) and key scientists (e.g., Einstein, Watson & Crick), after which the areas of science and the geographic locations in the world associated with that science are simultaneously illuminated. There is also a default mode that sweeps like a radar through the scientific paradigms and their associated geographic locations. The Places & Spaces exhibit at SIBL will be shown through August 31, 2006.
(Contact: Kevin Boyack)
New Xyce Capability Demonstrated: Integrated Electro-Mechanical System Simulation
View the PDF file for details. (Contact: Scott Hutchinson and Eric Keiter)
Designing Contaminant Warning Systems
As part of the Sandia Water Initiative, Sandia National Laboratories is partnering with the Environmental Protection Agency's (EPA's) National Homeland Security Research Center, within the EPA's Threat Assessment Vulnerability Program, to develop contaminant warning systems for the protection of our nation's water distribution systems. These warning systems use real-time water sensors to monitor water quality and provide early detection of chemical or biological contaminants. A central design challenge is to determine the locations of the real-time water sensors so as to maximally protect human life and minimize detection delays. Sandia's discrete mathematics group (1415) has recently developed computational algorithms that can quickly find sensor placements and analyze the optimality of a solution by exploiting the mathematical structure of this problem. Sandia's sensor placement solvers are actively being used to support the EPA's Water Sentinel Program. These methods have been effectively applied to water distribution systems 500 times larger than the water networks considered by other sensor placement methods, and they are being used to design contaminant warning systems that will be deployed during 2006.
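A stripped-down version of the placement problem illustrates its structure: given an impact matrix (the damage incurred if a contamination scenario is first detected at a given node), choose p sensor locations minimizing expected impact. The data and greedy heuristic below are illustrative stand-ins, not Sandia's solvers.

```python
# Toy sensor-placement sketch (invented data, not Sandia's solvers):
# impact[s][v] = damage if scenario s is first detected by a sensor at node
# v; the final column is the penalty when no sensor detects the scenario.
impact = [
    [10, 40, 90, 100],   # scenario 0: impacts at nodes 0..2, then penalty
    [80, 20, 50, 100],
    [60, 70, 15, 100],
    [30, 90, 25, 100],
]
n_nodes, p = 3, 2        # candidate nodes and sensor budget

def mean_impact(placement):
    # Each scenario incurs the best (lowest) impact among placed sensors,
    # or the no-detection penalty if that is smaller is never the case here.
    return sum(min([row[-1]] + [row[v] for v in placement])
               for row in impact) / len(impact)

placement = set()
while len(placement) < p:
    best = min(set(range(n_nodes)) - placement,
               key=lambda v: mean_impact(placement | {v}))
    placement.add(best)

print("sensors at nodes:", sorted(placement),
      "| expected impact:", mean_impact(placement))
```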
DAKOTA 4.0 Deploys Optimization for Robust Designs
Version 4.0 of the DAKOTA Optimization and Uncertainty Quantification Toolkit was released in May 2006. DAKOTA is used throughout the Tri-Labs for design, surety, and validation studies. DAKOTA performs a multitude of iterative system analyses: it closes the design and analysis loop by using simulation codes as virtual prototypes to explore design scenarios and the effect of uncertainties.
DAKOTA 4.0 provides a significant step forward in usability, with a Java-based graphical user interface, support for algebraically defined problems with AMPL, a library mode for seamless simulation integration, and upgrades to documentation and configuration management. DAKOTA is both an ASC production code and a framework for innovative research. DAKOTA 4.0 deploys many new algorithms, such as the latest probabilistic design capabilities, which are being applied to enable microsystem designs that perform well regardless of manufacturing variations. DAKOTA furthers Sandia mission areas in DP, QASPR, MESA, HEDP, NISAC, and others. DAKOTA is used with Sandia's high-performance simulation codes such as Aria, Alegra, Xyce, and SIERRA. DAKOTA is open source and has 3000 registered installations at sites all over the world. DAKOTA is led by Department 1411 and has contributors from across Centers 1400, 1500, 6600, and 8900. See http://endo.sandia.gov/DAKOTA/software.html
(Contacts: Scott Mitchell and Mike Eldred)
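Schematically, the "virtual prototype" loop DAKOTA automates looks like the sketch below: an outer design search wrapping an inner uncertainty sampling of a simulation. Everything here (the toy model, design grid, and robustness metric) is invented for illustration; DAKOTA's own input syntax is not shown.

```python
import random

# Schematic robust-design loop (invented toy model, not DAKOTA syntax):
# an outer loop searches the design variable while an inner loop samples a
# manufacturing tolerance; the objective penalizes performance variability.
def simulate(design, tolerance):
    # Stand-in virtual prototype: best nominal performance at design = 2.0.
    d = design + tolerance
    return (d - 2.0) ** 2

def robust_objective(design, n_samples=200):
    rng = random.Random(42)  # common random numbers across designs
    ys = [simulate(design, rng.gauss(0.0, 0.1)) for _ in range(n_samples)]
    mean = sum(ys) / n_samples
    std = (sum((y - mean) ** 2 for y in ys) / n_samples) ** 0.5
    return mean + std        # robust metric: mean plus one std deviation

designs = [1.0 + 0.1 * i for i in range(21)]    # crude grid search
best = min(designs, key=robust_objective)
print("robust design choice:", round(best, 2))
```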
CUBIT 10.1 Introduces Major Improvements in Geometry Preparation for Simulation
The CUBIT Team is pleased to announce Version 10.1 of the CUBIT Geometry and Mesh Generation Toolkit. Geometry preparation continues to be the major bottleneck in developing models for simulation at Sandia, and CUBIT 10.1 introduces several new geometry improvement and simplification operations to address these needs.

Modeling of complex assemblies using CAD geometry is an important aspect of weapons design. The imprint operation is a geometric operator that facilitates the joining of an assembly's components so that a continuous domain can be represented. This vital operation, traditionally handled by commercial third-party geometry libraries, is notoriously susceptible to geometric tolerance problems and often produces sliver surfaces and extraneous details that can take hours or days to resolve manually. CUBIT 10.1 introduces a new tolerant imprint operation that overcomes most of the issues common to third-party geometry kernels, reducing what was once a tedious and cumbersome process of hours to an automatic process of seconds.

CAD models developed in commercial modeling systems can also frequently contain extraneous features, such as small edges or surfaces, that are not needed for simulation. These features, typically discovered through tedious trial and error, can have detrimental effects on the finite element mesh and the resulting simulation. CUBIT 10.1 introduces new capability for identifying and cleaning up poor geometry: expanding the existing virtual geometry capability, new operations for collapsing edges and surfaces provide a powerful solution to this problem.

CUBIT continues to be the focus of state-of-the-art geometry and mesh generation research and development at Sandia National Laboratories. Sandia has long recognized the vital role geometry and meshing play in computational engineering simulation, and CUBIT represents a significant investment in reducing the time to analysis. The geometry repair and simplification tools introduced in CUBIT 10.1 are an example of how the CUBIT team is helping to facilitate science-based engineering transformation at Sandia through improved modeling and simulation tools.
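The "identify small features" step can be pictured with a trivial filter like the one below: flag model edges shorter than a mesh-size-relative tolerance as collapse candidates. This is a schematic stand-in, not CUBIT's algorithm or API.

```python
# Schematic small-feature detector (not CUBIT's algorithm or API): flag
# edges shorter than a fraction of the target mesh size as candidates for
# the collapse operations described above.
edges = {                   # hypothetical model edges and their lengths (mm)
    "e12": 0.004, "e13": 2.50, "e27": 0.010, "e31": 5.75, "e40": 0.120,
}
target_mesh_size = 1.0      # mm
tolerance = 0.05 * target_mesh_size

candidates = sorted(e for e, length in edges.items() if length < tolerance)
print("edges to consider collapsing:", candidates)
```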
(Contact: Steve Owen)
POP Science Runs
A series of benchmark runs was also made to compare the performance of POP on Red Storm (RS), Blue Gene/L (BGL), and the Earth Simulator (ES). Some of the resulting data are shown in Table 1, which gives the number of processors each machine requires to achieve particular simulation rates (ratios of simulated time to real time). It is interesting to note that ES requires roughly one-fourth the number of processors that RS needs for simulation rates of 1 yr/day and 3 yr/day. On the other hand, only RS can reach a simulation rate of 6 yr/day: BGL and ES have already been scaled past their performance peaks, where adding more processors actually increases run times.
(Contacts: Mark Taylor and Jim Strickland)
The ML solver team has resolved an extremely challenging technical issue with the solution of singular and ill-conditioned H(curl) matrices for Z-pinch simulations. In a key verification and validation test of the ALEGRA high energy density physics code, noise-level values for the z-component of the magnetic field generated by the algebraic multigrid solver triggered spurious magnetic Rayleigh-Taylor instabilities in the overall solution of a liner implosion problem. Resolution of this issue required a sustained focus over several months, a high level of expertise in the mathematics of the solution and the details of the numerical implementation, innovative thinking about the underlying physics and numerical issues, and contributions from team members in several areas. As a result of the team's efforts, this simulation recently ran, for the first time, through the peak of the main power pulse without exhibiting magnetic Rayleigh-Taylor instability induced by background noise. The intended symmetric magnetic field solution was produced, and the resulting current and inductance histories are correct and in agreement with similar quasi-1D simulations and with experiment. This achievement is an important step toward modeling and simulation of Z-pinch phenomena.

This particular simulation used an imposed symmetry to isolate possible causes of experimental measurements of asymmetric axial power. Subsequent simulations will allow analysts to determine the effects of slots and gaps on system behavior, which will be relevant in determining how to redesign the load to eliminate undesirable features in experiments on the Z-machine. The solver enhancements have been incorporated in the Trilinos package and represent a substantial advance in the solvability of H(curl) systems arising in Z-pinch simulations.
(Contact: Randy Summers)

The Data Analysis and Visualization Department has made advances in visualization technology in recent months and has been able to showcase these to several high-profile external visitors, including Ambassador Linton Brooks. The new advances were made in visualization capabilities supporting Red Storm science calculations and in information visualization directly supporting Sandia's LDRD program office. During the successful completion of a Level II milestone for the NNSA's Advanced Simulation and Computing (ASC) Program, the ParaView (www.paraview.org) software, to which Sandia has contributed the scalable parallel rendering algorithms, broke world rendering records by achieving rates of 8 billion triangles per second. The importance of this record is demonstrated by ParaView being used to visualize the largest simulations generated on Red Storm. Several images from ParaView are shown in Figure 1.

Figure 1: Visualizations in ParaView of billion-cell calculations of the destruction of the asteroid Golevka (left) and the breakdown of the polar vortex (right).

These models required extreme scalability of ParaView just to render the results, and ParaView utilized over 100 visualization computers to do so. The results were delivered through the corporate network at interactive rates to a standard Windows workstation, allowing analysts to interact with the data in real time to facilitate understanding and science. These results were part of a joint press release by NVIDIA Corp. (a leading graphics card vendor) and Sandia (http://www.nvidia.com/object/IO_27539.html), which was picked up by other press agencies around the world. Unlimited-release movies of these simulation results are available on an internal website (www.ran.sandia.gov/viz). A recent press conference held in 1424's visualization facility in building 899 (JCEL) also featured these results. The press conference centered on the accomplishments of Red Storm and included speeches from Sandia's President Tom Hunter and Ambassador Linton Brooks of NNSA. Local news media were on hand, and some of 1424's visualizations and facilities were featured in local news broadcasts, which are available on an internal website (http://www.ran.sandia.gov/viz/vizrnd/redstormPress.html).

Recent high-level visitors from ExxonMobil were shown Sandia's advanced visualization capabilities, including demonstrations of ParaView and a new information visualization tool being built for the LDRD office. The new tool is based on the previous Sandia tool VxInsight™ and gives Sandia's LDRD office the ability to view past and present LDRD projects in order to better perform overall portfolio management of the program. Besides ExxonMobil, companies and agencies such as Goodyear and NIH are interested in it for business management, and its intended uses also include homeland security applications, biology, and stockpile stewardship. A screenshot of an early prototype is shown in Figure 2.
Support Grows for Dual-Core Opteron Nodes on Red Storm
A Sandia development team led by Sue Kelly and John Van Dyke is modifying our Catamount lightweight kernel technology to make efficient use of dual-core Opteron processors. These processors are socket compatible with the single-core Opteron processors currently in Red Storm. The team is also working with Cray to integrate this dual-core support into Cray's operating system software release v1.4. Outside of Sandia's development testing, the first place this system software is likely to be used at scale is in England: on January 24, 2006, a public announcement was issued that the UK's Atomic Weapons Establishment (AWE) had selected a Cray XT3 system with dual-core Opteron processors. The XT3 is Cray's commercial version of our Red Storm system. This selection represents a departure from AWE's recent use of IBM systems similar to the ASC-funded systems at LLNL. The choice also represents a strategic win for both Cray and Sandia, as it provides important independent validation of the value of Red Storm's system architecture and the priority it places on interconnect performance. Finally, this selection strengthens Sandia's argument for a Red Storm upgrade to exchange the original 2.0 GHz single-core processors for 2.4 GHz dual-core processors.
Atomistic-to-Continuum (AtC) Multiscale Analysis
(Contacts: Rich Lehoucq and Pavel Bochev)
Automated Force Field Fitting: Proof of Principle Achieved
Maintained by: Bernadette Watts