U.S. Department of Energy
Lecture Series

Featured Speakers

2013

Lessons Learned from Computing in High Energy Physics

Dr. Amber Boehnlein

October 22, 2013
Location: EMSL Auditorium
Time: 10:00 am
Presented by:
Dr. Amber Boehnlein, Associate Director of Science of Computing, SLAC National Accelerator Laboratory

Dr. Boehnlein will discuss the scale of the computing requirements and the distributed nature of the collaborations running high energy physics experiments, and how these factors led to the development of highly organized computing models. This resulted in large-scale software development projects in grid middleware and the provisioning of national computing centers and networks. The successful deployment of this computing paradigm was a major factor in the ability of the Large Hadron Collider collaborations to rapidly achieve key physics goals, such as the discovery of the Higgs boson.

In contrast, the data rates and computing power traditionally required by experiments mounted at conventional light source end stations have been relatively modest and adequately addressed within the individual experimental groups. Due to advances in detector technology, the use of computer simulations to design experiments, and a desire for near real-time feedback during data collection, light source users are experiencing significant increases in data rates and computational needs. This trend, coupled with the development of open data policies, is leading to more formal computing paradigms. The computing systems and infrastructure developed for the Linac Coherent Light Source at SLAC drew on expertise from the high-energy physics computing community and provide an example of how lessons learned in one domain can be applied to another.

Dr. Amber Boehnlein was a staff scientist at Fermi National Accelerator Laboratory for 17 years and is a collaborator on the FNAL Tevatron collider experiment DØ. From 2008 to 2011, she was on assignment to the U.S. Department of Energy in the Office of High Energy Physics, where she was the program manager responsible for the oversight of the DOE U.S. Large Hadron Collider Operations program and for three Scientific Discovery through Advanced Computing projects. In 2011, she joined SLAC National Accelerator Laboratory and manages the Scientific Computing Applications Division. View Lecture Series Poster


Adding Value to the New MGI Landscape

Dr. David McDowell

May 23, 2013
Location: BSF 1007 Darwin Room
Time: 11:00 am
Presented by:
Dr. David McDowell, Regents' Professor and Carter N. Paden, Jr. Distinguished Chair in Metals Processing at Georgia Tech

Dr. McDowell will discuss how initiatives such as Integrated Computational Materials Engineering (ICME) and the Materials Genome Initiative (MGI) have drawn the broad attention of industry, government research laboratories, and universities to computational materials science and materials design. The MGI objectives of decreasing the time to discover, develop, certify, and insert new and improved materials into products have evolved rapidly in the past decade. They are part of the same "thread" that seeks to bring materials development into closer accord with the timescale of product development, thereby reducing time to market and adding value in terms of integrated multi-functionality of materials and products. Realizing the full impact of this collective vision requires a clear understanding of the need to advance a broad front of supporting technologies, built on a foundation of distributed collaboration among academia, industry, and government. It is argued that many of these technologies require a change of culture in university materials research, along with new modes for integrating engineering, the sciences, big data, and high performance computing. Workforce development issues are central to advancing the MGI vision, in addition to the new tools and methods that must be developed to support multiscale, multiphysics modeling, which in turn informs systems-based integrated design of materials and products.

David McDowell is a Regents' Professor and Carter N. Paden, Jr. Distinguished Chair in Metals Processing at Georgia Tech. In August 2012 he was named Founding Director of the Institute for Materials, one of the university's interdisciplinary research institutes charged with fostering an innovation ecosystem for research and education. His research focuses on the synthesis of experiment and computation to develop physically-based, microstructure-sensitive constitutive models for nonlinear and time-dependent behavior of materials, with emphasis on wrought and cast metals. He is the co-editor of the International Journal of Fatigue and co-director of the NSF-sponsored Center for Computational Materials Design, a joint effort between Georgia Tech and Penn State. View Lecture Series Poster


Big Process for Big Data

Dr. Ian Foster

May 21, 2013
Location: EMSL, Room 1077
Time: 2:30 pm
Presented by:
Dr. Ian Foster, Argonne Distinguished Fellow, Argonne National Laboratory

Large and diverse data result in challenging data management problems that researchers and facilities are often ill-equipped to handle. Dr. Ian Foster will demonstrate a new approach to these problems based on outsourcing research data management tasks to software-as-a-service providers. This approach can both achieve significant economies of scale and accelerate discovery by allowing researchers to focus on research rather than mundane information technology tasks. Dr. Foster will present early results with this approach in the Globus Online data movement, synchronization, and sharing service; describe his experiences applying Globus Online to supercomputer and experimental facility data management; and outline future work aimed at incorporating data cataloging and analysis capabilities into the framework.
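The outsourcing pattern can be sketched as a toy service object: the researcher submits a transfer task and polls its status, while the provider owns execution and integrity checking. This is a hypothetical, in-memory stand-in for illustration only, not the actual Globus Online API:

```python
import hashlib

class TransferService:
    """Toy stand-in for a hosted data-management service (hypothetical
    interface, not the real Globus Online API)."""

    def __init__(self):
        self.tasks = {}
        self.next_id = 0

    def submit(self, src_files):
        """Queue a transfer task and return its id. From here on the
        service, not the researcher, owns completion and verification."""
        tid = self.next_id
        self.next_id += 1
        self.tasks[tid] = {"files": dict(src_files), "status": "ACTIVE"}
        return tid

    def run_pending(self):
        # The provider processes tasks asynchronously on its own
        # infrastructure; here we do it inline, recording a checksum per
        # file so integrity is verifiable end to end.
        for task in self.tasks.values():
            if task["status"] == "ACTIVE":
                task["checksums"] = {
                    name: hashlib.sha256(data).hexdigest()
                    for name, data in task["files"].items()
                }
                task["status"] = "SUCCEEDED"

    def status(self, tid):
        return self.tasks[tid]["status"]

service = TransferService()
task_id = service.submit({"run42.h5": b"detector counts ..."})
service.run_pending()
```

In the hosted version the submit/poll steps would be web-service calls, which is what frees the researcher from the underlying retry, firewall, and verification chores.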

Ian Foster is the Arthur Holly Compton Distinguished Service Professor of Computer Science at the University of Chicago and an Argonne Distinguished Fellow at Argonne National Laboratory. He is also the Director of the Computation Institute, a joint unit of Argonne and the University. His research is concerned with the acceleration of discovery in a networked world. Dr. Foster is a fellow of the American Association for the Advancement of Science, the Association for Computing Machinery, and the British Computer Society. Awards include the British Computer Society's Lovelace Medal, honorary doctorates from the University of Canterbury, New Zealand, and CINVESTAV, Mexico, and the IEEE Tsutomu Kanai award. View Lecture Series Poster


Signaling Networks, Epigenetic Biochemical States of a Single Cell, and a Possible Second Stochastic Origin of Cancer: Theories and Computational Challenges

Professor Hong Qian

Monday, March 18, 2013
Location: BSF 1007 Darwin Room
Time: 9:30 am
Presented by:
Professor Hong Qian, Department of Applied Mathematics, University of Washington

Professor Qian will discuss the implications of a new theoretical narrative of cellular biochemical dynamics for cancer biology and its relation to evolutionary processes, as well as the need for a large-scale computational program to help advance such an approach. The talk will focus on a theory for cellular biochemical processes built by integrating biochemical reaction network modeling with a systems perspective of a cell. Biochemical reactions in a single cell, particularly those associated with gene transcription and regulation, cell signaling, and differentiation, are stochastic in nature. We apply stochastic kinetic models to self-regulating gene networks, phosphorylation/dephosphorylation, and GTPase signaling modules with feedback. Dynamic bistability is illustrated in biochemical systems. We argue that the fluctuations inherent in molecular processes (e.g., stochastic gene expression, chemical concentration fluctuations, etc.) do not disappear in mesoscopic cell-sized nonlinear systems; rather, they manifest themselves as isogenetic variations on a different time scale.

Isogenetic biochemical variations in terms of the stochastic attractors, e.g., phenotypical states, can have extremely long lifetimes. Transitions among such discrete states spend most of the time "waiting" and exhibit punctuated equilibria; the states can be naturally passed to "daughter cells" via a simple growth and division process. Implications of this new theoretical narrative of cellular biochemical dynamics for cancer biology and its relation to evolutionary processes will be discussed. A large-scale computational program is suggested to help advance this approach. View Lecture Series Poster
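The kind of stochastic bistability described above can be illustrated with a minimal Gillespie simulation of a self-activating gene. The parameters and the steep (Hill-4) feedback form are illustrative choices for this sketch, not taken from Professor Qian's models:

```python
import random

def production(n, k0=4.0, k1=40.0, K=20.0):
    """Self-activating gene: basal rate k0 plus a steep (Hill-4)
    positive-feedback term that switches on at high copy number."""
    return k0 + k1 * n**4 / (K**4 + n**4)

def gillespie(n0, t_end, gamma=1.0, seed=0):
    """Exact stochastic simulation (Gillespie) of the birth-death chain:
    n -> n+1 at rate production(n), n -> n-1 at rate gamma*n."""
    rng = random.Random(seed)
    t, n = 0.0, n0
    while t < t_end:
        birth, death = production(n), gamma * n
        total = birth + death
        t += rng.expovariate(total)          # time to next reaction
        if rng.random() < birth / total:     # which reaction fired?
            n += 1
        else:
            n -= 1
    return n

# The same chemistry, started from different initial copy numbers, relaxes
# to different long-lived attractors: the low state near k0/gamma or the
# high state near (k0 + k1)/gamma.
low_state = gillespie(n0=4, t_end=50.0, seed=1)
high_state = gillespie(n0=40, t_end=50.0, seed=2)
```

The two attractors play the role of the "epigenetic biochemical states": transitions between them are rare events on a much longer time scale than the individual reactions.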


Data-Driven Models for Protein Binding and Mutagenesis Effects

Associate Professor Julie Mitchell

Monday, March 18, 2013
Location: BSF 1007 Darwin Room
Time: 11:00 am
Presented by:
Associate Professor Julie Mitchell, Director of the BACTER Institute for Computational Biology, University of Wisconsin

Associate Professor Julie Mitchell from the University of Wisconsin will discuss how, despite many decades of research into physical and statistical potentials, many challenges remain in characterizing protein energetics. By recasting problems in structural biology as classification questions, new ways of utilizing experimental data emerge. The use of physics-based features in classification models allows us to learn many underlying physical principles of protein structure from the evolutionary record and experimental data. Several successful examples of such models will be presented, including predictive models for alanine scanning and mutagenesis, protein design, and the identification of nucleotide binding sites on protein surfaces. View Lecture Series Poster
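The recasting of an energetics question as a classification question can be sketched with a toy model. The data here are synthetic and the two "physics-based" features (normalized buried-surface-area change and fraction of hydrogen bonds lost) are hypothetical stand-ins; real models use many more features and real alanine-scanning labels:

```python
import math, random

# Synthetic stand-in for alanine-scanning data: each interface residue gets
# two normalized physics-based features and a binding-hotspot label.
rng = random.Random(0)
examples = []
for _ in range(200):
    f_sasa = rng.random()      # hypothetical: buried surface area lost
    f_hbond = rng.random()     # hypothetical: hydrogen bonds lost
    hotspot = 1 if 0.6 * f_sasa + 0.4 * f_hbond > 0.5 else 0
    examples.append(((f_sasa, f_hbond), hotspot))

def train(examples, lr=0.5, epochs=500):
    """Logistic regression fit by plain stochastic gradient descent."""
    w0 = w1 = b = 0.0
    for _ in range(epochs):
        for (x0, x1), y in examples:
            z = max(-30.0, min(30.0, w0 * x0 + w1 * x1 + b))
            p = 1.0 / (1.0 + math.exp(-z))   # predicted hotspot probability
            err = p - y
            w0 -= lr * err * x0
            w1 -= lr * err * x1
            b -= lr * err
    return w0, w1, b

w0, w1, b = train(examples)
accuracy = sum(
    ((w0 * x0 + w1 * x1 + b > 0) == bool(y)) for (x0, x1), y in examples
) / len(examples)
```

The learned weights are interpretable in the spirit of the talk: their signs and magnitudes recover which physical features drive the binding outcome.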


2012

The MIDAS Touch: Modeling Processor Physics for Extreme Scale Computing

Professor Sudhakar Yalamanchili

Tuesday, January 29, 2012
Location: BSF 1007 Darwin Room
Time: 10:00 am
Presented by:
Professor Sudhakar Yalamanchili, Center for Experimental Research in Computer Systems,
School of Electrical and Computer Engineering, Georgia Institute of Technology

Presentation: The MIDAS Touch: Modeling Processor Physics for Extreme Scale Computing

As industry moves to increasingly small feature sizes, performance scaling will become dominated by the physics of the computing environment. There are fundamental trade-offs to be made at the architectural level between performance, energy/power, and reliability. We refer to the body of knowledge addressing the impact of physics on such system level metrics as the processor physics. Relatively few efforts to date have targeted understanding, characterizing, and managing the multi-physics and multi-scale (nanoseconds to milliseconds) transient interactions between delivery, dissipation, and removal (cooling) of energy and their impact on system level performance. This talk will describe efforts at GT to construct scalable modeling, emulation, and simulation environments to i) understand how interacting physical phenomena affect architecture level tradeoffs, ii) apply this understanding to develop operational principles for reliable and scalable heterogeneous multicore architectures, and iii) demonstrate these principles with prototype implementations. View Lecture Series Poster
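One slice of the "processor physics" problem, the time-scale gap between fast power transients and slow thermal response, can be sketched with a lumped thermal RC model. This is a textbook simplification, and the R, C, and power values below are illustrative, not from the GT work:

```python
def simulate_temperature(power_trace, dt=1e-4, R=0.5, C=0.02, t_amb=45.0):
    """Lumped thermal RC model: C * dT/dt = P(t) - (T - t_amb)/R.
    R in K/W, C in J/K. The thermal time constant R*C is 10 ms, while the
    power trace can change every dt = 0.1 ms: the multi-scale gap the
    talk describes, compressed into one equation."""
    T = t_amb
    out = []
    for P in power_trace:
        T += dt * (P - (T - t_amb) / R) / C   # forward-Euler step
        out.append(T)
    return out

# A workload burst looks like a power step at this resolution:
# 100 W sustained for 100 ms, enough to approach thermal steady state.
trace = simulate_temperature([100.0] * 1000)
steady = trace[-1]   # approaches t_amb + P*R = 45 + 50 = 95 C
```

Even this toy shows why architecture-level decisions (when to boost, when to throttle) must reason over milliseconds while the electrical events they react to happen in nanoseconds.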


Spin-Orbit Coupling of the 5f Electrons In Actinide Oxides

Professor Paul Bagus

Tuesday, December 18, 2012
Location: EMSL Auditorium
Time: 10:30 am
Presented by:
Professor Paul Bagus, University of North Texas,
Center for Advanced Scientific Computing and Modeling

The coupling of the spin and orbital angular momentum of the electrons in the open shells of transition metal, rare-earth, and actinide cations is important, especially for the magnetic properties of oxides and other ionic compounds. When relativistic spin-orbit splitting is small, as for the 3d transition metal oxides, a maximum spin alignment of the open shell electrons explains the properties of high spin ionic crystals. For actinide cations, where the spin-orbit splitting of the 5f shell into 5f5/2 and 5f7/2 is much larger than for the 3d shell, there is a competition between aligning the spins of the 5f electrons and filling first the lower lying 5f5/2 sub-shell. This competition often leads to a significantly reduced spin alignment and, hence, a smaller magnetic moment. Novel concepts are used to explain the dependence of the spin alignment on the 5f shell occupation. The consequences of this spin-alignment for the magnetic moment are examined for metal cations and for embedded clusters modeling actinide oxides. View Lecture Series Poster
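The counting behind this competition can be made explicit; the following is standard angular-momentum bookkeeping, not material specific to the talk. For an f shell the orbital angular momentum is l = 3, so spin-orbit coupling splits the shell into two sub-shells:

```latex
j = l - \tfrac{1}{2} = \tfrac{5}{2}: \quad 2j+1 = 6 \text{ states } (5f_{5/2}), \qquad
j = l + \tfrac{1}{2} = \tfrac{7}{2}: \quad 2j+1 = 8 \text{ states } (5f_{7/2}).
```

In the LS (small spin-orbit) limit, Hund's first rule maximizes the total spin across all 14 f states; in the jj (large spin-orbit) limit, electrons fill the six 5f_{5/2} states first, and the mixed spin-up/spin-down character of those states yields a smaller net spin alignment and hence a reduced magnetic moment.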


Computer Simulation in the Physical & Life Sciences: From Chemistry to Materials & the Nano-Bio-Med Frontier

Wednesday, May 23, 2012
Location: BSF/1007 Darwin Room
Time: 1:00PM
Presented by:
Dr. Michael L. Klein, Laura H. Carnell Professor and Director
Institute for Computational Molecular Science
Temple University

The past decade has seen enormous progress in the broad application of computation to topical problems in science and engineering. By selected examples I will illustrate the current status of the field that employs computer simulation methodologies based on the principles of quantum mechanics and statistical mechanics to problems at the interface between materials science and chemical biology. The prospects for future applications in the biomedical arena will also be touched on, albeit briefly. View Lecture Series Poster


Uncertainty Quantification in Computational Models

Habib Najm

Wednesday, March 21, 2012
Location: BSF/1007 Darwin Room
Time: 9:00AM
Presented by:
Dr. Habib Najm
Combustion Research Facility
Sandia National Laboratories

Models of physical systems generally involve inputs/parameters that are determined from empirical measurements, and therefore exhibit a certain degree of uncertainty. Estimating the propagation of this uncertainty into computational model output predictions is crucial for purposes of model validation, design optimization, and decision support.

Recent years have seen significant developments in probabilistic methods for efficient uncertainty quantification (UQ) in computational models. These methods are grounded in the use of functional representations for random variables. In particular, Polynomial Chaos (PC) expansions have seen significant use in this context. The utility of PC methods has been demonstrated in a range of physical models, including structural mechanics, porous media, fluid dynamics, aeronautics, heat transfer, and chemically reacting flow. While high-dimensionality remains a challenge, great strides have been made in dealing with moderate dimensionality along with non-linearity and oscillatory dynamics.

In this talk, Dr. Najm will give an overview of UQ in computational models. He will cover the two key classes of UQ activities, namely: (1) the estimation of uncertain input parameters from empirical data, and (2) the forward propagation of parametric uncertainty to model outputs. He will cover the basics of PC UQ methods with examples of their use in both forward and inverse UQ problems, and will discuss methods for estimating the joint posterior density on uncertain model parameters given partial information from legacy experiments. View Lecture Series Poster
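The mechanics of a PC expansion can be shown on a one-dimensional toy problem: project the model output onto probabilists' Hermite polynomials and read the output mean and variance directly off the coefficients. This sketch assumes a single Gaussian input and uses brute-force numerical quadrature; the function names are ours, not from any UQ library:

```python
import math

def he(k, x):
    """Probabilists' Hermite polynomials He_k, orthogonal under the
    standard normal weight with E[He_k^2] = k!."""
    if k == 0:
        return 1.0
    h0, h1 = 1.0, x
    for n in range(1, k):
        h0, h1 = h1, x * h1 - n * h0   # recurrence He_{n+1} = x He_n - n He_{n-1}
    return h1

def pc_coefficients(f, order, n=4000):
    """Project y = f(xi), xi ~ N(0,1), onto He_k: c_k = E[f He_k] / k!.
    The expectation is done by midpoint integration over [-8, 8]."""
    cs = []
    for k in range(order + 1):
        total = 0.0
        for i in range(n):
            x = -8.0 + 16.0 * (i + 0.5) / n
            w = math.exp(-0.5 * x * x) / math.sqrt(2 * math.pi)
            total += f(x) * he(k, x) * w * (16.0 / n)
        cs.append(total / math.factorial(k))
    return cs

# Uncertain input x = mu + sigma*xi pushed through the model f(x) = x^2.
mu, sigma = 1.0, 0.2
coeffs = pc_coefficients(lambda xi: (mu + sigma * xi) ** 2, order=2)
pc_mean = coeffs[0]
pc_var = sum(math.factorial(k) * coeffs[k] ** 2 for k in range(1, 3))
# Exact moments of x^2 for normal x: mean = mu^2 + sigma^2 = 1.04,
# variance = 4*mu^2*sigma^2 + 2*sigma^4 = 0.1632.
```

The point of the representation is that, once the coefficients are known, output statistics and sensitivities come for free, with no further sampling of the (possibly expensive) model.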


Stochastic Multiscale Modeling for Physical and Biological Problems

George Karniadakis

Friday, March 2, 2012
Location: BSF/1007 Darwin Room
Time: 10:00AM
Presented by:
Professor George Karniadakis
Applied Mathematics, Brown University
Research Scientist, Department of Mechanical Engineering, MIT

Stochastic modeling is required in many applications that involve multiscale phenomena, where information is lost as part of the coarse-graining procedure, e.g., formulating mesoscale dynamics equations from atomistic descriptions using the Mori-Zwanzig approach. It is also required in other systems with draconian approximations, e.g., spatial lumping, where effective parameters are employed to model some of the dynamics; the resulting stochastic terms can be additive or multiplicative in form. In many of these cases, the solution of the corresponding stochastic PDEs requires treating problems in effectively high-dimensional spaces. In this talk, we will present effective new ways of dealing with this "curse of dimensionality" and present demonstrative examples from fluid mechanics, electric networks, and biology. View Lecture Series Poster
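The "curse of dimensionality" is easy to quantify with back-of-envelope arithmetic: a full tensor-product grid needs n^d model evaluations, while plain Monte Carlo sampling error shrinks as 1/sqrt(N) independent of dimension. The numbers below are illustrative:

```python
def tensor_grid_points(points_per_dim, dims):
    """A full tensor-product grid needs n^d points, so the cost grows
    exponentially with the number of random dimensions."""
    return points_per_dim ** dims

# 10 quadrature points per random dimension:
cost_3d = tensor_grid_points(10, 3)       # 1,000 model runs: feasible
cost_100d = tensor_grid_points(10, 100)   # 10^100 runs: hopeless

# Monte Carlo error ~ 1/sqrt(N) regardless of dimension, so ~1% error
# needs on the order of 1/0.01^2 = 10,000 samples in any dimension.
mc_samples_for_1pct = round(1.0 / 0.01 ** 2)
```

This gap is exactly why the methods the talk advertises (sparse representations, reduced-order and multi-level approaches) matter: they aim to beat both the exponential grid cost and the slow Monte Carlo rate.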


Taming Heterogeneous Parallelism with Domain Specific Languages

Kunle Olukotun

February 14, 2012
Location: EMSL Auditorium
Time: 1:00PM
Presented by:
Professor Kunle Olukotun
Stanford University

Computing systems are becoming increasingly parallel and heterogeneous; however, exploiting the full capability of these architectures is complicated because it requires application code to be developed with multiple programming models. A much more productive single programming model approach to heterogeneous parallelism uses domain specific languages (DSLs). DSLs provide high-level abstractions which improve programmer productivity and enable transformations to high performance parallel code. In this talk, Professor Olukotun will motivate the DSL approach to heterogeneous parallelism; show example DSLs from the domains of graph analysis, mesh-based PDE solvers, and machine learning that provide both high productivity and performance; and describe Delite, a framework that simplifies the development of DSLs embedded in Scala. View Lecture Series Poster
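The staged-execution idea behind DSL frameworks like Delite can be shown in miniature: operations build an expression graph rather than executing eagerly, so a backend can later transform the whole graph into fused, parallel code for whatever hardware is present. This pure-Python toy is our illustration of the pattern, not Delite itself:

```python
# Embedded mini-DSL: user code looks like ordinary arithmetic, but the
# overloaded operators record a graph instead of computing values.
class Expr:
    def __add__(self, other): return Op("add", self, other)
    def __mul__(self, other): return Op("mul", self, other)

class Vec(Expr):
    def __init__(self, data): self.data = list(data)
    def evaluate(self): return self.data

class Op(Expr):
    def __init__(self, kind, lhs, rhs):
        self.kind, self.lhs, self.rhs = kind, lhs, rhs
    def evaluate(self):
        # A real backend would fuse this whole graph into one parallel
        # loop (or GPU kernel); here we just interpret it recursively.
        a, b = self.lhs.evaluate(), self.rhs.evaluate()
        f = (lambda x, y: x + y) if self.kind == "add" else (lambda x, y: x * y)
        return [f(x, y) for x, y in zip(a, b)]

x = Vec([1.0, 2.0, 3.0])
y = Vec([10.0, 20.0, 30.0])
result = (x + y) * x          # builds a graph; nothing is computed yet
values = result.evaluate()    # the "backend" runs the staged graph
```

Because the graph exists before execution, the framework, not the application programmer, decides how to map it onto CPUs, GPUs, or a cluster, which is the productivity argument of the talk.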


Memory Systems for Extreme Scale Computing

Xian-He Sun

January 27, 2012
Location: BSF Darwin Room - 1007
Time: 10:00AM
Presented by:
Professor Xian-He Sun
Chair, Department of Computer Science, Illinois Institute of Technology

Technology advances are unbalanced. CPU performance has been improving at a much faster pace than memory technologies during the last three decades, which has led to the so-called memory-wall problem. In the meantime, newly emerged IT applications, such as computer animation, social networks, and sensor networks, are all data intensive, which has led to the so-called big-data problem. The lasting memory-wall problem compounded with the newly emerged big-data problem has changed the landscape of computing. CPU speed is no longer the performance bottleneck of a computing system; data access speed is. However, computing systems historically have been designed and developed to utilize CPU performance, not data access. A paradigm change is needed to support data-centric computing. In this talk we first review the history and concepts of the big-data and memory-wall problems. We then discuss the challenges of designing advanced memory systems for extreme-scale computing. Finally, we present some of our recent results in understanding and optimizing the performance of memory systems from the data-centric point of view.
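The memory-wall argument is essentially compound-growth arithmetic: if CPU performance improves roughly 50% per year while memory speed improves roughly 7% per year (illustrative round numbers in the spirit of the classic memory-wall estimates), the gap compounds dramatically:

```python
def gap_after(years, cpu_growth=0.50, mem_growth=0.07):
    """Relative CPU-vs-memory speed gap after compounding the two
    (illustrative) annual improvement rates."""
    return (1 + cpu_growth) ** years / (1 + mem_growth) ** years

gap_10y = gap_after(10)   # roughly a 30x gap after one decade
gap_30y = gap_after(30)   # a gap of tens of thousands after three decades
```

A cache hierarchy can hide part of this, but the compounding is why the talk argues that data access, not CPU speed, is now the bottleneck to design for.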

Professor Sun is the chairman and a professor of the Department of Computer Science and the director of the Scalable Computing Software Laboratory at the Illinois Institute of Technology (IIT), and a guest faculty member in the Mathematics and Computer Science Division at Argonne National Laboratory. Before joining IIT, he worked at the DOE Ames National Laboratory; at ICASE, NASA Langley Research Center; and at Louisiana State University, Baton Rouge; and was an ASEE fellow at the Naval Research Laboratory. Dr. Sun is an IEEE fellow, and his research interests include parallel and distributed processing, high-end computing, memory and I/O systems, and performance evaluation. He has close to 200 publications and four patents in these areas. View Lecture Series Poster


The PhyloFacts Microbial Phylogenomic Encyclopedia: Phylogenomic tools and web resources for the Systems Biology Knowledgebase

Dr. Kimmen Sjölander

January 12, 2012
Location: BSF Darwin Room - 1007
Time: 1:00PM
Presented by: Dr. Kimmen Sjölander
Associate Professor
Bioengineering, University of California, Berkeley

Dr. Sjölander will present an overview of the PhyloFacts Microbial Phylogenomic Encyclopedia, which was designed to improve the accuracy of functional annotation of microbial genomes through evolutionary reconstruction of gene families, integrating information from protein 3D structure, biological process, pathway association, protein-protein interaction and other types of experimental data to improve both the specificity and coverage of protein "function" prediction. Her talk will cover structural phylogenomics, key computational challenges being addressed in the PhyloFacts project, and a novel approach to ortholog identification. View Lecture Series Poster


2011

Programming Models in the Exascale Era

Prof. Barbara Chapman

Thursday, December 15, 2011
Location: BSF Darwin Room - 1007
Time: 9:30 AM
Presented by: Professor Barbara Chapman, University of Houston Department of Computer Science

Researchers around the world have begun to consider the requirements and implications of future exascale systems. It is assumed today that such platforms will be introduced around 2018-2020, and that they will be large clusters with very powerful compute nodes. One of the most urgent open questions is how these systems will be programmed. Will MPI continue to dominate at this level? If so, how will application developers achieve high performance within each compute node? If not, what will the programming model be? And how will existing applications be migrated to this new kind of platform?

In this presentation, Professor Chapman will discuss the exascale landscape and some ideas on the programming models that might be provided on such platforms. The talk will include considerations for what changes will be needed in the execution environment in order to support efficient application deployment.
View Lecture Series Poster

An Overview of the SciDAC-3 Institute for Performance, Energy, and Resilience

Dr. Bob Lucas

Thursday, December 8, 2011
Location: BSF Darwin Room 1007
Time: 9:00 AM
Presented by: Dr. Bob Lucas, University of Southern California Information Sciences Institute

Over the next five years (2012-2016), computational scientists working on behalf of the Department of Energy's Office of Science (DOE SC) will exploit a new generation of petascale computing resources to make previously inaccessible discoveries in a broad range of disciplines including chemistry, fusion energy, materials science, and physics. The computational systems underpinning this work will increase in performance potential from tens to hundreds of petaflops and will evolve significantly from those in use today: concurrency will scale exponentially; accelerators such as graphical processing units (GPUs) will be utilized; and even the memory hierarchy will change with the incorporation of a new generation of persistent devices (e.g., phase change memory). To ensure that DOE's computational scientists can successfully exploit this emerging generation of leadership-class computing systems, the University of Southern California (USC) has assembled a broad team of computer scientists with the expertise to address their most pressing challenges: (a) end-to-end performance optimization, including single-node performance, interprocessor communication, load balancing and I/O; (b) performance portability for new systems, including heterogeneous processors and new memory hierarchies; (c) management of energy consumption; and (d) resilient computation.
View Lecture Series Poster

Intersecting Informatics, Data Intensive Science and Networks of Science

Thursday, December 1, 2011
Location: EMSL Auditorium
Time: 10:30AM
Presented by:
Peter Fox
Rensselaer Polytechnic Institute, Professor of Earth and Environmental Science and Computer Science
Tetherless World Constellation Chair
Woods Hole Oceanographic Institution, Adjunct Scientist


Among the consequences of new and diversifying means of complex data generation is that, as many branches of science have become data-intensive, they in turn broaden their long-tail distributions (less complex data will always produce excellent science). There are many familiar informatics functions that enable the conduct of science (by specialists or non-specialists) in this new regime; one example is the need for any user to be able to discover relations among and between the results of data analyses and informational queries. Unfortunately, multi-modal discovery over complex data remains more of an art form than an easily conducted practice. Worse, the resource cost of creating useful science functionality for a wide spectrum of use has been increasing. Folded into this landscape is an increasingly 'networked' aspect to scientific research. Even with current informatics infrastructure, these networks too are increasingly complex. Extra effort is required to make effective progress on tasks that should be routine, and the considerable resources consumed could be used for many other purposes. It is now time to change these trends.

The frontier is an interesting and scientifically based model of how modern informatics can be used to design and undertake complex networked science in the face of increasing data complexity/intensity, all cast in the present reality of Web/Internet-based data and software infrastructures. A logical consequence of this path is that people working in this new mode of research require additional education to become effective and routine users of new informatics capabilities, with the goal of achieving the same fluency that researchers may have in lab techniques, instrument utilization, model development and use, etc.

This presentation will introduce, discuss, and link the aforementioned elements of the frontier of informatics. View Lecture Series Poster

Scientific and Computational Challenges of the Fusion Simulation Program

Wednesday, June 8, 2011
Location: BSF/1007 Darwin Room
Time: 1:00PM
Presented by:
William M. Tang
Princeton Plasma Physics Laboratory
Princeton University

Professor Tang will discuss the scientific and computational challenges facing Fusion Energy Sciences research.

Reliable modeling capabilities in Fusion Energy Sciences are expected to require computing resources in the petascale (10^15 floating point operations per second) range and beyond to address ITER burning plasma issues.

This provides the key motivation for the Fusion Simulation Program (FSP)—a new U.S. Department of Energy initiative supported by its Offices of Fusion Energy Science and Advanced Scientific Computing Research—that is currently in the program definition/planning phase.

The primary objective of the FSP is to enable scientific discovery of important new plasma phenomena. This requires developing a predictive integrated simulation capability for magnetically-confined fusion plasmas that are properly validated against experiments in regimes relevant for producing practical fusion energy. View Lecture Series Poster

Extreme Computing Through Innovations in Execution Models

Tuesday, May 10, 2011
Location: EMSL Auditorium
Time: 9:00PM
Presented by:
Professor Thomas Sterling
Louisiana State University
Arnaud and Edwards Professor of Computer Science
Adjunct Professor of Electrical & Computer Engineering
Head of the System Science and Engineering Focus Area, Center for Computation and Technology
Oak Ridge National Laboratory Distinguished Visiting Scientist
Sandia National Laboratories CSRI Fellow

Professor Sterling will talk about how dramatic changes in high performance computing system architectures are forcing new methods of use, including programming and system management. His presentation will discuss the driving trends and issues facing these new phases in HPC, and will discuss the ParalleX execution model, which is serving as a pathfinding framework for exploring an innovative synthesis of semantic constructs and mechanisms that may serve as a foundation for computational systems and techniques in the exascale era. The talk will use a kernel application code for numerical relativity via adaptive mesh refinement to demonstrate the effectiveness of the ParalleX model through the use of the HPX runtime software system library. View Lecture Series Poster

Blue Waters: An Extraordinary Research Capability for Advancing Science & Engineering

Friday, February 25, 2011
Location: EMSL Auditorium
Time: 1:00PM

Presentation: Blue Waters: An Extraordinary Computer to Enable Extraordinary Research

Dr. Thom H. Dunning, Jr., Director of the National Center for Supercomputing Applications and the Institute for Advanced Computing Applications and Technologies, University of Illinois at Urbana-Champaign, will talk about the new supercomputer Blue Waters and its proposed use by the science and engineering community.

A dramatic increase in computing capability has the potential to create breakthrough advances in all fields of science and engineering, including predicting the behavior of complex biological systems, understanding the production of heavy elements in supernovae, designing catalysts and other materials at the atomic level, predicting changes in the earth's climate and ecosystems, and designing complex engineered systems from chemical plants to airplanes. However, achieving these breakthroughs requires a computing system capable of answering the most compute-, memory- and data-intensive research questions.

The Office of Cyberinfrastructure in the National Science Foundation is funding the acquisition and deployment of an extraordinary new supercomputing system at the National Center for Supercomputing Applications on the campus of the University of Illinois at Urbana-Champaign. This system, called Blue Waters, is based on the latest computing technology under development by IBM for DARPA's High Productivity Computing Systems Program, including the Power7 processor with a peak performance of a quarter of a teraflop and a new communications hub chip with a total bandwidth of more than one terabyte/sec. Blue Waters will be installed in NCSA's National Petascale Computing Facility in 2011.

Blue Waters will have more than 300,000 compute cores, 1000 terabytes of main memory, 25 petabytes of disk storage, and up to 500 petabytes of archival storage. It will be connected to the nation's research networks at 100+ Gbps. Blue Waters will enable the solution of the most challenging data-intensive problems. Detailed performance projections indicate that Blue Waters will sustain 1 petaflops on many real-world science and engineering applications.
View Lecture Series Poster


Multi-reference coupled-cluster methods: Three different approaches

Monday, February 14, 2011
Location: EMSL Auditorium
Time: 1:00PM

Professor Bartlett will discuss the multi-reference problem in coupled-cluster theory and describe three different approaches for solving it. All are compared for the automerization of cyclobutadiene and other multi-reference problems.

Professor Bartlett pioneered the development of coupled-cluster theory in quantum chemistry to offer highly accurate solutions of the Schrödinger equation for molecular structure and spectra. His group is responsible for the widely used ACES program system.

His other research topics include the search for metastable, high-energy density molecules like N5-; non-linear optics; carbon clusters; NMR coupling constants; new correlated quantum chemical methods for polymers and surfaces; ab initio density functional theory; and the 'transfer Hamiltonian' for large scale quantum mechanical simulations of materials.
View Lecture Series Poster

Architecture-Aware Algorithms and Software for Scalable Performance and Resilience on Heterogeneous Architectures

Monday, January 24, 2011
Location: EMSL Auditorium
Time: 1:00PM

Presentation: Architecture-aware Algorithms and Software for Peta and Exascale Computing

Professor Jack Dongarra, Distinguished Research Staff member at Oak Ridge National Laboratory and a University Distinguished Professor in the Department of Electrical Engineering and Computer Science at the University of Tennessee, will present a seminar titled "Architecture-Aware Algorithms and Software for Scalable Performance and Resilience on Heterogeneous Architectures."

Professor Dongarra will examine how high performance computing has changed over the last 10 years and identify trends for the future. These changes have had and will continue to have a major impact on software. Some of the software and algorithm challenges have already been encountered, such as management of communication and memory hierarchies through a combination of compile-time and run-time techniques, but the increased scale of computation, depth of memory hierarchies, range of latencies, and increased run-time environment variability will make these problems much harder. He will look at five areas of research that will have an important impact on the development of software and algorithms, focusing on redesign of software to fit multicore architectures; automatically tuned application software; exploiting mixed precision for performance; the importance of fault tolerance; and communication avoiding algorithms. View Lecture Series Poster
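One of the research areas named above, exploiting mixed precision for performance, can be illustrated with classical iterative refinement: solve in fast low precision, then correct using residuals computed in high precision. A minimal sketch, simulating single precision by rounding Python doubles through IEEE-754 binary32 (the 2x2 Cramer's-rule solver is purely illustrative, not any production kernel):

```python
import struct

def to_f32(x):
    # Round a Python float (binary64) to the nearest IEEE-754 binary32.
    return struct.unpack('f', struct.pack('f', x))[0]

def solve2_low(a, b, c, d, r1, r2):
    # Solve the 2x2 system [[a, b], [c, d]] [x, y] = [r1, r2] by
    # Cramer's rule, rounding every operation to f32 to mimic a
    # fast low-precision solve.
    det = to_f32(to_f32(a * d) - to_f32(b * c))
    x = to_f32(to_f32(to_f32(r1 * d) - to_f32(b * r2)) / det)
    y = to_f32(to_f32(to_f32(a * r2) - to_f32(r1 * c)) / det)
    return x, y

def refine(a, b, c, d, b1, b2, iters=3):
    # Initial solve in low precision.
    x, y = solve2_low(a, b, c, d, b1, b2)
    for _ in range(iters):
        # Residual computed in full double precision.
        r1 = b1 - (a * x + b * y)
        r2 = b2 - (c * x + d * y)
        # Correction solved cheaply in low precision,
        # accumulated in high precision.
        dx, dy = solve2_low(a, b, c, d, r1, r2)
        x, y = x + dx, y + dy
    return x, y

# For [[4, 1], [2, 3]] [x, y] = [1, 2], the exact solution is (0.1, 0.6).
x, y = refine(4.0, 1.0, 2.0, 3.0, 1.0, 2.0)
```

For well-conditioned systems, each refinement step roughly multiplies the error by the low-precision unit roundoff, so a few cheap iterations recover double-precision accuracy while the expensive solves run in the faster format.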

2010

Operating System Resource Management

Monday, September 27, 2010
Location: ETB, Columbia River Room 1103
Time: 2:00PM

Presentation: Operating System Resource Management

Burton Smith, a Technical Fellow at Microsoft, will give a talk titled "Operating System Resource Management" on Monday, September 27, from 2-3 pm in the Columbia River Room in ETB. Resource management is the dynamic allocation and de-allocation by an operating system of processor cores, memory pages, and various types of bandwidth to computations that compete for those resources. The objective is to allocate resources so as to optimize responsiveness subject to the finite resources available. Historically, resource management solutions have been relatively unsystematic, and now the very assumptions underlying the traditional strategies fail to hold. First, applications increasingly differ in their ability to exploit resources, especially processor cores. Second, application responsiveness is approximately two-valued for "Quality-Of-Service" (QOS) applications, depending on whether deadlines are met. Third, power and battery energy have become constrained. This talk will propose a scheme for addressing the operating system resource management problem.
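The allocation problem the abstract describes can be caricatured with a tiny greedy allocator: hand out processor cores one at a time to whichever application gains the most responsiveness from the next core. The application names and utility curves below are hypothetical stand-ins, and greedy allocation is only guaranteed optimal for concave utilities, which is precisely the assumption that two-valued QOS applications break:

```python
import math

def allocate_cores(apps, total_cores):
    """Greedy core allocator: repeatedly give the next core to the
    application with the largest marginal gain in responsiveness.
    `apps` maps a name to a utility function u(cores), a stand-in for
    the measured responsiveness curves the talk concerns."""
    alloc = {name: 0 for name in apps}
    for _ in range(total_cores):
        best = max(apps,
                   key=lambda n: apps[n](alloc[n] + 1) - apps[n](alloc[n]))
        alloc[best] += 1
    return alloc

# A throughput app with diminishing returns vs. a two-valued QOS app
# that is responsive once it holds at least one core.
apps = {
    "batch": lambda c: math.log(1 + c),
    "qos": lambda c: 1.0 if c >= 1 else 0.0,
}
alloc = allocate_cores(apps, 4)  # {'batch': 3, 'qos': 1}
```

If the QOS step instead required two cores at once, the per-core marginal gain of the first core would be zero and this myopic greedy scheme would starve it, one concrete way the traditional assumptions fail.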
View Lecture Series Poster

Modeling of physical systems underspecified by data

Friday, September 10, 2010
Location: ETB, Columbia River Room 1103
Time: 1:30PM - 2:30PM

Presentation: Modeling of physical systems underspecified by data

Although it has long been recognized that simulations of most physical systems are fundamentally stochastic, this fact remains overlooked in most practical applications. Even essentially deterministic systems must be treated stochastically when their parameters, boundary and initial conditions, or forcing functions are under-specified by data. Data-driven random domain decompositions provide a novel approach to dealing with the kinds of spatially heterogeneous random processes that typically appear in realistic simulations of physical systems. The method is based on a doubly stochastic model in which the problem domain is decomposed according to stochastic geometries into disjoint random fields. View Lecture Series Poster
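The doubly stochastic structure described above can be sketched in one dimension: first draw a random geometry (here, a single random interface splitting the domain), then draw an independent random field on each resulting subdomain. All parameters below (interface range, subdomain means, noise level) are illustrative assumptions, not values from the talk:

```python
import random

def sample_realization(n_points=10, seed=None):
    """One draw from a doubly stochastic 1-D model: sample a random
    interface partitioning [0, 1] into two subdomains, then sample a
    simple random field (iid Gaussian values around a subdomain-specific
    mean) on each side. Purely illustrative parameters."""
    rng = random.Random(seed)
    interface = rng.uniform(0.3, 0.7)        # random geometry
    means = {"left": 1.0, "right": 5.0}      # per-subdomain statistics
    xs = [i / (n_points - 1) for i in range(n_points)]
    field = [rng.gauss(means["left" if x < interface else "right"], 0.2)
             for x in xs]
    return interface, xs, field

interface, xs, field = sample_realization(seed=0)
```

Averaging a quantity of interest over many such realizations marginalizes over both the uncertain geometry and the uncertain material properties, which is the point of treating the two levels of randomness jointly.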

A General Computable Methodology for Validation

Monday, August 16, 2010
Location: ETB Columbia River Room 1103
Time: 11:00AM

Professor Roger Ghanem, of the Viterbi School of Engineering at the University of Southern California, will present a seminar titled "A Computable Approach To Validation" at 11:00 am on August 16 in the Columbia River Room in ETB.

Professor Ghanem will discuss uncertainty quantification, verification, and validation and the drive to implement computational resources to reproduce or predict physical reality. The talk will describe research efforts and challenges to understanding the scope of model-based predictions and the necessity of packaging of information so that it can be analyzed and so that reliability estimates can be computed. He will also describe applications in porous media, material modeling, and complex networks. View Lecture Series Poster

Beyond Reactive Management of Network Intrusions

Monday, July 26, 2010
Location: ETB Columbia River Room
Time: 10:00AM

Presentation: Beyond Reactive Management of Network Intrusions

Professor Sushil Jajodia, of the Center for Secure Information Systems at George Mason University, will present a seminar titled "Beyond Reactive Management of Network Intrusions" at 10:00 am on July 26 in the Columbia River Room in ETB.

Prof. Jajodia will discuss issues and methods for protecting systems from malicious attacks, and system survival from attacks that cannot be averted at the outset. He will describe recent research on attack graphs that represent known attack sequences attackers can use to penetrate computer networks, and will show how attack graphs can be used to compute actual sets of hardening measures that guarantee the safety of given critical resources. The session will also include a demo of the working system and an overview of the Center for Secure Information Systems. View Lecture Series Poster
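The attack-graph computation sketched in the abstract has two parts: a forward closure that determines which conditions an attacker can eventually reach, and a search for a smallest set of exploits whose removal (hardening) makes a critical resource unreachable. The model below, exploits as (preconditions, postcondition) pairs with a brute-force hardening search, is a toy assumption for illustration; real attack-graph tools use far more scalable algorithms:

```python
from itertools import combinations

def reachable(exploits, initial):
    """Forward closure over an attack graph: fire any exploit whose
    preconditions are all attacker-held, until a fixed point."""
    held = set(initial)
    changed = True
    while changed:
        changed = False
        for pre, post in exploits:
            if post not in held and set(pre) <= held:
                held.add(post)
                changed = True
    return held

def minimal_hardening(exploits, initial, critical):
    """Smallest set of exploits to disable so that `critical` becomes
    unreachable (brute force over subsets, smallest first)."""
    for k in range(len(exploits) + 1):
        for subset in combinations(range(len(exploits)), k):
            remaining = [e for i, e in enumerate(exploits)
                         if i not in subset]
            if critical not in reachable(remaining, initial):
                return [exploits[i] for i in subset]
    return None

# Toy network: a foothold lets the attacker exploit the web server,
# and the web server lets the attacker exploit the database.
exploits = [(("foothold",), "web"), (("web",), "db")]
hardening = minimal_hardening(exploits, {"foothold"}, "db")
```

Cutting either exploit breaks the single attack chain, so a hardening set of size one suffices here; on realistic networks the value of the approach is computing such guarantees over graphs far too large to inspect by hand.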

2009

Informatics Role in the Understanding of Infectious Diseases

Presentation Video

Wednesday, February 4, 2009
Location: EMSL Auditorium
Time: Noon

Dr. Bruno Sobral and Dr. Chris Barrett, both of Virginia Bioinformatics Institute at Virginia Tech, will present a lecture titled "Informatics and Computational Sciences," at noon on February 4, in the EMSL Auditorium.

A common aspiration to emphasize the properties of systems, as opposed to the characteristics of data, calls for holistic explanations that are inherently complex and computationally intensive to calculate. Biosystems, from molecular information storage and functional prescription to ecologies, exhibit all of the features of systems that can only be understood systemically and that require specialized computational support, informatics, to analyze and comprehend. This presentation will provide an overview of molecular, cellular, organism and population-level systems biology of infectious disease. The speakers will start with genomic description and explanation and move toward population-level public health issues, emphasizing the role of informatics support in the scientific understanding of contagion as an example.

Dr. Sobral is executive and scientific director and professor at VBI, with a long-standing interest in systems approaches to infectious diseases; his research program fully utilizes both wet-lab and cyberinfrastructure programs. Dr. Barrett is director of the Network Dynamics and Simulation Science Laboratory at VBI. His research interests include simulation of very large systems; theoretical foundations of simulation; interaction-based systems, computing, and dynamical systems; computational and systems biology; computational problems in epidemiology; cognitive science and computationally aided reasoning; computational economics; and infrastructure simulation. View Lecture Series Poster

2008

Photo: Dr. Larry Smarr

Dr. Larry Smarr
Shrinking the Planet
"How Dedicated Optical Networks Are Transforming Computational Science and Collaboration"

Monday, August 25, 2008
EMSL Auditorium
10:30 AM - 11:30 AM
Larry Smarr, professor at California Institute for Telecommunications and Information Technology, will speak on how dedicated optical networks are transforming computational science and collaboration.
View Dr. Smarr's Lecture Series Poster | Video

Photo: Dr. Tony Hey

Dr. Tony Hey
Corporate Vice President
Microsoft Research
"eScience, Semantic Computing and the Cloud: Towards a Smart Cyberinfrastructure for eScience"

Monday, June 2, 2008
Battelle Auditorium
9:00 AM

Presentation: eScience, Semantic Computing and the Cloud
