David Keyes

King Abdullah University of Science and Technology and Columbia University

Title: Scalable Solvers for CSE: Universals and Innovations

Abstract: As CFD enters the exascale era, algorithms must span a widening gap between ambitious applications and austere architectures. Applications are ambitious in many senses: large physical-space, phase-space, or parameter dimensions; resolution of many scales of space and/or time; high-fidelity modeling; coupling of multiple complex models; and placement of "forward problems" inside outer loops for inversion, assimilation, optimization, control, or learning. Architectures are austere in several senses, chiefly: low memory per core, low memory bandwidth per core, and low power per operation, which leads to using lower numerical precision where possible and many slower cores rather than fewer faster ones. Algorithms must adapt to span this gap, especially by means that lead to less uniformity and less predetermined schedulability of operations. Otherwise, exascale computers will be limited to petascale performance. We present fifteen universals for researchers in scalable solvers and some innovations that make it possible to approach lin-log complexity in storage and operation count in some important algorithmic kernels, with controllable loss of accuracy, by exploiting hierarchy and data sparsity.
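The "data sparsity" invoked in the abstract can be illustrated with a minimal sketch (not taken from the talk): for well-separated point clusters, an off-diagonal block of a smooth kernel matrix is numerically low-rank, so a truncated SVD stores it in O(k(m+n)) rather than O(mn) entries, with the accuracy loss controlled by the truncation tolerance. Point sets, kernel, and tolerance below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 500)               # source points in [0, 1]
y = rng.uniform(2.0, 3.0, 500)               # well-separated target points
A = 1.0 / np.abs(x[:, None] - y[None, :])    # smooth kernel block, 500 x 500

U, s, Vt = np.linalg.svd(A, full_matrices=False)
tol = 1e-8                                   # controllable accuracy
k = int(np.sum(s > tol * s[0]))              # numerical rank at tolerance tol
A_k = (U[:, :k] * s[:k]) @ Vt[:k, :]         # rank-k approximation

rel_err = np.linalg.norm(A - A_k) / np.linalg.norm(A)
compression = k * (A.shape[0] + A.shape[1]) / A.size
```

Hierarchical (H-matrix) solvers apply this compression recursively to the admissible off-diagonal blocks, which is what yields the near lin-log storage and operation counts.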

Peter Bastian

University of Heidelberg, Germany

Title: Trends in High-performance Finite Element Simulations

Abstract: The talk starts by pointing out the challenges of using high-performance computers for finite element computations. The focus will be on node-level performance. Two approaches will then be addressed in the light of these challenges. The first method uses high-order matrix-free finite elements to increase arithmetic intensity and achieve a substantial fraction of the peak performance of modern CPUs. The second method addresses the robustness of parallel iterative solvers with respect to coefficient variations in the PDE. Here a new multilevel coarse space based on hierarchical domain decomposition will be presented. The method admits rigorous convergence bounds, which are confirmed by numerical experiments.
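The matrix-free idea can be sketched in one dimension (an illustrative toy, not the talk's high-order implementation): the operator is applied stencil-wise without ever storing a matrix, trading memory traffic for arithmetic and thereby raising arithmetic intensity.

```python
import numpy as np

def laplacian_apply(u, h):
    # Matrix-free application of the 1D linear-FEM stiffness operator
    # (homogeneous Dirichlet conditions): v[i] = (2u[i] - u[i-1] - u[i+1]) / h.
    v = 2.0 * u
    v[:-1] -= u[1:]
    v[1:] -= u[:-1]
    return v / h

n, h = 1000, 1.0 / 1001
u = np.sin(np.pi * np.linspace(h, 1.0 - h, n))

# The assembled tridiagonal stiffness matrix gives the same result,
# but must be stored and streamed from memory on every matvec.
A = (np.diag(2.0 * np.ones(n))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h
match = np.allclose(laplacian_apply(u, h), A @ u)
```

For high-order elements the payoff is larger still, since the element matrices grow rapidly with polynomial degree while sum-factorized matrix-free kernels do not.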

Bertrand Llorente

Cancer Research Center of Marseille (CRCM), CNRS, France

Title: Meiotic recombination in budding yeast and the importance of bioinformatics

Abstract: Genetics underwent a revolution during the first decade of the 21st century through the advent of next-generation DNA sequencing (NGS). Most questions initially addressed at the gene level can now be addressed at the genome level, which requires high computation capacities. This holds true for the study of meiotic DNA recombination, the DNA repair process that promotes the shuffling of parental genomes and ensures proper gamete formation during sexual reproduction. Here, I will show through two main examples how NGS-based approaches have helped to better understand the mechanism of meiotic DNA recombination and its evolution, using budding yeasts as models.

Edoardo Di Napoli

Juelich Supercomputing Centre (JSC), head of the Simulation and Data Lab Quantum Materials (SDLQM), Germany

Title: Chasing the hardware evolution: Goals, challenges and perspectives in high-performance and parallel computing.

Abstract: Nowadays, parallel computing architectures trend towards increasingly powerful heterogeneous nodes with large numbers of multi- and many-core processors. What is required, therefore, is an efficient way of executing parallel computations that exploits the computing resources both within and across the computing nodes. Most simulation software relies on parallel and optimized numerical algorithms to extract performance from such computing platforms. Traditionally, there were two possible paths: make use of black-box numerical libraries (e.g. ScaLAPACK), or implement numerical kernels specifically tailored to and deeply embedded in the simulation software. In both cases, the numerical implementations have a hard time keeping up with the hardware evolution and exploiting parallelism at both small and large scale. In this talk we illustrate the existing paradigm with its advantages and shortcomings, and then present an emerging hybrid approach characterized by three main elements: 1) exploitation of existing knowledge from the application domain, 2) large-scale parallelism through communication optimization, and 3) small-scale parallelism based on the use of highly optimized libraries at the node level. Several examples drawn from condensed matter physics will be used to explain and clarify the role and importance of these three elements.

Heather J. Kulik

Department of Chemical Engineering
Massachusetts Institute of Technology (MIT)
Cambridge, MA 02139, USA

Title: What can machine learning do to accelerate the design of catalysts and materials?

Abstract: Many compelling functional materials and highly selective catalysts that have been discovered are defined by their metal-organic bonding. The rational design of de novo transition metal complexes, however, remains challenging. First-principles (i.e., density functional theory, or DFT) high-throughput screening is a promising approach but is hampered by high computational cost, particularly in the brute-force screening of large numbers of ligand and metal combinations. In this talk, I will outline our efforts to accelerate the design of mid-row, open-shell transition metal complexes for catalysis and materials science applications. I will describe how our automated toolkit, molSimplify, has evolved over the past few years to include data fidelity checks and prediction, active learning and multi-objective optimization, and improved uncertainty quantification metrics. I will describe how this powerful toolset has advanced our understanding of metal-organic bonding in materials ranging from functional spin-crossover complexes to open-shell transition metal catalysts and metal-organic frameworks, by enabling the rapid screening of millions of candidate molecules and by revealing robust design rules in weeks instead of the decades that traditional high-throughput screening would have required.

Martin Head-Gordon

Kenneth S. Pitzer Center for Theoretical Chemistry, Department of Chemistry, University of California, and Chemical Sciences Division, Lawrence Berkeley National Laboratory, Berkeley CA 94720, USA.

Title: Recent developments in density functional theory: From new functionals to the nature of the chemical bond.

Abstract: Density functional theory (DFT) is the most widely used electronic structure theory, with broad applications in chemistry, materials science, condensed matter physics, and elsewhere. Crucial to its future is the problem of designing functionals with improved predictive power. I shall describe a new approach to functional design, “survival of the most transferable”, and show how the resulting functionals offer greatly improved accuracy relative to existing functionals of a given class. As a counterpoint to this vital numerical development, I will describe a new energy decomposition analysis (EDA) approach to obtaining physical insight into DFT calculations of chemical bonds and non-bonded molecular interactions. I will present several examples, such as the triplex between vinyl alcohol radical cation, formaldehyde and water, which is a rearranged form of the glycerol radical cation. I will also use the EDA to explore the origin of the chemical bond, a question that is still controversial.
