• Title
    Computational Scientist
  • Email
    hornung1@llnl.gov
  • Phone
    (925) 422-5097
  • Organization
Center for Applied Scientific Computing (CASC)

Rich Hornung is a Computational Scientist in the Center for Applied Scientific Computing (CASC) at Lawrence Livermore National Laboratory (LLNL). During his career at LLNL, Rich has performed algorithm research and software development for a wide range of problems in high-performance computing (HPC). He created or co-created several open-source HPC software projects, including SAMRAI (parallel adaptive mesh refinement), RAJA (C++ abstractions for hardware architecture portability), the RAJA Performance Suite, and Axom (HPC application building blocks). Currently, Rich leads the RAJA, RAJA Performance Suite, and Axom projects. He is a member of the Project Coordination Council (PCC) in the Weapons Simulation and Computing / Computational Physics (WSC/CP) Program at LLNL, where he represents the development of shared CS infrastructure software projects used throughout the program. His research interests include scalable parallel adaptive mesh refinement, MPMD algorithms for multi-scale simulations, programming abstractions for high-performance multi-architecture portability, and software infrastructure for large-scale multi-physics applications.

Rich joined LLNL in 1996. He received his Ph.D. and M.A. degrees in Applied Mathematics from Duke University in 1994 and 1991, respectively, and his B.A. degree in Mathematics and Music from Lawrence University in 1989. Following the completion of his Ph.D., he was a National Science Foundation Mathematical Sciences Industrial Research Postdoctoral Fellow, working with researchers at Duke University and the Mobil Exploration and Production Technical Center in Dallas, TX.

Software Projects

See Rich’s GitHub profile. Projects to which Rich has made substantial contributions include:

  • RAJA : C++ abstractions for HPC architecture portability (a brief usage sketch follows this list).
  • RAJA Performance Suite : Companion project to RAJA for assessing RAJA performance and HPC compiler optimization quality.
  • Axom : Library of robust, flexible infrastructure components for multi-physics applications development.
  • SAMRAI : Structured Adaptive Mesh Refinement (SAMR) Application Infrastructure that supports large-scale parallel application development.
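
To give a concrete sense of what the RAJA abstractions look like, here is a minimal sketch of a loop written with RAJA::forall (a DAXPY-style update). It is illustrative only: the sequential execution policy, array names, and problem size are assumptions chosen for this example, not code taken from any of the projects listed here.

    #include "RAJA/RAJA.hpp"
    #include <vector>

    int main()
    {
      const int N = 1000;      // illustrative problem size
      const double a = 2.0;    // illustrative scaling factor
      std::vector<double> x(N, 1.0), y(N, 0.0);
      double* xp = x.data();
      double* yp = y.data();

      // The loop body is written once as a lambda; it can be retargeted to
      // another backend by changing the execution policy template parameter
      // (e.g., RAJA::omp_parallel_for_exec for OpenMP; GPU policies such as
      // RAJA::cuda_exec<256> additionally expect device-annotated lambdas).
      RAJA::forall<RAJA::seq_exec>(RAJA::RangeSegment(0, N), [=](int i) {
        yp[i] = a * xp[i] + yp[i];
      });

      return 0;
    }

This single-source loop style is the idea the RAJA Performance Suite exercises as well, running the same kernels under many execution policies and compilers to compare performance.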

Other projects with which Rich is affiliated include:

  • Umpire : Resource management library that enables provisioning, discovery, and management of memory on systems with multiple memory devices (a brief usage sketch follows this list).
  • CHAI : Array abstraction library that performs automatic data migration between memory spaces on HPC systems.
  • BLT : CMake-based foundation for Building, Linking, and Testing HPC applications.
  • Conduit : An intuitive model for describing and manipulating hierarchical data in scientific applications.
  • Shroud : A tool for creating a Fortran or Python interface to a C or C++ library.
  • RADIUSS Shared CI : A flexible GitLab CI build and test system designed to be shared by multiple projects.
  • RADIUSS Spack Configs : A shared collection of compiler and package Spack configurations for Livermore Computing systems.
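
For context on what Umpire provides, the following is a small sketch of allocating and freeing memory through a named Umpire allocator via its ResourceManager interface. The "HOST" resource name and the buffer size are illustrative assumptions for this example rather than code from any project listed here.

    #include "umpire/ResourceManager.hpp"
    #include "umpire/Allocator.hpp"

    #include <cstddef>

    int main()
    {
      // Look up an allocator for host memory through the ResourceManager.
      auto& rm = umpire::ResourceManager::getInstance();
      umpire::Allocator host_alloc = rm.getAllocator("HOST");

      // Allocate and release a buffer through the named allocator. On GPU
      // systems, allocators for other memory resources (e.g., device or
      // unified memory) can be requested through the same interface.
      const std::size_t N = 1024;
      double* data =
          static_cast<double*>(host_alloc.allocate(N * sizeof(double)));
      host_alloc.deallocate(data);

      return 0;
    }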

Education

Ph.D. Mathematics, Duke University, Durham, NC, 1994

M.A. Mathematics, Duke University, Durham, NC, 1991

B.A. Mathematics & Music (double major), Lawrence University, Appleton, WI, 1989

Selected Publications

Articles and Refereed Proceedings

D. Beckingsale, M. McFadden, J. Dahm, R. Pankajakshan, and R. Hornung. Umpire: Application-Focused Management and Coordination of Complex Hierarchical Memory. IBM Journal of Research and Development, 64(3/4):1–10, 2020.

D. A. Beckingsale, J. Burmark, R. Hornung, H. Jones, W. Killian, A. J. Kunen, O. Pearce, P. Robinson, B. Ryujin, and T. R. W. Scogland. RAJA: Portable Performance for Large-Scale Scientific Applications. In 2019 IEEE/ACM International Workshop on Performance, Portability and Productivity in HPC (P3HPC), Denver, CO, pp. 71–81, 2019.

T. R. W. Scogland, J. Gyllenhaal, J. Keasler, R. Hornung, and B. de Supinski. Enabling Region Merging Optimizations in OpenMP. In C. Terboven, B. de Supinski, P. Reble, B. Chapman, and M. Müller, editors, Proc. of 11th International Workshop on OpenMP (IWOMP 2015), pages 177–188, Aachen, Germany, 2015.

T. R. W. Scogland, J. Keasler, J. Gyllenhaal, R. Hornung, B. de Supinski, and H. Finkel. Supporting Indirect Data Mapping in OpenMP. In C. Terboven, B. de Supinski, P. Reble, B. Chapman, and M. Müller, editors, Proc. of 11th International Workshop on OpenMP (IWOMP 2015), pages 260–272, Aachen, Germany, 2015.

R. Hornung and J. Keasler. The RAJA Portability Layer. In Proc. of Nuclear Explosives Code Development Conference (NECDC 2014), Los Alamos, NM, 2014.

R. Hornung, R. Anderson, N. Elliott, B. Gunney, B. Pudliner, B. Ryujin, and M. Wickett. Building Adaptive Mesh Refinement into a Multiphysics Code: Software Design Issues and Solutions. In Proc. of Nuclear Explosives Code Development Conference (NECDC 2012), Livermore, CA, 2012.

B. E. Griffith, R. D. Hornung, D. M. McQueen, and C. S. Peskin. Parallel and Adaptive Simulation of Cardiac Dynamics. In M. Parashar, X. Li, and S. Chandra, editors, Advanced Computational Infrastructures for Parallel and Distributed Applications, pages 105–131. Wiley Press, Wiley Series on Parallel and Distributed Computing, 2009.

N. R. Barton, J. Knap, A. Arsenlis, R. Becker, R. D. Hornung, and D. R. Jefferson. Embedded Polycrystal Plasticity and Adaptive Sampling. Int. J. Plasticity, 24(2):242–266, 2008.

J. Knap, N. R. Barton, R. D. Hornung, A. Arsenlis, R. Becker, and D. R. Jefferson. Adaptive Sampling in Hierarchical Simulation. Int. J. Numer. Meth. Eng., 76(11):1566–1592, 2008.

J.-L. Fattebert, R. D. Hornung, and A. M. Wissink. Finite Element Approach for Density Functional Theory Calculations on Locally-refined Meshes. J. Comp. Phys., 223(2):759–773, 2007.

B. E. Griffith, R. D. Hornung, D. M. McQueen, and C. S. Peskin. An Adaptive, Formally Second-order Accurate Version of the Immersed Boundary Method. J. Comp. Phys., 223(1):10–49, 2007.

L. Diachin, R. Hornung, P. Plassman, and A. Wissink. Parallel Adaptive Mesh Refinement. In M. Heroux, P. Raghavan, and H. Simon, editors, Parallel Processing for Scientific Computing, pages 143–162. SIAM, SIAM book series on Software, Environments, and Tools, 2006.

R. D. Hornung, A. M. Wissink, and S. R. Kohn. Managing Complex Data and Geometry in Parallel Structured AMR Applications. Engineering with Computers, 22(3):181–195, 2006.

J. Ma, H. Lu, B. Wang, S. Roy, R. Hornung, A. Wissink, and R. Komanduri. Multiscale Simulations Using Generalized Interpolation Material Point (GIMP) Method and Molecular Dynamics (MD). Computer Modeling in Engineering Sciences, 14(2):101–118, 2006.

J. Ma, H. Lu, B. Wang, S. Roy, R. Hornung, A. Wissink, and R. Komanduri. Multiscale Simulations Using Generalized Interpolation Material Point (GIMP) Method and SAMRAI Parallel Processing. Computer Modeling in Engineering Sciences, 8(2):135–152, 2005.

M. Pernice and R. Hornung. Newton-Krylov-FAC Methods for Problems Discretized on Locally-Refined Grids. Computing and Visualization in Science, 8(2):107–118, 2005.

H. S. Wijesinghe, R. D. Hornung, A. L. Garcia, and N. G. Hadjiconstantinou. Three-dimensional Hybrid Continuum-Atomistic Simulations for Multiscale Hydrodynamics. J. Fluids Engineering, 126:768–777, 2004.

A. Wissink, D. Hysom, and R. Hornung. Enhancing Scalability of Parallel Structured AMR Calculations. In Proc. 17th ACM International Conference on Supercomputing (ICS ’03), pages 336–347, San Francisco, CA, 2003.

R. D. Hornung and S. R. Kohn. Managing Application Complexity in the SAMRAI Object-Oriented Framework. Concurrency and Computation: Practice and Experience (Special Issue), 14:347–368, 2002.

A. Wissink, R. Hornung, S. Kohn, S. Smith, and N. Elliott. Large Scale Parallel Structured AMR Calculations Using the SAMRAI Framework. In Proc. of Conf. on High Perf. Networking & Comput. (SC '01), Denver, CO, 2001.

R. D. Hornung and S. R. Kohn. Use of Object-oriented Design Patterns in the SAMRAI Structured AMR Framework. Society for Industrial and Applied Mathematics Workshop on Object-Oriented Methods for Inter-operable Scientific and Engineering Computing, Yorktown Heights, NY, October 21-23, 1998.

R. D. Hornung and J. A. Trangenstein. Adaptive Mesh Refinement and Multilevel Iteration for Flow in Porous Media. J. Comp. Phys., 136(2):522–545, 1997.

I. Mishev, R. D. Hornung, and J. A. Trangenstein. On a Preconditioner for the Hybrid Mixed Method with Adaptive Mesh Refinement. In Proc. 10th Int. Conf. Domain Decomp., Boulder, CO, 1997.

Selected Technical Reports

R. Anderson, A. Black, L. Busby, B. Blakeley, R. Bleile, J.-S. Camier, J. Ciurej, A. Cook, V. Dobrev, N. Elliott, J. Grondalski, C. Harrison, R. Hornung, Tz. Kolev, M. Legendre, W. Liu, W. Nissen, B. Olson, M. Osawe, G. Papadimitriou, O. Pearce, R. Pember, A. Skinner, D. Stevens, T. Stitt, L. Taylor, V. Tomov, R. Rieben, A. Vargas, K. Weiss, D. White. The Multiphysics on Advanced Platforms Project. Technical Report LLNL-TR-815869, Lawrence Livermore National Laboratory, Livermore, CA, 2020.

D. A. Beckingsale, W. P. Gaudin, R. D. Hornung, B. T. Gunney, T. Gamblin, J. A. Herdman, and S. A. Jarvis. Parallel Block Structured Adaptive Mesh Refinement on Graphics Processing Units. Technical Report LLNL-TR-664446, Lawrence Livermore National Laboratory, Livermore, CA, 2014.

A. Black, R. Hornung, M. Kumbera, R. Neely, and R. Rieben. Computer Science Recommendations for LLNL ASC Next-Gen Code. Technical Report LLNL-TR-658622, Lawrence Livermore National Laboratory, Livermore, CA, 2014.

H. Johansen, L. Curfman McInnes, D. Bernholdt, J. Carver, M. Heroux, R. Hornung, P. Jones, B. Lucas, and A. Siegal. Software Productivity for Extreme-scale Science. Technical report, U. S. Department of Energy Advanced Scientific Computing Research Workshop Report, Rockville, MD, 2014.

R. D. Hornung and J. A. Keasler. A Case for Improved C++ Compiler Support to Enable Performance Portability in Large Physics Simulation Codes. Technical Report LLNL-TR-635681, Lawrence Livermore National Laboratory, Livermore, CA, 2013.

Awards

  • 2022 : LLNL Weapon Simulation and Computing Program Bert Still Award, given annually to an individual who demonstrates significant program impact at the intersection of research and application development
  • 2021 : LLNL Weapon Simulation and Computing Program Bronze Award for development of high-performance GPU capabilities in the next-generation multi-physics code, MARBL
  • 2019 : LLNL Weapon Simulation and Computing Program Bronze Award for developing and presenting the LLNL plenary talk at the JOWOG34 Applied Computer Science meeting
  • 2018 : LLNL Weapons and Complex Integration Directorate Gold Award for Significant Outstanding Contributions leading the RAJA portability software effort, which enabled production programmatic applications to effectively use the Sierra supercomputer
  • 2018 : LLNL Weapon Simulation and Computing / Computational Physics Program Silver Star Award for developing and applying the RAJA performance portability programming model to enable WSC applications to demonstrate substantial early successes on the GPU-enabled Sierra platform
  • 2017 : LLNL Weapon Simulation and Computing Program Silver Star Award for teamwork in porting the Ares and ALE3D applications to the Sierra supercomputer
  • 2015 : LLNL Computation Directorate award for Significant Outstanding Contributions
  • 2013 : LLNL B Division / B Program award for exceptional effort in developing a new Livermore Loop benchmark application, leading to guidance for compiler vendors that will have a real impact on ASC simulation tools
  • 2012 : LLNL Defense Programs Award of Excellence for significant contributions to the Stockpile Stewardship Program enabling new and unprecedented high-fidelity simulation capabilities
  • 2012 : LLNL Weapons and Complex Integration Directorate award in appreciation of exemplary teamwork in the initial deployment of the ASC Code System for full system simulation
  • 2012 : LLNL B Division / B Program award for overcoming nearly insurmountable challenges to gain approval and host the 2012 NECDC conference
  • 2010 : LLNL Computing Applications and Research Department Award of Appreciation for contributions and dedication to improving standards in LLNL software development
  • 2008 : LLNL B Division / B Program award for outstanding work in demonstrating early success on the ASC Dawn machine for scaling of a multi-physics code
  • 1994-1996 : National Science Foundation Mathematical Sciences Industrial Research Postdoctoral Fellowship
  • 1989-1992 : USAF Office of Scientific Research Laboratory Graduate Fellowship
  • 1989 : Phi Beta Kappa, Lawrence University