• Title
    Computer Scientist
  • Email
    pearce8@llnl.gov
  • Phone
    (925) 422-0436

Dr. Olga Pearce is a computer scientist in the Center for Applied Scientific Computing at Lawrence Livermore National Laboratory. She created Benchpark, an open, collaborative repository for reproducible specifications of HPC benchmarks and cross-site benchmarking environments, and Thicket, an open-source Python-based toolkit for exploratory data analysis (EDA) of parallel performance data. Olga leads benchmarking for Advanced Technology Systems, the Performance Analysis and Visualization for Exascale project, and performance modeling in the Fractale SI. Her research interests include HPC architectures and simulations, parallel algorithms and programming models, system software, and performance analysis and optimization.

Olga has been at LLNL since 2007. She received an NSF Graduate Fellowship in 2006 and a Lawrence Scholar Fellowship in 2009, and joined CASC as technical staff in 2014. Olga received the LLNL Deputy Director's Science & Technology Award in 2015, and LLNL awards for developing the RAJA performance portability model (2018), porting and optimizing codes on LLNL's first accelerated supercomputer (2019), developing GPU capabilities of the next-generation multiphysics code (2021), the response to the National Academies of Sciences RFI (2023), and the acceptance of El Capitan (2022, 2024).

Olga helped create the SC Student Cluster Competition in 2008, began a joint appointment at Texas A&M University as an Associate Professor of Practice in Computer Science and Engineering in 2021, and serves as a co-chair of the Salishan Conference on High-Speed Computing. Olga received her Ph.D. in Computer Science from Texas A&M University in 2014, and her B.S. in Computer Science and Mathematics from Western Oregon University in 2004.

Software Projects:

  • Benchpark, an open, collaborative repository for reproducible specifications of HPC benchmarks and cross-site benchmarking environments.
  • Thicket, an open-source Python-based toolkit for exploratory data analysis (EDA) of parallel performance data.

Other projects Olga is affiliated with:

  • RAJA: Parallel Performance Portability Layer (C++)
  • Caliper: Application-level performance data collection library
  • Hatchet: Tool for manipulating call trees in Pandas dataframes
  • Spack: An open source package manager for HPC

Awards:

  • 2023: WSC award for response to the National Academies of Sciences request for information
  • 2022: WCI award for acceptance of Early Access System for El Capitan
  • 2021: WSC award for developing GPU capabilities of the Next-Gen Multiphysics code, MARBL
  • 2019: WCI award for porting and optimization of codes on LLNL’s first accelerated supercomputer
  • 2018: WSC award for developing RAJA performance portability model
  • 2015: LLNL Deputy Director’s Science & Technology award for load balancing work
  • 2009: Lawrence Scholar Fellowship
  • 2006: NSF Graduate Fellowship

Curriculum Vitae

Olga's full CV
