Sat 17 Jun 2023 09:50 - 11:00 at Magnolia 1-3 - CSC: Open Session 1

Seven lightning talks of 10 minutes each.

Andrew Siegel: Developing HPC Applications at the Forefront

This talk draws on seven years of experience in the Exascale Computing Project (ECP) to distill a number of unique lessons on the complexities of science/engineering application development in the presence of rapidly evolving technologies.

Sreepathi Pai: Formal ISA Semantics Can Bind Us All (in the Quest for Correctness)

The Instruction Set Architecture (ISA) abstraction has successfully provided the shared framework for collaboration and innovation across the computing stack: from computer architecture to compilers to fast numerical algorithms. Using our work on the formal semantics of the NVIDIA GPU ISA as the backdrop, I argue that formalized low-level ISA semantics can serve the same purpose for collaboration on correctness in scientific computing. Formal semantics are well known in the programming languages (PL) community but aren't widely used outside PL. Repurposable formal ISA semantics could spur adoption, as I demonstrate with applications to computer architecture emulation, compiler validation, test-case generation, and analysis of novel numerical algorithms.
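
The key idea — that one executable semantics can be reused for emulation, validation, and test generation — can be illustrated with a minimal sketch. The toy two-instruction ISA below is entirely hypothetical and is not taken from the NVIDIA GPU ISA or the speaker's formalization; it only shows the reuse pattern.

```python
# Hypothetical sketch: an executable semantics for a toy ISA, reused
# both as an emulator and as an oracle for test-case checking.

def step(state, instr):
    """One-step semantics: maps (register state, instruction) -> new state."""
    op, dst, a, b = instr
    regs = dict(state)
    if op == "add":
        regs[dst] = (regs[a] + regs[b]) & 0xFFFFFFFF  # 32-bit wraparound
    elif op == "mul":
        regs[dst] = (regs[a] * regs[b]) & 0xFFFFFFFF
    else:
        raise ValueError(f"unknown opcode: {op}")
    return regs

def run(state, program):
    """Emulator obtained directly from the semantics by iterating `step`."""
    for instr in program:
        state = step(state, instr)
    return state

# The same `step` function can serve as a reference oracle when
# validating a compiler or an optimized emulator against it.
final = run({"r0": 2, "r1": 3}, [("add", "r2", "r0", "r1"),
                                 ("mul", "r3", "r2", "r2")])
assert final["r3"] == 25  # (2 + 3) squared
```

A single source of truth for instruction behavior is what lets the emulation, validation, and test-generation applications in the talk share one formal artifact.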

Harshitha Menon: Ensuring Correctness in Programs Generated by LLM: Challenges and Solutions

Large Language Models (LLMs) have shown great potential in a variety of software-related tasks, from code completion to competitive programming, and can be used to assist in software portability. However, there are challenges associated with using them in HPC: these models lack an understanding of program semantics, so there are no guarantees regarding the correctness of the generated code. This talk investigates the challenges and potential solutions in ensuring the correctness of LLM-generated programs.

Piotr Luszczek: Exception Handling in Programming Languages and Numerical Libraries

Alyson Fox: Analytical Bias in ZFP Lossy Compression for Floating-Point Data

The exponential growth of data in scientific simulations and data-collection applications poses significant challenges in storage and bandwidth, making data compression essential. However, traditional lossless compression methods are ineffective for floating-point data, leading to lossy compression algorithms that allow small deviations. Understanding the impact of errors from lossy compression on the accuracy of data analytics is crucial. This talk presents a statistical analysis of the error introduced by ZFP compression, a leading lossy compression algorithm designed explicitly for floating-point data. We identify a bias in the error and propose simple modifications to neutralize it and reduce the resulting error. These modifications aim to improve the accuracy of data analytics tasks relying on compressed floating-point data. Our findings provide insights into error characteristics and guide informed decisions about trading data-size reduction against analytical accuracy.
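
The notion of bias in lossy-compression error can be demonstrated with a small sketch. Here, truncating values to a fixed number of decimal places stands in for a lossy compressor; this is a deliberately simplified stand-in and does not use ZFP or its actual algorithm.

```python
# Hedged sketch: estimating the bias (mean signed error) of a lossy
# floating-point reduction. Decimal truncation stands in for ZFP here;
# the real ZFP algorithm and its error structure are not modeled.
import random

def lossy(x, places=2):
    # Truncation toward zero: a deliberately biased "compressor" that
    # systematically shrinks positive values, unlike round-to-nearest.
    scale = 10 ** places
    return int(x * scale) / scale

random.seed(0)
data = [random.uniform(0.0, 1.0) for _ in range(10_000)]
errors = [lossy(x) - x for x in data]
bias = sum(errors) / len(errors)

# Truncating positive values always rounds down, so the mean signed
# error is negative (about -0.005 for 2 decimal places on uniform data).
assert bias < 0
```

An unbiased scheme (e.g., round-to-nearest) would drive the mean signed error toward zero; detecting and neutralizing such systematic offsets is exactly the kind of analysis the talk describes for ZFP.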

Vivek Sarkar: Determinacy = Functional Determinism + Structural Determinism

Andrew Appel: Formally Verified Numerical Methods

We are building modular end-to-end machine-checked proofs of correctness and accuracy of numerical programs. We prove in Coq that a C program correctly and exactly implements a floating-point-valued functional model, that the float functional model accurately approximates a real-valued functional algorithm, and that the real-valued algorithm accurately finds a solution to the mathematical problem of interest. We compose the accuracy bounds end-to-end to prove that the low-level program finds a solution within a concretely specified tolerance. To do this we decompose proofs modularly, separating reasoning about C from reasoning about floating point from reasoning about discrete algorithms over the reals. We rely heavily on Coq libraries (our own and others'): Flocq for floats, VCFloat for round-off error proofs, VST for C program proofs, MathComp for matrices and fields, and new libraries of proofs about, e.g., the correctness and accuracy of sparse matrix multiplication. We have results regarding ODE integration, Jacobi iteration, and Newton's method. This is joint work with David Bindel and Ariel Kellison (Cornell) and Jean-Baptiste Jeannin and Mohit Tekriwal (Michigan). Visit us at
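
The end-to-end composition of the three layered bounds is, at its core, a triangle-inequality argument. The notation below is illustrative and not taken from the Coq development: $c$ is the C program's result, $f$ the float functional model, $r$ the real-valued algorithm, and $x^{*}$ the true mathematical solution.

```latex
% Composing the layered accuracy bounds by the triangle inequality.
% The C program implements the float model exactly, so |c - f| = 0;
% the remaining two bounds come from the round-off and method proofs.
\[
  |c - x^{*}|
  \;\le\;
  \underbrace{|c - f|}_{=\,0 \text{ (exact implementation)}}
  + \underbrace{|f - r|}_{\le\, \varepsilon_{\mathrm{round}}}
  + \underbrace{|r - x^{*}|}_{\le\, \varepsilon_{\mathrm{method}}}
  \;\le\;
  \varepsilon_{\mathrm{round}} + \varepsilon_{\mathrm{method}}.
\]
```

Keeping each bound a separate lemma is what lets the C-level, float-level, and real-level reasoning be developed and checked independently before composition.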

Displayed time zone: Eastern Time (US & Canada)

09:00 - 11:00
Day opening

Letting HPC Programmers Focus On Correctness First, Then On Performance (Invited Talk)
Ignacio Laguna, Lawrence Livermore National Laboratory

Lightning Talks
Andrew Siegel; Sreepathi Pai, University of Rochester; Harshitha Menon, Lawrence Livermore National Laboratory; Piotr Luszczek; Alyson Fox; Vivek Sarkar, Rice University; Andrew W. Appel, Princeton University