TeAAL: A Declarative Framework for Modeling Sparse Tensor Accelerators
Over the past few years, the end of Dennard scaling and the slowing of Moore's law have led to an increased focus on domain-specific accelerators for a variety of applications, including sparse tensor algebra. Exploiting the sparsity present in real-world tensors enables improvements in performance and efficiency by eliminating data movement of, and computation on, zero values. However, due to the irregularity present in sparse tensors, accelerators must employ a wide variety of novel solutions to achieve good performance. Unfortunately, prior work on sparse accelerator modeling does not express this full range of design features. This has made it difficult to compare or extend the state of the art, and to understand the impact of each design choice.
To address this gap, this talk describes TeAAL: a framework that enables the concise and precise specification and evaluation of sparse tensor algebra architectures. Specifically, we explore how the TeAAL specification language can be used to represent state-of-the-art accelerators and explain how the TeAAL compiler translates designs written in this language into executable performance models that can be evaluated on real input tensors. We have used TeAAL to model a number of accelerators so far, including ones designed for matrix multiplication (e.g., ExTensor, Gamma, OuterSPACE, SIGMA) and graph algorithms (e.g., Graphicionado), and will share some early results.
Nandeeka Nayak is a rising fourth-year Computer Science PhD student at the University of Illinois at Urbana-Champaign, advised by Chris Fletcher. She works on understanding domain-specific accelerators for tensor algebra, with a focus on building abstractions that unify a wide variety of kernels and accelerator designs into a small set of primitives, in collaboration with Joel Emer and Michael Pellauer. In the past, she has also worked on hardware security.
Before coming to the University of Illinois, she completed her B.S. in Computer Science at Harvey Mudd College in 2020. There, she worked with Chris Clark in the Lab for Autonomous and Intelligent Robotics. Additionally, for her senior capstone project, she added a numerical programming library to the programming language Factor.
In her free time, she enjoys cooking, social dancing, traveling with her family, and studying Korean.
Sun 18 Jun (displayed time zone: Eastern Time, US & Canada)
09:00 - 11:00 | CTSTA

09:00 (5m)  Day opening | Introduction — Fredrik Kjolstad, Stanford University
09:05 (15m) Talk | Software and Hardware for Sparse ML — Fredrik Kjolstad, Stanford University
09:20 (15m) Talk | Integrating Data Layout into Compilers and Code Generators — Mary Hall, University of Utah
09:35 (15m) Talk | Tackling the challenges of high-performance graph analytics at compiler level — Gokcen Kestor, Pacific Northwest National Laboratory
09:50 (10m) Panel | Discussion
10:00 (5m)  Break | Social
10:05 (15m) Talk | Challenges and Opportunities for Sparse Compilers in LLM — Zihao Ye, University of Washington
10:20 (15m) Talk | The Sparse Abstract Machine — Olivia Hsu, Stanford University
10:35 (15m) Talk | TeAAL: A Declarative Framework for Modeling Sparse Tensor Accelerators — Nandeeka Nayak, University of Illinois at Urbana-Champaign
10:50 (10m) Panel | Discussion