The second Workshop on Compiler Techniques for Sparse Tensor Algebra (CTSTA) aims to bring together researchers interested in compiler techniques, programming abstractions, libraries/frameworks, algorithms, and hardware for sparse tensor algebra and sparse array programs. Sparse tensor algebra is widely used across many disciplines where performance is critical, including scientific computing, machine learning, and data analytics. The large number of applications, optimization techniques, data structures, and specialized hardware targets creates a need for automation, and recent years have seen significant interest in compiler techniques that automatically generate sparse tensor algebra code. This workshop brings together leading researchers from academia and industry for talks on applications, code generation, source code transformation and optimization, automatic scheduling, data structure modeling, compilation to different types of hardware, specialized accelerators, extensions to new types of sparse array operations, and applications of these techniques beyond sparsity, such as lossless compression. The workshop will last one day and will include invited talks, discussion, and submitted talks.
Sun 18 Jun (all times Eastern Time, US & Canada)
Session 1: 09:00 - 11:00

09:00 (5m)  Day opening: Introduction. Fredrik Kjolstad, Stanford University
09:05 (15m) Talk: Software and Hardware for Sparse ML. Fredrik Kjolstad, Stanford University
09:20 (15m) Talk: Integrating Data Layout into Compilers and Code Generators. Mary Hall, University of Utah
09:35 (15m) Talk: Tackling the challenges of high-performance graph analytics at compiler level. Gokcen Kestor, Pacific Northwest National Laboratory
09:50 (10m) Panel: Discussion
10:00 (5m)  Break: Social
10:05 (15m) Talk: Challenges and Opportunities for Sparse Compilers in LLM. Zihao Ye, University of Washington
10:20 (15m) Talk: The Sparse Abstract Machine. Olivia Hsu, Stanford University
10:35 (15m) Talk: TeAAL: A Declarative Framework for Modeling Sparse Tensor Accelerators. Nandeeka Nayak, University of Illinois at Urbana-Champaign
10:50 (10m) Panel: Discussion
Session 2: 11:20 - 12:30

11:20 (15m) Talk: Accelerating Sparse Matrix Computations with Code Specialization. Maryam Mehri Dehnavi, University of Toronto
11:35 (15m) Talk: A General Distributed Framework for Contraction of a Sparse Tensor with a Tensor Network. Raghavendra Kanakagiri, University of Illinois Urbana-Champaign
11:50 (15m) Talk (virtual): Automatic Differentiation for Sparse Tensors. Amir Shaikhha, University of Edinburgh
12:05 (15m) Talk: Compiler Support for Structured Data. Saman Amarasinghe, Massachusetts Institute of Technology
12:20 (10m) Panel: Discussion
Session 3: 14:00 - 15:30

14:00 (15m) Talk: Learning workload-aware cost model for sparse tensor program. Jaeyeon Won, Massachusetts Institute of Technology
14:15 (15m) Talk: Autoscheduling for Sparse Tensor Contraction. Kirshanthan Sundararajah, Purdue University
14:30 (10m) Panel: Discussion
14:40 (15m) Talk: Fantastic Sparse Masks and Where to Find Them. Shiwei Liu, The University of Texas at Austin
14:55 (15m) Talk (virtual): Moving the MLIR Sparse Compilation Pipeline into Production
15:10 (15m) Panel: Discussion
15:25 (5m)  Day closing: Closing
Session 4: 16:00 - 17:50

16:00 (1h50m) Poster: Poster Session and Free-Form Discussion
Call for Talks
We are soliciting 15-minute talks for the second Workshop on Compiler Techniques for Sparse Tensor Algebra (CTSTA). Relevant topics include applications, libraries/frameworks, programming language constructs, compilers, and hardware for sparse tensor algebra. Talks can be technical, present new ideas, share your thoughts on future needs, or cover other related topics that you are excited about. Already published ideas are welcome. There will be no proceedings, so talks do not require a submitted paper. If you are interested, please submit a short description (100-200 words).