Sun 18 Jun 2023 14:15 - 14:30 at Magnolia 4 - CTSTA: Session 3

Sparse tensor contractions have become ubiquitous in many applications such as data mining and scientific simulations. Transformations that reorder, fuse, and distribute loops have asymptotic effects on the performance of sparse tensor contractions.
Previously, we introduced SparseLNR, which adds a loop-fusion scheduling directive to the Tensor Algebra Compiler (TACO). SparseLNR splits a tensor expression and stores the intermediate result in memory; applied recursively and combined with loop reordering, this opens up a very large scheduling space (i.e., many different schedules for the same tensor expression). Given the size of this space, running the code for every schedule and picking the best one is impractical. We therefore present a framework that analyzes the scheduling space and identifies the schedules with the best asymptotic and memory behavior. Within this framework, we show how to encode size- and sparsity-related constraints and how these constraints help refine autoscheduling.
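As a rough sketch of the kind of analysis described above, the Python snippet below enumerates a few candidate schedules for a toy four-index contraction A(i,j) = sum_{k,l} B(i,k)*C(k,l)*D(l,j) (B sparse; C and D dense), expresses each candidate's asymptotic cost and intermediate memory in terms of dimension sizes and nonzero counts, discards candidates that violate a memory budget, and ranks the rest. The kernel, candidate list, cost formulas, and all constants are illustrative assumptions; this is not the SparseLNR/TACO scheduling API or the paper's actual cost model.

```python
"""Toy sketch of pruning a sparse-contraction scheduling space.

Kernel assumed here: A(i,j) = sum_{k,l} B(i,k) * C(k,l) * D(l,j),
with B sparse and C, D dense.  Candidates either keep one fused loop
nest or split the expression and materialize an intermediate tensor.
All names, formulas, and numbers are illustrative assumptions.
"""

# Illustrative sizes and sparsity; these numbers are made up.
I, J, K, L = 100_000, 1_000, 100_000, 64
NNZ_B = 5_000_000                # nonzeros of the sparse operand B(i,k)
MEM_BUDGET = 50_000_000          # allowed intermediate storage, in entries

# Each candidate schedule maps to (asymptotic iteration count, extra memory).
# The formulas are where size- and sparsity-related constraints get encoded.
candidates = {
    # One fused nest: every nonzero of B is combined with all (l, j) pairs.
    "unsplit nest  (order i,k,l,j)":
        (NNZ_B * L * J, 0),
    # Split after k: build W(i,l) = sum_k B(i,k)*C(k,l), then a dense stage.
    "split at W(i,l)  (i,k,l then i,l,j)":
        (NNZ_B * L + I * L * J, I * L),
    # Split the dense pair: build W2(k,j) = sum_l C(k,l)*D(l,j) first.
    "split at W2(k,j) (k,l,j then i,k,j)":
        (K * L * J + NNZ_B * J, K * J),
}

# Constraint: discard any schedule whose intermediate exceeds the budget.
feasible = {name: cm for name, cm in candidates.items() if cm[1] <= MEM_BUDGET}

# Rank the survivors by asymptotic cost instead of running each of them.
for name, (cost, mem) in sorted(feasible.items(), key=lambda kv: kv[1][0]):
    print(f"{name:40s} cost ~ {cost:.2e}   extra memory ~ {mem:.1e}")
```

Even in this toy space, the memory constraint already removes one candidate (the K-by-J intermediate exceeds the budget), and the asymptotic ranking singles out the split at W(i,l); only such schedules would need to be generated and run.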

Sun 18 Jun

Displayed time zone: Eastern Time (US & Canada)

14:00 - 15:30  CTSTA: Session 3 at Magnolia 4

14:00  15m  Talk         Learning workload-aware cost model for sparse tensor program
                         Jaeyeon Won (Massachusetts Institute of Technology)
14:15  15m  Talk         Autoscheduling for Sparse Tensor Contraction
                         Kirshanthan Sundararajah (Purdue University)
14:30  10m  Panel        Discussion
14:40  15m  Talk         Fantastic Sparse Masks and Where to Find Them
                         Shiwei Liu (The University of Texas at Austin, Texas, USA)
14:55  15m  Talk         Moving the MLIR Sparse Compilation Pipeline into Production (Virtual)
                         Aart Bik (Google, Inc.), Peiming Liu (Google, Inc.)
15:10  15m  Panel        Discussion
15:25  5m   Day closing  Closing
                         Fredrik Kjolstad (Stanford University), Saman Amarasinghe (Massachusetts Institute of Technology)