Sun 18 Jun 2023 14:00 - 14:15 at Magnolia 4 - CTSTA: Session 3

This talk presents WACO, a novel method for co-optimizing the format and the schedule of a sparse tensor program for a given sparsity pattern. A core challenge is designing a lightweight cost model that accurately predicts the runtime of a sparse tensor program by considering the sparsity pattern, the format, and the schedule. The key idea is to use a sparse convolutional network to learn meaningful features of the sparsity pattern and to capture the coupled behavior of the format and the schedule with a specially designed schedule template. We evaluated WACO on four algorithms (SpMV, SpMM, SDDMM, and MTTKRP) on a CPU using 726 different sparsity patterns. Our experimental results showed that WACO outperformed four state-of-the-art baselines: Intel MKL, BestFormat, TACO with a default schedule, and ASpT. Compared to the best of the four baselines, WACO achieved average speedups of 1.43×, 1.18×, 1.14×, and 1.27× on SpMV, SpMM, SDDMM, and MTTKRP, respectively.
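The sketch below is a minimal illustration of the co-optimization loop the abstract describes: a learned cost model scores (sparsity pattern, format, schedule) candidates, and the candidate with the lowest predicted runtime is chosen. The summary features, candidate encoding, and linear model here are stand-ins invented for illustration; WACO itself learns pattern features with a sparse convolutional network and encodes the format and schedule jointly through its schedule template.

```python
# Illustrative sketch of format/schedule co-optimization with a learned cost model.
# NOTE: the features, encoding, and linear model are hypothetical simplifications,
# not WACO's actual architecture.
import numpy as np

def pattern_features(rows, cols, nnz_coords):
    """Cheap summary features of a sparsity pattern (stand-in for learned sparse-CNN features)."""
    nnz = len(nnz_coords)
    density = nnz / (rows * cols)
    row_counts = np.bincount([r for r, _ in nnz_coords], minlength=rows)
    imbalance = row_counts.std() / (row_counts.mean() + 1e-9)
    return np.array([np.log1p(nnz), density, imbalance])

def candidate_features(fmt, schedule):
    """Encode a (format, schedule) candidate; this encoding is hypothetical."""
    fmt_id = {"CSR": 0.0, "DCSR": 1.0, "BCSR": 2.0}[fmt]
    return np.array([fmt_id, np.log2(schedule["tile_rows"]), float(schedule["parallel"])])

def predict_runtime(weights, pat_feat, cand_feat):
    """Linear cost model over concatenated features; a trained network in practice."""
    x = np.concatenate([pat_feat, cand_feat, [1.0]])  # append a bias term
    return float(weights @ x)

def co_optimize(weights, rows, cols, nnz_coords, candidates):
    """Return the (format, schedule) pair with the lowest predicted runtime."""
    pat = pattern_features(rows, cols, nnz_coords)
    return min(candidates,
               key=lambda c: predict_runtime(weights, pat, candidate_features(*c)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    coords = list({(int(rng.integers(1000)), int(rng.integers(1000)))
                   for _ in range(5000)})
    candidates = [(f, {"tile_rows": t, "parallel": p})
                  for f in ("CSR", "DCSR", "BCSR")
                  for t in (16, 64, 256)
                  for p in (False, True)]
    weights = rng.normal(size=7)  # in practice, learned from measured runtimes
    print(co_optimize(weights, 1000, 1000, coords, candidates))
```

In WACO, training such a cost model on measured runtimes is what makes the search over formats and schedules cheap enough to run per sparsity pattern.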

Sun 18 Jun

Displayed time zone: Eastern Time (US & Canada)

14:00 - 15:30: CTSTA Session 3

Time  | Length | Type        | Title                                                                  | Speaker(s)
14:00 | 15m    | Talk        | Learning workload-aware cost model for sparse tensor program          | Jaeyeon Won (Massachusetts Institute of Technology)
14:15 | 15m    | Talk        | Autoscheduling for Sparse Tensor Contraction                          | Kirshanthan Sundararajah (Purdue University)
14:30 | 10m    | Panel       | Discussion                                                             |
14:40 | 15m    | Talk        | Fantastic Sparse Masks and Where to Find Them                         | Shiwei Liu (The University of Texas at Austin, Texas, USA)
14:55 | 15m    | Talk        | Moving the MLIR Sparse Compilation Pipeline into Production (Virtual) | Aart Bik (Google), Peiming Liu (Google)
15:10 | 15m    | Panel       | Discussion                                                             |
15:25 | 5m     | Day closing | Closing                                                                | Fredrik Kjolstad (Stanford University), Saman Amarasinghe (Massachusetts Institute of Technology)