Autoscheduling for Sparse Tensor Contraction
Sparse tensor contractions have become ubiquitous in applications such as data mining and scientific simulation. Transformations that reorder, fuse, and distribute loops can change the asymptotic performance of a sparse tensor contraction.
Previously, we introduced SparseLNR, which adds a loop-fusion scheduling directive to the Tensor Algebra Compiler (TACO). SparseLNR splits a tensor expression and saves the intermediate result in memory; applied recursively and combined with loop reordering, this opens up a very large scheduling space (i.e., the set of possible schedules for the tensor expression). Given the size of this space, running the code for every schedule and picking the best one is impractical. We therefore present a framework that analyzes the scheduling space and identifies the candidate schedules with the best asymptotic and memory behavior. We show how to encode size- and sparsity-related constraints in the framework and how they help refine autoscheduling.
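To make the splitting idea concrete, below is a minimal sketch of the two extremes of the scheduling space for the contraction A(i,l) = Σ_j Σ_k B(i,j) C(j,k) D(k,l). This is not SparseLNR's actual implementation or TACO's API: dense row-major arrays stand in for sparse formats, and the sizes and function names are illustrative.

```cpp
#include <cstdio>
#include <vector>

// Illustrative sizes; in SparseLNR/TACO these come from the tensor
// dimensions and sparsity statistics, not fixed constants.
constexpr int NI = 32, NJ = 32, NK = 32, NL = 32;

using Mat = std::vector<double>;  // dense stand-in for a sparse format

// Schedule A: one fully nested loop nest, no intermediate.
// Cost: O(NI*NJ*NK*NL) multiply-adds, O(1) extra memory.
void fused(const Mat& B, const Mat& C, const Mat& D, Mat& A) {
  for (int i = 0; i < NI; ++i)
    for (int j = 0; j < NJ; ++j)
      for (int k = 0; k < NK; ++k)
        for (int l = 0; l < NL; ++l)
          A[i * NL + l] += B[i * NJ + j] * C[j * NK + k] * D[k * NL + l];
}

// Schedule B: split the expression and save the intermediate
// T(i,k) = sum_j B(i,j) * C(j,k), then contract T with D.
// Cost: O(NI*NJ*NK + NI*NK*NL) multiply-adds, O(NI*NK) extra memory.
void split(const Mat& B, const Mat& C, const Mat& D, Mat& A) {
  Mat T(NI * NK, 0.0);
  for (int i = 0; i < NI; ++i)
    for (int j = 0; j < NJ; ++j)
      for (int k = 0; k < NK; ++k)
        T[i * NK + k] += B[i * NJ + j] * C[j * NK + k];
  for (int i = 0; i < NI; ++i)
    for (int k = 0; k < NK; ++k)
      for (int l = 0; l < NL; ++l)
        A[i * NL + l] += T[i * NK + k] * D[k * NL + l];
}

int main() {
  Mat B(NI * NJ, 1.0), C(NJ * NK, 1.0), D(NK * NL, 1.0);
  Mat A1(NI * NL, 0.0), A2(NI * NL, 0.0);
  fused(B, C, D, A1);
  split(B, C, D, A2);
  // Both schedules compute the same contraction; they differ only in
  // asymptotic work and in the memory held by the intermediate.
  std::printf("A1[0]=%g A2[0]=%g\n", A1[0], A2[0]);
  return 0;
}
```

Each choice of split point, combined with every legal reordering of the loops in each resulting kernel, is a distinct schedule; for sparse operands the best one depends on the dimension sizes and nonzero counts, which is exactly what the size- and sparsity-related constraints encode.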
Sun 18 Jun (displayed time zone: Eastern Time, US & Canada)
14:00 - 15:30

| Time  | Type             | Title                                                                  | Track | Speaker                                             |
|-------|------------------|------------------------------------------------------------------------|-------|-----------------------------------------------------|
| 14:00 | Talk (15m)       | Learning workload-aware cost model for sparse tensor program           | CTSTA | Jaeyeon Won, Massachusetts Institute of Technology  |
| 14:15 | Talk (15m)       | Autoscheduling for Sparse Tensor Contraction                           | CTSTA | Kirshanthan Sundararajah, Purdue University         |
| 14:30 | Panel (10m)      | Discussion                                                             | CTSTA |                                                     |
| 14:40 | Talk (15m)       | Fantastic Sparse Masks and Where to Find Them                          | CTSTA | Shiwei Liu, The University of Texas at Austin, Texas, USA |
| 14:55 | Talk (15m)       | Moving the MLIR Sparse Compilation Pipeline into Production (virtual)  | CTSTA |                                                     |
| 15:10 | Panel (15m)      | Discussion                                                             | CTSTA |                                                     |
| 15:25 | Day closing (5m) | Closing                                                                | CTSTA |                                                     |