Sun 18 Jun 2023 11:35 - 11:50 at Magnolia 4 - CTSTA: Session 2

Sparse tensor decomposition and completion are common in numerous applications, ranging from machine learning to computational quantum chemistry. Typically, the main bottleneck in optimizing these models is the contraction of a single large sparse tensor with a network of several dense matrices or tensors (SpTTN).

Prior work on high-performance tensor decomposition and completion has focused on performance and scalability optimizations for specific SpTTN kernels. We present algorithms and a runtime system for identifying and executing the most efficient loop nest for any SpTTN kernel. We consider both enumeration of such loop nests for autotuning and efficient algorithms for finding the lowest-cost loop nest under simpler metrics, such as buffer size or cache-miss models. Our runtime system identifies the best choice of loop nest without user guidance, and also provides a distributed-memory parallelization of SpTTN kernels. We evaluate our framework using both real-world and synthetic tensors. Our results demonstrate that our approach outperforms available general-purpose state-of-the-art libraries and matches the performance of specialized codes.
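To make the kernel class concrete, the sketch below (an illustration for this listing, not the authors' code or API) shows one widely used SpTTN instance: the MTTKRP contraction that dominates CP decomposition, executed as a single fused loop nest over the nonzeros of the sparse tensor. The function name mttkrp_fused and the COO-style inputs coords and vals are hypothetical choices for the example.

```python
import numpy as np

# MTTKRP: M[i, r] += T[i, j, k] * B[j, r] * C[k, r]
# A fused loop nest over the nonzeros of the sparse tensor T contracts it
# with the dense factor matrices B and C without materializing any large
# dense intermediate (such as T contracted with C alone).

def mttkrp_fused(coords, vals, B, C, num_rows):
    """coords: (nnz, 3) array of (i, j, k) indices of nonzeros;
    vals: (nnz,) nonzero values; B: (J, R) and C: (K, R) dense factors.
    Returns the dense (num_rows, R) result M."""
    R = B.shape[1]
    M = np.zeros((num_rows, R))
    for (i, j, k), v in zip(coords, vals):   # outer loops over sparse nonzeros
        M[i, :] += v * B[j, :] * C[k, :]     # innermost dense loop over r
    return M

# Toy usage: a 4x3x2 sparse tensor with 3 nonzeros and rank-2 factors.
coords = np.array([[0, 1, 0], [2, 0, 1], [3, 2, 1]])
vals = np.array([1.0, 2.0, -1.0])
B = np.random.rand(3, 2)
C = np.random.rand(2, 2)
M = mttkrp_fused(coords, vals, B, C, num_rows=4)
```

Different orderings and fusions of these loops trade off buffer size and data reuse; selecting among them automatically is the role of the loop-nest enumeration and cost models described in the abstract.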

Sun 18 Jun

Displayed time zone: Eastern Time (US & Canada)

11:20 - 12:30: CTSTA: Session 2 at Magnolia 4

11:20 (15m) Talk: Accelerating Sparse Matrix Computations with Code Specialization. Maryam Mehri Dehnavi (University of Toronto). CTSTA
11:35 (15m) Talk: A General Distributed Framework for Contraction of a Sparse Tensor with a Tensor Network. Raghavendra Kanakagiri (University of Illinois Urbana-Champaign). CTSTA
11:50 (15m) Talk: Automatic Differentiation for Sparse Tensors (Virtual). Amir Shaikhha (University of Edinburgh). CTSTA
12:05 (15m) Talk: Compiler Support for Structured Data. Saman Amarasinghe (Massachusetts Institute of Technology). CTSTA
12:20 (10m) Panel: Discussion. CTSTA