Sun 18 Jun 2023 11:50 - 12:05 at Magnolia 4 - CTSTA: Session 2

Sparse tensors are prevalent in many data-intensive applications, yet existing automatic differentiation (AD) frameworks are tailored to dense tensors, making it challenging to compute gradients through sparse tensor operations efficiently. The difficulty stems from irregular sparsity patterns, which can incur substantial memory and computational overheads. We propose a novel framework for efficient AD of sparse tensors. Its key components are a compilation pipeline that leverages two intermediate DSLs with AD-agnostic domain-specific optimizations, followed by efficient C++ code generation. Our framework outperforms state-of-the-art alternatives across a variety of synthetic and real-world sparse tensor datasets.
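To make the core difficulty concrete, below is a minimal C++ sketch of forward-mode differentiation through a sparse dot product. It is an illustration only, not the authors' pipeline or generated code, and the names (SparseVec, sparse_dot_with_grad) are hypothetical. The point it shows: the gradient of a sparse operation inherits the operand's sparsity pattern, so work stays proportional to the number of nonzeros, whereas a dense AD framework would materialize and traverse every entry.

#include <cstddef>
#include <iostream>
#include <vector>

// Sparse vector in coordinate form: parallel index/value arrays.
// (Hypothetical type for illustration.)
struct SparseVec {
    std::vector<std::size_t> idx;
    std::vector<double> val;
};

// Computes y = dot(a, x) for sparse a and dense x, and also dy/dx
// restricted to a's nonzero pattern: dy/dx_i = a_i, which is zero
// everywhere a is zero, so the gradient is itself sparse.
double sparse_dot_with_grad(const SparseVec& a,
                            const std::vector<double>& x,
                            SparseVec& grad_x) {
    double y = 0.0;
    grad_x.idx.clear();
    grad_x.val.clear();
    for (std::size_t k = 0; k < a.idx.size(); ++k) {
        y += a.val[k] * x[a.idx[k]];
        grad_x.idx.push_back(a.idx[k]);  // gradient inherits a's sparsity
        grad_x.val.push_back(a.val[k]);
    }
    return y;  // cost is O(nnz), not O(n)
}

int main() {
    SparseVec a{{1, 4}, {2.0, -3.0}};               // nonzeros at positions 1 and 4
    std::vector<double> x{1.0, 1.0, 1.0, 1.0, 1.0};
    SparseVec g;
    double y = sparse_dot_with_grad(a, x, g);
    std::cout << "y = " << y << "\n";               // prints y = -1
    for (std::size_t k = 0; k < g.idx.size(); ++k)
        std::cout << "dy/dx[" << g.idx[k] << "] = " << g.val[k] << "\n";
}

A dense AD tool applied to the same computation would allocate and differentiate all five entries of x; on real sparse tensors with billions of structural zeros, exploiting the pattern as above is what keeps gradient computation tractable.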

Sun 18 Jun

Displayed time zone: Eastern Time (US & Canada)

11:20 - 12:30  CTSTA: Session 2 (Magnolia 4)

11:20  15m  Talk   Accelerating Sparse Matrix Computations with Code Specialization
                   Maryam Mehri Dehnavi (University of Toronto)
11:35  15m  Talk   A General Distributed Framework for Contraction of a Sparse Tensor with a Tensor Network
                   Raghavendra Kanakagiri (University of Illinois Urbana-Champaign)
11:50  15m  Talk   Automatic Differentiation for Sparse Tensors (Virtual)
                   Amir Shaikhha (University of Edinburgh)
12:05  15m  Talk   Compiler Support for Structured Data
                   Saman Amarasinghe (Massachusetts Institute of Technology)
12:20  10m  Panel  Discussion