A General Distributed Framework for Contraction of a Sparse Tensor with a Tensor Network
Sparse tensor decomposition and completion are common in numerous applications, ranging from machine learning to computational quantum chemistry. Typically, the main bottleneck in optimizing these models is the contraction of a single large sparse tensor with a network of several dense matrices or tensors (SpTTN).
Prior work on high-performance tensor decomposition and completion has focused on performance and scalability optimizations for specific SpTTN kernels. We present algorithms and a runtime system for identifying and executing the most efficient loop nest for any SpTTN kernel. We consider both enumeration of such loop nests for autotuning and efficient algorithms for finding the lowest-cost loop nest under simpler metrics, such as buffer size or cache miss models. Our runtime system identifies the best choice of loop nest without user guidance, and also provides a distributed-memory parallelization of SpTTN kernels. We evaluate our framework using both real-world and synthetic tensors. Our results demonstrate that our approach outperforms available general-purpose state-of-the-art libraries and matches the performance of specialized codes.
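For context, a canonical instance of an SpTTN kernel is the matricized tensor times Khatri-Rao product (MTTKRP) arising in CP decomposition, where a sparse third-order tensor is contracted with two dense factor matrices. The sketch below is not taken from the paper or its runtime system; it is a minimal NumPy illustration (the names mttkrp_coo, inds, vals, B, C are hypothetical) of one possible loop nest, fused over the nonzeros of the sparse tensor so that no large dense intermediate is formed.

```python
import numpy as np

def mttkrp_coo(inds, vals, B, C, I):
    """One loop nest for the MTTKRP kernel A[i,r] = sum_{j,k} T[i,j,k] * B[j,r] * C[k,r].

    inds : (nnz, 3) integer array of (i, j, k) coordinates of nonzeros of T
    vals : (nnz,) array of the corresponding nonzero values
    B, C : dense factor matrices of shapes (J, R) and (K, R)
    I    : size of the first mode (number of rows of the output)
    """
    R = B.shape[1]
    A = np.zeros((I, R))
    # Loop over the nonzeros of the sparse tensor; each nonzero contributes
    # a rank-1 update to one row of A. Fusing both contractions inside this
    # loop avoids materializing the dense intermediate T[i,j,k] * B[j,r].
    for (i, j, k), v in zip(inds, vals):
        A[i, :] += v * (B[j, :] * C[k, :])
    return A
```

Different orderings of these loops, and different choices of which partial products to buffer, correspond to the alternative loop nests that a framework of this kind would enumerate and cost.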
Sun 18 Jun, 11:20 - 12:30 (Eastern Time, US & Canada)

11:20 (15m, Talk): Accelerating Sparse Matrix Computations with Code Specialization. Maryam Mehri Dehnavi, University of Toronto. CTSTA
11:35 (15m, Talk): A General Distributed Framework for Contraction of a Sparse Tensor with a Tensor Network. Raghavendra Kanakagiri, University of Illinois Urbana-Champaign. CTSTA
11:50 (15m, Talk, Virtual): Automatic Differentiation for Sparse Tensors. Amir Shaikhha, University of Edinburgh. CTSTA
12:05 (15m, Talk): Compiler Support for Structured Data. Saman Amarasinghe, Massachusetts Institute of Technology. CTSTA
12:20 (10m, Panel): Discussion. CTSTA