Mon 19 Jun 2023 17:40 - 18:00 at Royal - PLDI: Machine Learning Chair(s): Yaniv David

Large language models have demonstrated outstanding performance on a wide range of tasks such as question answering and code generation.
At a high level, given an input, a language model can be used to automatically complete the sequence in a statistically likely way. Based on this, users prompt these models with language instructions or examples to implement a variety of downstream tasks. Advanced prompting methods can even involve interaction between the language model, a user, and external tools such as calculators. However, to obtain state-of-the-art performance or to adapt language models to specific tasks, complex task- and model-specific programs have to be implemented, which may still require ad hoc interaction.

Based on this, we present the novel idea of Language Model Programming (LMP). LMP generalizes language model prompting from pure text prompts to an intuitive combination of text prompting and scripting. Additionally, LMP allows constraints to be specified over the language model output. This enables easy adaptation to many tasks while abstracting language model internals and providing high-level semantics.

To enable LMP, we implement LMQL (short for Language Model Query Language), which leverages the constraints and control flow from an LMP prompt to generate an efficient inference procedure that minimizes the number of expensive calls to the underlying language model.
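To make this concrete, an LMQL query interleaves prompt text, template variables, and constraints. The following sketch follows the notation of the paper's examples (the argmax decoder clause, [VARIABLE] holes, and the len and STOPS_AT constraint functions appear in the paper; the model identifier is illustrative and the exact surface syntax may differ across LMQL versions):

    argmax
       "Q: What is the capital of France?\n"
       "A: [ANSWER]"
    from
       "openai/text-davinci-003"
    where
       len(ANSWER) < 20 and STOPS_AT(ANSWER, "\n")

The where clause bounds the decoded variable ANSWER during generation rather than filtering afterwards, which is what allows the runtime to prune invalid continuations early.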

We show that LMQL can capture a wide range of state-of-the-art prompting methods in an intuitive way, especially facilitating interactive flows that are challenging to implement with existing high-level APIs. Our evaluation shows that we retain or increase the accuracy on several downstream tasks, while also significantly reducing the required amount of computation or cost in the case of pay-to-use APIs (26–85% cost savings).
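The savings largely come from evaluating constraints eagerly during decoding: tokens that can no longer lead to a valid output are masked out instead of being generated and later discarded. The following minimal Python sketch illustrates this idea only; it is not LMQL's actual implementation, and model_score, vocab, and constraint_ok are hypothetical stand-ins:

    # Sketch of constraint-guided greedy decoding (illustrative only).
    # constraint_ok(seq) must be prefix-checkable: it returns True when
    # the partial sequence seq can still be extended to a valid output.
    def decode(model_score, vocab, constraint_ok, max_len):
        out = []
        for _ in range(max_len):
            # Mask: keep only tokens whose extension may still satisfy
            # the constraint, so invalid branches are never sampled.
            candidates = [t for t in vocab if constraint_ok(out + [t])]
            if not candidates:
                break  # the constraint forces termination here
            out.append(max(candidates, key=lambda t: model_score(out, t)))
        return out

Because pruning happens before a token is committed, no model calls are spent expanding continuations that a later check would reject; this is the mechanism behind the reduced computation and API cost.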

Mon 19 Jun

Displayed time zone: Eastern Time (US & Canada)

16:00 - 18:00
PLDI: Machine Learning (PLDI Research Papers) at Royal
Chair(s): Yaniv David Columbia University


16:00
20m
Talk
Scallop: A Language for Neurosymbolic Programming
PLDI Research Papers
Ziyang Li University of Pennsylvania, Jiani Huang University of Pennsylvania, Mayur Naik University of Pennsylvania
DOI
16:20
20m
Talk
Abstract Interpretation of Fixpoint Iterators with Applications to Neural Networks
PLDI Research Papers
Mark Niklas Müller ETH Zurich, Marc Fischer ETH Zurich, Robin Staab ETH Zurich, Martin Vechev ETH Zurich
DOI
16:40
20m
Talk
Register Tiling for Unstructured Sparsity in Neural Network Inference
PLDI Research Papers
Lucas Wilkinson University of Toronto, Kazem Cheshmi McMaster University, Maryam Mehri Dehnavi University of Toronto
DOI
17:00
20m
Talk
Architecture-Preserving Provable Repair of Deep Neural Networks
PLDI Research Papers
Zhe Tao University of California, Davis, Stephanie Nawas University of California, Davis, Jacqueline Mitchell University of California, Davis, Aditya V. Thakur University of California, Davis
DOI Pre-print
17:20
20m
Talk
Incremental Verification of Neural Networks
PLDI Research Papers
Shubham Ugare University of Illinois at Urbana-Champaign, Debangshu Banerjee University of Illinois at Urbana-Champaign, Sasa Misailovic University of Illinois at Urbana-Champaign, Gagandeep Singh University of Illinois at Urbana-Champaign
DOI
17:40
20m
Talk
Prompting Is Programming: A Query Language for Large Language Models
PLDI Research Papers
Luca Beurer-Kellner ETH Zurich, Marc Fischer ETH Zurich, Martin Vechev ETH Zurich
DOI