Tue 20 Jun 2023, 09:40 - 10:00, at Cypress 1 - PLDI: Security. Chair(s): Limin Jia

The pervasive brittleness of deep neural networks has attracted significant attention in recent years. A particularly interesting finding is the existence of adversarial examples: imperceptibly perturbed natural inputs that induce erroneous predictions in state-of-the-art neural models. In this paper, we study a different type of adversarial example specific to code models, called discrete adversarial examples, which are created through program transformations that preserve the semantics of the original inputs. In particular, we propose a novel, general method that is highly effective in attacking a broad range of code models. From the defense perspective, our primary contribution is a theoretical foundation for applying adversarial training, the most successful algorithm for training robust classifiers, to defending code models against discrete adversarial attacks. Motivated by these theoretical results, we present a simple realization of adversarial training that substantially improves the robustness of code models against adversarial attacks in practice.
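
As a loose illustration of the idea (not the paper's actual algorithm), the toy Python sketch below perturbs a code snippet only through semantics-preserving identifier renamings and greedily keeps whichever variant a surrogate loss scores as most damaging. All names here (rename_identifier, surrogate_loss, discrete_attack) are hypothetical; a real attack would query the victim code model rather than the placeholder loss, and in adversarial training such worst-case variants would be folded back into the training set.

    # Illustrative sketch only; every function here is a hypothetical stand-in,
    # not the method from the paper. Requires Python 3.9+ for ast.unparse.
    import ast


    def rename_identifier(source: str, old: str, new: str) -> str:
        """Rename one identifier everywhere in a snippet (semantics-preserving)."""
        class Renamer(ast.NodeTransformer):
            def visit_Name(self, node):
                if node.id == old:
                    node.id = new
                return node

            def visit_arg(self, node):
                if node.arg == old:
                    node.arg = new
                return node

        return ast.unparse(Renamer().visit(ast.parse(source)))


    def surrogate_loss(source: str, label: int) -> float:
        """Placeholder score; a real attack would query the victim code model's loss."""
        return ((sum(map(ord, source)) + label) % 97) / 97.0


    def discrete_attack(source: str, label: int, candidates: list[str]) -> str:
        """Greedy search over renamings, keeping whichever variant maximizes the loss."""
        tree = ast.parse(source)
        idents = {n.id for n in ast.walk(tree) if isinstance(n, ast.Name)}
        idents |= {n.arg for n in ast.walk(tree) if isinstance(n, ast.arg)}
        used = set(idents)
        best_src, best_loss = source, surrogate_loss(source, label)
        for old in sorted(idents):
            for new in candidates:
                if new in used:  # avoid capturing an existing or already-used name
                    continue
                variant = rename_identifier(best_src, old, new)
                loss = surrogate_loss(variant, label)
                if loss > best_loss:
                    best_src, best_loss = variant, loss
                    used.add(new)
                    break  # one rename per original identifier
        return best_src


    if __name__ == "__main__":
        snippet = "def add(a, b):\n    result = a + b\n    return result\n"
        print(discrete_attack(snippet, label=1, candidates=["tmp", "val", "x0"]))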

We conduct a comprehensive evaluation of both our attack and defense methods. Results show that our discrete attack is significantly more effective than the state of the art, whether or not defense mechanisms are in place to help models resist attacks. In addition, our realization of adversarial training improves the robustness of all evaluated models by the widest margin against both state-of-the-art adversarial attacks and our own.

Tue 20 Jun

Displayed time zone: Eastern Time (US & Canada)

09:00 - 11:00
PLDI: Security (PLDI Research Papers) at Cypress 1
Chair(s): Limin Jia Carnegie Mellon University


09:00
20m
Talk
Obtaining Information Leakage Bounds via Approximate Model Counting
PLDI Research Papers
Seemanta Saha University of California Santa Barbara, Surendra Ghentiyala University of California Santa Barbara, Shihua Lu University of California Santa Barbara, Lucas Bang Harvey Mudd College, Tevfik Bultan University of California at Santa Barbara
DOI
09:20
20m
Talk
CommCSL: Proving Information Flow Security for Concurrent Programs using Abstract Commutativity
PLDI Research Papers
Marco Eilers ETH Zurich, Thibault Dardinier ETH Zurich, Peter Müller ETH Zurich
DOI
09:40
20m
Talk
Discrete Adversarial Attack to Models of Code
PLDI Research Papers
Fengjuan Gao Nanjing University of Science and Technology, Yu Wang Nanjing University, Ke Wang Visa Research
DOI
10:00
20m
Talk
Generalized Policy-Based Noninterference for Efficient Confidentiality-Preservation
PLDI Research Papers
Shamiek Mangipudi Università della Svizzera italiana (USI), Pavel Chuprikov USI Lugano, Patrick Eugster USI Lugano; Purdue University, Malte Viering TU Darmstadt, Savvas Savvides Purdue University
DOI
10:20
20m
Talk
Taype: A Policy-Agnostic Language for Oblivious Computation
PLDI Research Papers
Qianchuan Ye Purdue University, Benjamin Delaware Purdue University
DOI
10:40
20m
Talk
Automated Detection of Under-Constrained Circuits in Zero-Knowledge Proofs
PLDI Research Papers
Shankara Pailoor University of Texas at Austin, Yanju Chen University of California at Santa Barbara, Franklyn Wang Harvard University, 0xparc, Clara Rodríguez-Núñez Complutense University of Madrid, Jacob Van Geffen Veridise Inc., Jason Morton ZKonduit, Michael Chu 0xparc, Brian Gu 0xparc, Yu Feng University of California at Santa Barbara, Işıl Dillig University of Texas at Austin
DOI