Differentiable Symbolic Execution
Neural networks increasingly pervade software, but they are also brittle and opaque. How do we address the threat to software reliability that their adoption poses? One response to this challenge is safety verification of neural and neurosymbolic systems. However, this can only go so far, as a model whose training objective does not include a correctness property may not satisfy the property after training. In this talk, I will discuss some of our recent work that addresses this weakness by integrating verification into the learning loop. Specifically, I will talk about differentiable symbolic execution, an approach in which an algorithm for learning neural modules within a larger software system is offered differentiable feedback from a symbolic executor. I will also summarize ongoing follow-up work that extends the method to reactive software systems. I will conclude with a broader discussion of ways in which safety analysis of programs can be used to provide assurance for the emerging breed of learning-enabled neurosymbolic systems.
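To make the core idea concrete, here is a minimal, self-contained sketch, not the authors' implementation: a symbolic analysis of a program containing a neural module produces a safety constraint, and a smooth relaxation of that constraint is added to the training loss, so the learner receives differentiable feedback from the analysis. The program, the one-parameter "network," and the bound are all hypothetical toy choices.

```python
# Hypothetical program under analysis:
#   u = net(x)              # neural module, here just u = w * x
#   if u > LIMIT: fail()    # unsafe branch
# Symbolic execution over the input box x in [0, 1] characterizes when the
# unsafe branch is reachable; for w >= 0 the worst case is at x = 1.

LIMIT = 0.5

def worst_case_output(w):
    # Symbolic worst case of u = w * x over x in [0, 1] (assumes w >= 0).
    return w * 1.0

def loss(w, xs, ys, lam=100.0):
    # Ordinary regression loss plus a smooth one-sided safety penalty
    # derived from the symbolic worst-case bound.
    task = sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)
    violation = max(0.0, worst_case_output(w) - LIMIT)
    return task + lam * violation ** 2

def grad(w, xs, ys, eps=1e-6):
    # Finite-difference gradient keeps the sketch dependency-free.
    return (loss(w + eps, xs, ys) - loss(w - eps, xs, ys)) / (2 * eps)

# The data alone would pull w toward 1.0, making the unsafe branch reachable;
# the penalty from the symbolic bound holds w near the safe boundary instead.
xs = [0.2, 0.5, 0.9]
ys = [x * 1.0 for x in xs]
w = 0.0
for _ in range(3000):
    w -= 0.002 * grad(w, xs, ys)

print(round(w, 3))
```

In the actual line of work, the analysis is far richer (branching programs, symbolic path conditions, neural components with many parameters), but the training signal has this shape: a differentiable surrogate for a property checked by a symbolic executor, optimized jointly with the task objective.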
Swarat Chaudhuri is an Associate Professor of computer science at the University of Texas at Austin. His research lies at the intersection of Programming Languages (PL) and Machine Learning (ML). Specifically, he studies ways in which PL and ML techniques can be brought together to build robust and trustworthy intelligent systems targeting complex tasks such as software development and robot control.
Swarat received a bachelor’s degree in computer science from the Indian Institute of Technology, Kharagpur, in 2001, and a doctoral degree in computer science from the University of Pennsylvania in 2007. Before joining UT Austin, he held faculty positions at Rice University and the Pennsylvania State University. He is a recipient of the National Science Foundation CAREER award, the ACM SIGPLAN John Reynolds Doctoral Dissertation Award, and the Morris and Dorothy Rubinoff Dissertation Award from the University of Pennsylvania.