Ameesh Shah

PhD Student

UC Berkeley

Hi! I’m currently a Ph.D. student at UC Berkeley, where I’m advised by Sanjit Seshia and gratefully supported by the NDSEG Fellowship.

My research interests lie broadly at the intersection of machine learning and formal methods, with a particular focus on applications in cyber-physical systems, robotics, and program synthesis. By applying formal reasoning to learning algorithms, I hope to (1) better identify when learned models fail in unpredictable ways, (2) prevent such failures via correct-by-construction learning, and (3) design neurosymbolic methods that enable the application of learning algorithms to real-world settings.

I received my Bachelor’s and Master’s degrees from Rice University, where I worked on projects related to interpretable machine learning via program synthesis and formal methods. I was advised by Swarat Chaudhuri, with whom I worked on the “Understanding the World Through Code” NSF Expeditions Project. As an undergrad, I also collaborated with Ankit Patel and Richard Baraniuk. I was fortunate to be supported by the Rice CS Graduate Fellowship.

I’ve also spent time at Microsoft Research AI, where I worked with Alex Polozov and the GRAIL group on interactive ML-driven program synthesis. I am currently collaborating with Jon DeCastro and TRI to help roboticists better model human behavior.

In my free time, I enjoy doing a lot of things, including but not limited to: playing tennis, writing subpar poetry, cooking, and repeatedly fixing and breaking my jumpshot. I’m originally from Cleveland, Ohio, and I love watching my beloved Cavaliers break my heart year after year.

Publications

(2022). Demonstration-Informed Specification Search. Under Review.

PDF

(2022). Learning Concepts from Membership and Preference Queries. Under Review.

(2022). Learning Deterministic Finite Automata Decompositions from Examples and Demonstrations. At FMCAD 2022.

PDF

(2022). Model-based Formalization of the Autonomy-to-Human Perception Hand-off. At ITSC 2022; UC Berkeley Technical Report.

PDF

(2020). Learning Differentiable Programs with Admissible Neural Heuristics. At NeurIPS 2020.

PDF

(2020). Program Learning with Neural Heuristics. Rice University Master’s Thesis.

PDF

(2019). Representing Formal Languages: A Comparison Between Finite Automata and Recurrent Neural Networks. At ICLR 2019.

PDF

(2018). Finite Automata can be Linearly Decoded from Recurrent Neural Networks. Oral Presentation at GCURS 2018.

Contact