About
I am an Assistant Professor of Computer Science at UMass Amherst CICS. In our group, we study security and privacy attack vectors for AI systems deployed in the real world.
I co-lead the AI Security Lab at UMass with Amir Houmansadr and the AI Safety Initiative with Shlomo Zilberstein.
I am also a senior research scientist (part-time) at Google working on agentic privacy
and security.
I completed my PhD at Cornell Tech, advised by Vitaly Shmatikov and Deborah Estrin. My research was recognized by Apple Scholars in AI/ML and Digital Life Initiative fellowships, and a USENIX Security Distinguished Paper Award. At UMass, we received a Schmidt Sciences AI Safety Grant.
Before going to grad school, I received an engineering degree from Baumanka and worked at Cisco as a software engineer.
I grew up in
Tashkent
and play water polo.
Recruitment:
I am looking for new PhD students (apply) and postdocs to work on attacks on LLM agents and generative models.
Check out the gorgeous new UMass CICS building, our incredible AI Security Lab, and the exciting AI Safety Initiative.
Please reach out over email and fill out the form.
Research
Agents
We proposed AirGapAgent (CCS'24) for agent privacy and Conseca (HotOS'25) for agent security, both leveraging the theory of Contextual Integrity. Recently, we studied prompt leakage in research agents and investigated throttling to prevent denial of service and scraping by web agents. Finally, our Terrarium framework enables careful studies of multi-agent systems.
Multi-modal Security
We studied vulnerabilities in multi-modal systems, including self-interpreting images (USENIX Security'25) and adversarial illusions (USENIX Security'24, Distinguished Paper) that can manipulate vision-language models. Our work demonstrates novel attack vectors in systems that process both visual and textual information.
LLM and Reasoning Attacks
We developed attacks on large language models, including an attack on generative models (S&P'22) that enables propaganda generation. We recently proposed the OverThink attack on reasoning models, which exploits their chain-of-thought mechanisms.
Privacy
We applied Contextual Integrity to form-filling tasks (TMLR'25) and investigated context ambiguity (NeurIPS'25) in privacy reasoning. We also worked on aspects of differential privacy, including fairness trade-offs (NeurIPS'19), applications to location heatmaps (PETS'21), and tokenization methods (ACL FL Workshop'22) for private federated learning. We built the Ancile system (WPES'19), which enforces use-based privacy of user data.
PhD students: Abhinav Kumar, June Jeong (co-advised with Amir Houmansadr), Dzung Pham (co-advised with Amir Houmansadr).
Recent collaborators:
Amir Houmansadr,
Shlomo Zilberstein,
Sahar Abdelnabi,
Brian Levine,
Kyle Wray,
Ali Naseh,
Jaechul Roh,
Mason Nakamura,
Saaduddin Mahmud,
and many others.
Teaching
Courses
CS 360: Intro to Security, SP'25.
CS 692PA: Seminar on Privacy and Security for GenAI models,
FA'24, SP'25, FA'25
CS 690: Trustworthy and Responsible AI, FA'25 (link)
Service
Academic Service
Recent Program Committees:
- ACM CCS'24/'25, ICLR'24/'25/'26, IEEE S&P'26
Workshop Organizer:
Broadening Participation
I co-organize
PLAIR
(Pioneer Leaders in AI and Robotics), an outreach program
that introduces high school students across Western Massachusetts to
the world of robotics and AI safety. Please reach out if you are
interested in joining.