Eugene Bagdasarian

Assistant Professor, UMass Amherst CICS, he/him

eugene@umass.edu
Office: CS 304

About

I am an Assistant Professor at UMass Amherst CICS. Our group studies security and privacy attack vectors for AI systems deployed in real life. I co-lead the AI Security Lab at UMass with Amir Houmansadr and the AI Safety Initiative with Shlomo Zilberstein. I also work part-time at Google on privacy-conscious agents.

I completed my PhD at Cornell Tech, advised by Vitaly Shmatikov and Deborah Estrin. My research was recognized by the Apple Scholars in AI/ML and Digital Life Initiative fellowships, as well as a USENIX Security Distinguished Paper Award. At UMass, we received a Schmidt Sciences AI Safety grant. Before grad school, I earned an engineering degree from Baumanka and worked at Cisco as a software engineer.

I grew up in Tashkent and play water polo.

Announcement 1: I am looking for PhD students (apply) and postdocs to work on attacks on LLM agents and generative models. Please reach out over email and fill out the form!

Announcement 2: We are running a seminar on AI Safety, Security, and Privacy with incredible speakers. Please sign up here to be added to the mailing list and receive calendar invites!

Research

Agents

We proposed AirGapAgent (CCS'24) for privacy and Conseca (HotOS'25) for security, both leveraging the theory of Contextual Integrity, and also applied CI to form filling (TMLR'25). Recently, we studied prompt leakage in research agents and throttling of web agents to prevent denial of service and scraping.

Backdoors

We worked on backdoor attacks in federated learning (AISTATS'20) and proposed the Backdoors101 (Security'20) and Mithridates (CCS'24) frameworks for understanding and defending against backdoor attacks. We also studied backdooring bias (Security'25) into diffusion models.

Multimodal Security

We studied vulnerabilities in multimodal systems, including self-interpreting images (Security'25) and adversarial illusions (Security'24, Distinguished Paper Award) that can manipulate vision-language models. Our work demonstrates novel attack vectors in systems that process both visual and textual information.

LLM and Reasoning Attacks

We developed attacks on large language models, including an attack (S&P'22) on generative language models that enables propaganda generation. We recently proposed the OverThink attack on reasoning models, which exploits their chain-of-thought mechanisms.

Privacy

We worked on aspects of differential privacy, including fairness trade-offs (NeurIPS'19), applications to location heatmaps (PETS'21), and tokenization methods (ACL FL workshop'22) for private federated learning. We built the Ancile (WPES'19) system, which enforces use-based privacy of user data.

PhD students: Abhinav Kumar, June Jeong (co-advised with Amir Houmansadr).

Recent collaborators: Amir Houmansadr, Shlomo Zilberstein, Brian Levine, Kyle Wray, Ali Naseh, Jaechul Roh, Dzung Pham, and many others.

Impact

  • Our research has been covered by VentureBeat and The Economist.
  • Our work on adversarial illusions received a Distinguished Paper Award at USENIX Security'24.
  • The NIST Taxonomy on Adversarial Machine Learning draws on our work on federated learning and AI agents.
  • Multiple key research roadmaps (e.g., on DP and FL) use our work as a foundation for future studies.
Current Funding:

Teaching

Courses

CS 360: Intro to Security, SP'25.

CS 692PA: Seminar on Privacy and Security for GenAI models, FA'24, SP'25, FA'25

CS 690: Trustworthy and Responsible AI, FA'25 (link)

Service

Academic Service

Program Committees:

  • ACM CCS'24/'25, ICLR'25, IEEE S&P'26

Workshop Organizer:

Broadening Participation

Eugene co-organizes PLAIR (Pioneer Leaders in AI and Robotics), an outreach program that introduces high school students across Western Massachusetts to the world of robotics and AI safety. Please reach out if you are interested in joining.

Recent News