Eugene Bagdasarian

Another spelling: Eugene Bagdasaryan. Most of my family uses 'i' instead of 'y' due to changes in transliteration, and the spelling of this Armenian name with 'i' is more accurate. Apologies for the confusion; I am fine with whichever spelling you use.

Assistant Professor at UMass Amherst.

eugene@umass.edu
Office: CSL E451, 130 Governors Dr, Amherst, MA 01003

About

I am an Assistant Professor of Computer Science at UMass Amherst CICS. I co-lead the AI Security Lab at UMass with Amir Houmansadr and the AI Safety Initiative with Shlomo Zilberstein. I am also a senior research scientist (part-time) at Google, working on agentic privacy and security.

In my group, we study security and privacy attack vectors for AI systems deployed in real life. My research was recognized by an Apple Scholars in AI/ML Fellowship and a USENIX Security Distinguished Paper Award ('24). My group is supported by the Schmidt Sciences Trustworthy AI Grant. I completed my PhD at Cornell Tech, advised by Vitaly Shmatikov and Deborah Estrin. Before grad school, I earned an engineering degree from Baumanka and worked at Cisco as a software engineer.

I grew up in Tashkent and play water polo.

Recruitment: I am not looking for new PhD students.

Announcement: We are running a Trustworthy AI Seminar Series with Emiliano De Cristofaro. Check out the speakers and subscribe for videos and emails.

Current Research

Multi-Agent AI Systems

We proposed AirGapAgent (CCS'24) for privacy and Conseca (HotOS'25) for security of agents, leveraging the theory of Contextual Integrity. Recently, we studied prompt leakage (USENIX Security'26) in research agents and investigated throttling to prevent denial of service or scraping by web agents. Finally, our Colosseum framework discovers new collusion artifacts in multi-agent systems.

LLM and Reasoning Attacks

We developed attacks on large language models, including an attack (S&P'22) on generative language models that enables propaganda generation. We recently proposed the OverThink attack on reasoning models, which exploits their chain-of-thought mechanisms.

Privacy

We applied Contextual Integrity to form-filling tasks (TMLR'25) and investigated context ambiguity (NeurIPS'25) in privacy reasoning. We worked on aspects of differential privacy including fairness trade-offs (NeurIPS'19), applications to location heatmaps (PETS'21), and tokenization methods (ACL FL workshop'22) for private federated learning. We built the Ancile (WPES'19) system that enforces use-based privacy of user data.

Earlier Topics

Backdoors

We worked on backdoor attacks in federated learning (AISTATS'20) and proposed frameworks Backdoors101 (USENIX Security'20) and Mithridates (CCS'24) for understanding and defending against backdoor attacks. We also studied backdooring bias (USENIX Security'25) into diffusion models.

Multi-modal Security

We studied vulnerabilities in multi-modal systems including self-interpreting images (USENIX Security'25) and adversarial illusions (USENIX Security'24, Distinguished Paper) that can manipulate vision-language models. Our work demonstrates novel attack vectors in systems that process both visual and textual information.

PhD students: Abhinav Kumar, June Jeong (co-advised with Amir Houmansadr), Dzung Pham (co-advised with Amir Houmansadr).

Recent collaborators: Amir Houmansadr, Shlomo Zilberstein, Sahar Abdelnabi, Brian Levine, Kyle Wray, George Bissias, Ali Naseh, Jaechul Roh, Mason Nakamura, Saaduddin Mahmud, and many others.

Teaching

CS 360 · Introduction to Security SP'25, SP'26

Take the class — or explore the interactive demos.

CS 690F · Trustworthy and Responsible AI FA'25, FA'26

Graduate course on safety, privacy, and alignment for modern AI systems — course site.

CS 692PA · Seminar on Privacy and Security for GenAI FA'24, SP'25, FA'25

Reading group on the latest research in GenAI security — seminar site.

Service

Academic Service

Recent Program Committees:

  • ACM CCS'24/'25, ICLR'24/'25/'26, IEEE S&P'26/'27

Broadening Participation

I co-organize PLAIR (Pioneer Leaders in AI and Robotics), an outreach program that introduces high school students across Western Massachusetts to the world of robotics and AI safety. Please reach out if you are interested in joining.

Recent News