Eugene Bagdasarian

Another spelling: Eugene Bagdasaryan. Most of my family spells the name with 'i' instead of 'y' due to changes in transliteration, and the 'i' spelling of this Armenian name is more accurate. Apologies for the confusion; I am fine with whichever spelling you use.

Assistant Professor at UMass Amherst.

eugene@umass.edu
Office: E451

About

I am an Assistant Professor of Computer Science at UMass Amherst CICS. I co-lead the AI Security Lab at UMass with Amir Houmansadr and co-lead the AI Safety Initiative with Shlomo Zilberstein. I am also a senior research scientist (part-time) at Google, working on agentic privacy and security.

In our group, we study security and privacy attack vectors for AI systems deployed in real life. My research has been recognized with an Apple Scholars in AI/ML Fellowship and a USENIX Security Distinguished Paper Award ('24). My group is supported by the Schmidt Sciences Trustworthy AI Grant. I completed my PhD at Cornell Tech, advised by Vitaly Shmatikov and Deborah Estrin. Before grad school, I earned an engineering degree from Baumanka and worked at Cisco as a software engineer.

I grew up in Tashkent and play water polo.

Recruitment: I am looking for new PhD students (apply) and post-docs to work on attacks on LLM agents and generative models. Check out the new gorgeous UMass CICS Building, our incredible AI Security Lab, and the exciting AI Safety Initiative. Please reach out over email and fill out the form.

Research

Multi-Agent AI Systems

We proposed AirGapAgent (CCS'24) for agent privacy and Conseca (HotOS'25) for agent security, both leveraging the theory of Contextual Integrity. Recently, we studied prompt leakage (USENIX Security'26) in research agents and investigated throttling to prevent denial of service or scraping by web agents. Finally, our Colosseum framework discovers new collusion artifacts in multi-agent systems.

LLM and Reasoning Attacks

We developed attacks on large language models, including an attack (S&P'22) on generative language models that enables propaganda generation. We recently proposed the OverThink attack on reasoning models, which exploits their chain-of-thought mechanisms.

Privacy

We applied Contextual Integrity to form-filling tasks (TMLR'25) and investigated context ambiguity (NeurIPS'25) in privacy reasoning. We worked on aspects of differential privacy including fairness trade-offs (NeurIPS'19), applications to location heatmaps (PETS'21), and tokenization methods (ACL FL workshop'22) for private federated learning. We built the Ancile (WPES'19) system that enforces use-based privacy of user data.

Backdoors

We worked on backdoor attacks in federated learning (AISTATS'20) and proposed frameworks Backdoors101 (USENIX Security'20) and Mithridates (CCS'24) for understanding and defending against backdoor attacks. We also studied backdooring bias (USENIX Security'25) into diffusion models.

Multi-modal Security

We studied vulnerabilities in multi-modal systems including self-interpreting images (USENIX Security'25) and adversarial illusions (USENIX Security'24, Distinguished Paper) that can manipulate vision-language models. Our work demonstrates novel attack vectors in systems that process both visual and textual information.

PhD students: Abhinav Kumar, June Jeong (co-advised with Amir Houmansadr), Dzung Pham (co-advised with Amir Houmansadr).

Recent collaborators: Amir Houmansadr, Shlomo Zilberstein, Sahar Abdelnabi, Brian Levine, Kyle Wray, George Bissias, Ali Naseh, Jaechul Roh, Mason Nakamura, Saaduddin Mahmud, and many others.

Impact

Current Funding:

Teaching

Courses

CS 360: Intro to Security, SP'25.

CS 692PA: Seminar on Privacy and Security for GenAI models, FA'24, SP'25, FA'25.

CS 690: Trustworthy and Responsible AI, FA'25 (link)

Service

Academic Service

Recent Program Committees:

  • ACM CCS'24/'25, ICLR'24/'25/'26, IEEE S&P'26

Workshop Organizer:

Broadening Participation

I co-organize PLAIR (Pioneer Leaders in AI and Robotics), an outreach program that introduces high school students across Western Massachusetts to the world of robotics and AI safety. Please reach out if you are interested in joining.

Recent News