Eugene Bagdasarian

Assistant Professor, UMass Amherst CICS

eugene@umass.edu
Office: CS 304

About

Eugene is an Assistant Professor at UMass Amherst CICS. His work focuses on security and privacy in emerging AI-based systems and agents under real-life conditions and attacks. He also spends time at Google working on contextual integrity and privacy-conscious agents.

He completed his PhD at Cornell Tech, advised by Vitaly Shmatikov and Deborah Estrin. His research was recognized by the Apple Scholars in AI/ML and Digital Life Initiative fellowships and a USENIX Security Distinguished Paper Award. He received an engineering degree from Baumanka and worked at Cisco as a quality engineer before going to grad school.

Eugene grew up in Tashkent and plays water polo.

Announcement 1: I am looking for PhD students (apply) and post-docs to work on attacks on LLM agents and generative models. Please reach out over email and fill out the form!

Announcement 2: We are running a seminar, CS 692PA: Privacy and Security for GenAI models. Please sign up if you are interested.

Research

Security: Eugene has worked on backdoor attacks in federated learning, proposing the Backdoors101 and Mithridates frameworks, and on an attack on generative language models covered by VentureBeat and The Economist. He has studied vulnerabilities in multi-modal and agentic systems: instruction injections, adversarial illusions, and adding biases to text-to-image models. Most recently, he proposed the OverThink attack on reasoning models.

Privacy: Eugene worked on the Air Gap privacy protection for AI agents and on contextual integrity. He has also worked on aspects of differential privacy, including fairness trade-offs, applications to location heatmaps, and tokenization methods for private federated learning. Additionally, he helped build the Ancile system, which enforces use-based privacy of user data.

PhD students: Abhinav Kumar. Recent collaborators: Amir Houmansadr, Shlomo Zilberstein, Brian Levine, Kyle Wray, Ali Naseh, Jaechul Roh, Dzung Pham, June Jeong, and many others.

Teaching

Courses

CS 360: Intro to Security, SP'25.

CS 692PA: Seminar on Privacy and Security for GenAI models, FA'24, SP'25, FA'25.

CS 690: Trustworthy and Responsible AI, FA'25 (link TBD)

Service

Academic Service

Program Committees:

  • ACM CCS'24, '23
  • IEEE S&P (Oakland) '25

Workshop Organizer:

Broadening Participation

Eugene co-organizes PLAIR (Pioneer Leaders in AI and Robotics), an outreach program that introduces high school students across Western Massachusetts to the world of robotics and AI safety. Please reach out if you are interested in joining.

Recent News

  • June 2025 Two papers were accepted to USENIX Security'25: one on backdooring bias and one on self-interpreting images.
  • May 2025 I gave an invited talk at the IEEE S&P'25 SAGAI Workshop on contextual defenses for agents. [slides]
  • May 2025 I gave invited talks at the Apple PPML Workshop, Brave Research, and ServiceNow.
  • March 2025 I gave a tutorial at the AAAI'25 PPAI Workshop on contextual integrity for AI agents.
  • March 2025 I gave a keynote at the AAAI'25 Deployable AI Workshop about attacks on inference-heavy pipelines.
  • Oct 2024 At CCS'24, I talked about how to defend against backdoor poisoning with Mithridates.
  • July 2024 The privacy-conscious agents paper was accepted to CCS'24.
  • May 2024 Adversarial Illusions received a Distinguished Paper Award at USENIX Security'24.