Eugene Bagdasarian

Other spellings: Eugene Bagdasaryan, Evgeny Bagdasaryan

  • Email: eugene@umass.edu
  • Office: CS 304

Bio

Eugene is an Assistant Professor at UMass Amherst CICS. His work focuses on security and privacy in emerging AI-based systems and agentic use cases under real-life conditions and attacks.

He completed his PhD at Cornell Tech, advised by Vitaly Shmatikov and Deborah Estrin. His research was recognized with Apple Scholars in AI/ML and Digital Life Initiative fellowships and a USENIX Security Distinguished Paper Award. He received an engineering degree from Bauman Moscow State Technical University (Baumanka) and has extensive industry experience as a software engineer (Cisco, Amazon, Apple); he currently spends part of his time as a Research Scientist at Google.

Eugene grew up in Tashkent and plays water polo.

Research

Security: He worked on backdoor attacks in federated learning, proposed the Backdoors101 and Mithridates frameworks, and developed a new attack on generative language models that was covered by VentureBeat and The Economist. Recent work studies vulnerabilities in multi-modal systems: instruction injections, adversarial illusions, and injecting biases into text-to-image models.

Privacy: Eugene worked on Air Gap privacy protection for LLM agents and on operationalizing Contextual Integrity. He has also worked on aspects of differential privacy, including fairness trade-offs, applications to location heatmaps, and tokenization methods for private federated learning. Additionally, he helped build the Ancile system, which enforces use-based privacy of user data.

Announcement 1: I am looking for PhD students (apply) and postdocs to work on attacks on LLM agents and generative models. Please reach out over email!

Announcement 2: We will be holding a seminar, CS 692PA, on Privacy and Security for GenAI Models; please sign up if you are interested.

Recent news

  • July 2024, at CCS'24 we will show how Mithridates defends against poisoning without modifying the ML pipeline.
  • July 2024, our work on privacy-conscious agents will appear at CCS'24.
  • May 2024, Adversarial Illusions received a Distinguished Paper Award at USENIX Security'24.