Hello! I'm a Ph.D. student at UMass Amherst, advised by Professor Amir Houmansadr.
Broadly, my research interests lie in the security and privacy of machine learning.
I am currently working on two specific areas: 1) exploiting and fixing the vulnerabilities of federated learning to various types of poisoning threats, and 2) developing private-information inference attacks and defenses for centralized and distributed learning algorithms.
Before starting my Ph.D., I worked on FELICS, a performance benchmarking tool for lightweight cryptography hardware, in the CryptoLux group at the University of Luxembourg under the guidance of Professor Alex Biryukov. I completed my undergraduate studies in Electrical and Electronics Engineering at IIT Bombay, where, for my thesis, I developed countermeasures against side-channel attacks on AES hardware under the guidance of Professor Virendra Singh.
Publications
-
The Perils of Learning From Unlabeled Data: Backdoor Attacks on Semi-supervised Learning
Virat Shejwalkar, Lingjuan Lyu, and Amir Houmansadr
IEEE/CVF International Conference on Computer Vision (ICCV), 2023
-
Recycling Scraps: Improving Private Learning Using Intermediate Checkpoints
AAAI Privacy Preserving Artificial Intelligence (PPAI) Workshop, 2023
Virat Shejwalkar, Arun Ganesh, Rajiv Mathews, Om Thakkar, and Abhradeep Guha Thakurta
-
On The Pitfalls of Security Evaluation of Robust Federated Learning
Momin Khan, Virat Shejwalkar, Amir Houmansadr, and Fatima Anwar
Deep Learning Security and Privacy Workshop at IEEE S&P, 2023
-
Every Vote Counts: Ranking-Based Training of Federated Learning to Resist Poisoning Attacks
Hamid Mozaffari, Virat Shejwalkar, and Amir Houmansadr
To appear in the 32nd USENIX Security Symposium, 2023
-
Machine Learning with Differentially Private Labels: Mechanisms and Frameworks
Xinyu Tang, Milad Nasr, Saeed Mahloujifar, Virat Shejwalkar, Liwei Song, Amir Houmansadr, and Prateek Mittal
Proceedings on Privacy Enhancing Technologies (PETS) Symposium, 2022
-
Back to the Drawing Board: A Critical Evaluation of Poisoning Attacks on Production Federated Learning
[Talk]
[Code]
Virat Shejwalkar, Amir Houmansadr, Peter Kairouz, and Daniel Ramage
43rd IEEE Symposium on Security & Privacy (Oakland), 2022
-
Mitigating Membership Inference Attacks by Self-Distillation Through a Novel Ensemble Architecture
Xinyu Tang, Saeed Mahloujifar, Liwei Song, Virat Shejwalkar, Milad Nasr, Amir Houmansadr, and Prateek Mittal
31st USENIX Security Symposium, 2022
NeurIPS Workshop on Privacy Preserving Machine Learning (PPML), 2021
-
Systematic Privacy Risk Analysis of Natural Language Processing Classification Models
Virat Shejwalkar, Huseyin Inan, Amir Houmansadr, and Robert Sim
NeurIPS Workshop on Privacy Preserving Machine Learning (PPML), 2021
-
Cronus: Robust and Heterogeneous Collaborative Learning with Black-Box Knowledge Transfer
Hongyan Chang, Virat Shejwalkar, Reza Shokri, and Amir Houmansadr
NeurIPS Workshop on New Frontiers in Federated Learning (NFFL), 2021
-
Manipulating the Byzantine: Optimizing Model Poisoning Attacks and Defenses for Federated Learning
[Poster]
[Talk]
[Code]
Virat Shejwalkar and Amir Houmansadr
28th Network and Distributed System Security Symposium (NDSS), 2021
NeurIPS Workshop on Scalability, Privacy, and Security in Federated Learning (SpicyFL), 2020
-
Membership Privacy for Machine Learning Models Through Knowledge Transfer
[Poster]
[Slides]
Virat Shejwalkar and Amir Houmansadr
35th AAAI Conference on Artificial Intelligence (AAAI), 2021
NeurIPS Workshop on Privacy Preserving Machine Learning (PPML), 2020
-
GECKO: Reconciling Privacy, Accuracy and Efficiency in Embedded Deep Learning
Vasisht Duddu, Antoine Boutet, and Virat Shejwalkar
NeurIPS Workshop on Privacy Preserving Machine Learning (PPML), 2020
-
Quantifying Privacy Leakage in Graph Embedding
Vasisht Duddu, Virat Shejwalkar, and Antoine Boutet
EAI MobiQuitous, 2020
-
Leveraging Prior Knowledge Asymmetries in the Design of Location Privacy-Preserving Mechanisms
Nazanin Takbiri, Virat Shejwalkar, Amir Houmansadr, Dennis Goeckel, and Hossein Pishro-Nik
IEEE Wireless Communications Letters, 2020
-
Revisiting Utility Metrics for Location Privacy Preserving Mechanisms
[Code]
Virat Shejwalkar, Amir Houmansadr, Hossein Pishro-Nik, and Dennis Goeckel
35th ACM Annual Computer Security Applications Conference (ACSAC), 2019