Research

I want to ensure that theoretical computer science considers and protects the interests of groups whose perspectives it has historically overlooked.

My research focuses on protecting the rights of individuals at two levels: at a high level, by defining frameworks to analyze how algorithms adhere to societal values (e.g., privacy and fairness); and at a lower level, by explicitly designing systems with built-in abuse mitigation and other protections, rather than depending on policy alone to ensure ethical behavior.

Note: Unfortunately this page is not fully up-to-date with all the exciting things I've been working on! Please check back soon for more recent work.

Algorithmic Fairness and Diversity

Since September 2020, I have been working with Omer Reingold and Judy Shen on how to analyze and design algorithms that encourage diversity.

Secure Source-Tracking for Encrypted Messaging Systems

End-to-end encrypted messaging systems such as WhatsApp are attractive to users because they provide strong privacy and security guarantees. These platforms, however, have become vectors for viral forwards of misinformation and illegal content, facilitated by the very anonymity guarantees they provide. Without knowing who is sending these problematic messages, secure messaging platforms cannot enforce the kind of content moderation that sites such as Twitter employ to prevent the viral spread of misinformation via forwards.

I spent summer 2020 working on a project in applied cryptography under the mentorship of Dan Boneh and Saba Eskandarian. Our work developed tools to control the spread of disinformation and malicious content in end-to-end encrypted messaging systems, and was awarded a CURIS Outstanding Poster Award.
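
To give a flavor of the functionality involved (not the cryptography), here is a deliberately simplified, non-cryptographic toy sketch: the platform attaches an opaque tag to each fresh message, forwards carry the tag along unchanged, and the platform resolves a tag back to the original sender only when a recipient reports the message. The real scheme achieves this with cryptographic guarantees so the platform does not hold this association in the clear; the names `Platform`, `send_fresh`, and `report` below are illustrative, not from the paper.

```python
import secrets

class Platform:
    """Toy model of source-tracking functionality. This is NOT the
    cryptographic scheme from the paper, just an illustration of the
    behavior such a scheme provides."""

    def __init__(self):
        # Private table mapping opaque tags to original senders.
        # In a real scheme, this association would be protected
        # cryptographically rather than stored in plaintext.
        self._sources = {}

    def send_fresh(self, sender, body):
        # A brand-new message gets a fresh opaque tag.
        tag = secrets.token_hex(8)
        self._sources[tag] = sender
        return {"body": body, "tag": tag}

    def forward(self, message):
        # Forwards keep the original tag, so the source travels with them.
        return {"body": message["body"], "tag": message["tag"]}

    def report(self, message):
        # Only when a message is reported does the platform resolve
        # the tag back to the original source.
        return self._sources.get(message["tag"], "unknown")

platform = Platform()
m1 = platform.send_fresh("alice", "breaking news!!")
m2 = platform.forward(m1)   # bob forwards alice's message
m3 = platform.forward(m2)   # carol forwards it again
print(platform.report(m3))  # -> "alice", the original source
```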

Poster (CURIS 2020)

Bounded-Leakage Differential Privacy

How is an individual's privacy affected when a differentially private release of data is combined with an existing background of auxiliary information?

To help answer this question, I worked with Omer Reingold and Katrina Ligett, starting in Fall 2019, to define a new variant of differential privacy: bounded-leakage differential privacy, which gives a tighter measure of privacy with respect to an upper bound on the potential auxiliary information (leakage) available to an adversary.
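
For context, the standard definition that bounded-leakage DP refines is the following: a mechanism M is (ε, δ)-differentially private if its output distribution changes little when one individual's data changes. Roughly, the bounded-leakage variant asks for a comparable guarantee even conditioned on the adversary seeing the output of a fixed leakage function; the precise definition is in the paper.

```latex
% Standard (epsilon, delta)-differential privacy: for all neighboring
% datasets x and x' (differing in one individual's data) and all sets S
% of outcomes of the mechanism M,
\Pr[\, M(x) \in S \,] \;\le\; e^{\varepsilon} \cdot \Pr[\, M(x') \in S \,] + \delta
```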

This new definition brings new insights to settings such as releasing exact counts of census data, or reasoning about whether your privacy can be affected by studies you did not participate in.
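
As a toy illustration of why auxiliary information matters for exact releases, consider the classic differencing attack: an exact count combined with knowledge of everyone else's data pins down one person's value exactly. Reasoning about a bounded amount of such auxiliary knowledge is precisely the setting the definition addresses. The snippet below is an illustrative sketch with made-up data, not an example from the paper.

```python
# Toy differencing attack: an exact count combined with auxiliary
# information about everyone else reveals the remaining individual.
records = {"alice": 1, "bob": 0, "carol": 1, "dave": 1}  # 1 = has attribute

exact_count = sum(records.values())  # exact release, e.g. a census count

# Auxiliary information: the adversary already knows everyone but alice.
known_to_adversary = {name: v for name, v in records.items() if name != "alice"}

# The exact count plus the auxiliary info pins down alice's bit exactly.
inferred_alice = exact_count - sum(known_to_adversary.values())
print(inferred_alice)  # -> 1: alice's private value is fully revealed
```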

I presented our work at the 2020 Symposium on Foundations of Responsible Computing (FORC) in June, and it was featured as a poster at TPDP 2020. We are currently working on a longer version of the paper for broader publication.

Poster (TPDP 2020) | Presentation (FORC 2020) | Paper (FORC 2020)