Adversarial Attacks on Facial Recognition Systems

Berkman Klein Center for Internet & Society, Harvard University
Role: Research Fellow
Timeline: 2017
Focus Area: AI Ethics, Privacy, Computer Vision

Key Takeaway

Technical vulnerabilities in AI systems have real-world consequences. Building systems that respect privacy requires understanding how they can fail—and who they fail for.

Problem Statement

Facial recognition systems were being rapidly deployed across public and private sectors without adequate understanding of their vulnerabilities and potential for misuse. These systems disproportionately impacted marginalized communities through higher error rates and enabled surveillance capabilities that threatened individual privacy and civil liberties.

The technical research community needed comprehensive analysis of these vulnerabilities, particularly around adversarial attacks that could expose systemic weaknesses. Policy makers required evidence-based research to inform regulation, while the public needed to understand the risks these technologies posed to their fundamental rights.

Technical Approach

As part of the Assembly fellowship at the Berkman Klein Center and the MIT Media Lab, our team developed methodologies to test the robustness of commercial facial recognition systems through adversarial attacks. We focused on understanding failure modes across different demographic groups and documenting disparate impacts.
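
To make the attack methodology concrete, here is a minimal sketch of a gradient-based (FGSM-style) perturbation of the kind used to probe recognition models. The model, loader, input tensors, and epsilon budget are hypothetical placeholders for illustration, not the commercial systems we tested.

```python
# Minimal FGSM-style sketch of a gradient-based attack on a face classifier.
# The model, image tensor, and label below are hypothetical placeholders.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.03):
    """Return a perturbed copy of `image` that nudges the model toward error.

    image: float tensor of shape (1, 3, H, W) with values in [0, 1]
    label: long tensor of shape (1,) holding the true class index
    epsilon: L-infinity budget for the perturbation (pixel space)
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()

    # Step each pixel in the direction that increases the loss, then clamp
    # back to the valid pixel range so the change stays small.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Usage (placeholders):
# model = load_face_classifier()                 # hypothetical loader
# adv = fgsm_perturb(model, face_tensor, label)  # perturbed input
# model(adv).argmax(dim=1)                       # often differs from label
```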

From that analysis we built the equalAIs project: a tool that explored algorithmic bias and empowered individuals to understand, and potentially subvert, problematic computer vision systems. The work bridged technical computer science research with policy implications, translating complex machine learning concepts into actionable insights for non-technical stakeholders.

Impact

The research contributed to broader conversations about facial recognition regulation and highlighted concerns around disparate impact. Our findings informed policy discussions at both state and federal levels regarding facial recognition deployment in law enforcement and public spaces.

What I Learned

This project crystallized several insights that have shaped my approach to responsible AI:

Privacy isn't a feature—it's an architecture decision. Systems claiming to protect privacy must be designed from the ground up with privacy guarantees, not bolted on afterward. Every architectural choice either strengthens or weakens privacy protections.

Bias isn't just in the data—it's in the entire pipeline. From problem formulation to dataset construction to model architecture to deployment context, bias can enter at every stage. Addressing algorithmic fairness requires examining the full sociotechnical system, not just tweaking algorithms.

Technical research has policy implications. Working at the intersection of technology and law taught me to translate between technical and policy languages. Research findings must be communicated in ways that enable informed regulation without oversimplifying technical nuance.

Who a system fails for matters as much as how it fails. Aggregate accuracy metrics mask disparate impacts. Systems that work well "on average" may systematically harm specific communities. Responsible AI development requires disaggregated analysis and centering the experiences of those most impacted.
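
As an illustration of what disaggregated analysis looks like in practice, here is a small sketch that scores the same predictions overall and per group. The labels and group assignments are synthetic placeholders chosen to show how an aggregate number can hide a gap, not results from the study.

```python
# Sketch of disaggregated evaluation: the same predictions scored overall
# and broken out by group. Data here are synthetic placeholders.
from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    """Return (overall accuracy, {group: accuracy}) for paired labels."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        total[group] += 1
        correct[group] += int(truth == pred)
    overall = sum(correct.values()) / sum(total.values())
    return overall, {g: correct[g] / total[g] for g in total}

y_true = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
y_pred = [1, 1, 1, 1, 1, 1, 1, 0, 0, 1]
groups = ["A", "A", "A", "A", "A", "A", "A", "B", "B", "B"]

overall, per_group = accuracy_by_group(y_true, y_pred, groups)
print(overall)    # 0.8 overall accuracy looks acceptable
print(per_group)  # {'A': 1.0, 'B': 0.333...}: group B fails far more often
```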

Broader Context

This work was part of a critical moment in AI ethics discourse. It preceded major legislative actions around facial recognition, including moratoria in several cities and ongoing debates about federal regulation. The research contributed to growing recognition that AI systems require proactive governance, not just reactive fixes after harm occurs.

The lessons from this project directly influenced my subsequent work on privacy-preserving AI deployment at Palantir and informed my approach to building ethical technology systems throughout my career.