Meta’s Smart Glasses Face Recognition: Privacy Risks & Legal Battles
- Meta is once again considering integrating facial recognition technology into its smart glasses, a move that raises significant privacy concerns and echoes past controversies.
- The company previously abandoned similar facial recognition efforts and has paid out billions in settlements related to biometric data privacy.
- The core issue isn’t simply whether Meta *can* build this technology, but whether it *should*.
Meta is once again considering integrating facial recognition technology into its smart glasses, a move that raises significant privacy concerns and echoes past controversies. An internal document reported by The New York Times reveals the company may strategically launch the feature – dubbed “Name Tag” – during periods of heightened political activity, anticipating reduced scrutiny from civil society groups.
This isn’t a new direction for Meta. The company previously abandoned similar facial recognition efforts and has paid out billions in settlements related to biometric data privacy. In November 2021, Meta announced it would shut down the facial recognition system it used for tagging people in photos on its platforms and delete over a billion face templates. Prior to that, a settlement with the Federal Trade Commission (FTC) resulted in a $5 billion fine, stemming in part from allegations that the company’s face recognition settings were confusing and deceptive, and that it failed to obtain proper consent from users. Further settlements followed, including a $650 million class action settlement in Illinois under the state’s Biometric Information Privacy Act (BIPA), and a more recent $1.4 billion settlement over violations of Texas law.
The core issue isn’t simply whether Meta *can* build this technology, but whether it *should*. Facial recognition, particularly when deployed in a wearable, constantly active form factor like smart glasses, presents unique dangers. Unlike traditional social media, where users actively upload photos, these glasses could passively collect biometric data – a “faceprint” – from anyone within view, without their knowledge or consent. This raises the specter of mass surveillance and discrimination, as well as the risk of data breaches exposing sensitive biometric information.
The practical challenge of obtaining informed consent from every individual captured by the glasses is, for all intents and purposes, insurmountable: Meta cannot realistically ask permission from every passerby. This is further complicated by state privacy laws, now numbering in the dozens, that treat biometric information as sensitive and require affirmative consent for its collection and processing, creating a complex legal landscape for Meta to navigate.
The proposed “Name Tag” feature, as described in the internal document, would allow smart glasses wearers to identify people using Meta’s built-in AI assistant. While the company reportedly envisions initially using the technology to identify people the wearer is already connected with on Meta’s platforms (Facebook, Instagram), or those with public profiles on those platforms, the potential for expansion and misuse is clear. The Verge reported that Meta initially considered launching the feature at a conference for the blind, but that plan was abandoned.
The timing of the potential launch, as outlined in the internal memo, is particularly troubling. The suggestion to release the feature “during a dynamic political environment” where civil society groups might be preoccupied with other issues demonstrates a calculated disregard for ethical considerations and a willingness to prioritize profit over privacy. This approach echoes concerns raised in the past about Meta’s data practices and its handling of user information.
The risks extend beyond privacy. The Electronic Frontier Foundation (EFF) points to recent examples of public backlash against similar technologies, such as Immigration and Customs Enforcement’s (ICE) use of facial recognition through the “Mobile Fortify” app, and the controversy surrounding Amazon Ring’s surveillance capabilities. The public is increasingly aware of the potential for invasive technology and is demonstrating a willingness to resist it.
The EFF also highlights how seemingly benign features can be repurposed for mass surveillance. Amazon Ring’s feature marketed for finding lost dogs, for example, could in principle be turned toward broader biometric tracking. This underscores the importance of weighing the long-term implications of deploying facial recognition technology, even when its initial applications appear limited.
Privacy advocates are prepared to challenge Meta’s efforts. The EFF and other civil liberties groups have vowed to keep devoting resources to opposing privacy-invasive practices, and are urging privacy regulators and attorneys general to investigate. Meta’s history with facial recognition technology, coupled with the inherent risks of deploying it in a wearable device, suggests that this latest attempt will face significant legal and public opposition. Given its past missteps and the fundamental privacy rights at stake, the company should abandon this plan.
