Meta is facing a new class-action lawsuit alleging false advertising regarding the privacy features of its Ray-Ban Meta smart glasses. The lawsuit claims the company misled users by failing to disclose that human contractors review sensitive footage, including intimate moments, captured by the devices to train Meta's AI models. This legal challenge underscores growing concerns about data handling in always-on wearable tech.
The Lawsuit: "Surveillance Conduit" Allegations
A class-action lawsuit, filed Wednesday in federal court in San Francisco, accuses Meta of "affirmatively false advertising" concerning the privacy protections of its AI-powered smart glasses. The complaint follows reports that subcontractors in Kenya were tasked with reviewing footage captured by users' glasses, reportedly including highly personal material such as bathroom visits, sexual encounters, and other private details. These workers, according to a Swedish newspaper report, were part of a data labeling operation designed to help train Meta's artificial intelligence models.

The lawsuit, brought by Clarkson Law Firm, names two individuals from California and New Jersey who purchased the smart glasses. They assert they relied on Meta's marketing claims about privacy and would not have bought the devices had they known about the involvement of human contractors in reviewing footage. The plaintiffs are seeking monetary damages and injunctive relief, aiming to compel Meta to change its practices and disclosures.
Meta's Position on Data Review and AI Training
A spokesperson for Meta confirmed to Engadget that data from its smart glasses can indeed be shared with human contractors. The company stated, "Ray-Ban Meta glasses help you use AI, hands free, to answer questions about the world around you." They further elaborated that while media captured by users generally stays on the device unless explicitly shared, content shared with Meta AI is sometimes reviewed by contractors. This review process, Meta claims, is for improving user experience, a practice it likens to "many other companies." The company also said it "take[s] steps to filter this data to protect people's privacy and to help prevent identifying information from being reviewed."

However, critics point out a significant omission: the "multimodal" features of the smart glasses—which allow the AI to interpret a user's surroundings—inherently share captures with Meta. As one review noted, "images of your surroundings processed for the glasses' multimodal features like Live AI can be used for training purposes (these images aren't saved to your device's camera roll)." This distinction is crucial, as footage used for Live AI, not explicitly saved by the user, can still be sent to contractors for AI model training.
The Scope of Reviewed Content and Contractor Concerns
The core of the privacy concerns lies in the deeply personal nature of the content allegedly viewed by contractors. Workers interviewed by Svenska Dagbladet and other publications have described witnessing "intimate" and "sensitive" material. This includes footage depicting nudity, sexual activity, and individuals using the toilet. Contractors reportedly stated that Meta's purported anonymization safeguards were unreliable, making it possible to identify individuals and observe highly private moments.

The lawsuit argues that this "undisclosed human review pipeline" fundamentally transforms the Meta AI Glasses from a personal device into a "surveillance conduit." It suggests that this practice exposes consumers to "unreasonable risks of dignitary harm, emotional distress, stalking, extortion, identity theft, and reputational injury."
Regulatory Scrutiny and Broader Privacy Context
Beyond the lawsuit, Meta's smart glasses have drawn attention from data privacy regulators. The UK's Information Commissioner's Office (ICO) has called reports of the outsourced review of sensitive content "concerning." The ICO emphasizes that "devices processing personal data, including smart glasses, should put users in control and provide for appropriate transparency." This statement highlights a key tension: the convenience of hands-free AI versus the expectation of privacy in everyday life.

This isn't Meta's only recent privacy-related legal challenge. For instance, the company faced a separate class-action complaint in June 2025 regarding allegations that it secretly tracked Android users' browsing activity on mobile websites through an analytics pixel. Such incidents underscore a broader pattern of privacy concerns surrounding Meta's data collection practices across its various platforms and devices.







