
Meta is facing a new class-action lawsuit alleging false advertising regarding the privacy features of its Ray-Ban Meta smart glasses. The lawsuit claims the company misled users by failing to disclose that human contractors review sensitive footage, including intimate moments, captured by the devices to train Meta's AI models. This legal challenge underscores growing concerns about data handling in always-on wearable tech.
The lawsuit, brought by Clarkson Law Firm, names two individuals from California and New Jersey who purchased the smart glasses. They assert they relied on Meta's marketing claims about privacy and would not have bought the devices had they known about the involvement of human contractors in reviewing footage. The plaintiffs are seeking monetary damages and injunctive relief, aiming to compel Meta to change its practices and disclosures.
Critics also point to a significant gap in Meta's disclosures: the smart glasses' "multimodal" features, which allow the AI to interpret a user's surroundings, inherently share captures with Meta. As one review noted, "images of your surroundings processed for the glasses' multimodal features like Live AI can be used for training purposes (these images aren't saved to your device's camera roll)." The distinction is crucial: footage processed for Live AI, even though the user never explicitly saves it, can still be sent to contractors for AI model training.
The lawsuit argues that this "undisclosed human review pipeline" fundamentally transforms the Meta AI Glasses from a personal device into a "surveillance conduit." It suggests that this practice exposes consumers to "unreasonable risks of dignitary harm, emotional distress, stalking, extortion, identity theft, and reputational injury."
This isn't Meta's only recent privacy-related legal challenge. For instance, the company faced a separate class-action complaint in June 2025 regarding allegations that it secretly tracked Android users' browsing activity on mobile websites through an analytics pixel. Such incidents underscore a broader pattern of privacy concerns surrounding Meta's data collection practices across its various platforms and devices.
For Developers Building AI Hardware
Scrutinize your data pipeline architecture, especially for edge devices with constant recording capabilities. Transparency isn't just good PR; it's a legal imperative. Clearly defining what data is collected, how it's processed, and who accesses it can prevent future lawsuits, particularly when multimodal AI features implicitly share user environments.
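One way to make that disclosure enforceable in code is to tag every capture with the purpose it will be used for and gate uploads on purpose-specific consent. The sketch below is illustrative only, with hypothetical names (`Purpose`, `ConsentRecord`, `may_upload`); it is not Meta's implementation, just one minimal way to avoid an undisclosed human-review path.

```python
from dataclasses import dataclass
from enum import Enum

class Purpose(Enum):
    """Hypothetical purposes a capture can be used for."""
    ON_DEVICE_ONLY = "on_device_only"
    CLOUD_INFERENCE = "cloud_inference"   # e.g. multimodal / Live AI processing
    HUMAN_REVIEW = "human_review"         # contractors may see this data

@dataclass
class Capture:
    payload: bytes
    purpose: Purpose

@dataclass
class ConsentRecord:
    # Each purpose requires its own opt-in; human review is never bundled
    # into a generic "improve user experience" toggle.
    cloud_inference: bool = False
    human_review: bool = False

def may_upload(capture: Capture, consent: ConsentRecord) -> bool:
    """Allow an upload only for the specific purpose the user consented to."""
    if capture.purpose is Purpose.CLOUD_INFERENCE:
        return consent.cloud_inference
    if capture.purpose is Purpose.HUMAN_REVIEW:
        return consent.human_review
    # ON_DEVICE_ONLY (and anything unrecognized) never leaves the device.
    return False
```

The key design choice is that consent is scoped per purpose: a user who enables cloud inference has not thereby enabled human review, which is exactly the distinction at issue in the lawsuit.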
For Founders in Wearable Tech
Prioritize robust, explicit user consent for any form of human review or third-party data access. Lawsuits like this one, brought by consumers in California and New Jersey, show that buyers are increasingly attentive to privacy implications and willing to litigate, pushing companies to go beyond generic privacy-policy language.
For Consumers of Smart Devices
Recognize that "improving user experience" or "training AI" often involves human oversight of your data. Understand the explicit and implicit data sharing mechanisms of any device that captures your surroundings, especially if it connects to cloud AI services.
For Investors in AI Companies
Evaluate companies' legal and regulatory compliance, particularly their data governance policies. Ongoing lawsuits like this, and the separate Android tracking complaint from June 2025, highlight potential long-term liabilities and reputational risks associated with privacy missteps.
According to the reporting behind the suit, human contractors review a range of footage captured by the Ray-Ban Meta glasses, including highly personal material such as bathroom visits and sexual encounters, as part of a data-labeling operation used to train Meta's AI models. Meta has confirmed that data from its smart glasses can be shared with human contractors to improve the user experience. The company states that media captured by users generally stays on the device unless explicitly shared, that content shared with Meta AI is sometimes reviewed, and that it takes steps to filter this data to protect privacy and keep identifying information away from reviewers.