Meta Faces U.S. Lawsuit Over Privacy Practices in AI Smart Glasses

AI News Hub Editorial
Senior AI Reporter
March 5, 2026
Meta is facing a new lawsuit in the United States over its Ray-Ban Meta AI smart glasses, with plaintiffs alleging the company misled consumers about privacy protections while allowing contractors to review footage captured by the devices. The complaint, filed by Gina Bartone of New Jersey and Mateo Canu of California and brought by Clarkson Law Firm, claims Meta and its manufacturing partner Luxottica of America violated consumer protection and privacy laws by marketing the glasses as “designed for privacy” while user content could be reviewed by human contractors.

The lawsuit follows a media investigation by Swedish newspapers that reported workers at a Kenya-based subcontractor were reviewing footage captured by customers’ glasses as part of Meta’s AI improvement process. According to the reports, some of that material included highly sensitive scenes, such as nudity, sexual activity, and people using the bathroom.

At the center of the complaint is the claim that Meta’s advertising created a misleading impression about how the glasses handle personal data. Marketing for the device emphasized privacy features with statements like “built for your privacy” and “you’re in control of your data and content.” The plaintiffs say those messages did not clearly disclose that footage shared with Meta’s AI systems could be manually reviewed by overseas workers.

Meta says that human review occurs only when users choose to share captured media with Meta AI. In a statement to the BBC, the company said contractors may review shared data to improve the product experience, a practice it says is described in its privacy policies and terms of service. A spokesperson also said Meta filters reviewed data to help prevent identifying information from being seen.

However, the complaint argues that disclosures about human review were not presented clearly to users. While Meta’s U.S. terms note that interactions with its AI systems “may be automated or manual (human),” the lawsuit contends that this language does not match the privacy assurances featured in product advertising.

The legal challenge also highlights the scale of the issue. More than seven million people purchased Meta’s smart glasses in 2025, according to the complaint, meaning large volumes of captured footage could potentially flow into Meta’s data review pipeline. The plaintiffs argue that users have no practical way to opt out once content is shared with Meta’s AI features.

The case adds to a growing backlash against devices capable of constant or passive recording. Smart glasses and other always-on AI gadgets have sparked concerns about so-called “luxury surveillance” technology, prompting some developers to create tools that alert people when such devices are nearby.

Meta has not commented directly on the lawsuit itself, saying only that the litigation was newly filed. The company maintains that media captured on the glasses remains on the user’s device unless it is intentionally shared, and that any review of shared content is intended to improve the product while protecting user privacy.

The case now places Meta’s smart glasses—and the way companies handle data collected by AI-powered wearables—under closer legal scrutiny in the United States, following regulatory interest in the issue abroad.

This analysis is based on reporting from TechCrunch.

This article was generated with AI assistance and reviewed for accuracy and quality.

Last updated: March 5, 2026


