NAIROBI, Kenya — Meta Platforms Inc. terminated its contract with a Kenyan data-labeling firm after workers reported viewing highly intimate and explicit footage captured by users of Ray-Ban Meta smart glasses, including scenes of people having sex, using the toilet and undressing. The termination has sparked fresh accusations of retaliation and poor treatment of overseas contractors.

Headquarters of Facebook parent company Meta Platforms Inc in Mountain View

The move, confirmed this week, affects more than 1,100 workers at Sama, a Nairobi-based company contracted by Meta to annotate video, image and speech data used to train the AI systems powering its smart glasses. Sama said the cancellation will result in 1,108 redundancies, and it comes less than two months after workers spoke publicly about the disturbing content they were required to review.

In February, workers at Sama told Swedish newspapers Svenska Dagbladet and Göteborgs-Posten that they routinely watched graphic and private recordings from Ray-Ban Meta glasses. One employee described seeing a man place his glasses on a bedside table before his wife entered and undressed. Others reported footage of users in bathrooms, changing clothes, engaging in sexual activity and even displaying bank details.

Meta defended the decision, stating the contractors "did not meet our standards." The company has not directly addressed the timing of the contract termination relative to the workers' public disclosures. A Meta spokesperson previously told outlets that when users share content with Meta AI, the company sometimes uses contractors for data annotation under strict privacy protocols, including face blurring, though workers claimed the blurring frequently failed.

The controversy highlights ongoing challenges in Meta's AI training pipeline. Ray-Ban Meta smart glasses, developed in partnership with EssilorLuxottica, allow users to record hands-free video and audio, ask questions of the built-in AI assistant and share footage. Millions of pairs were sold in 2025, but privacy advocates warn that opt-in data-sharing features expose users — and unwitting bystanders — to unintended surveillance.

Sama, which has faced previous criticism for its work with Meta on content moderation, employs workers to label data that helps train AI models to better understand visual scenes. Employees described the job as emotionally taxing, with some feeling compelled to continue despite the nature of the content to avoid losing income in Kenya's competitive job market.

Privacy experts and labor rights groups have condemned the situation. The UK's Information Commissioner's Office contacted Meta for clarification after the initial February reports. Class-action lawsuits have been filed in the United States alleging Meta misrepresented privacy protections in its marketing of the glasses. Plaintiffs claim users were led to believe footage remained private or processed only by AI, not reviewed by human contractors overseas.

Meta has maintained that users consent to data use through terms of service and that robust safeguards are in place. However, investigations revealed inconsistencies in face-blurring technology and limited awareness among users that human reviewers might see their recordings. One worker told reporters: "You understand that it is someone's private life you are looking at, but at the same time you are just expected to carry out the work."

The incident echoes broader concerns about AI development's reliance on low-paid overseas labor for reviewing sensitive material. Sama previously faced lawsuits from former Facebook content moderators in Kenya who developed PTSD after exposure to graphic violence and abuse. Critics argue Meta's cost-saving strategy externalizes both financial and psychological burdens.

Kenyan authorities have launched investigations into the matter, focusing on data protection, labor conditions and potential non-consensual recording enabled by the glasses. Advocacy groups have called for stricter regulations on wearable AI devices and greater transparency in how user-generated content is handled for model training.

For Ray-Ban Meta owners, the revelations raise uncomfortable questions about consent and privacy. While the glasses include indicators when recording, critics note they can be discreetly used in public and private settings. Footage shared for AI features may end up in training datasets reviewed by humans far from the point of capture.

Meta has positioned the glasses as lifestyle accessories with advanced AI capabilities. Features like real-time translation, visual search and hands-free assistance have driven their popularity, but the privacy trade-offs have drawn increasing scrutiny from regulators in Europe and beyond.

Labor advocates in Kenya and internationally are pushing for better protections for data workers, including mental health support, fair wages and the right to refuse harmful content without retaliation. The abrupt contract end at Sama has left many workers facing sudden unemployment, exacerbating concerns about exploitative practices in the global AI supply chain.

As the story continues to unfold, Meta faces pressure to explain its contractor decisions and to improve transparency around how smart glasses data is handled. The company has promised ongoing reviews of its AI training processes but has provided few public details on safeguards or on alternatives to human annotation for sensitive material.

The episode underscores a fundamental tension in modern AI development: the need for vast amounts of real-world data versus the ethical and privacy implications of how that data is collected and reviewed. For users of Ray-Ban Meta glasses, the convenience of AI comes with the realization that their private moments may be viewed by strangers halfway around the world — and speaking out about it can cost workers their jobs.

Industry watchers say the scandal could prompt tighter regulations and force tech companies to reconsider outsourcing strategies for sensitive AI training. In the meantime, affected Kenyan workers and privacy-conscious consumers await clearer answers from Meta on how it balances innovation with respect for human dignity on both sides of the camera.