A coalition of 75 civil liberties, domestic violence, LGBTQ+, labor and immigrant advocacy organizations is calling on Meta to abandon plans to add facial recognition technology to its Ray-Ban and Oakley smart glasses, warning the feature would endanger abuse survivors, immigrants and LGBTQ+ people.
In an open letter addressed to Meta chief executive Mark Zuckerberg Monday, the groups demanded the company “immediately halt and publicly disavow” the planned feature, which is reportedly known internally as “Name Tag.” The coalition includes the American Civil Liberties Union, the Electronic Privacy Information Center, Fight for the Future, Access Now and the Leadership Conference on Civil and Human Rights.
“The principle here is quite simple: Your glasses should not know my name,” said Cody Venzke, a senior staff attorney with the ACLU’s Speech, Privacy, and Technology Project.
As Wired first reported, the feature would work through the artificial-intelligence assistant built into Meta’s smart glasses, allowing wearers to pull up information about people in their field of view. Engineers have reportedly considered two versions: one that would identify only people the wearer is already connected to on a Meta platform, and a broader version capable of recognizing anyone with a public account on a Meta service such as Instagram.
The New York Times first reported on Name Tag in February, citing an internal document that revealed Meta had planned to debut the feature at a conference for blind attendees. The same document showed the company anticipated a muted response from advocacy groups, noting in a May 2025 memo that it would “launch during a dynamic political environment where many civil society groups that we would expect to attack us would have their resources focused on other concerns.”
The coalition called that reasoning “frankly shameful” and accused Meta of exploiting “rising authoritarianism” and the Trump administration’s expansion of immigration enforcement. The groups also noted that Border Patrol and Immigration and Customs Enforcement agents have been documented wearing Meta AI smart glasses during field operations.
The coalition argues the dangers posed by the feature “cannot be resolved through product design changes, opt-out mechanisms, or incremental safeguards,” given that bystanders have no meaningful way to consent to being identified. The groups are also urging Meta to disclose any known instances of its wearables being used in stalking, harassment or domestic violence cases, and to reveal any discussions with federal law enforcement agencies, including ICE and Customs and Border Protection, about the use of Meta wearables or data from them.
“People should be able to move through their daily lives without fear that stalkers, scammers, abusers, federal agents, and activists across the political spectrum are silently and invisibly verifying their identities,” the coalition wrote.
A Meta spokesperson said following the letter’s publication that the company does not currently offer facial recognition on its smart glasses. “If we were to release such a feature, we would take a very thoughtful approach before rolling anything out,” the spokesperson said.
The controversy adds to a string of privacy concerns surrounding the Ray-Ban glasses. A joint investigation by Swedish newspapers Svenska Dagbladet and Göteborgs-Posten found last month that contractors in Kenya were reviewing personal videos recorded by users of the devices, including intimate footage. The glasses, which can be used to film others in public without their knowledge, have faced growing criticism online.
It would not be the first time Meta retreated from facial recognition. In November 2021, the company ended Facebook’s photo-tagging system and deleted the face recognition templates of more than a billion users, citing the need to “weigh the positive use cases for facial recognition against growing societal concerns.” Meta has since paid roughly $2 billion to settle biometric privacy lawsuits in Illinois and Texas, and in 2019 paid the Federal Trade Commission $5 billion, then the largest privacy penalty in the agency’s history, to resolve a separate case that included allegations tied to its face recognition software.
This article was constructed with the assistance of artificial intelligence and published by a member of The Washington Times’ AI News Desk team. The contents of this report are based solely on The Washington Times’ original reporting, wire services, and/or other sources cited within the report. For more information, please read our AI policy or contact Steve Fink, Director of Artificial Intelligence, at sfink@washingtontimes.com
The Washington Times AI Ethics Newsroom Committee can be reached at aispotlight@washingtontimes.com.