AI company harvested billions of Facebook photos for a facial recognition database it sold to police

Artificial intelligence is having a cultural moment. AI-powered chatbots like ChatGPT — and their image-generating counterparts like DALL-E — have been in the news lately amid fears that they could replace human jobs. Such AI tools work by scraping data from millions of texts and pictures, producing new works by remixing existing ones in ways that can seem almost human.

Yet another, lesser-known AI-driven database is scraping images of millions upon millions of people, and for less scrupulous ends. Meet Clearview AI, a tech company that specializes in facial recognition services. Clearview AI markets its facial recognition database to law enforcement "to investigate crimes, enhance public safety, and provide justice to victims," according to its website.

Yet revelations about how the company obtains images for its database of nearly 30 billion photos have caused an uproar. Last week, CEO Hoan Ton-That said in an interview with the BBC that the company obtained its photos without users' knowledge, scraping them from social media platforms like Facebook and providing them to U.S. law enforcement. The CEO also said that the database has been used by American police nearly a million times since 2017.

In a statement to Insider, Ton-That said that the database of images was “lawfully collected, just like any other search engine like Google.” Notably, “lawful” does not, in this context, imply that the users whose photos were scraped gave consent.  

“Clearview AI’s database is used for after-the-crime investigations by law enforcement, and is not available to the general public,” the CEO told Insider. “Every photo in the dataset is a potential clue that could save a life, provide justice to an innocent victim, prevent a wrongful identification, or exonerate an innocent person.”

As reported by the BBC, Clearview AI has faced millions of dollars in fines for privacy breaches in Europe and Australia. In the BBC interview, Miami Police confirmed that it uses the software, treating its results as investigative tips for all types of crime, and said it has helped solve some murders.


“We don’t make an arrest because an algorithm tells us to,” said Assistant Chief of Police Armando Aguilar. “We either put that name in a photographic line-up or we go about solving the case through traditional means.”


Clearview is no stranger to lawsuits over potential violations of privacy law. In May 2020, the American Civil Liberties Union (ACLU) filed a lawsuit against Clearview alleging that the company violated Illinois residents' privacy rights under the Illinois Biometric Information Privacy Act (BIPA). According to the ACLU, following a settlement, Clearview has been banned from making its faceprint database available to private entities and most businesses in the United States.

While Clearview claims its technology is highly accurate, there are stories that suggest otherwise. For example, The New York Times recently reported on the wrongful arrest of a man accused of using stolen credit cards to buy designer purses. The police department had a contract with Clearview, according to the report, and the software was used in the investigation to identify him.

In response to the report, Ton-That was apologetic, saying “one false arrest is one too many.”

“Even if Clearview AI came up with the initial result, that is the beginning of the investigation by law enforcement to determine, based on other factors, whether the correct person has been identified,” he told the Times. “More than one million searches have been conducted using Clearview AI.”

Police have also used the technology to arrest a protester who was accused of throwing rocks at a police officer in Miami.

The Electronic Frontier Foundation (EFF) has described facial recognition technology as "a growing menace to racial justice, privacy, free speech, and information security." In 2022, the organization praised the multiple lawsuits that Clearview faced.

“One of the worst offenders is Clearview AI, which extracts faceprints from billions of people without their consent and uses these faceprints to help police identify suspects,” the EFF stated. “For example, police in Miami worked with Clearview to identify participants in a Black-led protest against police violence.”

Meta, Facebook's parent company, recently told Insider that Clearview's scraping invades "people's privacy." Meta said it "banned their founder from our services and sent them a legal demand to stop accessing any data, photos, or videos from our services."

Matthew Guariglia, a senior policy analyst for the EFF, an international digital rights non-profit, told Insider that it is not merely Facebook that is a cause for concern; it's the web in general.

“I think that’s one of the nefarious things about it,” Guariglia told Insider. “Because you might be very aware of what Clearview does, and so prevent any of your social media profiles from being crawled by Google, to make sure that the picture you post isn’t publicly accessible on the open web, and you think ‘this might keep me safe.’ But the thing about Clearview is it recognizes pictures of you anywhere on the web.”
