Tech reporter Kashmir Hill has written about the intersection of privacy and technology for more than a decade, but even she was stunned when she came across a legal memo in 2019 describing a facial recognition app that could identify anyone from a picture. She immediately saw the technology’s potential to become the stuff of dystopian nightmares, the “ultimate surveillance tool,” posing immense risks to privacy and civil liberties.
Hill recalled this incident to Jonathan Zittrain, the George Bemis Professor of International Law and director of the Berkman Klein Center for Internet & Society, as part of a conversation Wednesday at Harvard Law School about her new book, “Your Face Belongs to Us: A Secretive Startup’s Quest to End Privacy as We Know It.”
The book chronicles the story of Clearview AI, a small, secretive startup that launched an app in 2017 built on photos scraped from social media platforms without users’ consent, a database that has grown to 30 billion images. The company, led by Australian computer engineer Hoan Ton-That, has been fined in Europe and Australia for privacy violations.
“I’d say that Clearview made an ethical breakthrough, not a technological one,” said Hill, who wrote the first major story about the technology in 2020, sparking backlash from tech companies and privacy advocates. “They were willing to do what other companies like Google and Facebook hadn’t been willing to do … Tech companies had all agreed that the one thing that no one should do is build an app that you can just take a picture of a stranger and then find out who they are.”
Hill spoke of the need to come up with regulations to safeguard users’ privacy and rein in social media platforms that are profiting from users’ personal information without their consent. Some states have passed laws to protect people’s right to access personal information shared on social media sites and the right to delete it, but that is not enough, she said.
“It’s a little bit of a Wild West, and I worried that we’re not doing enough about it,” said Hill. “We have privacy laws, but we don’t have anything at the federal level that addresses what Clearview has done.”
Clearview’s software is used in criminal investigations; law enforcement has relied on it to identify Jan. 6 rioters, said Hill. Facial-recognition technology can also be used to intimidate or harass investigative journalists, government officials, or political opponents, she said, and it has led to wrongful arrests. Measures should be put in place to regulate its use, she added.
“If we are going to use facial recognition in policing, do we want to use a database like Clearview that’s looking through 30 billion faces, including probably all of us in this room, to find a match to a shoplifter in New Orleans?” said Hill.
Zittrain, who criticized Clearview’s practices in a 2020 Washington Post column, said measures to protect user privacy are long overdue, but he acknowledged that the debate is not an easy one, with law enforcement in favor of the technology and privacy advocates opposed.
During the book talk, Zittrain asked the audience whether they would have favored using Clearview to identify rioters who took part in the Jan. 6 attack on the Capitol or to identify Russian soldiers killed in Ukraine so their remains could be returned to their families. “How many people think that is a salutary, good use of Clearview?” asked Zittrain. “It’s the kind of thing that does complicate the story.”
To demonstrate how instant facial recognition works, Hill and Zittrain showed the audience an app called PimEyes, which offers several pricing plans and uses the same technology as Clearview. After Hill uploaded a photo of herself to the app, it delivered more than a dozen other pictures of her, with links to where they had been published.
“You see here,” said Hill, “it only found photos of me. It didn’t even have any doppelgangers. These are all me … When I uploaded my photo, it analyzed my face and came up with a biometric identifier, and then looked through its database. You can see it works pretty well.”
With 2.8 billion faces in its database, PimEyes has not drawn as much attention as Clearview, said Hill.
Headquartered in the UAE, with legal offices “somewhere in the Caribbean,” the company is run by a man who lives in the country of Georgia. After Hill wrote an article about threats to children from AI, PimEyes blocked searches of minors’ faces, she said. Several other facial-recognition apps on the market are Clearview copycats.
“PimEyes hasn’t gotten as much attention, even though it’s out there, and anyone can use it,” said Hill. “You can pay $30 a month, and you could use it on people in this room right now.”
Even though facial-recognition technology can serve good purposes such as criminal investigations, the dangers it poses to privacy rights could outweigh its benefits, said Hill. Both the right to privacy and users’ right to control the personal information they share on social media platforms should be protected, she added. New laws to protect those rights, she suggested, should be modeled after the regulations that made wiretapping, the recording of communications between parties without their consent, illegal.
Recent privacy laws in Europe restrict how personal data is collected and handled by social media platforms. And in 2008, Illinois passed the Biometric Information Privacy Act (BIPA), an initiative led by the ACLU of Illinois. The law guarantees that individuals control their own biometric data (e.g., fingerprints, iris scans, face geometry) and prohibits private companies from collecting it without informing users and obtaining their written consent. Technology is not going to slow down, so the law needs to catch up and regulate its uses, said Hill.
“I don’t believe that just because a technology exists and is capable of doing this, we just have to accept it,” said Hill. “Part of why I wrote this book was because I am worried that this is just getting out there and we’re not doing enough to choose the world that we want to live in. We’re letting the technology dictate it.”