

'In New York City alone, the NYPD used facial recognition more than 8,000 times last year.' (photo: David McNew/Getty)


Did You Protest Recently? Your Face Might Be in a Database

By Evan Selinger and Albert Fox Cahn, Guardian UK

17 July 2020


In the United States, at least one in four law enforcement agencies is able to use facial recognition technology. The implications are troubling.

In recent weeks, millions have taken to the streets to oppose police violence and proudly say: “Black Lives Matter.” These protests will no doubt be featured in history books for many generations to come. But, as privacy researchers, we fear a darker legacy, too. We know that hundreds of thousands of photos and videos of protesters have been recorded and uploaded online. They could remain there indefinitely, only to be dredged up decades later. It is for this reason that we must ask whether those photos could end up in a facial recognition database.

We know that, in the United States, at least one in four law enforcement agencies is able to use facial recognition technology – considered one of the most dangerous surveillance tools by privacy researchers – with little oversight. While it may take months, even years, to learn the full scope of how facial recognition has been used during the most recent protests, police departments have deployed everything from military-grade drones to body cameras with live facial recognition capability.

In New York City alone, the NYPD used facial recognition more than 8,000 times last year, including in conjunction with its so-called “gang database” of 42,000 New Yorkers, overwhelmingly New Yorkers of color. Police could potentially retaliate against protesters by adding their names to databases and singling them out for unjustified, follow-up monitoring and “selective enforcement of unrelated matters”, like minor traffic offenses.

Aside from the ethics of diminishing people’s obscurity when they are in public and stripping away their right to do lawful things like protest anonymously, there is a real risk of misidentification through this technology.

In recent weeks, we’ve begun to hear from victims of facial recognition – people like Robert Williams, who was wrongfully put behind bars after police were swayed by a biased and broken facial recognition algorithm that matched him to a crime he didn’t commit. Mr Williams’ case highlights how facial recognition can produce results that are prejudiced against Black and Latinx Americans, creating disproportionately more false “matches” and a higher risk of wrongful arrest for them. And, just as importantly, Mr Williams explains that bias is only part of the problem: “Even if this technology does become accurate … I don’t want my daughters’ faces to be part of some government database. I don’t want cops showing up at their door because they were recorded at a protest the government didn’t like.”

Back in 2016, the police reportedly used facial recognition to find and arrest people protesting Freddie Gray’s death who they believed had outstanding arrest warrants. Today, police departments around the country and the FBI are asking for “videos or images” that can link protesters to violence and destruction. These requests are happening even though it’s well documented that law enforcement agencies, including the Minneapolis police department, have used Clearview AI’s facial recognition technology.

This noxious company scraped the internet to compile a name-face database of 3bn faces, which is why Senator Ed Markey recently wrote the company’s chief executive to “ensure its product is not being used to monitor protests against police brutality.” While IBM announced it’s out of the facial recognition technology business, Amazon won’t sell facial recognition technology to the police for a year, and Microsoft won’t sell facial recognition to the police “until there is a strong national law grounded in human rights”, Clearview AI remains all in.

Are the police definitely using facial recognition right now to track protesters? Nobody knows. Since law enforcement, including the FBI, has been criticized for not being transparent about its use of facial recognition technology, and protesters are shining a spotlight on a lack of transparency as a systemic policing problem, every protester at a Black Lives Matter protest and every journalist covering one should assume they could be.

What can be done? Facial recognition technology should be banned. This agenda needs as much support as can be mustered. Calls to defund the police and stop providing them with facial recognition technology are gaining momentum, which is a good first step. But as Tim Maughan rightly argues: “We must not allow private contractors and technology companies to seep in, fill the void, and repeat – or even exacerbate – the same disastrous mistakes.”

This leaves risk-mitigation strategies in the hands of two groups. Protesters can help protect one another by using tools to obscure faces and erase metadata. And journalists shouldn’t publish any images that the police can use to track a protester’s identity unless they have explicit consent to do so.
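To make the metadata point concrete, here is a minimal sketch of one thing “erasing metadata” can involve: stripping embedded EXIF data (which can include GPS coordinates, timestamps and camera details) from a photo before it is shared. It assumes the Python imaging library Pillow is installed; the file names are hypothetical examples, and it does not obscure faces – that is a separate step.

# A minimal sketch, assuming the Pillow library is installed (pip install Pillow).
# The file names below are hypothetical examples.
from PIL import Image

def strip_metadata(src_path, dst_path):
    # Copy only the pixel data into a fresh image so that EXIF tags
    # (GPS location, timestamps, camera details) are not carried over.
    with Image.open(src_path) as img:
        clean = Image.new(img.mode, img.size)
        clean.putdata(list(img.getdata()))
        clean.save(dst_path)

strip_metadata("protest_photo.jpg", "protest_photo_clean.jpg")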

Journalists might be wary of stepping up. After all, outdated legal doctrines hold that people lack a reasonable expectation of privacy when they’re in public. As a result, journalists have a legal right to photograph whomever they choose at these newsworthy events. Furthermore, journalists might believe they are ethically barred from manipulating “the content of a photograph in any way”.

But this restriction conflicts with their duty to “give special consideration to vulnerable subjects” and “minimize harm”. Journalists have the privilege and responsibility of doing what they can to protect protesters who are living in a society that has yet to come to terms with the fact that analog assumptions about what’s private and public no longer hold in the face of modern police surveillance.

This isn’t the first time protesters are risking their safety and wellbeing standing up for justice. Sadly, it won’t be the last. Since facial recognition technology poses an unprecedented threat, every possible precaution needs to be taken.
