
IBM's Withdrawal Won't Mean the End of Facial Recognition

To some in the tech industry, facial recognition increasingly looks like toxic technology. To law enforcement, it’s an almost irresistible crime-fighting tool.

IBM is the latest company to decide facial recognition is too troubling to sell. CEO Arvind Krishna told members of Congress on Monday that IBM would no longer offer the technology, citing the potential for racial profiling and human rights abuses. In a letter, Krishna also called for police reforms aimed at increasing scrutiny and accountability for misconduct.

“We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies,” wrote Krishna, the first non-white CEO in the company’s 109-year history. IBM has been scaling back the technology’s use since last year.

Krishna’s letter comes amid widespread protests over the killing of George Floyd by a police officer and over police treatment of black communities. But IBM’s withdrawal may do little to stem the use of facial recognition, as a number of companies supply the technology to police and governments around the world.

“While this is a great statement, it won’t really change police access to #FaceRecognition,” tweeted Clare Garvie, a researcher at Georgetown University's Center on Privacy and Technology who studies police use of the technology. She noted that she had yet to come across any IBM contracts to supply facial recognition to police.


According to a report from the Georgetown center, by 2016 photos of half of American adults were in a database that police could search using facial recognition. Adoption has likely swelled since then. A recent report from Grand View Research predicts the market will grow at an annual rate of 14.5 percent between 2020 and 2027, fueled by “rising adoption of the technology by the law enforcement sector.” The Department of Homeland Security said in February that it has used facial recognition on more than 43.7 million people in the US, primarily to check the identity of people boarding flights and cruises and crossing borders.

Other tech companies are scaling back their use of the technology. Google in 2018 said it would not offer a facial recognition service; last year, CEO Sundar Pichai indicated support for a temporary ban on the technology. Microsoft opposes such a ban, but said last year that it wouldn’t sell the tech to one California law enforcement agency because of ethical concerns. Axon, which makes police body cameras, said in June 2019 that it wouldn’t add facial recognition to them.

But some players, including NEC, Idemia, and Thales, are quietly shipping the tech to US police departments. The startup Clearview AI offers a service to police that makes use of millions of faces scraped from the web.

The technology apparently helped police hunt down a man accused of assaulting protesters in Montgomery County, Maryland.

At the same time, public unease over the technology has prompted several cities, including San Francisco and Oakland, California, and Cambridge, Massachusetts, to ban use of facial recognition by government agencies.

Officials in Boston are considering a ban; supporters point to the potential for police to surveil protesters. Amid the protests following Floyd’s killing, “the conversation we’re having today about face surveillance is all the more urgent,” Kade Crockford, director of the Technology for Liberty program at the ACLU of Massachusetts, said at a press conference Tuesday.

Timnit Gebru, a Google researcher who has played an important role in revealing the technology’s shortcomings, said during an event on Monday that facial recognition has been used to identify black protesters and argued that it should be banned. “Even perfect facial recognition can be misused,” Gebru said. “I’m a black woman living in the US who has dealt with serious consequences of racism. Facial recognition is being used against the black community.”

In early 2018, Gebru and another researcher, Joy Buolamwini, first drew widespread attention to bias in facial recognition services, including one from IBM. They found that the systems worked well for men with lighter skin but made errors for women with darker skin.


In a Medium post, Buolamwini, who now leads the Algorithmic Justice League, an organization that campaigns against harmful uses of artificial intelligence, commended IBM’s decision but said more needs to be done. She called on companies to sign the Safe Face Pledge, a commitment to mitigate possible abuses of facial recognition. “The pledge prohibits lethal use of the technology, lawless police use, and requires transparency in any government use,” she wrote.

Others have also reported problems with facial recognition programs. ACLU researchers found that Amazon’s Rekognition software incorrectly matched members of Congress with public mug shots.

Facial recognition has improved rapidly over the past decade thanks to better artificial intelligence algorithms and more training data. The National Institute of Standards and Technology has said that the best algorithms got 25 times better between 2010 and 2018.

The technology remains far from perfect, though. Last December, NIST said facial recognition algorithms perform differently depending on a subject’s age, sex, and race. Another study, by researchers at the Department of Homeland Security, found similar issues in an analysis of 11 facial recognition algorithms.

IBM’s withdrawal from facial recognition won’t affect its other offerings to police that rely on potentially problematic uses of AI. IBM touts projects with police departments that use predictive policing tools, which try to preempt crime by mining large amounts of data. Researchers have warned that such technology often perpetuates or exacerbates bias.

“Artificial Intelligence is a powerful tool that can help law enforcement keep citizens safe,” Krishna, the CEO, wrote in his letter. “But vendors and users of AI systems have a shared responsibility to ensure that AI is tested for bias, particularly when used in law enforcement, and that such bias testing is audited and reported.”

IBM reviews all deployments of AI for potential ethical problems, but the company declined to comment on how the tools already supplied to police departments are vetted for potential biases.
