Europe's Proposed Limits on AI Would Have Global Consequences

The European Union proposed rules that would restrict or ban some uses of artificial intelligence within its borders, including by tech giants based in the US and China.

The rules are the most significant international effort to regulate AI to date, covering facial recognition, autonomous driving, and the algorithms that drive online advertising, automated hiring, and credit scoring. The proposed rules could help shape global norms and regulations around a promising but contentious technology.

“There's a very important message globally that certain applications of AI are not permissible in a society founded on democracy, rule of law, fundamental rights,” says Daniel Leufer, Europe policy analyst with Access Now, a European digital rights nonprofit. Leufer says the proposed rules are vague, but represent a significant step toward checking potentially harmful uses of the technology.

The debate is likely to be watched closely abroad. The rules would apply to any company selling products or services in the EU.

Other advocates say there are too many loopholes in the EU proposals to protect citizens from many misuses of AI. “The fact that there are some sort of prohibitions is positive,” says Ella Jakubowska, policy and campaigns officer at European Digital Rights (EDRi), based in Brussels. But she says certain provisions would allow companies and government authorities to keep using AI in dubious ways.

The proposed regulations suggest, for example, prohibiting “high risk” applications of AI, including law enforcement use of AI for facial recognition—but only when the technology is used to spot people in real time in public spaces. This provision also suggests potential exceptions when police are investigating a crime that could carry a sentence of at least three years.

As a result, Jakubowska notes, the technology could still be used retrospectively in schools, businesses, or shopping malls, and in a range of police inquiries. “There’s a lot that doesn’t go anywhere near far enough when it comes to fundamental digital rights,” she says. “We wanted them to take a bolder stance.”

Facial recognition, which has become far more effective due to recent advances in AI, is highly contentious. It is widely used in China and by many law enforcement officers in the US, via commercial tools such as Clearview AI; some US cities have banned police from using the technology in response to public outcry.

The proposed EU rules would also prohibit “AI-based social scoring for general purposes done by public authorities,” as well as AI systems that target “specific vulnerable groups” in ways that would “materially distort their behavior” to cause “psychological or physical harm.” That could potentially restrict use of AI for credit scoring, hiring, or some forms of surveillance advertising, for example if an algorithm placed ads for betting sites in front of people with a gambling addiction.


The EU regulations would require companies using AI for high-risk applications to provide regulators with risk assessments demonstrating that those systems are safe. Companies that fail to comply with the rules could be fined up to 6 percent of global sales.

The proposed rules would also require companies to inform users when they use AI to detect people’s emotions, or to classify people according to biometric features such as sex, age, race, sexual orientation, or political orientation. Such applications are also technically dubious.

Leufer, the digital rights analyst, says the rules could discourage certain areas of investment, shaping the course the AI industry takes in the EU and elsewhere. “There’s a narrative that there’s an AI race on, and that’s nonsense,” Leufer says. “We should not compete with China for forms of artificial intelligence that enable mass surveillance.”

A draft version of the regulations, created in January, was leaked last week. The final version contains notable changes, for example removing a section that would have prohibited high-risk AI systems that might cause people to “behave, form an opinion, or take a decision to their detriment that they would not have taken otherwise.”

The proposal must go through the European Parliament and the EU Council, and is likely to see significant changes before being signed into law. The rules would also need to be squared with other EU frameworks, including the EU Charter of Fundamental Rights and a proposed EU Digital Governance Act. Member states would implement the final rules by enacting their own laws. The proposal suggests creating an EU Artificial Intelligence Board and national supervisory authorities to oversee enforcement, but does not offer details of how they would operate.

AI has moved quickly in recent years, often faster than regulators have been able to respond. Key breakthroughs in algorithms that learn from large amounts of data have made it possible for machines to, among other things, recognize faces, drive cars, and target advertisements. But the way AI algorithms work can be difficult to understand or predict, and the data fed to AI systems can perpetuate biases and discrimination.

Big technology companies including Google, Amazon, and Facebook in the US, and Alibaba, Tencent, and ByteDance in China, have made fortunes using AI, but have sometimes deployed the technology in questionable ways, such as offering facial recognition systems to law enforcement or using biased hiring algorithms.

Avi Gesser, a partner at the US law firm Debevoise, which advises US tech firms, says the rules are likely to have big implications for US businesses because previous EU regulations such as the General Data Protection Regulation (GDPR) have influenced regulations elsewhere.

“With AI in general, regulators are reluctant to act, one, because they think it's highly technical, two, because they're worried about stifling innovation,” Gesser says. But he says the EU proposal “makes other regulators more comfortable about jumping into this space in a way that will actually curb certain behavior.”

Gesser says it will take years for the regulations to become law, but ultimately they could affect all sorts of US businesses. “All advertising is designed to manipulate behavior,” Gesser notes. “The challenge will be to determine what is acceptable and what is unacceptable.”
