AI Can Help Diagnose Some Illnesses—If Your Country Is Rich

The AI Database →

Application: Personal services, Recommendation algorithm, Regulation, Ethics, Prediction
End User: Research, Big company
Sector: Health care, Research
Source Data: Images
Technology: Machine learning, Machine vision

Artificial intelligence promises to expertly diagnose disease in medical images and scans. However, a close look at the data used to train algorithms for diagnosing eye conditions suggests these powerful new tools may perpetuate health inequalities.

A team of researchers in the UK analyzed 94 data sets—with more than 500,000 images—commonly used to train AI algorithms to spot eye diseases. They found that almost all of the data came from patients in North America, Europe, and China. Just four data sets came from South Asia, two from South America, and one from Africa; none came from Oceania.

The disparity in the source of these eye images means AI eye-exam algorithms are less likely to work well for racial groups from underrepresented countries, says Xiaoxuan Liu, an ophthalmologist and researcher at the University of Birmingham who was involved in the study. “Even if there are very subtle changes in the disease in certain populations, AI can fail quite badly,” she says.

The American Academy of Ophthalmology has shown enthusiasm for AI tools, which it says promise to help improve standards of care. But Liu says doctors may be reluctant to use such tools for racial minorities if they learn the tools were built by studying predominantly white patients. She notes that the algorithms might fail due to differences that are too subtle for doctors themselves to notice.

The researchers found other problems in the data, too. Many data sets did not include key demographic information, such as age, gender, and race, making it difficult to gauge whether they are biased in other ways. The data sets also tended to cover just a handful of diseases: glaucoma, diabetic retinopathy, and age-related macular degeneration. And 46 of the data sets that had been used to train algorithms did not make their data available.
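Missing demographic fields matter because they are precisely what an audit for bias would have to examine; without them, the question cannot even be asked. As a hypothetical illustration only, a basic completeness check might look like the Python sketch below. The field names are assumptions, not the schema of any data set in the study.

```python
# Hypothetical metadata audit: report how often key demographic fields
# are missing from a data set's records. The field names are assumptions,
# not the schema of any data set in the study.
REQUIRED_FIELDS = ("age", "gender", "race", "country")

def audit_metadata(records):
    """Return the fraction of records missing each demographic field."""
    n = len(records)
    return {
        field: sum(1 for r in records if r.get(field) is None) / n
        for field in REQUIRED_FIELDS
    }

# Toy example with two records:
records = [
    {"age": 63, "gender": "F", "race": None, "country": "UK"},
    {"age": None, "gender": None, "race": None, "country": "US"},
]
print(audit_metadata(records))
# {'age': 0.5, 'gender': 0.5, 'race': 1.0, 'country': 0.0}
```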


The US Food and Drug Administration has approved several AI imaging products in recent years, including two AI tools for ophthalmology. Liu says the companies behind these algorithms do not typically provide details of how they were trained. She and her coauthors call for regulators to consider the diversity of training data when examining AI tools.

The bias found in eye image data sets means algorithms trained on that data are less likely to work properly in Africa, Latin America, or Southeast Asia. This would undermine one of the big supposed benefits of AI diagnosis: its potential to bring automated medical expertise to poorer areas where it is lacking.

“You’re getting an innovation that only benefits certain parts of certain groups of people,” Liu says. “It’s like having a Google Maps that doesn’t go into certain postcodes.”

The lack of diversity found in the eye images, which the researchers dub “data poverty,” likely affects many medical AI algorithms.

Amit Kaushal, an assistant professor of medicine at Stanford University, was part of a team that analyzed 74 studies involving medical uses of AI, 56 of which used data from US patients. They found that most of the US data came from just three states: California (22 studies), New York (15), and Massachusetts (14).


“When subgroups of the population are systematically excluded from AI training data, AI algorithms will tend to perform worse for those excluded groups,” Kaushal says. “Problems facing underrepresented populations may not even be studied by AI researchers due to lack of available data.”
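Kaushal’s observation is straightforward to check on any given system: evaluate the same model separately on each demographic or geographic subgroup and compare the scores. The Python sketch below illustrates the idea; the model object, the record fields, and the example numbers are all hypothetical, not taken from either study.

```python
# Illustrative sketch only: compare a trained classifier's accuracy
# across subgroups. The model object, record fields ("image", "label",
# "region"), and any numbers shown are hypothetical.
from collections import defaultdict

def accuracy_by_subgroup(model, records):
    """Return per-region accuracy for a model over labeled test records."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for rec in records:
        region = rec["region"]
        total[region] += 1
        if model.predict(rec["image"]) == rec["label"]:
            correct[region] += 1
    return {region: correct[region] / total[region] for region in total}

# A model trained almost exclusively on images from North America,
# Europe, and China might plausibly report something like:
# {"north_america": 0.94, "europe": 0.93, "africa": 0.71}
```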

He says the solution is to make AI researchers and doctors aware of the problem, so that they seek out more diverse data sets. “We need to create a technical infrastructure that allows access to diverse data for AI research, and a regulatory environment that supports and protects research use of this data,” he says.

Vikash Gupta, a research scientist at the Mayo Clinic in Florida who works on the use of AI in radiology, says simply adding more diverse data might not eliminate bias. “It’s difficult to say how to solve this issue at the moment,” he says.

In some situations, though, Gupta says it might be useful for an algorithm to focus on a subset of a population, for instance when diagnosing a disease that disproportionately affects that group.

Liu, the ophthalmologist, says she hopes to see greater diversity in medical AI training data as the technology becomes more widely available. “Ten years down the line, when we’re using AI to diagnose disease, if I have a darker-skinned patient in front of me, I don’t want to say ‘I’m sorry, but I have to give you a different treatment, because this doesn’t work for you,’” she says.
