If Done Right, Fei-Fei Li Says AI Could Make Policing Fairer

The AI Database

Application: Ethics, Face recognition, Human-computer interaction, Regulation, Safety, Text analysis
Company: Alphabet, Google
End User: Government, Research
Sector: Public safety, Research, Health care
Source Data: Text, Speech, Video, Images
Technology: Machine learning, Machine vision, Natural language processing

A decade ago, Fei-Fei Li, a professor of computer science at Stanford University, helped demonstrate the power of a new generation of artificial intelligence algorithms. She created ImageNet, a vast collection of labeled images that could be fed to machine learning programs. Over time, that approach helped machines master certain human skills remarkably well, provided they had enough data to learn from.

Since then, AI programs have taught themselves to do more and more useful tasks, from voice recognition and language translation to operating warehouse robots and guiding self-driving cars. But AI algorithms have also demonstrated darker potential, for example in automated facial recognition systems that can perpetuate racial and gender bias. Recently, the use of facial recognition software in law enforcement has drawn condemnation and prompted some companies to swear off selling it to police.

Li herself has ridden the ups and downs of the AI boom. In 2017 she joined Google to help, in her words, “democratize” the technology. Not long after, the company, and Li herself, became embroiled in a controversy over supplying AI to the military through an effort known as Maven, and attempting to keep the project quiet.

A few months after the blowup, Li left Google and returned to Stanford to colead its new Human-Centered Artificial Intelligence (HAI) institute. She also cofounded AI4All, a nonprofit dedicated to increasing diversity in AI education, research, and policy. In May, she joined the board of Twitter.

Li spoke with WIRED senior writer Will Knight over Zoom from her home in Palo Alto. This transcript has been edited for length and clarity.


WIRED: We are witnessing public outrage over systemic racism and bias in society. How can technologists make a difference?

Fei-Fei Li: I think it is very important. It goes to a core belief of mine: “There are no independent machine values. Machine values are human values.” I heard Shannon Vallor, a computational ethicist, say this years ago, and I’ve been using it since. Technology has been a part of humanity since the dawn of time, and the deployment of technology fundamentally affects humans.

We have to ensure that technology is developed in such a way that it has a positive human impact and represents the values we believe in. This takes people—on the innovation side, the application side, and the policy-making side—and leads to a natural belief in the importance of inclusiveness.

Let’s talk about facial recognition. In 2018, one of your students, Timnit Gebru, helped create a project called Gender Shades that highlighted racial bias in commercial face-recognition algorithms. Now companies like Amazon, IBM, and Microsoft are restricting sales of such technology to police. How can companies make sure they don’t release products with biases in the first place?

We need a multi-stakeholder dialog and an action plan. This means bringing together stakeholders from all parts of our society, including nonprofits, communities, technologists, industry, policymakers, academics, and beyond. Facial recognition technology is a double-edged sword, and obviously we need to weigh individual rights and privacy against public safety. Civil society has to come together to think about how we regulate applications of technology like this. Companies must play a role. They are responsible and should be held accountable just like other stakeholders.

Do you think AI can potentially help make policing fairer?

I want to highlight two recent research projects by my Stanford colleagues related to policing, both with diverse teams behind them. In one, Dan Jurafsky and Jennifer Eberhardt used AI to analyze the language police used in bodycam footage when people were stopped. They showed a significant discrepancy in the language officers use depending on who was stopped—officers were found to talk to black people in less respectful ways. Using natural language AI techniques, we can get insights into ourselves and our institutions in ways we couldn’t have done before.
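As a loose illustration only—the Stanford study scored respectfulness with a statistical model over many linguistic cues, not the toy rule shown here—the Python sketch below counts a handful of politeness markers in bodycam-style transcripts, grouped by the race of the person stopped. The marker list and the transcripts are invented for illustration.

# Toy sketch, not the Stanford study's method: count simple politeness markers
# in hypothetical bodycam transcripts, grouped by the race of the person stopped.
from collections import defaultdict

RESPECT_MARKERS = {"sir", "ma'am", "please", "thank", "sorry"}  # illustrative only

# Hypothetical data: (race_of_driver, officer_utterance)
transcripts = [
    ("white", "Sorry to stop you, sir, could I please see your license?"),
    ("black", "Hands on the wheel. License."),
    ("white", "Thank you, ma'am, drive safely."),
    ("black", "Keep your hands where I can see them."),
]

counts = defaultdict(lambda: [0, 0])  # race -> [marker hits, total words]
for race, utterance in transcripts:
    words = utterance.lower().replace(",", "").replace(".", "").split()
    counts[race][0] += sum(w in RESPECT_MARKERS for w in words)
    counts[race][1] += len(words)

for race, (hits, total) in counts.items():
    print(f"{race}: {hits}/{total} words are politeness markers")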

Another Stanford professor, Sharad Goel, is looking at how we can improve fairness in written police reports. If you think about a police report—“there’s an Asian female in the parking lot, height is 5 foot 3, driving a Toyota, bumped into a car”—the information it contains might inadvertently impact decision-making in a way that’s not necessarily logical. Using natural language processing techniques, you could effectively anonymize it to say “female” or “person A” instead of “Asian female,” or “hair color B” instead of “dark brown hair,” for example. But of course there are other signals, like the car you drive.
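As a minimal sketch of the anonymization idea Li describes—a production system would detect descriptors with NLP models rather than a hand-built list—the hypothetical Python snippet below swaps demographic descriptors for neutral placeholders while leaving other details, like the car, untouched. The mapping and function names are invented for illustration.

# Minimal sketch of the anonymization idea; the descriptor list is invented, and a
# real system would identify descriptors with NLP models rather than fixed phrases.
import re

REPLACEMENTS = {
    r"an asian female": "person A",
    r"dark brown hair": "hair color B",
}

def anonymize(report: str) -> str:
    for pattern, placeholder in REPLACEMENTS.items():
        report = re.sub(pattern, placeholder, report, flags=re.IGNORECASE)
    return report

report = ("There's an Asian female in the parking lot, height is 5 foot 3, "
          "driving a Toyota, bumped into a car.")
print(anonymize(report))
# The car remains as a residual signal:
# "There's person A in the parking lot, height is 5 foot 3, driving a Toyota, ..."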

What about efforts to ban certain uses of AI, for example for facial recognition or autonomous weapons?

We want to make sure that our country continues to lead in the development and innovation of this technology—not just facial recognition, but many areas of AI. That can help us use the technology in positive ways. The iPhone was one of the first smartphones to use face ID technology, and it has great potential for personalized banking, as just another example. But it is also important to understand that by continuing to innovate and to study the effects of this technology, we can better promote its positive uses while defending against the potential bad ones.

The application of technology is often a double-edged sword. Good intentions are far from enough. As a society, we need to recognize this. We need to work together to apply the right guardrails to the application of technologies like facial recognition, guardrails that reflect the values of our society. From technology to civil society, from law enforcement to policymakers, I believe we all should try to be part of the solutions.

Tell us about the Stanford Institute for Human-Centered AI.

My PhD research traversed both human cognition and machine intelligence. As one of the first generation of researchers to turn AI from a niche lab science into a major driver of societal changes, even then I felt a sense of responsibility to contribute to promoting human-centered values and guidance in technology. HAI is founded on this belief that human-centered technology and the development of AI have to go hand in hand—we can no longer afford for the human aspect of tech to be an afterthought. And one of its greatest strengths is that it’s interdisciplinary.

You left Stanford to join Google for a while. What did you learn from your time there—and the fallout over Maven in particular?

I was at Google from 2017 for a planned 20-month sabbatical to try to help democratize some of the machine-learning tools through Google’s cloud platform. But this was also a time when companies were waking up to the impact of technology. It was very educational and humbling.

For me, it was illuminating and confirmed my entire career’s view that we need a human-centered approach to tech—not just to talk about it, but to do it in a deep way. Being in a big tech company made me sure we also need a multi-stakeholder approach involving so many sectors. It is important that technologists, companies, and business leaders engage with all parts of society when designing, developing, and deploying these technologies. This includes groups from individual citizens to nonprofits, communities, educators, policymakers, and more.

The experience also affected my sense of responsibility as an educator, which goes to the core of who I am. Look at the timing. It’s when AI4All went from Stanford to a national program. And the seeds of HAI were born from the understanding that came from that.

What did you make of the public and press reaction?

I’m very much an introvert, happiest when hunkering down in the lab working with students. But I also feel a tremendous sense of responsibility to communicate science to the public in a responsible way. Those not trained in the technology deserve that communication. Which makes your job very important, Will. In that respect, my experience at Google was so important—the public still doesn’t know enough about how AI works and what it can do.

Let’s talk a bit about your own research. You’re focused on health care, right?

One of the most exciting applications of AI and ML, in my opinion, is health care. There are countless scenarios where AI can become a useful tool for our patients, caretakers, clinicians, and the health care system as a whole.

One example is clinician hand hygiene in hospitals. This is a project we’ve worked on for many years, and it’s now in the collective consciousness because of the pandemic. Collaborating with Stanford Children’s Hospital, as well as Utah’s Intermountain Hospital, our team has been piloting a series of studies showing that computer vision technology can effectively detect the moments when proper hand hygiene should take place, making it possible for real-time alert systems to intervene.

We work closely with the clinicians, including nurses and doctors. We understood their eagerness to help improve clinical practice, but also to protect and respect privacy. So we chose to use depth sensors that do not capture identifiable images of the scenes and people, only 3D depth signals. Our team works with bioethicists, legal scholars, and ethics professors at Stanford on every research project.
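A crude, hypothetical sketch of that setup—not the actual Stanford system, which relies on trained computer-vision models over depth video—might simply watch the depth values in front of a sanitizer dispenser and flag when something moves close to it, using nothing but depth frames. The region, distances, and frame sizes below are invented.

# Illustrative sketch only, not the Stanford system: flag possible hand-hygiene
# events from privacy-preserving depth frames by watching the region in front of
# a sanitizer dispenser. All values here are synthetic/invented.
import numpy as np

DISPENSER_ROI = (slice(40, 60), slice(100, 120))  # rows, cols of the dispenser area
BASELINE_MM = 2000          # distance to the wall behind the dispenser
HAND_THRESHOLD_MM = 1500    # anything much closer suggests a hand at the dispenser

def hygiene_event(depth_frame_mm: np.ndarray) -> bool:
    """Return True if something (e.g. a hand) is close to the dispenser region."""
    roi = depth_frame_mm[DISPENSER_ROI]
    return float(np.median(roi)) < HAND_THRESHOLD_MM

# Synthetic frames: an empty scene, then a frame with a "hand" near the dispenser.
empty = np.full((120, 160), BASELINE_MM, dtype=np.float32)
with_hand = empty.copy()
with_hand[DISPENSER_ROI] = 600  # hand roughly 0.6 m from the sensor

print(hygiene_event(empty))      # False
print(hygiene_event(with_hand))  # True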

AI for senior care is one of my most passionate areas of research, partly because I’ve been taking care of my ailing parents, who are in their late 70s, for many years. With several health organizations and geriatricians, our team has been working on pilot projects that aim to use smart sensor technology to help clinicians and caretakers understand the progression of seniors’ health conditions—such as gait or gesture changes that might signal an increased risk of falling, or activity abnormalities that need further assessment or intervention.
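As one hedged illustration of the kind of signal such a pilot might surface—the data, window, and threshold below are invented, not Stanford's—a simple baseline comparison can flag days when someone's walking speed drops well below their own recent norm.

# Illustrative sketch only: flag days when a senior's walking speed falls well
# below their personal baseline, the kind of trend a clinician might review.
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical daily average gait speed (m/s) over 60 days, with a decline at the end.
gait_speed = np.concatenate([
    rng.normal(1.0, 0.05, 50),     # stable baseline
    np.linspace(0.95, 0.70, 10),   # gradual slowdown
])

WINDOW = 14           # days of history used as the personal baseline
DROP_FRACTION = 0.15  # flag if today's speed is >15% below the baseline mean

for day in range(WINDOW, len(gait_speed)):
    baseline = gait_speed[day - WINDOW:day].mean()
    if gait_speed[day] < (1 - DROP_FRACTION) * baseline:
        print(f"Day {day}: speed {gait_speed[day]:.2f} m/s vs baseline {baseline:.2f} m/s")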

Do you think efforts to use AI to fight Covid-19 could have unintended consequences? The pandemic highlights how inequitable society is, and without due care, AI algorithms could reinforce that—if algorithms work better for rich, white patients for example.

I have worked on applying AI to health care for more than eight years. As my collaborator, the Stanford medical school professor Arnold Milstein, always says, we have to focus on the most vulnerable groups of patients and their circumstances—housing, economics, access to health care, and so on. Good intentions are not enough; we need to get different stakeholders involved to have the right effect. We don’t want to keep repeating unintended consequences.

How do you ensure your own research doesn’t do this?

In addition to all the required guardrails for research involving human subjects, HAI is starting to do ethical reviews of our research grant proposals. These aren’t yet required by anyone, but we feel they are necessary, and we should continue to improve our efforts. Because of Covid we should put more effort into guardrails [such as more diverse teams and practices designed to prevent bias].

What would you say is HAI’s most important achievement to date?

I am especially proud of how we responded after Covid hit our country. We were planning a conference on April 1 on neuroscience and AI, but on March 1, we asked ourselves, what can we do for this crisis? In a couple of weeks, we put together a program with scientists, national leaders, social scientists, ethicists, and so on. If you look at our agenda, we have medicine and a drug discovery track, but also the international picture, privacy aspects related to contact tracing, and the social side of things, such as xenophobia toward different ethnic groups in the US.

Then, two months later, on June 1, we had another conference to look at the economic and election impact of Covid. We brought together national security scholars, doctors, and economists to talk about financing vaccines and the impact on the elections. I think this is an example of HAI engaging with impactful events and topics through an interdisciplinary approach that brings everyone in.

Tell us why you chose to join Twitter’s board.

I was flattered that Twitter invited me. Twitter is an unprecedented platform that gives individuals a voice on a global scale, shaping conversations of our society near and far. And because of that, it is so important to do it right. Twitter is committed to advocating for healthy conversation. As a scientist, I joined to be helpful, mostly on the technical side. This is only week three or four, but I hope I will have a positive impact; Twitter’s aspiration of serving healthy conversations aligns with my values.

As a user of social technology myself, I obviously know its negative aspects, and I hope all of us, in or outside Twitter, can help. And it will take time. It won’t be a light-switch moment. It will take lots of time, even trial and error and mistakes, but we have to try.

Let’s talk a bit about policies affecting AI and tech. What do you make of the US government’s decision to suspend H-1B visas?

I’m a grateful immigrant. Over my career at Stanford and earlier universities, I’ve worked with so many students from all over the world. I’m not an expert in this, but I think that attracting researchers from around the world is important to advancing our country’s technological capabilities. It drives innovation, if done well, and makes people’s lives better. We are proud of America’s history of technological advances, and we want that to continue. But we need to do this in a thoughtful and careful way.

How does your personal experience, as a Chinese-American AI scientist, inform your thinking?

America was built by immigrants, and I wouldn’t be where I am today without the support of so many individuals, schools, and workplaces that have given me opportunities since I came to this country from China with my parents at a very young age. My family came here for opportunities and freedom. We also cherish the cultural heritage we bring. My formative years, especially my education years, were spent here. I’m now taking care of my aging parents here. I have a happy life rooted in America.

Some colleagues have also pointed out to me that it’s very rare for a woman of color to lead a major AI organization in our country. I’m very proud of that, but even more, I feel a tremendous sense of responsibility, because it shouldn’t be this way. If we don’t have that representation, we miss opportunities and voices, and that’s extremely important.
