
The Case for a Light Hand With AI and a Hard Line on China


Last week, at the WIRED HQ at CES, I spoke with Michael Kratsios, the chief technology officer of the United States. We dug into the government’s recent regulatory framework for AI, the potential for an AI cold war with China, and whether or not the NSA is building a quantum computer. The conversation has been lightly edited for clarity.

Nicholas Thompson: You've just laid out a memorandum to the heads of executive departments and agencies on how to regulate artificial intelligence. Why don’t you give us the quick top-line summary of what you've done and why it matters?

Michael Kratsios: Yeah, absolutely. To kind of step back a second, this requires a little bit of context to see where this fits into the larger effort we've been making as a country.

Beginning in 2017, we identified some of the core emerging technologies where we need to ensure American leadership. First and foremost was artificial intelligence. And we launched the US national strategy on AI last year, called the American AI Initiative. The American AI Initiative really is a whole-of-government approach to ensuring American leadership in AI and consists of four key areas: one is research and development leadership, two is workforce, three is regulations, and four is international engagement.

Now for each of those pillars, we continued to work very diligently over the last year. And then the big announcement that came out yesterday was on that third pillar of regulations. We want to create an environment where it’s the United States that encourages and drives entrepreneurs to make next-generation AI discoveries here in the US. We want to do that in a way that's still true to the values that we as Americans hold dear, those of privacy and civil rights and freedoms. And in order to do that, we developed this regulatory memo. What this memo essentially does, this is a direction from the White House to agencies that regulate AI-powered technologies and gives them considerations or more guidance as to how we should do that.


So in our system, we have lots of agencies that touch AI-powered technology. You could be at the Food and Drug Administration looking at AI-powered medical diagnostics, and those need to be approved. You could be at the [Federal Aviation Administration] dealing with drones that are flying, or the [Department of Transportation] looking at autonomous vehicles. Each of those agencies is dealing with an AI-powered technology, but they need to have some flexibility in the way they approach the regulation of that particular AI-powered tech. So what our memo does is essentially provide 10 principles that they should be focusing on. And that's out for comment now in the community; we're excited to get comments back on how we can improve this over the next 60 days, and then, once the memo is final, any time an agency attempts to put forward a regulation impacting AI technology, it must essentially comply.

NT: So my quick summary of the memo—you can tell me whether this is accurate or not—is: AI is important, there are lots of really hard questions involved, please don't overregulate and fuck this up.

MK: That's about right, yes. Generally there are three main categories of considerations that agencies should have. The first is to ensure public engagement. First and foremost, we recognize as bureaucrats in Washington that we don't have all the answers, that even if we pull the best people together from all across government to sit down and think about what are the AI regulatory considerations you have for the next-generation autonomous vehicle, we don't have the right answers. So first and foremost, we're directing all of our agencies, when they are moving forward with regulating AI, to call a committee, go talk to the community, have stakeholder meetings, bring people to Washington to discuss it with them.

Number two is to promote this light-touch regulatory approach, this idea that if we're too heavy-handed with artificial intelligence, we end up stifling entire industries that we want to make sure to foster and generate here in the United States. And the last, and I think this is the one that tends to get the most coverage, is this idea of promoting trustworthy AI. We want Americans who interact with these technologies in the private sector to trust them. Like when you go and take a prescription drug, you have confidence that the FDA has done a very thorough process ensuring that that drug is safe. We want to make sure the same thinking goes into play with these types of AI technology.

NT: So let's talk through a couple of specific examples. The NHTSA has come under a certain amount of pressure for not regulating autonomous vehicles. So Uber had a self-driving car hit someone and kill them; Tesla's Autopilot has been involved in a number of accidents. NHTSA and DOT have not taken aggressive action. So if I'm working at one of those places today, and I read this, I kind of feel like I'm in the right place, is that right?

MK: No, I think what it provides is some guidance back to the agencies like DOT. When you think about cars right now, you know, no American company can produce an autonomous vehicle and put it on the street; we're still in testing mode. And I think that plays very much into the general theme of this memo, where you have to be able to test these technologies safely, and hopefully this will provide further guidance, so that as these testing opportunities are presented around the country, DOT and other agencies will be asking the right questions.

NT: So let me give you another example of how to think about it. We recently ran a story in WIRED about an algorithm in the health care system that would evaluate patient risk and potential patient cost, and it turned out that it was biased against black people. The experience of being black in America raises all kinds of obstacles to, for example, following up on a suggested x-ray from your doctor. So the algorithm, even though race hadn't been encoded into it, turned out to be racist. So if I'm at [the Department of Health and Human Services], and I'm trying to figure out a policy for nonracist algorithms, how does this memo help me think through that question?

MK: Well, as you know, unlawful discrimination is unlawful to begin with. So whether this memo existed or not, that is illegal. You can't do that.

NT: So it's illegal if the outcome is racist, even if there's no input that has to do with the race and the creators of the algorithm had no idea what it was going to do?

MK: Yeah, absolutely. I think what this memo provides is some guidance to the agencies to ask those tough questions that may not have been considered otherwise. And as you're attempting to create a regulatory process for potentially approving some sort of medical diagnostic, which could potentially introduce some sort of racial bias, these are the types of questions that regulators must be asking, and do the right cost-benefit analysis and testing of that particular technology to make sure that whatever comes out actually does comply with the law.

NT: But it seems like the implication of this memo is, go ahead and push the algorithm live, right? We don't want to stop you from trying to innovate with AI outcomes that can massively improve patient care. So you're not saying you have to test them and make sure that every outcome doesn't have disparate racial impact, right?

MK: So generally speaking, I think this provides flexibility to the agencies to develop agency-specific processes for their regulation. If you are dealing with life-and-death situations relating to patients, the process that you're going to undertake in order to check and validate the outcome of that particular algorithm is very different than in some other instances. So what we want to do is introduce the right questions to the agencies. And what this memo actually strives for is encouraging agencies to come back to the White House and report to us how they are going to be implementing this memo in their regulatory process. If we do that back-and-forth with the agencies, there will be a better sense of which ones are making the more impactful decisions and should approach the regulation of those technologies in a much different fashion.

NT: So one of the possible ways you could have written the memo was to say, “Bias and AI is a huge issue, you must be sure before any AI algorithm goes live and we are able to test it, make sure it doesn't have disparate impact, that we need to make sure it has explainability.” And you have not done that. You said, “These are the values that are important. This is how you need to test. Here's a framework for how you regulate and how you think about it.”

MK: Yes, yeah.

NT: This memo is very United States–specific, and now we have opted out of working on guidelines across Western countries. We're not participating in the various governing bodies across multiple countries. Why is that, or will we do that?

MK: Well, that assertion could not be further from the truth. The United States joined the OECD in Paris just last May. And at our lead, we got the OECD nations, which are the largest democratic nations in the world, to come together and agree to AI principles. This was an example where, in an administration that is very cautious oftentimes with multilateral agreements, we were able to step in and say it's particularly important that the West comes together on the way that we should be looking at regulation or oversight of artificial intelligence. And we pushed forward the AI principles, and we agreed to them last May. Now this is the next step, where each country in the OECD that has signed on to these principles faces the very tough and challenging question of how to implement these high-level principles using its own domestic regulatory processes. And we're one of the first out of the gates to do this. More generally in technology, the US has sort of been in wait-and-see mode with most regulations—take GDPR as an example.

This is a great example where we're leading the pack, and hopefully over the next couple months, we will take this on the road, go to Europe, talk to regulators there, and say, "Look, if we want the West to lead the next generation of [technological] discoveries, we have to have the regulatory systems that all of our [innovators] understand.” We can do that with this memo.

NT: Let's get into the issue of China, which I think is underlying some of this memo and some of your previous statements. And, honestly, maybe the most important question in tech in the world. So do you think that AI right now is better at enabling authoritarianism or democracy?

There's an argument that AI is kind of a centralizing force, right? What makes AI good, in part, with machine learning is having lots of data, and centralized governments have more control over the data, so there's a real fear that as AI becomes more important in economies, it will actually be the authoritarian governments that get the advantage. A counterargument of course would be, it's basically businesses, and business innovation tends to come in democratic countries. So tell me where you think AI is prospering the most and how you think about the balance of AI in authoritarianism and democracy?

MK: Well, what we're seeing is that authoritarian regimes around the world have not hesitated to attempt to leverage artificial intelligence to pursue their agendas. China is one of the best examples and an obvious one. It's a place where China's Communist Party has essentially twisted this technology in a way that, to many in the West, is extraordinarily frightening. When AI is used to track people, to identify ethnic minorities, to imprison them in concentration camps, it's extraordinarily disturbing and troubling. And this is something that we have been speaking about across the administration. And not just for me, it's something that we as the West really need to stand up to. And I think the imperative of Western leadership in technology could not be more important. We need to lead the world, the next generation of technological discovery, because if we don't, it's going to be these authoritarian regimes that are going to be driving and inserting their values into the process.

That being said, we all know here, as people who love technology, the power of artificial intelligence and other technologies to fundamentally change the world for the better. We know that, we can see it in front of us, when we have autonomous vehicles on the road and the disabled and the elderly can get to places, where there aren't as many people who are dying on the streets, and because of these, it can be a really big deal. And for us, we don't want to stifle that innovation here in the United States, just because other people are using it for nefarious purposes. It makes our need to lead the world even more important.


NT: So your fear is that if we don't lead, if we don't get the regulations right, if we fall behind in AI, AI will essentially be baked in an authoritarian way.

MK: Yes, absolutely. And that's why I think—

NT: And what exactly does that mean?

MK: It means that these authoritarian regimes are trying to guide and drive this development, and will attempt to export those values abroad. And if they're the ones making the next great technological discoveries, they will be the ones dictating types of technologies that everyone will want to get their hands on. And that's not something we believe is going to be good for the world generally, certainly not for America. So for us, we think American leadership and Western leadership broadly on technologies is important.

NT: OK, so let's accept that premise; and I do very much accept that premise. Then the question is, what steps do you take? The steps that the White House has taken have very much been to split the US off from China, at least as I read it, right? You take the major Chinese AI companies that have been involved in surveilling the Uighurs, so they can't do business with American companies. You've taken other Chinese tech companies, not for particular surveillance reasons but out of a belief that they have backdoors for the Chinese government, and you put Huawei on the entities list, meaning that Huawei can no longer work with US companies. So it has really been a split. Why is splitting the United States and China the proper way to deal with this problem, as opposed to trying to integrate?

MK: The reality is our general strategy has been one to promote and protect. There's no way that we can ensure leadership without doing both. So you highlighted a couple of our techniques to make sure that our networks in the United States are secure, so that when you get on your phone you know that your information is safe and secure, and that government networks keep their information safe and secure.

On the promote side is where I think we're leaning in harder than we ever have before, and I think it's equally as important as protection. So on the promote side, a lot of what we're doing has been focused on research and development. The federal government is spending more on research and development than at any time before in the history of this country. The president signed the largest R&D budget in history last December, before the Christmas break. As part of the American AI Initiative, for the first time in history, we're actually tracking AI R&D across all agencies. What makes the United States so unique is that we don't have a Ministry of Science; we have Darpa doing incredible work, we have the National Science Foundation doing incredible work, the Department of Energy, where there are 16 national labs, and so on. And on our first tally, in our 2020 budget, we had nearly a billion nondefense, unclassified dollars being spent on AI, and that's going to increase even more in years to come. So that's one.

Number two is our huge emphasis on workforce development. Right now, the United States has the lowest unemployment rate in a very, very long time. And we're seeing across the board areas where we can prepare American workers for next-generation jobs. That's something we've emphasized to all of our programs, this idea of training the next generation of AI researchers here in the United States. So we have prioritized grants, fellowships, and dollars that the federal government has to sponsor students to pursue graduate degrees in the US. Now they're prioritizing for two years. So there are a lot of instances of this promote side, and it's balanced with all the protective measures.

NT: I want to stick with protect, because I totally agree with all the promotion. I wish there were even more promote. All the money you can put into AI, all the great AI research you can bring here: I'm totally down with that. But that'd be boring. With the protect, I feel like there may be more of a disagreement. So what could the Chinese government do that would—I mean, if the Chinese government were to credibly say, "We are no longer using AI to surveil the Uighurs, we are no longer using AI to suppress people and track protesters," would that change your policy?


MK: Well if you read the notice that we sent out when those companies were added to the entities list, we cite those specific things. So if those in some alternate reality were not true anymore, then there would be no reason for them to be on the list.

NT: So let's start with the implications of the entities list. So Huawei, one of the largest Chinese phone manufacturers, as a result of being put on the list is no longer going to use Android. They're going to develop their own operating system. And in fact, you go out on the floor and you talk to other Chinese companies, they say, "I don't know, we used to really want to work with the US companies. We used to really want to use Android. I mean, it's still great, but like once Huawei builds their OS, we're going to use that instead." So how is it in the US national interest to suddenly have hundreds of millions of phones around the world that don't have Android, but have Huawei OS?

MK: The better question is, how is it in the US interest for Huawei to be part of our network, and to let an authoritarian regime that has attempted to undermine and compete with the United States have access to our technology. And I think that is a dramatically more—

NT: But isn't that different? Can't you ban Huawei from being part of our 5G networks without putting it on the entities list?

MK: The entities list is the legal path that makes the most sense. And as I said, I think it's critically important for the rest of the world to be a little more cognizant and understand the threats posed by these companies in particular. Their practices are well documented, from stealing IP to using these techniques around the world. This is not a company that any rational business person would want to do business with.

NT: How worried are you that one of the implications will be, you know, something that we at WIRED have called the new Cold War, where there is a Chinese tech stack with Huawei 5G, and you know, Chinese-made phones running a Huawei OS. And there's a Western tech stack with Ericsson 5G, and Apple, Google, whatever it is. Does it worry you if you think about a future where we split that way? Or is that just the inevitable consequence of changing things?

MK: I don't know if it's an inevitable consequence. I don't want to make a postulation of that magnitude. But again, I think the truth is, when you're dealing with players that are exhibiting this type of behavior, just ignoring them, ignoring your problem, is not the solution. And I think we as the West generally need to be a lot more open-eyed about the situation we're facing, and some of these operators, and their practices and use of this technology, is something that should concern us greatly.

NT: So what is the best-case scenario for the relationship between the US and Chinese tech industries over, say, the next five years? Give me the best-case scenario, and give me your worst-case scenario.

MK: I don't know about that. But I think we're excited with the phase-one trade deal that's coming together, and we're excited to morph into phase two. I think, ultimately, the Chinese are hopefully working as a big part of the larger global economic order, something that could benefit their people, just as we know it's benefited the West for years. And ultimately, I think we'll be able to present—

NT: But let me ask you more specifically about—as the Trump administration has laid out all these policies, part of me has thought, I mean, it's us trying to get leverage, trying to get to a better position. You know, it's kind of like the recent assassination in Iran, where it's saying, "Look, you guys have crossed the red line. Huawei has done something we know is not considered acceptable. So we're going to raise the price of that." But the hope is they will reach a new and better relationship, right? That China will say, "OK fine, we will no longer steal your IP." China will feel the pain of all these companies being put on the entities list and will respond, and we'll reach a new equilibrium, where once again, the US and China do lots of business across the tech industry, and folks on the floor at CES aren't afraid of doing business with the US. And it's done on better terms, right? That this is partly just the US negotiating. Is that a fair way of looking at it? Is that the way you look at it? Or is that wrong?

MK: No, I think that's pretty fair. I mean, the example that I always bring up is the WTO. We went in there believing that allowing China to enter the WTO in the 2000s would sort of transform the business practices that they exhibited into ones that conformed with the global order. And over the next almost 20 years, the exact opposite happened. And it's left the West in a very interesting bind, because it was the first time in history where we had this sort of status quo where everyone was doing the right thing, and then suddenly somebody stopped, and no one knew how to respond. And the mechanisms for keeping people in line and in order are not particularly efficient. We bring some sort of issue to the WTO, good luck getting that sorted within five years.

So you know, I think what we face, especially in the tech domain, is that we can't be so naive. When China came to the G20 in Osaka, we decided to move ahead with the OECD AI principles and bake them into a [communique] coming out of the G20 ministers meeting in the summer. And the Chinese breached them. They're not practicing any of the principles. They literally are doing the exact opposite. And for us in the West to keep believing that somehow they're just going to miraculously change, I think, is very, very much why we are using the particular tools that are being used by this administration. We finally have an administration that's actually doing something to protect the American innovator whose idea has been stolen and taken for decades.

NT: So your position is, this may well move China to a better position where there's less IP theft, but that is not our expectation. We're more just finally reacting, and the US has taken way too long to do something.

MK: We're reacting, but we expect that they will. Let's hope something will change.

NT: So you do actually think that, if I could make up an answer to my question from a minute ago, the best-case scenario in two years would be a Chinese administration that has credibly agreed to stop IP theft, stop hacking us, stop surveilling, you know, stop violations of human rights through AI. And in return, these companies have been taken off the entities list, and we're able to do business in a normal way again. That might be the best-case scenario?

MK: Yeah, absolutely.

NT: Well, let's go to the next topic on that happy note.


Quantum computing. So I know you're a big proponent of quantum computing, I know that you care a lot that the US gets the lead in quantum computing. Is the NSA working on a quantum computer?

MK: I can speak for our civilian approach to quantum computing. One of our biggest legislative wins at the end of last year was the signing of the National Quantum Initiative. This was a great example of a bipartisan piece of legislation where folks on both sides of Congress came together and said, "Look, the US has to lead in this technology, and the damage that could be done if we don't lead the world in this is so grave that we need to drive this effort." They committed $1.2 billion over five years to this effort, set up the National Quantum Coordination Office at the White House, and have been moving ahead. So there have been announcements around quantum consortium centers, big stuff at the Department of Energy and other agencies. So we're very excited about this. We believe that it is the exact type of research and development that the federal government should be focused on. This is early-stage, precompetitive, basic research, where the incentive for the private sector to invest is actually much lower than with things like artificial intelligence. So these are the important dollars that taxpayers must be spending, because the private sector is not quite ready to make the big investment.

NT: Sounds like a yes to my question! All right, great. So when you think about quantum computing, if I'm reading your answer more seriously, it sounds like you would like the US private sector to make major advances in quantum computing, but you really do believe that this may be an area where there need to be government investments, not just government encouragement of the private sector?

MK: I think it's government encouragement of the private sector, but there was also recognition that there was a big chunk of basic, early-stage research in quantum science that someone should be paying for, and there's no one really out there other than the federal government to do so. The federal government has been supporting early-stage, basic research for decades now. And some of the greatest technological discoveries have happened because of the federal government's focus on early-stage research. We're seeing amazing programs at the National Science Foundation, great stuff from our national labs, and these will ultimately be commercialized and driven forward by the private sector.

NT: So let's talk a minute about immigration. If you look at the companies in the US—Microsoft, Google—that are leading on quantum computing, they're run by immigrants. And the Trump administration does not have a reputation of welcoming immigrants from around the world. How do you change that perception? And how do you make sure that the smartest minds in the world come to America?

MK: Absolutely. I would just point you to our immigration proposal at WhiteHouse.gov. The president has been very vocal about a merit-based immigration system, one that supports and encourages the best and brightest from around the world to come to the United States. And there are other countries around the world that have merit-based systems, ones that provide more points, or more benefits, or better spots in the queue, if you will, if you have advanced degrees or some sort of skill or merit, and that's what we want to do. So I think this is one place where we're actually very aligned with the tech industry. We would love to see merit-based immigration, so they can have more of the best and brightest.

NT: They're aligned with the policy of a merit-based system, and they're concerned about the rhetoric coming from the White House and the perception that the United States is no longer being a welcoming place. Clearly, you're not in charge of the rhetoric, you're more in charge of the policy, but is there anything that can be done to change that perception?

MK: Well, I love to do talks like these. I love to go out there and speak to anyone who wants to listen. I deeply believe that America can and should be the home for the next generation of discoveries. My mother was an immigrant who came over from Greece. And I deeply believe that this is a country that has always welcomed immigrants. And I'm deeply excited about a president who's interested in driving a merit-based immigration system.

NT: You work for the federal government, and elections are run at the state level. But when will I be able to vote online?

MK: I wish I had a good answer for you. You know, I think it would be great. Voting online would be fantastic. Obviously it's a state-run effort, and the more we can do maybe at the federal level of figuring out standards and giving technical assistance to a lot of these—

NT: One of the huge problems will be where's it hosted, right? You could presumably figure out how to host state elections, or in a federal election, we could figure out how the US government could host servers for the states, right? There seem to be a lot of things you could do as White House CTO to make this work, and it would be great. So on a show of hands, who wants this man to make sure we get online voting in the United States?

MK: It appears unanimous! I'll get on it!

NT: Thank you very much to Michael Kratsios. Thanks so much for coming.
