A truly kick-ass videogame combines clever code, gorgeous graphics, and artful animation—plus thousands of hours of hard work.
Researchers at Electronic Arts—the company behind FIFA, Madden, and other popular games—are testing recent advances in artificial intelligence as a way to speed the development process and make games more lifelike. And in a neat twist, the researchers are harnessing an AI technique that proved itself by playing some of the earliest console videogames.
A team from EA and the University of British Columbia in Vancouver is using a technique called reinforcement learning, which is loosely inspired by the way animals learn in response to positive and negative feedback, to automatically animate humanoid characters. “The results are very, very promising,” says Fabio Zinno, a senior software engineer at Electronic Arts.
Traditionally, videogame characters and their actions are crafted manually. Sports games such as FIFA use motion capture, a technique that tracks a real person, often via markers on the face or body, to render more lifelike actions in human characters. But the possibilities are limited to the actions that have been recorded, and code still must be written to animate the character.
By automating the animation process, as well as other elements of game design and development, AI could save game companies millions of dollars while making games more realistic, and efficient enough that a complex game could run on a smartphone, for example.
Reinforcement learning has sparked excitement in recent years by letting computers learn to play complex games and solve vexing problems without any instruction. In 2013, researchers at DeepMind, a UK company later acquired by Google, used reinforcement learning to create a computer program that learned to play several Atari videogames to a superhuman level. The program learned to play through experimentation and feedback from the pixels and the game score. DeepMind later employed the same technique to build a program that mastered the fiendishly complex and subtle board game Go, among other things.
In work to be presented in July at Siggraph 2020, a computer graphics conference, the EA-UBC researchers show that reinforcement learning can create a controllable soccer player that moves realistically without using conventional coding or animation.
To make the character, the team first trained a machine-learning model to identify and reproduce statistical patterns in motion-capture data. They then used reinforcement learning to train another model to reproduce realistic motion with a specific objective, such as running toward a ball in the game. Crucially, this produces animations not found in the original motion-capture data. In other words, the program learns how a soccer player moves, and can then animate the character jogging, sprinting, and shimmying by itself.
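The reinforcement-learning half of that pipeline can be illustrated with a toy example. The sketch below is an illustrative assumption, not EA's actual system: instead of a full-body soccer character, a tabular Q-learning agent on a one-dimensional pitch learns, purely from reward feedback, to run toward a ball. The point it demonstrates is the same one the researchers describe: the behavior is discovered through trial and error rather than scripted.

```python
import random

# Toy stand-in for reward-driven training: an agent on a 1-D pitch learns
# to run toward a ball. The environment, names, and numbers are all
# illustrative assumptions, not EA's code.

N_STATES = 10          # positions 0..9; the ball sits at position 9
ACTIONS = [-1, +1]     # step left or step right

def step(state, action):
    """Apply an action; reward 1.0 only on reaching the ball."""
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    done = nxt == N_STATES - 1
    return nxt, reward, done

def train(episodes=2000, alpha=0.5, gamma=0.9, epsilon=0.2, seed=0):
    """Tabular Q-learning: learn action values from reward feedback alone."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        s = rng.randrange(N_STATES - 1)  # start at a random position
        for _ in range(100):             # cap episode length
            if rng.random() < epsilon:   # explore occasionally...
                a = rng.randrange(2)
            else:                        # ...otherwise act greedily
                a = max((0, 1), key=lambda i: q[s][i])
            nxt, r, done = step(s, ACTIONS[a])
            # Standard Q-learning update toward reward + discounted future value.
            q[s][a] += alpha * (r + gamma * max(q[nxt]) - q[s][a])
            s = nxt
            if done:
                break
    return q

q = train()
# After training, the greedy action in every interior state is "move right,"
# i.e. run toward the ball -- behavior learned from reward, not hand-coded.
policy = [max((0, 1), key=lambda i: q[s][i]) for s in range(N_STATES - 1)]
```

The EA-UBC system replaces this toy's lookup table with neural networks and its single reward with motion objectives learned from the motion-capture model, but the feedback loop is the same shape.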
“I can definitely see this technology being useful in different ways,” says Julian Togelius, a professor at NYU and a cofounder of Modl.ai, a company that makes AI tools for games. He adds that the reinforcement-learning project is part of a wave of automated or “procedural generation” methods that will transform how game content is created.
“Procedural animation will be a huge thing,” Togelius says. “It basically automates a lot of the work that goes into building game content.”
As consoles, PCs, and smartphones become ever more powerful, games will become increasingly sophisticated and complex, requiring greater investment from game companies. Existing tools can help make designers and animators more efficient, but they're still needed at every step. Just as AI can concoct photo-realistic faces and scenes when fed enough data, algorithms may automate the creation of new characters and scenes.
AI could generate content for other genres, including action and role-playing games. Some game companies are experimenting with procedural generation as a way to make games more expansive. A simple method is used to generate new worlds for players to explore in No Man’s Sky, a space-based survival game released in 2016. Togelius says AI is also emerging as a powerful way to test games and find bugs, using artificial players.
At the other end of the spectrum, there’s potential for AI to generate simple videogames from scratch. On Friday, researchers from the University of Toronto, MIT, and Nvidia, which makes gaming chips, revealed an AI engine that learned how to recreate the classic game Pac-Man without any of the original code.
On the 40th anniversary of the arcade game’s release, the researchers showed how a program called GameGAN can recreate simple games by watching the screen and monitoring the controls used during 50,000 games of Pac-Man. GameGAN then generated its own version, complete with new scenarios and platforms.
It took 10 engineers at Namco, the company behind Pac-Man, 17 months to design, program, and test the original game. If fed enough data, such an algorithm might eventually fashion a compelling new game—an Angry Birds or Candy Crush that no one needed to code.
“You can imagine training it on many games—thousands of different games,” says Sanja Fidler, an assistant professor at the University of Toronto and director of AI at Nvidia. “And one would hope that now you can somehow mash up and interpolate different things from different games.”
Zinno of EA says it may be several years before game developers routinely use AI, partly because machine-learning algorithms are tricky to understand and debug. The proof will be in the popularity of the resulting games, he notes: “Game development is its own beast. No matter how incredible your animation technology, the point is, is it fun to play?”
Michiel van de Panne, a professor at UBC who is involved in the EA project, says the next step is to use reinforcement learning to train nonhuman videogame characters inside physically realistic environments. But he acknowledges it will be more difficult to train algorithms to come up with entirely new animation from scratch, because it is difficult to quantify what players will find appealing. “I’m waiting to see something that really takes full advantage of AI for the generation of animation,” van de Panne says. “But it will come for sure.”