Takuma Miyazono began driving virtual race cars at age 4, when his father brought home the highly realistic motorsport game Gran Turismo 4. Sixteen years later, in 2020, Miyazono became the Gran Turismo world champion, winning an unprecedented “triple crown” of esports motor racing events. But he had never faced a Gran Turismo driver quite like GT Sophy, an artificial intelligence developed by Sony and Polyphony Digital, the studio behind the Gran Turismo franchise.
“Sophy is very fast, with lap times better than expected for the best drivers,” he says via a translator. “But watching Sophy, there were certain moves that I only believed were possible afterward.”
Video games have become an important sandbox for AI research in recent years, with computers mastering a growing array of titles. But Gran Turismo represents a significant new challenge for a machine.
In contrast to board games that AI has mastered, such as chess or Go, Gran Turismo demands continuous judgments and split-second reflexes. And unlike fast-paced action games such as StarCraft or Dota, which AI has also conquered, it requires executing challenging driving maneuvers in a physically realistic world. A Gran Turismo ace must push a virtual car to the limits of friction and aerodynamics while holding a precise driving line, all while managing the subtle dance of overtaking an opponent without unfairly blocking their line and incurring a penalty.
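To make that contrast concrete, here is a minimal, hypothetical sketch of the difference between a board game's discrete menu of moves and the continuous control signals a racing agent must emit many times per second (the names and values below are invented for illustration, not taken from Sony's or Polyphony Digital's code):

```python
# Hypothetical sketch: discrete board-game moves vs. continuous racing control.
# None of these names come from Gran Turismo or GT Sophy's actual interface.
from dataclasses import dataclass

# A board game like chess offers a finite menu of legal moves, one per turn.
chess_moves = ["e2e4", "g1f3", "d2d4"]  # pick one discrete option

@dataclass
class RacingControls:
    """A racing agent must output real-valued controls continuously."""
    steering: float  # -1.0 (full left) to 1.0 (full right)
    throttle: float  # 0.0 to 1.0
    brake: float     # 0.0 to 1.0

# At, say, 10 decisions per second over a two-minute lap, that is roughly
# 1,200 coupled, real-valued choices, each constrained by tire friction
# and aerodynamics rather than by a rulebook.
mid_corner = RacingControls(steering=-0.12, throttle=0.85, brake=0.0)
```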
“Outracing human drivers so skillfully in a head-to-head competition represents a landmark achievement for AI,” wrote Chris Gerdes, a professor at Stanford University who studies autonomous driving, in an article published on Wednesday in the journal Nature alongside the Sony research.
Gerdes says the techniques used to develop GT Sophy could help the development of autonomous cars. Self-driving cars currently use the kind of neural network algorithm that GT Sophy employs only for perception: keeping track of road markings and spotting other vehicles and obstacles. The software that actually controls the car is handwritten. “GT Sophy’s success on the track suggests that neural networks might one day have a larger role in the software of automated vehicles than they do today,” Gerdes writes.
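As a rough illustration of the division of labor Gerdes describes, today's stacks typically feed a learned perception model into hand-coded control logic. The sketch below is hypothetical (every function name and number is invented, not drawn from any production system):

```python
# Illustrative split found in today's autonomy stacks: a neural network
# handles perception; explicit, human-authored rules compute the controls.
# All functions and figures here are hypothetical.

def perception_net(camera_frame):
    """Stand-in for a learned model that finds lane markings and obstacles."""
    return {"lane_offset_m": 0.4, "obstacle_ahead_m": 35.0}

def handwritten_controller(percepts):
    """Hand-coded control law: steer toward the lane center and brake
    proportionally as an obstacle gets close."""
    steering = -0.5 * percepts["lane_offset_m"]  # simple proportional rule
    brake = max(0.0, (20.0 - percepts["obstacle_ahead_m"]) / 20.0)
    return steering, brake

# The larger role Gerdes imagines would replace handwritten_controller with
# a learned policy, GT Sophy-style, mapping percepts directly to controls.
print(handwritten_controller(perception_net(camera_frame=None)))
```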
Sony announced in 2020 that it was developing a prototype electric car featuring advanced driver assistance features. But the company says there are no plans, as yet, to use GT Sophy in its automotive efforts.
GT Sophy also shows how important simulated environments have become for real-world AI systems. Many companies developing self-driving technology use sophisticated computer simulations to generate training data for their algorithms. Waymo, the self-driving car company owned by Alphabet, for instance, says its vehicles have driven the equivalent of 20 billion miles in simulation.
“The use of machine learning and autonomous control for racing is exciting,” says Avinash Balachandran, senior manager of human-centric driving research at the Toyota Research Institute, which is testing self-driving cars capable of operating at extreme speeds. He says Toyota is working on “human amplification, in which technologies that leverage expert learnings from motorsport can one day improve active safety systems.”
Bruno Castro da Silva, a professor at the University of Massachusetts Amherst who studies reinforcement learning, calls GT Sophy “an impressive achievement” and an important step toward training AI systems for autonomous vehicles. But da Silva says going from Gran Turismo to the real world will be challenging, because it is difficult for reinforcement learning agents like GT Sophy to weigh the long-term implications of their decisions, and because it is hard to guarantee the safety or reliability of such algorithms.
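Da Silva's first concern follows from how reinforcement learning agents are typically trained: they maximize a discounted sum of future rewards, so consequences far in the future are heavily down-weighted. A small illustrative calculation, using the standard textbook formulation rather than anything specific to GT Sophy:

```python
# The discounted return G_t = r_t + gamma * r_{t+1} + gamma**2 * r_{t+2} + ...
# With a discount factor gamma < 1, rewards hundreds of steps away barely
# register, which is one reason long-horizon consequences are hard to weigh.

def discounted_return(rewards, gamma=0.99):
    return sum(gamma**t * r for t, r in enumerate(rewards))

# A reward of 1.0 arriving 200 decisions in the future is discounted to:
print(discounted_return([0.0] * 200 + [1.0]))  # ~0.134, about 13% of its value
```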