DeepMind’s artificial intelligence (AI) technology has now reached ‘human-level performance’ in a modded game.
Researchers at the British artificial intelligence company DeepMind recently taught AI to play video games; the technology has allowed machines to play titles like “StarCraft II”.
In the most recent development, however, the Alphabet-owned research firm was able to use its AI to reach ‘human-level performance’ in a modified version of the online shooting game “Quake III Arena”.
The game was released in 1999 by id Software.
Most AI in video games is restricted and not designed to play effectively or to win against humans. These new programs from DeepMind, however, have changed that narrative.
The AI developed by DeepMind has been able to beat expectations, defeating professional StarCraft II players in January 2019. DeepMind says, however, that the point of this new technology isn’t exactly entertainment: the company wants to use the digital world of Quake III Arena to teach AI to behave as humans do in the real world.
According to an article by DeepMind,
“The real world contains multiple agents, each learning and acting independently to cooperate and compete with other agents.”
The researchers at DeepMind opted for a modified version of Quake III Arena’s ‘Capture the Flag’ mode, in which two teams compete to capture the most flags within five minutes.
The AI agents now face the task of contending with opponents and scoring points while navigating the environment.
The version of the game used for the testing is a mod with no guns and no human-like characters; the AI players are instead represented by small balls that move through the maps. Rather than shooting down enemies as in the regular game, they tag opponents to send them back.
The AI agents do not have access to any information that a human player wouldn’t have. They learn independently from the game score and pixel data.
DeepMind scientist Thore Graepel said in a statement,
“What makes these results so exciting is that these agents perceive their environment from a first-person perspective, just as a human player would. In order to learn how to play tactically and collaborate with their teammates, these agents must rely on feedback from the game outcomes – without any teacher or coach showing them what to do.”
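Learning tactics purely from the outcome of a match, with no teacher or coach, is the core idea of reinforcement learning. The Python sketch below is a heavily simplified illustration of outcome-only learning, not DeepMind’s actual system: it uses a hypothetical one-step, two-action toy “game” and a basic policy-gradient (REINFORCE) update, and the reward table, action count, and learning rate are all made-up placeholders.

import math
import random

# Hidden expected payoff of each action (hypothetical values for illustration).
REWARDS = {0: 0.2, 1: 0.8}

theta = [0.0, 0.0]   # policy parameters: a preference score for each action
learning_rate = 0.1

def policy_probs(theta):
    """Softmax over action preferences."""
    exps = [math.exp(t) for t in theta]
    total = sum(exps)
    return [e / total for e in exps]

for episode in range(5000):
    probs = policy_probs(theta)
    action = random.choices([0, 1], weights=probs)[0]
    # Outcome-only feedback: a single noisy reward at the end of the "match".
    reward = 1.0 if random.random() < REWARDS[action] else 0.0
    # REINFORCE update: nudge the log-probability of the chosen action
    # upward in proportion to the reward received.
    for a in range(2):
        grad = (1.0 - probs[a]) if a == action else -probs[a]
        theta[a] += learning_rate * reward * grad

print(policy_probs(theta))  # the better action should end up with higher probability

After enough episodes, the policy concentrates on the action that tends to produce wins, even though it was never told which action is correct; DeepMind’s agents apply the same principle at vastly larger scale, with pixel observations and deep networks in place of this toy setup.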
The system has been proven to work: DeepMind ran an experiment using a tournament with the modded version of Quake III Arena, pitting 40 human players against the trained AI to gauge the machines’ skill.
They also found that some of the AI agents were able to surpass the win rates of highly skilled human players. The findings were documented using the Elo rating system, which ranks players based on win probabilities.
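For a rough sense of how Elo ratings translate into win probabilities, here is a small Python sketch of the standard Elo expected-score formula and rating update; the ratings used are purely illustrative and are not figures from the study.

def elo_expected_score(rating_a, rating_b):
    """Probability that player A beats player B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

def elo_update(rating, expected, actual, k=32):
    """Adjust a rating after a match (actual: 1 = win, 0.5 = draw, 0 = loss)."""
    return rating + k * (actual - expected)

# Illustrative example: an agent rated 1600 against a human rated 1500.
p_win = elo_expected_score(1600, 1500)
print(f"Expected win probability for the agent: {p_win:.2f}")  # ~0.64

In this scheme, a 100-point rating gap corresponds to roughly a 64% expected win rate for the higher-rated player, which is how a leaderboard of ratings doubles as a table of head-to-head win probabilities.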