If you want to be great at games, consider Albert Einstein's advice: learn the rules of the game, and then play better than anyone else. Of course, this is easier said than done. But if you're an AI built by DeepMind, it gets much easier. So much easier, in fact, that you can skip the first part of Einstein's advice entirely.
DeepMind, a subsidiary of Alphabet, has previously made groundbreaking strides using reinforcement learning to teach programs to master the Chinese board game Go and the Japanese strategy game Shogi, as well as chess and challenging Atari video games. In all those instances, computers were given the rules of the game.
But Nature reported… that DeepMind's MuZero has accomplished the same feats, and in some instances outperformed the earlier programs, without first learning the rules.
Programmers at DeepMind relied on a technique called "look-ahead search." With that approach, MuZero assesses a number of potential moves based on how an opponent would respond. Although complex games such as chess offer a staggering number of potential moves, MuZero prioritizes the most relevant and most likely maneuvers, learning from successful gambits and avoiding ones that failed.
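To make the look-ahead idea concrete, here is a minimal Python sketch of a depth-limited search that expands only the few most promising moves at each turn. This is only an illustration of the general technique, not DeepMind's code: the `evaluate`, `legal_moves`, and `apply_move` functions here are hypothetical stand-ins for components that MuZero learns as neural networks rather than being handed the rules.

```python
def look_ahead(state, depth, our_turn, evaluate, legal_moves, apply_move, top_k=3):
    """Depth-limited minimax look-ahead: we pick our best move while assuming
    the opponent picks the reply that is worst for us."""
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return evaluate(state)  # value of the position from our perspective

    # Rank moves by the model's one-step estimate and expand only the most
    # promising few, rather than the full, staggering set of possibilities.
    ranked = sorted(moves, key=lambda m: evaluate(apply_move(state, m)),
                    reverse=our_turn)  # we want high values, the opponent wants low ones
    values = [
        look_ahead(apply_move(state, m), depth - 1, not our_turn,
                   evaluate, legal_moves, apply_move, top_k)
        for m in ranked[:top_k]
    ]
    return max(values) if our_turn else min(values)


if __name__ == "__main__":
    # Toy stand-in game: both players add a number to a running total;
    # we prefer a high total, the opponent prefers a low one.
    evaluate = lambda s: float(s)
    legal_moves = lambda s: [-2, -1, 1, 2]
    apply_move = lambda s, m: s + m
    print(look_ahead(0, depth=4, our_turn=True, evaluate=evaluate,
                     legal_moves=legal_moves, apply_move=apply_move))
```

The real system replaces these hand-written stand-ins with a learned model, which is what lets it dispense with the rulebook in the first place.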
More details about this over at TechXplore.
Wow!
(Image Credit: PIRO4D/Pixabay)