The Best Open Source Research at DeepMind in 2019 So Far

ODSC - Open Data Science
4 min read · Oct 30, 2019


DeepMind, a company founded in London that has since spread across the world and joined forces with Google, is one of the leading AI research labs today. Its interdisciplinary team has made huge strides in AI applications from healthcare to game theory, and it's still going strong. Here are our picks for the best open source research out of DeepMind in 2019 so far.

[Related Article: Behavior Suite for Reinforcement Learning]

OpenSpiel: A Framework for Reinforcement Learning in Games

GitHub Link

This paper introduces OpenSpiel, a collection of environments and algorithms for reinforcement learning in games. It covers some of the framework's code, the terminology and concepts needed for reinforcement learning in games, and extra tools for analyzing learning dynamics.
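
For a feel of the framework's Python frontend, here is a minimal sketch that plays one episode with random actions; any registered game name can be swapped in for “kuhn_poker”:

```python
# Minimal OpenSpiel rollout: play one episode of Kuhn poker with
# uniformly random actions and print each player's return.
import random

import pyspiel

game = pyspiel.load_game("kuhn_poker")  # any registered game name works
state = game.new_initial_state()

while not state.is_terminal():
    if state.is_chance_node():
        # Sample a chance outcome (here, a card deal) by its probability.
        outcomes, probs = zip(*state.chance_outcomes())
        state.apply_action(random.choices(outcomes, weights=probs)[0])
    else:
        # A uniformly random policy stands in for a learned agent.
        state.apply_action(random.choice(state.legal_actions()))

print("Returns per player:", state.returns())
```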

By: Marc Lanctot, Edward Lockhart, Jean-Baptiste Lespiau, Vinicius Zambaldi, Julien Pérolat, Finbarr Timbers, Karl Tuyls, Shayegan Omidshafiei, Daniel Hennes, Paul Muller, Timo Ewalds, Ryan Faulkner, János Kramár, Bart De Vylder, David Ding, Sebastian Borgeaud, Matthew Lai, Julian Schrittwieser, Thomas Anthony, Edward Hughes, Ivo Danihelka (all from DeepMind); Satyaki Upadhyay, Sriram Srinivasan, Brennan Saeta, James Bradbury, Jonah Ryan-Davis (from Google); and Dustin Morrill (DeepMind and the University of Alberta).

The StreetLearn Environment and Dataset

GitHub Link

This paper introduces a learning environment and dataset for training deep learning navigation agents. Built from Google’s Street View imagery, it lets agents learn navigation policies without every decision being directly coded in. The team defines two tasks that require the agent to navigate through one of two mapped cities (New York and Pittsburgh): in the first, the agent must get itself from point A to point B with no prior map information or knowledge of its own location; in the second, it must follow step-by-step instructions.
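
To give a sense of how agents interact with such an environment, here is an illustrative sketch of the agent-environment loop for the courier task. Note that `CourierEnv` below is a toy stand-in of our own invention, not StreetLearn's actual API; see the repo for the real interface and configuration options.

```python
# Illustrative only: the agent-environment loop a StreetLearn-style
# courier task follows. `CourierEnv` is a toy stand-in, NOT the real API.
import random

ACTIONS = ("move_forward", "turn_left", "turn_right")

class CourierEnv:
    """Toy courier task: reach the goal after enough forward moves."""

    def reset(self):
        self.distance = 5  # pretend number of panoramas to the goal
        return {"pano": None, "goal_latlng": (40.71, -74.00)}

    def step(self, action):
        if action == "move_forward":
            self.distance -= 1
        done = self.distance == 0
        reward = 1.0 if done else 0.0  # reward only on reaching the goal
        return {"pano": None}, reward, done

env = CourierEnv()
obs = env.reset()
done = False
while not done:
    obs, reward, done = env.step(random.choice(ACTIONS))
print("Episode finished with reward", reward)
```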

By: Piotr Mirowski, Andras Banki-Horvath, Keith Anderson, Denis Teplyashin, Mateusz Malinowski, Matthew Koichi Grimes, Karen Simonyan, Koray Kavukcuoglu, Andrew Zisserman, Raia Hadsell (all from DeepMind London), and Karl Moritz Hermann (from DeepMind Berlin).

Analysing Mathematical Reasoning Abilities of Neural Models

GitHub Link

In this paper, the team of David Saxton, Edward Grefenstette, Felix Hill, and Pushmeet Kohli presents a new challenge for evaluating, and eventually designing, neural architectures and similar systems. They developed a task suite of mathematics problems posed as sequential questions and answers in a free-form textual input/output format. They also perform a comprehensive analysis of models and find notable differences in their abilities to solve problems and generalize their knowledge.
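
The released data stores each problem as two consecutive text lines: a free-form question followed by its answer. Here is a small sketch for reading such a file; the filename below is an assumption, so substitute any module file from the released splits.

```python
# Load question/answer pairs from a released dataset file, which stores
# each problem as two consecutive lines: question, then answer.
def load_pairs(path):
    with open(path) as f:
        lines = [line.rstrip("\n") for line in f]
    # Pair up alternating question/answer lines.
    return list(zip(lines[0::2], lines[1::2]))

# Path is an assumption; pick any module file from the released
# train-easy / train-medium / train-hard splits.
for question, answer in load_pairs("train-easy/algebra__linear_1d.txt")[:3]:
    print("Q:", question)
    print("A:", answer)
```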

Emergent Coordination Through Competition

GitHub Link

This paper, by Siqi Liu, Guy Lever, Josh Merel, Saran Tunyasuvunakool, Nicolas Heess, and Thore Graepel, examines the cooperative behaviors that emerge among reinforcement learning agents. The study introduces a challenging competitive multi-agent soccer environment and shows that decentralized, population-based training leads to a progression of behavior: “from random, to simple ball chasing, and finally showing evidence of cooperation.” The authors also propose an evaluation scheme that can assess performance when there are no pre-defined evaluation tasks or real-life baselines.
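
The environment itself ships with DeepMind's dm_control locomotion suite. Assuming that package, a minimal random-action rollout might look like the following; the exact load signature can vary across versions.

```python
# A minimal sketch of the multi-agent soccer environment, assuming the
# version shipped in dm_control's locomotion suite (API may vary).
import numpy as np
from dm_control.locomotion import soccer as dm_soccer

# 2-vs-2 soccer; each player is controlled by its own action vector.
env = dm_soccer.load(team_size=2, time_limit=10.0)
action_specs = env.action_spec()  # one spec per player

timestep = env.reset()
while not timestep.last():
    # Sample one random action per player, within each player's bounds.
    actions = [
        np.random.uniform(spec.minimum, spec.maximum, spec.shape)
        for spec in action_specs
    ]
    timestep = env.step(actions)
```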

The Hanabi Challenge: A New Frontier for AI Research

GitHub Link

This paper reviews the history of games as AI testbeds, including the developments and successes in Go, Atari, and some poker games. It then introduces a new challenge built around the card game Hanabi, arguing that it is the next step for showcasing AI and game theory: the game’s cooperative play and the variety of possible teammates pose new challenges for AI researchers. To help push this development, the authors created the Hanabi Learning Environment: “an experimental framework for the research community to evaluate algorithmic advances, and assess the performance of current state-of-the-art techniques.”
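
Assuming the Python package from the released repo, a minimal random self-play loop might look like this; the environment name and observation keys follow the repo's rl_env wrapper, so treat them as assumptions if your version differs.

```python
# Minimal random self-play sketch, assuming the rl_env wrapper from the
# hanabi_learning_environment package (names may differ across versions).
import random

from hanabi_learning_environment import rl_env

env = rl_env.make("Hanabi-Full", num_players=2)
observations = env.reset()
done = False
while not done:
    # Act for whichever player holds the turn, choosing uniformly
    # among that player's legal moves.
    current = observations["current_player"]
    legal_moves = observations["player_observations"][current]["legal_moves"]
    observations, reward, done, _ = env.step(random.choice(legal_moves))
print("Final reward:", reward)
```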

[Related Article: Best Deep Reinforcement Learning Research of 2019 So Far]

By: Jakob N. Foerster (from the University of Oxford); Nolan Bard, Neil Burch, Marc Lanctot, Edward Hughes, Iain Dunning, Shibl Mourad, H. Francis Song, Michael Bowling (all from DeepMind); Vincent Dumoulin, Sarath Chandar, Subhodeep Moitra, Hugo Larochelle, Marc G. Bellemare (all from Google Brain); and Emilio Parisotto (from Carnegie Mellon University).

Moving Forward

Research, and specifically research at DeepMind, is what pushes us forward as an industry and as a society. Open source data sets, learning environments, and research all serve as building blocks to reach the next level of artificial intelligence. Where some industries thrive on beating their competition and hiding their secrets, the data science community wouldn’t exist without the open sharing of ideas.

Original post here.

Read more data science articles on OpenDataScience.com, including tutorials and guides from beginner to advanced levels! Subscribe to our weekly newsletter here and receive the latest news every Thursday.


Written by ODSC - Open Data Science

Our passion is bringing thousands of the best and brightest data scientists together under one roof for an incredible learning and networking experience.
