Best Releases and Papers from OpenAI in 2019 So Far

ODSC - Open Data Science
4 min read · Oct 8, 2019


OpenAI is one of the leaders in research on artificial general intelligence, and as such, they’ve put out some pretty impressive content this year. Here are our picks of the 9 best releases and papers from OpenAI in 2019 so far.

[Related Article: What Are a Few AI Research Labs on the West Coast?]

The Role of Cooperation in Responsible AI Development

This paper, by Amanda Askell, Miles Brundage, and Gillian Hadfield, discusses the importance of incentivizing companies to build safe AI rather than letting profit pressures push them toward unsafe products. They examine cooperation and the factors and solutions that could foster better cooperation across the industry toward responsible AI development.

SGD on Neural Networks Learns Functions of Increasing Complexity

In this paper, Preetum Nakkiran, Gal Kaplun, Dimitris Kalimeris, Tristan Yang, Benjamin L. Edelman, Fred Zhang, and Boaz Barak (all from Harvard University) look at the dynamics of Stochastic Gradient Descent (SGD) in learning deep neural networks. With real and synthetic tasks, they propose and support the hypothesis that, “as iterations progress, SGD learns functions of increasing complexity.”
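
To make the claim concrete, here is a minimal sketch (not the paper’s methodology, which uses mutual-information measurements) that tracks how closely an SGD-trained network agrees with a simple linear classifier as training progresses. The dataset, network size, and learning rate below are illustrative choices, not values from the paper.

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

# Toy data and a reference linear classifier fit once on the full dataset.
X, y = make_moons(n_samples=2000, noise=0.2, random_state=0)
linear = LogisticRegression().fit(X, y)
linear_pred = linear.predict(X)

# Small network trained one SGD pass at a time via partial_fit.
net = MLPClassifier(hidden_layer_sizes=(64,), solver="sgd",
                    learning_rate_init=0.1, random_state=0)

for epoch in range(1, 51):
    net.partial_fit(X, y, classes=[0, 1])
    net_pred = net.predict(X)
    agreement = np.mean(net_pred == linear_pred)  # similarity to the linear model
    accuracy = np.mean(net_pred == y)             # accuracy on the task itself
    if epoch % 10 == 0:
        print(f"epoch {epoch:3d}  accuracy={accuracy:.3f}  "
              f"agreement_with_linear={agreement:.3f}")

# The paper's hypothesis predicts high agreement with the linear model early in
# training, with accuracy continuing to improve as the network picks up
# non-linear structure later on.
```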

Transfer of Adversarial Robustness Between Perturbation Types

This paper, by Daniel Kang, Yi Sun, Tom Brown, Dan Hendrycks, and Jacob Steinhardt, looks at the adversarial robustness of deep neural networks across multiple perturbation types. Their research shows that you have to look at a wide range of perturbation sizes to understand whether adversarial robustness transfers, and further shows that robustness against one perturbation type doesn’t necessarily carry over to other types.
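
As a rough illustration of what “different perturbation types” means, the sketch below attacks a toy logistic-regression model with an L-infinity (sign-of-gradient) perturbation and an L2 (normalized-gradient) perturbation. The model, data, and budgets are made up for illustration; the paper evaluates deep networks across sweeps of perturbation sizes.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 20
w = rng.normal(size=d)                    # fixed "model" weights
X = rng.normal(size=(n, d))
y = (X @ w > 0).astype(float)             # labels from the same linear rule

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def input_gradient(X, y, w):
    # Gradient of the logistic loss with respect to the inputs.
    return (sigmoid(X @ w) - y)[:, None] * w[None, :]

def accuracy(X, y, w):
    return np.mean((X @ w > 0).astype(float) == y)

grad = input_gradient(X, y, w)
eps_inf, eps_l2 = 0.05, 0.5               # illustrative budgets, not from the paper

X_linf = X + eps_inf * np.sign(grad)      # L-infinity attack
X_l2 = X + eps_l2 * grad / (np.linalg.norm(grad, axis=1, keepdims=True) + 1e-12)  # L2 attack

print("clean accuracy:              ", accuracy(X, y, w))
print("accuracy under L-inf attack: ", accuracy(X_linf, y, w))
print("accuracy under L2 attack:    ", accuracy(X_l2, y, w))
```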

MuseNet

OpenAI created a deep neural network that can generate 4-minute musical pieces with up to 10 different instruments, combining styles across genres. You can interact and compose with it, or listen to pre-generated samples.

Generating Long Sequences with Sparse Transformers

This paper, by Rewon Child, Scott Gray, Alec Radford, and Ilya Sutskever, introduces “sparse factorizations of the attention matrix which reduce this [the quadratic cost of full attention] to O(n√n),” as well as “a) a variation on architecture and initialization to train deeper networks, b) the recomputation of attention matrices to save memory, and c) fast attention kernels for training.” With this, the team generates unconditional samples that show diversity and global coherence, and demonstrates that it’s possible to use self-attention to model long sequences.
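
As a back-of-the-envelope sketch of the idea, the snippet below builds a strided sparse-attention mask in which each position attends to a local window plus every stride-th earlier position, so each query touches on the order of √n keys instead of n. This is a simplification of the factorizations described in the paper, with an arbitrary sequence length.

```python
import numpy as np

n = 64                                    # toy sequence length
stride = int(np.sqrt(n))                  # stride on the order of sqrt(n)
mask = np.zeros((n, n), dtype=bool)       # mask[i, j]: query i may attend to key j

for i in range(n):
    # Local causal window: the previous `stride` positions, including i itself.
    mask[i, max(0, i - stride + 1): i + 1] = True
    # Strided connections: every stride-th earlier position.
    js = np.arange(0, i + 1)
    mask[i, js[js % stride == stride - 1]] = True

print("average keys attended per query:", mask.sum(axis=1).mean())
print("keys per query under dense attention:", n)
```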

Implicit Generation and Generalization in Energy-Based Models

The team of Yilun Du (MIT CSAIL) and Igor Mordatch (OpenAI) tackle the difficulty of training energy-based models (EBMs). The paper showcases ways to scale MCMC-based EBM training on neural networks. They also show “its success on the high-dimensional data domains of ImageNet32x32, ImageNet128x128, CIFAR-10, and robotic hand trajectories, achieving better samples than other likelihood models and nearing the performance of contemporary GAN approaches, while covering all modes of the data.” All of this supports their argument that EBMs are useful and successful across many different tasks.
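
The MCMC workhorse behind this style of EBM training is Langevin dynamics, which repeatedly nudges samples downhill on the energy surface while injecting Gaussian noise. Here is a toy sketch that samples from a hand-written 2-D quadratic energy standing in for a learned neural-network energy; the step size and step count are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(0)
CENTER = np.array([2.0, -1.0])

def energy(x):
    # Stand-in energy: a quadratic bowl centered at CENTER (a neural net in the paper).
    return 0.5 * np.sum((x - CENTER) ** 2, axis=-1)

def energy_grad(x):
    return x - CENTER

def langevin_sample(x0, steps=200, step_size=0.05):
    # Langevin dynamics: gradient descent on the energy plus injected noise.
    x = x0.copy()
    for _ in range(steps):
        noise = rng.normal(size=x.shape)
        x = x - step_size * energy_grad(x) + np.sqrt(2 * step_size) * noise
    return x

x0 = rng.normal(size=(1000, 2))           # start from noise, as in EBM sampling
samples = langevin_sample(x0)
print("mean energy before:", float(energy(x0).mean()),
      "after:", float(energy(samples).mean()))
print("sample mean (drifts toward CENTER):", samples.mean(axis=0))
```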

Neural MMO: A Massively Multiagent Game Environment for Training and Evaluating Intelligent Agents

This paper by Joseph Suarez, Yilun Du, Phillip Isola, and Igor Mordatch uses a gaming system to study how the complexity of life on Earth might have emerged. The team turned a massively multiplayer online role-playing game (MMORPG) into an AI research environment with the right characteristics to foster such a microcosm. Their results show how groups split apart in different ways to avoid competition: “population size magnifies and incentivizes the development of skillful behaviors and results in agents that outcompete agents trained in smaller populations,” and “the policies of agents with unshared weights naturally diverge to fill different niches in order to avoid competition.”

Better Language Models and Their Implications

OpenAI has trained a large-scale unsupervised language model, GPT-2. It can generate paragraphs of coherent text, achieves state-of-the-art results on many language modeling benchmarks, and “performs rudimentary reading comprehension, machine translation, question answering, and summarization” without any task-specific training.
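
One quick way to sample from one of the publicly released (smaller) GPT-2 checkpoints today is through the third-party Hugging Face transformers library; this is not OpenAI’s own release code, and the prompt and sampling settings below are only illustrative.

```python
from transformers import pipeline

# "gpt2" refers to the small released checkpoint hosted on the Hugging Face hub.
generator = pipeline("text-generation", model="gpt2")
samples = generator("OpenAI's research in 2019 focused on",
                    max_length=60, do_sample=True, top_k=40,
                    num_return_sequences=2)
for s in samples:
    print(s["generated_text"])
    print("---")
```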

Computational Limitations in Robust Classification and Win-Win Results

This paper by Akshay Degwekar, Preetum Nakkiran, and Vinod Vaikuntanathan studies the statistical and computational tradeoffs in learning robust classifiers through three main results. They exhibit classification tasks where computationally efficient robust classification is impossible, construct “hard-to-robustly-learn classification tasks in the large-perturbation regime,” and show that “any such counterexample implies the existence of cryptographic primitives such as one-way functions.” These results produce a win-win: either one can learn an efficient robust classifier, or one can construct new instances of cryptographic primitives.

Conclusion

The field of AI is ever-expanding, and it’s largely because of the research done by companies like OpenAI. They push AI forward into new, interesting, and most importantly, safe directions. Those were the best releases and papers from OpenAI in 2019 (so far, of course), but remember, if you’re interested in this company, see Pieter Abbeel, advisor to OpenAI, give his talk, “Tutorial on Deep Reinforcement Learning” at ODSC West 2019.

[Related Article: IBM Research Launches Explainable AI Toolkit]

Original post here.

Read more data science articles on OpenDataScience.com, including tutorials and guides from beginner to advanced levels! Subscribe to our weekly newsletter here and receive the latest news every Thursday.
