This repository contains implementations and illustrative code to accompany DeepMind publications. Alongside the papers that accompany research conducted at DeepMind, we release open-source environments, datasets, and code to enable the broader research community to engage with our work and build upon it, with the ultimate goal of accelerating scientific progress to benefit society. For example, you can build on our implementations of the Deep Q-Network or the Differentiable Neural Computer, or experiment in the same environments we use for our research, such as DeepMind Lab or StarCraft II.
If you enjoy building tools, environments, software libraries, and other infrastructure of the kind listed below, you can view open positions in related areas on our careers page.
For a full list of our publications, please see https://deepmind.com/research/publications/

This repository includes implementations and illustrative code for the following publications:
- Hierarchical Probabilistic U-Net (HPU-Net)
- Training Language GANs from Scratch, NeurIPS 2019
- Temporal Value Transport, Nature Communications 2019
- Continual Unsupervised Representation Learning (CURL), NeurIPS 2019
- Unsupervised Learning of Object Keypoints (Transporter), NeurIPS 2019
- BigBiGAN, NeurIPS 2019
- Deep Compressed Sensing, ICML 2019
- Side Effects Penalties
- PrediNet Architecture and Relations Game Datasets
- Unsupervised Adversarial Training, NeurIPS 2019
- Graph Matching Networks for Learning the Similarity of Graph Structured Objects, ICML 2019
- REGAL: Transfer Learning for Fast Optimization of Computation Graphs
This is not an official Google product.