Supporting material for my talk, "Machines that learn through action," from YOW! 2017
- Karl Friston explains the Free-energy principle (video)
- Consciousness is not a thing, but a process (article)
- Software 2.0 (blog post)
- Geoff Hinton and backpropagation (article)
- Yann LeCun & Gary Marcus debate innate machinery for AI (video)
- The impossibility of intelligence explosion (article)
- Berkeley AI Research Lab (BAIR) blog
The following papers are also referenced in the talk.
Bansal, Trapit, et al. "Emergent complexity via multi-agent competition." arXiv preprint arXiv:1710.03748 (2017).
Brodeur, Simon, et al. "HoME: a Household Multimodal Environment." arXiv preprint arXiv:1711.11017 (2017).
Das, Abhishek, et al. "Embodied Question Answering." arXiv preprint arXiv:1711.11543 (2017).
Finn, Chelsea, Sergey Levine, and Pieter Abbeel. "Guided cost learning: Deep inverse optimal control via policy optimization." International Conference on Machine Learning. 2016.
Finn, Chelsea, Ian Goodfellow, and Sergey Levine. "Unsupervised learning for physical interaction through video prediction." Advances in Neural Information Processing Systems. 2016.
Finn, Chelsea, and Sergey Levine. "Deep visual foresight for planning robot motion." Robotics and Automation (ICRA), 2017 IEEE International Conference on. IEEE, 2017.
Finn, Chelsea, Pieter Abbeel, and Sergey Levine. "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks." arXiv preprint arXiv:1703.03400 (2017).
Frans, Kevin, et al. "Meta Learning Shared Hierarchies." arXiv preprint arXiv:1710.09767 (2017).
Friston, Karl, James Kilner, and Lee Harrison. "A free energy principle for the brain." Journal of Physiology-Paris 100.1 (2006): 70-87.
Friston, Karl. "The free-energy principle: a unified brain theory?." Nature Reviews Neuroscience 11.2 (2010): 127-138.
Friston, Karl J., Jean Daunizeau, and Stefan J. Kiebel. "Reinforcement learning or active inference?." PLoS ONE 4.7 (2009): e6421.
Friston, Karl. "What is optimal about motor control?." Neuron 72.3 (2011): 488-498.
Friston, Karl, et al. "Active inference and learning." Neuroscience & Biobehavioral Reviews 68 (2016): 862-879.
Karl, Maximilian, et al. "Unsupervised Real-Time Control through Variational Empowerment." arXiv preprint arXiv:1710.05101 (2017).
Krishnan, Sanjay, et al. "HIRL: Hierarchical inverse reinforcement learning for long-horizon tasks with delayed rewards." arXiv preprint arXiv:1604.06508 (2016).
Krishnan, Sanjay, et al. "DDCO: Discovery of Deep Continuous Options for Robot Learning from Demonstrations." Conference on Robot Learning. 2017.
Krishnan, Sanjay, et al. "Transition state clustering: Unsupervised surgical trajectory segmentation for robot learning." Robotics Research. Springer, Cham, 2018. 91-110.
Laskey, Michael, et al. "DART: Noise injection for robust imitation learning." Conference on Robot Learning. 2017.
Mnih, Volodymyr, et al. "Human-level control through deep reinforcement learning." Nature 518.7540 (2015): 529-533.
Murali, Adithyavairavan, et al. "Learning by observation for surgical subtasks: Multilateral cutting of 3D viscoelastic and 2D orthotropic tissue phantoms." Robotics and Automation (ICRA), 2015 IEEE International Conference on. IEEE, 2015.
Rao, Rajesh P. N., and Dana H. Ballard. "Predictive coding in the visual cortex: a functional interpretation of some extra-classical receptive-field effects." Nature Neuroscience 2.1 (1999): 79-87.
Schulman, John, et al. "High-dimensional continuous control using generalized advantage estimation." arXiv preprint arXiv:1506.02438 (2015).
Sutton, Richard S., and Andrew G. Barto. Reinforcement Learning: An Introduction. Cambridge: MIT Press, 1998.
Vezhnevets, Alexander Sasha, et al. "Feudal networks for hierarchical reinforcement learning." arXiv preprint arXiv:1703.01161 (2017).
Xu, Danfei, et al. "Neural Task Programming: Learning to Generalize Across Hierarchical Tasks." arXiv preprint arXiv:1710.01813 (2017).