Every time DeepMind publishes a new paper, it attracts frenzied media coverage, and the headlines are often misleading. For example, Futurism covered DeepMind's new paper on relational reasoning networks with the headline
DeepMind Develops a Neural Network That Can Make Sense of Objects Around It.
This is not only misleading; it also intimidates the everyday reader who doesn't have a PhD. In this post I will walk through the paper and attempt to explain this new architecture in simple terms.
You can find the original paper here.
This article assumes some basic knowledge about neural networks.