If you have been reading my blog, you may have seen that I was a TensorFlow contributor and built a lot of high-level APIs there.
In February 2017, though, I left Google and co-founded my own company, NEAR.ai, where we are teaching machines to write code from natural language.
As part of this work, we are building deep learning models that read and write code in a tree format. After trying to manage this complexity in TensorFlow, I decided to give PyTorch a try.
PyTorch is a framework built by Facebook AI researchers, and it has been growing in popularity in the Natural Language Processing and Reinforcement Learning research communities. Its main benefit is its dynamic graph-building principle: compared to TensorFlow, where the graph is built once and then "executed" many times, PyTorch lets you rebuild the graph dynamically with plain Python logic, as if you were doing computation with numpy arrays.
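To illustrate the dynamic-graph idea, here is a minimal sketch (the `compute` function and its branching logic are made up for the example): the graph is rebuilt on every call, so ordinary Python control flow can depend on runtime tensor values, and gradients flow through whichever branch actually executed.

```python
import torch

def compute(x):
    # Each call traces a fresh graph; no separate "build then run" phase.
    h = x * 2
    # Ordinary Python branching on a runtime value -- something that would
    # require tf.cond in graph-mode TensorFlow.
    if h.sum().item() > 0:
        h = h * 3
    else:
        h = -h
    return h

x = torch.ones(3, requires_grad=True)
y = compute(x)          # takes the positive branch: y = 6 * x
y.sum().backward()      # backprop through the branch that actually ran
print(x.grad)           # tensor([6., 6., 6.])
```

Because the graph only ever reflects the code path that ran, debugging with a standard Python debugger and printing intermediate tensors works the same way it does with numpy.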