Tom Young (1), Devamanyu Hazarika (2), Soujanya Poria (3), Erik Cambria (4) ((1) School of Information and Electronics, Beijing Institute of Technology, China, (2) School of Computing, National University of Singapore, Singapore, (3) Temasek Laboratories, Nanyang Technological University, Singapore, (4) School of Computer Science and Engineering, Nanyang Technological University, Singapore)

Deep learning methods employ multiple processing layers to learn hierarchical representations of data and have produced state-of-the-art results in many domains. Recently, a variety of model designs and methods have blossomed in the context of natural language processing (NLP). In this paper, we review significant deep learning-related models and methods that have been employed for numerous NLP tasks and provide a walk-through of their evolution. We also summarize, compare, and contrast the various models, and offer a detailed understanding of the past, present, and future of deep learning in NLP.