Jason Brownlee

Long Short-Term Memory (LSTM) recurrent neural networks are a powerful type of deep learning model suited to sequence prediction problems.

A possible concern when using LSTMs is whether the added complexity of the model is improving its skill, or is in fact resulting in lower skill than a simpler model would achieve.

In this post, you will discover simple experiments you can run to ensure you are getting the most out of LSTMs on your sequence prediction problem.

After reading this post, you will know:

  • How to test if your model is exploiting order dependence in your input data (see the sketch after this list).
  • How to test if your model is harnessing the internal memory of the LSTM.
  • How to test if your model is harnessing Backpropagation Through Time (BPTT) when fitting your model.
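
As a taste of the first experiment, below is a minimal sketch, assuming your input sequences are stored in a NumPy array of shape (samples, time steps, features). Shuffling the time steps within each sequence destroys any order dependence, so a model that scores just as well on the shuffled data is not exploiting order. The `shuffle_timesteps` helper is hypothetical, introduced here only for illustration.

```python
# A minimal sketch of the order-dependence test (assumed setup, not this
# post's exact code): shuffle the time steps within each input sequence,
# then train and evaluate the same model on both versions and compare skill.
import numpy as np

def shuffle_timesteps(X, seed=1):
    """Return a copy of X with the time steps of each sequence shuffled.

    X is assumed to have shape (samples, time steps, features).
    """
    rng = np.random.default_rng(seed)
    X_shuffled = X.copy()
    for sequence in X_shuffled:
        rng.shuffle(sequence)  # shuffles rows in place, i.e. the time-step axis
    return X_shuffled

# toy data: 2 samples, 4 time steps, 3 features
X = np.arange(24, dtype=float).reshape(2, 4, 3)
print(shuffle_timesteps(X))
```

If the skill of your model on the shuffled sequences matches its skill on the originals, the model is not exploiting order dependence, and a simpler model may serve you just as well.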

Let’s dive in.
