Keras LSTM predicted timeseries squashed and shifted

I presume you are overfitting, since the dimensionality of your data is 1, and an LSTM with 25 units seems rather complex for such a low-dimensional dataset. Here's a list of things that I would try:

  • Decreasing the LSTM dimension.
  • Adding some form of regularization to combat overfitting. For example, dropout might be a good choice (see the sketch after this list).
  • Training for more epochs or changing the learning rate. The model might need more epochs or bigger updates to find the appropriate parameters.
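
For instance, here is a minimal sketch of the first two suggestions. The unit count, dropout rate, window length and learning rate below are illustrative starting points, not tuned values:

    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import LSTM, Dense
    from tensorflow.keras.optimizers import Adam

    seq_len = 30  # hypothetical window length; use your own

    model = Sequential([
        # Fewer units than 25, plus dropout on the inputs and the
        # recurrent connections as a simple form of regularization.
        LSTM(8, dropout=0.2, recurrent_dropout=0.2, input_shape=(seq_len, 1)),
        Dense(1),
    ])
    # A slightly larger learning rate gives bigger updates per step; tune it.
    model.compile(optimizer=Adam(learning_rate=3e-3), loss='mse')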

UPDATE. Let me summarize what we discussed in the comments section.

Just for clarification, the first plot doesn't show the predicted series for a validation set, but for the training set. Therefore, my first overfitting interpretation might be inaccurate. I think an appropriate question to ask would be: is it actually possible to predict the future price change from such a low-dimensional dataset? Machine learning algorithms aren't magical: they'll find patterns in the data only if they exist.

If the past price change alone is indeed not very informative about the future price change, then:

  • Your model will learn to predict the mean of the price changes (probably something around 0), since that's the value that produces the lowest loss in absence of informative features.
  • The predictions might appear to be slightly "shifted" because the price change at timestep t+1 is slightly correlated with the price change at timestep t (but still, predicting something close to 0 is the safest choice). That is indeed the only pattern that I, as a non-expert, would be able to observe (i.e. that the value at timestep t+1 is sometimes similar to the one at timestep t).

If values at timesteps t and t+1 happened to be more correlated in general, then I presume that the model would be more confident about this correlation and the amplitude of the prediction would be bigger.
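
A quick way to convince yourself of both points, with synthetic data standing in for your price changes (the 0.2 coefficient below is an arbitrary illustrative value): when the lag-1 correlation is weak, a constant prediction at the mean already beats naively repeating the last value, and the best linear predictor has a slope equal to that correlation, hence the squashed amplitude.

    import numpy as np

    rng = np.random.default_rng(0)
    # Synthetic "price changes": noise with a weak lag-1 correlation.
    noise = rng.normal(0.0, 1.0, 1001)
    changes = noise[1:] + 0.2 * noise[:-1]

    target = changes[1:]        # value at timestep t+1
    previous = changes[:-1]     # value at timestep t

    # Predicting the mean vs. naively repeating the last value:
    mse_mean = np.mean((target - target.mean()) ** 2)
    mse_lag = np.mean((target - previous) ** 2)
    print(f"MSE, constant mean prediction: {mse_mean:.2f}")
    print(f"MSE, repeat-last-value prediction: {mse_lag:.2f}")

    # The best linear prediction of t+1 from t has slope equal to the
    # lag-1 correlation, so a weak correlation means a squashed amplitude.
    print(f"lag-1 correlation: {np.corrcoef(previous, target)[0, 1]:.2f}")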


  1. Increase the number of epochs. You can use EarlyStopping to avoid overfitting (see the sketch after this list).
  2. How is your data scaled? Time series are very sensitive to outliers in the data. Try MinMaxScaler(feature_range=(0.1, 0.9)), for example; RobustScaler is also a good choice.
  3. I'm not sure that LSTM(seq_len) is really necessary unless you have a lot of data. Why not try a smaller dimension?
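
Here is a sketch of points 1 and 2 together. The series is random noise standing in for your data, and seq_len, the patience value and the layer size are arbitrary illustrative choices:

    import numpy as np
    from sklearn.preprocessing import MinMaxScaler
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import LSTM, Dense
    from tensorflow.keras.callbacks import EarlyStopping

    # Stand-in for your 1-D series of price changes.
    series = np.random.default_rng(0).normal(size=1000).astype('float32')

    # Point 2: scale into (0.1, 0.9) so a few outliers don't squash
    # the bulk of the values into a tiny corner of the range.
    scaler = MinMaxScaler(feature_range=(0.1, 0.9))
    scaled = scaler.fit_transform(series.reshape(-1, 1)).ravel()

    # Window the series into (samples, seq_len, 1) inputs with
    # next-step targets.
    seq_len = 30
    X = np.stack([scaled[i:i + seq_len] for i in range(len(scaled) - seq_len)])[..., None]
    y = scaled[seq_len:]

    model = Sequential([LSTM(8, input_shape=(seq_len, 1)), Dense(1)])
    model.compile(optimizer='adam', loss='mse')

    # Point 1: allow many epochs, but stop once validation loss stalls
    # and roll back to the best weights seen so far.
    early_stop = EarlyStopping(monitor='val_loss', patience=10,
                               restore_best_weights=True)
    model.fit(X, y, epochs=500, validation_split=0.1,
              callbacks=[early_stop], verbose=0)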

Try all of this, and first try to overfit (the MSE should be around zero on the real dataset). Then apply regularization.

UPDATE

Let me explain why

plot(pred*12-0.03)

gave you a good fit.

OK, let's consider the LSTM layer as a black box and forget about it. It returns 25 values, that's all. This vector goes forward to the Dense layer, where we apply the following function to the vector of 25 values:

y = w * x + b

Here w is a weight vector and b a bias, both learned by the NN and usually near zero at the beginning of training. x is your vector of values after the LSTM layer, and y is the target (a single value).

After just 1 epoch, w and b are not fitted to your data at all (they are actually still around zero). But what happens if you apply

plot(pred*12-0.03)

to your predicted values? You are, in a way, applying your own w and b to the prediction. Now w and b are single numbers (12 and -0.03) rather than learned parameters, and they act on a single value, but they do (almost) the same work as the Dense layer.
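
Here is a sketch of that idea with synthetic numbers: y_true stands for your targets and pred for the nearly-untrained model's output, constructed so that the "right" constants are your 12 and -0.03.

    import numpy as np

    rng = np.random.default_rng(1)
    # `y_true` stands for your targets; `pred` for the nearly-untrained
    # model's output: squashed to ~1/12 of the amplitude, slightly
    # offset, plus a little noise.
    y_true = rng.normal(0.0, 1.0, 500)
    pred = (y_true + 0.03) / 12 + rng.normal(0.0, 0.01, 500)

    # Fitting one weight w and one bias b by least squares is the scalar
    # version of what the Dense layer learns during training.
    w, b = np.polyfit(pred, y_true, deg=1)
    print(f"w ≈ {w:.1f}, b ≈ {b:.3f}")   # approximately 12 and -0.03
    rescaled = w * pred + b              # same effect as pred*12 - 0.03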

So, increase the number of epochs to get a better fit.

UPDATE 2. By the way, I see some outliers in the data. You can also try using MAE as the loss/metric, since it penalizes outliers less heavily than MSE.
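
For example, assuming model is the compiled Keras model from your question:

    # MAE penalizes outliers linearly instead of quadratically,
    # so a few extreme points dominate the loss less than with MSE.
    model.compile(optimizer='adam', loss='mae', metrics=['mse'])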