Split Python sequence (time series/array) into subsequences with overlap

It is worth noting that stride tricks can have unintended consequences when you work on the transformed array. It is efficient because it adjusts the memory strides without creating a copy of the original array, so if you update any values in the returned array, it changes the values in the original array, and vice versa.

import numpy as np

l = np.asarray([1, 2, 3, 4, 5, 6, 7, 8, 9])
_ = rolling_window(l, 3)  # rolling_window as defined in the answer below
print(_)
array([[1, 2, 3],
       [2, 3, 4],
       [3, 4, 5],
       [4, 5, 6],
       [5, 6, 7],
       [6, 7, 8],
       [7, 8, 9]])

_[0,1] = 1000  # writes through to l[1], which appears in both window 0 and window 1
print(_)
array([[   1, 1000,    3],
       [1000,    3,    4],
       [   3,    4,    5],
       [   4,    5,    6],
       [   5,    6,    7],
       [   6,    7,    8],
       [   7,    8,    9]])

# create a new DataFrame from the strided view
import pandas as pd
xx = pd.DataFrame(rolling_window(l, 3))
# the earlier updates are still visible
print(xx)
      0     1  2
0     1  1000  3
1  1000     3  4
2     3     4  5
3     4     5  6
4     5     6  7
5     6     7  8
6     7     8  9

# changing values in xx changes values in _ and l
xx.loc[0,1] = 100
print(_)
print(l)
[[  1 100   3]
 [100   3   4]
 [  3   4   5]
 [  4   5   6]
 [  5   6   7]
 [  6   7   8]
 [  7   8   9]]
[  1 100   3   4   5   6   7   8   9]

# make a DataFrame copy to avoid unintended side effects
new = xx.copy()
# changing values in new won't affect l, _, or xx

Any value changed in xx, _, or l shows up in the other variables because they all share the same underlying memory buffer.
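You can check for this aliasing explicitly with np.shares_memory. A quick sketch, using the rolling_window function from the answer below:

```python
import numpy as np

def rolling_window(a, window):
    # overlapping windows as a zero-copy strided view
    shape = a.shape[:-1] + (a.shape[-1] - window + 1, window)
    strides = a.strides + (a.strides[-1],)
    return np.lib.stride_tricks.as_strided(a, shape=shape, strides=strides)

l = np.asarray([1, 2, 3, 4, 5, 6, 7, 8, 9])
windows = rolling_window(l, 3)

# the view shares the original buffer...
print(np.shares_memory(l, windows))         # True
# ...while an explicit copy does not
print(np.shares_memory(l, windows.copy()))  # False
```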

See the NumPy docs for more detail: numpy.lib.stride_tricks.as_strided
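As an aside, newer NumPy versions (1.20+) ship a safer wrapper, numpy.lib.stride_tricks.sliding_window_view, which builds the same overlapping view but returns it non-writeable by default, so the accidental-write problem above can't bite. A sketch:

```python
import numpy as np

l = np.asarray([1, 2, 3, 4, 5, 6, 7, 8, 9])

# same overlapping view, but marked read-only by default (NumPy 1.20+)
windows = np.lib.stride_tricks.sliding_window_view(l, 3)
print(windows.shape)            # (7, 3)
print(windows.flags.writeable)  # False
```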


This is 34x faster than your fast version on my machine:

import numpy as np

def rolling_window(a, window):
    shape = a.shape[:-1] + (a.shape[-1] - window + 1, window)
    strides = a.strides + (a.strides[-1],)
    return np.lib.stride_tricks.as_strided(a, shape=shape, strides=strides)

>>> rolling_window(ts.values, 3)
array([[0, 1, 2],
       [1, 2, 3],
       [2, 3, 4],
       [3, 4, 5],
       [4, 5, 6],
       [5, 6, 7],
       [6, 7, 8],
       [7, 8, 9]])

Credit goes to Erik Rigtorp.


I'd like to note that PyTorch offers a single function for this problem, which is as memory-efficient as the current best solution when working with Torch tensors, but is much simpler and more general (e.g. it works across multiple dimensions):

# Import packages
import torch
import pandas as pd
# Create array and set window size
ts = pd.Series([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
window = 3
# Create subsequences, converting to and from a Tensor
ts_torch = torch.from_numpy(ts.values)   # convert to a torch Tensor
ss_torch = ts_torch.unfold(0, window, 1) # create subsequences in memory
ss_numpy = ss_torch.numpy()              # convert the Tensor back to numpy (obviously now needs more memory)
# Or just in a single line:
ss_numpy = torch.from_numpy(ts.values).unfold(0, window, 1).numpy()

The main point is the unfold function; see the PyTorch docs for a detailed explanation. Converting back to NumPy may not be required if you're happy to work directly with PyTorch tensors, in which case the solution is just as memory-efficient. In my use case, I found it easier to first create the subsequences (and do other preprocessing) using Torch tensors, then call .numpy() on those tensors as and when NumPy arrays were needed.
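unfold also takes a step argument, and since it operates on one dimension at a time it extends directly beyond 1-D. A sketch, with shapes per the PyTorch docs for Tensor.unfold(dimension, size, step):

```python
import torch

ts = torch.arange(10)
# windows of length 3 with hop size 2
print(ts.unfold(0, 3, 2))
# tensor([[0, 1, 2],
#         [2, 3, 4],
#         [4, 5, 6],
#         [6, 7, 8]])

# on a 2-D tensor, unfolding along dim 0 keeps the other dims and
# appends the window length as a new trailing dimension
m = torch.arange(12).reshape(4, 3)
print(m.unfold(0, 2, 1).shape)  # torch.Size([3, 3, 2])
```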