How to vectorize a loop through pandas series when values are used in slice of another series

Basic Idea

With the usual approach, pandas spends time locating the positions of s and e in the index for every data_series.loc[s:e] lookup, where s and e are datetime values. That search is costly inside a loop, and that's exactly where we can improve: find all those positions in one go, in a vectorized manner, with np.searchsorted. Then extract the values of data_series as a plain array and slice it with the integer positions obtained from searchsorted. The loop that remains does minimal work: simple slicing off an array.
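
For a quick feel of what searchsorted buys us, here's a toy run (the timestamps are made up purely for illustration):

import numpy as np
import pandas as pd

idx = pd.date_range('20190412', freq='T', periods=10).values  # minute grid
s = np.array(['2019-04-12T00:03'], dtype='datetime64[ns]')
np.searchsorted(idx, s)  # -> array([3]), the integer position where a slice starting at s begins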

General mantra: do most of the work with vectorized pre-processing and keep the work inside the loop minimal.

The implementation would look something like this -

import numpy as np

def select_slices_by_index(data_series, start, end):
    idx = data_series.index.values
    # Positions of the first index value >= each start
    S = np.searchsorted(idx, start.values)
    # One past the positions of the last index value <= each end,
    # matching the inclusive end-point of .loc[s:e]
    E = np.searchsorted(idx, end.values, side='right')
    ar = data_series.values
    return [ar[i:j] for (i, j) in zip(S, E)]
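
As a quick sanity check, the output should match the plain .loc loop element-wise (using the same data_series, start and end names as in the setup below):

out = select_slices_by_index(data_series, start, end)
expected = [data_series.loc[s:e].values for s, e in zip(start, end)]
assert all(np.array_equal(a, b) for a, b in zip(out, expected))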

Use NumPy striding

For the specific case when the time period between each start and its end is the same for all entries, and every slice of that length stays within bounds, i.e. no out-of-bounds cases, we can use NumPy's sliding-window trick.

We can leverage scikit-image's view_as_windows, which is built on np.lib.stride_tricks.as_strided, to get those sliding windows.

from skimage.util.shape import view_as_windows
import numpy as np

def select_slices_by_index_strided(data_series, start, end):
    idx = data_series.index.values
    # Common window length, taken from the first (start, end) pair;
    # assumes all starts and ends lie on the index grid
    L = np.searchsorted(idx, end.values[0]) - np.searchsorted(idx, start.values[0]) + 1
    # Positions of the first index value >= each start
    S = np.searchsorted(idx, start.values)
    ar = data_series.values
    # 2D view of all length-L windows; row k is ar[k:k+L]
    w = view_as_windows(ar, L)
    return w[S]
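
For intuition, view_as_windows exposes every length-L window of an array as a row of a (len(ar)-L+1, L) view without copying any data, so w[S] just picks the rows starting at each start position. A toy run:

from skimage.util.shape import view_as_windows
import numpy as np

view_as_windows(np.arange(6), 3)
# array([[0, 1, 2],
#        [1, 2, 3],
#        [2, 3, 4],
#        [3, 4, 5]])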

If you don't have access to scikit-image, the same windowed view can be built with plain NumPy. Here's a minimal sketch (the helper name view_as_windows_1d is just illustrative; on NumPy 1.20+, np.lib.stride_tricks.sliding_window_view(ar, L) does the same job):
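
import numpy as np

def view_as_windows_1d(ar, L):
    # Read-only (len(ar)-L+1, L) view whose row k is ar[k:k+L];
    # a 1D stand-in for skimage's view_as_windows(ar, L)
    s = ar.strides[0]
    return np.lib.stride_tricks.as_strided(
        ar, shape=(len(ar) - L + 1, L), strides=(s, s), writeable=False)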


Benchmarking

Let's scale everything up by 100x on the given sample data and test it out.

Setup -

import numpy as np
import pandas as pd

np.random.seed(0)
start = pd.Series(pd.date_range('20190412', freq='H', periods=2500))

# Drop a few indices to make the series non-sequential
start = start.drop([4, 5, 10, 14]).reset_index(drop=True)

# Add some random minutes to the starts as they are not necessarily quantized
start = start + pd.to_timedelta(np.random.randint(59, size=len(start)), unit='T')

end = start + pd.Timedelta('5H')
data_series = pd.Series(data=np.random.randint(20, size=(750*600)),
                        index=pd.date_range('20190411', freq='T', periods=(750*600)))

Timings -

In [156]: %%timeit
     ...: frm = []
     ...: for s,e in zip(start,end):
     ...:     frm.append(data_series.loc[s:e].values)
1 loop, best of 3: 172 ms per loop

In [157]: %timeit select_slices_by_index(data_series, start, end)
1000 loops, best of 3: 1.23 ms per loop

In [158]: %timeit select_slices_by_index_strided(data_series, start, end)
1000 loops, best of 3: 994 µs per loop

In [161]: frm = []
     ...: for s,e in zip(start,end):
     ...:     frm.append(data_series.loc[s:e].values)

In [162]: np.allclose(select_slices_by_index(data_series, start, end),frm)
Out[162]: True

In [163]: np.allclose(select_slices_by_index_strided(data_series, start, end),frm)
Out[163]: True

140x+ and 170x+ speedups with these!