# Time series distance metric

Nice question! Using any standard distance on R^n (Euclidean, Manhattan, or more generally Minkowski) over those time series cannot achieve the result you want, since those metrics are invariant under permutations of the coordinates of R^n, while time is strictly ordered and that ordering is exactly the phenomenon you want to capture.

A simple trick that does what you ask is to use the **cumulated version of the time series** (sum the values over time as time increases) and then apply a standard metric. *Using the Manhattan metric*, the distance between two time series becomes the *area between their cumulated versions*.
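As a minimal sketch of this trick (the function name `cumulated_manhattan` is mine, not from any library): cumulate each series with a running sum, then take the Manhattan distance between the cumulated versions.

```python
import numpy as np

def cumulated_manhattan(x, y):
    """Manhattan distance between the cumulated versions of two
    equal-length series: the area between their cumulative sums."""
    return float(np.abs(np.cumsum(x) - np.cumsum(y)).sum())

a = np.array([1.0, 2.0, 3.0])
b = np.array([3.0, 2.0, 1.0])
# cumsums are [1, 3, 6] and [3, 5, 6], so the distance is |1-3| + |3-5| + |6-6| = 4
print(cumulated_manhattan(a, b))  # 4.0
```

Note that a plain Manhattan distance between `a` and `b` would also be 4 here, but the two measures diverge as soon as the series differ in *when* their values occur, which is the point of the trick.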

Another approach is DTW (dynamic time warping), an algorithm for computing the similarity between two temporal sequences. Full disclosure: I wrote a Python package for this purpose called `trendypy`, which you can install via pip (`pip install trendypy`). Here is a demo of how to use the package: you are basically computing the total minimum distance over the different combinations to set the cluster centers.