# Dynamic Time Warping K-Nearest Neighbors

\(k\)-nearest neighbors (\(k\)-NN) is an algorithm for identifying the observations most similar to a given example.

By combining \(k\)-NN with dynamic time warping as a distance measure, \(k\)-NN can be used as an effective method of sequence classification and regression.

## Recap of general \(k\)-NN

Suppose we have a dataset \(\mathcal{D}_\text{train}=\big\{\mathbf{x}^{(n)}\big\}_{n=1}^N\) with \(N\) training examples, such that \(\mathbf{x}^{(n)}\in\mathbb{R}^D\).

When querying against a new example \(\mathbf{x}'\in\mathbb{R}^D\), the \(k\)-NN algorithm does the following:

- Computes the distance from \(\mathbf{x}'\) to every training example in \(\mathcal{D}_\text{train}\), using a distance measure such as Euclidean distance \(d(\mathbf{x}',\mathbf{x}^{(n)})=\Vert\mathbf{x}'-\mathbf{x}^{(n)}\Vert\).
- Obtains the \(k\)-neighborhood \(\mathcal{K}'\), the set of the \(k\) training examples nearest to \(\mathbf{x}'\).

Using the intuition that examples close to \(\mathbf{x}'\) in the feature space are likely to share its properties, we can use the \(k\)-neighborhood for inference (classification or regression) on \(\mathbf{x}'\).
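The steps above can be sketched as follows. This is a minimal NumPy illustration of \(k\)-NN classification with Euclidean distance and a majority vote, not Sequentia's implementation; the function name `knn_classify` is hypothetical.

```python
import numpy as np

def knn_classify(x_query, X_train, y_train, k=3):
    """Classify x_query by majority vote over its k nearest training
    examples under Euclidean distance (illustrative sketch only)."""
    # Distance from the query to every training example
    dists = np.linalg.norm(X_train - x_query, axis=1)
    # Indices of the k nearest training examples (the k-neighborhood)
    neighborhood = np.argsort(dists)[:k]
    # Majority vote over the neighbors' labels
    labels, counts = np.unique(y_train[neighborhood], return_counts=True)
    return labels[np.argmax(counts)]
```

For regression, the majority vote would simply be replaced by an average (possibly distance-weighted) of the neighbors' target values.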

## Extending \(k\)-NN to sequences

To apply \(k\)-NN to sequences, we use a distance measure that quantifies similarity between sequences.

Sequentia supports multivariate sequences for \(k\)-NN, which can be represented as an ordered sequence of observations \(O=\mathbf{o}^{(1)},\ldots,\mathbf{o}^{(T)}\), such that \(\mathbf{o}^{(t)}\in\mathbb{R}^D\). Note that the lengths of any two observation sequences \(O^{(n)}\) and \(O^{(m)}\) may differ.
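Concretely, such sequences can be stored as arrays of shape \((T, D)\), where only \(D\) must agree across sequences. A small sketch with hypothetical data:

```python
import numpy as np

# Two multivariate sequences with D = 2 features per observation,
# but differing lengths (T = 4 and T = 3) — hypothetical data
O_n = np.array([[0.0, 0.1], [0.2, 0.3], [0.4, 0.5], [0.6, 0.7]])  # shape (4, 2)
O_m = np.array([[0.0, 0.0], [0.5, 0.5], [1.0, 1.0]])              # shape (3, 2)
```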

To compare sequences \(O^{(n)}\) and \(O^{(m)}\) we can use the **dynamic time warping** (DTW) distance measure, which is computed with a dynamic programming algorithm that finds the optimal alignment between the two sequences, and generalizes naturally to multivariate observations.
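The dynamic program can be sketched as follows: an accumulated-cost matrix is filled in, where each cell extends the cheapest of the three possible alignment moves. This is an illustrative NumPy implementation with Euclidean distance between observations, not Sequentia's internal one; the function name `dtw_distance` is hypothetical.

```python
import numpy as np

def dtw_distance(O_n, O_m):
    """DTW distance between two multivariate sequences of shape (T_n, D)
    and (T_m, D), using Euclidean distance between observations."""
    T_n, T_m = len(O_n), len(O_m)
    # Pairwise Euclidean distances between individual observations
    cost = np.linalg.norm(O_n[:, None, :] - O_m[None, :, :], axis=-1)
    # Accumulated cost matrix, padded with an extra row/column of infinity
    D = np.full((T_n + 1, T_m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, T_n + 1):
        for j in range(1, T_m + 1):
            D[i, j] = cost[i - 1, j - 1] + min(
                D[i - 1, j],      # advance in O_n only
                D[i, j - 1],      # advance in O_m only
                D[i - 1, j - 1],  # advance in both (diagonal match)
            )
    return D[T_n, T_m]
```

Because the alignment may advance through the two sequences at different rates, the inputs may have different lengths, which is exactly what makes DTW suitable as the distance measure for sequence \(k\)-NN.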