What does KFold in python exactly do?

KFold provides train/test indices to split data into train and test sets. It splits the dataset into k consecutive folds (without shuffling by default). Each fold is then used as a validation set once, while the k-1 remaining folds form the training set (from the scikit-learn documentation).

Say you have data indices from 0 to 9. With n_folds=k, in the ith iteration (i <= k) you get the ith fold as the test indices and the remaining k-1 folds together as the train indices.

An example

from sklearn.cross_validation import KFold  # pre-0.18 location; see the update below

x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]
# 12 samples, 3 consecutive folds of 4 indices each
kf = KFold(12, n_folds=3)

for train_index, test_index in kf:
    print(train_index, test_index)

Output

Fold 1: [ 4  5  6  7  8  9 10 11] [0 1 2 3]
Fold 2: [ 0  1  2  3  8  9 10 11] [4 5 6 7]
Fold 3: [0 1 2 3 4 5 6 7] [ 8  9 10 11]

Import update for sklearn 0.20+:

KFold was moved to the sklearn.model_selection module (the old sklearn.cross_validation module was deprecated in version 0.18 and removed in 0.20). To import KFold in sklearn 0.20+, use from sklearn.model_selection import KFold (see the current KFold documentation).
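
The API changed as well: n_folds became n_splits, the number of samples is no longer passed to the constructor, and you iterate over kf.split(x) instead of over the KFold object itself. The example above, rewritten for the current API:

import numpy as np
from sklearn.model_selection import KFold

x = np.arange(12)  # any array-like with 12 samples works
kf = KFold(n_splits=3)  # add shuffle=True, random_state=... to randomize the folds

for train_index, test_index in kf.split(x):
    print(train_index, test_index)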


Sharing some theoretical background on k-fold cross-validation (KF) that I have learnt so far.

K-fold is a model validation technique: it does not validate your pre-trained model. Rather, it takes the same hyper-parameters and trains a new model on k-1 folds of the data, then tests that model on the kth fold.

These k different models are used only for validation.

It returns k different scores (accuracy percentages), one for each kth test fold, and we generally take their average to evaluate the model.
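
Spelled out, one such validation run looks roughly like this; a minimal sketch assuming a logistic regression model and the built-in iris dataset as placeholders:

import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold

X, y = load_iris(return_X_y=True)
kf = KFold(n_splits=5, shuffle=True, random_state=0)

scores = []
for train_index, test_index in kf.split(X):
    # a fresh model each fold, with the same hyper-parameters
    model = LogisticRegression(max_iter=1000)
    model.fit(X[train_index], y[train_index])                 # fit on k-1 folds
    scores.append(model.score(X[test_index], y[test_index]))  # accuracy on the kth fold

print(scores)           # k scores, one per held-out fold
print(np.mean(scores))  # the average we use to judge the model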

We repeat this process for all the different models that we want to compare. Brief algo:

  1. Split the data into a training part and a test part.
  2. Train the different candidate models, say SVM, RF, and LR, on this training data (this is where the hyper-parameters come from).
     a. Take the whole training set and divide it into k folds.
     b. Create a new model with the hyper-parameters obtained in step 2.
     c. Fit the new model on the k-1 training folds.
     d. Test it on the kth fold.
     e. Repeat for every fold and take the average score.
  3. Compare the average scores and select the best model out of SVM, RF, and LR (a sketch follows this list).
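
A sketch of the whole comparison, again using iris as a stand-in dataset and untuned hyper-parameters standing in for the ones found in step 2:

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
cv = KFold(n_splits=5, shuffle=True, random_state=0)

models = {
    "SVM": SVC(C=1.0),
    "RF": RandomForestClassifier(n_estimators=100),
    "LR": LogisticRegression(max_iter=1000),
}

for name, model in models.items():
    # cross_val_score clones the model and runs the fit/score loop for each fold
    scores = cross_val_score(model, X, y, cv=cv)
    print(name, scores.mean())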

The simple reason for doing this is that we generally have limited data, and if we divide the whole data set into:

  1. Training
  2. Validation
  3. Testing

We may be left with a relatively small training chunk, which can cause the model to overfit. It is also possible that some of the data never appears in the training set, so we never analyse the model's behaviour on it.

K-fold cross-validation overcomes both of these issues.