Fastest SVM implementation usable in Python

Alternatively you can run the grid search on 1000 random samples (here, train_size=0.2 of a 5000-sample dataset) instead of the full dataset:

>>> from sklearn.model_selection import GridSearchCV, ShuffleSplit
>>> cv = ShuffleSplit(n_splits=3, train_size=0.2, test_size=0.2, random_state=0)
>>> gs = GridSearchCV(clf, param_grid, cv=cv, n_jobs=-1, verbose=2)
>>> gs.fit(X, y)

It's very likely that the optimal parameters for 5000 samples will be very close to the optimal parameters for 1000 samples. So that's a good way to start your coarse grid search.
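For reference, here is what the `clf` and `param_grid` used above might look like. The RBF kernel and the log-spaced C / gamma ranges are assumptions for illustration, not part of the original snippet:

>>> import numpy as np
>>> from sklearn.svm import SVC
>>> clf = SVC(kernel='rbf')  # hypothetical estimator
>>> # coarse log-spaced grid over the two RBF-SVM hyperparameters
>>> param_grid = {'C': np.logspace(-2, 2, 5), 'gamma': np.logspace(-4, 0, 5)}

Once `gs.fit(X, y)` completes, `gs.best_params_` holds the winning combination, which you can use to center a finer grid.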

n_jobs=-1 makes it possible to use all your CPUs to run the individual CV fits in parallel. It uses multiprocessing, so the Python GIL is not an issue.


The most scalable kernel SVM implementation I know of is LaSVM. It's written in C, hence wrappable in Python if you know Cython, ctypes or cffi. Alternatively you can use it from the command line. You can use the utilities in sklearn.datasets to convert data from a NumPy array or CSR matrix into svmlight-formatted files that LaSVM can use as training / test sets.
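For instance, a minimal sketch using `sklearn.datasets.dump_svmlight_file` (the random data and file name are placeholders):

>>> import numpy as np
>>> from sklearn.datasets import dump_svmlight_file
>>> X = np.random.randn(5000, 10)      # placeholder features
>>> y = np.random.randint(0, 2, 5000)  # placeholder binary labels
>>> dump_svmlight_file(X, y, 'train.svmlight')

`load_svmlight_file` performs the reverse conversion when you need the data back as a CSR matrix in scikit-learn.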