Parallel optimizations in SciPy

Here's another try, based on my original answer and the discussion that followed.

As far as I know, the scipy.optimize module is designed for functions that take a scalar or vector input and return a single scalar output, the "cost".
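For reference, a single optimization of that kind looks like this (a minimal sketch, not code from the question):

import numpy as np
from scipy import optimize

# vector input, scalar "cost" output
res = optimize.minimize(lambda x: np.sum((x - 2)**2), x0=np.zeros(3))
print(res.x)  # close to [2, 2, 2]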

Since you're treating each equation as independent of the others, my best idea is to use the multiprocessing module to do the work in parallel. If the functions you're minimizing are as simple as the ones in your question, I'd say it's not worth the effort.

If the functions are more complex and you'd like to divide the work up, try something like this:

import numpy as np
from scipy import optimize
from multiprocessing import Pool

def square(x, a=1):
    # return the cost and its gradient, since jac=True is passed below
    return np.sum(x**2 + a), 2*x

def minimize(args):
    f, x0, a = args
    res = optimize.minimize(f, x0, method='BFGS', jac=True, args=(a,))
    return res.x

if __name__ == '__main__':
    # your a values
    a = np.arange(1, 11)

    # initial guess for all the x values
    x = np.full(len(a), 25.0)

    # note: the tuple order matches the unpacking in minimize()
    args = [(square, x[i], a[i]) for i in range(10)]
    with Pool(4) as p:
        print(p.map(minimize, args))
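Each returned value should be close to 0, since x**2 + a is minimized at x = 0 for every a. The if __name__ == '__main__' guard is worth keeping: multiprocessing re-imports the module in each worker process on platforms that spawn rather than fork, and the guard prevents the pool from being created recursively.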

I am a bit late to the party, but this may be interesting for people who want to reduce minimization time through parallel computing:

We implemented a parallel version of scipy.optimize.minimize(method='L-BFGS-B') in the package optimparallel, available on PyPI. It can speed up the optimization by evaluating the objective function and the (approximate) gradient in parallel. Here is an example:

from optimparallel import minimize_parallel

def my_square(x, a=1):
    return (x - a)**2

# minimize my_square starting from x0=1, with a=11 passed via args
res = minimize_parallel(fun=my_square, x0=1, args=11)
print(res.x)  # close to 11
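As far as I can tell from the package documentation, the returned object behaves like the OptimizeResult from scipy.optimize.minimize, so fields such as res.x and res.fun are available as usual.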

Note that the parallel implementation only reduces the optimization time for objective functions with a long evaluation time (say, longer than 0.1 seconds). Here is an illustration of the possible parallel scaling:

[figure: parallel scaling of minimize_parallel() relative to scipy.optimize.minimize()]
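To get a feel for when the parallel version pays off, here is a minimal timing sketch. The 0.2-second sleep, the target value 14, and the starting point are illustrative assumptions, not part of the package's examples:

import time
import numpy as np
from scipy.optimize import minimize
from optimparallel import minimize_parallel

def slow_square(x, a=1):
    # simulate an expensive objective with an artificial delay
    time.sleep(0.2)
    return np.sum((x - a)**2)

if __name__ == '__main__':
    x0 = np.array([10.0, 20.0])

    t0 = time.time()
    res_serial = minimize(fun=slow_square, x0=x0, args=(14,), method='L-BFGS-B')
    t_serial = time.time() - t0

    t0 = time.time()
    res_parallel = minimize_parallel(fun=slow_square, x0=x0, args=14)
    t_parallel = time.time() - t0

    # both runs should converge to [14, 14]; the parallel run should take
    # noticeably less wall time because the objective and its approximate
    # gradient are evaluated concurrently
    print(res_serial.x, res_parallel.x)
    print(f'serial: {t_serial:.1f} s, parallel: {t_parallel:.1f} s')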