python - Error using L-BFGS-B in SciPy
I get a puzzling result when using the 'l-bfgs-b' method in scipy.optimize.minimize:
    import scipy.optimize as optimize
    import numpy as np

    def testfun():
        prec = 1e3
        func0 = lambda x: (float(x[0]*prec)/prec + 0.5)**2 + (float(x[1]*prec)/prec - 0.3)**2
        func1 = lambda x: (float(round(x[0]*prec))/prec + 0.5)**2 + (float(round(x[1]*prec))/prec - 0.3)**2

        result0 = optimize.minimize(func0, np.array([0, 0]), method='l-bfgs-b',
                                    bounds=((-1, 1), (-1, 1)))
        print(result0)
        print('func0 @ [0,0]:', func0([0, 0]), '; func0 @ [-0.5,0.3]:', func0([-0.5, 0.3]), '\n')

        result1 = optimize.minimize(func1, np.array([0, 0]), method='l-bfgs-b',
                                    bounds=((-1, 1), (-1, 1)))
        print(result1)
        print('func1 @ [0,0]:', func1([0, 0]), '; func1 @ [-0.5,0.3]:', func1([-0.5, 0.3]))

    def main():
        testfun()

    if __name__ == '__main__':
        main()
func0() and func1() are identical quadratic functions, up to a precision difference of 0.001 in the input values. The 'l-bfgs-b' method works on func0. However, after adding the round() function in func1(), 'l-bfgs-b' stops searching for the optimal values after the first step and directly returns the initial value [0,0] as the optimal point.
This is not restricted to round(): replacing round() in func1() with int() results in the same error.
Does anyone know the reason for this?
Thanks a lot.
The BFGS method is one of the methods that relies not only on the function value, but also on the gradient and Hessian (think of the first and second derivatives, if you wish). In func1(), once you have round() in it, the gradient is no longer continuous: the rounded function is piecewise constant, so its gradient is zero almost everywhere. The BFGS method therefore fails right after the 1st iteration (think of it this way: BFGS searched around the starting parameter, found that the gradient did not change, and stopped). Similarly, I would expect other methods that require a gradient to fail as BFGS does.
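A minimal sketch of what goes wrong, using a hand-rolled forward-difference gradient (the 1e-8 step is roughly the size SciPy uses by default when approximating the Jacobian numerically):

    import numpy as np

    prec = 1e3
    func1 = lambda x: (round(x[0]*prec)/prec + 0.5)**2 + (round(x[1]*prec)/prec - 0.3)**2

    x0 = np.array([0.0, 0.0])
    eps = 1e-8  # far smaller than the 0.001 rounding grid

    # round() makes func1 piecewise constant on a 0.001 grid, so a step of
    # 1e-8 never changes the function value and the estimated gradient is
    # exactly zero in both coordinates.
    grad = np.array([(func1(x0 + eps * np.eye(2)[i]) - func1(x0)) / eps
                     for i in range(2)])
    print(grad)  # [0. 0.] -> L-BFGS-B sees a stationary point and stops at x0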
You may be able to get it working by preconditioning or rescaling x. Better yet, you should try a gradient-free method such as 'Nelder-Mead' or 'Powell', as in the sketch below.
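For example, a minimal sketch with Nelder-Mead (note the nonzero starting point, chosen so that the default initial simplex spans several of the 0.001 rounding plateaus; started exactly at [0,0], the initial simplex can be too small to escape a single plateau):

    import numpy as np
    from scipy import optimize

    prec = 1e3
    func1 = lambda x: (round(x[0]*prec)/prec + 0.5)**2 + (round(x[1]*prec)/prec - 0.3)**2

    # Nelder-Mead compares function values instead of estimating a gradient,
    # so the staircase introduced by round() does not yield a fake zero
    # gradient. It should approach [-0.5, 0.3], stalling only once the
    # simplex shrinks below the 0.001 rounding grid.
    result = optimize.minimize(func1, np.array([0.1, 0.1]), method='Nelder-Mead')
    print(result.x)

Alternatively, L-BFGS-B itself can be made to move by enlarging its finite-difference step to at least the rounding grid, e.g. options={'eps': 1e-3}, though the gradient-free methods are the more robust choice here.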