Free Python optimization framework

Friday, July 18, 2008

ALGENCAN diffInt issue

After some experiments with the ALGENCAN 2.0.x series I have noticed that decreasing diffInt can be very helpful.

The diffInt parameter is used for obtaining derivatives via finite-difference approximation (when no derivatives are provided by the user). Currently the default value in OO for NLP is 1e-7, and it is recommended to try values 1e-8, 1e-9, 1e-10, 1e-11.
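For illustration, here is a rough sketch of how diffInt could be tightened before solving with ALGENCAN; the toy objective, the attribute-style assignment and the solver-name string are illustrative assumptions of mine, only the diffInt parameter itself comes from OO.

    # Hypothetical sketch: tightening diffInt before solving with ALGENCAN.
    from openopt import NLP

    def f(x):
        return ((x - 1.5) ** 2).sum()   # smooth toy objective, no user-supplied gradient

    p = NLP(f, x0=[0.0, 0.0, 0.0])
    p.diffInt = 1e-9                    # OO default for NLP is 1e-7; try 1e-8 ... 1e-11
    r = p.solve('algencan')             # solver-name string is assumed here
    print(r.xf, r.ff)                   # assumed result fields: final point and objective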

It seems that ALGENCAN is much more sensitive to gradient precision than other OO-connected NLP solvers.

A drawback of such a small diffInt can arise when some nonlinear functions are hard to evaluate precisely because of rather large numerical noise.
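To see why, here is a small standalone numpy illustration (not OO code, the noise magnitude and function are invented): with a forward difference, truncation error shrinks as the step h decreases, but the noise contribution, roughly noise/h, grows, so a too-small diffInt can make the derivative estimate worse, not better.

    # Standalone illustration of the step-size / noise trade-off in forward differences.
    import numpy as np

    rng = np.random.default_rng(0)

    def noisy_f(x, noise=1e-9):
        # smooth function plus simulated evaluation noise
        return np.sin(x) + noise * rng.uniform(-1.0, 1.0)

    x0, exact = 1.0, np.cos(1.0)
    for h in (1e-7, 1e-9, 1e-11, 1e-13):
        approx = (noisy_f(x0 + h) - noisy_f(x0)) / h
        print(f"h={h:.0e}  error={abs(approx - exact):.2e}")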

I don't know yet which diffInt value ALGENCAN uses by default, nor whether it is fixed or changes somehow during calculations. If the latter, maybe in the future I'll turn off the OO-supplied finite-difference gradients and let ALGENCAN evaluate them by itself. The drawback is the following: each separate evaluation of a nonlinear function (objective or nonlinear constraints) is quite costly, because it goes through a wrapper that performs some checks, updates function-call counters, translates Python lists, tuples and numpy matrices (which can be returned by user functions) into the numpy.ndarray type, etc. The call for derivatives, on the other hand, is a little optimized, so obtaining, for example, dc/dx is performed faster than separately obtaining (c(x + dx1) - c0)/diffInt, (c(x + dx2) - c0)/diffInt, ..., (c(x + dxn) - c0)/diffInt, where dxk is an all-zeros vector except for coordinate k, which has value diffInt.
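For reference, the per-coordinate scheme described above looks roughly like this generic sketch (not the actual OO wrapper code): each Jacobian column costs one extra call to c, which is why a heavy per-call wrapper makes this approach expensive.

    # Generic forward-difference Jacobian: column k is (c(x + dx_k) - c(x0)) / diffInt,
    # where dx_k is zero everywhere except coordinate k.
    import numpy as np

    def fd_jacobian(c, x0, diffInt=1e-7):
        x0 = np.asarray(x0, dtype=float)
        c0 = np.atleast_1d(c(x0))
        J = np.empty((c0.size, x0.size))
        for k in range(x0.size):
            dx = np.zeros_like(x0)
            dx[k] = diffInt
            J[:, k] = (np.atleast_1d(c(x0 + dx)) - c0) / diffInt
        return J

    # example: constraints c(x) = [x0*x1 - 1, x0 + x1]
    J = fd_jacobian(lambda x: np.array([x[0] * x[1] - 1.0, x[0] + x[1]]), [2.0, 3.0])
    print(J)   # approximately [[3, 2], [1, 1]]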

Let me also note once again that a good user-provided gradtol value can be helpful (try 1e-1 ... 1e-10).
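A hypothetical example of that tip; the toy problem and passing gradtol as a constructor keyword are assumptions, only the parameter name gradtol comes from OO.

    # Hypothetical sketch: supplying a user-chosen gradtol (gradient stop tolerance).
    from openopt import NLP

    p = NLP(lambda x: ((x - 1.0) ** 2).sum(), x0=[5.0, -3.0], gradtol=1e-6)
    r = p.solve('algencan')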

Probably, in future OO versions gradtol will be renamed to gtol (less to type).
