Free Python optimization framework

Wednesday, December 26, 2007

graphical output: initial estimations for xlim, ylim

Two new graphical output parameters have been added:

xlim {(nan, nan)}, ylim {(nan, nan)} - initial estimation for graphical output borders

You can use, for example, p.xlim = (nan, 10), p.ylim = [-8, 15] or p.xlim = [inf, 15]; only real finite values will be taken into account.

Of course, you can use p = NLP(..., xlim=[8, 15], ylim=asfarray((nan, 15)), ...) as well.

For constrained problems, ylim affects only the first subplot.
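
For example, a complete toy setup might look like the sketch below (the objective and numbers are mine, just for illustration; the scikits-style import path and the plot parameter enabling graphical output are assumptions here):

from numpy import nan
from scikits.openopt import NLP  # import path assumed for scikits-era OpenOpt

f = lambda x: (x[0] - 1.0)**2 + (x[1] + 2.0)**2  # toy objective, not from this post
x0 = [10, 10]
p = NLP(f, x0, plot=1, ylim=[-8, 15])  # set via constructor kwargs...
p.xlim = (nan, 10)                     # ...or via attribute; the nan border is chosen automatically
r = p.solve('ralg')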

Tuesday, December 25, 2007

Some changes in website and code

- some changes to the website, some pictures added (Naum Z. Shor, me; I hope to add a picture of Petro I. Stetsyuk ASAP)
- some changes related to ralg stop criteria (constrained problems case)

NLP solver scipy_slsqp

One more constrained NLP solver is ready: scipy_slsqp
It requires scipy updated from svn, 25-Dec-2007 or later (it contains some bugfixes related to fmin_slsqp).
Thanks to Rob Falck for connecting the solver to scipy.
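
A minimal usage sketch (the toy problem is mine; the scikits-style import path and the r.xf / r.ff result fields are assumptions here):

from scikits.openopt import NLP  # import path assumed for scikits-era OpenOpt

f = lambda x: (x[0] - 1)**2 + (x[1] - 2)**2  # toy objective
c = lambda x: x[0]**2 + x[1]**2 - 4          # inequality constraint c(x) <= 0
x0 = [0.5, 0.5]
p = NLP(f, x0, c=c)
r = p.solve('scipy_slsqp')
# r.xf and r.ff are expected to hold the final point and objective value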


Saturday, December 15, 2007

OpenOpt 0.15

We are glad to announce:
OpenOpt 0.15 (release) is available for download.

Changes since previous release (September 4):
  • some new classes
  • several new solvers written
  • some more solvers connected
  • NLP/NSP solver ralg can handle constrained problems
  • some bugfixes
  • some enhancements in graphical output (especially for constrained problems)

Thursday, December 13, 2007

fixed a bug related to some cvxopt solvers

A bug related to some cvxopt solvers has been fixed. It was due to some problems with how asfarray handles cvxopt.base.matrix objects. I hadn't observed the bug with earlier numpy/cvxopt versions; the lp_1.py example had worked correctly.

Wednesday, December 12, 2007

coming soon: one more NLP solver, scipy_slsqp

Rob Falck has informed me that he is currently adding the Sequential Least Squares Quadratic Programming (SLSQP) optimizer by Dieter Kraft to scipy svn (see more details here).

I intend to connect it to OO ASAP and provide an updated NLP benchmark example. The solver algorithm is similar to lincher but has been implemented much more thoroughly (and in Fortran), so it should work much better.

Sunday, December 9, 2007

minor changes in graphics

As I have already mentioned, matplotlib still suffers from some drawbacks.
To avoid one of them, one more OO graphics parameter has been implemented: show (boolean). It determines whether OpenOpt should call the pylab.show() function after the solver finishes.

p.show = {True} | False | 0 | 1
or p = NLP(f, x0,..., show=1,...) (other OO classes can be used as well)

It should be set to False in certain cases, for example when benchmarking several solvers or when some code should run immediately after the solver finishes, without waiting for the user to close the matplotlib window. When all calculations are done and you finally want to handle your picture (resize, save, etc.), you should call show() yourself:
import pylab
pylab.show()
or
from pylab import show
show()
or
from pylab import *
show()
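
For example, a small benchmark loop might look like the sketch below (the toy problem is mine; the plot parameter enabling graphical output and the scikits-style import path are assumptions here):

from scikits.openopt import NLP  # import path assumed for scikits-era OpenOpt
import pylab

f = lambda x: (x[0] - 3)**2 + x[1]**2  # toy objective
for solver in ('ralg', 'lincher'):
    p = NLP(f, [10, 10], plot=1, show=False)  # draw the plots, but don't block on pylab.show()
    r = p.solve(solver)
    # ...any code that must run before the window opens goes here...
pylab.show()  # finally hand control to the matplotlib window(s)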

I have to note that the MATLAB plotting tool doesn't suffer from this inconvenience, so in the OO for MATLAB/Octave version there was no need to implement the feature. Maybe this pylab drawback will be fixed in the future and the "show" parameter will become unused.

Monday, December 3, 2007

constraints handling for NLP/NSP solver ralg

I have implemented some changes to ralg; now it can handle constrained problems (as well as first derivatives df, dc, dh; subgradients are also allowed). Currently only the max residual is used, so in effect a single constraint c(x) = max{0, c_i(x), |h_j(x)|} (over all i, j) is handled. I intend to add more accurate handling of constraints, especially box-bound and linear, in the future.
Below you can see the same benchmark example as published some days before.
I tried to reduce the gradtol value for ALGENCAN to achieve a better f_opt, but it yields either an even bigger value (30.72) or endless calculations (or at least very long ones; 1 minute was not enough).

One of the features of the ralg constraints implementation: there is no need to calculate df at infeasible points (where max constraint > contol). And (as is common for non-smooth solvers) vice versa: there is no need to calculate constraint derivatives at feasible points.
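
A sketch of how such a constrained problem with user-supplied derivatives might be set up (the functions and numbers are mine; the scikits-style import path is an assumption, while df, dc, dh and contol are the parameters mentioned above):

from numpy import array
from scikits.openopt import NLP  # import path assumed for scikits-era OpenOpt

f  = lambda x: (x[0] - 1)**2 + (x[1] - 1)**2
df = lambda x: array([2*(x[0] - 1), 2*(x[1] - 1)])
c  = lambda x: x[0] + x[1] - 1   # inequality constraint c(x) <= 0
dc = lambda x: array([1.0, 1.0])
h  = lambda x: x[0] - 2*x[1]     # equality constraint h(x) = 0
dh = lambda x: array([1.0, -2.0])

p = NLP(f, [5, 5], df=df, c=c, dc=dc, h=h, dh=dh, contol=1e-6)
r = p.solve('ralg')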


Other ralg features:
  • can handle ill-conditioned, non-smooth, noisy problems
  • each iteration consumes 5*n^2 multiplication operations (float64*float64); there is another modification of r-alg that consumes only 4*n^2, which may be implemented in the future
  • one of the r-alg drawbacks is low precision (no better than ~1e-6 on 32-bit architectures). The head of our department, Petro I. Stetsyuk, has an idea, implemented in a Fortran ralg version, that allows precision up to 1e-40 to be achieved. Maybe it will be implemented in the Python ralg code in the future.