Free Python optimization framework

Thursday, November 27, 2008

Ironclad v0.7 released (NumPy on IronPython)

The Ironclad developers have announced release v0.7.
I guess it makes it possible to use OO and some solvers (like ralg) from IronPython.

Tuesday, November 25, 2008

CorePy: Assembly Programming in Python

I've learned about the BSD-licensed v1.0 release of CorePy - "a Python package for developing assembly-level applications on x86, Cell BE and PowerPC processors".

I guess it would be useful for objective or non-linear constraint functions that need to be evaluated significantly faster than pure Python code allows.

Of course, using C, C++ or Fortran code via Cython, f2py, ctypes, SWIG, Pyrex etc. could yield some speedup as well.
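
For example, here is a minimal sketch (not from the post; the objective is a made-up placeholder) of calling compiled code from an objective function via ctypes on a typical Linux/Unix setup, with the standard C math library standing in for user-written compiled code:

import ctypes, ctypes.util
import numpy as np

# load the standard C math library; a user-built .so/.dll would be loaded the same way
libm = ctypes.CDLL(ctypes.util.find_library('m'))
libm.cos.restype = ctypes.c_double
libm.cos.argtypes = [ctypes.c_double]

def f(x):
    # objective evaluated partly in compiled code
    return sum(libm.cos(float(xi)) ** 2 for xi in x) + np.dot(x, x)

print(f(np.array([0.1, 0.2, 0.3])))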

Wednesday, November 19, 2008

new openopt API func: oosolver

I have committed an entry for oosolver (which has been implemented recently) to the OO Doc page.
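
A minimal sketch of how oosolver can be used, assuming it bundles a solver name together with solver parameters into an object that p.solve() accepts (the parameter value and the toy problem are made up for illustration):

from openopt import NLP, oosolver
import numpy as np

# bundle a solver name with its settings into a reusable object
ralg = oosolver('ralg', alp=2.0)

p = NLP(lambda x: ((x - 1) ** 2).sum(), np.zeros(5))
r = p.solve(ralg)   # like p.solve('ralg'), but with the attached settings
print(r.ff)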

Tuesday, November 18, 2008

changes for ralg linear constraints

I have committed some changes to the handling of general linear constraints (A x <= b, Aeq x = beq) by the NLP/NSP ralg solver.

The changes are essential only when len(b) >> 1 or len(beq) >> 1.
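
For reference, a minimal sketch (the data are made up; the A, b, Aeq, beq keywords are the documented NLP constructor arguments) of passing such linear constraints to ralg:

from openopt import NLP
import numpy as np

n = 10
f = lambda x: ((x - 1) ** 2).sum()

# A x <= b with many rows (len(b) >> 1), plus one equality Aeq x = beq
A, b = np.eye(n), 2 * np.ones(n)
Aeq, beq = np.ones((1, n)), np.array([5.0])

p = NLP(f, np.zeros(n), A=A, b=b, Aeq=Aeq, beq=beq)
r = p.solve('ralg')
print(r.xf)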

bugfix for nonlinear group + changes for ralg

  • I have found and fixed a serious bug in the non-linear problems group (NLP, NSP etc). Sometimes it was triggered by constrained problems and solvers that can use splitting (ralg, algencan). Still, algencan doesn't work essentially better (for those examples I have tried).
  • Some changes for ralg have been committed (to decrease the number of non-linear inequality constraint evaluations)

Monday, November 17, 2008

new converter: minimax to NLP

I have committed the converter along with a usage example. Like MATLAB's fminimax and lots of similar MMP solvers, it works by solving the NLP

t -> min

subject to
t >= f0(x)
t >= f1(x)
...
t >= fk(x)

Let me note that the NLP obtained is always constrained: in addition to the constraints lb, ub, A, Aeq, c, h from the original MMP we get the new non-linear inequality constraints written above.
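
Just to illustrate the reformulation itself (this is not the OpenOpt converter; it is a standalone sketch using scipy.optimize.fmin_cobyla with made-up functions f0..f2), the auxiliary variable t is appended to x and each fi becomes an inequality constraint t - fi(x) >= 0:

import numpy as np
from scipy.optimize import fmin_cobyla

# made-up minimax objectives fi(x), x in R^2
funcs = [lambda x: (x[0] - 1) ** 2 + x[1] ** 2,
         lambda x: x[0] ** 2 + (x[1] - 2) ** 2,
         lambda x: (x[0] + 3) ** 2 + (x[1] + 1) ** 2]

# augmented variable z = (x, t); minimize t subject to t - fi(x) >= 0
obj = lambda z: z[-1]
cons = [(lambda z, f=f: z[-1] - f(z[:-1])) for f in funcs]

x0 = np.zeros(2)
z0 = np.append(x0, max(f(x0) for f in funcs))
z = fmin_cobyla(obj, z0, cons, rhoend=1e-7)
print('x =', z[:-1], 'minimax value =', z[-1])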

Sunday, November 16, 2008

some doc updates for result structure

I have updated the doc page ResultStruct with a description of
  • negative values of r.evals['df'], r.evals['dc'], r.evals['dh'] (a negative value means the derivatives have been obtained via finite-difference approximation; in that case r.evals['f'] counts all f calls - both from objFunc and from the finite-difference derivative approximation)
  • r.iterValues.rt, r.iterValues.ri (type and index of the maximal residual).
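
A minimal sketch of where these fields show up after a solve (the toy problem is made up; rt/ri are only meaningful for constrained problems):

from openopt import NLP
import numpy as np

# toy constrained problem: c(x) <= 0
p = NLP(lambda x: (x ** 2).sum(), np.ones(3), c=lambda x: 1.0 - x[0])
r = p.solve('ralg')

print(r.evals['f'], r.evals['df'])               # negative df => finite-difference approximation
print(r.iterValues.rt[-1], r.iterValues.ri[-1])  # type and index of max residual at last iter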

Saturday, November 15, 2008

enhanced iterfcn connection for scipy fmin_cobyla

I have committed changes related to handling of the scipy_cobyla iterfcn, so now the graphical output shows the iteration-by-iteration progress instead of a single straight line (as before).

Of course, adequate text output is provided as well:

solver: scipy_cobyla problem: unnamed goal: minimum
iter objFunVal log10(maxResidual)
0 6.115e+01 2.13
10 2.015e+01 -2.82
20 2.029e+01 -6.46
30 2.030e+01 -7.60
40 2.032e+01 -8.70
50 2.032e+01 -10.30
60 2.033e+01 -9.78
70 2.033e+01 -13.41
80 2.033e+01 -15.58
90 2.033e+01 -12.50
96 2.033e+01 -21.03
istop: 1000
Solver: Time Elapsed = 0.72 CPU Time Elapsed = 0.69
Plotting: Time Elapsed = 6.72 CPU Time Elapsed = 5.31
objFunValue: 20.329368 (feasible, max constraint = 9.3314e-22)

Also, the fEnough, maxTime, maxCPUTime and some other stop criteria now work (for scipy_cobyla).

Initially users had to connect iterfcn themselves (to df, dc, dh etc); then it was connected automatically by default to df; now (with the latest changes) it is connected automatically, to f and df only.

Now the NLP instance has a parameter f_iter (default max(nVars, 4)): when the number of objective function calls exceeds p.f_iter (for scipy_cobyla and other solvers without a connected iterfcn that don't use gradients), the OO iterfcn is called (and hence any user-supplied callback function(s), if declared).

Also, for those solvers that have no native connection to the OO iterfcn and use derivatives (algencan, ipopt, scipy_slsqp, some unconstrained and box-bounded ones), there is a parameter df_iter, default True (call iterfcn on each df call); if it is a positive integer s (s > 1; 1 is the same as the default True), iterfcn is called on each s-th objective function gradient call.

Maybe I'll change "f_iter" and "df_iter" to more appropriate field names before the next OO release.
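
For illustration, a minimal sketch of setting these fields on an NLP and attaching a user callback. Only f_iter, df_iter, fEnough, maxTime and maxCPUTime come from the post; the callback keyword and its signature are my assumptions:

from openopt import NLP
import numpy as np

f = lambda x: ((x - 1) ** 2).sum()

p = NLP(f, np.zeros(4),
        fEnough=1e-6, maxTime=10, maxCPUTime=10,  # stop criteria mentioned above
        f_iter=4,      # trigger OO iterfcn after this many objective calls (cobyla-like solvers)
        df_iter=True)  # trigger OO iterfcn on each gradient call (gradient-based solvers)

def my_callback(p):
    # user-supplied callback, invoked from iterfcn; returning True would stop the solver
    return False

r = p.solve('scipy_cobyla', callback=my_callback)
print(r.xf, r.ff)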

Wednesday, November 12, 2008

Any Toronto openopt users?

hi all,
if anyone from Toronto is an OO user (I had noticed some in the web counters), it would be nice if you would mention OO in the (imaginary) "Million Dollar Python Project" (link) by PyGTA (Toronto Python User's Group, on Tuesday)

Sunday, November 9, 2008

OpenOpt in Debian packages

Yaroslav Halchenko from the PyMVPA project (which uses OpenOpt) has created the python-scikits-openopt deb package and put it into the Debian Linux repository.

However, I don't know how stable it is (I haven't tried/tested it yet and will hardly do so in the near future; currently I'm busy with many other urgent things my dept chiefs require).

some changes & bugfixes

I have committed
  • some changes for ralg
  • some bugfixes for oofun-oovar
  • some other changes