Free Python optimization framework

Monday, December 15, 2008

the blog has moved to a new location

Update your RSS feeds if you want to keep receiving posts.
New location:
http://forum.openopt.org/viewforum.php?id=19

OpenOpt (and my) Future

My temporary position in the icyb.kiev.ua optimization department is about to end.

Today I spoke with my chiefs about my further career.
They informed me that, because of the financial crisis, the situation is uncertain.

Still, I have some chances. Moreover, there is some chance I will be permitted to spend some work time on further openopt development.
However, some conditions have to be satisfied for that.

Some days ago I was forced to move my code out of the scikits framework; my chiefs want me to host it inside Ukraine under our control (maybe you know I had been refused veto rights over openopt code changes within the scikits framework; I understand the scipy community - they can't accept that something hosted on their server is out of their control - but I understand my chiefs' position as well).

I have contacted uafoss.org (FOSS Ukraine), and they gladly allowed me to host it there.
Full migration (svn repository, doc, wiki etc) will be finished within several days. But the numerical optimization forum already works, you are welcome: http://forum.openopt.org. I had noticed openopt.com and openopt.net are already taken by someone, so it's definitely the right time to go.

Thanks to Michailo Danilenko and Volodimir M. Lisivka from linux.org.ua (LOU) community.
Special thanks to Wadim V. Mashkov and Michael Shigorin from linux.kiev.ua community.

OpenOpt release 0.21

Hi all,
I'm glad to inform you about new OpenOpt release: v 0.21.

Changes since previous release 0.19 (June 15, 2008):

Backward incompatibility:
  • instead of "from scikits.openopt import ..." you should now use "from openopt import ..." (see the snippet after this list)
  • LSP has been renamed to NLLSP
  • support for oofun with ordinary variables (x) has been dropped (it's hard to keep maintaining); use oovars instead.
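A minimal sketch of the import change (the class name is just an example of what you might import):

# old (scikits-based releases):
#     from scikits.openopt import NLP
# new (v 0.21 and later):
from openopt import NLP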
Until the OpenOpt subversion repository is finally moved to the new host, you can download v 0.21 from here

Welcome to http://forum.openopt.org - new forum about numerical optimization and related free and open source software.

For assistance with the new host and forum:
Thanks to Michailo Danilenko and Volodimir M. Lisivka from linux.org.ua (aka LOU) community.
Special thanks to Wadim V. Mashkov and Michael Shigorin from linux.kiev.ua community.

Regards, Dmitrey.

Some changes for oofun

Some changes for oofun:
  • oolin now can handle matrices: oolin(C) creates an oofun that returns numpy.dot(C, inp_array); oolin(C, d) yields an oofun that returns numpy.dot(C, x) + d. See the updated oolin example.
  • fixed-variable handling has been added for oovars. Now you can declare v.fixed = True or just v = oovar(..., fixed = True). All oofuns that recursively depend on fixed oovars only will be calculated only once, and their derivatives will be all-zero (hence no calculations will be required for those parts of code that depend on fixed oofuns only). In the future it would be good to have fixed coordinates inside oovars, like this: v.fixed = [0, 1, 15] (i.e. positions of fixed coords inside the oovar). See the fixed oovars example.
However, let me remind you that currently oolin(anything) is treated as non-linear code, i.e. it goes to non-linear constraints and the objective function only. In the future oolin(some_oovars) -> general linear constraints (Ax <= b, Aeq x = beq) should be implemented. As for lb <= x <= ub, they are handled correctly from oovar fields.
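For reference, the value such an oofun represents is just an affine map; here's a plain NumPy sketch of what oolin(C, d) evaluates at a point x (shown only to fix the convention, not OO code):

from numpy import array, dot
C = array([[1.0, 2.0], [3.0, 4.0]])
d = array([5.0, 6.0])
x = array([1.0, 1.0])
value = dot(C, x) + d    # what oolin(C, d) computes at x, here [ 8. 13.]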

Saturday, December 6, 2008

some changes

I have committed some changes to the OO Kernel; the most important is a fix reducing the time for connecting oovars to the prob instance (the recursive walk through all oofuns took too much time previously).

Thursday, November 27, 2008

Ironclad v0.7 released (NumPy on IronPython)

Ironclad developers have announced release v 0.7.
I guess it makes it possible to use OO and some solvers (like ralg) from IronPython.

Tuesday, November 25, 2008

CorePy: Assembly Programming in Python

I've got to know about BSD-licensed v 1.0 release of CorePy - "a Python package for developing assembly-level applications on x86, Cell BE and PowerPC processors".

I guess it would be useful for objective or non-linear functions that need to be evaluated substantially faster than pure Python code allows.

Of course, using C, C++, Fortran code via Cython, f2py, ctypes, SWIG, Pyrex etc could yield some speedup as well.

Wednesday, November 19, 2008

new openopt API func: oosolver

I have committed an entry for oosolver (which has recently been implemented) to the OO Doc page.
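A rough usage sketch (treat the exact call pattern as an assumption based on the Doc page entry; solver parameters can presumably be passed as keyword arguments):

from scikits.openopt import oosolver   # import path of that time; an assumption

solver = oosolver('ralg')   # build a solver instance (optionally pass solver parameters here)
# r = p.solve(solver)       # then pass the instance to solve() instead of the solver name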

Tuesday, November 18, 2008

changes for ralg linear constraints

I have committed some changes for general linear constraints (A x <= b, Aeq x = beq) handling by NLP/NSP ralg solver.

The changes matter only when len(b) >> 1 or len(beq) >> 1.

bugfix for nonlinear group + changes for ralg

  • I have found and fixed a serious bug in the non-linear problems group (NLP, NSP etc). It was sometimes triggered by constrained problems and solvers that can use splitting (ralg, algencan). Still, algencan doesn't work essentially better (for those examples I had tried).
  • Some changes for ralg have been committed (to decrease the number of non-linear inequality constraint evaluations)

Monday, November 17, 2008

new converter: minimax to NLP

I have committed the converter along with a usage example. Like MATLAB's fminimax and lots of similar MMP solvers, it works via solving the NLP

t -> min

subject to
t >= f0(x)
t >= f1(x)
...
t >= fk(x)

Let me note that the resulting NLP problem is always constrained (in addition to the constraints lb, ub, A, Aeq, c, h from the original MMP we get the new non-linear inequality constraints written above).
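A rough sketch of the underlying transformation in plain Python/NumPy (not the converter's actual code): the auxiliary variable t is appended to x, the objective returns t, and every original fi becomes the inequality fi(x) - t <= 0.

from numpy import hstack

def mmp_to_nlp(funcs):
    # funcs: callables f0..fk of the original minimax problem
    def F(z):                      # objective of the resulting NLP, z = [x, t]
        return z[-1]
    def c(z):                      # non-linear inequalities c(z) <= 0
        x, t = z[:-1], z[-1]
        return hstack([fi(x) - t for fi in funcs])
    return F, c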

Sunday, November 16, 2008

some doc updates for result structure

I have updated the doc page ResultStruct with description of
  • negative values of r.evals['df'], r.evals['dc'], r.evals['dh'] (they mean the derivatives have been obtained via finite-difference approximation; in that case r.evals['f'] counts all f calls - both from objFunc and from the finite-difference derivative approximation)
  • r.iterValues.rt, r.iterValues.ri (type and index of max residual).

Saturday, November 15, 2008

enhanced iterfcn connection for scipy fmin_cobyla

I have committed changes related to handling of the scipy_cobyla iterfcn, so now, instead of a straight line (as before), the graphical output shows the actual progress (the picture is not reproduced here):

Of course, adequate text output is provided as well:

solver: scipy_cobyla problem: unnamed goal: minimum
iter objFunVal log10(maxResidual)
0 6.115e+01 2.13
10 2.015e+01 -2.82
20 2.029e+01 -6.46
30 2.030e+01 -7.60
40 2.032e+01 -8.70
50 2.032e+01 -10.30
60 2.033e+01 -9.78
70 2.033e+01 -13.41
80 2.033e+01 -15.58
90 2.033e+01 -12.50
96 2.033e+01 -21.03
istop: 1000
Solver: Time Elapsed = 0.72 CPU Time Elapsed = 0.69
Plotting: Time Elapsed = 6.72 CPU Time Elapsed = 5.31
objFunValue: 20.329368 (feasible, max constraint = 9.3314e-22)

Also, fEnough, maxTime, maxCPUTime and some other stop criteria work (for scipy_cobyla).

Initially users had to connect the iterfcn themselves (to df, dc, dh etc); later it was done automatically by default for df; now (with the latest changes) it is connected automatically, for f and df only.

Now the NLP instance has a parameter f_iter (default max(nVars, 4)), so when the number of objective function calls exceeds p.f_iter (for scipy_cobyla and other solvers without a connected iterfcn that don't use gradients), the OO iterfcn is called (and hence any user-supplied callback function(s), if declared).

Also, for those solvers that have no native connection to the OO iterfcn but use derivatives (algencan, ipopt, scipy_slsqp, some unconstrained and box-bounded ones) there is a parameter df_iter, default True (use iterfcn on each df call); if it is a positive integer s (s > 1; 1 is the same as True), the iterfcn is used on each s-th objective function gradient call.

Maybe I'll change "f_iter" and "df_iter" to more appropriate field names before the next OO release.
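A toy sketch of setting these fields (the names may still change, as noted above; the problem itself is only illustrative):

from numpy import asarray
from scikits.openopt import NLP   # import path of that time

f = lambda x: (asarray(x) ** 2).sum()
p = NLP(f, x0 = [1.0, 2.0, 3.0])
p.f_iter = 10    # invoke the OO iterfcn every 10th objective call (solvers without gradients/iterfcn)
p.df_iter = 4    # invoke the OO iterfcn every 4th gradient call (gradient-based solvers)
r = p.solve('scipy_cobyla')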

Wednesday, November 12, 2008

Any Toronto openopt users?

hi all,
if anyone from Toronto is an OO user (I had noticed some in the web counters), it would be nice if you mentioned OO in the (imaginary) "Million Dollar Python Project" (link) at PyGTA (Toronto Python User's Group) on Tuesday. Link

Sunday, November 9, 2008

OpenOpt in Debian packages

Yaroslav Halchenko from PyMVPA project (that uses OpenOpt) has organized python-scikits-openopt deb package and put it into Debian Linux repository.

However, I don't know how stable it is (I haven't tried/tested it yet and will hardly do so in the near future; currently I'm busy with many other urgent things my dept chiefs require).

some changes & bugfixes

I have committed
  • some changes for ralg
  • some bugfixes for oofun-oovar
  • some other changes

Tuesday, October 28, 2008

Major changes for ralg

I have committed some major changes for NLP/NSP solver ralg.

A couple of ideas have been borrowed from SolvOpt (made by our dept worker Alex Kuntsevich during his stay in Austria, in collaboration with Franz Kappel), which still remains the most famous Naum Z. Shor r-algorithm implementation, with Fortran90, C and MATLAB APIs.

However, that implementation is more heuristic than the original Naum Z. Shor r-algorithm; since convergence of the latter hasn't been proved yet even for smooth convex functions, proving convergence for SolvOpt's implementation will hardly ever be possible (especially because of heuristics like the backward line search approach, see SolvOpt's paper for algorithm details).

So my dept chief is not fond of SolvOpt's heuristics (he considers that they reduce the percentage of problems that can be solved, because SolvOpt can stop farther from the optimum than the original Naum Z. Shor r-algorithm); still, SolvOpt remains rather popular in our optimization dept.

Currently I have taken 2 ideas from SolvOpt: modifications of the initial step size and of the step multiplier for the forward line search. As for the backward search, I found SolvOpt's approach very unclear, so for now my own way remains in the OO ralg implementation. Handling of constraints is also performed in a different way.

Also, some code cleanup for the OO kernel has been committed.

Tuesday, October 21, 2008

PSwarm 1.3 has been connected

I have connected new PSwarm version: 1.3.

As PSwarm developer Ismael Vaz promised, the Python API now has many more parameters available to modify (see the updated GLP page for details), and outputfcn has been connected (which improves text and graphics output), so user-provided callback function(s) can now be handled by the solver (also, the openopt GUI p.manage() function works better, see the previous post for a description).

Wednesday, October 15, 2008

Introducing OpenOpt GUI

I'm glad to introduce initial version of OpenOpt GUI.

I have had some months of GUI experience with PyGTK and tcl/tk, so I know GUI development; on the other hand I know very well what a deuce it is: you have to try all possible combinations of buttons (and that dumb task is NP-hard, of course) to ensure everything works ok, to enable/disable buttons in the correct order, to store related variables etc.

For OO I decided to use Tkinter, because it doesn't require any additional soft installation (it is included in Python distribution).

I had taken a look at AMPL and GAMS GUIs before starting my own one, and of course I have no intention of duplicating Python IDEs.

Running OO GUI is performed via

r = p.manage()

instead of p.solve(), or via

from scikits.openopt import manage
r = manage(p, solver, ...)

Let me also note: manage() accepts the named argument start = {False}/True/0/1, which means start without waiting for the user to press "Run".

Currently there are only 3 buttons: "Run/Pause", "Exit" and "Enough".
As you probably know, lots of solvers have trouble with stop criteria. This is especially relevant for NSP solvers, where calculating derivatives just to check a stop criterion isn't a good idea. So pressing the "Enough" button triggers a stop criterion, like this:
------------
solver: ralg problem: GUI_example goal: minimum
iter objFunVal log10(maxResidual)
...
102 5.542e+01 -6.10
istop: 88 (button Enough has been pressed)
Solver: Time Elapsed = 1.19 CPU Time Elapsed = 1.12
Plotting: Time Elapsed = 6.86 CPU Time Elapsed = 5.97
objFunValue: 55.423444 (feasible, max constraint = 7.98936e-07)

Let me also note that
  • pressing "Exit" before "Enough" and before the solver finishes will return None, so there will be no fields r.ff, r.xf etc.
  • for some IDEs pressing "Exit" doesn't close the matplotlib window (if you are using p.plot=1). You should either wait for a newer matplotlib version (they intend to fix it) or try to fix it yourself by choosing the correct Agg backend, see here for details
  • OO Doc webpage entry related to OO GUI will be committed later
  • you could play with the GUI example by yourself

Saturday, October 11, 2008

bugfix for ooIterPrint.py in Python 2.4

I have been informed of a Python 2.4 issue:
File "/usr/lib/python2.4/site-packages/scikits/openopt/Kernel/ooIterPrint.py", line 6
    'isFeasible': lambda p: ('+' if p.rk < p.contol else '-')
                                 ^
SyntaxError: invalid syntax

I have committed a workaround for Python 2.4. Still, I guess it's better to use Python 2.5, as mentioned on the OO install webpage.
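For reference, the syntax error comes from the conditional expression (PEP 308), which requires Python 2.5; a typical 2.4-compatible rewrite of such a line (not necessarily the workaround that was committed) is:

'isFeasible': lambda p: ('-', '+')[p.rk < p.contol]   # index a tuple by the boolean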

Friday, October 10, 2008

Example of 10X speedup for NLP via oofun

I have committed an example of an unconstrained NLP where using oofun yields a 10X speedup vs the classic style (by reducing the time required for gradient approximation of a costly function).

Of course,
  • I could construct an example where the benefit is 100X or even more, but the output would take longer to wait for.
  • Similar examples can be constructed for constrained NLP or some other classes from the non-linear group (NSP, NLSP, LSP).
  • I used the scipy_ncg solver, but a speedup of ~4X...10X is observed with any other non-linear solver as well (provided the solver uses at least 1st derivatives, so scipy_cobyla, goldenSection, scipy_fminbound, scipy_powell or GLP solvers are inappropriate).
Also, some changes to oofun-, oovar-related and some other OO Kernel files have been committed.

Tuesday, October 7, 2008

oovar and oolin

I have committed my implementation of oovar and oolin to OpenOpt svn repository.

So, 2 new entries to oofun doc webpage have been added:
  • using oovar (OpenOpt Variable)
  • using oolin (that is linear oofun)
Currently oolin(f) returns <f, x> (that is, numpy.dot(f, x)). In the future I intend to add c: oolin(f, c) is <f, x> + c.
Also, maybe ooquad(H, f, c) will be implemented in the future (that is 0.5 * x^T H x + f^T x + c).

Friday, October 3, 2008

new Python optimization soft: Pyomo

I've got to know about Pyomo (thanks to Nils Wagner), made by Sandia Labs (the Trilinos/PyTrilinos developers). In the article they say "However, Coopr Opt is not as mature as the OpenOpt package"; I guess that's the reason my site's daily visitor count has jumped from ~50-60 to ~80-90 this week.

Also, in the paper they mention a willingness to collaborate with OO. I had contacted them a long time ago, but the provided examples of their software usage were too difficult for me to understand and connect to OO in a reasonable time. I guess it will be much easier for them to understand my code instead.

Very important bugfix for nonlinear problems

I have committed a very important bugfix for nonlinear problems.
I don't know how many days the nasty bug was hiding and lurking (I guess several months), or whether it affected some solvers' results, because it was somehow (but maybe not always) masked by other code, and only my latest changes (committed to the subversion repository some days ago) revealed it.

Monday, September 29, 2008

Changes for converters

I have committed lots of changes for (all) converters; also, the issue with split funcs (for nlsp2nlp and lsp2nlp) has been fixed.

PSwarm v 1.2: bugfixes for Linux

After another OO user (Nils Wagner) informed PSwarm developer Ismael Vaz about the same bugs ("use -fPIC" and "undefined symbol 'opt'"), the bugs have been fixed.

PSwarm v. 1.2 is already available with the bugfixes.

I have removed mention of the bug from GLP docpage entry.

Saturday, September 27, 2008

new converter: nlsp2nlp

I have committed the nlsp2nlp converter, see the NLSP page for details. It tries to minimize
   ||F(x)||_2
and, of course, it is capable of handling constrained problems.

You can try updated nlsp_1.py or nlsp_constrained.py from /examples.

However, a requirement should be satisfied: non-linear functions shouldn't be split.

This minor issue is intended to be fixed in the future (BTW, splitting can't yield benefits for the nlsp2nlp converter with any NLP solver).
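A sketch of the idea (plain NumPy, not the converter's code): the converter builds an NLP whose objective is the norm of the residual vector; presumably it is invoked like the other converters, e.g. r = p.solve('nlp:ralg').

from numpy import asarray
from numpy.linalg import norm

def nlsp_objective(F):
    # F: callable returning the residual vector of the non-linear system F(x) = 0
    return lambda x: norm(asarray(F(x)))   # the quantity the resulting NLP minimizes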

Tuesday, September 23, 2008

Some more openopt install issues

The latest NumPy (taken from the subversion repository) prompted some OO users to inform me about some more openopt install issues.

First of all, sometimes openopt's "python setup.py install" starts to search for and download numpy 1.1.x from the internet, even though a recent numpy from svn is already installed.

Second (and especially inconvenient for me), after modifying any single line in openopt's sources, running "python setup.py install" now recompiles all openopt files. Even if there were no changes at all, all OO files are recompiled anyway. Taking into account that I run this dozens of times per day, it's very annoying and increases the wear on my HDD. Of course I could use OO without "python setup.py install", but that is inconvenient for several reasons (I have already tried).

So, please take into account: these issues are not up to me, they are up to the numpy.distutils developers. I hope they will fix them ASAP.

Monday, September 22, 2008

Install issues with latest numpy

Recently some changes have been committed to numpy, and previous openopt versions can't work with them.

So I have committed some changes to openopt, and AFAIK openopt now runs correctly with all numpy versions >= 1.1.0 known to me.

Hence, installing/updating OpenOpt from the latest tarball or subversion repository is recommended.

Saturday, September 20, 2008

new converter: lsp2nlp

I have committed lsp2nlp converter, see LSP page for details.

Friday, September 19, 2008

new converter: lp2nlp

I have committed lp2nlp converter, see LP page for details.
See also the remark about ipopt.opt file here

Thursday, September 18, 2008

new converter: qp2nlp

I have committed qp2nlp converter, see QP page for details.
Let me note that if a problem has been converted to NLP from QP, LLSP or LP (the latter will be added soon), then the following lines are automatically appended to the ipopt.opt file:

jac_c_constant yes
jac_d_constant yes
hessian_constant yes

Also, bugfix for example qp_1.py has been committed:

Instead of
x1^2 + 2x2^2 + 3x3^2 + 15x1 + 8x2 + 80x3 -> min

we have
0.5 * (x1^2 + 2x2^2 + 3x3^2) + 15x1 + 8x2 + 80x3 -> min

I.e. if H = diag((1,2,3)), instead of
x^T H x + f^T x -> min

we search for
0.5 * x^T H x + f^T x -> min
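A quick NumPy check of the convention (illustrative only, using the H and f from qp_1.py described above):

from numpy import array, diag, dot
H = diag((1.0, 2.0, 3.0))
f = array([15.0, 8.0, 80.0])
x = array([1.0, 1.0, 1.0])
obj = 0.5 * dot(x, dot(H, x)) + dot(f, x)   # the objective qp_1.py now minimizes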

Monday, September 15, 2008

OpenOpt release 0.19

Hi all,
I'm glad to inform you about new OpenOpt release: v 0.19.

Changes since previous release 0.18 (June 15, 2008):
  • Some changes for NLP/NSP solver ralg (especially related to handling linear constraints Ax <= b, Aeq x = beq, lb <= x <= ub)
  • Bugfix for ralg and IPOPT linear constraints handling
  • ALGENCAN v 2.0.x has been connected (v 1.0 is no longer supported, v 2.0.3 or later is required)
  • Bugfix for constrained NLSP graphic output (constrained nssolve isn't turned to latest ralg version yet)
  • Scale parameter for lpSolve (p.scale = {False} | True | 0 | 1)
  • New OO class LLAVP (linear least absolute values aka linear least deviations)
  • Improved handling of non-linear functions with restricted dom
  • GLP (global) solver galileo now can handle integer problems (via p.useInteger = 1 or True)
  • Another one GLP solver connected: pswarm
  • Lots of work related to oofun concept (see OO Doc page for details)
  • Add converters llsp2nlp, llavp2nsp
  • Convenient handling of maximization problems (via p.goal = 'max' or 'maximum')
  • Some code clean up and bugfixes

Backward incompatibility:
  • Changed objective function in LLSP
  • MATLAB-style gradtol renamed to gtol (to match scipy.optimize fmin_bfgs, fmin_cg syntax and to be less to type)
Regards, Dmitrey.

changes for ralg and oofun derivatives example

  • some ralg changes, mainly for linear constraints (A, Aeq, lb, ub)
  • example of oofun derivatives has been attached to OO Doc page
  • some code cleanup and minor bugfixes

Monday, September 1, 2008

changes for ralg, oofun, handling NaNs

I have committed some changes related to
  • ralg: a change in the matrix B rejuvenation criterion; now by default the check "cond(b) less than 1e5" is performed every 250th iteration or when a stop criterion has triggered. However, for large nVars the evaluation of numpy.linalg.cond(b) takes time (for nVars = 1000 I got ~5 seconds on my AMD 3800+ X2). So it would be good to get an estimate of cond(b) instead of the exact value (like MATLAB's condest for sparse matrices; however, b is dense).
  • oofun: recursive 1st derivatives, appropriate example will be committed soon
  • handling NaNs: if the right derivative (f(x+dx)-f(x))/dx is NaN then the left derivative (f(x)-f(x-dx))/dx will be used (a small sketch follows this list); this is valid for both oofun and classic style, however, it doesn't resolve all possible issues. Also, I intend to add a double-sided derivative option in the future: (f(x+dx)-f(x-dx))/(2*dx) via an oofun field, for example c2 = oofun(..., useDoubleSideDerivative = True,...)
  • some other (minor) ralg changes
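A minimal sketch of the NaN fallback mentioned in the list above (illustrative helper, not OO's actual code):

from numpy import isnan

def fd_derivative(f, x, dx = 1e-7):
    # try the right difference first; if it is NaN (x + dx left the domain),
    # fall back to the left difference
    f0 = f(x)
    right = (f(x + dx) - f0) / dx
    if isnan(right):
        return (f0 - f(x - dx)) / dx
    return right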

Tuesday, August 19, 2008

New GLP solver: pswarm

I'm glad to inform that new GLP solver has been connected to OO: pswarm.

Note that along with box-bound constraints
lb <= x <= ub
this one is capable of handling linear inequalities:
Ax <= b

However, I encountered some troubles on my KUBUNTU (such as "recompile with -fPIC" and then, after fixing that, "undefined symbol opt"), so I have connected pswarm under WinXP. I have mailed the PSwarm author about the issues but he hasn't responded yet.

Monday, August 18, 2008

some changes

  • minor changes for ralg
  • minor changes for tests. To check openopt after installation, users usually run the files (relevant to their problem class) from /examples; now you can alternatively use the tests from the /tests directory. First of all /tests/nlp1.py is recommended, because it uses ralg and hence doesn't require installing any 3rd-party solvers. Maybe in the future openopt tests will use the nose or texttest framework.

Thursday, August 7, 2008

Updates in 1) oofun doc 2) ralg

1. Some changes for NLP/NSP ralg solver have been made. A personal wiki page for ralg has been committed.

2. I have added new entry to openopt doc page about how to use oofun to prevent recalculating same parts of code.
(This is rather common problem, mentioned for example here and here).

Let me note once again that calling df(x1) doesn't guarantee that f(x1) (i.e. at the same point) was called immediately before the df call (and the same holds for c, h, dc, dh). Still, in 90-95% of cases it is true, so it is convenient to check and (if the input, according to the dependencies, is the same) substitute the already calculated values automatically.

So oofun is an implementation of possible solutions to this issue. There are some other convenient tools based on oofun usage already available in the OO code. Still, lots of other oofun-related work in the Kernel remains to be done (recursive 1st derivatives, implementation of oovar etc).
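A minimal illustration of the kind of caching oofun automates (hypothetical helper, not OO's API): remember the last point and reuse the expensive intermediate result when f and df ask about the same x.

from numpy import asarray, array_equal

class CachedPart:
    # caches an expensive sub-expression shared by f and df
    def __init__(self, expensive):
        self.expensive, self._x, self._val = expensive, None, None
    def __call__(self, x):
        x = asarray(x)
        if self._x is None or not array_equal(x, self._x):
            self._x, self._val = x.copy(), self.expensive(x)
        return self._val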

Of course the oofun concept isn't something new; for example, something like this is present in YALMIP (a free MATLAB toolbox for some numerical optimization problems that translates YALMIP scripts to MATLAB). Also, there is some similar work going on in our dept using Visual C and Rational Rose.

Sunday, August 3, 2008

IBM to acquire ILOG (CPLEX developer)

I've got to know that IBM and ILOG have announced an agreement regarding the proposed acquisition of ILOG by IBM (one of the URLs is here).

Let me remind those who are not closely connected to numerical optimization that ILOG develops CPLEX (and some other, less famous, numerical optimization solvers), so ILOG is known as the leader among commercial LP/MILP (and probably QP) solver vendors (along with other well-known commercial ones - XA, XPRESS, Mosek, LOQO etc). You can check some benchmark results here.

Also, IBM is known as a sponsor of the well-known COIN-OR project, which hosts lots of free solvers under the CPL (an OSI-approved license with copyleft), including IPOPT, the most famous free NLP solver (BTW Python programmers can use this one from OO, but currently for Linux OSes only).

So there is some chance CPLEX will move from commercial-only status to a more permissive one.

some changes for ralg

I have committed some changes for the NLP/NSP solver ralg (some for speedup and some for better handling of NaNs, when x is outside the dom of objFunc or of some non-linear constraints).

Another parameter (mostly for NLP/NSP) has been added: isNaNInConstraintsAllowed (default False). It determines whether nan (not a number) is allowed in non-linear constraints at the optimal point (mostly for inequalities: p.c(r.xf)).
The non-default value True is needed very seldom, for very special cases.

Also, some code cleanup for ralg and some examples, + some changes for tests/chain.py.

Sunday, July 27, 2008

integer problems for GLP solver galileo

I have committed a new parameter "useInteger" for the GLP solver galileo. Valid values are 0, 1, True, False (default). If true, galileo will search for a solution with all-integer variables.

With this, my GSoC schedule is finished. Nevertheless, maybe I'll have written some more OO code by the next OO release; first of all I intend to add some more converters (lp2nlp, qp2nlp, lsp2nlp), and maybe some modifications for ralg will be made.

Friday, July 25, 2008

non-linear functions with restricted dom

Some non-linear functions have much more restricted dom than R^nVars.
For example F(x) = log(x); dom F = R+ = {x: x>0}

I had posted a small manual on using ralg with such problems (via +inf and a contol shift); however, optimization solvers usually expect user-provided F(x) = nan (if x is out of dom).

So I have made ralg handle doms in accordance with the mentioned standard; now there is no need for that contol shift, and using [numpy.]nan instead of +inf is recommended.

An example has been attached to OO Doc Page.
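A minimal illustration of that convention (a toy objective, not the Doc page example):

from numpy import log, nan

def F(x):
    # dom F = {x: x > 0}; outside the domain return nan instead of shifting by contol
    return log(x) if x > 0 else nan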

Note also that some solvers (BTW, all connected to OO except ralg) require x0 from dom(objFunc).
For ralg it doesn't matter, provided that for each point x out of dom objFunc there is at least one active constraint (usually lb <= x <= ub are used, but any other constraints, including non-linear, are OK).

Note: some ralg iters have objFunc = nan:

-----------------------------------------------------
solver: ralg problem: unnamed goal: minimum
iter objFunVal log10(maxResidual)
0 1.925e+04 -100.00
50 nan 1.32
100 nan -0.26
150 1.045e+03 -100.00
200 1.027e+03 -100.00
250 nan -3.49
300 nan -3.19
350 1.016e+03 -100.00
400 nan -5.20
450 nan -6.75
500 nan -5.72
550 nan -5.91
555 1.015e+03 -100.00
istop: 4 (|| F[k] - F[k-1] || < ftol)
Solver: Time Elapsed = 2.45 CPU Time Elapsed = 2.37
objFunValue: 1015.082 (feasible, max constraint = 0)

Saturday, July 19, 2008

New OO class: LLAVP

The new OO class is ready: LLAVP - linear least absolute values problem, aka linear least absolute deviations problem

||C*x - d||_1 + damp*||x - X||_1 -> min

subject to box-bound constraints:
lb <= x <= ub (some coords of lb and ub can be +/- inf)

(some more constraints are intended to be added)

Currently the single solver available is the converter llavp2nsp. Usage example:
r = p.solve('nsp:ralg')
Recommended solvers to try are ralg, ipopt (using maxIter for ipopt is recommended) and algencan.

However, note: ipopt and algencan are NLP solvers, and convergence for non-smooth problems like LLAVP is not guaranteed. As for the Naum Z. Shor r-algorithm implemented in ralg, convergence hasn't been proved even for convex NL problems yet.

Also, I intended to connect Fortran-written toms615 but I got f2py error:
$ f2py -c -m toms615 toms615.f
...
File "/usr/lib/python2.5/site-packages/numpy/f2py/f2py2e.py", line 364, in run_main
raise TypeError,'All blocks must be python module blocks but got %s'%(`postlist[i]['block']`)
TypeError: All blocks must be python module blocks but got 'program'

Friday, July 18, 2008

ALGENCAN diffInt issue

After some experiments with ALGENCAN 2.0.x series I have noticed that decreasing diffInt can be very helpful.

The diffInt parameter is used for obtaining derivatives via finite-difference approximation (when no derivatives are provided by the user). Currently the default value in OO for NLP is 1e-7, and it is recommended to try values 1e-8, 1e-9, 1e-10, 1e-11.

Seems like ALGENCAN is much more sensitive to the gradient precision than other OO-connected NLP solvers.

The drawback of such a small diffInt can show up when some non-linear funcs are hard to evaluate precisely because of rather big numerical noise.

I don't know yet which diffInt value ALGENCAN uses by default, and whether it is fixed or changes somehow during the calculations. If the latter, maybe in the future I'll turn off the OO-supplied finite-difference gradient and let ALGENCAN evaluate it by itself. The drawback is the following: each separate evaluation of a non-linear func (objfunc or non-linear constraints) is very costly, because it goes through a wrapper that performs some checks, updates counters of func calls, translates Python lists, tuples, numpy matrices (that user funcs can return) into the numpy.ndarray type etc, while the derivative path is somewhat optimized: obtaining, for example, dc/dx is performed faster than separately obtaining (c(x + dx1)-c0)/diffInt, (c(x+dx2)-c0)/diffInt, ..., (c(x+dxn)-c0)/diffInt, where dxk is an all-zeros vector except for coord k with value diffInt.
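For reference, a plain NumPy sketch of the forward-difference Jacobian described above (not OO's wrapper code):

from numpy import asarray, empty, zeros_like

def fd_jacobian(c, x, diffInt = 1e-7):
    # column k is (c(x + dx_k) - c(x)) / diffInt, with dx_k nonzero only in coordinate k
    x = asarray(x, dtype = float)
    c0 = asarray(c(x))
    J = empty((c0.size, x.size))
    for k in range(x.size):
        dx = zeros_like(x)
        dx[k] = diffInt
        J[:, k] = (asarray(c(x + dx)) - c0) / diffInt
    return J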

Let me also note once again that a good user-provided gradtol value can be helpful (try 1e-1...1e-10).

Probably, in future OO versions gradtol will be renamed to gtol (less to type).

Thursday, July 17, 2008

ALGENCAN 1.0 is no longer supported

I have removed the ALGENCAN solver from the svn repository at the demand of my GSoC mentor (it made the svn repository unavailable from Win and Mac OS, since ALGENCAN_oo.py and algencan_oo.py are the same name for those OSes' case-insensitive subversion clients). Still, I think it could have waited till the next OO release.

So now you can use only ALGENCAN v 2.x, as the "algencan" solver (lower-case). You should have at least 2.0.3 (for now the latest is 2.0.4).

Tuesday, July 15, 2008

Introduction to oofun

I have added another entry to the OO Doc webpage, related to oofun. It can handle dependency info and provides some other benefits.

For more info see the entry.

Of course, the concept of oofun is not something new; for example, even some of my dept workers implement something similar using Visual C++ and Rational Rose.

I intend to continue oofun development.

Monday, July 14, 2008

ALGENCAN 2.0.3beta has been released

ALGENCAN developers have informed me that they have fixed the bug with the Python API installation, and today the corrected ALGENCAN 2.0.3 beta has been released (it works well with OO as the "algencan" solver).

Still, as I had reported, my examples (nlp_bench_1, nlp3) work better with ALGENCAN 1.0, maybe because some funcs used there are too far from being quadratic.

Saturday, July 12, 2008

changes in ralg and derivatives check


  • Some major changes for NLP/NSP ralg solver.

  • I have committed some changes to the check of user-supplied derivatives.

    Old-style difference is not a good choice because sometimes you have values to compare like 1.24*10^5 (user-supplied) and 1.23*10^5 (finite-difference), while at other times 1.24*10^-5 (user-supplied) and 1.23*10^-5 (finite-difference).

    The new way is based on comparing
    1 - derivative_user/derivative_finite_diff
    to maxViolation value

    The parameter maxViolation can be used as prob field or via for example p.checkdc(maxViolation=0.05)
    Note that x remains the 1st argument for checkdc, checkdf, checkdh (default p.x0), and can be passed as a kwarg as well: p.checkdh(maxViolation=0.05, x = 0.5*(p.lb+p.ub)), which is the same as p.checkdh(0.5*(p.lb+p.ub), maxViolation=0.05).

    So new text output is like this one below (this one is produced by updated examples/checkDerivatives.py file).

    Note that RD (relative difference) is defined as

    int(ceil(log10(abs(Diff) / maxViolation + 1e-150)))

    where

    Diff = 1 - (info_user+1e-150)/(info_numerical + 1e-150)

    (those small numbers are used to suppress zeros)
    ################################

    OpenOpt checks user-supplied gradient df (shape: (30,) )
    according to prob.diffInt = [9.9999999999999995e-08]
    lines with 1 - info_user/info_numerical greater than maxViolation = 0.01 will be shown
    df num user-supplied numerical RD
    0 +7.000e+00 -8.000e+00 3
    8 -2.291e+00 -1.029e+01 2
    max(abs(df_user - df_numerical)) = 14.9999995251
    (is registered in df number 0)
    ========================
    OpenOpt checks user-supplied gradient dc (shape: (2, 30) )
    according to prob.diffInt = [9.9999999999999995e-08]
    lines with 1 - info_user/info_numerical greater than maxViolation = 0.01 will be shown
    dc num i,j:dc[i]/dx[j] user-supplied numerical RD
    32 1 / 2 +1.417e+01 -8.323e-01 4
    max(abs(dc_user - dc_numerical)) = 14.9999999032
    (is registered in dc number 32)
    ========================
    OpenOpt checks user-supplied gradient dh (shape: (2, 30) )
    according to prob.diffInt = [9.9999999999999995e-08]
    lines with 1 - info_user/info_numerical greater than maxViolation = 0.01 will be shown
    dh num i,j:dh[i]/dx[j] user-supplied numerical RD
    58 1 / 28 -4.474e+01 -5.974e+01 2
    max(abs(dh_user - dh_numerical)) = 14.9999962441
    (is registered in dh number 58)
    ========================


Friday, July 4, 2008

    First converter is ready: llsp2nlp

    I have committed some changes, so the LLSP class has a new form:
    0.5*||C*x - d||^2 + 0.5*damp*||x - X||^2 + f^T x -> min

    subject to:
    lb <= x <= ub


    A*x <= b, Aeq*x = beq constraints are intended to be added in future.

    Using llsp2nlp converter is performed via

    r = p.solve('nlp:NLPSolverName')

    for example
    r = p.solve('nlp:ralg')

    For more details read LLSP webpage.

    Thursday, July 3, 2008

    some changes

    1. I have committed lots of changes, mostly code cleanup (still some more remain to be done) + some bugfixes.
    2. The objective function in LLSP has been changed from ||Cx-d|| (lapack-, BVLS-style) to 0.5*||Cx-d||^2 (the style of BCLS, lsqlin, and lots of other LLSP solvers). These changes are required to simplify writing the llsp2nlp converter. Also, some optional fields like those from BCLS will be added to LLSP soon.

    Sunday, June 29, 2008

    scale parameter for lpSolve

    I have committed scale parameter (bool flag) handling for lpSolve.

    It should be used as

    p.solve('lpSolve', scale = 1)
    or
    p.solve('lpSolve', scale = True)

    Friday, June 27, 2008

    bugfix for constrained NLSP graphic output

    I have committed the bugfix mentioned in OO GSoC timeline.
    Here's graphic output for a little bit modified /examples/nlsp_constrained.py.

    Tuesday, June 24, 2008

    bugfix for ralg A, Aeq constraints handling

    I have committed bugfix for ralg A, Aeq constraints handling.
    Unfortunately, the bug has been present since 0.18, when I added the Point class.

    connecting algencan 2.x beta

    Some weeks ago I had downloaded and compiled algencan 2.0 beta (the version from March 25). Now a new version is available, 2.0.1 beta, but currently I fail to compile both of these versions (I guess because of some KUBUNTU updates). Fortunately, I can use the old compiled files from 2.0 beta.

    So I have provided an OO-algencan 2.0 beta connection (I guess it should work with 2.0.1 as well).

    On the other hand, v 1.0 works better than 2.0 beta (at least for those examples I have examined). Maybe the 2.0.1 beta or the 2.0 release will work better?

    Also, pay attention to the following issues:

  • you should remove old algencan files (first of all pywrapper.so, which resides somewhere on your PYTHONPATH). Maybe I could use some tricks to allow both ALGENCAN (i.e. 1.0) and algencan (i.e. v 2.x) to be available in OO, but it requires more effort, and I don't see much sense in it because ALGENCAN support will be dropped in the future.

  • provide algencan with either bigger gradtol (1e-1...1e-5 instead of default 1e-6) or other stop criteria like maxTime, maxCPUTime, maxIter, maxFunEvals

  • you may encounter the text output "PYTHON INTERFACE ERROR: in evalg, PyEval_CallFunction returned NULL" - this one is due to stop triggering (maxTime, maxCPUTime etc, via a Python exception) of the artificially bound OO iterfcn, because currently algencan has no native one (here's the thread with my proposal to create one)

Monday, June 16, 2008

    bugfix for ipopt linear constraints handling

    I have committed a bugfix for ipopt linear constraints handling (Ax <= b, Aeq x = beq).
    I had simply forgotten to implement it, because the ipopt API has only x_L <= x <= x_U and g_L <= g(x) <= g_U constraints.

    Sunday, June 15, 2008

    OpenOpt 0.18

    Hi all,
    I'm glad to inform you about new OpenOpt release: v 0.18.

    Changes since previous release 0.17 (March 15, 2008):
    • connection to glpk MILP solver (requires cvxopt v >= 1.0)

    • connection to NLP solver IPOPT (requires pyipopt installation, that is currently available for Linux only)

    • major changes for NLP/NSP solver ralg

    • splitting non-linear constraints can benefit for some solvers

    • unified text output for NLP solvers

    • handling of maximization problems (via p.goal = 'max' or 'maximum')

    • some bugfixes, lots of code cleanup

    Regards, Dmitrey.

    major changes for ralg

    I have committed some major changes for ralg (+ some more code for prob.point, to simplify my further OO development).

    Since I have been busy with connecting ipopt and these ralg changes, patterns are intended to be implemented in next OO release.

    Below is the output of OO v0.17 ralg vs the current ralg implementation (produced by /examples/nlp_bench2.py for N=100; the benchmark picture is not reproduced here). Usually OO v0.17 ralg consumes about 5-10% more objFunc evaluations and 3-5% more nonlinear constraint evaluations.

    Thursday, June 12, 2008

    Connecting IPOPT, Part 3

    I have written automatic generation of the ipopt.opt file. It is read from the current directory; to skip creating it (and hence use all defaults or an existing ipopt.opt file from the current directory) use
    r = p.solve('ipopt', optFile = False)
    (the default value for optFile is "auto"; also, I intend to provide the possibility of loading the file from a given location, like
    optFile = "/home/dmitrey/myipoptsettings.opt"
    ).

    Currently it consists of 3 lines:
    print_level -2 (to suppress annoying huge text output)
    tol (p.ftol is put here)
    constr_viol_tol (p.contol is put here)


    I found it too long and complicated to implement all those parameters with their default values (and hence duplicate ipopt code), which can be changed by the IPOPT developers from time to time (and are hard to monitor), so to add other fields from the ipopt.opt file specification you should use "options" as a string (comma and semicolon delimiters can be used, and a space or "=" can delimit option name and value), not as Python *args or **kwargs.

    Example:

    r = p.solve('ipopt', options = 'dual_inf_tol=1.005; compl_inf_tol 1.005e-4, alpha_for_y dual-and-full')

    Let me also remind you that running ipopt from OO requires a pyipopt installation, and currently pyipopt is Linux-only. Also, some days ago the pyipopt bug causing failures on problems without non-linear constraints was fixed.

    Saturday, June 7, 2008

    numpy-related bug

    If you have encountered problems (while using OO) like this one:
    [...]
      File "/usr/lib/python2.5/site-packages/scipy/sparse/base.py", line 139, in spmatrix
        @deprecate
    TypeError: deprecate() takes exactly 3 arguments (1 given)
    then a numpy update is recommended (and using OO from the latest tarball or svn), as mentioned here.

    Friday, June 6, 2008

    Unified text output

    I have switched NLP/NSP/GLP solvers to a unified text output. It can be customized by the user (I'll publish a detailed description on the OO Doc page ASAP).

    Now it's like this for constrained problems:

    iter objFunVal log10(maxResidual)
    0 6.115e+01 2.13
    10 2.360e+01 0.72
    20 1.754e+01 0.38
    ..........
    1290 2.015e+01 -6.23

    and like this for unconstrained (or those problem-solver pairs which always have feasible iter points, for example some box-bounded problems and solvers):

    iter objFunVal
    0 -2.825e+06
    10 -7.106e+05
    ...
    1570 -4.787e-03

    Maximization problems

    I have committed some changes that allow handling of maximization problems

    objFunc -> max
    (subjected to some constraints)

    via p.goal = 'max'
    (or 'maximum')

    Currently the LP, MILP, QP classes don't have this possibility; it's only valid for NLP, NSP, GLP.

    of course, the parameter goal could be used in prob assignment or solve function as well:
    p = NLP(..., goal='max')
    or
    r = p.solve(solverName, ... (other parameters), goal='max')
    Here's the graphical output from the modified nlp3.py (using -f instead of f, already in subversion; the picture is not reproduced here).

    Tuesday, June 3, 2008

    Connecting IPOPT, Part 2

    I intend to have the OpenOpt<->IPOPT bridge (via Eric's pyipopt) ready by the next OO release.

    However, currently it works for Linux only (and I'm certainly not the one who could try to make it cross-platform) and requires hand-tuning of the makefile, first of all the paths (python and numpy directories); also, you'll probably have to add -llapack (given a lapack-dev installation, if you see the message "undefined symbol @dswap_") and -lgfortran (if you see "/usr/lib/libipopt.so.0: undefined symbol: _gfortran_concat_string"; maybe it requires a gfortran installation, if you don't have one installed). For more details see the comments here (somehow they are not mentioned in Eric's README file).

    So after resolving these issues I have IPOPT working for me (the picture referred to below is not reproduced here). It refers to nlp_bench2.py (from /examples) with a modified h3: instead of
    (x[-5]+x[-6]-2*M+1.5)**6=0
    I have taken
    (x[-5]+x[-6]-2*M+1.5)**2=0
    (because, as I noticed, IPOPT handles that constraint so badly that I thought I had bug(s) in the IPOPT connection, and hunting for those took me a long time, because the constraint is present in my other NLP tests). And (even though IPOPT's performance can probably be enhanced via ipopt.opt options) this is one more proof that having several solvers (as OO provides) is always better than having a single one, even one as famous as IPOPT, isn't it?

    Sunday, May 18, 2008

    major changes for non-linear constraints

    After some code cleanup (there is still more to be done) I have committed some changes for non-linear constraints handling. If your constraints are split, for example
    p.c = [c1, c2, c3],
    or
    p.h = [h1, h2]

    some solvers can benefit (ralg for c and h, ALGENCAN for c; the others available in OO for now cannot).

    They will need fewer evaluations of these non-linear constraints.

    Thursday, May 8, 2008

    blas and lapack issues with CVXOPT installation

    As I have concluded from my own experience with CVXOPT installation (from source, i.e. the tarball), as well as from some user comments, CVXOPT sometimes cannot find lapack and blas, even though they are present.

    I noticed that it links with -llapack and -lblas, and although I have libblas.so.3 and liblapack.so.3 (installed on my KUBUNTU 8.04, to /usr/lib, via aptitude) it doesn't work. So I copied libblas.so.3 and liblapack.so.3 to the files libblas.so and liblapack.so (same directory, /usr/lib) and now everything works ok (maybe creating soft links would be enough?).

    I hope the CVXOPT developers will take this into account; maybe this problem happens on some other OSes as well.

    P.S. If you intend to use OO<->CVXOPT<->glpk connection don't forget to set BUILD_GLPK=1 in CVXOPT setup.py file. glpk should be already installed.

    Wednesday, May 7, 2008

    Some code cleanup

    I have started some code cleanup (mentioned in the OO timeline); some changes have already been committed and more will be done from time to time, so the current OO taken from svn or the latest tarball may work unstably (of course, I have checked my tests before each code commit, but some others may fail).

    Thursday, May 1, 2008

    Python Wins "Favorite Scripting Language" Award

    (The information has been taken from PSF blog)
    --------------------------------------------------------------------------------------
    Python Wins "Favorite Scripting Language" Award.

    May 1, 2008 Linux Journal announced their 2008 Readers' Choice Awards today, and we are happy to say that Python won the Favorite Scripting Language category with 28.9% of the vote. PHP, Bash and Perl (in that order) won honorable mentions. Thanks to everyone who took the time to register their votes. Python's popularity does seem to be climbing this year, as attendance at the recent Chicago PyCon confirmed with a 77% increase in attendance. Let's hope that leads to career opportunities for Python users!

    --------------------------------------------------------------------------------------
    Favorite Scripting Language:
    • Python (28.9%)

    Honorable Mentions

    • PHP (21.7%)
    • bash (19.8%)
    • Perl (17%)

    Wednesday, April 30, 2008

    Using OpenOpt with KUBUNTU 8.04

    After a KUBUNTU 8.04 installation + python-matplotlib from the repository I got an error message "can't find gtk module" (while using p.plot=1). After installing python-gtk2 from here everything works ok.

    Tuesday, April 29, 2008

    new OO MILP solver: glpk

    Another MILP solver has been connected - glpk. This one can handle binary constraints: x_j should be from {0, 1} for all j from p.binVars (lpSolve cannot, but maybe setting for those coords
    lb[j], ub[j] = 0, 1
    and demanding them to be integers (p.intVars = [...])
    can be helpful; see the sketch below).

    Requires CVXOPT ver >= 1.0 + Python ver >= 2.4 (code contains Python set) + glpk.
    See more details + example in OO MILP webpage (mentioned above).
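A minimal sketch of the lpSolve workaround mentioned above (the index list is hypothetical; p is an already constructed MILP problem):

binIndices = [0, 3, 7]          # indices of variables that must be binary (hypothetical)
for j in binIndices:
    p.lb[j], p.ub[j] = 0, 1     # box them into [0, 1] ...
p.intVars = binIndices          # ... and demand integrality, so their values end up in {0, 1}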

    Sunday, April 27, 2008

    CVXOPT 1.0 has been released

    CVXOPT developers have informed of release 1.0.

    As I had mentioned, one of the changes is an interface to the GLPK integer LP solver. I intend to have it connected to OO by the next OO release. Currently the single MILP solver connected to OO is lp_solve (as lpSolve).

    Friday, April 25, 2008

    OpenOpt will participate in GSoC 2008

    My OpenOpt-related application has been accepted for GSoC 2008.
    This year assigned mentors are Alan G Isaac (same as GSoC 2007) and (co-mentor) Nils Wagner.

    Here's the intended schedule; maybe I'll start on some of its chapters earlier.

    Tuesday, April 1, 2008

    Fortress v. 1.0 has been released

    Today the Fortress language v 1.0 (a Fortran successor from Sun Microsystems, sponsored by the DARPA HPCS project along with IBM X10 and Cray Chapel) has been released. I don't know how stable it is - IIRC in 2006 the Fortress developers had promised a rather stable one only in 2009-2010. Note that this is still the Java-based version; native code compilation is intended to be done some time later.

    Here's a letter from the mailing list below. Note also that along with the Emacs plugin mentioned there, Fortress already has plugins for Eclipse and NetBeans.

    ---------------------------
    Fortress Community Members:

    Today, we are releasing the first version of the Fortress specification with a compliant implementation: Fortress 1.0. Both the new specification and the implementation are available from the project website:

    http://projectfortress.sun.com

    The 1.0 implementation is available both as a stand-alone download and through a Subversion repository. Please follow the instructions on the website to get it.

    Our tandem release of a specification and matching interpreter is a major milestone for the project; it is a goal we have been working toward for some time. All Fortress source code appearing in this specification has been tested by executing it with the open source Fortress implementation. Moreover, all code has been rendered automatically with the tool Fortify, also included in this distribution. (In case you haven't used it yet, Fortify is an open source tool contributed by Guy Steele that converts Fortress source code to LaTeX.) Also note that this release includes a Fortress mode for Emacs, contributed by Yuto Hayamizu, and over 10,000 lines of Fortress library code, contributed both by Sun and by Fortress community member Michael Spiegel.

    Our reference implementation has evolved gradually, in parallel with the evolution of the language specification and the development of the core libraries. In order to synchronize the specification with the implementation, it was necessary both to add features to the implementation and to drop features from the specification. Most significantly, most static checks in the implementation are currently turned off, as we are in the process of completing the static type checker and the type inference
    engine. Static constraints are still included in the specification as documentation. Contrary to the Fortress Version 1.0.beta, inference of static parameter instantiations is based on the runtime types of the arguments to a functional call. Support for syntactic abstraction is not included in this release. We do not yet support nontrivial distributions, nor parallel nested transactions. Moreover, many other language features defined in the Fortress Language Specification, Version 1.0.beta have been elided. Many of these features require additional research before they can be implemented reliably; this research and development is a high priority.

    With this release, our goal in moving forward is to incrementally add back features taken out of the specification as they are implemented. In particular, all language features included in the Fortress Specification version 1.0.beta remain goals for eventual inclusion in the language (perhaps with additional modification and evolution of their design). By proceeding in this manner, we hope that our implementation will be useful for more tasks and will comply with the public specification. Moreover, the Fortress community will be better able to evaluate the design of new features, as users will be able to use them immediately, and developers will be able to contribute to the implementation effort more easily, as they will be able to build off of a relatively stable and well-specified base.

    Moving forward with the implementation, in concert with the open source community, our goal is to build off of the infrastructure of our interpreter to construct an optimizing Fortress compiler and to achieve our long-standing goal of constructing a new programming language with high performance and high programmer productivity, owned by the community that uses it, and able to grow gracefully with the tasks it is applied to.

    Thanks to all those who sent feedback on earlier versions; many of your suggestions have influenced changes in this new version. Please keep the feedback coming!

    Watch this space for future news from the project. -- Eric Allen

    Check out Fortress! http://projectfortress.sun.com
    ---------------------------

    See also: my impressions of Python, Fortress and some other languages

    Thursday, March 27, 2008

    GSoC tasks for OO further development

    hi all,
    my GSoC 2007 mentor Alan G Isaac recommends specifying in my proposal which tasks from the OO TODO list and/or beyond it are intended to be done during GSoC 2008. So if you have feature request(s) and/or other OpenOpt enhancement proposal(s), they can be considered for appending to the list as well, until the student application deadline (March 31).

    Saturday, March 22, 2008

    constraints handling for NLSP solver nssolve

    I'm glad to inform that NLSP solver nssolve now can handle all types of constraints; user-supplied derivatives to f, c, h can be handled as well.

    See NLSP constrained example.

    Also, a small bugfix for nssolve iprint parameter handling has been made.

    However, graphic output for constrained NLSP does not work properly yet; it's intended to be fixed in the future.

    Connecting IPOPT, Part 1

    A Ph.D. student from Washington University has informed me that he spent just a single day and already has a working Python-IPOPT connection (of course, it requires some more time to make it good enough). It is implemented via Python's C-API, but maybe in the future numpy's C-API or something else will be used.

    Unfortunately, I still have some problems with the build; for me it yields "/usr/lib/libipopt.so.0: undefined symbol: _gfortran_concat_string".
    You could take a look at the code yourself; it is available here

    You must have IPOPT installed (I have tried 3.3.1 and 3.3.5; note that it requires a linear solver installed, such as MUMPS); modify the links to IPOPT directories in batch.sh and run the file (note that for now the code is usable on Linux only).

    Thursday, March 20, 2008

    paypal does not allow payments to Ukraine

    I have been informed that the PayPal link from the OO page refuses donations to OpenOpt, because PayPal does not allow payments to Ukraine. So I will try to find another approach. Ideas or leads are very welcome.

    ALGENCAN connection bugfix

    I have fixed a bug in ALGENCAN <-> openopt connection.
    svn and tarball from OO install page have been updated.

    to openopt users

    hi all,

    I decided to create a list of openopt users, similar to TOMOPT's one (i.e. the one mentioned on the OO start page)

    could anyone provide some info like the one from here?

    It would increase openopt's chances of getting into GSoC 2008 (student proposals start in a few days) or maybe another program.

    Regards,
    Dmitrey.

    Sunday, March 16, 2008

    mb CVXOPT will include glpk milp solver

    I have contacted the CVXOPT developers (Lieven Vandenberghe, Joachim Dahl) about making the glpk MILP solver available in CVXOPT (since they already have the glpk LP solver connected).
    They have answered: maybe the next CVXOPT version will include a connection to the glpk MILP solver (then it will be connected to OO as well).

    Saturday, March 15, 2008

    OpenOpt 0.17

    We're pleased to announce:
    OpenOpt 0.17 (release), free (license: BSD) optimization framework for Python language programmers, is available for download.

    Changes since previous release (December 15, 2007):
    • new classes: GLP (global problem), MMP (mini-max problem)

    • several new solvers written: goldenSection, nsmm

    • some more solvers connected: scipy_slsqp, bvls, galileo

    • possibility to change default solver parameters

    • user-defined callback functions

    • changes in auto derivatives check

    • "noise" parameter for noisy functions

    • some changes to NLP/NSP solver ralg

    • some changes in graphical output: initial estimations xlim, ylim

    • scaling

    • some bugfixes

    changes in auto derivatives check

    Old-style p.check.df=1, p.check.dc=1, p.check.dh=1 is no longer valid. It was inspired by MATLAB syntax.

    So for now you should use p.checkdf(), p.checkdc(), p.checkdh() instead. If you want to check 1st derivatives at a point x (i.e. other than p.x0) then you should use p.checkdf(x), p.checkdc(x), p.checkdh(x). See the updated example checkDerivatives.py from openopt/examples (or the link from the OO Doc webpage).
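A minimal sketch of the new calls (a toy problem; the import path is the one used at that time):

from numpy import asarray
from scikits.openopt import NLP

f  = lambda x: (asarray(x) ** 2).sum()
df = lambda x: 2 * asarray(x)
p = NLP(f, x0 = [1.0, 2.0], df = df)
p.checkdf()              # check the user-supplied df at p.x0
p.checkdf([0.5, 0.5])    # check it at another point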

    Friday, March 14, 2008

    new LLSP solver: bvls

    I have searched for free constrained LLS solvers for some purposes of my own - connecting them to scikits.openopt and (in the future) using them for solving ralg/lincher subproblems.

    So now a new LLSP solver is available: bvls. It requires separate installation, see the OO LLSP webpage for more details. Other well-known routines - NNLS and WNNLS - are BVLS ancestors.

    Thanks to Alan G Isaac and the solver authors (Robert L. Parker & Philip B. Stark) who came to an agreement to change the bvls license from free-for-non-commercial to GPL and then, moreover, to BSD.

    Also, thanks to Pearu Peterson for help with f2py usage.

I have tried some examples with nVars ~ 400...500, and bvls returns better results (r.ff) than translating the problem to NLP and solving it via scipy_lbfgsb or scipy_tnc (you can try /examples/llsp_2.py for more details; as for ralg, the results are the same, but the elapsed time is currently much greater).
The solver is intended to be connected to a future scipy version. Unfortunately, the BVLS routine handles dense problems only (which is quite bad for my ralg/lincher purposes).

There are other free routines that could be considered: toms/587 (I don't know whether it is intended for dense problems only or for sparse ones as well) and BCLS. The latter is GPL-licensed (written in ANSI C) and can handle sparse problems, and moreover an implicit A matrix via user-defined functions for Ax and A^T x. BCLS consists of many files; it has a convenient MATLAB API (a single standalone function each for implicit and explicit matrix A), but calling it via its C API is very inconvenient (one function is not enough), so I can't connect it to Python via ctypes; the task is too complicated.

As for toms/587, my OS (Kubuntu) yields the same bug as this one (with ALGENCAN), so I intend to try it again with the next Kubuntu version.

    Sunday, March 9, 2008

    maybe ralg-related bug in recent svn changes

Some users have informed me that a recent openopt version from svn yields a bug with the ralg solver:

    OpenOptException: incorrect solver is called, maybe the solver "ralg" is not installed

I can't reproduce the bug on my OS (Kubuntu 7.10 & Ubuntu 7.04), but I'll try to find and fix it ASAP.

    Thursday, February 21, 2008

    goldenSection vs scipy.optimize fminbound

For some numerical experiments (planned by our chief Petro I. Stetsyuk and me) with the r-algorithm (implemented in the OO ralg solver), a good line-search optimizer is required.

As you know, scipy.optimize has a single-variable box-bounded optimization routine, "fminbound" (connected to OO as scipy_fminbound). Unfortunately, it has 2 drawbacks:
- it uses a mix of golden section and quadratic (parabolic) interpolation, and the latter works very badly for the non-smooth and/or noisy funcs that we deal with;
- sometimes the returned solution is outside the lb-ub bounds.

So I have implemented a pure golden section algorithm as the OO solver "goldenSection" (nothing special, just several lines of code).
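For the record, a pure golden section iteration looks roughly like this (a generic sketch, not the actual OO solver code):

from math import sqrt

def golden_section(f, lb, ub, xtol=1e-6):
    # pure golden section search on [lb, ub]: no parabolic interpolation,
    # so it tolerates non-smooth and noisy objectives and never leaves the bounds
    ratio = (sqrt(5.0) - 1.0) / 2.0           # ~0.618
    a, b = lb, ub
    x1, x2 = b - ratio*(b - a), a + ratio*(b - a)
    f1, f2 = f(x1), f(x2)
    while b - a > xtol:
        if f1 > f2:                           # minimum is in [x1, b]
            a, x1, f1 = x1, x2, f2
            x2 = a + ratio*(b - a)
            f2 = f(x2)
        else:                                 # minimum is in [a, x2]
            b, x2, f2 = x2, x1, f1
            x1 = b - ratio*(b - a)
            f1 = f(x1)
    return 0.5*(a + b)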

    Monday, February 11, 2008

    "noise" parameter for noisy functions

1. For NSP problems a new parameter has been implemented: noise (the current default value is 0; maybe it will be changed in the future). Currently ralg can handle the parameter for the objective function only, not for the non-linear constraints c and h.
Also, the parameter can be used with nssolve and nsmm.

examples/nssolveVSfsolve_1.py has been updated (noise=1e-8 added to the prob definition); see the usage sketch below.

    2. Some more changes to ralg.
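And here's the kind of prob definition meant in item 1 (the objective and values below are made up for illustration; only the noise kwarg matters):

from numpy import asfarray, absolute, sin
from scikits.openopt import NSP

# non-smooth objective with a tiny high-frequency perturbation playing the role of noise
f = lambda x: absolute(x).sum() + 1e-8*sin(1e6*x).sum()
p = NSP(f, asfarray([10.0, -5.0]), noise=1e-8)
r = p.solve('ralg')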

    Thursday, February 7, 2008

    some changes to NLP/NSP solver ralg

Some changes for ralg have been committed; they affect both the trajectory and the stop criteria (for constrained problems).

    Scaling

The possibility to use scale factor(s) is undoubtedly a "MUST HAVE" feature for any decent optimization solver or framework.

Scaling is now available in OO (an update from svn or the tarball is required), and an example of scale factor usage (p.scale; "scaleFactor" would be too long to type) has been committed to the OO Doc page.

Note that using p.diffInt as a vector is also allowed.
However, using scale is preferable: it affects xtol for the solvers that use native OO stop criteria (ralg, lincher, nssolve, nsmm, maybe some others).
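An illustrative sketch (the objective, the values and even the direction of the factors here are my own assumptions; the committed OO Doc example is the authoritative one):

from numpy import asfarray
from scikits.openopt import NLP

# coordinates of very different magnitude: x[0] ~ 1, x[1] ~ 1e6
f = lambda x: (x[0] - 1.0)**2 + (x[1]/1e6 - 2.0)**2
p = NLP(f, asfarray([0.5, 1.5e6]))
p.scale = asfarray([1.0, 1e6])    # assumed: one factor per coordinate, reflecting its typical magnitude
r = p.solve('ralg')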

    Tuesday, February 5, 2008

    MMP solver nsmm: constraints have been implemented

Constraints handling for the MMP (mini-max problem) solver nsmm has been implemented.
Here's the text and graphical output of the updated mmp example:

    starting solver nsmm (license: BSD) with problem unnamed
    Niter ObjFun log10(maxResidual)
    0 6.47e+03 1.9
    10 6.45e+03 -1.1
    20 6.46e+03 -3.6
    30 6.46e+03 -5.8
    nsmm has finished solving the problem unnamed
    istop: 3 (|| X[k] - X[k-1] || < xtol)
    Solver: Time Elapsed = 0.26 CPU Time Elapsed = 0.21
    Plotting: Time Elapsed = 3.56 CPU Time Elapsed = 2.99
    objFunValue: 6461.0983572120513 (feasible, max constraint = 9.66019e-07)

    Sunday, February 3, 2008

    bugfix for some scipy NLP solvers (unconstrained)

Another bugfix has been committed (the bug appeared due to some recent svn changes for scipy_cg, scipy_ncg, scipy_powell, scipy_bfgs, scipy_fminbound).
Thanks to Nils Wagner for reporting the issue.
It was caused by the definition of the exception "isSolved" migrating from BaseProblem.py to ooMisc.py.

    [Linux] ALGENCAN bug, maybe due to latest glibc

Some days ago I accepted an offer to install a recent glibc version from the Ubuntu software updates channel, and I guess that's the cause of this bug (which appears while starting ALGENCAN):
    StdErr: *** glibc detected *** /usr/bin/python: free(): invalid pointer:
    0x00000000008362f0 ***

    bugfix for "incorrect func index type" (some OS/Arch)

Some users informed me that on some OS/Arch combinations the error "incorrect func index type" is raised.
As far as we understand for now, it's due to a strange numpy.argmax() return type (numpy.int32 with type id 1810419808, which differs from 1810419616). For more info see the discussion in the scipy-dev mailing list: url1, url2.
Some committed changes (also available from the OO install page -> openopt.tar.gz) allow avoiding the bug.

    Saturday, February 2, 2008

    how to change solver default parameters

    A new OO doc page entry has been committed:
    how to change solver default parameters (via kwargs for p.solve()).

AFAIK it's especially important for global solvers, like the GA-based "galileo". However, currently only 3 of that solver's parameters are exposed for modification:
population (default 15)
crossoverRate (default 1.0, which means always do crossover)
mutationRate (default 0.05, which means "not very often")
There are more parameters that could be exposed, but I lack free time and experience in global optimization.
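For example, setting those three from p.solve() would look like this (the GLP construction below is schematic, just to make the snippet self-contained):

from numpy import asfarray, cos
from scikits.openopt import GLP

# a multi-extremum test objective on a finite box
f = lambda x: (x**2).sum() + 10.0*cos(5.0*x).sum()
p = GLP(f, lb=asfarray([-10.0, -10.0]), ub=asfarray([10.0, 10.0]), maxIter=200)
r = p.solve('galileo', population=25, crossoverRate=0.9, mutationRate=0.1)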

The old way, p.solverParameters.(solvername) = {...}, has been removed.

    Friday, February 1, 2008

    solver settings as kwargs for p.solve()

An openopt user asked whether he could modify the GLP solver galileo's parameters.
Currently it's possible to access some ralg parameters, for example via
    p=NSP(...)
    p.solverParameters.ralg={'h0':0.1, 'alp':3} #I suspect it's too long to type, isn't it?

(h0 is the initial step length approximation, default 1; alp is the space dilation parameter, default 2, fixed in the Python implementation, while the Fortran version handles a non-fixed value in a more advanced way.)

For modifying other solvers' parameters, some minor changes need to be made.
Also, I have already decided how to use kwargs (and maybe some args in the future; I haven't decided yet which ones) in solve(). I think it will be something like
    r = p.solve('galileo', crossoverRate=0.80, mutationRate=0.15)
So, if a kwargs parameter is in dir(solverName.solverName) (i.e. it is a solver parameter, as in the ralg case above:
>>> import scikits.openopt, ralg
>>> dir(ralg.ralg)
    ['__alg__', '__authors__', '__constraintsThatCannotBeHandled__', '__decodeIterFcnArgs__', '__doc__', '__economyMult__', '__expectedArgs__', '__homepage__', '__info__', '__init__', '__isIterPointAlwaysFeasible__', '__iterfcnConnected__', '__license__', '__module__', '__name__', '__solver__', 'alp', 'getRalgDirection', 'getRalgDirection2', 'h0', 'hmult', 'nh', 'q1', 'q2']
), then it will change the value of that solver setting; otherwise, if it's in dir(p) (i.e. it is a prob parameter, like maxIter=15), it will change the prob field value; otherwise an error message will be raised.
    I intend to implement the changes during this week.
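In rough pseudocode, the dispatch rule described above would be something like this (a sketch only, not the committed implementation; the helper name is made up):

def apply_solve_kwargs(p, solver, **kwargs):
    # route each keyword either to the solver instance or to the prob instance
    for name, value in kwargs.items():
        if name in dir(solver):
            setattr(solver, name, value)     # solver parameter, e.g. h0, alp
        elif name in dir(p):
            setattr(p, name, value)          # prob field, e.g. maxIter
        else:
            raise ValueError('unknown parameter: ' + name)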

    Thursday, January 31, 2008

    About prob structure redefinition

    I have noticed 2 lines of code sent by an openopt user:

    #unfortunately, as openopt is now, this needs to be defined again
    #each time it is solved
They are followed by a function for prob redefinition:
    def setupF2dProblem(init_point, maxIterates):
    ...
    return prob

Let me explain the situation once again. Maybe you have already noticed error messages when trying to use a prob instance one more time: r = p.solve(...). This problem is due to Python issue 1515: deepcopy doesn't copy instance methods (url).
A prob instance contains lots of function handles referring to each other, and while running p.solve(...) they change significantly, and some new fields (function handles, True/False flags, some new values such as Python lists, arrays, etc.) appear. It's impossible for me to undo all those changes and reliably clean the prob instance of all that stuff after solving finishes. It could cause unpredictable bugs, appearing only from time to time, and hunting for that kind of bug is very difficult.
So I decided to wait until the Python developers fix the bug. They pointed me to the url for issue 1515 during the Python bug day, when I submitted the bug report (pastebin.com will remove the code some weeks later). They recommended a temporary solution: to add the line
    d[types.MethodType] = _deepcopy_atomic
to the copy.py file (as mentioned in the url provided), but modifying Python core sources is an inappropriate solution for this case: I can't demand it from each OO user (moreover, not all of them have write access to the Python core files). I haven't tried it myself either, so I don't know whether it helps.
So it would be nice to raise the bug severity from "normal" (as it is assigned now) to something higher; maybe then the Python developers would increase their efforts to fix it. Let me also attach the bug report in a comment, since it will soon disappear from pastebin.com.
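Until the issue is fixed, the practical workaround remains what that user does: rebuild the prob instance before each solve, e.g. (a generic sketch with a made-up objective):

from numpy import asfarray
from scikits.openopt import NLP

def make_prob(x0):
    # build a brand new prob instance for every solve instead of reusing a solved one
    f = lambda x: ((x - 3.0)**2).sum()
    return NLP(f, asfarray(x0))

r1 = make_prob([0.0, 0.0]).solve('ralg')
r2 = make_prob(r1.xf).solve('ralg')      # e.g. restart from the previous solution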

    Tuesday, January 29, 2008

    New OO class: GLP (global problem)

A new OO class has been created: GLP (global problem).
Currently the single GLP solver is "galileo" (by Donald Goodman; algorithm: genetic algorithm; license: GPL).
I have included the galileo code in OO. For now it can solve only continuous problems with finite box bounds. I intend to add handling of all-discrete problems as well (AFAIK this solver can't handle mixed discrete-continuous ones).

For GLP the default is p.plotOnlyCurrentMinimum = True, while for all other classes the default value is False.
Currently the stop criteria for GLP are maxIter, maxFunEvals, maxTime, maxCPUTime, fEnough. AFAIK xtol and ftol are inappropriate stop criteria for the GLP class. I intend to add something like maxNonSuccess = 15 (the number of iterations without obtaining a better value).
    Here's graphical output for the example.

    Monday, January 28, 2008

    SAGE online Calc: very convenient

Today I created a (free) account for the SAGE Notebook (an online calculator) and found it very convenient. Here's a snapshot from the Mozilla browser:

    Some major changes to ralg

Some major changes to ralg have been made. Almost all tests work better now.
Also, from now on ralg will take into account whether you define the problem as NLP or NSP (and hence the behaviour and trajectory will differ).

    Thursday, January 24, 2008

    bugfix related to "ratAll undefined"

Some days ago I forgot to commit some changes to svn.
If you have encountered "ratAll undefined", update from svn (or try using the updated openopt.tar.gz file).

    Monday, January 21, 2008

    some more changes to ralg

Some more changes have been committed to enhance solution precision.

    Some changes in ralg

Some changes have been committed to the NLP/NSP solver ralg (first of all to improve solution precision). They make it more similar to the Fortran ralg version; however, the latter's equality constraints handling is still better than that of the Python one.

    Sunday, January 20, 2008

    Some changes in graphics

Since matplotlib has some problems with axhline + autoscaling, especially when using 2 subplots, I have made some changes.
So now examples/nlp_bench_1.py works without the backward xlim modifications that occurred earlier.
Let me also remind you: using initial estimations for xlim, ylim can be very useful.

    Friday, January 18, 2008

    New OO class: MMP (mini-max problem)

    New OO class has been created: MMP (mini-max problem).

Currently the single OO solver available for MMP is nsmm ("non-smooth mini-max"; unconstrained; can handle 1st derivatives and non-smooth, ill-conditioned and noisy funcs).
It defines the function
F(x) = max_i {f[i](x)}
and solves the NSP
F(x) -> min
using the solver ralg.

It's very far from specialized solvers (like MATLAB's fminimax), but it's better than having nothing at all.
This solver is intended to be enhanced in the future (first of all, by adding constraints).
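A minimal MMP sketch in that spirit (the list-of-funcs constructor form and the x0 argument are my assumptions; see example/mmp_1.py for the real usage):

from numpy import asfarray
from scikits.openopt import MMP

# minimize F(x) = max(f0(x), f1(x), f2(x))
funcs = [lambda x: (x[0] - 1.0)**2 + x[1]**2,
         lambda x: x[0]**2 + (x[1] - 2.0)**2,
         lambda x: (x[0] + 1.0)**2 + (x[1] + 1.0)**2]
p = MMP(funcs, asfarray([0.0, 0.0]))
r = p.solve('nsmm')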
Here's the graphical output of example/mmp_1.py:

    Tuesday, January 15, 2008

    OpenOpt TODO list

    hi OpenOpt users,
as you probably know, OpenOpt still lacks financial support, so we should take care of OO's further evolution ourselves. Of course, I have applied for funding (and will keep applying), but some Ukrainian organizations answered that they are not interested in Python scientific software, some (foreign ones) explained that I'm required to be a resident of their country, and several (like Enthought) haven't answered at all (OK, I don't mind). AFAIK the PSF doesn't accept proposals (at least not for the last several years), so the GSoC 2007 donation (that program has finished) remains the only one for now, and I'm not sure I'll be able to participate in GSoC 2008, because I may have already finished being a student by the required date (April 11 or thereabouts).

Unfortunately, OpenOpt for Python is related neither to my job nor to my graduation work (the latter is MATLAB code for the electric power system state estimation problem), so I can't devote too much time to OpenOpt, at least for now.

So, here's a list of OpenOpt TODO tasks; some of them can be done by you to speed up OpenOpt development.
Making donations would also help get these and other tasks done.
    Regards, Dmitrey.

    Friday, January 11, 2008

    New doc page entry: description of output struct

A new entry has been added to the OpenOpt doc webpage: a description of the fields of the output structure r (r = p.solve(...)).

    Link

    Thursday, January 10, 2008

    Changes to variable names, iterFcn and user-defined callback

    Some changes:
    • "nIter", "r.nEvals" have been renamed to "iter", "r.evals"
    • r.evals now has "iter" key, so it looks like
    >>> dir(r)
    ['__doc__', '__module__', 'advanced', 'elapsed', 'evals', 'ff', 'isFeasible', 'istop', 'iterValues', 'msg', 'rf', 'solverInfo', 'stopcase', 'xf']
    >>> r.evals
    {'c': 285, 'dh': 215, 'f': 285, 'df': 215, 'h': 285, 'dc': 208, 'iter': 248}
    >>> r.elapsed
    {'plot_time': 3.71, 'solver_cputime': 0.5, 'solver_time': 0.54, 'plot_cputime': 3.44}
    >>> r.msg
    '|| gradient F(X[k]) || < gradtol'
    >>> r.solverInfo
    {'alg': 'Augmented Lagrangian Multipliers', 'homepage': 'http://www.ime.usp.br/~egbirgin/tango/', 'license': 'GPL', 'authors': 'J. M. Martinez martinezimecc [at] gmail.com, Ernesto G. Birgin egbirgin [at] ime.usp.br, Jan Marcel Paiva Gentil jgmarcel [at] ime.usp.br'}
• a new field p.rk (the current max residual) has been added; it can be used in user-defined callback function(s)
    • bugfix related to NLSP and LSP classes ("struct has no classIterFuncs field")
• For solvers w/o a native iterfcn (iteration-to-iteration callback function), using p.connectIterFcn('df') is no longer required (it will be done automatically; however, the user can connect it to another func).
• now all stop cases except ftol and xtol can be handled by an artificially connected iterfcn (as for ALGENCAN, scipy_slsqp), for example maxCPUTime, maxTime, fEnough, maxFunEvals, maxIter
The example of user-provided callback function(s) has been updated according to the latest changes.
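Just to illustrate where p.rk fits in, a callback could look roughly like this (the attachment point and the signature below are assumptions of mine; the updated OO example is authoritative):

def my_callback(p):
    # inspect the current state once per iteration; p.rk is the current max residual
    print 'current max residual: %g' % p.rk
    return False                 # assumed convention: True would ask the solver to stop

p.callback = my_callback         # assumed attachment point; see the OO example for the real one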

    Monday, January 7, 2008

Jan 2008: Python continues to grow (TIOBE)

Today the TIOBE index has been updated for Jan 2008.
The Python language continues to grow:

    6 8 Python 5.538% +2.04% A

6th position this month, 8th a year ago, a 5.538% rating this month (up 2.04 percentage points during 2007).

Now, what about other free scripting languages?
Ruby goes down a little:
    11 10 Ruby 2.345% -0.17% A
However, Lua is growing quickly:
    16 46 Lua 0.579% +0.48% A--

Fortunately, Python scientific software is much more mature than the Lua equivalents, at least for now.
BTW, I remember reading a message about a Lua-Python 2-way bridge (free software).
Note that Ruby and Lua were created after Python, so some Python drawbacks were taken into account (the same goes for Python vs Perl and PHP).

PHP is growing, but the language is so inconvenient (as is Perl) that it will no doubt be displaced.
    4 5 PHP 9.195% +1.25% A
    7 6 Perl 5.247% -0.99% A

    Java still remains 1st, C and C++ go down:
    1 1 Java 20.849% +1.69% A
    2 2 C 13.916% -1.89% A
    5 3 C++ 8.730% -1.70% A


    Also, Fortran and MATLAB should be mentioned:

    24 Fortran 0.305%


    26 MATLAB 0.241%

However, no one knows which scientific language will displace all the others: will it be Sun's Fortress (also known as a Fortran successor, though the similar name is almost the only similarity), IBM's X10 (parallel computation libraries for Java), Cray's Chapel (C-like, somewhat similar to the D or Pyrex languages), MS F#, or maybe another one, unknown for now?

    Or, maybe, it will be Python language? :)

On the other hand, the use of Python as a scientific language may decline, like the Ada language was displaced by C.

I write little programs from time to time using the Fortress language; BTW, I have written a ralg solver in it. The Fortress FAQ says "Fortress is a general language, not only a mathematical one". But the language requires strict type declarations.
    For example:
    a1 = array[\RR64\](3,4).fill(1.5)
    b := a1 + a1 (or b = a1 + a1)
yields an error in Fortress ("unbound variable"; it reminds me of my OCaml programming days). But the equivalent works perfectly in Python or MATLAB.
Almost all languages growing quickly in the TIOBE index (Python, Ruby, Lua, etc.) are free of strict type declarations: users prefer to type as little code as possible, and a few percent of slowdown matters less.

One more serious Fortress drawback is that loops are parallel by default:
    for i <- 0#n, j <- seq(0#n)
means parallel over i. So users are provoked into using parallel loops where sequential ones would be much safer. I don't want to lose sleep suspecting that someone on a developer team turned out to be too lazy to write "seq()" everywhere it should be written, and then an aircraft crashes, etc.
For a "general purpose language", sequential loops should be the default. I had proposed using
for i <= 0#n, j <- 0#n instead, but no answer has been received as to whether it was accepted or rejected.

The Fortress developers promise to deliver a stable compiler around 2009-2010. And maybe Python, F# or other languages will have much stronger positions (as scientific languages) by that date.

    Friday, January 4, 2008

    experiment: google search for optimization solvers

I have noticed that some people come to the openopt webpages from Google searches for something like "free optimization solver", especially "free NLP solver" (since NL problems are the most common in numerical optimization). So I have done some Google searches, and interesting results have been obtained:

    • google "free optimization solvers" yields OpenOpt for Python 7th position from 81700:




    • google "NLP solver" yields 4th position from 67300 (after gams.com, colan.org, optcontrolcentre.com):
    • google "free NLP solver" ... drumroll... 1st position from 5330!
    Be sure, I didn't use any hacks to enhance openopt positions in web search engines, as well as any other dishonest tricks. To prevent some side effects (I constantly had been redirected to google.com.ua), I did this search via connecting to a US proxy.