2 new graphical output params have been added:

xlim {(nan, nan)}, ylim {(nan, nan)} - initial estimates for the graphical output borders

You can use, for example, p.xlim = (nan, 10), p.ylim = [-8, 15] or p.xlim = [inf, 15]; only real finite values will be taken into account.

Of course, you can use p = NLP(..., xlim=[8, 15], ylim=asfarray((nan, 15)), ...) as well.

For constrained problems, ylim affects only the 1st subplot.
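A minimal sketch of the "only real finite values are taken into account" rule described above (apply_lim is my illustrative helper, not part of the OpenOpt API):

```python
import numpy as np

def apply_lim(user_lim, default_lim):
    # Hypothetical helper (not OpenOpt's code): keep the user's border only
    # where it is a real finite value, as the xlim/ylim description says.
    user = np.asarray(user_lim, dtype=float)
    result = np.asarray(default_lim, dtype=float).copy()
    mask = np.isfinite(user)
    result[mask] = user[mask]
    return result

# p.xlim = (nan, 10): only the upper border 10 is taken into account
print(apply_lim((np.nan, 10), (0.0, 1.0)))
# p.xlim = [inf, 15]: inf is not finite, so only 15 is used
print(apply_lim((np.inf, 15), (0.0, 1.0)))
```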

## Wednesday, December 26, 2007

## Tuesday, December 25, 2007

### Some changes in website and code

- some changes in the website; some pics added (Naum Z. Shor, me; I hope to add a Petro I. Stetsyuk picture ASAP)

- some changes related to ralg stop criteria (constrained problems case)

### NLP solver scipy_slsqp

One more constrained NLP solver is ready: scipy_slsqp

It requires updating scipy from svn, 25-Dec-2007 or later (there are some bugfixes related to fmin_slsqp)

Thanks to Rob Falck for connecting the solver to scipy.

## Saturday, December 15, 2007

### OpenOpt 0.15

We are glad to inform:

OpenOpt 0.15 (release) is available for download.

Changes since previous release (September 4):

- some new classes
- several new solvers written
- some more solvers connected
- NLP/NSP solver ralg can handle constrained problems
- some bugfixes
- some enhancements in graphical output (especially for constrained problems)

## Thursday, December 13, 2007

### fixed a bug related to some cvxopt solvers

A bug related to some cvxopt solvers has been fixed. It was due to some problems with how asfarray(cvxopt.base.matrix) works. I hadn't observed the bug with earlier numpy/cvxopt versions; the lp_1.py example had worked correctly.

## Wednesday, December 12, 2007

### coming soon: another NLP solver, scipy_slsqp

Rob Falck has informed me that he is currently implementing the Sequential Least Squares Quadratic Programming (SLSQP) optimizer by Dieter Kraft in scipy svn (see more details here).

I intend to connect it to OO ASAP and provide an updated NLP benchmark example. The solver algorithm is similar to lincher but has been implemented much more thoroughly (and in Fortran), so it should work much better.

## Sunday, December 9, 2007

### minor changes in graphics

As I had already mentioned, matplotlib still suffers from some drawbacks.

To avoid one of them, one more OO graphics parameter has been implemented: show (boolean). It determines whether OpenOpt has to call the pylab.show() function after the solver finishes.

p.show = {True} | False | 0 | 1

or p = NLP(f, x0, ..., show=1, ...) (other OO classes can be used as well)

It should be set to False in certain cases, for example when benchmarking several solvers, or when some code should run immediately after the solver finishes, without waiting for the user to close the matplotlib window. When all calculations are done and you finally want to handle your picture (resize, save, etc.), you should call it yourself:

import pylab

pylab.show()

or

from pylab import show

show()

or

from pylab import *

show()

I have to note: the MATLAB plotting tool doesn't suffer from this inconvenience; in the MATLAB/Octave version of OO there was no need to implement the feature. Maybe in the future this pylab drawback will be fixed and the "show" parameter will become unused.

## Monday, December 3, 2007

### constraints handling for NLP/NSP solver ralg

I have implemented some changes to ralg; now it can handle constrained problems (as well as 1st derivatives df, dc, dh; subgradients are also allowed). Currently only the max residual is used, so you can pass just a single constraint c(x) = max_{i,j} {0, c_i(x), |h_j(x)|}. I intend to add more accurate handling of constraints, especially box-bound and linear, in the future.
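The max-residual construction above can be sketched in a few lines (max_residual is a hypothetical helper, not ralg's actual code; I assume equality residuals enter by absolute value):

```python
import numpy as np

def max_residual(x, c_funcs=(), h_funcs=()):
    # max{0, c_i(x), |h_j(x)|}: the single scalar constraint passed to ralg
    vals = [0.0]
    vals += [float(c(x)) for c in c_funcs]       # inequalities c_i(x) <= 0
    vals += [abs(float(h(x))) for h in h_funcs]  # equalities  h_j(x) == 0
    return max(vals)

x = np.array([1.0, 2.0])
c1 = lambda x: x[0] + x[1] - 4.0   # satisfied here (value -1)
h1 = lambda x: x[0] - x[1] + 0.5   # violated here (value -0.5)
print(max_residual(x, (c1,), (h1,)))  # max constraint violation: 0.5
```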

Below you can see the same benchmark example as published some days before.

I tried to reduce the gradtol value for ALGENCAN to achieve a better f_opt, but it yields either an even bigger one (30.72) or endless (or at least very long; 1 minute was not enough) calculations.

One feature of the ralg constraints implementation: there is no need to calculate df in infeasible points (where max constr > contol). And (as is common for non-smooth solvers) vice versa: no need to calculate constraint derivatives in feasible points.

Other ralg features:

- can handle ill-conditioned, non-smooth, noisy problems
- each iteration consumes 5*n^2 multiplication operations (float64*float64); there is another modification of r-alg that consumes only 4*n^2; maybe it will be implemented in the future
- one of r-alg's drawbacks is low precision (no better than ~1e-6 on 32-bit architectures). Our department leader Petro I. Stetsyuk has an idea, implemented in a Fortran ralg version, that allows achieving precision up to 1e-40. Maybe it will be implemented in the Python ralg code in the future.

## Tuesday, November 20, 2007

### an example of OO NLP solvers benchmark

Yesterday I got a letter from Nils Wagner saying that having an OO NLP solvers benchmark would be useful.

So I have committed my file, which you can use for your own purposes, modifying it for your own problems and solvers. Let me note that some problems with matplotlib still exist.

Below you can observe graphic output from the example (also available from openopt/examples/, after svn update). Let me note that usually ALGENCAN works much better than cobyla or lincher. Also, tuning stop criteria is very important (for ALGENCAN you should try setting gradtol to 1e-1...1e-6).

Let me also note that the current OO-ALGENCAN connection suffers from the following drawback: all constraints are always calculated, while ALGENCAN uses only the active ones. Hence, if the user supplies the constraints c as a sequence (c1, c2, ..., cm), calculations can be sped up. However, this is not implemented in the OO-ALGENCAN connection yet.

Please don't forget that cobyla can't handle user-supplied 1st derivatives, so time, cputime and counter will differ significantly (vs lincher and ALGENCAN) if you provide them.

Also, I intend to provide constraints for ralg soon; maybe, then I'll publish the example with ralg as well.

## Monday, November 19, 2007

### another NLSP solver: nssolve for nonsmooth and noisy funcs

A new NLSP solver is available: nssolve, designed primarily for non-smooth and noisy funcs (scipy_fsolve is usually much better than nssolve with smooth funcs).

This one is based on minimizing the max residual via an NSP solver (default: ralg) and can handle a user-supplied gradient/subgradient. If the latter is not available, splitting the equations into separate functions is recommended (to speed up nssolve calculations):

f = [func1, func2, ...] or f = (func1, func2, ...)

(not

f = lambda x: ...

or

def f(x): ...)
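A toy illustration of the two forms above (the residual functions are my own examples), showing that they define the same system:

```python
import numpy as np

# recommended form: split the system into separate residual functions
f_split = [lambda x: x[0]**2 + x[1] - 3.0,
           lambda x: x[0] - x[1] + 1.0]

# also accepted, but slower for nssolve: one vector-valued function
f_single = lambda x: np.array([x[0]**2 + x[1] - 3.0,
                               x[0] - x[1] + 1.0])

x = np.array([1.0, 2.0])  # this point solves both equations
residuals = np.array([fi(x) for fi in f_split])
assert np.allclose(residuals, f_single(x))
max_res = np.max(np.abs(residuals))  # what nssolve minimizes; 0 at a solution
print(max_res)
```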

OO native graphical output is available (w/o using p.connectIterFcn(..) as implemented for scipy_fsolve; just set p.plot = True or p.plot = 1):

Pictures like the one below are automatically generated by the py-file.

This example shows results of nssolve vs scipy.optimize fsolve with a little numerical noise (1e-8). Here both nssolve and fsolve obtain the same gradients from the OO kernel with p.diffInt = 1e-7 (default). The ns- prefix can be interpreted as

NonSmooth

or NoiSy

or Naum Shor (Ukrainian academician, my teacher, inventor of the r-algorithm implemented in openopt's ralg)

Currently nssolve is far behind recent nonsmooth solver algorithms (even those from our department), but it's better than having nothing at all, and I intend to enhance both r-alg and nssolve from time to time.

## Sunday, November 11, 2007

### LSP (non-lin least squares) graphical output

### NLSP graphic output

Now you can use NLSP (non-linear solve) graphic output:

p = NLSP(f, x0, plot=True, ...)

p.connectIterFcn('df') # scipy.optimize.fsolve has no native iter func that could be connected to OO

#optional: provide gradient df

#p.df = ...

r = p.solve('scipy_fsolve')

(for other NLSP solvers that cannot handle df you should use p.connectIterFcn('f'); however, this is still buggy; maybe I will fix the 'f' case in the future).

Let me also remind you that you can use p.xlabel = 'time' (default), 'cputime' or 'nIter' (case-insensitive). Old-style p.graphics.xlabel works as well.

The dotted line in the picture is log10(p.contol)

### bugfix related to NLSP and LSP

Fixed a bug related to NLSP (non-lin solve) and LSP (least squares) when the user-supplied gradient was absent.

## Saturday, November 10, 2007

### one more NLP UC solver connected: scipy_powell

one more NLP UC solver: scipy.optimize.fmin_powell as scipy_powell

(cannot handle user-supplied gradient).

## Thursday, November 8, 2007

### ALGENCAN HPTYPE value

As I had mentioned, if an NLP/NSP solver can handle 1st derivatives, OpenOpt will always pass them (obtained numerically according to the user-defined p.diffInt; default values for NLP and NSP may differ); otherwise Python cycles sometimes significantly slow down calculations (more counter updates, more checks to be done, etc.).

So some solvers can't know whether the 1st derivatives were obtained via numerical approximation (inexactly) or analytically.

I tried to supply a correct value of HPTYPE right away, but some difficulties were encountered. Then the ALGENCAN developers informed me that in future ALGENCAN versions HPTYPE will be adjusted more automatically, depending on which funcs & derivatives the user provides. Still, I hope they will implement a possibility (via a param) to inform the solver that the supplied derivatives were obtained numerically (it can affect the choice of algorithms involved).

## Wednesday, November 7, 2007

### graphic output: subplot for constraints

First of all, let me remind you that graphic output is currently available for some NLP and NSP solvers via p.plot=True (or p.plot=1).

Also, you can modify p.graphics.xlabel (or, with the latest svn changes, p.xlabel as well): default is 'time'; 'cputime' and 'nIter' are also possible (case-insensitive).

Some solvers (for example ALGENCAN) still don't have a convenient ability for iter-to-iter data output via a func that could be connected to OO. So you should use p.connectIterFcn('df'), regardless of whether the user provides a gradient (if not, the OO finite-difference approximation will be used instead). You can use another func as well; for example 'f' is good for scipy_fminbound, but a bad choice can significantly slow your calculations. BTW, matplotlib output of 2 subplots is significantly slower than a single plot; I hope either it will be fixed in future matplotlib releases or your hardware is (or will be) adequate for your problems.
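The idea behind p.connectIterFcn can be sketched as follows (a simplified stand-in, not OpenOpt's actual implementation): wrap a function the solver is known to call once per iteration, such as df, so that each call also feeds the current point to the plotting/logging callback.

```python
def connect_iter_fcn(func, iter_callback):
    # Wrap func so every call also reports the current point;
    # this mimics connecting an iteration hook to df.
    def wrapped(x, *args):
        iter_callback(x)
        return func(x, *args)
    return wrapped

history = []
df = lambda x: 2.0 * x                    # gradient of f(x) = x**2 (example)
df_connected = connect_iter_fcn(df, history.append)

for x in (4.0, 2.0, 1.0):                 # pretend the solver iterates
    g = df_connected(x)
print(history)                            # every iterate was recorded
```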

So, here's an example of ALGENCAN graphic output:

For unconstrained problems (as well as for solvers that keep each iter point feasible, for example some box-bound solvers) graphic output remains the same. Some minor problems still remain (plotting several solvers in a single window; no Python control returned till the figure is closed), but I suspect they are due to native pylab bugs (I consider the MATLAB plotting tool more convenient to use). I will try to fix (or avoid) those when I have enough time; I'm short of it for now.

See also: NLSP (non-linear solve) graphical output.

See also: LSP (non-linear least squares) graphical output.

## Monday, November 5, 2007

### scipy fminbound connected

scipy.optimize.fminbound has been connected as

**scipy_fminbound** (single variable problems only, requires finite lb-ub, ignores x0)

## Thursday, November 1, 2007

### another NLP UC solver

scipy.optimize.fmin_cg (the nonlinear conjugate gradient algorithm of Polak and Ribiere; see Wright and Nocedal, 'Numerical Optimization', 1999, pp. 120-122) has been connected as scipy_cg.

+ some minor changes to text output of scipy_bfgs and scipy_ncg have been made.

## Tuesday, October 30, 2007

### some changes to scipy_bfgs and scipy_ncg

Some changes have been committed to scipy_bfgs and scipy_ncg. Now they handle more stop criteria, and plotting (via p.plot=1) yields a point for every iteration, not just 2 points (start and finish) joined by a straight line.

I intend to implement similar changes for ALGENCAN, but since the solver doesn't have an iterfcn ability (the ALGENCAN developers have been informed about the issue, but no answer has been obtained), I intend to connect iterfcn to df (the gradient); the OO numerical gradient will be used if no user-supplied one is available.

## Monday, October 29, 2007

### some more scipy NLSP solvers have been connected

I have connected some more NLSP solvers from scipy; however, they all work very unstably and can't use derivatives (at least for the current scipy version 0.6.0; maybe they will be enhanced by someone in the future).

So the new solvers are:

- scipy_anderson
- scipy_anderson2
- scipy_broyden1
- scipy_broyden2
- scipy_broyden3
- scipy_broyden_generalized

you can read more about them (particularly, the algorithms used) here.

## Saturday, October 27, 2007

### Elefant solvers

Some months ago I got to know about the Elefant package; it contains some optimization routines that could be useful for OO (although I haven't started connecting Elefant solvers yet).

However, their API seems to be very strange. For example, to

minimize (c, x)

subject to b ≤ Ax ≤ b + d and l ≤ x ≤ u:

mysolver = intpointsolver.CLPSolver()

x, y, how = mysolver.Solve(c, A, b, r, l, u)

and to

minimize 0.5*(x, Hx) + (c, x)

subject to b ≤ Ax ≤ b + d and l ≤ x ≤ u:

mysolver = intpointsolver.CLPSolver()

x, y, how = mysolver.Solve(c, H, A, b, r, l, u)

So I can't understand how I should implement those linear inequalities b ≤ Ax ≤ b + d where a lower or upper coordinate is +/- inf (btw, a small bug is using the 'd' literal instead of 'r', or vice versa). I didn't get any explanation from the Elefant documentation.

Ok, maybe for [a, +inf) bounds they mean using d (that is, r) = inf, but how should I handle (-inf, a] bounds?

Maybe the Elefant developers will fix the issue.

### IronPython and NumPy

IronPython users: here is some information about using numpy from IronPython; I guess it could be useful for you.

Let me remind you once again: the only dependency of OpenOpt is NumPy.

## Friday, October 26, 2007

### d2f: 2nd derivatives

I have connected another unconstrained scipy.optimize solver: fmin_ncg.

This one can handle the Hessian matrix (objFunc 2nd derivatives); see the example for more details.

Also, some changes were made to scipy_bfgs; now the solver (along with scipy_ncg) handles more user-supplied stop criteria passed from the OpenOpt API. Also, graphic output for scipy_bfgs is no longer just 2 points connected by a straight line; now it's similar to the output of lincher or ralg; same for scipy_ncg.

Some of my primary goals are using d2c, d2h, d2l in ALGENCAN and/or CVXOPT and/or other OO NLP solvers, but I guess it will take much time, and I'm short of it for now.
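To illustrate why 2nd derivatives matter (a generic example, not tied to OO's API): Newton-type methods, which fmin_ncg builds on, reach the minimizer of a quadratic in a single step once the Hessian is available, while gradient-only methods have to iterate.

```python
import numpy as np

# quadratic objective f(x) = 0.5 x'Hx + c'x (example data of my own choosing)
H = np.array([[4.0, 1.0],
              [1.0, 3.0]])
c = np.array([-1.0, -2.0])
df  = lambda x: H @ x + c   # 1st derivatives (gradient)
d2f = lambda x: H           # 2nd derivatives (constant Hessian for a quadratic)

x0 = np.zeros(2)
x1 = x0 - np.linalg.solve(d2f(x0), df(x0))  # one Newton step
print(df(x1))  # gradient vanishes: the optimum is found in one step
```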

## Wednesday, October 24, 2007

### some minor bugfixes

3 minor bugs have been fixed (svn and box.net have been updated).

1. OpenOpt required pylab (matplotlib) to be installed even for problems with p.plot turned off.

2. p.check.df didn't work with LSP and NLSP.

3. Some code related to GNLP and LLSP was not committed into svn (those who had downloaded OpenOpt from box.net didn't encounter any problems).

## Friday, October 19, 2007

### LLSP: linear least squares problem

A new OO class has been created:

LLSP - Linear Least Squares Problem.

2 solvers are connected for now: lapack_dgelss and lapack_sgelss (double and single precision). Requires LAPACK and scipy.

A simple example is provided.

Already available in svn and box.net!

P.S. As I had been informed by developers of dgelss/sgelss, they have more powerful routines: dgelsa and dgelsb, available at

www.hpca.uji.es -> Software Efforts -> LS Library. However, they are not included in LAPACK yet.
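For comparison (not an OO call): numpy's own least-squares routine, which also delegates to a LAPACK driver, solves the same LLSP, minimize ||Cx - d||_2 (the matrix and vector below are my own example data):

```python
import numpy as np

# overdetermined system: 3 equations, 2 unknowns
C = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
d = np.array([1.0, 2.0, 2.0])

# least-squares solution of C x ~= d
x, residuals, rank, sv = np.linalg.lstsq(C, d, rcond=None)
print(x)
# the normal equations hold at the solution: C'(Cx - d) = 0
print(C.T @ (C @ x - d))
```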

## Sunday, October 14, 2007

### Some changes

Yesterday I committed a small bug to svn, so all checks of the 1st derivatives dc and dh always yielded zero difference. Today I have fixed the bug.

The DFP (data fit) class has been renamed to LSP (least squares); this is much more correct both for the single solver (scipy_leastsq) and for usage. Maybe I will implement DFP later.

Some changes have been made in OO website.

## Saturday, October 13, 2007

### Visitors statistic

I have added new OpenOpt site web page - visitors statistic.

So for now 51% of browsers are Firefox 1.x. I guess it's one more proof that some visitors remain uncounted due to my asp-based web counter. I hope Jeff Strunk will implement a more advanced, javascript-based counter soon (currently javascript is turned off in the Trac settings).

## Friday, October 12, 2007

### ALGENCAN output files

Maybe some of you have already noticed that ALGENCAN yields 4 files:

algencan-tabline.out, gencan-tabline.out, algencan.out, solution.txt

As I have been informed by the ALGENCAN developers, currently the only way to prevent creating these files is removing all the 'write(10' lines in algencan.f, recompiling and reinstalling ALGENCAN.

I hope the problem will have a more convenient solution in the next ALGENCAN releases.

## Wednesday, October 10, 2007

### Additional user args usage

I have committed an example of using user-supplied additional arguments for f, c, h functions.

See the example here.

(Updating OpenOpt from svn or box.net is required)
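If you prefer not to rely on the OO-specific mechanism, plain Python gives the same effect: bake the extra arguments into f, c, h with functools.partial (the objective below is my own example):

```python
from functools import partial

# an objective that takes additional user-supplied arguments a and b
def f(x, a, b):
    return a * x**2 + b * x

# fix the extra arguments once; the result is a one-argument objective
f_fixed = partial(f, a=2.0, b=-1.0)
print(f_fixed(3.0))  # 2*9 - 3 = 15.0
```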

## Tuesday, October 9, 2007

### Bug in voting has been fixed

So now you can vote.

Also, I noticed that some web browsers (IE, Opera) block my asp-based visit counter (by default or by user settings), so some visitors remain uncounted.

I will try to find another counter, not asp-based, that will take more site visitors into account.

## Saturday, October 6, 2007

### DFP: Data Fit Problem

A new OO class has been created:

DFP - Data Fit Problem (non-linear)

currently the only solver is scipy.optimize.leastsq as scipy_leastsq

see example here

## Friday, September 28, 2007

### NLSP: Non-Linear Solve Problem (new OO class)

New class has been added to OO:

NLSP: Non-Linear Solve Problem

currently the single solver for the class is scipy.optimize.fsolve as scipy_fsolve.

I intended to connect one more solver from the Trilinos NOX package, but this will not be done, at least in the nearest future, because of lack of documentation (see a discussion of the problem here)

## Monday, September 24, 2007

### Auto check: can constraints be handled?

Now OpenOpt will automatically check whether a solver can handle the problem's constraints (c(x)<=0, h(x)=0, lb<=x<=ub, Ax <= b, Aeq x = beq).

An error message will be shown, for example:

"Error! the solver scipy_lbfgsb cannot handle 'c' constraints"

## Sunday, September 23, 2007

### scipy.optimize.fmin_bfgs connected as scipy_bfgs

Another solver has been connected to OpenOpt:

scipy.optimize.fmin_bfgs as "scipy_bfgs"

(requires scipy)

## Wednesday, September 19, 2007

### fmin_cobyla: connected

The constrained NLP solver fmin_cobyla from scipy.optimize has been connected to openopt as scipy_cobyla.

Please remember: the solver doesn't use user-provided gradient info.

## Monday, September 17, 2007

### ALGENCAN has migrated from Numeric to numpy

Today I was informed that ALGENCAN has finally migrated from using Numeric to numpy.

Those who have encountered the error message with the ALGENCAN-openopt connection:

struct has no "ndim" attribute

should re-install ALGENCAN according to the new instructions

(you can get it here)

## Wednesday, September 12, 2007

### minor updates in lincher

I have committed some changes that allow the NLP solver lincher (Python-written, BSD license) to solve problems where no constraints other than box-bound ones are present. Also, lincher doesn't require any QP solver for this case (currently the only one connected is CVXOPT's). However, specialized solvers (connected to openopt) like scipy_lbfgsb, tnc or ALGENCAN are usually much faster.

## Thursday, September 6, 2007

### examples of openopt graphics output

Here are some pictures automatically generated by openopt (matplotlib should be installed) via setting p.plot = 1. Please note that for some solvers with constrained problems they can turn out to be uninformative; for example, objFun can go down while max constraint violation goes up, or vice versa. So in the future, should openopt get further development, one of the features to be implemented will be a 2nd subplot: max constraint value. Also note that some solvers like ALGENCAN have no cross-iter output function (at least for now) that could be easily connected to openopt's, so their pictures look like 2 points (start and finish) connected by a single line.

Other possible directions of openopt development could be: connecting solvers from the NLPy package, implementing 2nd derivatives, automatic scaling, using sparsity patterns, connecting the well-known NLP solver IPOPT, connecting some global solvers and/or writing openopt's own ones, writing a good QP solver (which is required by lincher).

However, since the Google Summer of Code program is finished, further work requires new sponsor(s); without them openopt development will be very limited.


## Monday, September 3, 2007

### ALGENCAN: connected

So ALGENCAN has been connected and tested with all types of constraints.

However, 2nd derivatives are not implemented in openopt at all, so for now you can't pass such funcs (d2f, d2c, d2h, d2L) to ALGENCAN from the openopt framework.


## Friday, August 31, 2007

### ALGENCAN news

I continue trying to connect ALGENCAN to openopt.

Yesterday I informed the ALGENCAN developers about a minor bug in toyprob.py, and they have fixed it.

One more bug, more serious: they continue using the obsolete Numeric module in one of the files (at least in pywrapper.c) instead of numpy, despite our agreeing long ago that they would use numpy everywhere. I informed them today and I hope they will fix it soon.

As for me, I have a temporary workaround - replacing x by asfarray(x) in all Python funcs related to ALGENCAN. So it works now, at least for UC problems; I will check constrained ones today. However, there are still lots of problems, and 2nd-order problems (those using d2f, d2c, d2h) are still unavailable in openopt (at all, not only for ALGENCAN).


### 2 more steps

2 more steps were implemented in optimizers module:

- Goldfeld

- Goldstein-Price.

Matthieu and I have some difficulties with other 2nd-order steps related to negative curvature, because available Python libraries (including numpy) lack the modified Cholesky factorization that is required. As we agreed, no code other than Python is included in openopt - no Fortran etc., only Python and numpy - so the existing Fortran-written routines are inappropriate. The proposed article contains a 2-page algorithm without vectorization, so a Python implementation would be too slow and would take much time to implement, unit-test and debug.

So we decided to switch to other tasks.
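For reference, here is a minimal sketch of the kind of fallback one could use in place of a true modified Cholesky factorization - not the Gill-Murray algorithm from the article, just a simple diagonal shift that retries numpy's plain Cholesky with an increasing tau until H + tau*I becomes positive definite (the function name and parameters are my own illustration):

```python
import numpy as np

def shifted_cholesky(H, beta=1e-3, max_tries=60):
    """Return (L, tau) with L @ L.T == H + tau*I positive definite.

    A simple diagonal-shift fallback, not the full Gill-Murray algorithm:
    repeatedly try numpy's Cholesky, increasing tau until it succeeds.
    """
    n = H.shape[0]
    # start from a shift proportional to the most negative diagonal entry
    min_diag = np.min(np.diag(H))
    tau = 0.0 if min_diag > 0 else beta - min_diag
    for _ in range(max_tries):
        try:
            L = np.linalg.cholesky(H + tau * np.eye(n))
            return L, tau
        except np.linalg.LinAlgError:
            tau = max(2 * tau, beta)  # double the shift and retry
    raise np.linalg.LinAlgError("could not make H positive definite")
```

This loses the structure a real modified Cholesky preserves, but it is vectorized numpy and good enough to keep a Newton-type step well defined on indefinite Hessians.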


## Wednesday, August 22, 2007

### Barzilai-Borwein algorithms

Both Barzilai-Borwein algorithms have been written and tested.

Matthieu informed me he intends to rename some variables, especially those containing the word "step", because it causes much ambiguity (alpha_step or x_step, and more); previously the direction variable also had "step" in its name.

However, I have some problems with the default sigma1, sigma2, alpha1, alpha2, rho, sigma values (from the non-monotone Barzilai-Borwein alg). My initial guesses were rather bad, as I noticed. Currently it works so-so - but maybe only on my tests.

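For the record, the basic (monotone) Barzilai-Borwein step itself is tiny; here is a sketch of BB1, alpha_k = (s's)/(s'y) with s = x_k - x_{k-1} and y = g_k - g_{k-1}, in plain numpy (names are illustrative, this is not the optimizers-module code):

```python
import numpy as np

def bb_gradient_descent(grad, x0, n_iter=50, alpha0=1e-2):
    """Gradient descent with the BB1 Barzilai-Borwein step length."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    x_new = x - alpha0 * g                        # ordinary first step
    for _ in range(n_iter):
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        sy = s @ y
        # BB1 step; fall back to alpha0 if curvature info is unusable
        alpha = (s @ s) / sy if sy > 0 else alpha0
        x, g = x_new, g_new
        x_new = x - alpha * g
    return x_new
```

On a convex quadratic this converges quickly without any line search; the non-monotone variant adds the sigma/rho safeguards whose defaults are the problem mentioned above.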

## Tuesday, August 21, 2007

### UC solvers

Cubic Interpolation Search has been connected to Matthieu's code.

The Barzilai-Borwein alg is written but not connected yet; I still need some more digging in Matthieu's code to connect it correctly.


## Thursday, August 16, 2007

### l_bfgs_b from scipy.optimize has been connected to openopt

One more box-bounded NLP solver has been connected to openopt as scipy_lbfgsb.

It solves the example from the previous post much faster: ~0.1 sec.

I tried to connect cobyla, but some problems were encountered. The main one: all constraints must be passed as a sequence of funcs. As far as I learned, even linear constraints Ax <= b or x >= lb must be split into funcs, each one returning a single value. At least, the current Python-cobyla binding requires this approach. Of course, I can implement the trick, but it can slow calculations down very much; maybe some changes in the Python-cobyla binding would be a much more effective solution. But I have no time for now; I'm busy with other things.

One more example has been added to the /examples directory: nlp1.py. It uses lincher, but any other solver capable of handling all constraints can be used instead.

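The splitting trick mentioned above can be sketched like this: turn Ax <= b into a list of scalar funcs, one per row, each returning a nonnegative value when the point is feasible (the convention the cobyla binding expects); the helper name is my own:

```python
import numpy as np

def linear_ineq_to_funcs(A, b):
    """Split Ax <= b into a list of scalar constraint funcs, one per row,
    each returning >= 0 when feasible (the cobyla convention)."""
    A = np.atleast_2d(np.asarray(A, dtype=float))
    b = np.atleast_1d(np.asarray(b, dtype=float))
    # default arguments capture the current row (avoids the lambda
    # late-binding trap inside the list comprehension)
    return [lambda x, a=a_i, bi=b_i: bi - a @ np.asarray(x, dtype=float)
            for a_i, b_i in zip(A, b)]
```

The resulting list could then be handed to fmin_cobyla as its cons argument; the per-row function calls are exactly the overhead that makes this approach slow for large m.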

## Wednesday, August 15, 2007

### tnc has been connected to openopt

(tnc is a scipy.optimize NLP solver for box-bounded problems; requires scipy installed)

example of tnc usage:

----------------------------

from scikits.openopt import NLP

from numpy import *

N = 15

ff = lambda x: ((x-15)**4).sum() + 64*((x-1)**2).sum()

p = NLP(ff, cos(arange(N)))

p.lb = arange(N)

p.ub = 5 + arange(N)

r = p.solve('scipy_tnc')

----------------------------

So it should yield something like

istop: 4 (|| F[k] - F[k-1] || < funtol)

Solver: Time elapsed = 0.12 CPU Time Elapsed = 0.12

objFunValue: 10370.1398294 (feasible)

r.xf should be equal to

array([ 5. , 6. , 7. , 8. ,

8.72421634, 8.72421837, 8.72421093, 8.72420843,

8.72420744, 9. , 10. , 11. ,

12. , 13. , 14. ])

if N = 150

lincher:

itn 37 : Fk= 887272370.281 maxResidual= 5.24576648786e-09

Solver: Time elapsed = 4.16 CPU Time Elapsed = 4.01

objFunValue: 887272370.281 (feasible)

scipy_tnc:

istop: 4 (|| F[k] - F[k-1] || < funtol)

Solver: Time elapsed = 3.02 CPU Time Elapsed = 2.93

objFunValue: 887273034.135 (feasible)

maxResidual = 0

So, although tnc reaches a slightly worse objFun value, it solves the example much quicker and is more suitable for box-bounded problems - of course, because it's intended for those and only those, while lincher handles all constraints.

On the other hand, I'm not fond of the CVXOPT QP solver; using a better one could significantly increase lincher's speed.

However, this is the only example I have tested so far.

Maybe tomorrow I'll connect and publish lbfgsb and cobyla results.

----------------------------

+ today some examples for lincher have been written; I'll commit them tomorrow after minor changes.


## Tuesday, August 14, 2007

### news from ALGENCAN

I received another letter from the ALGENCAN developers. They informed me about some changes made according to my suggestions (like using numpy.array instead of Python list, etc).

I still don't think everything is implemented very nicely, especially the way they handle linear constraints in the input data. But anyway it's already much better and could be connected, had I enough time - right now I'm a little behind the GSoC schedule, so maybe I will write the ALGENCAN binding in September or so. It still requires much more work than lbfgsb or tnc or cobyla (from scipy), because it has many more input parameters and a non-standard (vs scipy) interface. Maybe I will write bindings to those before GSoC finishes.

Also, ALGENCAN can use 2nd derivatives (as well as a HESSmult func), but 2nd-order problems (all those d2f, d2c, d2h) are not implemented in openopt yet.

So I sent them some more suggestions, like using iterfcn to enable real-time openopt graphical output and user-defined stop criteria.


### openopt tickets 33 and 34

I have been without an internet connection for some days and now I continue my work.

I was informed by Nils Wagner about ticket 33 and ticket 34.

I didn't encounter 33 (as Nils wrote, it had already been fixed); a fix for ticket 34 has been committed to svn, along with some other changes. One of them is a stand-alone line-search optimizer based on the Armijo rule, so it doesn't require scipy or Matthieu's code. On the other hand, the latter can be very useful when no constraints are present (however, UC solvers like bfgs etc. will usually be better in that case), or if the user's problem has just 1 constraint (in which case max(constraints) = the constraint = a smooth func).

I tried to close the tickets but I have some problems with register/login; I hope Jeff Strunk will fix them soon.

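A minimal sketch of a backtracking line search based on the Armijo rule (not the actual openopt code; names and defaults are illustrative):

```python
import numpy as np

def armijo_step(f, grad_f, x, d, alpha0=1.0, c1=1e-4, rho=0.5, max_iter=50):
    """Backtracking line search: shrink alpha by factor rho until the
    Armijo (sufficient decrease) condition holds:
        f(x + alpha*d) <= f(x) + c1 * alpha * (grad_f(x) . d)
    """
    fx = f(x)
    slope = grad_f(x) @ d          # should be negative for a descent direction
    alpha = alpha0
    for _ in range(max_iter):
        if f(x + alpha * d) <= fx + c1 * alpha * slope:
            return alpha
        alpha *= rho
    return alpha                   # give up and return the last trial step
```

Its appeal for the constrained case is exactly what the post mentions: it needs only function values and one directional derivative, so it works on max(constraints) as well as on smooth objectives.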

## Tuesday, August 7, 2007

### line-search: still not ready

I have tried 3 line-search routines: scipy.optimize.line_search, scipy.optimize.linesearch.line_search and Matthieu's. Each one has its own drawbacks, as I informed the scipy dev mailing list. So, at last, Matthieu has installed cvxopt (because lincher requires its QP solver; Matthieu had some problems compiling cvxopt) and begun to debug his routines. He suspects the problem is in the algorithm he took from a book, and he wants to implement another one (within 1-2 days).

## Friday, August 3, 2007

### lincher: Problems with line-search subproblem

Today I began converting openopt docstrings to the new format + connecting Matthieu's line-search optimizer to lincher (instead of scipy.optimize.line_search, which seems to be obsolete and no longer maintained - no one can, or wants to, answer about the old_fval and old_old_fval parameters).

However, I encountered some difficulties, and some problems in Matthieu's code were not solved; now he is looking into them.

Currently I try to use StrongWolfePowellRule; the 1st iteration is performed OK but the 2nd iteration returns the same point (I found that the line-search solver returns the same point):

>>> starting solver lincher (BSD license) with problem unnamed

itn 0: Fk= 8596.39550577 maxResidual= 804.031293334

itn 1 : Fk= 291.063506679 maxResidual= 0.686929320414

N= 672.327620905

itn 2 : Fk= 291.063506679 maxResidual= 0.686929320414

N= 2040.4800449

solver lincher has finished solving the problem unnamed

istop: 4 (|| F[k] - F[k-1] || < funtol)

Solver: Time elapsed = 0.53 CPU Time Elapsed = 0.5

(maxResidual = 0.686929320414)

>>> lsF(x0)

1692.7290772406229

>>> lsF(x0+1e-4*direction)

1691.0958067992992 # going down

>>> lsF(x0+1e-3*direction)

1681.525182413252

>>> lsF(x0+1e-2*direction)

1681.5725957967709 # starting to grow again

>>> lsF(x0+1e-1*direction)

1684.4384883880082

I informed Matthieu and I hope to obtain an answer tomorrow.


## Wednesday, August 1, 2007

### more info on ticket 464

Some email messages to/from Nils Wagner + some hours were spent to find one more bug related to ticket 464.

It turns out that asfarray(matrix([[0.3]])) returns an array in numpy 1.0.1 and a matrix in numpy 1.0.4.dev3937.

I think the 1st way (array) is more correct.

Nils has promised to open a ticket.

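Until the numpy behavior settles, a defensive conversion on the openopt side sidesteps the array-vs-matrix ambiguity; a hypothetical helper:

```python
import numpy as np

def as_flat_float_array(x):
    """Defensively convert any array-like (including matrix-type objects
    whose conversion behavior varies across numpy/cvxopt versions) to a
    plain 1-d float ndarray."""
    return np.asarray(x, dtype=float).ravel()
```

Calling this on every value that crosses the solver boundary means the rest of the code never sees a matrix subclass, whichever way asfarray happens to behave.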

### tickets: ready to be closed

All tickets assigned to me (+ some more from Nils Wagner) are now ready to be closed.

The only one remaining is 399 (connecting PSwarm), but maybe it will be assigned to another person, because it requires connecting Python and C.

Now I'm trying to connect some of Matthieu's solvers to openopt, for my own usage (in solvers lincher and ralg) as well.


## Monday, July 30, 2007

### ticket 285: ready to close

rev. 3209 contains changes to optimize.brent (I implemented the brent class in the way proposed by Alan Isaac + added 4 tests related to brent).

So I guess ticket 285 should be closed.

Also, I have changed some docstrings related to brent and some other funcs:

If bracket is two numbers *(a,c)* then they are assumed to be a starting interval for a downhill bracket search (see bracket); *it doesn't always mean that the obtained solution will satisfy a<=x<=c*.

(As for me, some weeks ago I was surprised to obtain a solution from outside the (a,c) interval. MATLAB has only one func, fminbound, which yields a solution strictly from the given interval; but some scipy.optimize line-search routines use the interval strictly, while others, like brent, use it only as a starting interval for the bracket search, and this is not properly described in the documentation.)
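A small demonstration of the point above, assuming a recent scipy: the two-number bracket only seeds the downhill search, so the returned minimizer can land far outside (a, c):

```python
from scipy.optimize import brent

# minimum of (x-10)^2 is at x = 10, well outside the "interval" (0, 1);
# brent still finds it, because (0, 1) only seeds the bracket search
xmin = brent(lambda x: (x - 10.0) ** 2, brack=(0.0, 1.0))
```

With fminbound-style semantics one would expect a result in [0, 1]; here xmin comes out at 10, which is exactly the surprise the docstring change warns about.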

## Thursday, July 26, 2007

### more tickets to be closed

So some more tickets are ready:

http://projects.scipy.org/scipy/scipy/ticket/464

(optimize.fmin_powell doesn't accept a matrix input for the initial guess)

rev 3199

http://projects.scipy.org/scipy/scipy/ticket/416

(MATRIXC2F transposing the wrong way in optimize.leastsq)

I will commit my patch if nothing better is proposed within a few hours.

One more change in tnc - the returned x value (optimal point) is now a numpy.array, not a Python list.

Also, it now consumes x0 as a numpy.array, not a Python list

(so http://projects.scipy.org/scipy/scipy/ticket/384 should be closed)

About a day was spent on a bug related to fmin_ncg, but finally it turned out to be related to sparse matrices.

Now I'm thinking about http://projects.scipy.org/scipy/scipy/ticket/285 (bracket parameters). I intend to discuss my suggestions in scipy dev mailing list tomorrow.

## Tuesday, July 24, 2007

### tnc 1.3 connected to scipy

so tnc 1.3 (ticket 296) is available in scipy svn rev. 3185

Maybe it's worth connecting it to openopt (i.e. it should be available via openopt syntax provided scipy is installed).

Maybe, the same should be done to fmin_cobyla and l_bfgs_b.

These fruits are rather low-hanging.


## Monday, July 23, 2007

### scipy optimize tickets

So, scipy dev team asked me to close some tickets this week.

For now the tickets ready to close are:

http://projects.scipy.org/scipy/scipy/ticket/234

http://projects.scipy.org/scipy/scipy/ticket/377

http://projects.scipy.org/scipy/scipy/ticket/390

http://projects.scipy.org/scipy/scipy/ticket/344

http://projects.scipy.org/scipy/scipy/ticket/389

http://projects.scipy.org/scipy/scipy/ticket/388

now I'm working on the ticket

http://projects.scipy.org/scipy/scipy/ticket/296

(connecting tnc 1.3). It requires some more time.

It seems the same changes will have to be made each time a new tnc version appears; for example, handling of lower and upper bounds differs between the native tnc interface and its scipy version. I think it would be much better if tnc had a single interface with a single svn home directory.


## Thursday, July 5, 2007

### f, c, h patterns: progress is quite limited and slow

I have understood that it will take much more time than I previously planned (I hope one week will be enough).

Also, it will require using sparse matrices in some cases.

The patterns are implemented in OpenOpt for MATLAB, but in another, very limited way that is appropriate only for non-smooth solvers (like UkrOpt ralg).

Were the patterns implementation easy, all free NLP solvers like fmin_cobyla would already use them.

Of course, Python allows spending much less time on (patterns-related) code creation than, for example, Fortran, but the effort is still rather significant.


## Tuesday, July 3, 2007

### Michigan State Formula Racing: using solver ralg from OpenOpt for MATLAB

The video of nonsmooth ralg solver work from OpenOpt for MATLAB/Octave, linked with Ansys, on a car design, is available at

http://www.egr.msu.edu/~strefli3/Ralg_Opt.mpeg (~3.3MB)

DMX and SMX are the Maximum Displacement and the Maximum Von Mises stress, respectively

This file is also available here

see also: Michigan State Formula Racing website


### Unified number of funcs evaluation

r.nFEvals, r.nGradEvals etc have been renamed to

r.nEvals.f, r.nEvals.df, r.nEvals.c, r.nEvals.dc, r.nEvals.h, r.nEvals.dh

the same will be done for 2nd derivatives (they are unimplemented for now)


### Automatic gradient check: ready!

here's an example of usage:

Let's consider the NLP problem that I published 2 days ago (see here), and add the following lines:

p.check.df = 1

p.check.dc = 1

p.check.dh = 1

#also you may add/modify these:

p.diffInt = 1e-7 # default value is 1e-7

p.check.maxViolation = 1e-5 # default value is 1e-5

(diffInt is used when obtaining the numerical gradient/subgradient; lines where the difference between the user-supplied and the numerically obtained gradients is less than p.check.maxViolation will not be shown)

Run this example and check your output against mine (attached in a comment to this message).

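The idea behind the check is just a forward-difference comparison; a rough sketch (not the openopt implementation, the helper name is mine):

```python
import numpy as np

def check_gradient(f, df, x, diff_int=1e-7, max_violation=1e-5):
    """Compare a user-supplied gradient df against a forward-difference
    estimate; return the indices where they disagree by more than
    max_violation (roughly what p.check.df reports line by line)."""
    x = np.asarray(x, dtype=float)
    fx = f(x)
    num = np.empty_like(x)
    for i in range(x.size):
        xi = x.copy()
        xi[i] += diff_int               # perturb one coordinate
        num[i] = (f(xi) - fx) / diff_int
    return np.where(np.abs(num - df(x)) > max_violation)[0]
```

An empty result means the analytic gradient agrees with the numerical one everywhere up to the tolerance, so nothing would be printed.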

## Monday, July 2, 2007

### Automatic gradient check

I have committed some code related to the automatic gradient check to svn (+ some more minor changes), but it doesn't work properly yet.

It will provide text output in a way similar to OpenOpt for MATLAB/Octave, which is more convenient than what MATLAB 7's fmincon does.

I hope tomorrow it will be ready.


## Friday, June 29, 2007

### constrained NLP example with gradients

Here's an example of a general constrained NL problem and its solution by the OO solver lincher

(x0-5)^2 + (x1-5)^2 + ... +(x149-5)^2 -> min

subjected to

# lb<= x <= ub:

x4 <= 4

8 <= x5 <= 15

# Ax <= b

x0+...+x149 >= 800

x10+x11 <= 9

# Aeq x = beq

x100+x101 = 8

# c(x) <= 0

2*x0^4-32 <= 0

x1^2+x2^2-8 <= 0

# h(x) = 0

1e6*(x[149]-1)**4 = 0

(x[148]-1.5)**4 = 0

----------------------------------------------------------------

from scikits.openopt import NLP

from numpy import cos, arange, ones, asarray, zeros

N = 150

p = NLP(lambda x: ((x-5)**2).sum(), 8*cos(arange(N)), iprint = 25, maxIter = 1e3)

#f(x) gradient (optional):

p.df = lambda x: 2*(x-5)

p.c = lambda x: [2* x[0] **4-32, x[1]**2+x[2]**2 - 8]

#c(x) gradients (optional):

nc = 2 # number of c(x)<=0 constraints

def DC(x):

r = zeros((p.n, nc))

r[0,0] = 2 * 4 * x[0]**3

r[1,1] = 2 * x[1]

r[2,1] = 2 * x[2]

return r

p.dc = DC

h1 = lambda x: 1e6*(x[-1]-1)**4

h2 = lambda x: (x[-2]-1.5)**4

p.h = [h1, h2]

#h(x) gradients (optional):

nh = len(p.h)

def DH(x):

r = zeros((p.n, nh))

r[-1,0] = 1e6*4*(x[-1]-1)**3

r[-2,1] = 4*(x[-2]-1.5)**3

return r

p.dh = DH

p.lb = -6*ones(p.n)

p.ub = 6*ones(p.n)

p.ub[4] = 4

p.lb[5], p.ub[5] = 8, 15

p.A = -ones((2, p.n))

p.A[1, 2:] = 0

p.A[1, 10:12] = 1

p.b = [-800, 9]

p.Aeq = zeros(p.n)

p.Aeq[100:102] = 1

p.beq = 8

r = p.solve('lincher')

--------------------------------------------

>>> starting solver lincher (BSD license) with problem unnamed

itn 0: Fk= 8596.39550577 maxResidual= 60585923.7208

itn 25 : Fk= 99.1913689183 maxResidual= 0.219982291475

N= 5534.59379932 alpha= 0.999266862564

itn 50 : Fk= 98.8504029574 maxResidual= 0.0025405684671

N= 111496.623829 alpha= 0.41746278006

itn 70 : Fk= 98.8329244399 maxResidual= 9.99877777019e-07

N= 115603.527987 alpha= 0.424851108558

solver lincher has finished solving the problem unnamed

istop: 4 (|| F[k] - F[k-1] || < funtol)

Solver: Time elapsed = 7.25 CPU Time Elapsed = 6.95

objFunValue: 98.8329244399 (feasible)


## Wednesday, June 27, 2007

### NLP solver "lincher": already works

lincher means "LINearization solver written in CHERkassy town, Ukraine"

currently it requires a QP solver (the only ones connected to OO are the cvxopt one and mosek)

Here's a small example; tomorrow I will make it more exact:

from scikits.openopt import NLP

from numpy import cos, arange, ones, asarray

N = 15

p = NLP(lambda x: (x**2).sum()+((x-10)**2).sum(), cos(arange(N)))

p.c = lambda x: [2* x[0] **4-32, x[1]**2+x[2]**2 - 8]

h1 = lambda x: 1e6*(x[-1]-1)**4

h2 = lambda x: (x[-2]-1.5)**4

p.h = [h1, h2]

p.lb = -6*ones(p.n)

p.ub = 6*ones(p.n)

p.maxIter = 1e3

r = p.solve('lincher')

>>> starting solver lincher (BSD license) with problem unnamed

itn 0: Fk= 1485.60535217 maxResidual= 555356.768901

itn 25 : Fk= 861.006426698 maxResidual= 0.0815128971501

N= 196862.294712 alpha= 0.0140694378966

itn 50 : Fk= 860.925453409 maxResidual= 0.000180521169415

N= 197037.161793 alpha= 0.155520860502

itn 75 : Fk= 860.890756449 maxResidual= 1.74602521597e-06

N= 200312.271054 alpha= 0.0503775225937

itn 85 : Fk= 860.887158576 maxResidual= 9.89853580435e-07

N= 203978.915945 alpha= 0.0328485602443

solver lincher finished solving the problem unnamed

istop: 4 (|| F[k] - F[k-1] || < funtol)

Solver: Time elapsed = 1.49 CPU Time Elapsed = 1.42

objFunValue: 860.887158576 (feasible)

>>> r.xf

array([ 2.00000001, 2.00015789, 1.99984234, 5.00000007, 5.0000001 ,

5.0000001 , 4.99999931, 5.00000009, 5.00000007, 5.00000008,

5.00000002, 4.99999999, 4.99999994, 1.47340852, 0.99915843])

other changes:

r.msg field has been added (see line above:

istop: 4 (|| F[k] - F[k-1] || < funtol)

)

r contains iterFvals and iterXvals - values of x[k] and f[k] - k-th iter X and objFunc(x)

+some minor changes


## Monday, June 25, 2007

### QP class: ready!

QP: constructor for Quadratic Problem assignment

1/2 x' H x + f' x -> min

subject to

A x <= b
Aeq x = beq
lb <= x <= ub

Alternatively to A/Aeq you can use an Awhole matrix, as described in the LP documentation (or both A, Aeq and Awhole).

Examples of valid calls:

p = QP(H, f, params)
p = QP(numpy.ones((3,3)), f=numpy.array([1,2,4]), params)
p = QP(f=numpy.arange(8)+15, H=numpy.diag(numpy.ones(8)), params)
p = QP(H, f, A=A, Aeq=Aeq, b=b, beq=beq, lb=lb, ub=ub, params)

INPUT:

H: size n x n matrix, symmetric, positive-definite
f: size n x 1 vector
A: size m1 x n matrix, subject to A * x <= b
Aeq: size m2 x n matrix, subject to Aeq * x = beq

OUTPUT: OpenOpt QP class instance.

Solving of QPs is performed via r = p.solve(string_name_of_solver). Currently string_name_of_solver can be: cvxopt_qp (GPL) - requires CVXOPT (http://abel.ee.ucla.edu/cvxopt) (unfortunately, I can't suppress cvxopt text output; its solvers don't have appropriate params).

Let's consider the problem

1/2 * (x1^2 + 2x2^2 + 3x3^2) + 15x1 + 8x2 + 80x3 -> min (1)

subjected to

x1 + 2x2 + 3x3 <= 150 (2)

8x1 + 15x2 + 80x3 <= 800 (3)

x2 - x3 = 25 (4)

x1 <= 15 (5)

```python
from numpy import diag, matrix, inf
from scikits.openopt import QP

p = QP(diag([1,2,3]), [15,8,80], A = matrix('1 2 3; 8 15 80'), b = [150, 800], Aeq = [0, 1, -1], beq = 25, ub = [15,inf,inf])
# or p = QP(H=diag([1,2,3]), f=[15,8,80], A = matrix('1 2 3; 8 15 80'), b = [150, 800], Aeq = [0, 1, -1], beq = 25, ub = [15,inf,inf])
r = p.solve('cvxopt_qp')
f_opt, x_opt = r.ff, r.xf
# x_opt = array([-14.99999995,  -2.59999996, -27.59999991])
# f_opt = -2453.7999916
```
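As a sanity check on x_opt: at the optimum only the equality constraint x2 - x3 = 25 is active (x1 = -15 lies strictly below its upper bound and inequalities (2)-(3) hold strictly), so the solution can be recovered from the KKT system of the equality-constrained QP. A small numpy sketch, assuming the 1/2 x' H x convention:

```python
import numpy as np

H = np.diag([1.0, 2.0, 3.0])
f = np.array([15.0, 8.0, 80.0])
Aeq = np.array([[0.0, 1.0, -1.0]])
beq = np.array([25.0])

# KKT system for min 1/2 x'Hx + f'x  s.t.  Aeq x = beq:
#   [ H    Aeq'] [x  ]   [-f ]
#   [ Aeq  0   ] [lam] = [beq]
KKT = np.block([[H, Aeq.T], [Aeq, np.zeros((1, 1))]])
rhs = np.concatenate([-f, beq])
x = np.linalg.solve(KKT, rhs)[:3]
print(x)  # [-15.   -2.6 -27.6], matching x_opt above
```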

## Friday, June 22, 2007

### QP class is required

Most of the NLP linearisation algorithms from ASAI are based on QP, not LP, subproblems. I have tried some LP-based ones and I'm not fond of the results: convergence isn't good enough.

So I have started to implement the QP class (some code has already been added to svn; first of all I intend to connect the cvxopt default qp solver). I spent much time thinking about the best letter for the QP matrix: MATLAB uses H, cvxopt uses P, TOMLAB uses F, etc.

I decided to choose H because it looks nicest to me; the drawback is that some people may associate it with the Hessian matrix (TOMLAB, for example, uses that notation). In OO the Hessian will be passed as d2f (I also intend to implement d2c, d2h, d2L), for example:

p = NLP(..., c=myIneqConstr, dc=myDIneqConstr, d2c=myD2IneqConstr)

p.h = myEqConstr

p.dh = myDEqConstr

p.d2h = myD2EqConstr

(myIneqConstr etc are names of corresponding functions).
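To make the intended interface concrete, here is a hedged sketch of what such user-supplied constraint functions and their derivatives might look like (myIneqConstr, myDIneqConstr and myD2IneqConstr are hypothetical example functions, reusing the two inequality constraints from the lincher example above):

```python
import numpy as np

def myIneqConstr(x):
    """c(x) <= 0 with two inequality constraints."""
    return np.array([2*x[0]**4 - 32, x[1]**2 + x[2]**2 - 8])

def myDIneqConstr(x):
    """dc: Jacobian of c, one row per constraint."""
    J = np.zeros((2, len(x)))
    J[0, 0] = 8*x[0]**3
    J[1, 1] = 2*x[1]
    J[1, 2] = 2*x[2]
    return J

def myD2IneqConstr(x):
    """d2c: one Hessian matrix per constraint."""
    n = len(x)
    H0, H1 = np.zeros((n, n)), np.zeros((n, n))
    H0[0, 0] = 24*x[0]**2
    H1[1, 1] = H1[2, 2] = 2.0
    return [H0, H1]

# Quick finite-difference check that the Jacobian matches c
x0 = np.array([1.0, 2.0, 3.0])
eps = 1e-6
fd = np.column_stack([(myIneqConstr(x0 + eps*np.eye(3)[i]) - myIneqConstr(x0)) / eps
                      for i in range(3)])
assert np.allclose(fd, myDIneqConstr(x0), atol=1e-4)
```

These would then be passed exactly as described above: p = NLP(..., c=myIneqConstr, dc=myDIneqConstr, d2c=myD2IneqConstr).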

## Thursday, June 21, 2007

### ALGENCAN: maybe, promising OO NLP solver?

As I have mentioned in "suggestions about COIN-OR IPOPT and other solvers", adding several solvers with similar algorithms isn't the best idea, because they will suffer from the same drawbacks. I guess that if a users' vote were organized ("what do you want first of all from openopt?"), they would choose "several good NLP solvers with as many constraint types as possible, as well as 1st and 2nd derivatives": implementations of SQP, rSQP, IP, GRG etc (one from each class).

Now I'm busy with the OO native linearisation-based solver ("lincher"; today some more code has been added to the svn repository), but maybe in the future the ALGENCAN (GPL) solver, a joint project of the University of Campinas and the University of São Paulo (Brazil), should be added. Its most important feature is that it has Python bindings. I compiled it rather easily on my Linux system (unfortunately, Windows users can expect some problems) and executed a test from the py-file example. However, some things are organized very inconveniently: the ALGENCAN authors propose solving a Python-written optimization problem by modifying an attached py-file (toyprob.py), and some funcs there have a rather non-standard interface.

Also, ALGENCAN misses linear constraints (Ax <= b, Aeq x = beq); there are only c(x) <= 0, h(x) = 0, their 1st derivatives (optionally) and lb-ub bounds. Of course, I can add linear constraints as non-linear ones (+ obtaining the corresponding derivatives) automatically, like I do (primarily for the ralg solver) in my openopt for matlab/octave package.

## Tuesday, June 19, 2007

### suggestions about COIN-OR IPOPT and other solvers

I have received a question from Alan Isaac about IPOPT, and I decided it's better to describe my opinion here.

>The IPopt solver (in C++) might be a nice addition to OpenOpt? https://projects.coin-or.org/Ipopt

I know about the IPOPT solver, as well as the whole COIN-OR project, very well. BTW I heard there is a very nice LP solver used inside the linearisation-based IPOPT solver (and it's one of the reasons why IPOPT works so well).

It is really a very good solver, according to the Hans D. Mittelmann benchmark results:

ampl-nlp

mpec (with equality constraints)

The problems are:

1) I have seen only 1-2 Python bindings to COIN-OR solvers. As far as I remember, one was PuLP (a Python linear programming modeler), and the way it is implemented is unsuitable for solving large-scale tasks (it demands its own GAMS-like syntax for describing LPs).

The other Python binding (that I found via a web search) was dated 2001 or so, and hence is currently unsupported.

I'm unable to organize an efficient Python-C++ binding, especially to IPOPT, because it requires passing function handles, which is much more difficult than passing simple matrices, as in LP or QP problems (btw the COIN-OR QP solver isn't very good, see here). So writing a coin-or to Python bridge is a hard problem by itself, and keeping the code up-to-date is also a very hard, time-consuming task.

2) IPOPT, as well as the whole COIN-OR project, has the CPL license, which contains a copyleft clause.

3) I have already started writing a linearisation-based NLP solver, and it will have a BSD license. Since its algorithm is similar to IPOPT's (linearisation-based), it will have about the same tasks where it yields the same results (either good or bad) as IPOPT.

Particularly, for linearisation-based solvers the main problems are:

1) if some linearised constraints form a linearly dependent system at a point x[k] (i.e. the one obtained at the k-th iteration), then it's very hard for LP solvers to handle the situation, and it requires much more time (and cputime). The same problem arises if x_opt has such a linearly dependent system of linearised constraints.

2) usually (if the objfunc and NL constraints are not very costly) most of the time/cputime in linearisation solvers is spent solving LPs (or QPs, if the quadratic approach is used), hence a good LP/QP solver is required, as I have mentioned above.
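The near-linear-dependence issue in 1) can be detected numerically: when the Jacobian rows of the linearised constraints at x[k] are almost linearly dependent, the smallest singular value of that Jacobian collapses toward zero. A small illustrative numpy sketch (the constraint matrices here are made up, not from any solver):

```python
import numpy as np

def dependence_measure(J):
    """Smallest singular value of a linearised-constraint Jacobian;
    a value near zero means the rows are close to linearly dependent."""
    return np.linalg.svd(J, compute_uv=False)[-1]

# Two well-separated constraint gradients
J_good = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0]])
# Second gradient is almost an exact multiple of the first
J_bad = np.array([[1.0, 2.0, 3.0],
                  [2.0, 4.0, 6.000001]])

print(dependence_measure(J_good))  # 1.0
print(dependence_measure(J_bad))   # close to zero
```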

I have already connected 3 LP solvers (they already had Python bindings: glpk with the cvxopt binding, the cvxopt native lp solver, and lp_solve with its own binding; also, the non-free mosek lp solver with the cvxopt binding is available, though I didn't test it properly). So I think the number of free LP solvers is enough for now, and connecting the COIN-OR LP solver shouldn't be a primary task; the same goes for writing bindings for IPOPT or other coin-or solvers.

### One more daily report

Today I spent some hours searching for an efficient spline toolkit that allows usage of 2nd order (quadratic) splines. There were some compilation problems with the spline module from scipy.sandbox; only some minutes ago did I get it installed. Tomorrow I shall learn how it works (i.e. read the documentation).

I have uploaded the scanned book "Linearisation method" by the Ukrainian academician Phenichniy: it's now available here (in Russian; I have no English version).

I decided to name the new constrained solver "lincher": a LINearisation solver written in CHERkasy town, Ukraine. Some solver code has been added to svn.

## Monday, June 18, 2007

### NL constrained problems

Some NLP constraints code has been added.

However, I still need some more time to study the NLP algorithms received from ASAI.

## Thursday, June 14, 2007

### QP: later, NLP: learning alg

I decided to postpone creating the QP class for some time: I should think twice about the QP constructor interface. Even for LP there are 3 main approaches to assigning linear constraints:

1) A x <= b, Aeq x = beq (MATLAB, CVXOPT) (implemented in OO)

2) A x {less, equal, greater} b (glpk, lp_solve) (implemented in OO)

3) b_l <= A x <= b_u (TOMLAB) (maybe I will implement the one in OO later)

The 3rd way is more powerful - it allows reducing the A matrix by a factor of up to 2 in some cases - but, on the other hand, creating such matrices A, b_l, b_u is less convenient, and I guess only a very small number of LP solvers is capable of taking benefit from it.
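For illustration, the two-sided form 3) can always be reduced to form 1); a minimal numpy sketch (the function name is mine, not OpenOpt's):

```python
import numpy as np

def two_sided_to_one_sided(A, b_l, b_u):
    """Convert b_l <= A x <= b_u into A1 x <= b1, skipping infinite bounds.
    This roughly doubles the row count - the price of the simpler form."""
    rows, rhs = [], []
    for a, lo, hi in zip(A, b_l, b_u):
        if np.isfinite(hi):
            rows.append(a)
            rhs.append(hi)        #  a x <= hi
        if np.isfinite(lo):
            rows.append(-a)
            rhs.append(-lo)       # -a x <= -lo
    return np.array(rows), np.array(rhs)

A = np.array([[1.0, 2.0], [3.0, 4.0]])
b_l = np.array([-np.inf, 0.0])
b_u = np.array([10.0, 20.0])
A1, b1 = two_sided_to_one_sided(A, b_l, b_u)
# A1 has 3 rows: [1,2] x <= 10, [3,4] x <= 20, -[3,4] x <= 0
```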

So, a number of QP assignment ways is even greater.

I have to spend several days investigating some versions of the NLP linearisation algorithm that I obtained from the applied systems analysis institute (ASAI), from dr. Danilin. Unfortunately, I have no English version, so I can't discuss them in my blog or the scipy mailing lists.

## Wednesday, June 13, 2007

### MILP class created

1) LPSolve has been renamed to lpSolve (requires lp_solve installed)

2) MILP class created

Currently the only MILP solver connected to OO is lpSolve (requires lp_solve installed).

see

from scikits.openopt import MILP

help(MILP)

## Tuesday, June 12, 2007

## Monday, June 11, 2007

### latest svn changes

1) docs improved

see

from scikits.openopt import LP, NLP, NSP

help(LP)

help(NLP)

help(NSP)

2) some changes in linear constraints.

Next things to do:

create MILP, QP classes, connect lp_solve to MILP and CVXOPT QP solver to openopt QP class

## Friday, June 8, 2007

### OpenOpt LP solvers

Some days ago I published a short benchmark of the LP solvers available in OpenOpt.

Currently they are:

LPSolve (LGPL) - requires lpsolve + Python bindings installations (all mentioned are available here)

cvxopt_lp (GPL) - requires CVXOPT

cvxopt_glpk(GPL2) - requires CVXOPT & glpk

cvxopt_mosek(non-free) - requires CVXOPT & mosek

However, I have some problems with mosek (even with a trial license): it doesn't want to run on my AMD 64 + Linux box, yielding 'inner mosek error'. Maybe you will be luckier :)

So, that report showed LPSolve to be much better than the other solvers.

Now I have found the reason (and it's not my bug).

It is due to the handling of lb-ub bounds: they are absent in CVXOPT (hence in glpk and mosek), so I used a routine that translated lb-ub bounds into additional A x <= b rows, which made the number of constraints much greater than nVars for the benchmark task (however, other tasks can yield other results).

BTW, I think that LP solvers themselves should handle rows with a single non-zero entry, but the ones involved here don't. So in the future (though I guess not the nearest) I intend to add such handling to the OpenOpt kernel.

However, that has a disadvantage: no correct duals will be returned, or it will require restarting the LP solver on the unchanged LP problem from x_opt once again.
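The single-non-zero handling mentioned above could look roughly like this: detect rows of A that touch exactly one variable and fold them into the lb-ub bounds instead (a sketch with a made-up function name, not actual OpenOpt kernel code):

```python
import numpy as np

def fold_single_nonzero_rows(A, b, lb, ub):
    """Move rows of A x <= b with a single non-zero coefficient into lb/ub.
    Returns the reduced A, b and the updated bounds."""
    lb, ub = lb.copy(), ub.copy()
    keep = []
    for i, row in enumerate(A):
        nz = np.flatnonzero(row)
        if len(nz) == 1:
            j, a = nz[0], row[nz[0]]
            if a > 0:                      # a * x_j <= b_i  ->  x_j <= b_i / a
                ub[j] = min(ub[j], b[i] / a)
            else:                          # a * x_j <= b_i  ->  x_j >= b_i / a
                lb[j] = max(lb[j], b[i] / a)
        else:
            keep.append(i)
    return A[keep], b[keep], lb, ub

A = np.array([[1.0, 2.0], [0.0, -3.0]])   # second row: -3*x2 <= 6, i.e. x2 >= -2
b = np.array([4.0, 6.0])
lb = np.full(2, -np.inf)
ub = np.full(2, np.inf)
A2, b2, lb2, ub2 = fold_single_nonzero_rows(A, b, lb, ub)
```

The caveat from the paragraph above applies: the duals of the folded rows are lost, which is exactly why correct duals would require a restart from x_opt.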

--------------------------------------------------------------------------------------

See whole LP bench results in my comment.

## Thursday, June 7, 2007

### hunting for bugs: history repeats

I spent almost the whole day (today) hunting a ralg() bug. It yields a difference with OpenOpt for MATLAB (on my test case, where I noticed the incorrect behaviour) only after the 4th iteration. When I finally fixed it, I remembered that I had spent similar time on the same bug in my OpenOpt for MATLAB (a year ago) and several hours in OpenOpt for fortress (some months ago) (there is nothing but ralg there for now).

### OpenOpt integrated to scikits svn

At last!...

I can't say all is perfect, but, at least, it works.

Some code was stolen from other scikits projects' (pymat, pyaudiolab) setup files.

I noticed David Cournapeau uses "... = i for i in ..." constructs, but those are available only since Python 2.5, while the scipy developers are still deciding whether to keep support for 2.3. I had "i for i in ..." in my code, but after the message about 2.3 in the mailing lists I commented those lines out for a while.

Some days have been spent studying the svn and setuptools stuff.

One more thing that I dislike in the scikits svn is the repeated module name:

trunk (taken from the scikits page):

    mlabwrap/
        setup.py       # setuptools based
        setup.cfg      # optional, needed e.g. for development releases
        README.txt
        scikits/
            __init__.py    # __import__('pkg_resources').declare_namespace(__name__)
            mlabwrap/

As you see, 'mlabwrap' appears twice;

also, __init__.py can be omitted (since Python 2.5), and I'm not fond of creating and maintaining additional files.

--------------------------------------------------------------------------------------

Ok, so now I intend to spend some days adding MILP and QP classes to OpenOpt.

## Tuesday, June 5, 2007

### OpenOpt for Python

Hi all,

this blog is related to the OpenOpt module for scikits (see scipy.org) - a free Python-based optimization alternative to TOMLAB, AMPL, GAMS etc.

I had some experience with OpenOpt for MATLAB/Octave before. You can take a look at reviews and download it from here

The Python version now includes 2 non-smooth UC solvers: ralg (an implementation of Naum Z. Shor's r-algorithm) and ShorEllipsoid (Naum Z. Shor's modified ellipsoid method). The former is intended for medium-scale problems and is used very intensively in lots of software solutions of our non-smooth optimization department (cybernetics institute, Ukrainian academy of sciences); the latter is a very special solver for small-scale problems only (and it requires knowing r0); I added it only because it's very simple to implement.

There is basic graphical output implemented (provided you have matplotlib installed); see example.py

Also, connections to some LP solvers have been made, see exampleLP.py.

I shall connect openopt to the scikits svn within 1-2 days (I need to decide very carefully how to split all my files into subdirectories, and learn how to use svn - previously I used only cvs). Currently you can download OpenOpt for Python from here

WBR, Dmitrey

Cherkassy, Ukraine
