Free Python optimization framework

Sunday, July 27, 2008

integer problems for GLP solver galileo

I have committed a new parameter "useInteger" for the GLP solver galileo. Valid values are 0, 1, True, False (default). If true, galileo will search for a solution with all-integer variables.
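For example (a minimal sketch; the objective and bounds below are illustrative placeholders, and setting useInteger as a prob field is assumed to follow the usual OO parameter conventions):

from openopt import GLP  # or scikits.openopt, depending on your OO version

f = lambda x: (x[0] - 1.8)**2 + (x[1] + 4.2)**2  # made-up objective
p = GLP(f, lb=[-10, -10], ub=[10, 10])
p.useInteger = True  # request an all-integer solution
r = p.solve('galileo')
print(r.xf)  # expected near [2, -4] for this f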

With this, my GSoC schedule is finished. Nevertheless, maybe I'll write some more OO code before the next OO release; first of all I intend to add some more converters (lp2nlp, qp2nlp, lsp2nlp), and maybe some modifications to ralg will be made.

Friday, July 25, 2008

non-linear functions with restricted dom

Some non-linear functions have a much more restricted dom than R^nVars.
For example F(x) = log(x); dom F = R+ = {x: x>0}

I had posted a small manual on using ralg with such problems (via +inf and a contol shift); however, optimization solvers conventionally expect user-provided F(x) = nan when x is out of dom.

So I have made ralg handle doms in accordance with the mentioned convention; now there is no need for the contol shift mentioned above, and using [numpy.]nan instead of +inf is recommended.

An example has been attached to the OO Doc Page.

Note also that some solvers (by the way, all those connected to OO except ralg) require x0 to be in dom(objFunc).
For ralg it doesn't matter, provided that for each point x outside dom(objFunc) there is at least one active constraint (usually lb <= x <= ub are used, but any other constraints, including non-linear, are OK).
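Here is a minimal sketch of the convention (my own illustration, not the attached example):

from numpy import log, nan
from openopt import NLP  # or scikits.openopt, depending on your OO version

def f(x):
    x = float(x)
    if x <= 0:
        return nan        # x is out of dom f = {x: x > 0}
    return x**2 - log(x)  # smooth inside dom, minimum at x = 1/sqrt(2)

# the constraint lb <= x keeps every point outside dom f infeasible,
# which is what ralg requires; for ralg, x0 itself may be out of dom f
p = NLP(f, x0=-1.0, lb=0.0)
r = p.solve('ralg')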

Note: some ralg iters have objFunc = nan:

-----------------------------------------------------
solver: ralg   problem: unnamed   goal: minimum
iter    objFunVal    log10(maxResidual)
   0    1.925e+04    -100.00
  50    nan             1.32
 100    nan            -0.26
 150    1.045e+03    -100.00
 200    1.027e+03    -100.00
 250    nan            -3.49
 300    nan            -3.19
 350    1.016e+03    -100.00
 400    nan            -5.20
 450    nan            -6.75
 500    nan            -5.72
 550    nan            -5.91
 555    1.015e+03    -100.00
istop: 4 (|| F[k] - F[k-1] || < ftol)
Solver: Time Elapsed = 2.45 CPU Time Elapsed = 2.37
objFunValue: 1015.082 (feasible, max constraint = 0)

Saturday, July 19, 2008

New OO class: LLAVP

A new OO class is ready: LLAVP - linear least absolute value problem, aka linear least absolute deviation problem:

||C*x - d||_1 + damp*||x - X||_1 -> min

subject to box-bound constraints:
lb <= x <= ub (some coords of lb and ub can be +/- inf)

(some more constraints are intended to be added)

Currently the single available solver is the converter llavp2nsp. Usage example:
r = p.solve('nsp:ralg')
Recommended solvers to try are ralg, ipopt (using maxIter for ipopt is recommended) and algencan.
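A fuller sketch (the LLAVP constructor arguments are assumed by analogy with LLSP and may differ from the actual class; treat this as illustrative):

from numpy import arange, ones, sin
from openopt import LLAVP  # or scikits.openopt, depending on your OO version

C = sin(arange(15.0)).reshape(5, 3)  # made-up data
d = ones(5)
p = LLAVP(C=C, d=d, lb=-10*ones(3), ub=10*ones(3))  # ||C*x - d||_1 -> min
r = p.solve('nsp:ralg')  # llavp2nsp converts the problem, ralg solves the NSP
print(r.xf)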

However, note: ipopt and algencan are NLP solvers, and convergence for non-smooth problems like LLAVP is not guaranteed. As for the Naum Z. Shor r-algorithm implemented in ralg, convergence hasn't been proved even for convex non-linear problems yet.

Also, I intended to connect the Fortran-written toms615, but I got an f2py error:
$ f2py -c -m toms615 toms615.f
...
File "/usr/lib/python2.5/site-packages/numpy/f2py/f2py2e.py", line 364, in run_main
raise TypeError,'All blocks must be python module blocks but got %s'%(`postlist[i]['block']`)
TypeError: All blocks must be python module blocks but got 'program'

Friday, July 18, 2008

ALGENCAN diffInt issue

After some experiments with ALGENCAN 2.0.x series I have noticed that decreasing diffInt can be very helpful.

The diffInt parameter is used for obtaining derivatives via finite-difference approximation (when no derivatives are provided by the user). Currently the default value in OO for NLP is 1e-7, and it is recommended to try the values 1e-8, 1e-9, 1e-10, 1e-11.
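For example (a sketch; f and x0 stand in for your own problem data):

from openopt import NLP  # or scikits.openopt, depending on your OO version

p = NLP(f, x0)
p.diffInt = 1e-9  # default is 1e-7; try 1e-8 ... 1e-11 for algencan
r = p.solve('algencan')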

Seems like ALGENCAN is much more sensitive to the gradient precision than other OO-connected NLP solvers.

A drawback of such a small diffInt can arise when some non-linear funcs are hard to evaluate precisely because of rather large numerical noise.

I don't know yet which diffInt value ALGENCAN uses by default, and whether it is fixed or changes somehow during calculations. If the latter, maybe in future I'll turn off the OO-supplied finite-difference gradients and let ALGENCAN evaluate them by itself. The drawback is the following: each separate evaluation of a non-linear func (objfunc or non-linear constraints) is very costly, because a wrapper performs some checks, updates func call counters, translates Python lists, tuples and numpy matrices (which user funcs can return) into the numpy.ndarray type, etc. The call for derivatives, on the other hand, is a little optimized: obtaining, for example, dc/dx is performed faster than separately obtaining (c(x+dx1)-c0)/diffInt, (c(x+dx2)-c0)/diffInt, ..., (c(x+dxn)-c0)/diffInt, where dxk is an all-zeros vector except for coord k with value diffInt.
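Here is a plain-numpy sketch of the finite-difference scheme just described (not OO's actual wrapper code):

from numpy import asfarray, empty, zeros

def fd_gradient(func, x, diffInt=1e-7):
    # forward differences: g[k] = (func(x + dx_k) - func(x)) / diffInt,
    # where dx_k is all-zeros except coord k, which equals diffInt
    x = asfarray(x)
    f0 = func(x)
    g = empty(x.size)
    for k in range(x.size):
        dx = zeros(x.size)
        dx[k] = diffInt
        g[k] = (func(x + dx) - f0) / diffInt
    return g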

Let me also note once again that a good user-provided gradtol value can be helpful (try 1e-1 ... 1e-10).

Probably, in future OO versions gradtol will be renamed to gtol (less to type).

Thursday, July 17, 2008

ALGENCAN 1.0 is no longer supported

I have removed the ALGENCAN 1.0 solver from the svn repository at the demand of my GSoC mentor (it made the svn repository unusable from Windows and Mac OS, since ALGENCAN_oo.py and algencan_oo.py are the same name to those OSes' case-insensitive subversion clients). Still, I think it could have waited till the next OO release.

So now you can use only ALGENCAN v2.x, as the "algencan" solver (lower-case). You should have at least 2.0.3 (for now, the latest is 2.0.4).

Tuesday, July 15, 2008

Introduction to oofun

I have added one more entry to the OO Doc webpage, related to oofun. oofun can handle dependency info and provides some other benefits.

For more info see the entry.

Of course, the concept of oofun is not something new; for example, even some of my department co-workers implement something similar using Visual C++ and Rational Rose.

I intend to continue oofun development.

Monday, July 14, 2008

ALGENCAN 2.0.3beta has been released

ALGENCAN developers have informed me that they have fixed the bug with the Python API installation, and today the corrected ALGENCAN 2.0.3 beta has been released (it works well with OO as the "algencan" solver).

Still, as I had mentioned, my examples (nlp_bench_1, nlp3) work better with ALGENCAN 1.0, maybe because some funcs used there are too far from being quadratic.

Saturday, July 12, 2008

changes in ralg and derivatives check


  • Some major changes to the NLP/NSP ralg solver.

  • I have committed some changes to the check of user-supplied derivatives.

    The old-style absolute difference is not a good choice, because sometimes the values to compare are like 1.24*10^5 (user-supplied) vs 1.23*10^5 (finite-difference), while other times they are like 1.24*10^-5 (user-supplied) vs 1.23*10^-5 (finite-difference).

    The new way is based on comparing
    1 - derivative_user/derivative_finite_diff
    to the maxViolation value

    The parameter maxViolation can be set as a prob field or passed directly, for example p.checkdc(maxViolation=0.05).
    Note that x remains the 1st argument for checkdc, checkdf, checkdh (default p.x0) and can be passed as a kwarg as well: p.checkdh(maxViolation=0.05, x=0.5*(p.lb+p.ub)), which is the same as p.checkdh(0.5*(p.lb+p.ub), maxViolation=0.05).

    So the new text output looks like the one below (produced by the updated examples/checkDerivatives.py file).

    Note that RD (relative difference) is defined as

    int(ceil(log10(abs(Diff) / maxViolation + 1e-150)))

    where

    Diff = 1 - (info_user + 1e-150)/(info_numerical + 1e-150)

    (those small numbers are used to suppress zeros; a small sketch of this computation is given after the output below)
    ################################

    OpenOpt checks user-supplied gradient df (shape: (30,) )
    according to prob.diffInt = [9.9999999999999995e-08]
    lines with 1 - info_user/info_numerical greater than maxViolation = 0.01 will be shown
    df num   user-supplied   numerical    RD
    0        +7.000e+00      -8.000e+00   3
    8        -2.291e+00      -1.029e+01   2
    max(abs(df_user - df_numerical)) = 14.9999995251
    (is registered in df number 0)
    ========================
    OpenOpt checks user-supplied gradient dc (shape: (2, 30) )
    according to prob.diffInt = [9.9999999999999995e-08]
    lines with 1 - info_user/info_numerical greater than maxViolation = 0.01 will be shown
    dc num   i,j: dc[i]/dx[j]   user-supplied   numerical    RD
    32       1 / 2              +1.417e+01      -8.323e-01   4
    max(abs(dc_user - dc_numerical)) = 14.9999999032
    (is registered in dc number 32)
    ========================
    OpenOpt checks user-supplied gradient dh (shape: (2, 30) )
    according to prob.diffInt = [9.9999999999999995e-08]
    lines with 1 - info_user/info_numerical greater than maxViolation = 0.01 will be shown
    dh num   i,j: dh[i]/dx[j]   user-supplied   numerical    RD
    58       1 / 28             -4.474e+01      -5.974e+01   2
    max(abs(dh_user - dh_numerical)) = 14.9999962441
    (is registered in dh number 58)
    ========================
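As a cross-check, here is a small sketch of the RD computation defined above (the function name is mine, not OO's):

from numpy import ceil, log10

def relative_difference(info_user, info_numerical, maxViolation=0.01):
    tiny = 1e-150  # suppresses zeros, as noted above
    Diff = 1 - (info_user + tiny) / (info_numerical + tiny)
    return int(ceil(log10(abs(Diff) / maxViolation + tiny)))

For instance, for the first df line above: Diff = 1 - 7/(-8) = 1.875, and ceil(log10(1.875/0.01)) = ceil(2.27...) = 3, matching RD = 3.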


Friday, July 4, 2008

First converter is ready: llsp2nlp

I have committed some changes, so the LLSP class has a new form:
0.5*||C*x - d||^2 + 0.5*damp*||x - X||^2 + f^T*x -> min

subject to:
lb <= x <= ub

A*x <= b and Aeq*x = beq constraints are intended to be added in the future.

Using the llsp2nlp converter is done via

r = p.solve('nlp:NLPSolverName')

for example

r = p.solve('nlp:ralg')

For more details see the LLSP webpage.
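A complete sketch (the LLSP constructor arguments are assumed from the objective above; treat them as illustrative):

from numpy import eye, ones, zeros
from openopt import LLSP  # or scikits.openopt, depending on your OO version

C = eye(4) + 0.1 * ones((4, 4))  # made-up data
d = ones(4)
p = LLSP(C=C, d=d, damp=0.1, X=zeros(4), lb=-ones(4), ub=ones(4))
r = p.solve('nlp:ralg')  # llsp2nlp converts the problem, ralg solves the NLP
print(r.xf)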

Thursday, July 3, 2008

some changes

1. I have committed lots of changes, mostly code cleanup (still some more remains to be done) + some bugfixes.
2. The objective function in LLSP has been changed from ||Cx - d|| (LAPACK-, BVLS-style) to 0.5*||Cx - d||^2 (the style of BCLS, lsqlin and lots of other LLSP solvers). These changes are required to simplify writing the llsp2nlp converter. Also, some optional fields like those from BCLS will be added to LLSP soon.