I'm glad to announce that a new GLP solver has been connected to OO: pswarm.
Note that along with box-bound constraints
lb <= x <= ub
this one is capable of handling linear inequalities:
Ax <= b
However, I encountered some troubles on my KUBUNTU (such as "recompile with -fPIC" and then, after fixing that, "undefined symbol opt"), so I have connected pswarm under WinXP instead. I have mailed the PSwarm author about the issues, but he hasn't responded yet.
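For reference, the two constraint types mentioned above can be sketched as a feasibility check with NumPy (the matrices below are hypothetical illustration data, not anything from PSwarm itself):

```python
import numpy as np

def is_feasible(x, lb, ub, A, b):
    """Check box bounds lb <= x <= ub and linear inequalities A x <= b."""
    x = np.asarray(x, dtype=float)
    in_box = np.all(lb <= x) and np.all(x <= ub)
    lin_ok = np.all(A @ x <= b)
    return bool(in_box and lin_ok)

lb = np.array([0.0, 0.0])
ub = np.array([2.0, 2.0])
A = np.array([[1.0, 1.0]])   # encodes x0 + x1 <= 3
b = np.array([3.0])

print(is_feasible([1.0, 1.0], lb, ub, A, b))  # True
print(is_feasible([2.0, 2.0], lb, ub, A, b))  # False: violates x0 + x1 <= 3
```

A solver such as pswarm keeps its iterates inside exactly this feasible region.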
Tuesday, August 19, 2008
Monday, August 18, 2008
some changes
- minor changes for ralg
- minor changes for tests. In order to check openopt after installation, users usually run the files (relevant to their problem class) from /examples; now you can alternatively use the tests from the /tests directory. First of all, /tests/nlp1.py is recommended, because it uses ralg and hence doesn't require installing any other 3rd-party solvers. Maybe in the future openopt tests will use the nose or texttest framework.
Thursday, August 7, 2008
Updates in 1) oofun doc 2) ralg
1. Some changes to the NLP/NSP solver ralg have been made. A personal wiki page for ralg has been committed.
2. I have added a new entry to the openopt doc page about how to use oofun to prevent recalculating the same parts of code.
(This is a rather common problem, mentioned for example here and here).
Let me note once again that calling df(x1) doesn't guarantee that f(x1) (i.e. f at the same point) was called immediately before the df call (and the same applies to c, h, dc, dh). Still, in 90-95% of cases it is true, so it would be convenient to check and (if the input, according to dependencies, is the same) substitute the already-calculated values automatically.
So oofun is an implementation of one possible solution to the issue. There are some other convenient tools based on oofun usage already available in the OO code. Still, lots of other oofun-related work in the Kernel remains to be done (recursive 1st derivatives, implementation of oovar, etc).
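The idea can be sketched as follows (hypothetical names and a toy objective, not the actual oofun API): cache the expensive intermediate shared by f and df, keyed by the input point, so that whichever of them is called second at the same x reuses the cached value.

```python
import numpy as np

class CachedPart:
    """Cache an expensive intermediate shared by f and df (sketch only)."""
    def __init__(self, func):
        self.func = func
        self._key = None
        self._val = None
        self.n_evals = 0  # count real evaluations, for demonstration

    def __call__(self, x):
        key = np.asarray(x, dtype=float).tobytes()
        if key != self._key:       # new point: recompute and remember it
            self._val = self.func(x)
            self._key = key
            self.n_evals += 1
        return self._val           # same point: reuse the cached value

# Expensive shared part: r = A x (toy example)
A = np.array([[1.0, 2.0], [3.0, 4.0]])
shared = CachedPart(lambda x: A @ x)

def f(x):   # objective: 0.5 * ||A x||^2
    r = shared(x)
    return 0.5 * (r @ r)

def df(x):  # gradient: A^T (A x) -- reuses the cached A x
    r = shared(x)
    return A.T @ r

x1 = np.array([1.0, 1.0])
f(x1)
df(x1)
print(shared.n_evals)  # 1: the shared part was computed once for both calls
```

If df is called at a different point, the cache key no longer matches and the shared part is recomputed, so correctness is preserved in the 5-10% of cases where f and df are evaluated at different points.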
Of course the oofun concept isn't something new; for example, something like it is present in YALMIP (a free MATLAB toolbox for some numerical optimization problems that translates YALMIP scripts to MATLAB). Also, there is some similar work in progress in our dept using Visual C and Rational Rose.
Sunday, August 3, 2008
IBM to acquire ILOG (CPLEX developer)
I've just learned: IBM and ILOG announced that they have signed an agreement regarding the proposed acquisition of ILOG by IBM (one of the URLs is here).
Let me remind those who are not closely connected to numerical optimization that ILOG develops CPLEX (and some other, less famous, numerical optimization solvers), so ILOG is known as the leader among commercial LP/MILP (and probably QP) solver vendors (along with other well-known commercial ones - XA, XPRESS, Mosek, LOQO, etc). You can check some benchmark results here.
Also, IBM is known as a sponsor of the well-known COIN-OR project, which hosts lots of free solvers under the CPL (an OSI-approved license with copyleft), including IPOPT, the most famous free NLP solver (BTW, Python programmers can use this one from OO, but currently for Linux OSes only).
So there is some chance CPLEX will move from commercial-only status to a more permissive one.
some changes for ralg
I have committed some changes to the NLP/NSP solver ralg (some to speed it up and some for better handling of NaNs when x is outside the domain of the objective function or of some non-linear constraints).
One more parameter (mostly for NLP/NSP) has been added: isNaNInConstraintsAllowed (default False). It determines whether nan (not a number) is allowed in non-linear constraints at the optimal point (mostly for inequalities: p.c(r.xf)).
The non-default value True is needed very seldom, only for very special cases.
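A minimal sketch of the idea behind the parameter (a hypothetical helper, not the actual ralg implementation): by default a NaN among the constraint values counts as a violation, while with the flag set the NaN entries are skipped and only the finite ones are checked against the tolerance.

```python
import numpy as np

def constraints_satisfied(c_vals, tol=1e-6, nan_allowed=False):
    """Check non-linear inequality constraint values c(x) <= tol.

    nan_allowed=False (the default): any NaN in c(x) means the point
    is treated as infeasible. nan_allowed=True: NaN entries are
    ignored and only the finite values are checked.
    """
    c_vals = np.asarray(c_vals, dtype=float)
    nan_mask = np.isnan(c_vals)
    if nan_mask.any() and not nan_allowed:
        return False
    return bool(np.all(c_vals[~nan_mask] <= tol))

print(constraints_satisfied([0.0, -1.0]))                      # True
print(constraints_satisfied([0.0, np.nan]))                    # False by default
print(constraints_satisfied([0.0, np.nan], nan_allowed=True))  # True
```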
Also: some code cleanup for ralg and some examples, plus some changes to tests/chain.py.