Nonlinear Optimization
Research grant POCTI/35059/MAT/2000 funded by FCT.
January 2001 - December 2004
Doctoral Members: Joaquim João Júdice and Luís Nunes Vicente (PI)
Graduate Students: Catarina Avelino, Maria C. Brás, Ana M. Monteiro, and Renata D. Silva
Consultants: Matthias Heinkenschloss, Michael Ulbrich, Stefan Ulbrich, and Stephen J. Wright
This grant supports research on the numerical solution of nonlinear
programming problems: optimization problems in which an objective function of several
variables is minimized over a set defined by a number of equality and
inequality constraints. The objective and constraint functions are assumed
to be smooth, nonlinear, and not necessarily convex. The goal is to find a stationary
or critical point (preferably a local minimizer). It is assumed that the optimization
problem is posed in finite dimensions, although it might result from the discretization
of an infinite-dimensional optimization problem. We are not concerned with "global
optimization", where the best local minimizer is sought.
Many nonlinear optimization problems, typically those arising from applications,
pose several difficulties for nonlinear optimization algorithms and codes, including
numerical instability, convergence to spurious minimizers, and
slow convergence. The algorithms developed and analyzed in this project
address, depending on the origin of the problem, some of the following pitfalls:
- large scale;
- high nonlinearity and unavailability of derivatives;
- mesh dependence of the optimization algorithms when applied to discretized problems;
- degeneracy (linear dependence of the gradients of the active or binding constraints)
and lack of strict complementarity (primal and dual complementary variables
both vanishing at the same time); both notions are made precise below.
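In terms of the first-order (Karush-Kuhn-Tucker) conditions, and stated here only
as a standard reminder, an inequality constraint c_i(x) \ge 0 with multiplier
\lambda_i \ge 0 must satisfy the complementarity condition

    \lambda_i \, c_i(x^*) = 0 \quad \text{for all } i

at a stationary point x^*. Strict complementarity fails when \lambda_i = 0 and
c_i(x^*) = 0 hold simultaneously for some i, and degeneracy means that the
gradients \nabla c_i(x^*) of the constraints active at x^* are linearly dependent.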
Research that is already underway includes:
- Interior-point methods, a technique used to handle the inequality constraints.
The minimizer is approached through the "interior" of the feasible set.
Interior-point methods are based on the application of Newton's method
to a perturbed version of the first-order optimality conditions.
The perturbation incorporates the (numerically efficient) notion
of centrality, forcing the iterates to stay well away from the
boundary of the feasible set.
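Concretely, for a problem of minimizing f(x) subject to c(x) = 0 and x \ge 0,
the perturbed first-order conditions take the standard primal-dual form
(a textbook statement, given here for illustration):

    \nabla f(x) - \nabla c(x)^T y - z = 0, \qquad
    c(x) = 0, \qquad
    X Z e = \mu e, \quad (x, z) > 0,

where y and z are the multiplier estimates, X = \mathrm{diag}(x),
Z = \mathrm{diag}(z), and e = (1, \dots, 1)^T. Newton's method is applied to
this system for a fixed \mu > 0, and \mu is then driven to zero; the term
\mu e is precisely the perturbation enforcing centrality.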
- Filter methods, a scheme to enforce "global convergence" from arbitrary starting
points. No "merit functions" are used to balance function value and constraint
residuals. Instead, the concept of non-dominance is borrowed from multi-criteria
optimization to build a "filter" able to reject poor trial iterates.
The filter mechanism has been used for the first
time to globalize interior-point methods:
M. Ulbrich, S. Ulbrich, and L. N. Vicente,
A Globally Convergent Primal-Dual Interior-Point Filter Method for
Nonlinear Programming,
Mathematical Programming, 100 (2004) 379-410.
The new primal-dual interior-point filter method is being numerically tested and
further studied.
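As an illustration of the filter mechanism, here is a minimal Python sketch;
the acceptance margins beta and gamma, the helper names, and the simplified
acceptance rule are ours, not the exact conditions of the paper.

    # A filter is a list of (theta, f) pairs, where theta measures the
    # constraint violation and f is the objective value.

    def acceptable(theta, fval, filter_entries, beta=0.99, gamma=1e-4):
        """A trial point is acceptable if, against every filter entry
        (theta_j, f_j), it sufficiently improves either feasibility or
        the objective value (non-dominance)."""
        return all(theta <= beta * theta_j or fval <= f_j - gamma * theta_j
                   for theta_j, f_j in filter_entries)

    def add_entry(theta, fval, filter_entries):
        """Add (theta, fval) to the filter and discard entries it dominates."""
        kept = [(t, f) for (t, f) in filter_entries if t < theta or f < fval]
        kept.append((theta, fval))
        return kept

    # Usage: start from an empty filter, test trial iterates, and record
    # the accepted ones (when the algorithm prescribes adding them).
    filt = []
    if acceptable(0.5, 1.2, filt):
        filt = add_entry(0.5, 1.2, filt)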
- Augmented Lagrangian penalty methods, a more classical class of methods
in which the original problem is replaced by a sequence of penalized problems,
are also being studied. A new scheme for updating the multipliers (dual variables)
has been developed for distributed optimal control or design problems with
bounds on both the control and the state variables:
L. N. Vicente,
Local Analysis of a New Multipliers Method,
European Journal of Operational Research, 143 (2002) 432-451.
The new scheme considers an implicit update of the multipliers corresponding
to the equality constraints, allowing the use of
problem-dependent adjoint formulas.
A sequence of penalized problems is solved
in which only the bound constraints
on the variables are left unpenalized;
the solution of each penalized problem then determines
an explicit update of the multipliers corresponding to the
inequality constraints, automatically maintaining their nonnegativity.
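For illustration, the classical projected first-order update for the multipliers
of inequality constraints c(x) \ge 0 can be sketched in a few lines of Python;
this is the standard formula, shown under our own naming, while the paper derives
its explicit update from the solution of the penalized subproblem.

    import numpy as np

    def update_inequality_multipliers(lam, c_val, rho):
        """Projected update lam_i <- max(0, lam_i - rho * c_i(x)).

        lam   : current multiplier estimates (one per inequality constraint)
        c_val : constraint values c(x) at the subproblem solution
        rho   : penalty parameter
        The projection max(0, .) keeps the new multipliers nonnegative
        automatically, mirroring the property noted above."""
        return np.maximum(0.0, lam - rho * c_val)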
The new multipliers scheme has also been studied in the context of general
nonlinear programming (NLP):
C. P. Avelino and L. N. Vicente,
Updating the Multipliers Associated with Inequality Constraints
in an Augmented Lagrangian Multiplier Method,
Journal of Optimization Theory and Applications, 119 (2003) 215-233.