Combining Pattern Search and Filter Algorithms for Derivative Free Optimization
Charles Audet
The class of pattern search algorithms suggested and analyzed by Lewis and Torczon is a useful tool for derivative-free linearly constrained optimization. They have defined and analyzed an augmented Lagrangian version for general constraints, based on inexact pattern search solutions of the bound constrained subproblems of Conn, Gould and Toint. Here, we suggest and analyze a pattern search method for general constraints that uses filter methods for step acceptance. Our algorithm neither computes nor approximates any derivatives, and it uses no penalty or Lagrangian function. It is very similar to the original pattern search algorithms in its division into SEARCH and POLL steps. Our algorithm accepts a trial point if it is unfiltered, i.e., if the pair formed by its objective function value and its constraint violation function value is not dominated by any previously considered pair. We show that the algorithm generates an accumulation point satisfying appropriate first-order optimality conditions.
Joint with: John Dennis
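As a minimal illustration of the acceptance rule described above: a filter can be held as a list of (f, h) pairs, with h the constraint violation, and a trial point is unfiltered when no stored pair dominates its pair. This is a hedged sketch with illustrative names, not the paper's implementation.

    # Sketch of the filter acceptance test (illustrative names only).
    # A pair p = (f_p, h_p) dominates q = (f_q, h_q) if p is no worse
    # in both the objective f and the constraint violation h.

    def dominates(p, q):
        return p[0] <= q[0] and p[1] <= q[1]

    def is_unfiltered(trial, filter_pairs):
        # the trial point is acceptable if no filter entry dominates it
        return not any(dominates(p, trial) for p in filter_pairs)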
Step Computation in a Trust Region Interior Point Method
Richard H. Byrd
In computing a step in an interior point method for nonlinear inequality constrained optimization, the use of a trust region provides a unified structure for dealing with negative curvature and rank deficiency. However, in an interior point context, solving the complete trust region subproblem exactly is intractable. Therefore, most practical methods use a cheap approximate solution to the trust region problem. Here, we consider several of these approximate methods and point out that methods that are theoretically adequate can have serious drawbacks. We present some new approaches to this problem and argue that in many cases simpler is better. We also consider to what extent these approximate approaches provide the benefits promised by trust regions in the cases of negative curvature and rank deficiency.
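By way of illustration, one classical cheap approximate solution of the trust region subproblem min g'd + d'Bd/2 subject to ||d|| <= delta is the dogleg step sketched below, which assumes B positive definite; this is a generic textbook device, not necessarily one of the variants examined in the talk.

    import numpy as np

    def dogleg_step(g, B, delta):
        """Approximate minimizer of g'd + d'Bd/2 subject to ||d|| <= delta,
        for positive definite B (the classical dogleg path)."""
        pN = -np.linalg.solve(B, g)               # full (Newton) step
        if np.linalg.norm(pN) <= delta:
            return pN
        pC = -(g @ g) / (g @ B @ g) * g           # unconstrained Cauchy point
        if np.linalg.norm(pC) >= delta:
            return -(delta / np.linalg.norm(g)) * g
        d = pN - pC                               # walk from pC toward pN
        a, b, c = d @ d, 2 * (pC @ d), pC @ pC - delta**2
        tau = (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)
        return pC + tau * d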
An Introduction to Filter Methods
Roger Fletcher
Filter methods provide an alternative to penalty or augmented Lagrangian methods for the globalisation of methods for nonlinear programming. They are based on the dominance concept of multiobjective optimization. The talk will describe the basic idea and its use with a trust region SQP method. Various options are possible for developing the approach, and these will be discussed. One particular choice has been to avoid any recourse to backtracking, and this gives the methods an unusual character.
A large-scale NLP code has been developed and has given promising results on a wide selection of CUTE and AMPL models. It is available for use through NEOS. Various heuristics were used in this code in an attempt to eliminate situations in which the method might fail to converge. Since that time, the possibility of developing convergence proofs for filter-type methods has been considered, and a first attempt is the paper of Fletcher, Leyffer and Toint (available as Dundee NA report NA/183), which relates to an SLP filter algorithm. This result shows that some of the heuristics in the original code could be dispensed with. A second pilot SQP code, based on the ideas in this convergence proof, has been developed, with equally good numerical outcomes.
Since that time, more thought has been given to analysing other filter-type algorithms, to identify what algorithmic choices are available when developing such algorithms. I hope to conclude by speaking about two more general situations. One is where the SLP step is used to determine an active set that can be used to build an equality constrained QP step (similar to the ideas in an early paper of Fletcher and Sainz de la Maza). The other is to apply SLP-based methods to the problem of solving systems of nonlinear algebraic equations and inequalities (with an application to the ``feasibility restoration phase'' of a filter-type method for optimization).
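To make the dominance idea concrete, a filter can be maintained as a list of mutually nondominated (f, h) pairs, where h measures constraint violation; below is a sketch under these assumptions (practical codes such as the one described above additionally place small envelopes around filter entries, which are omitted here).

    def update_filter(filter_pairs, f_new, h_new):
        """Try to add the pair (f_new, h_new) to the filter."""
        # reject the trial pair if some stored entry dominates it
        if any(f <= f_new and h <= h_new for (f, h) in filter_pairs):
            return filter_pairs, False
        # otherwise accept it and discard the entries it dominates
        kept = [(f, h) for (f, h) in filter_pairs
                if not (f_new <= f and h_new <= h)]
        return kept + [(f_new, h_new)], True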
An Algorithm for Solving Strongly Convex Optimization Problems using a Feasible Interior Point Approach and Quasi-Newton Hessian Approximations - Global Convergence Results
Jean Charles Gilbert
In this talk we present an algorithm for solving strongly convex optimization problems, which combines the interior point approach with quasi-Newton updates. This algorithm can be useful when the problem has many inequality constraints and when second derivatives are not available.
The algorithm uses two kinds of iterations. The inner iterations are used to find an approximate solution of the optimality conditions perturbed by a fixed positive parameter mu, while an outer iteration is the collection of inner iterations corresponding to the same value of mu. We will discuss the convergence of the algorithm when the Hessian of the Lagrangian is approximated using the BFGS formula: the sequence of inner iterates converges q-superlinearly for fixed mu, and the limit points of the outer iterate sequence are solutions of the problem. In the case of strict complementarity, the whole sequence of outer iterates converges to the unique analytic center of the optimal set. In a sense, this result extends the unconstrained BFGS convergence theory to inequality constrained optimization.
Joint with:
P. Armand (Limoges, France),
C. Gonzaga (Santa Catarina, Brazil),
S. Jégou-Jan (Toulouse, France), and
E. Karas (Santa Catarina, Brazil)
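For concreteness, the perturbed optimality conditions mentioned above are, for min f(x) subject to c(x) >= 0, the KKT conditions with complementarity c_i(x) z_i = 0 replaced by c_i(x) z_i = mu. A sketch of the residual that the inner iterations drive to (approximate) zero for fixed mu follows, with all names illustrative.

    import numpy as np

    def perturbed_kkt_residual(x, z, mu, grad_f, c, jac_c):
        """Residual of the optimality conditions perturbed by mu > 0 for
        min f(x) subject to c(x) >= 0:
            grad f(x) - J_c(x)^T z = 0
            c_i(x) * z_i           = mu   for all i
        Inner iterations reduce this residual below a tolerance for fixed mu;
        outer iterations then decrease mu."""
        r_dual = grad_f(x) - jac_c(x).T @ z
        r_comp = c(x) * z - mu
        return np.concatenate([r_dual, r_comp])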
Interior Methods for Nonlinear Programming
Philip E. Gill
In the late 1970's interior methods were considered a seldom-used ``oddity'' in the optimization toolkit, usually being reserved for problems with an objective function that is not defined or meaningful outside the feasible region. However, various developments in the 1980's and 90's resulted in a resurgence of interest in interior methods for nonlinear programming. Unquestionably, the biggest impetus has been the demonstrated success of interior methods on large convex problems. Another factor has been the ability to use interior-point techniques to treat nonlinear inequality constraints in trust-region methods. Ironically, both of these developments have largely resulted in the abandonment of the idea of using interior methods to stay feasible with respect to a set of inequality constraints.
The first part of this talk will review some of the main issues in the formulation of interior methods for nonlinear programming, including the form of the constraints, the form of the Newton equations, treatment of indefiniteness, convergence properties, incorporation of equality constraints, estimation of Lagrange multipliers, and the crucial role of numerical linear algebra. The second part will focus on some recent developments in interior methods and reconsider the role of interior methods as ``true'' feasible point techniques.
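As background for the form of the Newton equations among the issues listed above, here is a sketch of the standard (unsymmetrized) primal-dual system for the log-barrier subproblem min f(x) - mu * sum_i log c_i(x); this is the generic textbook form, with illustrative names, and not a prescription of any particular method from the talk.

    import numpy as np

    def primal_dual_newton_system(x, z, mu, grad_f, hess_L, c, jac_c):
        """Assemble the primal-dual Newton system for Newton's method on
            grad f(x) - J_c(x)^T z = 0,   c_i(x) z_i = mu,
        where hess_L is the Hessian of the Lagrangian f - z^T c."""
        J, cx = jac_c(x), c(x)
        n, m = len(x), len(cx)
        K = np.zeros((n + m, n + m))
        K[:n, :n] = hess_L(x, z)
        K[:n, n:] = -J.T
        K[n:, :n] = np.diag(z) @ J
        K[n:, n:] = np.diag(cx)
        rhs = -np.concatenate([grad_f(x) - J.T @ z, cx * z - mu])
        return K, rhs    # solve K @ [dx; dz] = rhs for the step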
Numerical Methods for Large-Scale Nonconvex Quadratic Programming
Nick Gould
In this talk, we shall describe two quadratic programming methods that lie at the heart of our forthcoming nonlinear programming solver GALAHAD. Both algorithms are designed to handle large, sparse, nonconvex problems, the ultimate aim being to solve problems in hundreds of thousands of unknowns.
Our first method is based on a feasible-interior-point trust-region approach. At each outer iteration, an appropriate model of the logarithmic barrier function is minimized within an appropriately-shaped ellipsoidal trust region. This subproblem is solved (approximately) using a preconditioned Lanczos/conjugate gradient-based approach, in which the method may move around the boundary of the trust region in search of a better boundary solution, if one exists. The preconditioner used aims to mimic any ill-conditioning resulting from the barrier model, and this has consequences both for the shape of the trust region and for the underlying convergence theory. We give numerical results that suggest that the method is indeed able to solve some problems of the size we had been aiming for.
Our second method is of the active-set variety, and, although general in scope, is intended within GALAHAD to deal with the case where a good estimate of the optimal active set has been determined (and thus relatively few iterations will be required). The method is iterative at two levels, one level relating to the selection of the current active set, and the second due to the method used to solve the equality-constrained problem for this active set. A preconditioned conjugate gradient method is used for this inner iteration, with the preconditioner chosen especially to ensure feasibility of the iterates. The preconditioner is updated at the conclusion of each outer iteration to ensure that this feasibility requirement persists. The well-known equivalence between the conjugate-gradient and Lanczos methods is exploited when finding directions of negative curvature.
Joint with: Philippe Toint (the first part also with Andy Conn and Dominique Orban)
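In its simplest form, the Lanczos/conjugate-gradient approach to the trust-region subproblem is the Steihaug-Toint truncated CG sketched below (Euclidean norm, no preconditioner); the method described above additionally uses a barrier-mimicking preconditioner and continues around the boundary via the Lanczos connection, which the sketch omits.

    import numpy as np

    def truncated_cg(B, g, delta, tol=1e-8, max_iter=100):
        """Steihaug-Toint truncated CG for min g'd + d'Bd/2, ||d|| <= delta:
        plain CG, stopped at the boundary when negative curvature is met
        or when the iterates would leave the trust region."""
        d = np.zeros_like(g)
        r, p = g.copy(), -g.copy()
        for _ in range(max_iter):
            Bp = B @ p
            kappa = p @ Bp
            if kappa <= 0:                        # negative curvature found
                return _to_boundary(d, p, delta)
            alpha = (r @ r) / kappa
            if np.linalg.norm(d + alpha * p) >= delta:
                return _to_boundary(d, p, delta)  # step would leave the region
            d = d + alpha * p
            r_new = r + alpha * Bp
            if np.linalg.norm(r_new) < tol:
                return d
            p = -r_new + ((r_new @ r_new) / (r @ r)) * p
            r = r_new
        return d

    def _to_boundary(d, p, delta):
        """Largest tau >= 0 with ||d + tau*p|| = delta."""
        a, b, c = p @ p, 2 * (d @ p), d @ d - delta**2
        tau = (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)
        return d + tau * p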
Affine-Scaling Interior-Point Newton Methods for Problems with Simple Bounds
Matthias Heinkenschloss
We discuss a class of affine-scaling interior-point methods for bound constrained optimization problems. The emphasis of this talk is on problems which violate strict complementarity at the solution. The methods are modifications of the affine-scaling interior-point Newton methods introduced by T.F. Coleman and Y. Li. There are two modifications. One is a modification of the scaling matrix; the other is the use of a projection of the step to maintain strict feasibility, rather than a simple scaling of the step. Under the strong second order sufficient optimality conditions our algorithms converge locally q-superlinearly or q-quadratically. Global convergence of the algorithms can be achieved by embedding them into a trust-region algorithm via a subspace minimization approach. Numerical results for the CUTE test set will be presented. Extensions of our results will be discussed.
Joint with: Michael Ulbrich and Stefan Ulbrich
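For orientation, a sketch of the Coleman-Li affine scaling for min f(x) subject to l <= x <= u: the scaling vector v(x) measures the distance to the bound toward which the negative gradient points, and first-order optimality can be written componentwise as |v(x)| * grad f(x) = 0. The talk's methods modify this scaling; the modification itself is not reproduced here.

    import numpy as np

    def coleman_li_v(x, g, l, u):
        """Coleman-Li scaling vector v(x) for bounds l <= x <= u, g = grad f(x).
        The scaling matrix is commonly taken as D(x) = diag(|v(x)|**0.5)."""
        v = np.empty_like(x)
        for i in range(len(x)):
            if g[i] < 0:
                v[i] = x[i] - u[i] if np.isfinite(u[i]) else -1.0
            else:
                v[i] = x[i] - l[i] if np.isfinite(l[i]) else 1.0
        return v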
A Method for Solving Nonconvex Semidefinite Programs
Florian Jarre
We present a new method for finding a local minimizer of a smooth unconstrained nonconvex minimization problem. Here, the usual linear systems of the Newton step for convex minimization are replaced by quadratically constrained quadratic programs (QQPs). These QQPs have special structure and can be solved directly. In addition, the QQPs allow for a special line search or plane search which makes the algorithm applicable to nonconvex programs.
A comparison with other approaches for smooth nonconvex minimization is given, and the application to barrier subproblems in interior-point methods is discussed. The method is then generalized to equality constrained nonlinear programs. Preliminary but promising numerical results for nonconvex bilinear semidefinite programs conclude the talk.
Optimal Static Output Feedback Design by using a Trust Region Interior Point Method
Friedemann Leibfritz
We consider the problem of designing feedback control laws when a complete set of state variables is not available. The resulting nonlinear and nonconvex matrix optimization problem, including SDP constraints, for determining the optimal feedback gain will be solved by a trust region interior point approach. The algorithm will be discussed in some detail. Finally, using test examples from optimal output feedback design, we demonstrate the usefulness of this approach numerically.
Joint with: Ekkehard Sachs
Filter Methods beyond NLP
Sven Leyffer
In this talk we extend the filter methodology to a class of nonsmooth optimization problems. These problems minimize a nonsmooth objective subject to a nonsmooth constraint. We present an algorithm which promotes global convergence through the use of a filter and handles nonsmoothness with a bundle-based approach.
The proposed algorithm collects a bundle of subgradients of the nonsmooth functions and solves a cutting plane model for a step. The step is accepted if it reduces either the objective or the constraint violation relative to the filter.
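A sketch of the cutting plane model built from the bundle, written for the objective alone (the nonsmooth constraint receives an analogous model); the bundle entries are illustrative names, and points and subgradients are assumed to be numpy arrays.

    def cutting_plane_model(bundle, x):
        """Piecewise-linear lower model m(x) = max_j [ f(y_j) + g_j'(x - y_j) ]
        built from a bundle of triples (y_j, f(y_j), g_j), where g_j is a
        subgradient at y_j.  Steps are proposed by minimizing such models,
        usually with a stabilizing proximal or trust-region term."""
        return max(f_j + g_j @ (x - y_j) for (y_j, f_j, g_j) in bundle)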
One disadvantage of bundle methods is that they ignore second order behaviour. We indicate how second order information can be computed efficiently and how it can be included in the cutting plane model of the nonsmooth problem.
Problems of this type arise, for instance, in the solution of stochastic Quadratic Programming problems. In these problems, here-and-now (first stage) decisions must be made based on the present objective and constraints, while future (second stage) recourse actions are taken based on the realization of random events.
Stochastic QPs give rise to very large QPs with block-angular structure which can be exploited by a decomposition approach. This approach gives rise to a nonsmooth master problem in the first stage variables. In this master problem a piecewise quadratic is minimized subject to a piecewise linear constraint.
Time permitting, we indicate how filter methods can be extended to solve Mathematical Programs with Equilibrium Constraints.
Affine Scaling Interior Point Methods Without Perturbation
Yuying Li
We discuss interior point methods with a local Newton process based directly on the complementarity conditions, without perturbation. Globalization methods for this local Newton process are examined for solving nonlinear programming problems. Nonlinear minimization problems with bound constraints, linear inequality constraints, and general nonlinear constraints are used for illustration. The globalization methods considered maintain a direct tie to the minimization problem and thus guarantee a local minimizer of the original optimization problem that is better than the given starting point. We illustrate the importance of this property in practical applications.
Joint with: Thomas F. Coleman
Global and Local Convergence of a Trust-Region Primal-Dual Interior Point Algorithm for Constrained Nonlinear Programming
Dominique Orban
After describing the framework and convergence properties of a trust-region-based interior point algorithm proposed by Conn, Gould, Orban and Toint, we examine its local behavior and, in particular, conditions under which asymptotic superlinear convergence can be observed under the classical local convergence assumptions of the linear independence constraint qualification and second-order sufficiency at a solution. Rather than using the solution of the current barrier subproblem as the initial point for the next, an extrapolation rule suggested by Dussault is used in the primal case and allows us to asymptotically perform only one inner iteration per outer iteration. In the primal-dual case, no extrapolated point is needed, but a suitable updating rule for the variables has to be chosen. In both cases, one has to suitably update the barrier parameter, and the authors believe that one can achieve superlinear convergence that is actually as close to quadratic as one wishes, at least in the primal-dual case.
Joint with: Andy Conn, Nick Gould, Annick Sartenaer, and Philippe Toint
Interior-Point Methods using Negative Curvature
Francisco J. Prieto
Interior-point methods are very promising for the solution of large-scale nonlinear problems. In the nonconvex case, the use of negative curvature information may allow additional improvements in the behavior of these methods without any significant increase in their computational costs. However, the efficient implementation of such a method requires addressing several issues; for example, how to guarantee the convergence of the method, and how to compute and use the negative-curvature information without incurring excessive costs.
We describe the implementation of two variants of an interior-point method using negative curvature. Particular attention is paid to the conditions under which the negative curvature information is used, to the way in which the descent and negative curvature directions are combined, and to the updating of the barrier parameter.
The convergence properties of the method are analyzed, and the conditions under which the methods are globally convergent and may attain quadratic convergence are discussed.
Finally, computational results on a set of small and medium-sized problems are presented and compared with those obtained using other interior-point implementations, and with the results obtained by the proposed method when negative curvature information is not used.
Joint with: Javier M. Moguerza
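One classical way of combining a descent direction and a negative curvature direction is the curvilinear path of More and Sorensen, x(alpha) = x + alpha^2 d + alpha s, sketched below; whether this particular combination is the one used in the implementations above is not specified here.

    def curvilinear_point(x, d, s, alpha):
        """Point on the curvilinear path x(a) = x + a^2 d + a s, where d is a
        descent direction (g'd < 0) and s a negative curvature direction
        (s'Hs < 0, oriented so that g's <= 0); a backtracking search on
        alpha then enforces sufficient decrease."""
        return x + alpha**2 * d + alpha * s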
A Strongly Polynomial Rounding Procedure Yielding a Maximally Complementary Solution for a Class of Linear Complementarity Problems
Cornelis Roos
We deal with Linear Complementarity Problems (LCPs) with P*(kappa)-matrices. First we establish the convergence rate of the complementary variables along the central path. We provide order estimates in terms of the input data. Using these estimates we present a strongly polynomial rounding procedure: given a point with sufficiently small complementarity gap and close enough to the central path, the rounding procedure produces a maximally complementary solution in at most O(n^3) arithmetic operations.
The result implies that interior point methods not only converge to a complementary solution of P*(kappa) LCPs but, when furnished with our rounding procedure, they can produce a maximally complementary (exact) solution in polynomial time.
Joint with: Tibor Illés, Jiming Peng, and Tamás Terlaky
Topics in Implementing an Interior Point Method for Nonconvex Nonlinear Programming
David F. Shanno
The talk will be concerned with recent work on LOQO, an interior point code for nonconvex nonlinear programming. Topics will include higher order methods, including the predictor-corrector method, variable step bounds to determine maximum step size, different step lengths for the primal and dual variables, alternative methods for handling equality constraints, and the effect of different but equivalent problem formulations, including the effects of problem reduction using presolve. Numerical experience on the Hock and Schittkowski and the CUTE test sets will be given, with comparisons where possible to other available codes.
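The device of different primal and dual step lengths amounts to applying a fraction-to-the-boundary rule separately to the two sets of variables; a sketch with illustrative names (not LOQO's code) follows.

    import numpy as np

    def max_step(v, dv, tau=0.995):
        """Largest alpha in (0, 1] with v + alpha*dv >= (1 - tau)*v,
        keeping the strictly positive variables v away from zero."""
        neg = dv < 0
        if not neg.any():
            return 1.0
        return float(min(1.0, (tau * (-v[neg] / dv[neg])).min()))

    # separate primal and dual step sizes:
    # alpha_p = max_step(x, dx);  alpha_d = max_step(z, dz)
    # x = x + alpha_p * dx;       z = z + alpha_d * dz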
Another Filter SQP Algorithm and its Application in the Context of Derivative Free Optimization
Philippe Toint
An extension of the filter SQP algorithm analyzed by Fletcher, Gould, Leyffer and Toint (report available from Ph. Toint's webpage) will be used as the basis for a review of the global convergence theory associated with such algorithms. In the new algorithmic variant, the approximate solution of the quadratic programming subproblem is not required to satisfy the constraints of this subproblem, but a suitable control on the size of the permitted infeasibility makes it possible to retain the necessary convergence properties while allowing considerable freedom in a practical implementation. The extension will also provide the opportunity to briefly discuss the features that the author currently feels are at the core of filter methods.
The opportunities offered by filter methods for derivative free optimization will also be discussed, based on the framework discussed by Conn, Scheinberg and Toint that uses a trust region with conditional models and multivariate interpolation. With some luck, preliminary numerical experience might also be available.
Joint with: N. Gould and B. Colson
An Infeasible Interior-Point Trust-Region Method for Constrained Minimization
Paul Tseng
We consider an interior-point trust-region method that can start at infeasible points and achieves subsequence convergence to a 1st-order stationary point, assuming only that a certain set containing the iterates is bounded and that the origin is not in the convex hull of the nearly active constraint gradients anywhere on this set. If the cluster point satisfies an additional assumption (which holds when the constraints are linear or when the cluster point satisfies strict complementarity and a local Lipschitzian error bound holds), then it is a 2nd-order stationary point. Preliminary numerical experience and an extension to semidefinite nonlinear programming will be discussed.
Interior Projected Newton Methods for Mixed Nonlinear Complementarity Problems in Function Spaces with Applications
Michael Ulbrich
Many applications in mathematical modeling and control lead to problems that are posed in infinite-dimensional function spaces and contain pointwise complementarity conditions. In particular, this applies to obstacle, free boundary value, and optimal control problems. The availability of an infinite-dimensional convergence theory is of significant importance for the robustness of solution algorithms with respect to the mesh size of the numerical discretization.
In this talk we consider infinite-dimensional mixed nonlinear complementarity problems (MCP) in the function space Lp. Using NCP- or MCP-functions, these problems can be converted into equivalent nonsmooth operator equations. We present a new semismoothness-like calculus in function space that can be used to develop a locally q-superlinearly convergent Newton method for these infinite-dimensional nonsmooth equations. Since the problem function of the MCP may be undefined if the bound constraints are violated, we restrict the equation reformulation to the feasible set of the MCP. The Newton iteration is augmented by an interior-point modification of the projection onto the feasible box. Using our infinite-dimensional semismoothness concept, we establish the q-superlinear local convergence of the resulting method to a regular solution without assuming strict complementarity. We also present a sufficient condition for regularity that is well-suited for many applications. Our results can be viewed as an infinite-dimensional generalization of the successful semismooth Newton methods for finite-dimensional MCPs.
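A standard example of an NCP-function used in such reformulations is the Fischer-Burmeister function; the finite-dimensional sketch below shows the induced nonsmooth equation for 0 <= u perp F(u) >= 0 (the talk's setting adds the function-space calculus, the restriction to the feasible set, and the interior-point projection described above).

    import numpy as np

    def fischer_burmeister(a, b):
        """phi(a, b) = sqrt(a^2 + b^2) - a - b, which vanishes exactly when
        a >= 0, b >= 0 and a*b = 0.  Applied componentwise, it turns the
        complementarity problem 0 <= u perp F(u) >= 0 into the semismooth
        equation Phi(u) = 0, to which a nonsmooth Newton method applies."""
        return np.sqrt(a**2 + b**2) - a - b

    def ncp_residual(u, F):
        return fischer_burmeister(u, F(u))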
The algorithm and its convergence theory are applied to the optimal control of partial differential equations and to free boundary value problems. Numerical results demonstrate the efficient and mesh-independent behavior of our algorithm and confirm our theoretical findings.
Nonmonotone Trust Region Methods for Nonlinear Equality Constrained Optimization without a Penalty Function
Stefan Ulbrich
We propose and analyze a class of nonmonotone trust region methods for nonlinear equality constrained optimization problems. The algorithmic framework yields global convergence without using a merit function such as the augmented Lagrangian, and allows nonmonotonicity independently for both the constraint violation and the objective function value. Similar to the augmented Lagrangian-based algorithm of Dennis, El-Alem, and Maciel, each step is composed of a quasi-normal and a tangential step. Both steps are required to satisfy a fraction of Cauchy decrease condition for their respective trust region subproblems. Our mechanism for accepting steps combines nonmonotone decrease conditions on the constraint violation and/or the objective function, which leads to flexibility and acceptance behaviour comparable to that of filter-based methods.
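A minimal sketch of a nonmonotone acceptance test of the kind alluded to: the trial value is compared with the worst of a short memory of reference values, and the same test is applied independently to the constraint violation and the objective. The paper's actual conditions are more elaborate; names and constants here are illustrative.

    from collections import deque

    def nonmonotone_accept(value, hist, margin=1e-4):
        """Accept if `value` improves on the worst stored reference value
        by a margin; on acceptance, the memory is updated."""
        if len(hist) == 0 or value <= max(hist) - margin:
            hist.append(value)
            return True
        return False

    f_hist = deque(maxlen=5)    # memory of recent objective values
    h_hist = deque(maxlen=5)    # memory of recent constraint violations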
Preliminary numerical results for the CUTE test set confirm that our approach is very promising.
The proposed class of algorithms can be extended in a natural way to multilevel trust region algorithms. Here, the constraints are grouped into blocks, and for each block a normal step is computed that acts tangentially to the previous blocks. The generalization of our analysis to this class of algorithms is in progress, and we plan to present convergence results for this extended framework as well. The extension of our concept to general NLPs is under development.
Joint with: Michael Ulbrich
A Quasi-Newton Interior Point Method for Large-Scale Nonlinear Programming with Modifications to Handle Constraint Inconsistencies
Andreas Wächter
A primal-dual interior point line search method for large-scale nonlinear nonconvex programming is presented. The problem instances are assumed to be formulated with nonlinear equality constraints and additional bound constraints. We are especially interested in solving Chemical Engineering applications such as Dynamic Optimization or Multi-Period Design problems which usually have very few degrees of freedom compared to the total number of variables and often do not provide exact second order derivative information. As a consequence, the proposed algorithm follows a non-orthogonal range and null space decomposition of the search direction, which allows an efficient solution of the arising linear systems, and uses reduced space Quasi-Newton approximations. In order to achieve fast local convergence properties for decreasing barrier parameters, only the reduced Hessian of the Lagrangian of the original problem is approximated (by BFGS or SR1), whereas the reduced Hessian of the barrier term or its primal-dual equivalent is computed exactly. We present practical results for standard problem collections and for some dynamic optimization applications (up to 55000 variables).
However, we discovered that our unmodified algorithm for solving the barrier problem sometimes generates iterates that converge to an infeasible point at the boundary of the box constraints, although this point is neither a solution of the barrier problem nor a stationary point of the constraint violation. This behavior is demonstrated on a simple three-dimensional example. It is shown that other Newton-type interior point methods suffer from similar behavior, and we propose modifications to overcome this problem.
Joint with: Lorenz T. Biegler
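The range and null space decomposition splits the step d for the linearized constraints A d = -c into d = Y p_Y + Z p_Z with A Z = 0. The sketch below uses orthogonal bases from an SVD for clarity, whereas the algorithm above uses a cheaper non-orthogonal (coordinate-type) decomposition; B_red stands for the quasi-Newton reduced Hessian approximation, and cross terms with the normal step are omitted.

    import numpy as np

    def decomposed_step(A, c, g, B_red):
        """d = Y p_Y + Z p_Z for A d = -c (A assumed to have full row rank):
        p_Y restores linearized feasibility, p_Z is a reduced-space
        quasi-Newton step on the gradient g."""
        m = A.shape[0]
        _, _, Vt = np.linalg.svd(A)
        Y, Z = Vt[:m].T, Vt[m:].T               # range(A') and null(A) bases
        p_Y = np.linalg.solve(A @ Y, -c)        # normal (range-space) component
        p_Z = -np.linalg.solve(B_red, Z.T @ g)  # tangential (null-space) component
        return Y @ p_Y + Z @ p_Z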
Properties of the log-Barrier Function for Degenerate Nonlinear Programs
Stephen J. Wright
Consider a solution of a nonlinear program at which the second-order sufficient conditions and the Mangasarian-Fromovitz constraint qualification are satisfied, but the active constraint gradients are not necessarily linearly independent. We describe some properties of the minimizers of the log-barrier function near this point (for different values of the barrier parameter), when strict complementarity both is and is not satisfied.
Joint with: Dominique Orban
Convergence Properties of an Interior Point Method with a Primal-Dual Merit Function for Nonlinear Optimization
Hiroshi Yabe
In this talk, we are concerned with a primal-dual interior point method for solving nonlinearly constrained optimization problems, in which Newton-like methods are applied to the shifted barrier KKT conditions. We propose a differentiable primal-dual merit function based on the l2 barrier penalty function within the framework of line search methods, and show the global convergence property of our method. Furthermore, by carefully controlling parameters in the algorithm, we show its superlinear convergence.
Joint with: Hiroshi Yamashita
Solving Semidefinite Programs via Nonlinear Programming: Transformations and Derivatives
Yin Zhang
We introduce transformations that convert a large class of linear and/or nonlinear semidefinite programming (SDP) problems into nonlinear optimization problems over some open ``orthants''. For example, in the case of the SDP relaxation of a MAXCUT problem, the number of variables of the resulting nonlinear optimization problem is equal to the number of vertices in the underlying graph. The class of transformable problems includes most, if not all, instances of SDP relaxations of combinatorial optimization problems with binary variables, as well as other important SDP problems. We also derive formulas for the first-order derivatives of the objective function of the resulting nonlinear optimization problem, hence enabling the effective application of existing gradient-based optimization techniques to the solution of large-scale SDP problems.
Joint with: Samuel Burer and Renato D.C. Monteiro
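To illustrate the flavor of the derivative formulas involved: under a generic low-rank parametrization X = V V^T of the SDP variable, the objective F(V) = <C, V V^T> (with C symmetric) has gradient 2 C V, so gradient methods need only matrix products with C. The sketch below shows this general mechanism only; the transformations of the talk differ (for MAXCUT they keep exactly one variable per vertex, over an open orthant).

    import numpy as np

    def low_rank_sdp_objective_and_grad(C, V):
        """For F(V) = <C, V V^T> with symmetric C:
        value = trace(C V V^T), gradient = 2 C V.
        The matrix X = V V^T is never formed explicitly."""
        CV = C @ V
        value = float(np.sum(V * CV))   # <C, V V^T> = sum_ij V_ij (C V)_ij
        grad = 2.0 * CV
        return value, grad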