# Local Solvers
Local solvers are algorithms that find a local minimum of a function. In `globalsearch-rs`, we use the `cobyla` and `argmin` crates to provide local solvers. The local solvers are used to refine the points in the reference set.
## Available Local Solvers
The following local solvers are available in `globalsearch-rs`:
- COBYLA (default)
- L-BFGS
- Nelder-Mead
- Steepest Descent
- Trust Region
- Newton-CG
## Requirements Comparison
Below is a comparison of the requirements for each local solver:
| Solver | Gradient | Hessian | Line Search | Notes |
|---|---|---|---|---|
| COBYLA | No | No | No | Derivative-free; supports constraints |
| L-BFGS | Yes | No | Yes | Optional L1 regularization via OWL-QN |
| Nelder-Mead | No | No | No | Derivative-free |
| Steepest Descent | Yes | No | Yes | Uses first-order derivatives only |
| Trust Region | Yes | Yes | No | Uses trust-region radius updates |
| Newton-CG | Yes | Yes | Yes | Conjugate-gradient Newton steps |
## Detailed Overview of Each Solver
### COBYLA
The COBYLA (Constrained Optimization BY Linear Approximation) algorithm is a derivative-free method for finding a local minimum of a function. It is particularly useful for problems with nonlinear constraints. The algorithm works by iteratively constructing linear approximations of the objective function and constraints from function values at simplex vertices, and minimizing them within a shrinking trust region.
This algorithm requires that the `Problem` struct implements the `objective` method. If the problem has constraints, the `Problem` struct must also implement the `constraints` method.
COBYLA is the only local solver in `globalsearch-rs` that supports constraints and bounds.
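Because COBYLA only ever evaluates function values, a problem needs nothing beyond its objective and, optionally, its constraint functions. As a crate-independent sketch (the exact `Problem` trait signature is defined by `globalsearch-rs` itself), here is the shape of a constrained problem COBYLA targets, using the classic COBYLA convention that a constraint is satisfied when its value is non-negative:

```rust
/// Objective: a smooth bowl whose unconstrained minimum (1, 2) lies outside
/// the feasible region, so the constraint is active at the solution.
fn objective(x: &[f64]) -> f64 {
    (x[0] - 1.0).powi(2) + (x[1] - 2.0).powi(2)
}

/// Constraint in the classic COBYLA convention (feasible when >= 0):
/// the point must stay inside the unit disk.
fn constraint(x: &[f64]) -> f64 {
    1.0 - (x[0] * x[0] + x[1] * x[1])
}

fn main() {
    // The constrained minimizer: the projection of (1, 2) onto the unit disk.
    let x = [1.0 / 5f64.sqrt(), 2.0 / 5f64.sqrt()];
    println!("f = {:.4}, g = {:.4}", objective(&x), constraint(&x));
}
```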
#### Parameters for COBYLA
| Parameter | Type | Description | Default |
|---|---|---|---|
| `max_iter` | `usize` | Maximum number of iterations | 300 |
| `initial_step_size` | `f64` | Initial step size | 1.0 |
| `ftol_rel` | `f64` | Relative tolerance on the objective function | 1e-6 |
| `ftol_abs` | `f64` | Absolute tolerance on the objective function | 1e-8 |
| `xtol_rel` | `f64` | Relative tolerance on the solution | 0 (disabled) |
| `xtol_abs` | `f64` | Absolute tolerance on the solution | 0 (disabled) |
### L-BFGS
The L-BFGS (Limited-memory Broyden-Fletcher-Goldfarb-Shanno) algorithm is a quasi-Newton method for finding a local minimum of a function. It is a popular choice for large-scale problems because it approximates the inverse Hessian from a short history of position and gradient updates instead of storing it explicitly. It also provides L1 regularization via the Orthant-Wise Limited-memory Quasi-Newton (OWL-QN) method.
This algorithm requires that the `Problem` struct implements the `objective` and `gradient` methods. It also requires a `linesearch` method to be implemented in the `OQNLPParams` struct, which is used to find the step size for the L-BFGS algorithm.
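The core of L-BFGS is the two-loop recursion, which applies an approximate inverse Hessian to the gradient using only the last `history_size` update pairs. The actual implementation used here comes from `argmin`; the following is a minimal, crate-independent sketch of the idea:

```rust
/// Two-loop recursion: given the last m pairs s_k = x_{k+1} - x_k and
/// y_k = g_{k+1} - g_k (oldest first), compute an approximation of
/// H^{-1} * grad without ever forming the Hessian.
fn lbfgs_direction(grad: &[f64], s_hist: &[Vec<f64>], y_hist: &[Vec<f64>]) -> Vec<f64> {
    let dot = |a: &[f64], b: &[f64]| a.iter().zip(b).map(|(x, y)| x * y).sum::<f64>();
    let m = s_hist.len();
    let mut q = grad.to_vec();
    let mut alpha = vec![0.0; m];

    // First loop: walk the history from newest to oldest.
    for i in (0..m).rev() {
        let rho = 1.0 / dot(&y_hist[i], &s_hist[i]);
        alpha[i] = rho * dot(&s_hist[i], &q);
        for (qj, yj) in q.iter_mut().zip(&y_hist[i]) {
            *qj -= alpha[i] * yj;
        }
    }

    // Initial inverse-Hessian guess gamma * I, scaled from the newest pair.
    let gamma = if m > 0 {
        dot(&s_hist[m - 1], &y_hist[m - 1]) / dot(&y_hist[m - 1], &y_hist[m - 1])
    } else {
        1.0
    };
    for qj in q.iter_mut() {
        *qj *= gamma;
    }

    // Second loop: oldest to newest.
    for i in 0..m {
        let rho = 1.0 / dot(&y_hist[i], &s_hist[i]);
        let beta = rho * dot(&y_hist[i], &q);
        for (qj, sj) in q.iter_mut().zip(&s_hist[i]) {
            *qj += (alpha[i] - beta) * sj;
        }
    }
    q // the quasi-Newton step direction is -q
}
```

The `history_size` parameter below is exactly the m of this recursion: larger values give a better curvature model at the cost of memory and per-iteration work.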
#### Parameters for L-BFGS
| Parameter | Type | Description | Default |
|---|---|---|---|
| `max_iter` | `usize` | Maximum number of iterations | 300 |
| `tolerance_grad` | `f64` | Gradient tolerance | `sqrt(EPSILON)` |
| `tolerance_cost` | `f64` | Cost tolerance | `EPSILON` |
| `history_size` | `usize` | Number of past position and gradient updates used to approximate the inverse Hessian | 10 |
| `l1_coefficient` | `Option<f64>` | L1 regularization coefficient (OWL-QN) to penalize high-value coefficients | `None` |
| `line_search_params` | `LineSearchParams` | Line search parameters | `LineSearchParams::default()` |
### Nelder-Mead
The Nelder-Mead algorithm is a simplex-based, derivative-free optimization method for finding a local minimum of a function. Because it uses only function values, it is suitable for non-smooth objectives, and it is simple to implement and widely used in practice.
This algorithm requires that the `Problem` struct implements the `objective` method.
#### Parameters for Nelder-Mead
| Parameter | Type | Description | Default |
|---|---|---|---|
| `simplex_delta` | `f64` | Step size for generating the simplex from a given point | 0.1 |
| `sd_tolerance` | `f64` | Sample standard deviation tolerance | `EPSILON` |
| `max_iter` | `usize` | Maximum number of iterations | 300 |
| `alpha` | `f64` | Reflection coefficient | 1.0 |
| `gamma` | `f64` | Expansion coefficient | 2.0 |
| `rho` | `f64` | Contraction coefficient | 0.5 |
| `sigma` | `f64` | Shrinkage coefficient | 0.5 |
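To make the roles of `alpha`, `gamma`, `rho`, and `sigma` concrete, here is a compact, crate-independent sketch of one Nelder-Mead iteration (the real implementation comes from `argmin`, and textbook variants differ in small details such as outside contraction):

```rust
/// One Nelder-Mead iteration on a simplex of (point, value) pairs.
fn nelder_mead_step(
    simplex: &mut Vec<(Vec<f64>, f64)>,
    f: &dyn Fn(&[f64]) -> f64,
    alpha: f64, // reflection
    gamma: f64, // expansion
    rho: f64,   // contraction
    sigma: f64, // shrinkage
) {
    // Order vertices from best (lowest value) to worst.
    simplex.sort_by(|a, b| a.1.partial_cmp(&b.1).unwrap());
    let n = simplex.len() - 1;

    // Centroid of all vertices except the worst.
    let mut c = vec![0.0; simplex[0].0.len()];
    for (x, _) in &simplex[..n] {
        for (ci, xi) in c.iter_mut().zip(x) {
            *ci += xi / n as f64;
        }
    }
    let worst = simplex[n].0.clone();
    // Point at c + t * (c - worst): t = alpha reflects, t = alpha * gamma expands.
    let at = |t: f64| -> Vec<f64> {
        c.iter().zip(&worst).map(|(ci, wi)| ci + t * (ci - wi)).collect()
    };

    let xr = at(alpha);
    let fr = f(&xr);
    if fr < simplex[0].1 {
        // Reflection beat the best vertex: try expanding further out.
        let xe = at(alpha * gamma);
        let fe = f(&xe);
        simplex[n] = if fe < fr { (xe, fe) } else { (xr, fr) };
    } else if fr < simplex[n - 1].1 {
        // Better than the second worst: accept the reflection.
        simplex[n] = (xr, fr);
    } else {
        // Contract the worst vertex toward the centroid.
        let xc = at(-rho);
        let fc = f(&xc);
        if fc < simplex[n].1 {
            simplex[n] = (xc, fc);
        } else {
            // Shrink every vertex toward the best one.
            let best = simplex[0].0.clone();
            for (x, v) in simplex.iter_mut().skip(1) {
                for (xi, bi) in x.iter_mut().zip(&best) {
                    *xi = bi + sigma * (*xi - bi);
                }
                *v = f(x);
            }
        }
    }
}
```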
### Steepest Descent
The Steepest Descent algorithm is a first-order method for finding a local minimum of a function. At each iteration it moves along the negative gradient, the direction in which the function decreases most steeply, with the step size chosen by a line search. It is simple to implement and suitable for smooth functions, although convergence can be slow on badly conditioned problems.
This algorithm requires that the `Problem` struct implements the `objective` and `gradient` methods. It also requires a `linesearch` method to be implemented in the `OQNLPParams` struct, which is used to find the step size for the Steepest Descent algorithm.
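As a crate-independent illustration of how the line search sets the step size, here is a minimal steepest descent loop with a backtracking (Armijo) line search; the actual solver uses the line searches provided by `argmin`:

```rust
/// Steepest descent with a backtracking (Armijo) line search.
fn steepest_descent(
    f: &dyn Fn(&[f64]) -> f64,
    grad: &dyn Fn(&[f64]) -> Vec<f64>,
    mut x: Vec<f64>,
    max_iter: usize,
) -> Vec<f64> {
    for _ in 0..max_iter {
        let g = grad(&x);
        let g_norm2: f64 = g.iter().map(|gi| gi * gi).sum();
        if g_norm2.sqrt() < 1e-8 {
            break; // gradient is (numerically) zero: stationary point
        }
        let (mut step, c1) = (1.0, 1e-4);
        let fx = f(&x);
        loop {
            let trial: Vec<f64> =
                x.iter().zip(&g).map(|(xi, gi)| xi - step * gi).collect();
            // Armijo sufficient-decrease condition along d = -g.
            if f(&trial) <= fx - c1 * step * g_norm2 {
                x = trial;
                break;
            }
            step *= 0.5;
            if step < 1e-16 {
                return x; // line search failed: give up
            }
        }
    }
    x
}

fn main() {
    // Minimize a simple quadratic bowl; the minimizer is (1, 2).
    let f = |x: &[f64]| (x[0] - 1.0).powi(2) + (x[1] - 2.0).powi(2);
    let g = |x: &[f64]| vec![2.0 * (x[0] - 1.0), 2.0 * (x[1] - 2.0)];
    let x = steepest_descent(&f, &g, vec![0.0, 0.0], 300);
    println!("x = {:?}", x); // approaches [1.0, 2.0]
}
```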
#### Parameters for Steepest Descent
| Parameter | Type | Description | Default |
|---|---|---|---|
| `max_iter` | `usize` | Maximum number of iterations | 300 |
| `line_search_params` | `LineSearchParams` | Line search parameters | `LineSearchParams::default()` |
### Trust Region
The Trust Region method finds a local minimum by approximating the objective function with a model (typically quadratic) inside a region around the current iterate, minimizing the model there, and growing or shrinking the region depending on how well the model's predicted reduction matches the actual one. It is particularly effective for non-convex functions and is widely used in practice.
This algorithm requires that the `Problem` struct implements the `objective`, `gradient`, and `hessian` methods.
#### Parameters for Trust Region
| Parameter | Type | Description | Default |
|---|---|---|---|
| `trust_region_radius_method` | `TrustRegionRadiusMethod` | Method used to update the trust region radius | `Cauchy` |
| `radius` | `f64` | Initial trust region radius | 1.0 |
| `max_radius` | `f64` | Maximum trust region radius | 100.0 |
| `eta` | `f64` | Acceptance threshold: a step is taken only if the ratio of actual to predicted reduction exceeds `eta` | 0.125 |
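The following crate-independent sketch shows one trust-region iteration using the Cauchy point (the default radius method above): minimize the quadratic model along the negative gradient inside the region, then grow or shrink `radius` and accept the step based on `eta`. The real implementation comes from `argmin`:

```rust
/// One trust-region iteration on a 2-D problem using the Cauchy point.
fn trust_region_step(
    f: &dyn Fn(&[f64; 2]) -> f64,
    g: [f64; 2],      // gradient at x
    h: [[f64; 2]; 2], // Hessian at x
    x: [f64; 2],
    radius: &mut f64,
    max_radius: f64,
    eta: f64,
) -> [f64; 2] {
    let g_norm = (g[0] * g[0] + g[1] * g[1]).sqrt();
    if g_norm < 1e-12 {
        return x; // already (numerically) stationary
    }
    // Curvature of the quadratic model along -g: g' H g.
    let ghg = g[0] * (h[0][0] * g[0] + h[0][1] * g[1])
        + g[1] * (h[1][0] * g[0] + h[1][1] * g[1]);
    // Cauchy point step length as a fraction of the radius.
    let tau = if ghg <= 0.0 {
        1.0
    } else {
        (g_norm.powi(3) / (*radius * ghg)).min(1.0)
    };
    let p = [
        -tau * *radius / g_norm * g[0],
        -tau * *radius / g_norm * g[1],
    ];

    // Compare the actual reduction with the one predicted by the model.
    let x_new = [x[0] + p[0], x[1] + p[1]];
    let predicted = -(g[0] * p[0] + g[1] * p[1])
        - 0.5 * (p[0] * (h[0][0] * p[0] + h[0][1] * p[1])
            + p[1] * (h[1][0] * p[0] + h[1][1] * p[1]));
    let rho = (f(&x) - f(&x_new)) / predicted;

    // Shrink the radius on poor agreement; grow it on a full step that agrees well.
    if rho < 0.25 {
        *radius *= 0.25;
    } else if rho > 0.75 && tau == 1.0 {
        *radius = (2.0 * *radius).min(max_radius);
    }
    // Accept the step only if the reduction ratio exceeds eta.
    if rho > eta { x_new } else { x }
}
```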
### Newton-CG
The Newton-CG (Newton-Conjugate Gradient) algorithm is a second-order method for finding a local minimum of a function. It combines Newton's method with the conjugate gradient method: instead of factorizing the Hessian, each Newton step is computed approximately by solving the Newton system with CG, which makes the method well suited to large-scale nonlinear problems.
This algorithm requires that the `Problem` struct implements the `objective`, `gradient`, and `hessian` methods. It also requires a `linesearch` method to be implemented in the `OQNLPParams` struct, which is used to find the step size for the Newton-CG algorithm.
#### Parameters for Newton-CG
| Parameter | Type | Description | Default |
|---|---|---|---|
| `max_iter` | `usize` | Maximum number of iterations | 300 |
| `curvature_threshold` | `f64` | Curvature value at or below which the inner CG iteration stops (guards against directions of non-positive curvature) | 0.0 |
| `tolerance` | `f64` | Tolerance for the objective function | `EPSILON` |
| `line_search_params` | `LineSearchParams` | Line search parameters | `LineSearchParams::default()` |
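The sketch below shows the inner conjugate-gradient solve that gives Newton-CG its name: it approximately solves the Newton system `H p = -g` using only Hessian-vector products, and stops early when it detects curvature at or below `curvature_threshold`, falling back to the steepest descent direction if no progress was made. This is a crate-independent illustration; the real implementation comes from `argmin`:

```rust
/// Approximately solve H p = -g with conjugate gradients.
fn newton_cg_direction(
    hess_vec: &dyn Fn(&[f64]) -> Vec<f64>, // computes H * v
    g: &[f64],
    curvature_threshold: f64,
    tol: f64,
) -> Vec<f64> {
    let dot = |a: &[f64], b: &[f64]| a.iter().zip(b).map(|(x, y)| x * y).sum::<f64>();
    let n = g.len();
    let mut p = vec![0.0; n];                               // current CG iterate
    let mut r: Vec<f64> = g.iter().map(|gi| -gi).collect(); // residual -g - H p
    let mut d = r.clone();                                  // CG search direction
    let mut rr = dot(&r, &r);

    for _ in 0..n {
        let hd = hess_vec(&d);
        let curvature = dot(&d, &hd);
        if curvature <= curvature_threshold {
            // Non-positive curvature: the quadratic model is unbounded along
            // d, so return the steepest descent direction or the progress so far.
            return if p.iter().all(|&pi| pi == 0.0) { d } else { p };
        }
        let alpha = rr / curvature;
        for i in 0..n {
            p[i] += alpha * d[i];
            r[i] -= alpha * hd[i];
        }
        let rr_new = dot(&r, &r);
        if rr_new.sqrt() < tol {
            break; // Newton system solved to tolerance
        }
        let beta = rr_new / rr;
        for i in 0..n {
            d[i] = r[i] + beta * d[i];
        }
        rr = rr_new;
    }
    p
}

fn main() {
    // Hessian-vector product for H = diag(2, 4); solving H p = -g with
    // g = (2, 4) should give p = (-1, -1).
    let hess_vec = |v: &[f64]| vec![2.0 * v[0], 4.0 * v[1]];
    let g = [2.0, 4.0];
    let p = newton_cg_direction(&hess_vec, &g, 0.0, 1e-10);
    println!("p = {:?}", p); // approximately [-1.0, -1.0]
}
```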