Overview
globalsearch-rs is a Rust implementation of the OQNLP (OptQuest/NLP) algorithm, a global
optimization algorithm that combines scatter search with local nonlinear programming (NLP) solvers.
Briefly, the algorithm operates in two distinct stages:
- Stage 1 (Scatter Search) generates a diverse reference set of candidate points through strategic sampling and combination.
- Stage 2 (Iterative Improvement) refines promising candidates using local optimization with adaptive filtering mechanisms.
How the Algorithm Works
The OQNLP algorithm is structured as a two-stage multistart optimization approach:
Stage 1: Scatter Search (Exploration)
Stage 1 creates a diverse reference set of population_size candidate points through six phases:
- Initialization: The algorithm begins by adding three anchor points to the reference set: the lower bound corner, the upper bound corner, and the centroid (midpoint between bounds). These strategic locations ensure coverage of the search space boundaries.
- Diversification: Using stratified random sampling, `population_size` candidate points are generated uniformly within the variable bounds. The algorithm iteratively selects candidates that maximize the minimum distance to existing reference set points, ensuring spatial diversity.
- Evaluation: All points in the reference set are evaluated using the objective function and sorted in ascending order by their objective values.
- Intensification: The top `k` best points (where `k = max(2, min(⌊sqrt(population_size)⌋, population_size))`) are selected for combination. All pairs of these elite points generate trial points through:
  - Linear combinations: four interpolation/extrapolation points using weights (0.25, 0.5, 0.75, 1.25)
  - Random perturbations: two points near the pair's midpoint with small random deviations (±10% of the variable ranges)
- Reference Set Update: Trial points undergo diversity and quality filtering:
  - Points too close to the top 5 reference points (distance < 10% of the average distance) are rejected
  - Only trials better than the worst reference point are evaluated
  - The reference set is updated by selecting the best points from the merged pool of original and accepted trial points
- Output: The algorithm returns the sorted reference set with pre-computed objective values and the best solution found.
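The diversification step above can be sketched in a few lines of Rust. This is an illustration of the maximin selection criterion, not the crate's internal implementation; the function names `euclidean` and `diversify` are made up for this example.

```rust
fn euclidean(a: &[f64], b: &[f64]) -> f64 {
    a.iter().zip(b).map(|(x, y)| (x - y).powi(2)).sum::<f64>().sqrt()
}

/// Greedily pick `n` points from `candidates` that maximize the minimum
/// distance to the points already in `refset` (the maximin criterion).
fn diversify(
    mut refset: Vec<Vec<f64>>,
    mut candidates: Vec<Vec<f64>>,
    n: usize,
) -> Vec<Vec<f64>> {
    for _ in 0..n {
        if candidates.is_empty() {
            break;
        }
        // Index of the candidate whose nearest reference point is farthest away
        let (best_idx, _) = candidates
            .iter()
            .enumerate()
            .map(|(i, c)| {
                let min_d = refset
                    .iter()
                    .map(|r| euclidean(c, r))
                    .fold(f64::INFINITY, f64::min);
                (i, min_d)
            })
            .max_by(|a, b| a.1.partial_cmp(&b.1).unwrap())
            .unwrap();
        refset.push(candidates.swap_remove(best_idx));
    }
    refset
}

fn main() {
    // Anchor points for bounds [-1, 1]^2: lower corner, upper corner, centroid
    let refset = vec![vec![-1.0, -1.0], vec![1.0, 1.0], vec![0.0, 0.0]];
    let candidates = vec![vec![0.9, 0.9], vec![-0.9, 0.9], vec![0.1, 0.0]];
    let out = diversify(refset, candidates, 1);
    // (-0.9, 0.9) is far from all three anchors, so it is selected first
    println!("{:?}", out.last().unwrap());
}
```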
Stage 2: Iterative Improvement (Local Optimization)
Stage 2 processes the reference set through `iterations` iterations, applying two adaptive filters before local optimization:
Merit Filter: Only trial points with objective values below or equal to the current merit threshold are accepted. The threshold is initialized to the best objective from Stage 1 and adaptively relaxed when progress stagnates.
Distance Filter: Trial points are rejected if they are too close (within distance_factor) to any previously accepted solution, preventing redundant local searches and maintaining solution diversity.
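The two filters can be sketched as small predicates. This is a simplified illustration of the rules described above, not the crate's API; `merit_filter` and `distance_filter` are hypothetical names, and the sketch treats `distance_factor` directly as a minimum Euclidean distance.

```rust
/// Merit filter: accept only trial points at or below the current threshold.
fn merit_filter(obj: f64, threshold: f64) -> bool {
    obj <= threshold
}

/// Distance filter: reject trial points that lie within `distance_factor`
/// of any previously accepted solution.
fn distance_filter(trial: &[f64], accepted: &[Vec<f64>], distance_factor: f64) -> bool {
    accepted.iter().all(|p| {
        let d: f64 = trial
            .iter()
            .zip(p)
            .map(|(a, b)| (a - b).powi(2))
            .sum::<f64>()
            .sqrt();
        d >= distance_factor
    })
}

fn main() {
    let accepted = vec![vec![0.0, 0.0]];
    // Too close to an accepted solution: rejected by the distance filter
    assert!(!distance_filter(&[0.1, 0.1], &accepted, 0.75));
    // Far enough away and below the merit threshold: passes both filters
    assert!(distance_filter(&[2.0, 2.0], &accepted, 0.75));
    assert!(merit_filter(-0.5, 0.0));
    println!("filters ok");
}
```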
Processing Logic:
- When both filters pass, the trial point undergoes local optimization using an NLP solver.
- When the filters reject a point, an unchanged-cycle counter is incremented.
- After `wait_cycle` consecutive rejections, the merit threshold is relaxed: `T_merit ← T_merit + threshold_factor * (1 + |T_merit|)`
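The relaxation rule written out as a function (a direct transcription of the update formula above; the function name is ours):

```rust
/// Relax the merit threshold: T_merit <- T_merit + threshold_factor * (1 + |T_merit|)
fn relax_threshold(t_merit: f64, threshold_factor: f64) -> f64 {
    t_merit + threshold_factor * (1.0 + t_merit.abs())
}

fn main() {
    // With threshold_factor = 0.5 and a current threshold of -1.0:
    // -1.0 + 0.5 * (1 + 1.0) = 0.0
    let relaxed = relax_threshold(-1.0, 0.5);
    assert!((relaxed - 0.0).abs() < 1e-12);
    println!("{relaxed}");
}
```

Note that the `(1 + |T_merit|)` term guarantees the threshold keeps increasing even when it is at or near zero.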
Solution Set Management: Local optimization results are processed with strict criteria:
- New global best: If a solution is better than the current best (by relative tolerance 1e-6 and absolute tolerance 1e-8), the entire solution set is replaced
- Similar objective: Solutions within tolerance of the best are added as distinct minima if sufficiently far from existing solutions
- Worse solutions: Rejected, but added to the distance filter to maintain search space coverage
Parallel Processing: When the `rayon` feature is enabled and `batch_size` > 1, Stage 2 iterations are processed in batches with parallel filter evaluation and local search execution, while maintaining sequential consistency for state updates.
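The batch pattern can be illustrated with the standard library alone (the crate itself uses rayon; this dependency-free sketch uses `std::thread::scope`, and the sphere objective is a stand-in): each trial point in a batch is evaluated in parallel, then results are joined in order so the subsequent state updates remain deterministic.

```rust
use std::thread;

// Stand-in objective (sphere function) for illustration
fn objective(x: &[f64]) -> f64 {
    x.iter().map(|v| v * v).sum()
}

/// Evaluate a batch of trial points in parallel, returning objective values
/// in the same order as the input batch.
fn evaluate_batch(batch: &[Vec<f64>]) -> Vec<f64> {
    thread::scope(|s| {
        let handles: Vec<_> = batch
            .iter()
            .map(|p| s.spawn(move || objective(p)))
            .collect();
        // Joining in input order keeps the sequential merge deterministic
        handles.into_iter().map(|h| h.join().unwrap()).collect()
    })
}

fn main() {
    let batch = vec![vec![1.0, 2.0], vec![0.0, 0.0], vec![3.0, 0.0]];
    let objs = evaluate_batch(&batch);
    assert_eq!(objs, vec![5.0, 0.0, 9.0]);
    println!("{objs:?}");
}
```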
Construction of Solution Set
After running for the selected number of iterations, the optimal solutions found by the local refinements are collected and combined into a final solution set. This set represents the best-found solutions across all multistart attempts, providing a diverse range of high-quality candidates for further analysis or deployment.
Equal solutions are defined as solutions whose objective function values agree within a relative tolerance of 1e-6 and an absolute tolerance of 1e-8.
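One common way to combine the two tolerances is shown below; the exact combination used internally may differ, so treat this as a sketch of the stated tolerances rather than the crate's code.

```rust
/// Check whether two objective values are "equal" within a relative
/// tolerance of 1e-6 and an absolute tolerance of 1e-8 (assumed combination).
fn objectives_equal(a: f64, b: f64) -> bool {
    let rel_tol = 1e-6;
    let abs_tol = 1e-8;
    (a - b).abs() <= abs_tol + rel_tol * a.abs().max(b.abs())
}

fn main() {
    // A difference of 1e-9 is well within both tolerances
    assert!(objectives_equal(-1.0316, -1.0316 + 1e-9));
    // A difference of ~0.03 is not
    assert!(!objectives_equal(-1.0316, -1.0));
    println!("tolerance check ok");
}
```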
Algorithm Pseudocode
The following pseudocode summarizes the two main stages of the OQNLP algorithm.
Stage 1: Scatter Search
Stage 2: Iterative Improvement
The subroutine `ProcessSolution` manages the solution set by adding new solutions based on their objective values and distances from existing solutions. It ensures that only distinct and high-quality solutions are retained in the final set.
It can be summarized as follows:
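A sketch of that logic in Rust follows. This is an illustration of the three rules described above (new global best, similar objective, worse solution), not the crate's actual implementation; the `Solution` type and function names are ours, and the tolerance combination is assumed.

```rust
#[derive(Clone, Debug)]
struct Solution {
    point: Vec<f64>,
    obj: f64,
}

/// Assumed combination of the stated tolerances (rel = 1e-6, abs = 1e-8)
fn objectives_equal(a: f64, b: f64) -> bool {
    (a - b).abs() <= 1e-8 + 1e-6 * a.abs().max(b.abs())
}

fn process_solution(set: &mut Vec<Solution>, new: Solution, distance_factor: f64) {
    let best = set[0].obj;
    if new.obj < best && !objectives_equal(new.obj, best) {
        // Strictly better than the current best: replace the whole set
        set.clear();
        set.push(new);
    } else if objectives_equal(new.obj, best) {
        // Same objective within tolerance: keep it only if it is far enough
        // from every solution already in the set
        let distinct = set.iter().all(|s| {
            let d: f64 = s
                .point
                .iter()
                .zip(&new.point)
                .map(|(a, b)| (a - b).powi(2))
                .sum::<f64>()
                .sqrt();
            d >= distance_factor
        });
        if distinct {
            set.push(new);
        }
    }
    // Worse solutions are dropped (but would still feed the distance filter)
}

fn main() {
    let mut set = vec![Solution { point: vec![0.0898, -0.7126], obj: -1.0316 }];
    // A second, distinct minimum with the same objective value is kept
    process_solution(&mut set, Solution { point: vec![-0.0898, 0.7126], obj: -1.0316 }, 0.75);
    assert_eq!(set.len(), 2);
    println!("{} solutions", set.len());
}
```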
Graphical Example
In the following example, we optimize the well-known Six-Hump Camel function. The function is continuous, differentiable and non-convex. It is defined as follows 1:

f(x₁, x₂) = (4 − 2.1x₁² + x₁⁴/3) x₁² + x₁x₂ + (−4 + 4x₂²) x₂²

The function has two global minima at approximately (x₁, x₂) = (0.0898, −0.7126) and (−0.0898, 0.7126), with a function value of f(x*) ≈ −1.0316.

The function is defined in the domain x₁ ∈ [−3, 3] and x₂ ∈ [−2, 2].
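The Six-Hump Camel function written out in Rust, evaluated at one of its two global minima as a sanity check:

```rust
/// Six-Hump Camel function:
/// f(x1, x2) = (4 - 2.1*x1^2 + x1^4/3)*x1^2 + x1*x2 + (-4 + 4*x2^2)*x2^2
fn six_hump_camel(x1: f64, x2: f64) -> f64 {
    (4.0 - 2.1 * x1.powi(2) + x1.powi(4) / 3.0) * x1.powi(2)
        + x1 * x2
        + (-4.0 + 4.0 * x2.powi(2)) * x2.powi(2)
}

fn main() {
    let f = six_hump_camel(0.0898, -0.7126);
    // Known global minimum value is approximately -1.0316
    assert!((f - (-1.0316)).abs() < 1e-3);
    println!("f(0.0898, -0.7126) = {f:.4}");
}
```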
In this case, we run the algorithm with a population size of 50 and 50 stage two iterations. The `distance_factor` is set to 0.75, the `wait_cycle` to 10, the `threshold_factor` to 0.5 and the seed to 50002099. The local solver used is COBYLA with the default configuration.
Creating the Reference Set
The reference set is created using scatter search techniques. The following plot shows a contour plot of the function with the initial population of 50 points (blue points). The orange diamond indicates the best point in the initial population, from which the local solver is started.
Stage Two Iterations
The algorithm then enters the stage two iterations, where it performs local optimization on the selected reference set points that have been identified as promising (i.e. those within the merit and distance thresholds). The following plots show the contour plot and surface of the function colored by the stage two iteration number.
Convergence Analysis
The following plot shows the convergence of the best objective function value found during the optimization process. In the plot on the left, the X-axis represents the Stage 2 iteration number, while the Y-axis represents the best objective function value found so far. In the plot on the right, the X-axis represents the total elapsed time in milliseconds (for both Stage 1 and Stage 2), while the Y-axis represents the best objective function value found so far.
The plot shows that the algorithm first finds a local minimum and then improves on it over time. The algorithm successfully finds both global minima of the Six-Hump Camel function.
The algorithm was run on a Windows 10 machine using a Ryzen 5 5500 processor with 16 GB of RAM @ 3200 MHz, using Rust 1.91.0 with the release profile.
The code to reproduce this example can be found in the plots folder of the GitHub repository.
References
Footnotes
1. Molga, M., & Smutnicki, C. Test functions for optimization needs (April 3, 2005), pp. 27-28. Retrieved January 2025, from https://robertmarks.org/Classes/ENGR5358/Papers/functions.pdf ↩