API Reference#
This section contains the complete API documentation for PyGlobalSearch.
Core Classes#
PyProblem: Defines an optimization problem to be solved.
PyOQNLPParams: Parameters for the OQNLP global optimization algorithm.
PyLocalSolution: A local solution found by the optimization algorithm.
PySolutionSet: A collection of local solutions found by the optimization algorithm.
Main Functions#
optimize: Perform global optimization on the given problem.
Complete Module Documentation#
pyglobalsearch#
PyGlobalSearch: Python bindings for globalsearch-rs.
PyGlobalSearch provides a Python interface to the globalsearch-rs Rust crate, which implements the OQNLP (OptQuest/NLP) algorithm for global optimization.
The OQNLP algorithm combines scatter search metaheuristics with local optimization to effectively find global minima in nonlinear optimization problems. It’s particularly effective for:
Multi-modal functions with multiple minima
Nonlinear optimization problems
Problems where derivative information may be unavailable or unreliable
Constrained optimization (using COBYLA solver)
Quick Start#
import pyglobalsearch as gs
import numpy as np
# Define your problem
def objective(x): return x[0]**2 + x[1]**2
def bounds(): return np.array([[-5, 5], [-5, 5]])
# Create problem and parameters
problem = gs.PyProblem(objective, bounds)
params = gs.PyOQNLPParams()
# Optimize
result = gs.optimize(problem, params)
best = result.best_solution()
print(f"Best: x = {best.x()}, f(x) = {best.fun()}")
Key Features#
Multiple Solvers: COBYLA, L-BFGS, Newton-CG, Trust Region, Nelder-Mead, Steepest Descent
Constraint Support: Inequality constraints via COBYLA solver
Builder Pattern: Flexible solver configuration using builder functions
Multiple Solutions: Returns all global minima found
Early Stopping: Target objectives and time limits for efficiency
Main Classes#
PyProblem: Defines the optimization problem (objective, bounds, constraints)
PyOQNLPParams: Controls algorithm behavior (iterations, population size, etc.)
PySolutionSet: Contains all solutions found by the optimizer
PyLocalSolution: Represents a single solution point and objective value
Algorithm Reference#
Based on: Ugray, Z., Lasdon, L., Plummer, J., Glover, F., Kelly, J., & Martí, R. (2007). “Scatter Search and Local NLP Solvers: A Multistart Framework for Global Optimization.” INFORMS Journal on Computing, 19(3), 328-340.
- pyglobalsearch.optimize(problem, params, observer=None, local_solver=None, local_solver_config=None, seed=None, target_objective=None, max_time=None, verbose=None, exclude_out_of_bounds=None, parallel=None)#
Perform global optimization on the given problem.
This function implements the OQNLP (OptQuest/NLP) algorithm, which combines scatter search metaheuristics with local optimization to find global minima of nonlinear problems. It’s particularly effective for multi-modal functions with multiple local minima.
The algorithm works in two stages:
Scatter search to explore the parameter space and identify promising regions
Local optimization from multiple starting points to refine solutions
- Parameters:
problem (PyProblem) – The optimization problem to solve (objective, bounds, constraints, etc.)
params (PyOQNLPParams) – Parameters controlling the optimization algorithm behavior
observer (PyObserver, optional) – Optional observer for tracking algorithm progress and metrics
local_solver (str, optional) – Local optimization algorithm (“COBYLA”, “LBFGS”, “NewtonCG”, “TrustRegion”, “NelderMead”, “SteepestDescent”)
local_solver_config (object, optional) – Custom configuration for the local solver (must match solver type)
seed (int, optional) – Random seed for reproducible results
target_objective (float, optional) – Stop optimization when this objective value is reached
max_time (float, optional) – Maximum time in seconds for Stage 2 optimization (unlimited if None)
verbose (bool, optional) – Print progress information during optimization
exclude_out_of_bounds (bool, optional) – Filter out solutions that violate bounds
parallel (bool, optional) – Enable parallel processing using rayon (default: False)
- Returns:
A set of local solutions found during optimization
- Return type:
PySolutionSet
- Raises:
ValueError – If solver configuration doesn’t match the specified solver type, or if the problem is not properly defined
Examples#
Basic optimization:
>>> result = gs.optimize(problem, params)
>>> best = result.best_solution()
With observer for progress tracking:
>>> observer = gs.observers.Observer().with_stage1_tracking().with_stage2_tracking().with_default_callback()
>>> result = gs.optimize(problem, params, observer=observer)
With custom solver configuration:
>>> cobyla_config = gs.builders.cobyla(max_iter=1000)
>>> result = gs.optimize(problem, params,
...                      local_solver="COBYLA",
...                      local_solver_config=cobyla_config)
With early stopping:
>>> result = gs.optimize(problem, params,
...                      target_objective=-1.0316,  # Stop when reached
...                      max_time=60.0,             # Max 60 seconds
...                      verbose=True)              # Show progress
Enable parallel processing:
>>> result = gs.optimize(problem, params, parallel=True)
- class pyglobalsearch.PyOQNLPParams(iterations=300, population_size=1000, wait_cycle=15, threshold_factor=0.2, distance_factor=0.75)#
Bases: object
Parameters for the OQNLP global optimization algorithm.
The OQNLP algorithm combines scatter search metaheuristics with local optimization to find global minima in nonlinear optimization problems. These parameters control the behavior of the algorithm.
- Parameters:
iterations (int) – Maximum number of global iterations
population_size (int) – Size of the scatter search population
wait_cycle (int) – Iterations to wait without improvement before termination
threshold_factor (float) – Controls acceptance threshold for new solutions
distance_factor (float) – Controls minimum distance between solutions
Examples#
Default parameters:
>>> params = gs.PyOQNLPParams()
Custom parameters for difficult problems:
>>> params = gs.PyOQNLPParams(
...     iterations=500,
...     population_size=2000,
...     wait_cycle=25,
...     threshold_factor=0.1,  # More exploration
...     distance_factor=0.2    # Allow closer solutions
... )
- distance_factor#
Controls minimum distance between solutions
- iterations#
Maximum number of stage two iterations
- population_size#
Size of the scatter search population
- threshold_factor#
Controls acceptance threshold for new solutions
- wait_cycle#
Iterations to wait without improvement before termination
- class pyglobalsearch.PyProblem(objective, variable_bounds, gradient=None, hessian=None, constraints=None)#
Bases: object
Defines an optimization problem to be solved.
A PyProblem encapsulates all the mathematical components needed for optimization: the objective function to minimize, variable bounds, and optional gradient, Hessian, and constraint functions.
- Parameters:
objective (callable) – Function that takes x (array-like) and returns the value to minimize (float)
variable_bounds (callable) – Function that returns bounds array of shape (n_vars, 2) with [lower, upper] bounds
gradient (callable, optional) – Function that takes x and returns gradient array (for gradient-based solvers)
hessian (callable, optional) – Function that takes x and returns Hessian matrix (for Newton-type solvers)
constraints (list[callable], optional) – List of constraint functions where constraint(x) >= 0 means satisfied
Examples#
Basic unconstrained problem:
>>> def objective(x): return x[0]**2 + x[1]**2
>>> def bounds(): return np.array([[-5, 5], [-5, 5]])
>>> problem = gs.PyProblem(objective, bounds)
Problem with gradient for faster convergence:
>>> def gradient(x): return np.array([2*x[0], 2*x[1]])
>>> problem = gs.PyProblem(objective, bounds, gradient=gradient)
Constrained problem (requires COBYLA solver):
>>> def constraint(x): return x[0] + x[1] - 1  # x[0] + x[1] >= 1
>>> problem = gs.PyProblem(objective, bounds, constraints=[constraint])
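Problem with an analytical Hessian for Newton-type solvers (a sketch; the Hessian below is that of the quadratic objective defined above, and "NewtonCG" is one of the documented solver names):
>>> def hessian(x): return np.array([[2.0, 0.0], [0.0, 2.0]])
>>> problem = gs.PyProblem(objective, bounds, gradient=gradient, hessian=hessian)
>>> result = gs.optimize(problem, params, local_solver="NewtonCG")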
- constraints#
List of constraint functions
- Parameters:
x – Input parameters as a list or array of floats
- Returns:
Constraint value (float), should be >= 0 to be satisfied
- Raises:
ValueError – If any constraint function does not return a float
- gradient#
Function returning the gradient
- Parameters:
x – Input parameters as a list or array of floats
- Returns:
Gradient as a list or array of floats
- Raises:
ValueError – If the gradient length does not match the number of variables
- hessian#
Function returning the Hessian matrix
- Parameters:
x – Input parameters as a list or array of floats
- Returns:
Hessian matrix as a 2D list or array of floats
- Raises:
ValueError – If the Hessian is not square or does not match the number of variables
- objective#
Objective function to minimize
- Parameters:
x – Input parameters as a list or array of floats
- Returns:
Objective function value (float)
- variable_bounds#
Function returning variable bounds
- Returns:
2D array-like of shape (n_vars, 2) with [lower, upper] bounds for each variable
- Return type:
array-like
- class pyglobalsearch.PyLocalSolution(point, objective)#
Bases: object
A local solution found by the optimization algorithm.
Represents a single solution point in parameter space along with its objective function value. This class provides both direct attribute access and SciPy-compatible methods for accessing solution data.
- Variables:
point (list[float]) – The solution coordinates
objective (float) – The objective function value at this solution point
Examples#
Create and access a solution:
>>> solution = PyLocalSolution([1.0, 2.0], 3.5)
>>> # Access via attributes
>>> x_coords = solution.point
>>> f_value = solution.objective
>>> # Access via SciPy-compatible methods
>>> x_coords = solution.x()
>>> f_value = solution.fun()
- fun()#
Returns the objective function value at the solution point.
- Returns:
The objective function value at this solution point (same as objective attribute)
- Return type:
float
Note
This method is similar to the fun method in SciPy.optimize results.
- objective#
The objective function value at this solution point
- Returns:
The objective function value at this solution point (same as fun() method)
- Return type:
float
- point#
The solution coordinates as a list of float values
- class pyglobalsearch.PySolutionSet(solutions)#
Bases: object
A collection of local solutions found by the optimization algorithm.
The OQNLP algorithm typically finds multiple local solutions during its search. This class stores all solutions and provides methods to access them, find the best solution, and iterate over the results.
Solutions are automatically sorted by objective function value (best first).
Examples#
Get optimization results and access solutions:
>>> result = gs.optimize(problem, params)
>>> # Access best solution
>>> best = result.best_solution()
>>> if best:
...     print(f"Best: x = {best.x()}, f(x) = {best.fun()}")
>>> # Check number of solutions
>>> print(f"Found {len(result)} solutions")
>>> # Iterate over all solutions
>>> for i, solution in enumerate(result):
...     print(f"Solution {i}: f(x) = {solution.fun()}")
>>> # Access specific solution by index
>>> second_best = result[1] if len(result) > 1 else None
- best_solution()#
Returns the best solution in the set based on the objective function value.
- Returns:
The solution with the lowest objective function value, or None if the set is empty
- Return type:
PyLocalSolution or None
- is_empty()#
Returns true if the solution set contains no solutions.
- Returns:
True if the solution set is empty, False otherwise
- Return type:
bool
- solutions#
The list of local solutions found by the optimization algorithm.
- Returns:
The list of local solutions found by the optimization algorithm
- Return type:
list[PyLocalSolution]
pyglobalsearch.builders#
- class pyglobalsearch.builders.PyLineSearchParams(method)#
Bases: object
- static hagerzhang(params)#
Hager-Zhang line search configuration
- Parameters:
params – Hager-Zhang line search parameters
- static morethuente(params)#
More-Thuente line search configuration
- Parameters:
params – More-Thuente line search parameters
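A hedged sketch of building a wrapped line search configuration; it assumes the static constructors accept the corresponding builder outputs (gs.builders.hagerzhang() / gs.builders.morethuente()). As the L-BFGS examples further below show, the builder output can also be passed to line_search_params directly:
>>> hz = gs.builders.hagerzhang()
>>> ls = gs.builders.PyLineSearchParams.hagerzhang(hz)
>>> lbfgs_config = gs.builders.lbfgs(line_search_params=ls)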
- class pyglobalsearch.builders.PyHagerZhang(delta=0.1, sigma=0.9, epsilon=1e-06, theta=0.5, gamma=0.66, eta=0.01, bounds=Ellipsis)#
Bases: object
Hager-Zhang line search method configuration.
The Hager-Zhang line search is a sophisticated line search algorithm that satisfies both Wolfe conditions and provides strong theoretical guarantees. It’s particularly effective for L-BFGS and other quasi-Newton methods.
- Parameters:
delta (float) – Armijo parameter for sufficient decrease condition
sigma (float) – Wolfe parameter for curvature condition
epsilon (float) – Tolerance for approximate Wolfe conditions
theta (float) – Parameter for bracketing phase
gamma (float) – Parameter for update rules
eta (float) – Parameter for switching conditions
Examples#
Default parameters (recommended for most problems):
>>> hz_config = gs.builders.hagerzhang()
Conservative line search (more function evaluations, more reliable):
>>> conservative = gs.builders.hagerzhang(delta=0.01, sigma=0.99)
Aggressive line search (fewer evaluations, less reliable):
>>> aggressive = gs.builders.hagerzhang(delta=0.3, sigma=0.7)
Use with L-BFGS:
>>> lbfgs_config = gs.builders.lbfgs(line_search_params=hz_config)
- bounds#
Set lower and upper bound of step
- delta#
Constant C1 of the strong Wolfe conditions
- epsilon#
Parameter for approximate Wolfe conditions
- eta#
Used in the lower bound for beta_k^N.
- gamma#
Parameter that determines when a bisection step is performed.
- sigma#
Constant C2 of the strong Wolfe conditions
- theta#
Parameter used in the update rules when the potential intervals [a, c] or [c, b] violate the opposite slope condition.
- pyglobalsearch.builders.hagerzhang()#
- class pyglobalsearch.builders.PyMoreThuente(c1=0.0001, c2=0.9, width_tolerance=1e-10, bounds=Ellipsis)#
Bases: object
- bounds#
Set lower and upper bound of step
- c1#
Constant C1 of the strong Wolfe conditions
- c2#
Constant C2 of the strong Wolfe conditions
- width_tolerance#
Relative tolerance on the width of the step-length uncertainty interval
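Usage sketch with L-BFGS; the c1/c2 keyword arguments mirror the PyMoreThuente constructor, as in the L-BFGS examples below:
>>> mt = gs.builders.morethuente(c1=1e-4, c2=0.9)
>>> lbfgs_config = gs.builders.lbfgs(line_search_params=mt)
>>> result = gs.optimize(problem, params,
...                      local_solver="LBFGS",
...                      local_solver_config=lbfgs_config)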
- pyglobalsearch.builders.morethuente()#
- class pyglobalsearch.builders.PyLBFGS(max_iter=300, tolerance_grad=Ellipsis, tolerance_cost=Ellipsis, history_size=10, l1_coefficient=None, line_search_params=Ellipsis)#
Bases: object
L-BFGS (Limited-memory Broyden-Fletcher-Goldfarb-Shanno) solver configuration.
L-BFGS is a quasi-Newton optimization algorithm that approximates the inverse Hessian using only gradient information and a limited history of previous steps. It’s one of the most effective algorithms for smooth, unconstrained optimization.
- Parameters:
max_iter (int) – Maximum number of iterations
tolerance_grad (float) – Gradient norm tolerance for convergence
tolerance_cost (float) – Relative function change tolerance
history_size (int) – Number of previous steps to store
l1_coefficient (float, optional) – L1 regularization coefficient for sparsity
line_search_params (PyLineSearchParams) – Line search method configuration
Key Features
Requires only gradient information (no Hessian)
Superlinear convergence near the optimum
Memory-efficient (stores only m previous steps)
Excellent for large-scale optimization
Convergence Criteria
L-BFGS stops when:
||∇f(x)|| < tolerance_grad (gradient norm is small)
|f_new - f_old| / max(|f_new|, |f_old|, 1) < tolerance_cost
Maximum iterations reached
Examples#
Default configuration (good for most problems):
>>> lbfgs_config = gs.builders.lbfgs()
High precision optimization:
>>> precise = gs.builders.lbfgs(
...     tolerance_grad=1e-12,
...     max_iter=1000
... )
Large-scale problems (more history for better approximation):
>>> large_scale = gs.builders.lbfgs(
...     history_size=20,
...     line_search_params=gs.builders.hagerzhang()
... )
Sparse optimization with L1 regularization:
>>> sparse = gs.builders.lbfgs(
...     l1_coefficient=0.01,  # Promotes sparsity
...     tolerance_grad=1e-6
... )
Conservative line search for difficult problems:
>>> robust = gs.builders.lbfgs(
...     line_search_params=gs.builders.morethuente(c1=1e-6, c2=0.99)
... )
- line_search_params#
Line search method configuration
- Type:
PyLineSearchParams
- pyglobalsearch.builders.lbfgs()#
- class pyglobalsearch.builders.PyNelderMead(simplex_delta=0.1, sd_tolerance=Ellipsis, max_iter=300, alpha=1.0, gamma=2.0, rho=0.5, sigma=0.5)#
Bases: object
- pyglobalsearch.builders.neldermead()#
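Nelder-Mead is derivative-free, so the problem only needs an objective and bounds. A usage sketch, assuming neldermead() accepts the same keyword arguments as the PyNelderMead constructor:
>>> nm_config = gs.builders.neldermead(max_iter=500, simplex_delta=0.05)
>>> result = gs.optimize(problem, params,
...                      local_solver="NelderMead",
...                      local_solver_config=nm_config)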
- class pyglobalsearch.builders.PySteepestDescent(max_iter=300, line_search_params=Ellipsis)#
Bases: object
- line_search_params#
Line search method configuration
- Type:
PyLineSearchParams
- pyglobalsearch.builders.steepestdescent()#
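Usage sketch, assuming steepestdescent() mirrors the PySteepestDescent constructor:
>>> sd_config = gs.builders.steepestdescent(
...     max_iter=500,
...     line_search_params=gs.builders.morethuente()
... )
>>> result = gs.optimize(problem, params,
...                      local_solver="SteepestDescent",
...                      local_solver_config=sd_config)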
- class pyglobalsearch.builders.PyNewtonCG(max_iter=300, curvature_threshold=0.0, tolerance=Ellipsis, line_search_params=Ellipsis)#
Bases: object
- line_search_params#
Line search method configuration
- Type:
PyLineSearchParams
- pyglobalsearch.builders.newtoncg()#
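Newton-CG uses curvature information, so define the problem with gradient (and ideally hessian) functions. A sketch, assuming newtoncg() mirrors the PyNewtonCG constructor:
>>> ncg_config = gs.builders.newtoncg(max_iter=200, curvature_threshold=0.0)
>>> result = gs.optimize(problem, params,
...                      local_solver="NewtonCG",
...                      local_solver_config=ncg_config)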
- class pyglobalsearch.builders.PyTrustRegionRadiusMethod#
Bases: object
- Cauchy = PyTrustRegionRadiusMethod.Cauchy#
- Steihaug = PyTrustRegionRadiusMethod.Steihaug#
- static cauchy()#
- static steihaug()#
- class pyglobalsearch.builders.PyTrustRegion(trust_region_radius_method=Ellipsis, max_iter=300, radius=1.0, max_radius=100.0, eta=0.125)#
Bases: object
- trust_region_radius_method#
Trust region radius method
- pyglobalsearch.builders.trustregion()#
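A sketch combining the radius method with the trust region builder, assuming trustregion() mirrors the PyTrustRegion constructor:
>>> tr_config = gs.builders.trustregion(
...     trust_region_radius_method=gs.builders.PyTrustRegionRadiusMethod.steihaug(),
...     max_radius=50.0
... )
>>> result = gs.optimize(problem, params,
...                      local_solver="TrustRegion",
...                      local_solver_config=tr_config)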
- class pyglobalsearch.builders.PyCOBYLA(max_iter=300, step_size=1.0, ftol_rel=None, ftol_abs=None, xtol_rel=None, xtol_abs=None)#
Bases: object
COBYLA (Constrained Optimization BY Linear Approximations) solver configuration.
COBYLA is a derivative-free optimization algorithm that can handle inequality constraints. It works by building linear approximations to the objective function and constraints, making it suitable for problems where gradients are unavailable or unreliable.
- Parameters:
max_iter (int) – Maximum number of iterations
step_size (float) – Initial trust region radius
ftol_rel (float, optional) – Relative tolerance for function convergence
ftol_abs (float, optional) – Absolute tolerance for function convergence
xtol_rel (float, optional) – Relative tolerance for parameter convergence
xtol_abs (list[float], optional) – Per-variable absolute tolerances for parameters
Key Features
No gradient information required
Handles inequality constraints (constraint(x) ≥ 0)
Robust for noisy or discontinuous functions
Good for problems with expensive function evaluations
Convergence Criteria
COBYLA stops when any of these conditions are met:
Maximum iterations reached
Function tolerance satisfied: |f_new - f_old| < ftol_abs + ftol_rel * |f_old|
Parameter tolerance satisfied: |x_new - x_old| < xtol_abs + xtol_rel * |x_old|
Examples#
Default configuration:
>>> cobyla_config = gs.builders.cobyla()
High precision optimization:
>>> precise = gs.builders.cobyla(
...     max_iter=1000,
...     xtol_abs=[1e-10] * n_vars  # Very tight parameter tolerance
... )
For expensive function evaluations:
>>> efficient = gs.builders.cobyla(
...     max_iter=100,
...     ftol_rel=1e-4,  # Looser function tolerance
...     step_size=0.1   # Smaller initial steps
... )
Different tolerance per variable (for scaled problems):
>>> scaled = gs.builders.cobyla(
...     xtol_abs=[1e-6, 1e-8, 1e-4]  # x1: 1e-6, x2: 1e-8, x3: 1e-4
... )
- pyglobalsearch.builders.cobyla()#
Create a COBYLA solver configuration.
COBYLA (Constrained Optimization BY Linear Approximations) is the only solver in this library that can handle inequality constraints. It’s also an excellent choice for derivative-free optimization.
- Parameters:
max_iter (int) – Maximum number of optimization iterations
step_size (float) – Initial trust region radius (larger = more exploration)
ftol_rel (float, optional) – Relative tolerance for function value convergence
ftol_abs (float, optional) – Absolute tolerance for function value convergence
xtol_rel (float, optional) – Relative tolerance for parameter convergence
xtol_abs (list[float], optional) – Per-variable absolute tolerances (length must match problem dimension)
- Returns:
Configured COBYLA solver instance
- Return type:
PyCOBYLA
Note
If xtol_abs is provided, its length must match the problem dimension
For constrained problems, COBYLA is currently the only supported solver
Larger step_size values encourage more exploration but may slow convergence
Examples#
Default COBYLA (good starting point):
>>> config = gs.builders.cobyla()
Conservative settings for reliable convergence:
>>> config = gs.builders.cobyla(
...     max_iter=1000,
...     step_size=0.1,
...     xtol_abs=[1e-8, 1e-8]  # Same tolerance for both variables
... )
Different tolerance per variable (useful for scaled problems):
>>> config = gs.builders.cobyla(
...     xtol_abs=[1e-6, 1e-8, 1e-4]  # x1: loose, x2: tight, x3: very loose
... )
pyglobalsearch.observers#
- class pyglobalsearch.observers.PyObserverMode#
Bases: object
Observer mode determines which stages to track
- Both = PyObserverMode.Both#
- Stage1Only = PyObserverMode.Stage1Only#
- Stage2Only = PyObserverMode.Stage2Only#
- class pyglobalsearch.observers.PyStage1State#
Bases: object
State tracker for Stage 1 of the algorithm
Tracks comprehensive metrics during the scatter search phase that builds the initial reference set. This includes reference set construction, trial point generation, function evaluations, and substage progression.
- best_objective#
Get best objective value
Returns the best (lowest) objective function value found so far in Stage 1. This represents the highest quality solution discovered during scatter search.
- best_point#
Get best solution coordinates
Returns the coordinates (decision variables) of the best solution found so far in Stage 1, or None if no solution has been evaluated yet. Returns a Python list of floats representing the solution point.
- current_substage#
Get current substage name
Returns a string identifier for the current phase of Stage 1 execution. This helps track progress through the scatter search algorithm.
- function_evaluations#
Get total number of function evaluations
Returns the cumulative count of objective function evaluations performed during Stage 1. This includes evaluations for initial points, diversification, intensification trial points, and local optimization.
- reference_set_size#
Get reference set size
Returns the current number of solutions in the reference set. The reference set maintains a diverse collection of high-quality solutions that guide the intensification phase of scatter search.
- total_time#
Get total elapsed time since Stage 1 started (seconds)
Returns the time elapsed since Stage 1 began. If Stage 1 is still running, returns the current elapsed time. If Stage 1 has completed, returns the total time spent in Stage 1.
- trial_points_generated#
Get number of trial points generated
Returns the total number of trial points generated during the intensification phase of scatter search. Trial points are candidate solutions created by combining and perturbing reference set members.
- class pyglobalsearch.observers.PyStage2State#
Bases: object
State tracker for Stage 2 of the algorithm
Tracks comprehensive metrics during the iterative refinement phase that improves the solution set through merit filtering and local optimization. This phase focuses on intensifying search around high-quality regions.
- best_objective#
Get best objective value
Returns the best (lowest) objective function value found across all solutions in the current solution set. This represents the highest quality solution discovered during Stage 2.
- best_point#
Get best solution coordinates
Returns the coordinates (decision variables) of the best solution found so far in Stage 2, or None if no solution has been evaluated yet. Returns a Python list of floats representing the solution point.
- current_iteration#
Get current iteration
Returns the current iteration number in Stage 2. Each iteration represents a complete cycle of selection, generation, evaluation, and filtering.
- function_evaluations#
Get total function evaluations
Returns the cumulative count of objective function evaluations performed during Stage 2. This includes evaluations for trial points generated during each iteration and function evaluations performed by local solvers.
- improved_local_calls#
Get number of local solver calls that improved the solution set
Returns the number of local solver calls that successfully improved the solution set by finding better solutions. This measures the effectiveness of local optimization in finding improvements.
- last_added_point#
Get last added solution coordinates
Returns the coordinates (decision variables) of the most recently added solution to the solution set, or None if no solution has been added yet. This is particularly useful for tracking new discoveries in multimodal optimization problems. Returns a Python list of floats.
- local_solver_calls#
Get number of local solver calls
Returns the total number of times local optimization algorithms have been invoked during Stage 2. Each call attempts to improve a candidate solution through gradient-based or derivative-free local search.
- solution_set_size#
Get solution set size
Returns the current number of solutions maintained in the working solution set. The solution set maintains a diverse collection of high-quality solutions that balance quality and coverage of the search space.
- threshold_value#
Get threshold value
Returns the current merit filter threshold value. Solutions must have an objective value better than this threshold to be accepted into the solution set during filtering operations.
- total_time#
Get total time spent in Stage 2 (seconds)
Returns the time elapsed since Stage 2 began. If Stage 2 is still running, returns the current elapsed time. If Stage 2 has completed, returns the total time spent in Stage 2.
- unchanged_cycles#
Get unchanged cycles count
Returns the number of consecutive iterations where the solution set has not improved. This is a key convergence indicator used to detect when the algorithm should terminate due to stagnation.
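A hedged sketch of a callback that reads these Stage 2 fields (attribute access is assumed here, following the attribute listings above; the callback signature is the one documented for with_callback below):
>>> def report(stage1, stage2):
...     if stage2 is not None:
...         print(f"iter {stage2.current_iteration}: "
...               f"best = {stage2.best_objective}, "
...               f"threshold = {stage2.threshold_value}, "
...               f"unchanged cycles = {stage2.unchanged_cycles}")
>>> observer = gs.observers.PyObserver()
>>> observer.with_stage2_tracking()
>>> observer.with_callback(report)
>>> result = gs.optimize(problem, params, observer=observer)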
- class pyglobalsearch.observers.PyObserver#
Bases: object
Main observer class that tracks algorithm state
The observer can be configured to track different metrics during Stage 1 (reference set construction) and Stage 2 (iterative improvement). It supports real-time monitoring through callbacks and provides detailed statistics about algorithm performance and convergence.
- elapsed_time#
Get elapsed time in seconds
Returns the time elapsed since start_timer() was called. Returns None if timing is not enabled or timer hasn’t started.
- is_timing_enabled#
Check if timing is enabled
Returns true if the observer is configured to track timing information.
- should_observe_stage1#
Check if Stage 1 should be observed
Returns true if Stage 1 tracking is enabled and the observer mode allows Stage 1 observation (Stage1Only or Both modes).
- should_observe_stage2#
Check if Stage 2 should be observed
Returns true if Stage 2 tracking is enabled and the observer mode allows Stage 2 observation (Stage2Only or Both modes).
- stage1()#
Get Stage 1 state reference
Returns the current Stage 1 state if Stage 1 tracking is enabled and Stage 1 is still active. Returns None after Stage 1 completes to prevent repeated callback invocations.
For final Stage 1 statistics after completion, use stage1_final().
- stage1_final()#
Get Stage 1 state reference even after completion (for final statistics)
Returns the final Stage 1 state regardless of whether Stage 1 is still active. This method should be used for accessing final statistics after optimization completes.
- stage2()#
Get Stage 2 state reference
Returns the current Stage 2 state if Stage 2 tracking is enabled and Stage 2 has started. Returns None before Stage 2 begins to prevent premature callback invocations.
- unique_updates()#
Enable filtering of Stage 2 callback messages to only show unique updates
When enabled, Stage 2 callback messages will only be printed when there is an actual change in the optimization state (other than just the iteration number). This reduces log verbosity by filtering out identical consecutive messages.
# Changes that trigger printing
- Best objective value changes
- Solution set size changes
- Threshold value changes
- Local solver call counts change
- Function evaluation counts change
# Example
```python
observer = PyObserver()
observer.with_stage2_tracking()
observer.with_default_callback()
observer.unique_updates()  # Only print when state changes
```
- with_callback(callback)#
Set a Python callback function to be called during optimization
The callback function will be called with two arguments: (stage1_state, stage2_state) where each can be None or the corresponding state object.
# Arguments
callback - Python callable that takes (stage1_state, stage2_state) as arguments
# Example
```python
def my_callback(stage1, stage2):
    if stage1:
        print(f"Stage 1: {stage1.function_evaluations()} evaluations")
    if stage2:
        print(f"Stage 2: Iteration {stage2.current_iteration()}")

observer = PyObserver()
observer.with_callback(my_callback)
```
- with_callback_frequency(frequency)#
Set the frequency for callback invocation
Controls how often the callback is invoked during Stage 2. For example, a frequency of 10 means the callback is called every 10 iterations.
# Arguments
frequency - Number of iterations between callback calls
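For example, a sketch that prints default Stage 2 updates every 10 iterations:
>>> observer = gs.observers.PyObserver()
>>> observer.with_stage2_tracking()
>>> observer.with_default_callback()
>>> observer.with_callback_frequency(10)
>>> result = gs.optimize(problem, params, observer=observer)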
- with_default_callback()#
Use a default console logging callback for Stage 1 and Stage 2
This is a convenience method that provides sensible default logging for both stages of the optimization. The default callback prints progress information to stderr (using eprintln!).
- with_mode(mode)#
Set observer mode
Controls which stages of the optimization algorithm are monitored. This allows fine-grained control over tracking scope and performance.
# Arguments
mode - The observer mode determining which stages to track
- with_stage1_callback()#
Use a default console logging callback for Stage 1 only
This prints updates during scatter search and local optimization in Stage 1.
- with_stage1_tracking()#
Enable Stage 1 tracking
Enables tracking of scatter search metrics including:
- Reference set size and composition
- Best objective values found
- Function evaluation counts
- Trial point generation statistics
- Sub-stage progression (initialization, diversification, intensification)
Stage 1 tracking is required for stage1() and stage1_final() to return data.
- with_stage2_callback()#
Use a default console logging callback for Stage 2 only
This prints iteration progress during Stage 2. Use with_callback_frequency() to control how often updates are printed.
- with_stage2_tracking()#
Enable Stage 2 tracking
Enables tracking of iterative refinement metrics including:
- Current iteration number
- Solution set size and composition
- Best objective values
- Local solver call statistics
- Function evaluation counts
- Threshold values and merit filtering
- Convergence metrics (unchanged cycles)
Stage 2 tracking is required for stage2() to return data.
- with_timing()#
Enable timing tracking for stages
When enabled, tracks elapsed time for:
- Total Stage 1 duration
- Total Stage 2 duration
- Sub-stage timing within Stage 1
Timing data is accessible via total_time on PyStage1State and PyStage2State.
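A sketch of collecting final Stage 1 statistics after a run (attribute access on the state object is assumed, as in the listings above):
>>> observer = gs.observers.PyObserver()
>>> observer.with_stage1_tracking()
>>> observer.with_stage2_tracking()
>>> observer.with_timing()
>>> result = gs.optimize(problem, params, observer=observer)
>>> s1 = observer.stage1_final()
>>> if s1 is not None:
...     print(f"Stage 1: {s1.function_evaluations} evaluations in {s1.total_time:.2f} s")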