Continue an Optimization

SMAC can also continue from a previous run. To do so, it reads in the old run files (stored in a directory derived from the scenario's name, output_directory, and seed) and restores the corresponding components. In this example, the optimization of a simple quadratic function is continued.
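As a minimal sketch of where these files live, you could list the run directories as shown below. The default output directory smac3_output, the <name>/<seed> layout, and file names such as runhistory.json are assumptions about SMAC's default on-disk format, not something the example script below references explicitly.

from pathlib import Path

# Assumption: SMAC's default output directory and the layout output_directory/<name>/<seed>
output_directory = Path("smac3_output")

# Each run directory holds the files (e.g. runhistory.json) that a facade
# created with overwrite=False reads in order to continue the optimization
for run_dir in output_directory.glob("*/*"):
    print(run_dir, [p.name for p in run_dir.iterdir()])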

First, after creating a scenario with 50 trials, we run SMAC with overwrite=True. This overwrites any previous runs (in case the example was run before). We use a custom callback to artificially stop this first optimization after 10 trials.

Second, we run the SMAC optimization again using the same scenario, but this time with overwrite=False. Since a previous run with the same metadata already exists, that run is continued until the 50 trials are reached.

[INFO][abstract_initial_design.py:147] Using 10 initial design configurations and 0 additional configurations.
[INFO][abstract_intensifier.py:306] Using only one seed for deterministic scenario.
[INFO][abstract_intensifier.py:516] Added config e08add as new incumbent because there are no incumbents yet.
[INFO][abstract_intensifier.py:595] Added config e86960 and rejected config e08add as incumbent because it is not better than the incumbents on 1 instances:
[INFO][abstract_intensifier.py:595] Added config 78cf4a and rejected config e86960 as incumbent because it is not better than the incumbents on 1 instances:
[INFO][abstract_intensifier.py:595] Added config 9c0078 and rejected config 78cf4a as incumbent because it is not better than the incumbents on 1 instances:
[INFO][smbo.py:225] A callback returned False. Abort is requested.
[INFO][smbo.py:333] Shutting down because the stop flag was set.
[INFO][abstract_initial_design.py:147] Using 10 initial design configurations and 0 additional configurations.
[INFO][smbo.py:498] Continuing from previous run.
[INFO][abstract_intensifier.py:288] Added existing seed 209652396 from runhistory to the intensifier.
[INFO][abstract_intensifier.py:306] Using only one seed for deterministic scenario.
[INFO][abstract_intensifier.py:595] Added config 9aa814 and rejected config 9c0078 as incumbent because it is not better than the incumbents on 1 instances:
[INFO][abstract_intensifier.py:595] Added config d85172 and rejected config 9aa814 as incumbent because it is not better than the incumbents on 1 instances:
[INFO][abstract_intensifier.py:595] Added config 7c6c87 and rejected config d85172 as incumbent because it is not better than the incumbents on 1 instances:
[INFO][abstract_intensifier.py:595] Added config 75d013 and rejected config 7c6c87 as incumbent because it is not better than the incumbents on 1 instances:
[INFO][abstract_intensifier.py:595] Added config 8964ef and rejected config 75d013 as incumbent because it is not better than the incumbents on 1 instances:
[INFO][abstract_intensifier.py:595] Added config e90b89 and rejected config 8964ef as incumbent because it is not better than the incumbents on 1 instances:
[INFO][abstract_intensifier.py:595] Added config e486a5 and rejected config e90b89 as incumbent because it is not better than the incumbents on 1 instances:
[INFO][abstract_intensifier.py:595] Added config a95115 and rejected config e486a5 as incumbent because it is not better than the incumbents on 1 instances:
[INFO][abstract_intensifier.py:595] Added config 9cfa74 and rejected config a95115 as incumbent because it is not better than the incumbents on 1 instances:
[INFO][abstract_intensifier.py:595] Added config 0d7153 and rejected config 9cfa74 as incumbent because it is not better than the incumbents on 1 instances:
[INFO][smbo.py:320] Finished 50 trials.
[INFO][smbo.py:328] Configuration budget is exhausted:
[INFO][smbo.py:329] --- Remaining wallclock time: inf
[INFO][smbo.py:330] --- Remaining cpu time: inf
[INFO][smbo.py:331] --- Remaining trials: 0
[INFO][abstract_intensifier.py:306] Using only one seed for deterministic scenario.
Default cost: 25.0
Incumbent cost of first run: 0.09616130553973981
[INFO][abstract_intensifier.py:306] Using only one seed for deterministic scenario.
Incumbent cost of continued run: 0.0001605482962957238

from __future__ import annotations

from ConfigSpace import Configuration, ConfigurationSpace, Float

from smac import Callback
from smac import HyperparameterOptimizationFacade as HPOFacade
from smac import Scenario
from smac.main.smbo import SMBO
from smac.runhistory import TrialInfo, TrialValue

__copyright__ = "Copyright 2021, AutoML.org Freiburg-Hannover"
__license__ = "3-clause BSD"


class StopCallback(Callback):
    def __init__(self, stop_after: int):
        self._stop_after = stop_after

    def on_tell_end(self, smbo: SMBO, info: TrialInfo, value: TrialValue) -> bool | None:
        """Called after the stats are updated and the trial is added to the runhistory. Optionally, returns false
        to gracefully stop the optimization.
        """
        if smbo.runhistory.finished == self._stop_after:
            return False

        return None


class QuadraticFunction:
    @property
    def configspace(self) -> ConfigurationSpace:
        cs = ConfigurationSpace(seed=0)
        x = Float("x", (-5, 5), default=-5)
        cs.add([x])

        return cs

    def train(self, config: Configuration, seed: int = 0) -> float:
        """Returns the y value of a quadratic function with a minimum at x=0."""
        x = config["x"]
        return x * x


if __name__ == "__main__":
    model = QuadraticFunction()

    # Scenario object specifying the optimization "environment"
    scenario = Scenario(model.configspace, deterministic=True, n_trials=50)
    stop_after = 10

    # Now we use SMAC to find the best hyperparameters
    smac = HPOFacade(
        scenario,
        model.train,  # We pass the target function here
        callbacks=[StopCallback(stop_after=stop_after)],
        overwrite=True,  # Overwrites any previous results that are inconsistent with the meta data
    )

    incumbent = smac.optimize()
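    # The callback stopped the optimization after exactly stop_after trials,
    # so only those finished trials are in the runhistory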
    assert smac.runhistory.finished == stop_after

    # Now we want to continue the optimization
    # Make sure we don't overwrite the last run
    smac2 = HPOFacade(
        scenario,
        model.train,
        overwrite=False,
    )

    # Check whether we get the same incumbent
    assert smac.intensifier.get_incumbent() == smac2.intensifier.get_incumbent()
    assert smac2.runhistory.finished == stop_after

    # And now we finish the optimization
    incumbent2 = smac2.optimize()

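    # Let's calculate the cost of the default configuration and of both incumbents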
    default_cost = smac.validate(model.configspace.get_default_configuration())
    print(f"Default cost: {default_cost}")

    incumbent_cost = smac.validate(incumbent)
    print(f"Incumbent cost of first run: {incumbent_cost}")

    incumbent_cost = smac2.validate(incumbent2)
    print(f"Incumbent cost of continued run: {incumbent_cost}")

Total running time of the script: (0 minutes 1.508 seconds)