Logging and Restarting an Optimization Run¶
This notebook describes how DEHB logs its state and results and how you can reload a checkpoint from the disk and restart the optimization run.
DEHB supports logging in three different ways, which can be specified in the constructor of DEHB via the save_freq parameter:
- "end": saves the optimizer state only at the end of the optimization (i.e. at the end of run). Note: this option is suboptimal for users of the ask & tell interface, as the sketch below illustrates.
- "incumbent": saves the optimizer state whenever the incumbent changes.
- "step": saves the optimizer state after every step, i.e. after every call of tell.
No matter which option is chosen, the state is always also saved after the run function has finished (as with "end").
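To see why "end" is a poor fit for the ask & tell interface: in that setting run is typically never called, so with "end" the state would only be saved if you trigger a save yourself. The loop below is a minimal sketch of an ask & tell loop with save_freq="step", reusing the dehb object and target_function defined in the setup section further down; the job-info keys used here ("config", "fidelity") are assumed from the ask & tell example and may differ between DEHB versions.

for _ in range(10):
    job_info = dehb.ask()  # sampled configuration and fidelity to evaluate
    result = target_function(job_info["config"], fidelity=job_info["fidelity"])
    # With save_freq="step" the checkpoint is updated on every tell; with
    # save_freq="end" nothing would be saved here, since run() is never called.
    dehb.tell(job_info, result)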
The directory where the state and logs are saved is specified via the output_path parameter. If no output path is specified, the current directory is used.
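Once a run has finished, you can inspect that directory to see what DEHB wrote. A quick sketch using only the standard library; the exact file names (checkpoint, logs) depend on the DEHB version:

from pathlib import Path

for path in sorted(Path("./temp_folder").iterdir()):
    print(path.name)  # e.g. the saved optimizer state and the run log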
Setting up DEHB¶
Here we only use a toy setup for DEHB, as in the interfacing_DEHB example. For a detailed description of the individual parts of DEHB, please refer to that example.
import time
import warnings
from typing import Dict, List, Optional, Union

import ConfigSpace
import numpy as np

warnings.filterwarnings("ignore")


def target_function(
    x: Union[ConfigSpace.Configuration, List, np.ndarray],
    fidelity: Optional[Union[int, float]] = None,
    **kwargs,
) -> Dict:
    start = time.time()
    y = np.random.uniform()  # placeholder response of evaluation
    time.sleep(0.05)  # simulates runtime
    cost = time.time() - start

    # result dict passed to DE/DEHB as function evaluation output
    result = {
        "fitness": y,  # must-have key that DE/DEHB minimizes
        "cost": cost,  # must-have key that associates cost/runtime
        "info": dict()  # optional key containing a dictionary of additional info
    }
    return result
import ConfigSpace


def create_search_space():
    # Creating a one-dimensional search space of real numbers in [3, 10]
    cs = ConfigSpace.ConfigurationSpace()
    cs.add_hyperparameter(ConfigSpace.UniformFloatHyperparameter("x0", lower=3, upper=10, log=False))
    return cs


cs = create_search_space()
dimensions = len(cs.get_hyperparameters())
min_fidelity, max_fidelity = (0.1, 3)
from dehb import DEHB

dehb = DEHB(
    f=target_function,
    dimensions=dimensions,
    cs=cs,
    min_fidelity=min_fidelity,
    max_fidelity=max_fidelity,
    output_path="./temp_folder",
    save_freq="end",
    n_workers=1,
)
Running DEHB¶
First, we run DEHB for five brackets; later we will use the created checkpoint to restart the optimization. Since we chose the option "end", the state is only saved once these five brackets have finished.
trajectory, runtime, history = dehb.run(brackets=5)
print(f"Trajectory length: {len(trajectory)}")
print("Incumbent:")
print(dehb.get_incumbents())
Trajectory length: 105
Incumbent:
(Configuration(values={
  'x0': 3.5040997379934,
}), 0.004179776039343164)
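The returned trajectory holds the incumbent (best-so-far) fitness after each evaluation, so a quick way to see how much the run improved is to compare its first and last entries. A minimal sketch, assuming trajectory is a flat sequence of fitness values as returned by run above:

# incumbent fitness at the start and at the end of this run
print(f"First incumbent fitness: {trajectory[0]:.6f}")
print(f"Final incumbent fitness: {trajectory[-1]:.6f}")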
Restarting DEHB¶
Now we use the previously created checkpoint to restart the optimization run. For this, we specify the same output_path as above and additionally set the resume flag to True. After reloading the checkpoint, we run for another five brackets and report the results.
dehb = DEHB(
    f=target_function,
    dimensions=dimensions,
    cs=cs,
    min_fidelity=min_fidelity,
    max_fidelity=max_fidelity,
    output_path="./temp_folder",
    save_freq="end",
    n_workers=1,
    resume=True,
)
trajectory, runtime, history = dehb.run(brackets=5)
print(f"Trajectory length: {len(trajectory)}")
print("Incumbent:")
print(dehb.get_incumbents())
Trajectory length: 183
Incumbent:
(Configuration(values={
  'x0': 6.6390343221257,
}), 0.0013943861635727917)
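Note that the restarted run continues from the restored history: the trajectory now has 183 entries, i.e. it also contains the 105 evaluations from the first run. If you want to check this programmatically, compare against the length recorded before resuming; the n_first_run value below simply restates the number printed after the first run and is used here only for illustration.

n_first_run = 105  # trajectory length printed after the first run above
assert len(trajectory) > n_first_run
print(f"Evaluations added after resuming: {len(trajectory) - n_first_run}")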