Multi fidelity facade

smac.facade.multi_fidelity_facade

MultiFidelityFacade
MultiFidelityFacade(
    scenario: Scenario,
    target_function: Callable | str | AbstractRunner,
    *,
    model: AbstractModel | None = None,
    acquisition_function: AbstractAcquisitionFunction | None = None,
    acquisition_maximizer: AbstractAcquisitionMaximizer | None = None,
    initial_design: AbstractInitialDesign | None = None,
    random_design: AbstractRandomDesign | None = None,
    intensifier: AbstractIntensifier | None = None,
    multi_objective_algorithm: AbstractMultiObjectiveAlgorithm | None = None,
    runhistory_encoder: AbstractRunHistoryEncoder | None = None,
    config_selector: ConfigSelector | None = None,
    logging_level: int | Path | Literal[False] | None = None,
    callbacks: list[Callback] = None,
    overwrite: bool = False,
    dask_client: Client | None = None
)
Bases: HyperparameterOptimizationFacade
This facade configures SMAC in a multi-fidelity setting.
Warning
smac.main.config_selector.ConfigSelector
contains the min_trials
parameter. This parameter determines
how many samples are required to train the surrogate model. If budgets are involved, the highest budgets
are checked first. For example, if min_trials is three, but we find only two trials in the runhistory for
the highest budget, we will use trials of a lower budget instead.
Source code in smac/facade/abstract_facade.py
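The budget fallback described in the warning above can be sketched in plain Python. This is an illustrative toy, not SMAC's actual implementation: the function name `trials_for_training` and the `(budget, cost)` pair representation are assumptions made for the example.

```python
from collections import defaultdict

def trials_for_training(runhistory, min_trials=3):
    """runhistory: list of (budget, cost) pairs. Return the trials of the
    highest budget that has at least min_trials entries, falling back to
    lower budgets otherwise."""
    by_budget = defaultdict(list)
    for budget, cost in runhistory:
        by_budget[budget].append((budget, cost))
    # Highest budgets are checked first, as the warning above describes.
    for budget in sorted(by_budget, reverse=True):
        if len(by_budget[budget]) >= min_trials:
            return by_budget[budget]
    # No single budget has enough trials: fall back to everything we have.
    return [t for b in sorted(by_budget, reverse=True) for t in by_budget[b]]

rh = [(1.0, 0.9), (1.0, 0.8), (1.0, 0.7), (3.0, 0.5), (3.0, 0.4)]
# Only two trials exist at the highest budget (3.0), so budget 1.0 is used.
print(trials_for_training(rh, min_trials=3))
```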
intensifier
property

intensifier: AbstractIntensifier

The intensifier which decides which trials (combinations of configuration, seed, budget, and instance) to run next. Keeps track of useful information like status.
meta
property

Generates a hash based on all components of the facade. This is used for the run name or to determine whether a run should be continued or not.

optimizer
property

optimizer: SMBO

The optimizer which is responsible for the BO loop. Keeps track of useful information like status.

runhistory
property

runhistory: RunHistory

The runhistory which is filled with all trials during the optimization process.
get_acquisition_function
staticmethod

Returns an Expected Improvement acquisition function.

Parameters

scenario : Scenario
xi : float, defaults to 0.0
    Controls the balance between exploration and exploitation of the acquisition function.
Source code in smac/facade/hyperparameter_optimization_facade.py
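To illustrate what xi does, here is the standard Expected Improvement formula for minimization in plain Python. This is a sketch of the textbook formula only; SMAC's own EI implementation may differ in details such as log-scaling.

```python
import math

def expected_improvement(mu, sigma, best, xi=0.0):
    """Textbook EI for minimization: a larger xi demands a bigger predicted
    improvement before a point looks attractive (more exploration)."""
    if sigma <= 0.0:
        return max(best - mu - xi, 0.0)
    z = (best - mu - xi) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return (best - mu - xi) * cdf + sigma * pdf

# With xi = 0, a point predicted slightly better than the incumbent scores
# well; raising xi lowers its score.
print(expected_improvement(0.4, 0.1, best=0.5, xi=0.0))
print(expected_improvement(0.4, 0.1, best=0.5, xi=0.2))
```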
get_acquisition_maximizer
staticmethod

get_acquisition_maximizer(
    scenario: Scenario,
    *,
    challengers: int = 10000,
    local_search_iterations: int = 10
) -> LocalAndSortedRandomSearch

Returns local and sorted random search as acquisition maximizer.

Warning

If you experience RAM issues, try to reduce the number of challengers.

Parameters

challengers : int, defaults to 10000
    Number of challengers.
local_search_iterations : int, defaults to 10
    Number of local search iterations.
Source code in smac/facade/hyperparameter_optimization_facade.py
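The idea behind local and sorted random search can be sketched in one dimension: sample many random candidates, sort them by acquisition value, then run local search from the best one. The helper names (`maximize`, `sample`, `neighbors`) are hypothetical; this is not SMAC's implementation.

```python
import random

def maximize(acq, sample, neighbors, challengers=1000, local_search_iterations=10, seed=0):
    """Toy local-and-sorted random search: draw many random candidates,
    rank them by acquisition value, then hill-climb from the best one."""
    rng = random.Random(seed)
    candidates = sorted((sample(rng) for _ in range(challengers)), key=acq, reverse=True)
    best = candidates[0]
    for _ in range(local_search_iterations):
        improved = max(neighbors(best, rng), key=acq)
        if acq(improved) <= acq(best):
            break  # local optimum reached
        best = improved
    return best

# 1-D example: the acquisition function is maximal at x = 0.3.
acq = lambda x: -(x - 0.3) ** 2
sample = lambda rng: rng.random()  # uniform in [0, 1]
neighbors = lambda x, rng: [min(1.0, max(0.0, x + rng.gauss(0, 0.05))) for _ in range(8)]
x_best = maximize(acq, sample, neighbors)
print(x_best)  # close to 0.3
```

Reducing `challengers` lowers memory usage, which is exactly the remedy the warning above suggests.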
get_config_selector
staticmethod

get_config_selector(
    scenario: Scenario,
    *,
    retrain_after: int = 8,
    retries: int = 16
) -> ConfigSelector

Returns the default configuration selector.
Source code in smac/facade/abstract_facade.py
get_initial_design
staticmethod

get_initial_design(
    scenario: Scenario,
    *,
    n_configs: int | None = None,
    n_configs_per_hyperparamter: int = 10,
    max_ratio: float = 0.25,
    additional_configs: list[Configuration] = None
) -> RandomInitialDesign

Returns a random initial design.

Parameters

scenario : Scenario
n_configs : int | None, defaults to None
    Number of initial configurations (disables the argument n_configs_per_hyperparameter).
n_configs_per_hyperparameter : int, defaults to 10
    Number of initial configurations per hyperparameter. For example, if the configuration space covers five
    hyperparameters and n_configs_per_hyperparameter is set to 10, then 50 initial configurations will be
    sampled.
max_ratio : float, defaults to 0.25
    Use at most scenario.n_trials * max_ratio configurations in the initial design.
    Additional configurations are not affected by this parameter.
additional_configs : list[Configuration], defaults to []
    Adds additional configurations to the initial design.
Source code in smac/facade/multi_fidelity_facade.py
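The interplay of n_configs, n_configs_per_hyperparameter, and max_ratio can be sketched as follows. The helper `n_initial_configs` is hypothetical, and SMAC's exact rounding behavior may differ.

```python
def n_initial_configs(n_hyperparameters, n_trials, n_configs=None,
                      n_configs_per_hyperparameter=10, max_ratio=0.25):
    """How many initial configurations the design draws, under the rules above."""
    if n_configs is not None:
        return n_configs  # explicit count overrides everything else
    n = n_configs_per_hyperparameter * n_hyperparameters
    return min(n, int(max_ratio * n_trials))  # capped at max_ratio * n_trials

print(n_initial_configs(n_hyperparameters=5, n_trials=100))  # 25, capped by max_ratio
print(n_initial_configs(n_hyperparameters=5, n_trials=500))  # 50, cap not reached
```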
get_intensifier
staticmethod

get_intensifier(
    scenario: Scenario,
    *,
    eta: int = 3,
    n_seeds: int = 1,
    instance_seed_order: str | None = "shuffle_once",
    max_incumbents: int = 10,
    incumbent_selection: str = "highest_observed_budget"
) -> Hyperband

Returns a Hyperband intensifier instance. Budgets are supported.

Parameters

eta : int, defaults to 3
    Input that controls the proportion of configurations discarded in each round of Successive Halving.
n_seeds : int, defaults to 1
    How many seeds to use for each instance.
instance_seed_order : str | None, defaults to "shuffle_once"
    How to order the instance-seed pairs. Can be set to:
    * None: No shuffling at all; use the instance-seed order provided by the user.
    * "shuffle_once": Shuffle the instance-seed keys once and use the same order across all runs.
    * "shuffle": Shuffle the instance-seed keys for each bracket individually.
incumbent_selection : str, defaults to "highest_observed_budget"
    How to select the incumbent when using budgets. Can be set to:
    * "any_budget": Incumbent is the best on any budget, i.e., the best performance regardless of budget.
    * "highest_observed_budget": Incumbent is the best in the highest budget run so far; refer to
      runhistory.get_trials for more details. Crucially, for a given config-instance-seed, only the
      highest (so far executed) budget is used for comparison against the incumbent. Note that if the
      highest observed budget is smaller than the highest budget of the incumbent, the configuration
      will be queued to be intensified again.
    * "highest_budget": Incumbent is selected based only on the absolute highest budget available.
max_incumbents : int, defaults to 10
    How many incumbents to keep track of in the case of multi-objective.
Source code in smac/facade/multi_fidelity_facade.py
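To see how eta shapes a bracket, the budget ladder of a single Successive Halving bracket can be computed in a few lines. This is a simplification for illustration: Hyperband runs several such brackets with different starting rungs, and the function name `successive_halving_schedule` is an assumption for the example.

```python
import math

def successive_halving_schedule(min_budget, max_budget, eta=3):
    """One full Successive Halving bracket: each rung multiplies the budget
    by eta and keeps roughly a 1/eta fraction of the configurations."""
    s = round(math.log(max_budget / min_budget, eta))  # number of halving steps
    n = eta ** s                                       # configs started at the lowest rung
    return [(max(n // eta ** i, 1), min_budget * eta ** i) for i in range(s + 1)]

# (configurations, budget) per rung:
print(successive_halving_schedule(1, 9, eta=3))  # [(9, 1), (3, 3), (1, 9)]
print(successive_halving_schedule(1, 8, eta=2))  # [(8, 1), (4, 2), (2, 4), (1, 8)]
```

A larger eta discards more aggressively per rung: fewer rungs, but a bigger jump in budget between them.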
get_model
staticmethod

get_model(
    scenario: Scenario,
    *,
    n_trees: int = 10,
    ratio_features: float = 1.0,
    min_samples_split: int = 2,
    min_samples_leaf: int = 1,
    max_depth: int = 2**20,
    bootstrapping: bool = True
) -> RandomForest

Returns a random forest as surrogate model.

Parameters

n_trees : int, defaults to 10
    The number of trees in the random forest.
ratio_features : float, defaults to 1.0
    The ratio of features that are considered for splitting.
min_samples_split : int, defaults to 2
    The minimum number of data points to perform a split.
min_samples_leaf : int, defaults to 1
    The minimum number of data points in a leaf.
max_depth : int, defaults to 2**20
    The maximum depth of a single tree.
bootstrapping : bool, defaults to True
    Enables bootstrapping.
Source code in smac/facade/hyperparameter_optimization_facade.py
get_multi_objective_algorithm
staticmethod

get_multi_objective_algorithm(
    scenario: Scenario,
    *,
    objective_weights: list[float] | None = None
) -> MeanAggregationStrategy

Returns the mean aggregation strategy for the multi-objective algorithm.

Parameters

scenario : Scenario
objective_weights : list[float] | None, defaults to None
    Weights for averaging the objectives in a weighted manner. Must be of the same length as the number of objectives.
Source code in smac/facade/hyperparameter_optimization_facade.py
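The core of a mean aggregation strategy is a one-line scalarization. The standalone helper `aggregate` below is a sketch for illustration, not SMAC's class:

```python
def aggregate(costs, weights=None):
    """Scalarize a multi-objective cost vector by a (weighted) mean,
    mirroring what a mean aggregation strategy does."""
    if weights is None:
        weights = [1.0] * len(costs)  # unweighted mean by default
    assert len(weights) == len(costs), "one weight per objective"
    return sum(w * c for w, c in zip(weights, costs)) / sum(weights)

print(aggregate([0.2, 0.8]))              # 0.5
print(aggregate([0.2, 0.8], [3.0, 1.0]))  # ~0.35, first objective weighted higher
```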
get_random_design
staticmethod

get_random_design(
    scenario: Scenario, *, probability: float = 0.2
) -> ProbabilityRandomDesign

Returns ProbabilityRandomDesign for interleaving configurations.

Parameters

probability : float, defaults to 0.2
    Probability that a configuration will be drawn at random.
Source code in smac/facade/hyperparameter_optimization_facade.py
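The design boils down to a biased coin flip per configuration: with the given probability, the next configuration is drawn at random instead of coming from the acquisition maximizer. The helper `next_is_random` is hypothetical, used only to illustrate the decision:

```python
import random

def next_is_random(probability=0.2, rng=None):
    """Decide whether the next configuration should be drawn at random
    instead of being proposed by the acquisition maximizer."""
    rng = rng or random
    return rng.random() < probability

rng = random.Random(0)
draws = [next_is_random(0.2, rng) for _ in range(10_000)]
print(sum(draws) / len(draws))  # roughly 0.2
```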
get_runhistory_encoder
staticmethod

get_runhistory_encoder(
    scenario: Scenario,
) -> RunHistoryLogScaledEncoder

Returns a log-scaled runhistory encoder. That means that costs are log-scaled before training the surrogate model.
Source code in smac/facade/hyperparameter_optimization_facade.py
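One plausible version of log scaling, normalizing costs and then taking the logarithm, is sketched below. The exact transformation SMAC uses may differ; the point is that log scaling stretches out small differences among good (low-cost) configurations, which helps the surrogate model rank them.

```python
import math

def log_scale(costs, eps=1e-8):
    """Min-max normalize costs to [0, 1], then log-transform. The eps offset
    keeps the logarithm finite for the best (zero after normalization) cost."""
    lo, hi = min(costs), max(costs)
    span = (hi - lo) or 1.0  # avoid division by zero for constant costs
    return [math.log(eps + (c - lo) / span) for c in costs]

scaled = log_scale([1.0, 2.0, 10.0])
print(scaled)  # order preserved; the best cost maps to a very negative value
```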
optimize

Optimizes the configuration of the algorithm.

Parameters

data_to_scatter : dict[str, Any] | None
    Note that this argument is only valid with a Dask runner. When a user scatters data from the local
    process to the distributed network, the data is distributed in a round-robin fashion, grouped by
    number of cores. Roughly speaking, the data can then be kept in memory, so it does not have to be
    (de-)serialized every time the target function is executed. This is useful, for example, when the
    target function shares a big dataset across all of its evaluations.

Returns

incumbent : Configuration
    Best found configuration.
Source code in smac/facade/abstract_facade.py
tell

tell(
    info: TrialInfo, value: TrialValue, save: bool = True
) -> None

Adds the result of a trial to the runhistory and updates the intensifier.

Parameters

info : TrialInfo
    Describes the trial from which to process the results.
value : TrialValue
    Contains relevant information regarding the execution of a trial.
save : bool, defaults to True
    Whether the runhistory should be saved.
Source code in smac/facade/abstract_facade.py
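tell is half of an ask-and-tell protocol: the caller runs the trial itself and reports the result back. The toy classes below (`ToyOptimizer` and the minimal `TrialInfo`/`TrialValue` stand-ins) are simplifications to illustrate the control flow, not SMAC's actual classes:

```python
import random
from dataclasses import dataclass

@dataclass
class TrialInfo:
    config: float
    seed: int = 0

@dataclass
class TrialValue:
    cost: float

class ToyOptimizer:
    """Stub of the ask/tell loop: ask() proposes a trial, the caller
    evaluates it, tell() records the result."""
    def __init__(self, seed=0):
        self._rng = random.Random(seed)
        self._history = []

    def ask(self):
        return TrialInfo(config=self._rng.random())

    def tell(self, info, value, save=True):
        self._history.append((info, value))

    @property
    def incumbent(self):
        return min(self._history, key=lambda t: t[1].cost)[0]

opt = ToyOptimizer()
for _ in range(20):
    info = opt.ask()
    cost = (info.config - 0.5) ** 2  # target function evaluated by the caller
    opt.tell(info, TrialValue(cost=cost))
print(opt.incumbent.config)  # best configuration seen so far
```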
validate

Validates a configuration on seeds different from the ones used in the optimization process and on the highest budget (if budget type is real-valued).

Parameters

config : Configuration
    Configuration to validate.
instances : list[str] | None, defaults to None
    Which instances to validate. If None, all instances specified in the scenario are used. In case the
    budget type is real-valued, this argument is ignored.
seed : int | None, defaults to None
    If None, the seed from the scenario is used.

Returns

cost : float | list[float]
    The averaged cost of the configuration. In case of multi-objective, the cost of each objective is averaged.
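The return type above depends on the number of objectives, and the averaging can be sketched as follows. The helper `validate_cost` is hypothetical, showing only the shape of the result:

```python
def validate_cost(costs_per_run):
    """Average trial costs as validate describes: scalar costs average to a
    single float, multi-objective costs average per objective into a list."""
    first = costs_per_run[0]
    if isinstance(first, (int, float)):
        return sum(costs_per_run) / len(costs_per_run)
    n_obj = len(first)
    return [sum(run[i] for run in costs_per_run) / len(costs_per_run)
            for i in range(n_obj)]

print(validate_cost([0.2, 0.4]))                # scalar costs -> one float (~0.3)
print(validate_cost([[0.2, 1.0], [0.4, 2.0]]))  # per-objective means (~[0.3, 1.5])
```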