Async priorband
neps.optimizers.multi_fidelity_prior.async_priorband
PriorBandAsha
PriorBandAsha(
pipeline_space: SearchSpace,
budget: int,
eta: int = 3,
early_stopping_rate: int = 0,
initial_design_type: Literal[
"max_budget", "unique_configs"
] = "max_budget",
sampling_policy: Any = EnsemblePolicy,
promotion_policy: Any = AsyncPromotionPolicy,
loss_value_on_error: None | float = None,
cost_value_on_error: None | float = None,
ignore_errors: bool = False,
logger=None,
prior_confidence: Literal[
"low", "medium", "high"
] = "medium",
random_interleave_prob: float = 0.0,
sample_default_first: bool = True,
sample_default_at_target: bool = True,
prior_weight_type: str = "geometric",
inc_sample_type: str = "mutation",
inc_mutation_rate: float = 0.5,
inc_mutation_std: float = 0.25,
inc_style: str = "dynamic",
model_based: bool = False,
modelling_type: str = "joint",
initial_design_size: int = None,
model_policy: Any = ModelPolicy,
surrogate_model: str | Any = "gp",
domain_se_kernel: str = None,
hp_kernels: list = None,
surrogate_model_args: dict = None,
acquisition: str | BaseAcquisition = "EI",
log_prior_weighted: bool = False,
acquisition_sampler: (
str | AcquisitionSampler
) = "random",
)
Bases: MFBOBase, PriorBandBase, AsynchronousSuccessiveHalvingWithPriors
Implements a PriorBand on top of ASHA.
Source code in neps/optimizers/multi_fidelity_prior/async_priorband.py
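In practice this optimizer is usually selected through neps.run rather than constructed directly. A minimal sketch is below, assuming the searcher key "priorband_asha" and the FloatParameter/IntegerParameter space API of NePS 0.x; verify both against your installed version:

```python
import neps

def run_pipeline(learning_rate: float, epochs: int) -> dict:
    # Placeholder objective; substitute real training and validation here.
    loss = (learning_rate - 1e-3) ** 2
    return {"loss": loss, "cost": epochs}

pipeline_space = {
    # `default` encodes the prior location; `default_confidence` should match
    # the `prior_confidence` passed to the optimizer.
    "learning_rate": neps.FloatParameter(
        lower=1e-5, upper=1e-1, log=True, default=1e-3, default_confidence="medium"
    ),
    # The fidelity axis along which ASHA promotes configurations.
    "epochs": neps.IntegerParameter(lower=1, upper=27, is_fidelity=True),
}

neps.run(
    run_pipeline=run_pipeline,
    pipeline_space=pipeline_space,
    root_directory="results/priorband_asha",
    max_evaluations_total=50,
    searcher="priorband_asha",  # assumed key; check the searchers registered in your NePS version
)
```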
calc_sampling_args
calc_sampling_args(rung) -> dict
Sets the weights for each of the sampling techniques.
Source code in neps/optimizers/multi_fidelity_prior/priorband.py
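As an illustration of the "geometric" schedule named by prior_weight_type, the sketch below grows the prior weight as eta**rung relative to a fixed random weight and, once incumbent sampling activates, splits the prior mass between prior and incumbent. This mirrors the documented behaviour but is not the library's exact code:

```python
def sampling_weights(rung: int, eta: int = 3, inc_active: bool = False,
                     inc_ratio: float = 0.5) -> dict[str, float]:
    # Illustrative sketch only, not the library implementation.
    w_random = 1.0
    w_prior = float(eta ** rung)  # prior weight grows geometrically with the rung
    total = w_random + w_prior
    w_prior, w_random = w_prior / total, w_random / total
    w_inc = 0.0
    if inc_active:
        # Split the prior mass between the prior and the incumbent.
        w_prior, w_inc = w_prior * (1 - inc_ratio), w_prior * inc_ratio
    return {"prior": w_prior, "inc": w_inc, "random": w_random}

print(sampling_weights(rung=0))                   # {'prior': 0.5, 'inc': 0.0, 'random': 0.5}
print(sampling_weights(rung=2, inc_active=True))  # prior/incumbent dominate at higher rungs
```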
find_1nn_distance_from_incumbent
Finds the distance from the incumbent to its nearest neighbour.
Source code in neps/optimizers/multi_fidelity_prior/priorband.py
find_all_distances_from_incumbent
Finds the distances from the incumbent to all other evaluated configurations.
Source code in neps/optimizers/multi_fidelity_prior/priorband.py
find_incumbent
find_incumbent(rung: int = None) -> SearchSpace
Find the best performing configuration seen so far.
Source code in neps/optimizers/multi_fidelity_prior/priorband.py
get_config_and_ids
get_config_and_ids() -> tuple[RawConfig, str, str | None]
Decides which configuration to query next: either promotes a partially evaluated configuration to the next rung or samples a new one.
RETURNS | DESCRIPTION
---|---
tuple[RawConfig, str, str | None] | The serialized configuration, its ID, and the ID of the previous configuration it is based on (None if freshly sampled).
Source code in neps/optimizers/multi_fidelity_prior/async_priorband.py
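For orientation, a sketch of how a driver might consume the returned tuple (`optimizer` and `evaluate` are hypothetical placeholders, not part of this API):

```python
# Hypothetical ask step; `optimizer` is an already-fitted PriorBandAsha.
config, config_id, previous_config_id = optimizer.get_config_and_ids()

if previous_config_id is not None:
    # The config was promoted from a lower rung rather than freshly sampled.
    print(f"promoting {previous_config_id} -> {config_id}")

result = evaluate(config)  # `evaluate` stands in for the user's objective
```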
get_cost
Calls result.utils.get_cost() and passes the error handling through. Please use self.get_cost() instead of get_cost() in all optimizer classes.
Source code in neps/optimizers/base_optimizer.py
get_learning_curve
The learning-curve analogue of get_loss(): extracts the learning curve from a result and passes the error handling through. Please use self.get_learning_curve() instead of get_learning_curve() in all optimizer classes.
Source code in neps/optimizers/base_optimizer.py
get_loss
Calls result.utils.get_loss() and passes the error handling through. Please use self.get_loss() instead of get_loss() in all optimizer classes.
Source code in neps/optimizers/base_optimizer.py
is_activate_inc
is_activate_inc() -> bool
Checks the optimization state to decide whether incumbent sampling is allowed.
Incumbent sampling activates once the total resources consumed by finished evaluations reach the budget of one full SH bracket.
Source code in neps/optimizers/multi_fidelity_prior/priorband.py
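To make this activation condition concrete, here is an illustrative calculation of one full SH bracket's budget, assuming eta=3 and a fidelity range of 1 to 27 (both values are made up for the example):

```python
# Illustrative only: one full SH bracket with eta=3 and fidelities 1, 3, 9, 27.
eta = 3
fidelities = [1, 3, 9, 27]                     # rung levels, min to max budget
n_rungs = len(fidelities)
n_configs = [eta ** (n_rungs - 1 - r) for r in range(n_rungs)]  # 27, 9, 3, 1
bracket_budget = sum(n * f for n, f in zip(n_configs, fidelities))
print(bracket_budget)  # 108 -> incumbent sampling activates once finished
                       # evaluations have consumed this much total resource
```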
is_init_phase
is_init_phase() -> bool
Returns True while in the warm-start phase and False once model-based search takes over.
Source code in neps/optimizers/multi_fidelity/mf_bo.py
is_promotable
is_promotable() -> int | None
Returns the index of the rung that can be promoted, or None if no promotion is possible.
Source code in neps/optimizers/multi_fidelity/successive_halving.py
load_results
load_results(
previous_results: dict[str, ConfigResult],
pending_evaluations: dict[str, SearchSpace],
) -> None
This is essentially the fit step: it updates the optimizer's state from the finished and pending evaluations.
PARAMETER | TYPE | DESCRIPTION
---|---|---
previous_results | dict[str, ConfigResult] | Results of all finished evaluations, keyed by config ID.
pending_evaluations | dict[str, SearchSpace] | Configurations currently being evaluated, keyed by config ID.
Source code in neps/optimizers/multi_fidelity/successive_halving.py
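A sketch of the fit-then-ask cycle this method supports (`optimizer`, `finished`, and `pending` are placeholders; the NePS runtime normally performs this step for you):

```python
# Hypothetical: refresh the optimizer's state before asking for a new config.
optimizer.load_results(
    previous_results=finished,    # dict[str, ConfigResult] of completed runs
    pending_evaluations=pending,  # dict[str, SearchSpace] of in-flight configs
)
config, config_id, previous_config_id = optimizer.get_config_and_ids()
```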
prior_to_incumbent_ratio
Calculates the normalized weight distribution between the prior and the incumbent; the two weights sum to 1.
Source code in neps/optimizers/multi_fidelity_prior/priorband.py
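A minimal sketch of the normalization contract (the raw scores here are made up; the library's "dynamic" style derives them from observed results):

```python
def normalized_ratio(prior_score: float, inc_score: float) -> tuple[float, float]:
    # Normalize two raw scores so that the returned weights sum to 1.
    total = prior_score + inc_score
    return prior_score / total, inc_score / total

w_prior, w_inc = normalized_ratio(1.0, 3.0)   # raw scores are made up
assert abs((w_prior + w_inc) - 1.0) < 1e-12   # the documented invariant
print(w_prior, w_inc)                         # 0.25 0.75
```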
sample_new_config
sample_new_config(rung: int = None, **kwargs)
Samples a configuration from the configured policies, falling back to random sampling.
Source code in neps/optimizers/multi_fidelity/mf_bo.py
PriorBandAshaHB
PriorBandAshaHB(
pipeline_space: SearchSpace,
budget: int,
eta: int = 3,
initial_design_type: Literal[
"max_budget", "unique_configs"
] = "max_budget",
sampling_policy: Any = EnsemblePolicy,
promotion_policy: Any = AsyncPromotionPolicy,
loss_value_on_error: None | float = None,
cost_value_on_error: None | float = None,
ignore_errors: bool = False,
logger=None,
prior_confidence: Literal[
"low", "medium", "high"
] = "medium",
random_interleave_prob: float = 0.0,
sample_default_first: bool = True,
sample_default_at_target: bool = True,
prior_weight_type: str = "geometric",
inc_sample_type: str = "mutation",
inc_mutation_rate: float = 0.5,
inc_mutation_std: float = 0.25,
inc_style: str = "dynamic",
model_based: bool = False,
modelling_type: str = "joint",
initial_design_size: int = None,
model_policy: Any = ModelPolicy,
surrogate_model: str | Any = "gp",
domain_se_kernel: str = None,
hp_kernels: list = None,
surrogate_model_args: dict = None,
acquisition: str | BaseAcquisition = "EI",
log_prior_weighted: bool = False,
acquisition_sampler: (
str | AcquisitionSampler
) = "random",
)
Bases: PriorBandAsha
Implements a PriorBand on top of ASHA-HB (Mobster).
Source code in neps/optimizers/multi_fidelity_prior/async_priorband.py
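For completeness, a hedged sketch of instantiating the two variants directly (normally neps.run builds the optimizer for you; the SearchSpace import path below reflects older NePS layouts and is an assumption):

```python
import neps
from neps.search_spaces.search_space import SearchSpace  # assumed import path
from neps.optimizers.multi_fidelity_prior.async_priorband import (
    PriorBandAsha,
    PriorBandAshaHB,
)

space = SearchSpace(
    learning_rate=neps.FloatParameter(
        lower=1e-5, upper=1e-1, log=True, default=1e-3, default_confidence="medium"
    ),
    epochs=neps.IntegerParameter(lower=1, upper=27, is_fidelity=True),
)

# The plain ASHA variant exposes early_stopping_rate to pick the starting rung.
asha_variant = PriorBandAsha(pipeline_space=space, budget=100, eta=3,
                             early_stopping_rate=0)

# The HB variant manages several ASHA brackets internally, so it takes no
# early_stopping_rate; the remaining arguments mirror PriorBandAsha.
hb_variant = PriorBandAshaHB(pipeline_space=space, budget=100, eta=3)
```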
calc_sampling_args
calc_sampling_args(rung) -> dict
Sets the weights for each of the sampling techniques.
Source code in neps/optimizers/multi_fidelity_prior/priorband.py
find_1nn_distance_from_incumbent
Finds the distance from the incumbent to its nearest neighbour.
Source code in neps/optimizers/multi_fidelity_prior/priorband.py
find_all_distances_from_incumbent
Finds the distances from the incumbent to all other evaluated configurations.
Source code in neps/optimizers/multi_fidelity_prior/priorband.py
find_incumbent
find_incumbent(rung: int = None) -> SearchSpace
Find the best performing configuration seen so far.
Source code in neps/optimizers/multi_fidelity_prior/priorband.py
get_config_and_ids
get_config_and_ids() -> tuple[RawConfig, str, str | None]
Decides which configuration to query next: either promotes a partially evaluated configuration to the next rung or samples a new one.
RETURNS | DESCRIPTION
---|---
tuple[RawConfig, str, str | None] | The serialized configuration, its ID, and the ID of the previous configuration it is based on (None if freshly sampled).
Source code in neps/optimizers/multi_fidelity_prior/async_priorband.py
get_cost
Calls result.utils.get_cost() and passes the error handling through. Please use self.get_cost() instead of get_cost() in all optimizer classes.
Source code in neps/optimizers/base_optimizer.py
get_learning_curve
The learning-curve analogue of get_loss(): extracts the learning curve from a result and passes the error handling through. Please use self.get_learning_curve() instead of get_learning_curve() in all optimizer classes.
Source code in neps/optimizers/base_optimizer.py
get_loss
Calls result.utils.get_loss() and passes the error handling through. Please use self.get_loss() instead of get_loss() in all optimizer classes.
Source code in neps/optimizers/base_optimizer.py
is_activate_inc
is_activate_inc() -> bool
Checks the optimization state to decide whether incumbent sampling is allowed.
Incumbent sampling activates once the total resources consumed by finished evaluations reach the budget of one full SH bracket.
Source code in neps/optimizers/multi_fidelity_prior/priorband.py
is_init_phase
is_init_phase() -> bool
Returns True while in the warm-start phase and False once model-based search takes over.
Source code in neps/optimizers/multi_fidelity/mf_bo.py
is_promotable
is_promotable() -> int | None
Returns the index of the rung that can be promoted, or None if no promotion is possible.
Source code in neps/optimizers/multi_fidelity/successive_halving.py
prior_to_incumbent_ratio
Calculates the normalized weight distribution between the prior and the incumbent; the two weights sum to 1.
Source code in neps/optimizers/multi_fidelity_prior/priorband.py
sample_new_config
sample_new_config(rung: int = None, **kwargs)
Samples a configuration from the configured policies, falling back to random sampling.