# Hyperband

`neps.optimizers.multi_fidelity.hyperband`
## AsynchronousHyperband
AsynchronousHyperband(
pipeline_space: SearchSpace,
budget: int,
eta: int = 3,
initial_design_type: Literal[
"max_budget", "unique_configs"
] = "max_budget",
use_priors: bool = False,
sampling_policy: Any = RandomUniformPolicy,
promotion_policy: Any = AsyncPromotionPolicy,
loss_value_on_error: None | float = None,
cost_value_on_error: None | float = None,
ignore_errors: bool = False,
logger=None,
prior_confidence: Literal[
"low", "medium", "high"
] | None = None,
random_interleave_prob: float = 0.0,
sample_default_first: bool = False,
sample_default_at_target: bool = False,
)
Bases: HyperbandBase
Implements ASHA in the form of Hyperband, using the promotion variant of ASHA as in Mobster.
Source code in neps/optimizers/multi_fidelity/hyperband.py
### clear_old_brackets
Enforces reset at each new bracket.
Source code in neps/optimizers/multi_fidelity/hyperband.py
### get_config_and_ids
get_config_and_ids() -> tuple[SearchSpace, str, str | None]
This is the method that decides which point to query next.

RETURNS | DESCRIPTION
---|---
`tuple[SearchSpace, str, str \| None]` | The configuration to evaluate, its ID, and the ID of the configuration it was promoted from (`None` if freshly sampled).
Source code in neps/optimizers/multi_fidelity/hyperband.py
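The promote-or-sample decision behind `get_config_and_ids` can be sketched as follows. This is an illustrative simplification, not the actual neps implementation: the rung dictionaries and the `"<id>_<rung>"` ID format are hypothetical stand-ins.

```python
import random

def get_config_and_ids(rungs: dict[int, dict[str, float]],
                       eta: int = 3, max_rung: int = 2):
    """Promote the best eligible config if any rung allows it, else sample.

    rungs: rung -> {config_id: loss} for finished evaluations (hypothetical).
    Returns (action, config_id, previous_config_id).
    """
    for rung in range(max_rung):  # the top rung never promotes
        done = sorted(rungs.get(rung, {}).items(), key=lambda kv: kv[1])
        top_k = len(done) // eta            # the top 1/eta may move up a rung
        already_up = set(rungs.get(rung + 1, {}))
        for cid, _loss in done[:top_k]:
            if cid not in already_up:
                # continue this config at the next rung's budget
                return ("promoted", f"{cid}_{rung + 1}", f"{cid}_{rung}")
    # nothing promotable: sample a fresh config at the lowest rung
    return ("sampled", f"config_{random.randrange(10**6)}", None)
```

With three finished configs at rung 0 and `eta=3`, the single best config is promoted; with fewer than `eta` finished configs, a new one is sampled instead.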
### get_cost
Calls result.utils.get_cost() and passes the error handling through. Please use self.get_cost() instead of get_cost() in all optimizer classes.
Source code in neps/optimizers/base_optimizer.py
### get_learning_curve

Retrieves the learning curve from the result and passes the error handling through. Please use self.get_learning_curve() instead of get_learning_curve() in all optimizer classes.
Source code in neps/optimizers/base_optimizer.py
### get_loss
Calls result.utils.get_loss() and passes the error handling through. Please use self.get_loss() instead of get_loss() in all optimizer classes.
Source code in neps/optimizers/base_optimizer.py
### is_promotable

is_promotable() -> int | None

Returns the rung number if a rung can be promoted, else None.
Source code in neps/optimizers/multi_fidelity/successive_halving.py
## AsynchronousHyperbandWithPriors
AsynchronousHyperbandWithPriors(
pipeline_space: SearchSpace,
budget: int,
eta: int = 3,
initial_design_type: Literal[
"max_budget", "unique_configs"
] = "max_budget",
sampling_policy: Any = FixedPriorPolicy,
promotion_policy: Any = AsyncPromotionPolicy,
loss_value_on_error: None | float = None,
cost_value_on_error: None | float = None,
ignore_errors: bool = False,
logger=None,
prior_confidence: Literal[
"low", "medium", "high"
] = "medium",
random_interleave_prob: float = 0.0,
sample_default_first: bool = False,
sample_default_at_target: bool = False,
)
Bases: AsynchronousHyperband
Implements ASHA in the form of Hyperband, with prior-based sampling.
Source code in neps/optimizers/multi_fidelity/hyperband.py
### clear_old_brackets
Enforces reset at each new bracket.
Source code in neps/optimizers/multi_fidelity/hyperband.py
### get_config_and_ids
get_config_and_ids() -> tuple[SearchSpace, str, str | None]
This is the method that decides which point to query next.

RETURNS | DESCRIPTION
---|---
`tuple[SearchSpace, str, str \| None]` | The configuration to evaluate, its ID, and the ID of the configuration it was promoted from (`None` if freshly sampled).
Source code in neps/optimizers/multi_fidelity/hyperband.py
### get_cost
Calls result.utils.get_cost() and passes the error handling through. Please use self.get_cost() instead of get_cost() in all optimizer classes.
Source code in neps/optimizers/base_optimizer.py
### get_learning_curve

Retrieves the learning curve from the result and passes the error handling through. Please use self.get_learning_curve() instead of get_learning_curve() in all optimizer classes.
Source code in neps/optimizers/base_optimizer.py
### get_loss
Calls result.utils.get_loss() and passes the error handling through. Please use self.get_loss() instead of get_loss() in all optimizer classes.
Source code in neps/optimizers/base_optimizer.py
### is_promotable

is_promotable() -> int | None

Returns the rung number if a rung can be promoted, else None.
Source code in neps/optimizers/multi_fidelity/successive_halving.py
## Hyperband
Hyperband(
pipeline_space: SearchSpace,
budget: int,
eta: int = 3,
initial_design_type: Literal[
"max_budget", "unique_configs"
] = "max_budget",
use_priors: bool = False,
sampling_policy: Any = RandomUniformPolicy,
promotion_policy: Any = SyncPromotionPolicy,
loss_value_on_error: None | float = None,
cost_value_on_error: None | float = None,
ignore_errors: bool = False,
logger=None,
prior_confidence: Literal[
"low", "medium", "high"
] | None = None,
random_interleave_prob: float = 0.0,
sample_default_first: bool = False,
sample_default_at_target: bool = False,
)
Bases: HyperbandBase
Source code in neps/optimizers/multi_fidelity/hyperband.py
### clear_old_brackets

Enforces a reset at each new bracket.

The _get_rungs_state() function creates the rung_promotions dict mapping, which is used by the promotion policies to determine the next step: promotion or sampling. To simulate the reset of rungs as in vanilla HB, the algorithm is viewed as a series of SH brackets, where the set of SH brackets comprising HB is repeated. This is done by iterating over the closed loop of possible SH brackets (self.sh_brackets). The oldest active, incomplete SH bracket is searched for to choose the next evaluation. If all brackets are either finished or waiting on results, a new SH bracket is started, corresponding to the SH bracket under HB registered by current_SH_bracket.
Source code in neps/optimizers/multi_fidelity/hyperband.py
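The bracket bookkeeping described above can be sketched as follows. The bracket representation (a `"complete"` flag and a `"pending"` count) is a hypothetical stand-in for the SH bracket objects held in `self.sh_brackets`, not the actual neps code.

```python
def next_sh_bracket(sh_brackets: list[dict], current_sh_bracket: int) -> int:
    """Pick the SH bracket that should receive the next evaluation.

    Scans the closed loop of brackets starting from the current one for the
    oldest active, incomplete bracket; if every bracket is finished or
    waiting on pending results, advance to the next bracket in the HB cycle.
    """
    n = len(sh_brackets)
    for offset in range(n):
        idx = (current_sh_bracket + offset) % n
        bracket = sh_brackets[idx]
        if not bracket["complete"] and bracket["pending"] == 0:
            return idx  # oldest bracket that can take a new evaluation
    # all brackets complete or waiting: start the next bracket in the cycle
    return (current_sh_bracket + 1) % n
```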
### get_config_and_ids
get_config_and_ids() -> tuple[SearchSpace, str, str | None]
This is the method that decides which point to query next.

RETURNS | DESCRIPTION
---|---
`tuple[SearchSpace, str, str \| None]` | The configuration to evaluate, its ID, and the ID of the configuration it was promoted from (`None` if freshly sampled).
Source code in neps/optimizers/multi_fidelity/hyperband.py
### get_cost
Calls result.utils.get_cost() and passes the error handling through. Please use self.get_cost() instead of get_cost() in all optimizer classes.
Source code in neps/optimizers/base_optimizer.py
### get_learning_curve

Retrieves the learning curve from the result and passes the error handling through. Please use self.get_learning_curve() instead of get_learning_curve() in all optimizer classes.
Source code in neps/optimizers/base_optimizer.py
### get_loss
Calls result.utils.get_loss() and passes the error handling through. Please use self.get_loss() instead of get_loss() in all optimizer classes.
Source code in neps/optimizers/base_optimizer.py
### is_promotable

is_promotable() -> int | None

Returns the rung number if a rung can be promoted, else None.
Source code in neps/optimizers/multi_fidelity/successive_halving.py
## HyperbandBase
HyperbandBase(
pipeline_space: SearchSpace,
budget: int,
eta: int = 3,
initial_design_type: Literal[
"max_budget", "unique_configs"
] = "max_budget",
use_priors: bool = False,
sampling_policy: Any = RandomUniformPolicy,
promotion_policy: Any = SyncPromotionPolicy,
loss_value_on_error: None | float = None,
cost_value_on_error: None | float = None,
ignore_errors: bool = False,
logger=None,
prior_confidence: Literal[
"low", "medium", "high"
] | None = None,
random_interleave_prob: float = 0.0,
sample_default_first: bool = False,
sample_default_at_target: bool = False,
)
Bases: SuccessiveHalvingBase
Implements a Hyperband procedure with a sampling and promotion policy.
Source code in neps/optimizers/multi_fidelity/hyperband.py
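The geometry HyperbandBase shares with its subclasses follows the standard Hyperband rule: each SH bracket s starts more configs at a lower budget, and each successive rung keeps roughly the top 1/eta of configs at eta times the budget. A sketch of that standard schedule (textbook Hyperband arithmetic, not code from this module):

```python
import math

def hyperband_schedule(min_budget: float, max_budget: float, eta: int = 3):
    """Per-bracket rung schedule as (n_configs, budget) pairs."""
    # number of SH brackets: how many times eta fits between the budgets
    s_max, b = 0, min_budget
    while b * eta <= max_budget:
        b *= eta
        s_max += 1
    schedule = []
    for s in range(s_max, -1, -1):
        # bracket s starts n configs at budget max_budget / eta**s
        n = math.ceil((s_max + 1) * eta**s / (s + 1))
        rungs = [(math.floor(n / eta**i), max_budget / eta**(s - i))
                 for i in range(s + 1)]
        schedule.append(rungs)
    return schedule

for rungs in hyperband_schedule(min_budget=1, max_budget=9, eta=3):
    print(rungs)
```

For `min_budget=1, max_budget=9, eta=3` this yields three brackets: an aggressive one (9 configs at budget 1, 3 at 3, 1 at 9), a middle one (5 at 3, 1 at 9), and a plain one (3 configs straight at budget 9).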
### clear_old_brackets
Enforces reset at each new bracket.
Source code in neps/optimizers/multi_fidelity/hyperband.py
### get_config_and_ids
get_config_and_ids() -> tuple[SearchSpace, str, str | None]
This is the method that decides which point to query next.

RETURNS | DESCRIPTION
---|---
`tuple[SearchSpace, str, str \| None]` | The configuration to evaluate, its ID, and the ID of the configuration it was promoted from (`None` if freshly sampled).
### get_cost
Calls result.utils.get_cost() and passes the error handling through. Please use self.get_cost() instead of get_cost() in all optimizer classes.
Source code in neps/optimizers/base_optimizer.py
### get_learning_curve

Retrieves the learning curve from the result and passes the error handling through. Please use self.get_learning_curve() instead of get_learning_curve() in all optimizer classes.
Source code in neps/optimizers/base_optimizer.py
### get_loss
Calls result.utils.get_loss() and passes the error handling through. Please use self.get_loss() instead of get_loss() in all optimizer classes.
Source code in neps/optimizers/base_optimizer.py
### is_promotable

is_promotable() -> int | None

Returns the rung number if a rung can be promoted, else None.
Source code in neps/optimizers/multi_fidelity/successive_halving.py
## HyperbandCustomDefault
HyperbandCustomDefault(
pipeline_space: SearchSpace,
budget: int,
eta: int = 3,
initial_design_type: Literal[
"max_budget", "unique_configs"
] = "max_budget",
sampling_policy: Any = EnsemblePolicy,
promotion_policy: Any = SyncPromotionPolicy,
loss_value_on_error: None | float = None,
cost_value_on_error: None | float = None,
ignore_errors: bool = False,
logger=None,
prior_confidence: Literal[
"low", "medium", "high"
] = "medium",
random_interleave_prob: float = 0.0,
sample_default_first: bool = False,
sample_default_at_target: bool = False,
)
Bases: HyperbandWithPriors
If a prior is specified, samples from the prior 50% of the time and performs random search (as in vanilla HB) the rest of the time.
Source code in neps/optimizers/multi_fidelity/hyperband.py
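The 50/50 split can be sketched as a per-sample coin flip between the prior and uniform random sampling. Everything here (the prior as a plain dict of defaults, bounds as `(low, high)` tuples) is a hypothetical stand-in for the actual EnsemblePolicy and search-space objects:

```python
import random

def sample_config(prior: dict, bounds: dict, rng=random) -> dict:
    """Half the time take the user-provided default (prior), otherwise
    sample uniformly at random, as vanilla HB would."""
    if rng.random() < 0.5:
        return dict(prior)  # exploit the prior / default configuration
    return {name: rng.uniform(low, high) for name, (low, high) in bounds.items()}

random.seed(1)
print(sample_config({"lr": 0.01}, {"lr": (1e-4, 1e-1)}))  # prior branch: {'lr': 0.01}
```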
### clear_old_brackets

Enforces a reset at each new bracket.

The _get_rungs_state() function creates the rung_promotions dict mapping, which is used by the promotion policies to determine the next step: promotion or sampling. To simulate the reset of rungs as in vanilla HB, the algorithm is viewed as a series of SH brackets, where the set of SH brackets comprising HB is repeated. This is done by iterating over the closed loop of possible SH brackets (self.sh_brackets). The oldest active, incomplete SH bracket is searched for to choose the next evaluation. If all brackets are either finished or waiting on results, a new SH bracket is started, corresponding to the SH bracket under HB registered by current_SH_bracket.
Source code in neps/optimizers/multi_fidelity/hyperband.py
### get_config_and_ids
get_config_and_ids() -> tuple[SearchSpace, str, str | None]
This is the method that decides which point to query next.

RETURNS | DESCRIPTION
---|---
`tuple[SearchSpace, str, str \| None]` | The configuration to evaluate, its ID, and the ID of the configuration it was promoted from (`None` if freshly sampled).
Source code in neps/optimizers/multi_fidelity/hyperband.py
### get_cost
Calls result.utils.get_cost() and passes the error handling through. Please use self.get_cost() instead of get_cost() in all optimizer classes.
Source code in neps/optimizers/base_optimizer.py
### get_learning_curve

Retrieves the learning curve from the result and passes the error handling through. Please use self.get_learning_curve() instead of get_learning_curve() in all optimizer classes.
Source code in neps/optimizers/base_optimizer.py
### get_loss
Calls result.utils.get_loss() and passes the error handling through. Please use self.get_loss() instead of get_loss() in all optimizer classes.
Source code in neps/optimizers/base_optimizer.py
### is_promotable

is_promotable() -> int | None

Returns the rung number if a rung can be promoted, else None.
Source code in neps/optimizers/multi_fidelity/successive_halving.py
## HyperbandWithPriors
HyperbandWithPriors(
pipeline_space: SearchSpace,
budget: int,
eta: int = 3,
initial_design_type: Literal[
"max_budget", "unique_configs"
] = "max_budget",
sampling_policy: Any = FixedPriorPolicy,
promotion_policy: Any = SyncPromotionPolicy,
loss_value_on_error: None | float = None,
cost_value_on_error: None | float = None,
ignore_errors: bool = False,
logger=None,
prior_confidence: Literal[
"low", "medium", "high"
] = "medium",
random_interleave_prob: float = 0.0,
sample_default_first: bool = False,
sample_default_at_target: bool = False,
)
Bases: Hyperband
Implements a Hyperband procedure with a sampling and promotion policy.
Source code in neps/optimizers/multi_fidelity/hyperband.py
### clear_old_brackets

Enforces a reset at each new bracket.

The _get_rungs_state() function creates the rung_promotions dict mapping, which is used by the promotion policies to determine the next step: promotion or sampling. To simulate the reset of rungs as in vanilla HB, the algorithm is viewed as a series of SH brackets, where the set of SH brackets comprising HB is repeated. This is done by iterating over the closed loop of possible SH brackets (self.sh_brackets). The oldest active, incomplete SH bracket is searched for to choose the next evaluation. If all brackets are either finished or waiting on results, a new SH bracket is started, corresponding to the SH bracket under HB registered by current_SH_bracket.
Source code in neps/optimizers/multi_fidelity/hyperband.py
### get_config_and_ids
get_config_and_ids() -> tuple[SearchSpace, str, str | None]
This is the method that decides which point to query next.

RETURNS | DESCRIPTION
---|---
`tuple[SearchSpace, str, str \| None]` | The configuration to evaluate, its ID, and the ID of the configuration it was promoted from (`None` if freshly sampled).
Source code in neps/optimizers/multi_fidelity/hyperband.py
### get_cost
Calls result.utils.get_cost() and passes the error handling through. Please use self.get_cost() instead of get_cost() in all optimizer classes.
Source code in neps/optimizers/base_optimizer.py
### get_learning_curve

Retrieves the learning curve from the result and passes the error handling through. Please use self.get_learning_curve() instead of get_learning_curve() in all optimizer classes.
Source code in neps/optimizers/base_optimizer.py
### get_loss
Calls result.utils.get_loss() and passes the error handling through. Please use self.get_loss() instead of get_loss() in all optimizer classes.
Source code in neps/optimizers/base_optimizer.py
### is_promotable

is_promotable() -> int | None

Returns the rung number if a rung can be promoted, else None.