# Successive halving

`neps.optimizers.multi_fidelity.successive_halving`
## AsynchronousSuccessiveHalving
```python
AsynchronousSuccessiveHalving(
    pipeline_space: SearchSpace,
    budget: int,
    eta: int = 3,
    early_stopping_rate: int = 0,
    initial_design_type: Literal["max_budget", "unique_configs"] = "max_budget",
    use_priors: bool = False,
    sampling_policy: Any = RandomUniformPolicy,
    promotion_policy: Any = AsyncPromotionPolicy,
    loss_value_on_error: None | float = None,
    cost_value_on_error: None | float = None,
    ignore_errors: bool = False,
    logger=None,
    prior_confidence: Literal["low", "medium", "high"] | None = None,
    random_interleave_prob: float = 0.0,
    sample_default_first: bool = False,
    sample_default_at_target: bool = False,
)
```
Bases: SuccessiveHalvingBase
Implements ASHA with a sampling and asynchronous promotion policy.
Source code in neps/optimizers/multi_fidelity/successive_halving.py
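To show how the pieces fit together, here is a minimal construction sketch. It is not verified against a specific NePS version, and `make_space()` is a hypothetical stand-in for however you obtain a `SearchSpace` with a fidelity parameter:

```python
from neps.optimizers.multi_fidelity.successive_halving import (
    AsynchronousSuccessiveHalving,
)

# Hypothetical helper: any SearchSpace with a fidelity parameter would do.
space = make_space()

asha = AsynchronousSuccessiveHalving(
    pipeline_space=space,
    budget=100,  # maximum total budget
    eta=3,       # keep roughly the top 1/3 of configs at each rung
)
```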
### get_config_and_ids

```python
get_config_and_ids() -> tuple[SearchSpace, str, str | None]
```

This is the method that decides which point to query next.

| RETURNS | DESCRIPTION |
| --- | --- |
| `tuple[SearchSpace, str, str \| None]` | The sampled or promoted configuration, its config ID, and the ID of the configuration it was promoted from (`None` for a fresh sample). |
Source code in neps/optimizers/multi_fidelity/successive_halving.py
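As a hedged sketch (continuing from the construction example above), the returned triple is typically consumed like this, reading the third element as the ID of the configuration a promotion came from:

```python
config, config_id, previous_config_id = asha.get_config_and_ids()
if previous_config_id is not None:
    # An existing configuration was promoted to a higher rung.
    print(f"promoting {previous_config_id} -> {config_id}")
else:
    # A fresh configuration was sampled.
    print(f"evaluating new sample {config_id}")
```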
### get_cost

Calls `result.utils.get_cost()` and passes the error handling through. Please use `self.get_cost()` instead of `get_cost()` in all optimizer classes.
Source code in neps/optimizers/base_optimizer.py
### get_learning_curve

Calls `result.utils.get_learning_curve()` and passes the error handling through. Please use `self.get_learning_curve()` instead of `get_learning_curve()` in all optimizer classes.
Source code in neps/optimizers/base_optimizer.py
### get_loss

Calls `result.utils.get_loss()` and passes the error handling through. Please use `self.get_loss()` instead of `get_loss()` in all optimizer classes.
Source code in neps/optimizers/base_optimizer.py
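The error handling these methods pass through is configured at construction time via the `*_value_on_error` arguments; a hedged sketch:

```python
# Assumption: with these set, a crashed evaluation is scored with the given
# values instead of raising, so SH can still rank configurations in a rung.
asha = AsynchronousSuccessiveHalving(
    pipeline_space=space,     # as in the construction sketch above
    budget=100,
    loss_value_on_error=1.0,  # failed configs are treated as having loss 1.0
    cost_value_on_error=0.0,  # ...and as having incurred zero cost
)
```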
### is_promotable

```python
is_promotable() -> int | None
```

Returns the rung to promote from as an `int` if any rung can be promoted, else `None`.
Source code in neps/optimizers/multi_fidelity/successive_halving.py
### load_results

```python
load_results(
    previous_results: dict[str, ConfigResult],
    pending_evaluations: dict[str, SearchSpace],
) -> None
```

This is basically the fit method.

| PARAMETER | DESCRIPTION | TYPE |
| --- | --- | --- |
| `previous_results` | Results of all finished evaluations, keyed by config ID. | `dict[str, ConfigResult]` |
| `pending_evaluations` | Configurations that are still being evaluated, keyed by config ID. | `dict[str, SearchSpace]` |
Source code in neps/optimizers/multi_fidelity/successive_halving.py
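A hedged sketch of the load-then-ask cycle an outer runtime would drive; `finished` and `pending` are hypothetical dicts shaped as in the signature above:

```python
# The runtime re-fits the optimizer with the full observation history
# before asking for the next configuration to evaluate.
asha.load_results(
    previous_results=finished,    # dict[str, ConfigResult]: completed evals
    pending_evaluations=pending,  # dict[str, SearchSpace]: still running
)
config, config_id, previous_config_id = asha.get_config_and_ids()
```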
## AsynchronousSuccessiveHalvingWithPriors
```python
AsynchronousSuccessiveHalvingWithPriors(
    pipeline_space: SearchSpace,
    budget: int,
    eta: int = 3,
    early_stopping_rate: int = 0,
    initial_design_type: Literal["max_budget", "unique_configs"] = "max_budget",
    sampling_policy: Any = FixedPriorPolicy,
    promotion_policy: Any = AsyncPromotionPolicy,
    loss_value_on_error: None | float = None,
    cost_value_on_error: None | float = None,
    ignore_errors: bool = False,
    logger=None,
    prior_confidence: Literal["low", "medium", "high"] = "medium",
    random_interleave_prob: float = 0.0,
    sample_default_first: bool = False,
    sample_default_at_target: bool = False,
)
```
Bases: AsynchronousSuccessiveHalving
Implements ASHA with a sampling and asynchronous promotion policy.
Source code in neps/optimizers/multi_fidelity/successive_halving.py
### get_config_and_ids

```python
get_config_and_ids() -> tuple[SearchSpace, str, str | None]
```

This is the method that decides which point to query next.

| RETURNS | DESCRIPTION |
| --- | --- |
| `tuple[SearchSpace, str, str \| None]` | The sampled or promoted configuration, its config ID, and the ID of the configuration it was promoted from (`None` for a fresh sample). |
Source code in neps/optimizers/multi_fidelity/successive_halving.py
### get_cost

Calls `result.utils.get_cost()` and passes the error handling through. Please use `self.get_cost()` instead of `get_cost()` in all optimizer classes.
Source code in neps/optimizers/base_optimizer.py
### get_learning_curve

Calls `result.utils.get_learning_curve()` and passes the error handling through. Please use `self.get_learning_curve()` instead of `get_learning_curve()` in all optimizer classes.
Source code in neps/optimizers/base_optimizer.py
### get_loss

Calls `result.utils.get_loss()` and passes the error handling through. Please use `self.get_loss()` instead of `get_loss()` in all optimizer classes.
Source code in neps/optimizers/base_optimizer.py
### is_promotable

```python
is_promotable() -> int | None
```

Returns the rung to promote from as an `int` if any rung can be promoted, else `None`.
Source code in neps/optimizers/multi_fidelity/successive_halving.py
### load_results

```python
load_results(
    previous_results: dict[str, ConfigResult],
    pending_evaluations: dict[str, SearchSpace],
) -> None
```

This is basically the fit method.

| PARAMETER | DESCRIPTION | TYPE |
| --- | --- | --- |
| `previous_results` | Results of all finished evaluations, keyed by config ID. | `dict[str, ConfigResult]` |
| `pending_evaluations` | Configurations that are still being evaluated, keyed by config ID. | `dict[str, SearchSpace]` |
Source code in neps/optimizers/multi_fidelity/successive_halving.py
## SuccessiveHalving
```python
SuccessiveHalving(
    pipeline_space: SearchSpace,
    budget: int | None = None,
    eta: int = 3,
    early_stopping_rate: int = 0,
    initial_design_type: Literal["max_budget", "unique_configs"] = "max_budget",
    use_priors: bool = False,
    sampling_policy: Any = RandomUniformPolicy,
    promotion_policy: Any = SyncPromotionPolicy,
    loss_value_on_error: None | float = None,
    cost_value_on_error: None | float = None,
    ignore_errors: bool = False,
    logger=None,
    prior_confidence: Literal["low", "medium", "high"] | None = None,
    random_interleave_prob: float = 0.0,
    sample_default_first: bool = False,
    sample_default_at_target: bool = False,
)
```
Bases: SuccessiveHalvingBase
| PARAMETER | DESCRIPTION | TYPE | DEFAULT |
| --- | --- | --- | --- |
| `pipeline_space` | Space in which to search. | `SearchSpace` | *required* |
| `budget` | Maximum budget. | `int \| None` | `None` |
| `eta` | The reduction factor used by SH. | `int` | `3` |
| `early_stopping_rate` | Determines the number of rungs in an SH bracket; choosing 0 creates the maximal number of rungs given the fidelity bounds. | `int` | `0` |
| `initial_design_type` | Type of initial design to switch to BO. Legacy parameter from the NePS BO design; could be used to extend to MF-BO. | `Literal['max_budget', 'unique_configs']` | `'max_budget'` |
| `use_priors` | Allows samples to be generated from a default: samples are drawn from a Gaussian centered around the default value. | `bool` | `False` |
| `sampling_policy` | The type of sampling procedure to use. | `Any` | `RandomUniformPolicy` |
| `promotion_policy` | The type of promotion procedure to use. | `Any` | `SyncPromotionPolicy` |
| `loss_value_on_error` | Setting this and `cost_value_on_error` to any float will suppress any error during optimization and use the given loss value instead. | `None \| float` | `None` |
| `cost_value_on_error` | Setting this and `loss_value_on_error` to any float will suppress any error during optimization and use the given cost value instead. | `None \| float` | `None` |
| `logger` | Logger object, or `None` to use the neps logger. | | `None` |
| `prior_confidence` | The confidence to place in the prior. The higher the confidence, the smaller the standard deviation of the prior distribution centered around the default. | `Literal['low', 'medium', 'high'] \| None` | `None` |
| `random_interleave_prob` | The fraction of samples drawn at random instead of from the prior. | `float` | `0.0` |
| `sample_default_first` | Whether to sample the default configuration first. | `bool` | `False` |
| `sample_default_at_target` | Whether to evaluate the default configuration at the target fidelity, i.e. the max budget. | `bool` | `False` |
Source code in neps/optimizers/multi_fidelity/successive_halving.py
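To make the interplay of `eta` and `early_stopping_rate` concrete, here is a small worked sketch of the geometric rung spacing described in the table above. The fidelity bounds (1 and 81) are made up for illustration, and the exact rounding NePS applies may differ:

```python
import math

min_budget, max_budget, eta, s = 1, 81, 3, 0  # s = early_stopping_rate

# Rungs are spaced geometrically between the fidelity bounds;
# early_stopping_rate = s drops the s lowest rungs (s = 0 keeps them all).
max_rung = round(math.log(max_budget / min_budget, eta))
rung_budgets = {r: min_budget * eta**r for r in range(s, max_rung + 1)}
print(rung_budgets)  # {0: 1, 1: 3, 2: 9, 3: 27, 4: 81}
```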
### clear_old_brackets

Enforces a reset at each new bracket.

The `_get_rungs_state()` function creates the `rung_promotions` dict mapping, which is used by the promotion policies to determine the next step: promotion or sample. The key to simulating the rung reset of vanilla SH is to subset only the part of the observation history that corresponds to one SH bracket.

Under a parallel run, multiple SH brackets can be spawned. The oldest active, incomplete SH bracket is searched for to choose the next evaluation. If all brackets are either complete or waiting, a new SH bracket is spawned. There are no waiting or blocking calls.
Source code in neps/optimizers/multi_fidelity/successive_halving.py
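The following is a deliberately simplified sketch of the bracket-selection idea just described; the real implementation works on rung states rather than a flat list, and `Obs` with its `pending` flag is hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Obs:
    """Hypothetical stand-in for one entry of the observation history."""
    config_id: str
    pending: bool

def oldest_active_bracket(history: list[Obs], bracket_size: int) -> list[Obs]:
    """Return the oldest incomplete SH bracket, or [] to signal a new one."""
    for start in range(0, len(history), bracket_size):
        bracket = history[start : start + bracket_size]
        if len(bracket) < bracket_size or any(o.pending for o in bracket):
            return bracket  # this bracket still has work left
    return []  # every bracket is complete -> spawn a new SH bracket
```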
### get_config_and_ids

```python
get_config_and_ids() -> tuple[SearchSpace, str, str | None]
```

This is the method that decides which point to query next.

| RETURNS | DESCRIPTION |
| --- | --- |
| `tuple[SearchSpace, str, str \| None]` | The sampled or promoted configuration, its config ID, and the ID of the configuration it was promoted from (`None` for a fresh sample). |
Source code in neps/optimizers/multi_fidelity/successive_halving.py
### get_cost

Calls `result.utils.get_cost()` and passes the error handling through. Please use `self.get_cost()` instead of `get_cost()` in all optimizer classes.
Source code in neps/optimizers/base_optimizer.py
### get_learning_curve

Calls `result.utils.get_learning_curve()` and passes the error handling through. Please use `self.get_learning_curve()` instead of `get_learning_curve()` in all optimizer classes.
Source code in neps/optimizers/base_optimizer.py
### get_loss

Calls `result.utils.get_loss()` and passes the error handling through. Please use `self.get_loss()` instead of `get_loss()` in all optimizer classes.
Source code in neps/optimizers/base_optimizer.py
### is_promotable

```python
is_promotable() -> int | None
```

Returns the rung to promote from as an `int` if any rung can be promoted, else `None`.
Source code in neps/optimizers/multi_fidelity/successive_halving.py
### load_results

```python
load_results(
    previous_results: dict[str, ConfigResult],
    pending_evaluations: dict[str, SearchSpace],
) -> None
```

This is basically the fit method.

| PARAMETER | DESCRIPTION | TYPE |
| --- | --- | --- |
| `previous_results` | Results of all finished evaluations, keyed by config ID. | `dict[str, ConfigResult]` |
| `pending_evaluations` | Configurations that are still being evaluated, keyed by config ID. | `dict[str, SearchSpace]` |
Source code in neps/optimizers/multi_fidelity/successive_halving.py
## SuccessiveHalvingBase
```python
SuccessiveHalvingBase(
    pipeline_space: SearchSpace,
    budget: int | None = None,
    eta: int = 3,
    early_stopping_rate: int = 0,
    initial_design_type: Literal["max_budget", "unique_configs"] = "max_budget",
    use_priors: bool = False,
    sampling_policy: Any = RandomUniformPolicy,
    promotion_policy: Any = SyncPromotionPolicy,
    loss_value_on_error: None | float = None,
    cost_value_on_error: None | float = None,
    ignore_errors: bool = False,
    logger=None,
    prior_confidence: Literal["low", "medium", "high"] | None = None,
    random_interleave_prob: float = 0.0,
    sample_default_first: bool = False,
    sample_default_at_target: bool = False,
)
```
Bases: BaseOptimizer
Implements a SuccessiveHalving procedure with a sampling and promotion policy.
| PARAMETER | DESCRIPTION | TYPE | DEFAULT |
| --- | --- | --- | --- |
| `pipeline_space` | Space in which to search. | `SearchSpace` | *required* |
| `budget` | Maximum budget. | `int \| None` | `None` |
| `eta` | The reduction factor used by SH. | `int` | `3` |
| `early_stopping_rate` | Determines the number of rungs in an SH bracket; choosing 0 creates the maximal number of rungs given the fidelity bounds. | `int` | `0` |
| `initial_design_type` | Type of initial design to switch to BO. Legacy parameter from the NePS BO design; could be used to extend to MF-BO. | `Literal['max_budget', 'unique_configs']` | `'max_budget'` |
| `use_priors` | Allows samples to be generated from a default: samples are drawn from a Gaussian centered around the default value. | `bool` | `False` |
| `sampling_policy` | The type of sampling procedure to use. | `Any` | `RandomUniformPolicy` |
| `promotion_policy` | The type of promotion procedure to use. | `Any` | `SyncPromotionPolicy` |
| `loss_value_on_error` | Setting this and `cost_value_on_error` to any float will suppress any error during optimization and use the given loss value instead. | `None \| float` | `None` |
| `cost_value_on_error` | Setting this and `loss_value_on_error` to any float will suppress any error during optimization and use the given cost value instead. | `None \| float` | `None` |
| `logger` | Logger object, or `None` to use the neps logger. | | `None` |
| `prior_confidence` | The confidence to place in the prior. The higher the confidence, the smaller the standard deviation of the prior distribution centered around the default. | `Literal['low', 'medium', 'high'] \| None` | `None` |
| `random_interleave_prob` | The fraction of samples drawn at random instead of from the prior. | `float` | `0.0` |
| `sample_default_first` | Whether to sample the default configuration first. | `bool` | `False` |
| `sample_default_at_target` | Whether to evaluate the default configuration at the target fidelity, i.e. the max budget. | `bool` | `False` |
Source code in neps/optimizers/multi_fidelity/successive_halving.py
### get_config_and_ids

```python
get_config_and_ids() -> tuple[SearchSpace, str, str | None]
```

This is the method that decides which point to query next.

| RETURNS | DESCRIPTION |
| --- | --- |
| `tuple[SearchSpace, str, str \| None]` | The sampled or promoted configuration, its config ID, and the ID of the configuration it was promoted from (`None` for a fresh sample). |
Source code in neps/optimizers/multi_fidelity/successive_halving.py
### get_cost

Calls `result.utils.get_cost()` and passes the error handling through. Please use `self.get_cost()` instead of `get_cost()` in all optimizer classes.
Source code in neps/optimizers/base_optimizer.py
### get_learning_curve

Calls `result.utils.get_learning_curve()` and passes the error handling through. Please use `self.get_learning_curve()` instead of `get_learning_curve()` in all optimizer classes.
Source code in neps/optimizers/base_optimizer.py
### get_loss

Calls `result.utils.get_loss()` and passes the error handling through. Please use `self.get_loss()` instead of `get_loss()` in all optimizer classes.
Source code in neps/optimizers/base_optimizer.py
### is_promotable

```python
is_promotable() -> int | None
```

Returns the rung to promote from as an `int` if any rung can be promoted, else `None`.
Source code in neps/optimizers/multi_fidelity/successive_halving.py
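A hedged sketch of how a promotion check might be consumed; `sh` stands in for any `SuccessiveHalvingBase` instance:

```python
rung = sh.is_promotable()
if rung is not None:
    # Some configuration at `rung` qualifies for promotion to rung + 1;
    # get_config_and_ids() would then return it instead of sampling fresh.
    print(f"promotion available from rung {rung}")
else:
    print("nothing promotable; a new configuration will be sampled")
```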
### load_results

```python
load_results(
    previous_results: dict[str, ConfigResult],
    pending_evaluations: dict[str, SearchSpace],
) -> None
```

This is basically the fit method.

| PARAMETER | DESCRIPTION | TYPE |
| --- | --- | --- |
| `previous_results` | Results of all finished evaluations, keyed by config ID. | `dict[str, ConfigResult]` |
| `pending_evaluations` | Configurations that are still being evaluated, keyed by config ID. | `dict[str, SearchSpace]` |
Source code in neps/optimizers/multi_fidelity/successive_halving.py
## SuccessiveHalvingWithPriors
```python
SuccessiveHalvingWithPriors(
    pipeline_space: SearchSpace,
    budget: int,
    eta: int = 3,
    early_stopping_rate: int = 0,
    initial_design_type: Literal["max_budget", "unique_configs"] = "max_budget",
    sampling_policy: Any = FixedPriorPolicy,
    promotion_policy: Any = SyncPromotionPolicy,
    loss_value_on_error: None | float = None,
    cost_value_on_error: None | float = None,
    ignore_errors: bool = False,
    logger=None,
    prior_confidence: Literal["low", "medium", "high"] = "medium",
    random_interleave_prob: float = 0.0,
    sample_default_first: bool = False,
    sample_default_at_target: bool = False,
)
```
Bases: SuccessiveHalving
Implements a SuccessiveHalving procedure with a sampling and promotion policy.
Source code in neps/optimizers/multi_fidelity/successive_halving.py
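A hedged construction sketch for the priors variant; `space` is assumed to carry default values for its hyperparameters, which act as the prior:

```python
from neps.optimizers.multi_fidelity.successive_halving import (
    SuccessiveHalvingWithPriors,
)

sh_prior = SuccessiveHalvingWithPriors(
    pipeline_space=space,        # defaults in the space act as the prior
    budget=100,
    prior_confidence="high",     # tighter Gaussian around the defaults
    random_interleave_prob=0.2,  # ~20% of samples drawn uniformly instead
)
```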
### clear_old_brackets

Enforces a reset at each new bracket.

The `_get_rungs_state()` function creates the `rung_promotions` dict mapping, which is used by the promotion policies to determine the next step: promotion or sample. The key to simulating the rung reset of vanilla SH is to subset only the part of the observation history that corresponds to one SH bracket.

Under a parallel run, multiple SH brackets can be spawned. The oldest active, incomplete SH bracket is searched for to choose the next evaluation. If all brackets are either complete or waiting, a new SH bracket is spawned. There are no waiting or blocking calls.
Source code in neps/optimizers/multi_fidelity/successive_halving.py
### get_config_and_ids

```python
get_config_and_ids() -> tuple[SearchSpace, str, str | None]
```

This is the method that decides which point to query next.

| RETURNS | DESCRIPTION |
| --- | --- |
| `tuple[SearchSpace, str, str \| None]` | The sampled or promoted configuration, its config ID, and the ID of the configuration it was promoted from (`None` for a fresh sample). |
Source code in neps/optimizers/multi_fidelity/successive_halving.py
### get_cost

Calls `result.utils.get_cost()` and passes the error handling through. Please use `self.get_cost()` instead of `get_cost()` in all optimizer classes.
Source code in neps/optimizers/base_optimizer.py
### get_learning_curve

Calls `result.utils.get_learning_curve()` and passes the error handling through. Please use `self.get_learning_curve()` instead of `get_learning_curve()` in all optimizer classes.
Source code in neps/optimizers/base_optimizer.py
### get_loss

Calls `result.utils.get_loss()` and passes the error handling through. Please use `self.get_loss()` instead of `get_loss()` in all optimizer classes.
Source code in neps/optimizers/base_optimizer.py
### is_promotable

```python
is_promotable() -> int | None
```

Returns the rung to promote from as an `int` if any rung can be promoted, else `None`.
Source code in neps/optimizers/multi_fidelity/successive_halving.py
### load_results

```python
load_results(
    previous_results: dict[str, ConfigResult],
    pending_evaluations: dict[str, SearchSpace],
) -> None
```

This is basically the fit method.

| PARAMETER | DESCRIPTION | TYPE |
| --- | --- | --- |
| `previous_results` | Results of all finished evaluations, keyed by config ID. | `dict[str, ConfigResult]` |
| `pending_evaluations` | Configurations that are still being evaluated, keyed by config ID. | `dict[str, SearchSpace]` |