# Cost cooling

`neps.optimizers.bayesian_optimization.cost_cooling`

## CostCooling
```python
CostCooling(
    pipeline_space: SearchSpace,
    initial_design_size: int = 10,
    surrogate_model: str | Any = "gp",
    cost_model: str | Any = "gp",
    surrogate_model_args: dict = None,
    cost_model_args: dict = None,
    optimal_assignment: bool = False,
    domain_se_kernel: str = None,
    graph_kernels: list = None,
    hp_kernels: list = None,
    acquisition: str | BaseAcquisition = "EI",
    log_prior_weighted: bool = False,
    acquisition_sampler: str | AcquisitionSampler = "mutation",
    random_interleave_prob: float = 0.0,
    patience: int = 100,
    budget: None | int | float = None,
    ignore_errors: bool = False,
    loss_value_on_error: None | float = None,
    cost_value_on_error: None | float = None,
    logger=None,
)
```
Bases: `BayesianOptimization`

Implements a basic cost-cooling strategy as described in "Cost-aware Bayesian Optimization" (arxiv.org/abs/2003.10870) by Lee et al.
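In the paper's EI-cool acquisition, the expected improvement of a candidate is discounted by its predicted cost, with the discount annealed away as the budget is consumed, so cheap configurations are favored early and the penalty fades toward the end of the run. A minimal sketch of that schedule (illustrative only, not this class's internals):

```python
def cost_cooled_acquisition(
    ei_value: float, cost: float, used_budget: float, total_budget: float
) -> float:
    # alpha anneals from 1 (full cost penalty) to 0 (plain EI) as the
    # budget is consumed, following EI-cool from Lee et al. (2020).
    alpha = max(0.0, (total_budget - used_budget) / total_budget)
    return ei_value / max(cost, 1e-12) ** alpha
```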
| Parameter | Type | Description | Default |
|---|---|---|---|
| `pipeline_space` | `SearchSpace` | Space in which to search. | required |
| `initial_design_size` | `int` | Number of samples evaluated at random before a sampling strategy takes over. | `10` |
| `surrogate_model` | `str \| Any` | Surrogate model. | `"gp"` |
| `cost_model` | `str \| Any` | Cost model. | `"gp"` |
| `surrogate_model_args` | `dict` | Arguments passed to the surrogate model (the Gaussian process model). | `None` |
| `cost_model_args` | `dict` | Arguments passed to the cost model (the Gaussian process model). | `None` |
| `optimal_assignment` | `bool` | Whether the optimal assignment kernel should be used. | `False` |
| `domain_se_kernel` | `str` | Stationary kernel name. | `None` |
| `graph_kernels` | `list` | Kernels for NAS. | `None` |
| `hp_kernels` | `list` | Kernels for HPO. | `None` |
| `acquisition` | `str \| BaseAcquisition` | Acquisition strategy. | `"EI"` |
| `log_prior_weighted` | `bool` | Whether to use the log of the prior when weighting. | `False` |
| `acquisition_sampler` | `str \| AcquisitionSampler` | Strategy for sampling candidates for the acquisition function. | `"mutation"` |
| `random_interleave_prob` | `float` | Probability of sampling a random configuration instead of one from the acquisition strategy. | `0.0` |
| `patience` | `int` | How many failed attempts to tolerate before giving up. | `100` |
| `budget` | `None \| int \| float` | Maximum budget. | `None` |
| `ignore_errors` | `bool` | Ignore hyperparameter settings that raised an error instead of raising it; errored configs still count towards `max_evaluations_total`. | `False` |
| `loss_value_on_error` | `None \| float` | If this and `cost_value_on_error` are set to a float, any error during Bayesian optimization is suppressed and the given loss value is used instead. | `None` |
| `cost_value_on_error` | `None \| float` | If this and `loss_value_on_error` are set to a float, any error during Bayesian optimization is suppressed and the given cost value is used instead. | `None` |
| `logger` | | Logger object, or `None` to use the neps logger. | `None` |
| Raises | Description |
|---|---|
| `ValueError` | If `patience` < 1. |
| `ValueError` | If `initial_design_size` < 1. |
| `ValueError` | If `random_interleave_prob` is not between 0.0 and 1.0. |
| `ValueError` | If no kernel is provided. |
Source code in neps/optimizers/bayesian_optimization/cost_cooling.py
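For orientation, a minimal end-to-end sketch of running NePS with this optimizer. The `searcher` key and the `max_cost_total` argument are assumptions about the surrounding `neps.run` API, not confirmed by this page; the essential contract is that `run_pipeline` reports both a `loss` and a `cost`, since the cooling schedule anneals over the consumed cost budget.

```python
import time

import neps


def run_pipeline(learning_rate: float, num_layers: int) -> dict:
    start = time.time()
    # Stand-in for real training; the optimizer only sees the
    # returned loss and cost values.
    loss = (learning_rate - 1e-3) ** 2 + 0.01 * num_layers
    return {"loss": loss, "cost": time.time() - start}


pipeline_space = {
    "learning_rate": neps.FloatParameter(lower=1e-5, upper=1e-1, log=True),
    "num_layers": neps.IntegerParameter(lower=1, upper=8),
}

neps.run(
    run_pipeline=run_pipeline,
    pipeline_space=pipeline_space,
    root_directory="results/cost_cooling_example",
    max_cost_total=100,  # assumed: total cost budget the schedule anneals over
    searcher="cost_cooling_bayesian_optimization",  # assumed key for CostCooling
)
```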
### get_cost

Calls `result.utils.get_cost()` and passes the error handling through. Please use `self.get_cost()` instead of `get_cost()` in all optimizer classes.
Source code in neps/optimizers/base_optimizer.py
### get_learning_curve

Calls `result.utils.get_learning_curve()` and passes the error handling through. Please use `self.get_learning_curve()` instead of `get_learning_curve()` in all optimizer classes.
Source code in neps/optimizers/base_optimizer.py
### get_loss

Calls `result.utils.get_loss()` and passes the error handling through. Please use `self.get_loss()` instead of `get_loss()` in all optimizer classes.
Source code in neps/optimizers/base_optimizer.py
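For illustration, here is how an optimizer subclass might route results through these wrappers so that `loss_value_on_error`, `cost_value_on_error`, and `ignore_errors` are applied consistently. The `load_results` hook, the `ConfigResult` attributes, and the `_update_models` helper are assumptions about the base-class interface, sketched rather than quoted:

```python
def load_results(self, previous_results: dict, pending_evaluations: dict) -> None:
    train_x, train_y, train_cost = [], [], []
    for _config_id, config_result in previous_results.items():
        train_x.append(config_result.config)
        # Prefer the self.* wrappers over result.utils.get_loss/get_cost:
        # they pass the configured error handling through.
        train_y.append(self.get_loss(config_result.result))
        train_cost.append(self.get_cost(config_result.result))
    # Fit surrogate and cost models on the collected data (hypothetical helper).
    self._update_models(train_x, train_y, train_cost)
```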
### is_init_phase

```python
is_init_phase() -> bool
```

Decides whether optimization is still in the warm-start (initial design) phase or has already moved on to model-based search.
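Conceptually this is a simple counter check against `initial_design_size`; a sketch under assumed attribute names:

```python
def is_init_phase(self) -> bool:
    # Keep sampling (quasi-)randomly until enough configurations have been
    # evaluated to fit the surrogate and cost models, then switch to
    # model-based search. Attribute names here are illustrative.
    return self._num_evaluated_configs < self._initial_design_size
```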