Iterations – the job center of HpBandSter

class hpbandster.core.base_iteration.BaseIteration(HPB_iter, num_configs, budgets, config_sampler, logger=None, result_logger=None)[source]

Base class for the different iteration types. An iteration decides which configuration is run on which budget next; successive halving is the typical example. Results from finished runs are processed and, depending on the implementation, determine how the iteration proceeds. A sketch of a typical successive-halving schedule follows the parameter list below.

Parameters:
  • HPB_iter (int) – The current HPBandSter iteration index.
  • num_configs (list of ints) – the number of configurations in each stage of SH
  • budgets (list of floats) – the budget associated with each stage
  • config_sampler (callable) – a function that returns a valid configuration. Its only argument is the budget this configuration is first scheduled for. This can be used to pick configurations that are expected to perform best once this particular budget is exhausted, e.g. to build a better AutoML system.
  • logger (a logger) –
  • result_logger (hpbandster.api.results.util.json_result_logger object) – a result logger that writes live results to disk
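
A minimal sketch of how the num_configs and budgets arguments are typically derived for a successive-halving schedule (the halving rate eta and all values below are illustrative, not part of the API):

    # Sketch only (not library code): a geometric successive-halving schedule
    # for the num_configs and budgets constructor arguments.
    eta = 3            # keep roughly 1/eta of the configurations per stage
    max_budget = 81.0
    n_stages = 5

    # budgets grow geometrically towards max_budget ...
    budgets = [max_budget / eta**i for i in reversed(range(n_stages))]  # [1.0, 3.0, 9.0, 27.0, 81.0]
    # ... while the number of configurations shrinks by the same factor
    num_configs = [eta**i for i in reversed(range(n_stages))]           # [81, 27, 9, 3, 1]

    # Together with an iteration index and a config sampler, these lists would
    # be passed to a concrete BaseIteration subclass:
    #   SubClass(HPB_iter=0, num_configs=num_configs, budgets=budgets,
    #            config_sampler=<callable taking the budget as its only argument>)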
add_configuration(config=None, config_info={})[source]

function to add a new configuration to the current iteration (a short usage sketch follows the parameter list)

Parameters:
  • config (valid configuration) – The configuration to add. If None, a configuration is sampled from the config_sampler
  • config_info (dict) – Some information about the configuration that will be stored in the results
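
For illustration, a hedged usage sketch of add_configuration; the iteration object, the configuration values, and the config_info contents below are assumptions made for this example:

    # Illustration only: adding a hand-picked configuration to an iteration.
    # `iteration` is assumed to be an instance of a concrete BaseIteration
    # subclass; the configuration and config_info values are made up.
    config = {'learning_rate': 1e-3, 'num_layers': 4}
    iteration.add_configuration(config=config,
                                config_info={'origin': 'manually added'})

    # Leaving config=None instead samples a configuration from the config_sampler:
    iteration.add_configuration(config_info={'origin': 'sampled'})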
finish_up()[source]
get_next_run()[source]

function to return the next configuration and budget to run.

This function is called by HB_master; do not call it from your script.

It returns None if this SH run is finished or if pending jobs have to finish before the next stage can start.

If there are empty slots to be filled in the current SH stage (which never happens in the original SH version), a new configuration will be sampled and scheduled to run next.
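
The polling pattern this implies can be sketched as follows; this only illustrates what the master does conceptually (the exact return value is not specified here), and user scripts should not call get_next_run() themselves:

    # Illustration of the documented behaviour, not something to call yourself:
    # the master repeatedly asks the iteration for work until it returns None
    # (either the SH run is finished or pending jobs must complete first).
    def collect_schedulable_runs(iteration):
        runs = []
        while True:
            next_run = iteration.get_next_run()
            if next_run is None:
                break              # nothing schedulable right now
            runs.append(next_run)  # description of the next configuration/budget to run
        return runs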

process_results()[source]

function that is called when a stage is completed and its results need to be analyzed before the computation continues.

The code here implements the original SH algorithm by advancing the k best (lowest loss) configurations at the current budget to the next stage. k is determined by the num_configs list (see __init__) and the current stage value.

For more advanced methods, such as resampling after each stage, only this function needs to be overloaded (a sketch of the default selection step follows).
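
A standalone sketch of that default selection step (config ids and losses below are made up; this is not the library code):

    # k-best selection as in the original SH algorithm: rank configurations by
    # their loss on the current budget and advance the k lowest.
    def k_best(losses_by_config, k):
        ranked = sorted(losses_by_config.items(), key=lambda item: item[1])
        return [config_id for config_id, loss in ranked[:k]]

    stage_losses = {'c0': 0.42, 'c1': 0.17, 'c2': 0.55, 'c3': 0.21, 'c4': 0.30}
    advance = k_best(stage_losses, k=3)   # ['c1', 'c3', 'c4'] continue on the next budget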

register_result(job, skip_sanity_checks=False)[source]

function to register the result of a job

This function is called by HB_master; do not call it from your script.

class hpbandster.core.base_iteration.Datum(config, config_info, results=None, time_stamps=None, exceptions=None, status='QUEUED', budget=0)[source]
class hpbandster.core.base_iteration.WarmStartIteration(Result, config_generator)[source]

iteration that imports a previous Result for warm starting
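
A hedged construction sketch; previous_result is assumed to be a Result object from a finished run and config_generator the config generator of the new run, neither of which is defined on this page:

    # Sketch only: importing the runs of an earlier optimization for warm starting.
    from hpbandster.core.base_iteration import WarmStartIteration

    warm_start_iteration = WarmStartIteration(previous_result, config_generator)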

fix_timestamps(time_ref)[source]

manipulates internal time stamps such that the last run ends at time 0
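
Illustrative arithmetic only; the event names and values below are assumptions used to show the shift:

    # Subtracting a reference time places the end of the last imported run at 0,
    # so all imported events get non-positive time stamps.
    time_ref = 1_700_000_000.0   # e.g. the reference time of the new run
    old_time_stamps = {'submitted': 1_699_999_990.0,
                       'started':   1_699_999_991.0,
                       'finished':  1_700_000_000.0}
    shifted = {event: t - time_ref for event, t in old_time_stamps.items()}
    # shifted == {'submitted': -10.0, 'started': -9.0, 'finished': 0.0}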