Results of a run – how to access all the information
class hpbandster.core.result.Result(HB_iteration_data, HB_config)
Object returned by the HB_master.run function.
This class offers a simple API to access the information from a Hyperband run.
get_all_runs(only_largest_budget=False)
Returns all runs performed.
Parameters: only_largest_budget (bool) – if True, only the run on the largest budget is returned for each configuration. This makes sense if the runs are continued across budgets and the info field contains the information you care about. If False, all runs of a configuration are returned.
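A minimal sketch of iterating over the runs, assuming res is the Result object returned by the optimizer's run() method; whether a failed run reports its loss as None is an assumption worth checking:

    # assumes `res` is the Result object returned by the optimizer's run() method
    all_runs = res.get_all_runs(only_largest_budget=False)

    for run in all_runs:
        if run.loss is None:
            # assumed behavior for failed runs, see lead-in above
            continue
        print(run.config_id, run.budget, run.loss)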
get_fANOVA_data(config_space, budgets=None, loss_fn=<function Result.<lambda>>, failed_loss=None)
get_id2config_mapping()
Returns a dict where the keys are the config_ids and the values are the actual configurations.
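A short sketch, again assuming res is a Result object:

    id2config = res.get_id2config_mapping()
    print('A total of %i unique configurations were sampled.' % len(id2config))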
get_incumbent_id()
Find the config_id of the incumbent.
The incumbent here is the configuration with the smallest loss among all runs on the maximum budget. If no run finished on the maximum budget, None is returned.
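A sketch of looking up the incumbent's configuration; that the mapping values store the sampled configuration under a 'config' key is an assumption based on the standard hpbandster examples:

    inc_id = res.get_incumbent_id()

    if inc_id is not None:
        id2config = res.get_id2config_mapping()
        # the 'config' key is assumed, see lead-in above
        print('Best found configuration:', id2config[inc_id]['config'])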
get_incumbent_trajectory(all_budgets=True, bigger_is_better=True, non_decreasing_budget=True)
Returns the best configurations over time.
Parameters: - all_budgets (bool) – if True, all runs (even those not on the largest budget) can become the incumbent. Otherwise, only full-budget runs are considered.
- bigger_is_better (bool) – whether an evaluation on a larger budget is always considered better. If True, the incumbent loss might increase for the first evaluations on a bigger budget.
- non_decreasing_budget (bool) – whether the budget of a new incumbent has to be at least as big as the one of the current incumbent.
Returns: dictionary with all the config IDs, the times the runs finished, their respective budgets, and the corresponding losses
Return type: dict
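A sketch of plotting the trajectory with matplotlib; the dictionary keys 'times_finished' and 'losses' used below are assumptions based on the description above and should be verified against the actual return value:

    import matplotlib.pyplot as plt

    traj = res.get_incumbent_trajectory()

    # key names are assumed, see lead-in above
    plt.step(traj['times_finished'], traj['losses'], where='post')
    plt.xlabel('wall clock time [s]')
    plt.ylabel('incumbent loss')
    plt.show()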
get_learning_curves(lc_extractor=<function extract_HBS_learning_curves>, config_ids=None)
Extracts all learning curves from all run configurations.
Parameters: - lc_extractor (callable) – a function that returns a list of learning curves; defaults to hpbandster.core.result.extract_HBS_learning_curves
- config_ids (list of valid config ids) – if only a subset of the config ids is wanted
Returns: a dictionary with the config_ids as keys and the learning curves as values
Return type: dict
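A minimal sketch using the default extractor; each value is a list of learning curves, and each curve is a list of (t, x_t) tuples as described for extract_HBS_learning_curves below:

    lcs = res.get_learning_curves()

    for config_id, curves in lcs.items():
        for curve in curves:
            # each curve is a list of (t, x_t) tuples
            ts = [t for t, x in curve]
            xs = [x for t, x in curve]
            print(config_id, ts, xs)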
class hpbandster.core.result.Run(config_id, budget, loss, info, time_stamps, error_logs)
Not a proper class, more a ‘struct’ to bundle important information about a particular run.
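The attributes mirror the constructor arguments, so a run's fields can be read directly; the comments below reflect typical usage and are assumptions rather than guarantees:

    run = res.get_all_runs()[0]

    print(run.config_id)  # identifier of the configuration
    print(run.budget)     # budget this run was evaluated on
    print(run.loss)       # reported loss (assumed None if the run failed)
    print(run.info)       # info dict returned by the worker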
hpbandster.core.result.extract_HBS_learning_curves(runs)
Function to get the Hyperband learning curves.
This is an example function showing the interface to use with the HB_result.get_learning_curves method.
Parameters: runs (list of HB_result.run objects) – the performed runs for an unspecified config
Returns: list of learning curves – an individual learning curve is a list of (t, x_t) tuples, and this function must return a list of these. One could think of cases where multiple learning curves could be extracted from these runs, e.g. if each run is an independent training run of a neural network on the data.
Return type: list of lists of tuples
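Following that interface, a hypothetical custom extractor could build one curve per run from per-epoch losses that the worker stored in the info dict; the 'epoch_losses' key is purely illustrative:

    def my_lc_extractor(runs):
        curves = []
        for r in runs:
            if r.info is None:
                continue
            # 'epoch_losses' is a hypothetical key, see lead-in above
            losses = r.info.get('epoch_losses', [])
            curves.append(list(enumerate(losses)))
        return curves

    lcs = res.get_learning_curves(lc_extractor=my_lc_extractor)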
class hpbandster.core.result.json_result_logger(directory, overwrite=False)
Convenience logger for ‘semi-live results’.
Logger that writes job results into two files (configs.json and results.json). Both files contain proper JSON objects on each line.
This version opens and closes the files for each result. This might be very slow if individual runs are fast and the filesystem is slow (e.g. NFS).
Parameters: - directory (string) – the directory where the two files ‘configs.json’ and ‘results.json’ are stored
- overwrite (bool) –
In case the files already exist, this flag controls the behavior:
- True: The existing files will be overwritten. Potential risk of deleting previous results
- False: A FileExistsError is raised and the files are not modified.
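A sketch of setting up such a logger; passing it to the optimizer via a result_logger argument follows the standard hpbandster examples, but check your optimizer's constructor:

    import hpbandster.core.result as hpres

    # write configs.json and results.json into the current directory,
    # overwriting any previous results
    result_logger = hpres.json_result_logger(directory='.', overwrite=True)

    # the result_logger keyword follows the hpbandster examples; verify that
    # your optimizer's constructor accepts it
    # bohb = BOHB(configspace=cs, run_id='example', result_logger=result_logger)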
hpbandster.core.result.logged_results_to_HBS_result(directory)
Function to import logged ‘live results’ and return a HB_result object.
You can load live run results with this function, and the returned HB_result object gives you access to the results the same way a finished run would.
Parameters: directory (str) – the directory containing the results.json and configs.json files
Returns: hpbandster.core.result.Result object
Return type: object
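A short sketch of loading previously logged results; the directory is assumed to be the one json_result_logger wrote to:

    import hpbandster.core.result as hpres

    # load the configs.json / results.json files written by json_result_logger
    res = hpres.logged_results_to_HBS_result('.')

    print('Incumbent config id:', res.get_incumbent_id())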