.. note::
    :class: sphx-glr-download-link-note

    Click :ref:`here <sphx_glr_download_auto_examples_commons.py>` to download the full example code

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_auto_examples_commons.py:

Worker for Examples 1-4
=======================

This class implements a very simple worker used in the first examples.

.. code-block:: python


    import numpy
    import time

    import ConfigSpace as CS
    from hpbandster.core.worker import Worker


    class MyWorker(Worker):

        def __init__(self, *args, sleep_interval=0, **kwargs):
            super().__init__(*args, **kwargs)

            self.sleep_interval = sleep_interval

        def compute(self, config, budget, **kwargs):
            """
            Simple example for a compute function.

            The loss is just the value of the configuration plus some noise
            (which decreases with the budget). For dramatization, the function
            can sleep for a given interval to emphasize the speed-ups achievable
            with parallel workers.

            Args:
                config: dictionary containing the sampled configurations by the optimizer
                budget: (float) amount of time/epochs/etc. the model can use to train

            Returns:
                dictionary with mandatory fields:
                    'loss' (scalar)
                    'info' (dict)
            """
            # noisy objective: the noise shrinks as the budget grows
            res = numpy.clip(config['x'] + numpy.random.randn()/budget,
                             config['x']/2, 1.5*config['x'])
            time.sleep(self.sleep_interval)

            return({
                'loss': float(res),  # this is a mandatory field to run hyperband
                'info': res  # can be used for any user-defined information - also mandatory
            })

        @staticmethod
        def get_configspace():
            config_space = CS.ConfigurationSpace()
            config_space.add_hyperparameter(CS.UniformFloatHyperparameter('x', lower=0, upper=1))
            return(config_space)

**Total running time of the script:** ( 0 minutes 0.000 seconds)

.. _sphx_glr_download_auto_examples_commons.py:

.. only:: html

    .. container:: sphx-glr-footer
        :class: sphx-glr-footer-example

        .. container:: sphx-glr-download

            :download:`Download Python source code: commons.py <commons.py>`

        .. container:: sphx-glr-download

            :download:`Download Jupyter notebook: commons.ipynb <commons.ipynb>`

.. only:: html

    .. rst-class:: sphx-glr-signature

        `Gallery generated by Sphinx-Gallery <https://sphinx-gallery.github.io>`_
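
The worker above only defines the objective. As a rough orientation, a minimal
sketch of how such a worker might be driven locally (loosely following the first
HpBandSter examples; the run id, host, and budget values below are illustrative
assumptions and not part of this example file) could look like this:

.. code-block:: python

    import hpbandster.core.nameserver as hpns
    from hpbandster.optimizers import BOHB

    # start a local nameserver so the optimizer and the worker can find each other
    NS = hpns.NameServer(run_id='example1', host='127.0.0.1', port=None)
    NS.start()

    # run the worker defined above in the background of this process
    w = MyWorker(sleep_interval=0, nameserver='127.0.0.1', run_id='example1')
    w.run(background=True)

    # let BOHB sample configurations from the worker's configuration space
    bohb = BOHB(configspace=MyWorker.get_configspace(),
                run_id='example1', nameserver='127.0.0.1',
                min_budget=1, max_budget=9)
    res = bohb.run(n_iterations=4)

    # shut down the optimizer, the workers and the nameserver when done
    bohb.shutdown(shutdown_workers=True)
    NS.shutdown()

    # inspect the best configuration found
    incumbent = res.get_incumbent_id()
    print(res.get_id2config_mapping()[incumbent]['config'])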