
Initializing the Pipeline Space

In NePS, we need to define a pipeline_space. This space can be structured through various approaches, including a Python dictionary or ConfigSpace. Each method lets you specify a set of parameter types, ranging from Float and Categorical to specialized architecture parameters. Whether you choose a dictionary or ConfigSpace, your selected method serves as a container within which these parameters are defined and organized. This section guides you through setting up your pipeline_space using these methods, with instructions and examples on how to incorporate the various parameter types so that NePS can use them in the optimization process.

Parameters

NePS currently features four primary hyperparameter types: Float, Integer, Categorical, and Constant.

Using these types, you can define the parameters that NePS will optimize during the search process. The most basic way to pass these parameters is through a Python dictionary, where each key-value pair represents a parameter name and its respective type. For example, the following Python dictionary defines a pipeline_space with four parameters for optimizing a deep learning model:

pipeline_space = {
    "learning_rate": neps.Float(1e-5, 1e-1, log=True),  # Float on a log scale
    "num_epochs": neps.Integer(3, 30, is_fidelity=True),  # Integer used as fidelity
    "optimizer": ["adam", "sgd", "rmsprop"],  # Categorical
    "dropout_rate": 0.5,  # Constant
}

neps.run(..., pipeline_space=pipeline_space)
Quick Parameter Reference

Categorical

A list of unordered choices for a parameter.

This kind of parameter is used to represent hyperparameters that can take on a discrete set of unordered values. For example, the optimizer hyperparameter in a neural network search space can be a Categorical with choices like ["adam", "sgd", "rmsprop"].

import neps

optimizer_choice = neps.Categorical(
    ["adam", "sgd", "rmsprop"],
    prior="adam"
)

center: float | int | str

The center value of the categorical hyperparameter. As there is no natural center for a categorical parameter, this is taken to be the first value in the choices list.

choices: list[float | int | str]

The list of choices for the categorical hyperparameter.

prior: float | int | str | None = None

The default value for the categorical hyperparameter.

prior_confidence: Literal['low', 'medium', 'high'] = 'low'

Confidence in the prior value when using prior-based optimization.
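For instance, the attributes above can be combined in a single definition. The following is a minimal sketch; the assertion simply reflects the first-choice rule for center described above:

import neps

optimizer = neps.Categorical(
    ["adam", "sgd", "rmsprop"],
    prior="adam",
    prior_confidence="high",
)

# The center of a categorical is defined as the first entry in `choices`.
assert optimizer.center == "adam"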

Float

A float value for a parameter.

This kind of parameter is used to represent hyperparameters with continuous float values, optionally specifying if it exists on a log scale.

For example, l2_norm could be a value in (0, 1), while the learning_rate hyperparameter in a neural network search space can be a Float with a range of (0.0001, 0.1) on a log scale.

import neps

l2_norm = neps.Float(0, 1)
learning_rate = neps.Float(1e-4, 1e-1, log=True)

center: float

The center value of the numerical hyperparameter.

is_fidelity: bool = False

Whether the hyperparameter is a fidelity.

log: bool = False

Whether the hyperparameter is in log space.

lower: float

The lower bound of the numerical hyperparameter.

prior: float | None = None

Prior value for the hyperparameter.

prior_confidence: Literal['low', 'medium', 'high'] = 'low'

Confidence in the prior value when using prior-based optimization.

upper: float

The upper bound of the numerical hyperparameter.
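Putting these attributes together, a log-scale float with a user prior might look like the following sketch, using illustrative values that mirror the run example later in this section:

import neps

learning_rate = neps.Float(
    1e-4, 1e-1,
    log=True,                   # search on a log scale
    prior=1e-2,                 # your guess at a good value
    prior_confidence="medium",  # how strongly to trust that guess
)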

Integer

An integer value for a parameter.

This kind of parameter is used to represent hyperparameters with discrete integer values, optionally specifying if it exists on a log scale.

For example, batch_size could be a value in (32, 128), while the num_layers hyperparameter in a neural network search space can be an Integer with a range of (1, 1000) on a log scale.

import neps

batch_size = neps.Integer(32, 128)
num_layers = neps.Integer(1, 1000, log=True)

center: int

The center value of the numerical hyperparameter.

is_fidelity: bool = False

Whether the hyperparameter is a fidelity.

log: bool = False

Whether the hyperparameter is in log space.

lower: int

The lower bound of the numerical hyperparameter.

prior: int | None = None

Prior value for the hyperparameter.

prior_confidence: Literal['low', 'medium', 'high'] = 'low'

Confidence in the prior value when using prior-based optimization.

upper: int

The upper bound of the numerical hyperparameter.

Constant

A constant value for a parameter.

This kind of parameter is used to represent hyperparameters with values that should not change during optimization.

For example, the batch_size hyperparameter in a neural network search space can be a Constant with a value of 32.

import neps

batch_size = neps.Constant(32)

center: Any

The center of the hyperparameter.

Warning

There is no real center of a constant value, hence we take this to be the value itself.

Using your knowledge, providing a Prior

When optimizing, you can provide your own knowledge using the prior= argument. The prior= you give is taken to be your user prior: your knowledge about where a good value for this parameter lies.

You can also specify a prior_confidence= to indicate how strongly you want NePS to focus on the prior, one of "low", "medium", or "high".

import neps

neps.run(
    ...,
    pipeline_space={
        "learning_rate": neps.Float(1e-4, 1e-1, log=True, prior=1e-2, prior_confidence="medium"),
        "num_epochs": neps.Integer(3, 30, is_fidelity=True),
        "optimizer": neps.Categorical(["adam", "sgd", "rmsprop"], prior="adam", prior_confidence="low"),
        "dropout_rate": neps.Constant(0.5),
    }
)

Must set prior= for all parameters, if any

If you specify prior= for one parameter, you must do so for all of your parameters. This will be improved in future versions.
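To illustrate the rule, here is a sketch with hypothetical parameter names:

import neps

# Allowed: every tunable parameter carries a prior.
space_ok = {
    "lr": neps.Float(1e-4, 1e-1, log=True, prior=1e-2),
    "momentum": neps.Float(0.0, 1.0, prior=0.9),
}

# Not allowed: "momentum" lacks a prior while "lr" has one.
space_bad = {
    "lr": neps.Float(1e-4, 1e-1, log=True, prior=1e-2),
    "momentum": neps.Float(0.0, 1.0),
}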

Interaction with is_fidelity

If you specify is_fidelity=True for a parameter, its prior= and prior_confidence= are ignored. This will be disallowed in future versions.

Currently, the two major algorithms in NePS that exploit priors are PriorBand (prior-based HyperBand) and PiBO, a version of Bayesian optimization that uses priors. For more information on priors and the algorithms that use them, please refer to the prior documentation.
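To opt into one of these algorithms explicitly, you can name it when calling neps.run. The following is a sketch only: the argument shown as optimizer= has been called searcher= in some NePS versions, and the exact algorithm keys may differ, so check the API of your installed version:

import neps

neps.run(
    ...,
    pipeline_space=pipeline_space,
    optimizer="priorband",  # assumed algorithm key; may be searcher= in older versions
)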

Using ConfigSpace

Users familiar with the ConfigSpace library can also define the pipeline_space through ConfigurationSpace():

from ConfigSpace import ConfigurationSpace, Float

configspace = ConfigurationSpace(
    {
        "learning_rate": Float("learning_rate", bounds=(1e-4, 1e-1), log=True),
        "optimizer": ["adam", "sgd", "rmsprop"],
        "dropout_rate": 0.5,
    }
)
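The resulting ConfigurationSpace can then be passed to neps.run in place of a dictionary, following the same pattern as earlier in this section:

import neps

neps.run(..., pipeline_space=configspace)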

Warning

Parameters you wish to use as a fidelity are not supported through ConfigSpace at this time.

For additional information on ConfigSpace and its features, please visit the ConfigSpace documentation.