Parameters

A module of all the parameters for the search space.

Parameter module-attribute #

A type alias for all the parameter types.

A Constant is not included, as its value does not change during optimization.
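
For orientation, the following is a minimal sketch of a search space dictionary built from these parameter types (the hyperparameter names and ranges are illustrative; a Constant may still appear in a search space even though it is not part of this alias):

import neps

# Illustrative search space combining the parameter types documented below.
pipeline_space = {
    "learning_rate": neps.Float(1e-4, 1e-1, log=True),
    "num_layers": neps.Integer(1, 16),
    "optimizer": neps.Categorical(["adam", "sgd", "rmsprop"]),
    "batch_size": neps.Constant(32),
}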

Categorical dataclass #

Categorical(
    choices: list[float | int | str],
    prior: float | int | str | None = None,
    prior_confidence: Literal[
        "low", "medium", "high"
    ] = "low",
)

A list of unordered choices for a parameter.

This kind of parameter is used to represent hyperparameters that can take on a discrete set of unordered values. For example, the optimizer hyperparameter in a neural network search space can be a Categorical with choices like ["adam", "sgd", "rmsprop"].

import neps

optimizer_choice = neps.Categorical(
    ["adam", "sgd", "rmsprop"],
    prior="adam"
)

center class-attribute instance-attribute #

center: float | int | str = field(init=False)

The center value of the categorical hyperparameter.

As there is no natural center for a categorical parameter, this is the first value in the choices list.

choices instance-attribute #

choices: list[float | int | str]

The list of choices for the categorical hyperparameter.

prior class-attribute instance-attribute #

prior: float | int | str | None = None

The prior value for the categorical hyperparameter.

prior_confidence class-attribute instance-attribute #

prior_confidence: Literal['low', 'medium', 'high'] = 'low'

Confidence score for the prior value when considering prior-based optimization.
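
For example, a prior can be paired with a confidence level when constructing the parameter. This is an illustrative sketch; "high" simply signals strong trust in the suggested choice:

import neps

# Prior-aware optimizers sample "adam" more often the higher the confidence.
optimizer_choice = neps.Categorical(
    ["adam", "sgd", "rmsprop"],
    prior="adam",
    prior_confidence="high",
)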

Constant dataclass #

Constant(value: Any)

A constant value for a parameter.

This kind of parameter is used to represent hyperparameters with values that should not change during optimization.

For example, the batch_size hyperparameter in a neural network search space can be a Constant with a value of 32.

import neps

batch_size = neps.Constant(32)

center property #

center: Any

The center of the hyperparameter.

Warning

There is no real center of a constant value, hence we take this to be the value itself.

Float dataclass #

Float(
    lower: float,
    upper: float,
    log: bool = False,
    prior: float | None = None,
    prior_confidence: Literal[
        "low", "medium", "high"
    ] = "low",
    is_fidelity: bool = False,
)

A float value for a parameter.

This kind of parameter is used to represent hyperparameters with continuous float values, optionally specifying if it exists on a log scale.

For example, l2_norm could be a value in (0, 1), while the learning_rate hyperparameter in a neural network search space can be a Float with a range of (0.0001, 0.1), but on a log scale.

import neps

l2_norm = neps.Float(0, 1)
learning_rate = neps.Float(1e-4, 1e-1, log=True)

center class-attribute instance-attribute #

center: float = field(init=False)

The center value of the numerical hyperparameter.

is_fidelity class-attribute instance-attribute #

is_fidelity: bool = False

Whether the hyperparameter is a fidelity parameter.
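
For example, a fidelity parameter gives multi-fidelity optimizers a cheap-to-expensive axis on which to evaluate configurations. This is an illustrative sketch; data_fraction is a hypothetical hyperparameter name:

import neps

# Configurations can first be evaluated on a small fraction of the training data.
data_fraction = neps.Float(0.1, 1.0, is_fidelity=True)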

log class-attribute instance-attribute #

log: bool = False

Whether the hyperparameter is in log space.

lower instance-attribute #

lower: float

The lower bound of the numerical hyperparameter.

prior class-attribute instance-attribute #

prior: float | None = None

Prior value for the hyperparameter.

prior_confidence class-attribute instance-attribute #

prior_confidence: Literal['low', 'medium', 'high'] = 'low'

Confidence score for the prior value when considering prior-based optimization.
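
For example, a numeric prior and its confidence can be set together. This is an illustrative sketch; the values are arbitrary:

import neps

# Sampling is biased towards 1e-3; "medium" confidence still allows exploration.
learning_rate = neps.Float(
    1e-4,
    1e-1,
    log=True,
    prior=1e-3,
    prior_confidence="medium",
)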

upper instance-attribute #

upper: float

The upper bound of the numerical hyperparameter.

Integer dataclass #

Integer(
    lower: int,
    upper: int,
    log: bool = False,
    prior: int | None = None,
    prior_confidence: Literal[
        "low", "medium", "high"
    ] = "low",
    is_fidelity: bool = False,
)

An integer value for a parameter.

This kind of parameter is used to represent hyperparameters with integer values, optionally specifying if they exist on a log scale.

For example, batch_size could be a value in (32, 128), while the num_layers hyperparameter in a neural network search space can be an Integer with a range of (1, 1000) but on a log scale.

import neps

batch_size = neps.Integer(32, 128)
num_layers = neps.Integer(1, 1000, log=True)

center class-attribute instance-attribute #

center: int = field(init=False)

The center value of the numerical hyperparameter.

is_fidelity class-attribute instance-attribute #

is_fidelity: bool = False

Whether the hyperparameter is a fidelity parameter.

log class-attribute instance-attribute #

log: bool = False

Whether the hyperparameter is in log space.

lower instance-attribute #

lower: int

The lower bound of the numerical hyperparameter.

prior class-attribute instance-attribute #

prior: int | None = None

Prior value for the hyperparameter.

prior_confidence class-attribute instance-attribute #

prior_confidence: Literal['low', 'medium', 'high'] = 'low'

Confidence score for the prior value when considering prior-based optimization.

upper instance-attribute #

upper: int

The upper bound of the numerical hyperparameter.