benchmark
class PD1Result (dataclass)

score: float (property)
The score of interest.

error: float (property)
The error of interest.

val_score: float (property)
The score on the validation set.

val_error: float (property)
The error on the validation set.
cost: float (property)
The train cost of the model (assumed to be in seconds). Please double check with YAHPO.
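For orientation, here is a self-contained sketch of how a result shaped like PD1Result might be modelled and consumed. PD1ResultSketch is a hypothetical stand-in defined here for illustration, not mfpbench's actual class:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class PD1ResultSketch:
    """Hypothetical stand-in mirroring PD1Result's fields (illustrative only)."""

    score: float      # the score of interest
    error: float      # the error of interest
    val_score: float  # score on the validation set
    val_error: float  # error on the validation set
    cost: float       # train cost, assumed to be in seconds


# Pick the run with the lowest validation error from a set of results.
runs = [
    PD1ResultSketch(score=0.90, error=0.10, val_score=0.88, val_error=0.12, cost=30.0),
    PD1ResultSketch(score=0.93, error=0.07, val_score=0.91, val_error=0.09, cost=45.0),
]
best = min(runs, key=lambda r: r.val_error)
print(best.val_error)  # → 0.09
```

A frozen dataclass keeps results immutable, which matches how benchmark results are typically treated: queried once, then compared.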
class PD1ResultSimple (dataclass)

class PD1ResultTransformer (dataclass)
class PD1Benchmark(*, datadir=None, seed=None, prior=None, perturb_prior=None)
PARAMETER | DESCRIPTION
---|---
datadir | Path to the data directory
seed | The seed to use for the space
prior | Any prior to use for the benchmark
perturb_prior | Whether to perturb the prior. If specified, this is interpreted as the std of a normal from which to perturb numerical hyperparameters of the prior, and the raw probability of swapping a categorical value.
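The perturb_prior semantics described above can be sketched as follows. The helper names are invented for illustration; the real logic lives inside mfpbench and may differ in detail:

```python
import random


def perturb_numerical(value: float, std: float, low: float, high: float) -> float:
    """Add Gaussian noise with the given std to a numerical hyperparameter,
    clipping back into its bounds (illustrative, not mfpbench's exact code)."""
    return min(high, max(low, value + random.gauss(0.0, std)))


def perturb_categorical(value: str, choices: list[str], p: float) -> str:
    """With raw probability p, swap a categorical value for a different choice."""
    if random.random() < p and len(choices) > 1:
        return random.choice([c for c in choices if c != value])
    return value


random.seed(0)
lr = perturb_numerical(0.01, std=0.25, low=1e-4, high=1.0)
opt = perturb_categorical("adam", ["adam", "sgd", "rmsprop"], p=0.25)
print(1e-4 <= lr <= 1.0)  # → True
```

Note the single perturb_prior value plays a double role: a noise scale for numerical hyperparameters and a swap probability for categorical ones.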
Source code in src/mfpbench/pd1/benchmark.py
pd1_dataset: str (class variable)
The dataset that this benchmark uses.

pd1_model: str (class variable)
The model that this benchmark uses.

pd1_batchsize: int (class variable)
The batch size that this benchmark uses.

pd1_metrics: tuple[str, ...] (class variable)
The metrics that are available for this benchmark.
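Concrete PD1 benchmarks are expected to fill in these class variables. A minimal standalone sketch, using an invented base class and made-up dataset/model/metric names so it runs without mfpbench installed:

```python
# Stand-in base so this sketch runs standalone; the real base is
# mfpbench's PD1Benchmark, and all concrete names below are invented.
class PD1BenchmarkSketch:
    pd1_dataset: str
    pd1_model: str
    pd1_batchsize: int
    pd1_metrics: tuple[str, ...]


class PD1ExampleBenchmark(PD1BenchmarkSketch):
    pd1_dataset = "example_dataset"
    pd1_model = "example_model"
    pd1_batchsize = 64
    pd1_metrics = ("valid_error_rate", "train_cost")


print(PD1ExampleBenchmark.pd1_metrics)  # → ('valid_error_rate', 'train_cost')
```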
Config: type[C] (attribute)
The config type for this benchmark.

Result: type[R] (attribute)
The result type for this benchmark.

surrogates: dict[str, XGBRegressor] (property)
The surrogates for this benchmark, one per metric.

surrogate_dir: Path (property)
The directory where the surrogates are stored.

surrogate_paths: dict[str, Path] (property)
The paths to the surrogates.
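The relationship between surrogate_dir, surrogate_paths, and the per-metric surrogates can be sketched as below. The file layout and naming are assumptions for illustration; mfpbench's actual on-disk format may differ:

```python
from pathlib import Path


def surrogate_paths(surrogate_dir: Path, metrics: tuple[str, ...]) -> dict[str, Path]:
    """One surrogate file per metric under the surrogate directory
    (hypothetical naming scheme, not necessarily mfpbench's layout)."""
    return {metric: surrogate_dir / f"{metric}.json" for metric in metrics}


paths = surrogate_paths(Path("surrogates"), ("valid_error_rate", "train_cost"))
print(sorted(paths))  # → ['train_cost', 'valid_error_rate']
```

Keeping one surrogate model per metric lets each XGBRegressor be loaded lazily and only for the metrics a caller actually queries.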