benchmark
The Hartmann benchmarks.
The presets terrible, bad, moderate and good are empirically obtained hyperparameters for the Hartmann function.
The function flattens with increasing fidelity bias; together with increasing noise, this makes one configuration harder to distinguish from another. The construction works with any number of fidelity levels.
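As a rough illustration of how bias and noise interact with fidelity, here is a self-contained sketch of a multi-fidelity Hartmann3 in the style of Kandasamy et al. The Hartmann3 coefficients (`A`, `P`, `ALPHA`) are the standard ones; `DELTA`, the noise scaling, and `mf_hartmann3` itself are illustrative assumptions and may differ from mfpbench's actual generator.

```python
import numpy as np

# Standard Hartmann3 coefficients
A = np.array([[3.0, 10.0, 30.0],
              [0.1, 10.0, 35.0],
              [3.0, 10.0, 30.0],
              [0.1, 10.0, 35.0]])
P = 1e-4 * np.array([[3689.0, 1170.0, 2673.0],
                     [4699.0, 4387.0, 7470.0],
                     [1091.0, 8732.0, 5547.0],
                     [381.0, 5743.0, 8828.0]])
ALPHA = np.array([1.0, 1.2, 3.0, 3.2])
# Assumed perturbation directions for the alphas (illustrative only)
DELTA = np.array([0.01, -0.01, -0.1, 0.1])

def mf_hartmann3(x, z, bias=0.0, noise=0.0, rng=None):
    """Hartmann3 at fidelity z in [0, 1]; z=1.0 recovers the true function."""
    # Lower fidelity warps the alpha coefficients, flattening the landscape
    alpha = ALPHA + bias * (1.0 - z) * DELTA
    inner = np.sum(A * (np.asarray(x) - P) ** 2, axis=1)
    y = -float(np.sum(alpha * np.exp(-inner)))
    if noise > 0.0:
        rng = np.random.default_rng() if rng is None else rng
        # Observation noise also shrinks as fidelity increases
        y += (1.0 - z) * float(rng.normal(0.0, noise))
    return y

x_opt = (0.114614, 0.555649, 0.852547)
print(round(mf_hartmann3(x_opt, z=1.0), 3))  # ≈ -3.863, the known Hartmann3 optimum
```

At full fidelity (`z=1.0`) the bias and noise terms vanish, so the true Hartmann3 optimum is recovered exactly; at lower fidelities the returned value drifts away from it.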
class MFHartmann3Config (dataclass)
class MFHartmann6Config (dataclass)
class MFHartmannResult (dataclass)
score: float (prop)
The score of interest.

error: float (prop)
The error of interest.

test_score: float (prop)
Just returns the score.

test_error: float (prop)
Just returns the error.

val_score: float (prop)
Just returns the score.

val_error: float (prop)
Just returns the error.

cost: float (prop)
Just returns the fidelity.
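Since the test/val properties are thin aliases and cost simply reports the fidelity, the result type can be sketched as follows. `ResultSketch` is hypothetical, not mfpbench's actual class; in particular the sign convention between score and error is an assumption.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ResultSketch:
    value: float     # raw Hartmann value at the queried point
    fidelity: float  # the fidelity the point was evaluated at

    @property
    def score(self) -> float:
        # Assumed convention: higher score is better, so negate the raw value
        return -self.value

    @property
    def error(self) -> float:
        return self.value

    # test_* and val_* simply mirror score/error
    @property
    def test_score(self) -> float:
        return self.score

    @property
    def test_error(self) -> float:
        return self.error

    @property
    def val_score(self) -> float:
        return self.score

    @property
    def val_error(self) -> float:
        return self.error

    @property
    def cost(self) -> float:
        # The "cost" of a query is just its fidelity
        return self.fidelity

r = ResultSketch(value=-3.86, fidelity=1.0)
print(r.test_score, r.cost)  # 3.86 1.0
```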
class MFHartmannBenchmark(*, seed=None, bias=None, noise=None, prior=None, perturb_prior=None)
Bases: Benchmark, Generic[G, C]
PARAMETER | DESCRIPTION
---|---
seed | The seed to use.
bias | How much bias to introduce.
noise | How much noise to introduce.
prior | The prior to use for the benchmark.
perturb_prior | If not None, the prior is perturbed by this amount. For numerical hyperparameters this is interpreted as the scale of the perturbation, while for categoricals it is interpreted as the probability of swapping the value for a random one.
Source code in src/mfpbench/synthetic/hartmann/benchmark.py
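The perturb_prior behaviour described above can be sketched like this. `perturb` is a hypothetical helper, not mfpbench's API, and the numerical scheme (Gaussian noise with `amount` as the scale) is an assumption.

```python
import random

def perturb(value, amount, *, categories=None, rng=None):
    """Hypothetical sketch of prior perturbation.

    Categorical values swap to a random category with probability
    `amount`; numerical values receive additive noise scaled by
    `amount` (assumed Gaussian here; mfpbench's scheme may differ).
    """
    rng = rng if rng is not None else random.Random()
    if categories is not None:
        # Categorical: swap for a random category with probability `amount`
        if rng.random() < amount:
            return rng.choice(categories)
        return value
    # Numerical: jitter the value by noise with scale `amount`
    return value + rng.gauss(0.0, amount)

print(perturb("relu", 0.0, categories=["relu", "tanh"], rng=random.Random(0)))
```

With `amount=0.0` the prior is returned untouched; with `amount=1.0` a categorical value is always resampled.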
mfh_dims: int (classvar)
How many dimensions there are to the Hartmann function.

mfh_suffix: str (classvar)
Suffix for the benchmark name.

Config: type[C] (attr)
The Config type for this MFHartmann benchmark.

Generator: type[G] (attr)
The underlying MFHartmann function generator.

mfh_bias_noise: tuple[float, float] (classvar)
The default bias and noise for MFHartmann benchmarks.

optimum: C (prop)
The optimum of the benchmark.
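How the class-level mfh_bias_noise supplies defaults for the constructor's bias and noise arguments can be sketched as follows. `BenchSketch` and its default values are made up for illustration; this is not mfpbench's actual implementation.

```python
class BenchSketch:
    # Class-level (bias, noise) defaults shared by all instances (values made up)
    mfh_bias_noise: tuple[float, float] = (0.5, 0.1)

    def __init__(self, *, seed=None, bias=None, noise=None):
        self.seed = seed
        # Explicit arguments win; otherwise fall back to the class defaults
        default_bias, default_noise = self.mfh_bias_noise
        self.bias = bias if bias is not None else default_bias
        self.noise = noise if noise is not None else default_noise

b = BenchSketch()
print(b.bias, b.noise)  # 0.5 0.1
```

Subclasses can then tune the difficulty of a whole benchmark family by overriding the single class variable rather than every constructor call.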