benchmark
class LCBenchTabularResult (dataclass)

Bases: Result[LCBenchTabularConfig, int]
score: float (property)

The score of interest.

error: float (property)

The error of interest.

val_score: float (property)

The score on the validation set.

val_error: float (property)

The error on the validation set.

test_score: float (property)

The score on the test set.

test_error: float (property)

The error on the test set.

cost: float (property)

The time to train the configuration (assumed to be seconds).
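The fields above can be pictured with a small, self-contained stand-in. This is an illustrative sketch, not mfpbench's actual class: `ResultSketch` is hypothetical, and the assumption that `score`/`error` default to the validation metrics is ours, not stated by the source.

```python
from dataclasses import dataclass

# Simplified stand-in for LCBenchTabularResult, mirroring the
# documented fields. The real class lives in mfpbench and derives
# these values from the tabular benchmark data.
@dataclass(frozen=True)
class ResultSketch:
    val_score: float    # score on the validation set
    val_error: float    # error on the validation set
    test_score: float   # score on the test set
    test_error: float   # error on the test set
    cost: float         # time to train the configuration, in seconds

    @property
    def score(self) -> float:
        # "The score of interest" -- here we ASSUME it is the
        # validation score; check the source for the real mapping.
        return self.val_score

    @property
    def error(self) -> float:
        # Same assumption for "the error of interest".
        return self.val_error


result = ResultSketch(val_score=0.91, val_error=0.09,
                      test_score=0.89, test_error=0.11, cost=42.0)
print(result.score, result.cost)  # -> 0.91 42.0
```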
class LCBenchTabularBenchmark(task_id, datadir=None, *, remove_constants=False, seed=None, prior=None, perturb_prior=None)

Bases: TabularBenchmark
PARAMETER | DESCRIPTION
---|---
task_id | The task to benchmark on.
datadir | The directory to look for the data in. If
remove_constants | Whether to remove constant config columns from the data or not.
seed | The seed to use.
prior | The prior to use for the benchmark. If None, no prior is used. If a str, will check the local location first for a prior specific to this benchmark, otherwise assumes it to be a Path. If a Path, will load the prior from the path. If a Mapping, will be used directly.
perturb_prior | If not None, will perturb the prior by this amount. For numericals, this is interpreted as the standard deviation of a normal distribution, while for categoricals it is interpreted as the probability of swapping the value for a random one.
Source code in src/mfpbench/lcbench_tabular/benchmark.py
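The `perturb_prior` semantics described above can be sketched for a single hyperparameter value. This is a minimal illustration of the documented behavior, not mfpbench's implementation; `perturb_value` and its signature are hypothetical.

```python
import random


def perturb_value(value, amount, *, choices=None, rng=None):
    """Perturb one prior value, following the documented semantics:

    - numericals: add Gaussian noise with standard deviation `amount`
    - categoricals: with probability `amount`, swap the value for a
      random one from `choices`

    Illustrative sketch only; not part of mfpbench's API.
    """
    rng = rng or random.Random()
    if choices is not None:
        # Categorical hyperparameter: swap with probability `amount`.
        if rng.random() < amount:
            return rng.choice(choices)
        return value
    # Numerical hyperparameter: Gaussian noise with std `amount`.
    return value + rng.gauss(0.0, amount)


rng = random.Random(0)
print(perturb_value(0.5, 0.1, rng=rng))  # value near 0.5
print(perturb_value("relu", 0.25, choices=["relu", "tanh"], rng=rng))
```

With `amount=0` both branches leave the value untouched, which matches the idea that a smaller perturbation keeps configurations closer to the prior.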
task_ids: tuple[str, ...] (classvar)
('adult', 'airlines', 'albert', 'Amazon_employee_access', 'APSFailure', 'Australian', 'bank-marketing', 'blood-transfusion-service-center', 'car', 'christine', 'cnae-9', 'connect-4', 'covertype', 'credit-g', 'dionis', 'fabert', 'Fashion-MNIST', 'helena', 'higgs', 'jannis', 'jasmine', 'jungle_chess_2pcs_raw_endgame_complete', 'kc1', 'KDDCup09_appetency', 'kr-vs-kp', 'mfeat-factors', 'MiniBooNE', 'nomao', 'numerai28.6', 'phoneme', 'segment', 'shuttle', 'sylvine', 'vehicle', 'volkert')
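Since an unknown `task_id` cannot be benchmarked, one might validate it against the class variable before constructing the benchmark. A small convenience sketch; `TASK_IDS` below just mirrors the tuple above, and `validated_task_id` is a hypothetical helper, not part of mfpbench.

```python
# Mirrors LCBenchTabularBenchmark.task_ids documented above.
TASK_IDS = (
    'adult', 'airlines', 'albert', 'Amazon_employee_access', 'APSFailure',
    'Australian', 'bank-marketing', 'blood-transfusion-service-center',
    'car', 'christine', 'cnae-9', 'connect-4', 'covertype', 'credit-g',
    'dionis', 'fabert', 'Fashion-MNIST', 'helena', 'higgs', 'jannis',
    'jasmine', 'jungle_chess_2pcs_raw_endgame_complete', 'kc1',
    'KDDCup09_appetency', 'kr-vs-kp', 'mfeat-factors', 'MiniBooNE',
    'nomao', 'numerai28.6', 'phoneme', 'segment', 'shuttle', 'sylvine',
    'vehicle', 'volkert',
)


def validated_task_id(task_id: str) -> str:
    """Raise early, with a helpful message, for an unknown task id."""
    if task_id not in TASK_IDS:
        raise ValueError(
            f"Unknown task_id {task_id!r}; expected one of {TASK_IDS}"
        )
    return task_id


print(validated_task_id("adult"))  # -> adult
```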