The Function Approximation Toy Benchmark¶
This benchmark is not built on top of an algorithm; it is a pure function approximation task. In each step until the cutoff, the DAC controller predicts one y-value for a given function curve per task dimension. The predictions can be configured to be discrete, in which case there is some distance between the true function value and the best possible prediction. This distance serves as the cost function. If multiple task dimensions are used, the total cost is the product of the per-dimension costs. Furthermore, it is possible to assign different importance weights to each dimension, mirroring hyperparameter importances.
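As a minimal sketch of the cost computation described above (the function names and the exact way weights enter the product are assumptions for illustration, not DACBench's actual API), the multi-dimensional cost could look like this:

```python
def dimension_cost(true_value, prediction):
    """Distance between the true function value and the prediction.

    With discrete predictions, this distance is generally nonzero even
    for the best possible prediction.
    """
    return abs(true_value - prediction)


def total_cost(true_values, predictions, weights=None):
    """Multiply per-dimension costs; optional importance weights
    scale each dimension's contribution (illustrative choice)."""
    if weights is None:
        weights = [1.0] * len(true_values)
    cost = 1.0
    for t, p, w in zip(true_values, predictions, weights):
        cost *= w * dimension_cost(t, p)
    return cost
```

Note that with a multiplicative aggregation, a perfect prediction in any single dimension drives the total cost to zero.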
The benchmark is very cheap to run, and its instances can be sampled and shaped easily. It is therefore a good starting point for any new DAC method, or for gaining specific insights that require fine control over the instance distribution.
The Sigmoid benchmark was constructed by Biedenkapp et al. for the paper `”Dynamic Algorithm Configuration: Foundation of a New Meta-Algorithmic Framework” <https://www.tnt.uni-hannover.de/papers/data/1432/20-ECAI-DAC.pdf>`_ at ECAI 2020 and later extended to include multiple function classes and importance weights.
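The sigmoid instances from the original paper are shifted and scaled logistic curves. A hedged sketch of such a curve (the exact parameterization and instance sampling in DACBench may differ) is:

```python
import math


def sigmoid(t, scale, shift):
    """Logistic curve evaluated at step t.

    `shift` moves the inflection point along the time axis;
    `scale` controls the slope (and, if negative, the direction).
    """
    return 1.0 / (1.0 + math.exp(-scale * (t - shift)))
```

At each step, the controller's task is to predict the value of such a curve per dimension as closely as the (possibly discrete) action space allows.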
Function Approximation Benchmark.
- class dacbench.benchmarks.function_approximation_benchmark.FunctionApproximationBenchmark(config_path=None, config=None)[source]¶
Bases:
AbstractBenchmark
Benchmark with default configuration & relevant functions for Function Approximation.
- get_benchmark(dimension=None, seed=0)[source]¶
Get Sigmoid Benchmark from DAC paper.
- Parameters:
dimension (int) – Sigmoid dimension, was 1, 2, 3 or 5 in the paper
seed (int) – Environment seed
- Returns:
env (SigmoidEnv) – Sigmoid environment
Function Approximation Environment.
- class dacbench.envs.function_approximation.FunctionApproximationEnv(config)[source]¶
Bases:
AbstractMADACEnv
Function Approximation Environment.