The Luby Toy Benchmark

Task: learning the Luby sequence with variations
Cost: correctness of sequence element prediction
Number of hyperparameters to control: one integer
State Information: Actions and timesteps of the last three iterations
Noise Level: None
Instance space: the Luby sequence with options to modify the starting point of the series (e.g. element 5 instead of 1) as well as the repetition of each element

This benchmark is not built on top of an algorithm; instead, it is a pure sequence learning task. At each step until the cutoff, the DAC controller's task is to predict the next element of the Luby sequence. A correct prediction is rewarded with 1, an incorrect one with 0.

The benchmark is very cheap to run, but can be configured to be quite challenging nonetheless. In its basic form, it serves to validate DAC methods and to observe how well they learn a sequence of predictions.
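
A minimal sketch of one episode with a random controller (assuming dacbench is installed; reset() is assumed here to follow the Gymnasium convention of returning a (state, info) pair, which may differ between versions):

    from dacbench.benchmarks import LubyBenchmark

    # Build the Luby environment with its default configuration
    env = LubyBenchmark().get_environment()

    state, info = env.reset()
    total_reward = 0
    terminated = truncated = False
    while not (terminated or truncated):
        # A trained DAC controller would predict the next Luby element here
        action = env.action_space.sample()
        state, reward, terminated, truncated, info = env.step(action)
        total_reward += reward  # 1 for a correct prediction, 0 otherwise
    print(f"Correct predictions this episode: {total_reward}")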

The Luby benchmark was constructed by Biedenkapp et al. for the paper `"Dynamic Algorithm Configuration: Foundation of a New Meta-Algorithmic Framework" <https://www.tnt.uni-hannover.de/papers/data/1432/20-ECAI-DAC.pdf>`_ at ECAI 2020.

Luby Benchmark.

class dacbench.benchmarks.luby_benchmark.LubyBenchmark(config_path=None, config=None)[source]

Bases: AbstractBenchmark

Benchmark with default configuration & relevant functions for Luby.

get_benchmark(min_l=8, fuzziness=1.5, seed=0)[source]

Get Benchmark from DAC paper.

Parameters:
  • min_l (int) – Minimum sequence length; 8, 16 or 32 in the paper

  • fuzziness (float) – Amount of noise applied; 1.5 for most of the experiments

  • seed (int) – Environment seed

Returns:

LubyEnv: Luby environment
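
For example, to recreate the environment used in the paper's experiments (a sketch; the values shown are the documented defaults):

    from dacbench.benchmarks import LubyBenchmark

    # Benchmark as configured for the ECAI 2020 experiments
    env = LubyBenchmark().get_benchmark(min_l=8, fuzziness=1.5, seed=0)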

get_environment()[source]

Return Luby env with current configuration.

Returns:

LubyEnv: Luby environment

read_instance_set(test=False)[source]

Read instance set from file.

set_cutoff(steps)[source]

Set cutoff and adapt dependencies.

Parameters:

steps (int) – Maximum number of steps

set_history_length(length)[source]

Set history length and adapt dependencies.

Parameters:

length (int) – History length
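
Both setters adapt the benchmark configuration before an environment is created. A sketch (the argument values are illustrative, not defaults):

    from dacbench.benchmarks import LubyBenchmark

    bench = LubyBenchmark()
    bench.set_cutoff(64)         # episodes end after at most 64 predictions
    bench.set_history_length(5)  # observe the last 5 actions/timesteps instead of 3
    env = bench.get_environment()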

Luby environment from “Dynamic Algorithm Configuration: Foundation of a New Meta-Algorithmic Framework” by A. Biedenkapp, H. F. Bozkurt, T. Eimer, F. Hutter and M. Lindauer. Original environment authors: André Biedenkapp, H. Furkan Bozkurt.

class dacbench.envs.luby.LubyEnv(config)[source]

Bases: AbstractEnv

Environment to learn Luby Sequence.

close() bool[source]

Close Env.

Returns:

bool: Closing confirmation

get_default_reward(_)[source]

The default reward function.

Parameters:

_ – Empty parameter, which can be used when overriding

Returns:

The calculated reward

Return type:

float

get_default_state(_)[source]

Default state function.

Parameters:

_ – Empty parameter, which can be used when overriding

Returns:

dict: The current state

render(mode: str = 'human') None[source]

Render env in human mode.

Parameters:

mode (str) – Execution mode

reset(seed=None, options=None) list[int][source]

Resets env.

Returns:

numpy.array: Environment state

step(action: int)[source]

Execute environment step.

Parameters:

action (int) – Action to execute

Returns:

(state, reward, terminated, truncated, info) – Tuple of np.array, float, bool, bool and dict

class dacbench.envs.luby.LubyInstance(start_shift: float, sticky_shift: float)[source]

Bases: object

Luby Instance.

dacbench.envs.luby.luby_gen(i)[source]

Generator for the Luby Sequence.
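
The generated sequence is the standard Luby sequence 1, 1, 2, 1, 1, 2, 4, 1, 1, 2, ..., defined by L(i) = 2^(k-1) if i = 2^k - 1, and L(i) = L(i - 2^(k-1) + 1) if 2^(k-1) <= i < 2^k - 1. A self-contained sketch of this recurrence (for illustration only; the actual generator in dacbench.envs.luby may be implemented differently):

    def luby(i: int) -> int:
        # i-th element (1-indexed) of the Luby sequence
        k = 1
        while True:
            if i == 2 ** k - 1:
                return 2 ** (k - 1)
            if 2 ** (k - 1) <= i < 2 ** k - 1:
                return luby(i - 2 ** (k - 1) + 1)
            k += 1

    print([luby(i) for i in range(1, 16)])
    # [1, 1, 2, 1, 1, 2, 4, 1, 1, 2, 1, 1, 2, 4, 8]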