
Optimization Guide#

One of the core tasks of any AutoML system is to optimize some objective, whether it be a full machine learning pipeline, a black-box function or even a toy function. In the context of AMLTK, this means defining some Metric(s) to optimize and creating an Optimizer to optimize them.

You can check out the integrated optimizers in our optimizer reference.

This guide relies lightly on topics covered in the Pipeline guide for creating a pipeline, as well as the Scheduling guide for creating a Scheduler and a Task. Neither is required, but if something is unclear or you'd like to know how something works, please refer to these guides or the reference!

Optimizing a 1-D function#

We'll start with a simple example of maximizing a 1-D function. The first thing to do is define the function we want to optimize.

import numpy as np
import matplotlib.pyplot as plt

def poly(x):
    return (x**2 + 4*x + 3) / x

# Plot the function over the range we'll optimize over.
fig, ax = plt.subplots()
x = np.linspace(-10, 10, 100)
ax.plot(x, poly(x))

(Plot of poly(x) over the range [-10, 10].)
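
Before creating an optimizer, a quick check over the same grid gives us a rough idea of where the maximum lies:

import numpy as np

def poly(x):
    return (x**2 + 4*x + 3) / x

# Find the best of the plotted sample points.
x = np.linspace(-10, 10, 100)
best = np.argmax(poly(x))
print(f"Best grid point: x={x[best]:.3f}, poly(x)={poly(x)[best]:.3f}")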

Our next step is to define the search range over which we want to optimize, in this case, the range of values x can take. Here we use a simple Searchable; however, we can represent entire machine learning pipelines, with conditionality and much more complex ranges. (Pipeline guide)

Vocab...

When dealing with such functions, one might call x just a parameter. However, in the context of machine learning, if this poly() function were more like train_model(), then we would refer to x as a hyperparameter, with its range as its search space.

from amltk.pipeline import Searchable

def poly(x: float) -> float:
    return (x**2 + 4*x + 3) / x

s = Searchable(
    {"x": (-10.0, 10.0)},
    name="my-searchable"
)
print(s)

╭─ Searchable(my-searchable) ─╮
│ space {'x': (-10.0, 10.0)} │
╰─────────────────────────────╯

Creating an Optimizer#

We'll use SMAC here for optimization as an example, but you can find other available optimizers here.

Requirements

This requires smac which can be installed with:

pip install amltk[smac]

# Or directly
pip install smac

The first thing we'll need to do is create a Metric: a definition of some value we want to optimize.

from amltk.optimization import Metric

metric = Metric("score", minimize=False)
print(metric)
score (maximize)
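
If you know the range of values your metric can take, you can also declare bounds, which some optimizers can exploit. A small sketch, assuming the bounds= keyword of Metric (check the reference for your version):

from amltk.optimization import Metric

# An accuracy-style metric bounded between 0 and 1
# (bounds= is an assumption taken from the Metric API).
accuracy = Metric("accuracy", minimize=False, bounds=(0.0, 1.0))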

The next step is to actually create an optimizer. For specifics, you'll have to refer to each optimizer's reference documentation; however, for most integrated optimizers, we expose a helpful create().

from amltk.optimization.optimizers.smac import SMACOptimizer
from amltk.optimization import Metric
from amltk.pipeline import Searchable

def poly(x: float) -> float:
    return (x**2 + 4*x + 3) / x

metric = Metric("score", minimize=False)
space = Searchable(space={"x": (-10.0, 10.0)}, name="my-searchable")

optimizer = SMACOptimizer.create(space=space, metrics=metric, seed=42)

Running an Optimizer#

At this point, we can begin optimizing our function, using ask() to get Trials and tell() to pass back Trial.Reports.

from amltk.optimization.optimizers.smac import SMACOptimizer
from amltk.optimization import Metric, History, Trial
from amltk.pipeline import Searchable

def poly(x: float) -> float:
    return (x**2 + 4*x + 3) / x

metric = Metric("score", minimize=False)
space = Searchable(space={"x": (-10.0, 10.0)}, name="my-searchable")

optimizer = SMACOptimizer.create(space=space, metrics=metric, seed=42)

reports = []
for _ in range(10):
    trial: Trial = optimizer.ask()  # Ask the optimizer for a new trial
    print(f"Evaluating trial {trial.name} with config {trial.config}")
    x = trial.config["my-searchable:x"]

    with trial.begin():  # Time/profile the evaluation and capture exceptions
        score = poly(x)

    report: Trial.Report = trial.success(score=score)
    optimizer.tell(report)  # Let the optimizer learn from the result
    reports.append(report)

last_report = reports[-1]
print(last_report.config, last_report.metrics)
Evaluating trial config_id=1_seed=1608637542_budget=None_instance=None with config {'my-searchable:x': 5.9014238975942135}
Evaluating trial config_id=2_seed=1608637542_budget=None_instance=None with config {'my-searchable:x': -2.0745517686009407}
Evaluating trial config_id=3_seed=1608637542_budget=None_instance=None with config {'my-searchable:x': -8.257772866636515}
Evaluating trial config_id=4_seed=1608637542_budget=None_instance=None with config {'my-searchable:x': 4.430919848382473}
Evaluating trial config_id=5_seed=1608637542_budget=None_instance=None with config {'my-searchable:x': 0.24310464039444923}
Evaluating trial config_id=6_seed=1608637542_budget=None_instance=None with config {'my-searchable:x': -6.413793563842773}
Evaluating trial config_id=7_seed=1608637542_budget=None_instance=None with config {'my-searchable:x': -2.58980056270957}
Evaluating trial config_id=8_seed=1608637542_budget=None_instance=None with config {'my-searchable:x': 8.760508447885513}
Evaluating trial config_id=9_seed=1608637542_budget=None_instance=None with config {'my-searchable:x': 8.428955599665642}
Evaluating trial config_id=10_seed=1608637542_budget=None_instance=None with config {'my-searchable:x': -4.599663596600294}
{'my-searchable:x': -4.599663596600294} {'score': -1.2518852073757778}

Okay, there are a few things introduced all at once here, so let's go over them bit by bit.

The Trial object#

The Trial object is the main object that you'll be interacting with when optimizing. It contains a load of useful properties and functionality to help you during optimization. What we introduced here is .config, which contains a key-value mapping of the parameters to optimize, in this case, x. We also wrap the actual evaluation of the function in a with trial.begin(): block, which will time and profile the evaluation and handle any exceptions that occur within the block, attaching the exception to .exception and the traceback to .traceback. Lastly, we use trial.success() to generate a Trial.Report, which we pass back to the optimizer with tell().
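
For example, if an exception was raised inside the with trial.begin(): block, calling trial.success() would not make sense. A minimal sketch of handling both outcomes, assuming trial.fail() produces the failure Report as the Trial API describes:

with trial.begin():
    score = poly(x)

if trial.exception is None:
    report = trial.success(score=score)
else:
    # The exception and traceback are already attached to the trial;
    # trial.fail() turns them into a failure Report.
    report = trial.fail()

optimizer.tell(report)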

We'll cover more of this later in the guide but feel free to check out the full API.
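
As a teaser for what comes later, the History we imported above offers a tidier alternative to collecting reports in a plain list. A sketch, assuming History.add() and History.df() as shown in the optimization reference:

from amltk.optimization import History

# Re-using the optimizer and poly() from the loop above.
history = History()
for _ in range(10):
    trial = optimizer.ask()
    with trial.begin():
        score = poly(trial.config["my-searchable:x"])
    report = trial.success(score=score)
    optimizer.tell(report)
    history.add(report)

# One row per trial, with configs and metrics as columns (assumed layout).
print(history.df().head())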


TODO

Everything past here is likely out-dated, sorry. Material in the Pipeline guide and the Scheduling guide is more up-to-date.

Running an Optimizer#

Now that we have an optimizer that knows the space to search, we can begin to actually ask() the optimizer for the next Trial, run our function and return a Trial.Report.

First, we need to modify the function we wish to optimize so that it accepts the Trial and returns the Report.

Running the Optimizer
from amltk.optimization import RandomSearch, RSTrialInfo, Trial
from amltk.pipeline import searchable

def poly(trial: Trial[RSTrialInfo]) -> Trial.Report[RSTrialInfo]:  # (4)!
    x = trial.config["x"]
    with trial.begin():  # (1)!
        y = (x**2 + 4*x + 3) / x
        return trial.success(cost=y)  # (2)!

    return trial.fail()  # (3)!

s = searchable("parameters", space={"x": (-10.0, 10.0)})

space = s.space()
random_search = RandomSearch(space=space, seed=42)

results: list[float] = []

for _ in range(20):
    trial = random_search.ask()
    report = poly(trial)
    random_search.tell(report)

    cost = report.results["cost"]
    results.append(cost)
  1. Using with trial.begin():, you let us know where exactly your trial begins, and we can handle all things related to exception handling and timing.
  2. If the trial was successful, return with trial.success().
  3. If the trial could not complete, return with trial.fail().
  4. Here the inner type parameter RSTrialInfo is the type of trial.info, which contains the object returned by the ask() of the wrapped optimizer. We'll see this in integrating your own Optimizer.

Running the Optimizer in a parallel fashion#

Now that we've seen the basic optimization loop, it's time to parallelize it with a Scheduler and a Task. We cover the Scheduler and Tasks in the Scheduling guide if you'd like to know more about how this works.

We first create a Scheduler with 1 process and run it for 5 seconds. Using the event system of AutoML-Toolkit, we define what happens through callbacks registered to certain events, such as launching a single trial on @scheduler.on_start and telling the optimizer whenever we get a report back with @task.on_result.

Creating a Task for a Trial
from amltk.optimization import RandomSearch, Trial, RSTrialInfo
from amltk.pipeline import searchable
from amltk.scheduling import Scheduler

def poly(trial: Trial[RSTrialInfo]) -> Trial.Report[RSTrialInfo]:
    x = trial.config["x"]
    with trial.begin():
        y = (x**2 + 4*x + 3) / x
        return trial.success(cost=y)

    return trial.fail()

s = searchable("parameters", space={"x": (-10.0, 10.0)})
space = s.space()

random_search = RandomSearch(space=space, seed=42)
scheduler = Scheduler.with_processes(1)

task = scheduler.task(poly)  # (5)!

results: list[float] = []

@scheduler.on_start  # (1)!
def launch_trial() -> None:
    trial = random_search.ask()
    task(trial)

@task.on_result  # (2)!
def tell_optimizer(report: Trial.Report) -> None:
    random_search.tell(report)

@task.on_result
def launch_another_trial(_: Trial.Report) -> None:
    trial = random_search.ask()
    task(trial)

@task.on_result  # (3)!
def save_result(report: Trial.Report) -> None:
    cost = report.results["cost"]
    results.append(cost)  # (4)!

scheduler.run(timeout=5)
  1. The function launch_trial() gets called when the scheduler starts, asking the optimizer for a trial and launching the task with the trial. launch_trial() gets called in the main process but task(trial) will get called in a separate process.
  2. The function tell_optimizer gets called whenever the task returns a report. We should tell the optimizer about this report.
  3. This function save_result gets called whenever we have a successful trial.
  4. We don't store anything more than the optimizer needs. Saving results that you wish to access later is up to you.
  5. Here we wrap the function we want to run in another process in a Task. There are other backends than processes, e.g. clusters, for which you should check out the Scheduling guide.

Now, to scale up, we trivially increase the number of initial trials launched with @scheduler.on_start and the number of processes in our Scheduler. That's it.

from amltk.optimization import RandomSearch, Trial, RSTrialInfo
from amltk.pipeline import searchable
from amltk.scheduling import Scheduler

def poly(trial: Trial[RSTrialInfo]) -> Trial.Report[RSTrialInfo]:
    x = trial.config["x"]
    with trial.begin():
        y = (x**2 + 4*x + 3) / x
        return trial.success(cost=y)

    return trial.fail()

s = searchable("parameters", space={"x": (-10.0, 10.0)})
space = s.space()

random_search = RandomSearch(space=space, seed=42)

n_workers = 4
scheduler = Scheduler.with_processes(n_workers)

task = scheduler.task(poly)

results: list[float] = []

@scheduler.on_start(repeat=n_workers)
def launch_trial() -> None:
    trial = random_search.ask()
    task(trial)

@task.on_result
def tell_optimizer(report: Trial.Report) -> None:
    random_search.tell(report)

@task.on_result
def launch_another_trial(_: Trial.Report) -> None:
    trial = random_search.ask()
    task(trial)

@task.on_result
def save_result(report: Trial.Report) -> None:
    cost = report.results["cost"]
    results.append(cost)

scheduler.run(timeout=5)

That concludes the main portion of our Optimization guide. AutoML-Toolkit provides a host of more useful options, such as:

  • Setting constraints on your evaluation function, such as memory, wall time and CPU time limits, concurrency limits and call limits. Please refer to the Scheduling guide for more information.
  • Stopping the scheduler with whatever stopping criterion you wish, as sketched below. Please refer to the Scheduling guide for more information.
  • Optimizing over complex pipelines. Please refer to the Pipeline guide for more information.
  • Using different parallelization strategies, such as Dask, Ray, Slurm, and Apache Airflow.
  • Using a whole host of additional callbacks to control your system; check out the Scheduling guide for more information.
  • Running the scheduler using asyncio to allow interactivity, running as a server, or other more advanced use cases.
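
For instance, building on the parallel example above, a custom stopping criterion is just another callback. A hedged sketch, assuming Scheduler.stop() as described in the Scheduling guide:

@task.on_result
def stop_after_enough_results(_: Trial.Report) -> None:
    # Stop the whole scheduler once we've collected 50 costs.
    if len(results) >= 50:
        scheduler.stop()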