
Trials

Trial#

A Trial is typically the output of Optimizer.ask(), indicating what the optimizer would like to evaluate next. We provide a host of convenience methods attached to the Trial to make it easy to save results, store artifacts, and more.

Paired with the Trial is the Trial.Report class, providing an easy way to report back to the optimizer's tell() with a simple trial.success(cost=...) or trial.fail(cost=...) call.

Trial#

Bases: RichRenderable, Generic[I]

A Trial encapsulates some configuration that needs to be evaluated. Typically this is what is generated by an Optimizer.ask() call.

Usage

To begin a trial, you can use the trial.begin(), which will catch exceptions/traceback and profile the block of code.

If all went smoothly, your trial was successful and you can use trial.success() to generate a success Report, typically passing what your chosen optimizer expects, e.g. "loss" or "cost".

If your trial failed, you can instead use the trial.fail() to generate a failure Report, where any caught exception will be attached to it. Each Optimizer will take care of what to do from here.

from amltk.optimization import Trial, Metric
from amltk.store import PathBucket

cost = Metric("cost", minimize=True)

def target_function(trial: Trial) -> Trial.Report:
    x = trial.config["x"]
    y = trial.config["y"]

    with trial.begin():
        cost = x**2 - y

    if trial.exception:
        return trial.fail()

    return trial.success(cost=cost)

# ... usually obtained from an optimizer
trial = Trial(name="some-unique-name", config={"x": 1, "y": 2}, metrics=[cost])

report = target_function(trial)
print(report.df())

                  status trial_seed ... time:kind time:unit
name                                ...
some-unique-name success            ...      wall   seconds

[1 rows x 20 columns]

What you can report with trial.success() or trial.fail() depends on the metrics of the trial. Typically an optimizer will provide the trial with the list of metrics.

Metrics

A metric with a given name, optimal direction, and possible bounds.

Some important properties are that each trial has a unique .name for the given optimization run, a candidate .config to evaluate, a possible .seed to use, and an .info object holding any optimizer-specific information, should you require it.

If using Plugins, they may insert some extra objects in the .extra dict.
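
As a quick sketch of reading these attributes off a freshly created trial (the name and config here are illustrative; an optimizer would normally fill them in):

trial-attributes
from amltk.optimization import Trial, Metric

cost = Metric("cost", minimize=True)
trial = Trial(name="some-unique-name", config={"x": 1}, metrics=[cost])

print(trial.name)    # "some-unique-name", unique for the optimization run
print(trial.config)  # {'x': 1}, the candidate configuration
print(trial.seed)    # None, unless the optimizer suggested one
print(trial.info)    # None here; optimizer-specific when issued by one
print(trial.extras)  # {}, until a plugin attaches something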

To profile your trial, you can wrap the logic you'd like to check with trial.begin(), which will automatically catch any errors, record the traceback, and profile the block of code, in terms of time and memory.

You can access the profiled time and memory using the .time and .memory attributes. If you've profiled any other intervals with profile(), you can access them by name through trial.profiles. Please see the Profiler for more.

Profiling with a trial.

profile
from amltk.optimization import Trial

trial = Trial(name="some-unique-name", config={})

# ... somewhere where you've begun your trial.
with trial.profile("some_interval"):
    for work in range(100):
        pass

print(trial.profiler.df())
               memory:start_vms  memory:end_vms  ...  time:kind  time:unit
some_interval      2.074472e+09      2074472448  ...       wall    seconds

[1 rows x 12 columns]

You can also record anything you'd like into the .summary, a plain dict, or use trial.store() to store artifacts related to the trial.

What to put in .summary?

For large items, e.g. predictions or models, you are highly advised to .store() them to disk instead, especially if using a Task for multiprocessing.

Further, if you serialize a report with report.df() (a single row), or many reports through a History with history.df(), you'll likely only want summary entries that are scalar and can be serialised to disk by a pandas DataFrame.
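
A small sketch of that guidance, assuming a local PathBucket at the hypothetical path "demo-results":

summary-vs-store
from amltk.optimization import Trial
from amltk.store import PathBucket

trial = Trial(name="demo", config={"x": 1}, bucket=PathBucket("demo-results"))

# Small scalars serialize cleanly with report.df(), so they go in the summary ...
trial.summary["val_accuracy"] = 0.93

# ... while anything large is better stored to disk as an artifact.
trial.store({"config.json": trial.config})
print(trial.storage)  # {'config.json'}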

Report#

Bases: RichRenderable, Generic[I2]

The Trial.Report encapsulates a Trial, its status and any metrics/exceptions that may have occurred.

Typically you will not create these yourself, but instead use trial.success() or trial.fail() to generate them.

from amltk.optimization import Trial, Metric

loss = Metric("loss", minimize=True)

trial = Trial(name="trial", config={"x": 1}, metrics=[loss])

with trial.begin():
    # Do some work
    # ...
    report: Trial.Report = trial.success(loss=1)

print(report.df())
        status  trial_seed exception  ... time:duration time:kind  time:unit
name                                  ...                                   
trial  success        <NA>        NA  ...      0.000037      wall    seconds

[1 rows x 19 columns]

These reports are used to report back metrics to an Optimizer with Optimizer.tell() but can also be stored for your own uses.

You can access the original trial with the .trial attribute, and the Status of the trial with the .status attribute.

You may also want to check out the History class for storing a collection of Reports, allowing for an easier time to convert them to a dataframe or perform some common Hyperparameter optimization parsing of metrics.

class Trial
dataclass
#

Bases: RichRenderable, Generic[I]

A Trial encapsulates some configuration that needs to be evaluated. Typically this is what is generated by an Optimizer.ask() call.

Usage

To begin a trial, you can use the trial.begin(), which will catch exceptions/traceback and profile the block of code.

If all went smoothly, your trial was successful and you can use trial.success() to generate a success Report, typically passing what your chosen optimizer expects, e.g. "loss" or "cost".

If your trial failed, you can instead use the trial.fail() to generate a failure Report, where any caught exception will be attached to it. Each Optimizer will take care of what to do from here.

from amltk.optimization import Trial, Metric
from amltk.store import PathBucket

cost = Metric("cost", minimize=True)

def target_function(trial: Trial) -> Trial.Report:
    x = trial.config["x"]
    y = trial.config["y"]

    with trial.begin():
        cost = x**2 - y

    if trial.exception:
        return trial.fail()

    return trial.success(cost=cost)

# ... usually obtained from an optimizer
trial = Trial(name="some-unique-name", config={"x": 1, "y": 2}, metrics=[cost])

report = target_function(trial)
print(report.df())

                  status trial_seed ... time:kind time:unit
name                                ...
some-unique-name success            ...      wall   seconds

[1 rows x 20 columns]

What you can report with trial.success() or trial.fail() depends on the metrics of the trial. Typically an optimizer will provide the trial with the list of metrics.

Metrics

A metric with a given name, optimal direction, and possible bounds.
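
For example, a sketch of declaring metrics with a direction and optional bounds (the names here are illustrative):

metric-definitions
from amltk.optimization import Metric

# Minimize a loss, bounded to [0, 1000]; for a minimized metric the upper
# bound doubles as the .worst value reported by trial.fail().
loss = Metric("loss", minimize=True, bounds=(0, 1_000))

# Maximize an accuracy in [0, 1].
accuracy = Metric("accuracy", minimize=False, bounds=(0, 1))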

Some important properties are that each trial has a unique .name for the given optimization run, a candidate .config to evaluate, a possible .seed to use, and an .info object holding any optimizer-specific information, should you require it.

If using Plugins, they may insert some extra objects in the .extra dict.

To profile your trial, you can wrap the logic you'd like to check with trial.begin(), which will automatically catch any errors, record the traceback, and profile the block of code, in terms of time and memory.

You can access the profiled time and memory using the .time and .memory attributes. If you've profiled any other intervals with profile(), you can access them by name through trial.profiles. Please see the Profiler for more.

Profiling with a trial.

profile
from amltk.optimization import Trial

trial = Trial(name="some-unique-name", config={})

# ... somewhere where you've begun your trial.
with trial.profile("some_interval"):
    for work in range(100):
        pass

print(trial.profiler.df())
               memory:start_vms  memory:end_vms  ...  time:kind  time:unit
some_interval      2.074472e+09      2074472448  ...       wall    seconds

[1 rows x 12 columns]

You can also record anything you'd like into the .summary, a plain dict, or use trial.store() to store artifacts related to the trial.

What to put in .summary?

For large items, e.g. predictions or models, you are highly advised to .store() them to disk instead, especially if using a Task for multiprocessing.

Further, if you serialize a report with report.df() (a single row), or many reports through a History with history.df(), you'll likely only want summary entries that are scalar and can be serialised to disk by a pandas DataFrame.

name: str
attr
#

The unique name of the trial.

config: Mapping[str, Any]
attr
#

The config of the trial provided by the optimizer.

bucket: PathBucket
classvar attr
#

The bucket to store trial related output to.

info: I | None
classvar attr
#

The info of the trial provided by the optimizer.

metrics: Sequence[Metric]
classvar attr
#

The metrics associated with the trial.

seed: int | None
classvar attr
#

The seed to use if suggested by the optimizer.

fidelities: dict[str, Any] | None
classvar attr
#

The fidelities at which to evaluate the trial, if any.

time: Timer.Interval
classvar attr
#

The time taken by the trial, once ended.

memory: Memory.Interval
classvar attr
#

The memory used by the trial, once ended.

profiler: Profiler
classvar attr
#

A profiler for this trial.

summary: dict[str, Any]
classvar attr
#

The summary of the trial. These are for summary statistics of a trial and are single values.

exception: BaseException | None
classvar attr
#

The exception raised by the trial, if any.

traceback: str | None
classvar attr
#

The traceback of the exception, if any.

storage: set[Any]
classvar attr
#

Anything stored in the trial; the elements of this set are keys that can be used to retrieve the items later, such as a Path.

extras: dict[str, Any]
classvar attr
#

Any extras attached to the trial.

profiles: Mapping[str, Profile.Interval]
prop
#

The profiles of the trial.

class Status #

Bases: str, Enum

The status of a trial.

SUCCESS
classvar attr
#

The trial was successful.

FAIL
classvar attr
#

The trial failed.

CRASHED
classvar attr
#

The trial crashed.

UNKNOWN
classvar attr
#

The status of the trial is unknown.
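
Since Status mixes in str, you can compare a report's status either against the enum member or its raw value, as in this sketch:

status-check
from amltk.optimization import Trial, Metric

loss = Metric("loss", minimize=True)
trial = Trial(name="demo", config={}, metrics=[loss])

with trial.begin():
    report = trial.success(loss=0.5)

assert report.status is Trial.Status.SUCCESS
assert report.status == "success"  # the str mixin makes this equivalent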

class Report
dataclass
#

Bases: RichRenderable, Generic[I2]

The Trial.Report encapsulates a Trial, its status and any metrics/exceptions that may have occurred.

Typically you will not create these yourself, but instead use trial.success() or trial.fail() to generate them.

from amltk.optimization import Trial, Metric

loss = Metric("loss", minimize=True)

trial = Trial(name="trial", config={"x": 1}, metrics=[loss])

with trial.begin():
    # Do some work
    # ...
    report: Trial.Report = trial.success(loss=1)

print(report.df())
        status  trial_seed exception  ... time:duration time:kind  time:unit
name                                  ...                                   
trial  success        <NA>        NA  ...      0.000032      wall    seconds

[1 rows x 19 columns]

These reports are used to report back metrics to an Optimizer with Optimizer.tell() but can also be stored for your own uses.

You can access the original trial with the .trial attribute, and the Status of the trial with the .status attribute.

You may also want to check out the History class for storing a collection of Reports, allowing for an easier time to convert them to a dataframe or perform some common Hyperparameter optimization parsing of metrics.

trial: Trial[I2]
attr
#

The trial that was run.

status: Trial.Status
attr
#

The status of the trial.

metrics: dict[str, float]
classvar attr
#

The metric values of the trial.

metric_values: tuple[Metric.Value, ...]
classvar attr
#

The metric values of the trial, linked to their metric definitions.

metric_defs: dict[str, Metric]
classvar attr
#

A lookup from metric name to its definition.

metric_names: tuple[str, ...]
classvar attr
#

The names of the metrics.

exception: BaseException | None
prop
#

The exception of the trial, if any.

traceback: str | None
prop
#

The traceback of the trial, if any.

name: str
prop
#

The name of the trial.

config: Mapping[str, Any]
prop
#

The config of the trial.

profiles: Mapping[str, Profile.Interval]
prop
#

The profiles of the trial.

summary: dict[str, Any]
prop
#

The summary of the trial.

storage: set[str]
prop
#

The storage of the trial.

time: Timer.Interval
prop
#

The time of the trial.

memory: Memory.Interval
prop
#

The memory of the trial.

bucket: PathBucket
prop
#

The bucket attached to the trial.

info: I2 | None
prop
#

The info of the trial, specific to the optimizer that issued it.

def df(*, profiles=True, configs=True, summary=True, metrics=True) #

Get a dataframe of the trial.

Prefixes

  • summary: Entries will be prefixed with "summary:"
  • config: Entries will be prefixed with "config:"
  • storage: Entries will be prefixed with "storage:"
  • metrics: Entries will be prefixed with "metric:"
  • profile:<name>: Entries will be prefixed with "profile:<name>:"
PARAMETER DESCRIPTION
profiles

Whether to include the profiles.

TYPE: bool DEFAULT: True

configs

Whether to include the configs.

TYPE: bool DEFAULT: True

summary

Whether to include the summary.

TYPE: bool DEFAULT: True

metrics

Whether to include the metrics.

TYPE: bool DEFAULT: True

Source code in src/amltk/optimization/trial.py
def df(
    self,
    *,
    profiles: bool = True,
    configs: bool = True,
    summary: bool = True,
    metrics: bool = True,
) -> pd.DataFrame:
    """Get a dataframe of the trial.

    !!! note "Prefixes"

        * `summary`: Entries will be prefixed with `#!python "summary:"`
        * `config`: Entries will be prefixed with `#!python "config:"`
        * `storage`: Entries will be prefixed with `#!python "storage:"`
        * `metrics`: Entries will be prefixed with `#!python "metric:"`
        * `profile:<name>`: Entries will be prefixed with
            `#!python "profile:<name>:"`

    Args:
        profiles: Whether to include the profiles.
        configs: Whether to include the configs.
        summary: Whether to include the summary.
        metrics: Whether to include the metrics.
    """
    items = {
        "name": self.name,
        "status": str(self.status),
        "trial_seed": self.trial.seed if self.trial.seed else np.nan,
        "exception": str(self.exception) if self.exception else "NA",
        "traceback": str(self.traceback) if self.traceback else "NA",
        "bucket": str(self.bucket.path),
    }
    if metrics:
        for value in self.metric_values:
            items[f"metric:{value.metric}"] = value.value
    if summary:
        items.update(**prefix_keys(self.trial.summary, "summary:"))
    if configs:
        items.update(**prefix_keys(self.trial.config, "config:"))
    if profiles:
        for name, profile in sorted(self.profiles.items(), key=lambda x: x[0]):
            # We log this one separately
            if name == "trial":
                items.update(profile.to_dict())
            else:
                items.update(profile.to_dict(prefix=f"profile:{name}"))

    return pd.DataFrame(items, index=[0]).convert_dtypes().set_index("name")
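
A minimal sketch of trimming the frame with these flags (the trial name and metric are illustrative):

df-flags
from amltk.optimization import Trial, Metric

loss = Metric("loss", minimize=True)
trial = Trial(name="demo", config={"x": 1}, metrics=[loss])

with trial.begin():
    report = trial.success(loss=0.2)

# Drop the profile, config and summary columns, keeping columns such as
# status, exception info and the metric values.
slim = report.df(profiles=False, configs=False, summary=False)
print(list(slim.columns))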

def retrieve(key, *, where=None, check=None) #

Retrieve items related to the trial.

Same argument for where=

Use the same argument for where= as you did for store().

retrieve
from amltk.optimization import Trial
from amltk.store import PathBucket

bucket = PathBucket("results")
trial = Trial(name="trial", config={"x": 1}, bucket=bucket)

trial.store({"config.json": trial.config})
with trial.begin():
    report = trial.success()

config = report.retrieve("config.json")
print(config)
{'x': 1}

You could also create a Bucket and use that instead.

retrieve-bucket
from amltk.optimization import Trial
from amltk.store import PathBucket

bucket = PathBucket("results")

trial = Trial(name="trial", config={"x": 1}, bucket=bucket)

trial.store({"config.json": trial.config})

with trial.begin():
    report = trial.success()

config = report.retrieve("config.json")
print(config)
{'x': 1}
PARAMETER DESCRIPTION
key

The key of the item to retrieve, as listed in .storage.

TYPE: str

check

If provided, will check that the retrieved item is of the provided type. If not, will raise a TypeError. This is only used if where= is a str, Path or Bucket.

TYPE: type[R] | None DEFAULT: None

where

Where to retrieve the items from.

  • If None, will use the bucket attached to the Trial if any, otherwise it will raise an error.

  • If a str or Path, a bucket will be created at the path, and the items will be retrieved from a sub-bucket with the name of the trial.

  • If a Bucket, will retrieve the items from a sub-bucket with the name of the trial.

TYPE: str | Path | Bucket[str, Any] | None DEFAULT: None

RETURNS DESCRIPTION
R | Any

The retrieved item.

RAISES DESCRIPTION
TypeError

If check= is provided and the retrieved item is not of the provided type.

Source code in src/amltk/optimization/trial.py
def retrieve(
    self,
    key: str,
    *,
    where: str | Path | Bucket[str, Any] | None = None,
    check: type[R] | None = None,
) -> R | Any:
    """Retrieve items related to the trial.

    !!! note "Same argument for `where=`"

         Use the same argument for `where=` as you did for `store()`.

    ```python exec="true" source="material-block" result="python" title="retrieve" hl_lines="7"
    from amltk.optimization import Trial
    from amltk.store import PathBucket

    bucket = PathBucket("results")
    trial = Trial(name="trial", config={"x": 1}, bucket=bucket)

    trial.store({"config.json": trial.config})
    with trial.begin():
        report = trial.success()

    config = report.retrieve("config.json")
    print(config)
    ```

    You could also create a Bucket and use that instead.

    ```python exec="true" source="material-block" result="python" title="retrieve-bucket" hl_lines="11"

    from amltk.optimization import Trial
    from amltk.store import PathBucket

    bucket = PathBucket("results")

    trial = Trial(name="trial", config={"x": 1}, bucket=bucket)

    trial.store({"config.json": trial.config})

    with trial.begin():
        report = trial.success()

    config = report.retrieve("config.json")
    print(config)
    ```

    Args:
        key: The key of the item to retrieve, as listed in `.storage`.
        check: If provided, will check that the retrieved item is of the
            provided type. If not, will raise a `TypeError`. This
            is only used if `where=` is a `str`, `Path` or `Bucket`.
        where: Where to retrieve the items from.

            * If `None`, will use the bucket attached to the `Trial` if any,
                otherwise it will raise an error.

            * If a `str` or `Path`, a bucket will be created at the path,
            and the items will be retrieved from a sub-bucket with the
            name of the trial.

            * If a `Bucket`, will retrieve the items from a sub-bucket with the
            name of the trial.

    Returns:
        The retrieved item.

    Raises:
        TypeError: If `check=` is provided and  the retrieved item is not of the provided
            type.
    """  # noqa: E501
    return self.trial.retrieve(key, where=where, check=check)

def store(items, *, where=None) #

Store items related to the trial.

See: Trial.store()

Source code in src/amltk/optimization/trial.py
def store(
    self,
    items: Mapping[str, T],
    *,
    where: (
        str | Path | Bucket | Callable[[str, Mapping[str, T]], None] | None
    ) = None,
) -> None:
    """Store items related to the trial.

    See: [`Trial.store()`][amltk.optimization.trial.Trial.store]
    """
    self.trial.store(items, where=where)

def from_df(df)
classmethod
#

Create a report from a dataframe.

See Also

  • .from_dict()
Source code in src/amltk/optimization/trial.py
@classmethod
def from_df(cls, df: pd.DataFrame | pd.Series) -> Trial.Report:
    """Create a report from a dataframe.

    See Also:
        * [`.from_dict()`][amltk.optimization.Trial.Report.from_dict]
    """
    if isinstance(df, pd.DataFrame):
        if len(df) != 1:
            raise ValueError(
                f"Expected a dataframe with one row, got {len(df)} rows.",
            )
        series = df.iloc[0]
    else:
        series = df

    data_dict = {"name": series.name, **series.to_dict()}
    return cls.from_dict(data_dict)
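
For instance, a sketch of round-tripping a report through a single-row dataframe:

from-df-roundtrip
from amltk.optimization import Trial, Metric

loss = Metric("loss", minimize=True)
trial = Trial(name="roundtrip", config={"x": 1}, metrics=[loss])

with trial.begin():
    report = trial.success(loss=0.5)

# Serialize to one row, then reconstruct an equivalent report from it.
restored = Trial.Report.from_df(report.df())
print(restored.metrics)  # {'loss': 0.5}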

def from_dict(d)
classmethod
#

Create a report from a dictionary.

Prefixes

Please see .df() for information on what the prefixes should be for certain fields.

PARAMETER DESCRIPTION
d

The dictionary to create the report from.

TYPE: Mapping[str, Any]

RETURNS DESCRIPTION
Report

The created report.

Source code in src/amltk/optimization/trial.py
@classmethod
def from_dict(cls, d: Mapping[str, Any]) -> Trial.Report:
    """Create a report from a dictionary.

    !!! note "Prefixes"

        Please see [`.df()`][amltk.optimization.Trial.Report.df]
        for information on what the prefixes should be for certain fields.

    Args:
        d: The dictionary to create the report from.

    Returns:
        The created report.
    """
    prof_dict = mapping_select(d, "profile:")
    if any(prof_dict):
        profile_names = sorted(
            {name.rsplit(":", maxsplit=2)[0] for name in prof_dict},
        )
        profiles = {
            name: Profile.from_dict(mapping_select(prof_dict, f"{name}:"))
            for name in profile_names
        }
    else:
        profiles = {}

    # NOTE: We assume the order of the objectives are in the right
    # order in the dict. If we attempt to force a sort-order, we may
    # deserialize incorrectly. By not having a sort order, we rely
    # on serialization to keep the order, which is not ideal either.
    # May revisit this if we need to
    raw_metrics: dict[str, float] = mapping_select(d, "metric:")
    _intermediate = {
        Metric.from_str(name): value for name, value in raw_metrics.items()
    }
    metrics: dict[Metric, Metric.Value] = {
        metric: metric.as_value(value)
        for metric, value in _intermediate.items()
    }

    _trial_profile_items = {
        k: v for k, v in d.items() if k.startswith(("memory:", "time:"))
    }
    if any(_trial_profile_items):
        trial_profile = Profile.from_dict(_trial_profile_items)
        profiles["trial"] = trial_profile
    else:
        trial_profile = Profile.na()

    exception = d.get("exception")
    traceback = d.get("traceback")
    trial_seed = d.get("trial_seed")
    if pd.isna(exception) or exception == "NA":  # type: ignore
        exception = None
    if pd.isna(traceback) or traceback == "NA":  # type: ignore
        traceback = None
    if pd.isna(trial_seed):  # type: ignore
        trial_seed = None

    if (_bucket := d.get("bucket")) is not None:
        bucket = PathBucket(_bucket)
    else:
        bucket = PathBucket(f"unknown_trial_bucket-{datetime.now().isoformat()}")

    trial: Trial[None] = Trial(
        name=d["name"],
        config=mapping_select(d, "config:"),
        info=None,  # We don't save this to disk so we load it back as None
        bucket=bucket,
        seed=trial_seed,
        fidelities=mapping_select(d, "fidelities:"),
        time=trial_profile.time,
        memory=trial_profile.memory,
        profiler=Profiler(profiles=profiles),
        metrics=list(metrics.keys()),
        summary=mapping_select(d, "summary:"),
        exception=exception,
        traceback=traceback,
    )
    status = Trial.Status(dict_get_not_none(d, "status", "unknown"))
    _values: dict[str, float] = {m.name: r.value for m, r in metrics.items()}
    if status == Trial.Status.SUCCESS:
        return trial.success(**_values)

    if status == Trial.Status.FAIL:
        return trial.fail(**_values)

    if status == Trial.Status.CRASHED:
        return trial.crashed(
            exception=Exception("Unknown status.")
            if trial.exception is None
            else None,
        )

    return trial.crashed(exception=Exception("Unknown status."))

def rich_renderables() #

The renderables for rich for this report.

Source code in src/amltk/optimization/trial.py
def rich_renderables(self) -> Iterable[RenderableType]:
    """The renderables for rich for this report."""
    from rich.pretty import Pretty
    from rich.text import Text

    yield Text.assemble(
        ("Status", "bold"),
        ("(", "default"),
        self.status.__rich__(),
        (")", "default"),
    )
    yield Pretty(self.metrics)
    yield from self.trial.rich_renderables()

def begin(time=None, memory_unit=None) #

Begin the trial with a contextmanager.

Will begin timing the trial in the with block, attaching the profiled time and memory to the trial once completed, under the .time and .memory attributes.

If an exception is raised, it will be attached to the trial under .exception with the traceback attached to the actual error message, such that it can be pickled and sent back to the main process loop.

begin
from amltk.optimization import Trial

trial = Trial(name="trial", config={"x": 1})

with trial.begin():
    # Do some work
    pass

print(trial.memory)
print(trial.time)
Memory.Interval(start_vms=2074472448.0, start_rss=404443136.0, end_vms=2074472448, end_rss=404443136, unit=bytes)
Timer.Interval(start=1701796453.1768663, end=1701796453.1768768, kind=wall, unit=seconds)
begin-fail
from amltk.optimization import Trial

trial = Trial(name="trial", config={"x": -1})

with trial.begin():
    raise ValueError("x must be positive")

print(trial.exception)
print(trial.traceback)
print(trial.memory)
print(trial.time)
x must be positive
Traceback (most recent call last):
  File "/opt/hostedtoolcache/Python/3.10.13/x64/lib/python3.10/site-packages/amltk/optimization/trial.py", line 301, in begin
    yield
  File "<code block: n173; title begin-fail>", line 6, in <module>
ValueError: x must be positive

Memory.Interval(start_vms=2074472448.0, start_rss=404443136.0, end_vms=2074472448, end_rss=404443136, unit=bytes)
Timer.Interval(start=1701796453.181097, end=1701796453.181203, kind=wall, unit=seconds)
PARAMETER DESCRIPTION
time

The timer kind to use for the trial. Defaults to the default timer kind of the profiler.

TYPE: Kind | Literal['wall', 'cpu', 'process'] | None DEFAULT: None

memory_unit

The memory unit to use for the trial. Defaults to the default memory unit of the profiler.

TYPE: Unit | Literal['B', 'KB', 'MB', 'GB'] | None DEFAULT: None

Source code in src/amltk/optimization/trial.py
@contextmanager
def begin(
    self,
    time: Timer.Kind | Literal["wall", "cpu", "process"] | None = None,
    memory_unit: Memory.Unit | Literal["B", "KB", "MB", "GB"] | None = None,
) -> Iterator[None]:
    """Begin the trial with a `contextmanager`.

    Will begin timing the trial in the `with` block, attaching the profiled time and memory
    to the trial once completed, under the `.time` and `.memory` attributes.

    If an exception is raised, it will be attached to the trial under `.exception`
    with the traceback attached to the actual error message, such that it can
    be pickled and sent back to the main process loop.

    ```python exec="true" source="material-block" result="python" title="begin" hl_lines="5"
    from amltk.optimization import Trial

    trial = Trial(name="trial", config={"x": 1})

    with trial.begin():
        # Do some work
        pass

    print(trial.memory)
    print(trial.time)
    ```

    ```python exec="true" source="material-block" result="python" title="begin-fail" hl_lines="5"
    from amltk.optimization import Trial

    trial = Trial(name="trial", config={"x": -1})

    with trial.begin():
        raise ValueError("x must be positive")

    print(trial.exception)
    print(trial.traceback)
    print(trial.memory)
    print(trial.time)
    ```

    Args:
        time: The timer kind to use for the trial. Defaults to the default
            timer kind of the profiler.
        memory_unit: The memory unit to use for the trial. Defaults to the
            default memory unit of the profiler.
    """  # noqa: E501
    with self.profiler(name="trial", memory_unit=memory_unit, time_kind=time):
        try:
            yield
        except Exception as error:  # noqa: BLE001
            self.exception = error
            self.traceback = traceback.format_exc()
        finally:
            self.time = self.profiler["trial"].time
            self.memory = self.profiler["trial"].memory

def profile(name, *, time=None, memory_unit=None, summary=False) #

Measure some interval in the trial.

The results of the profiling will be available in the trial's .profiler under the name of the interval, and also flattened into the .summary if summary=True is passed.

profile
from amltk.optimization import Trial
import time

trial = Trial(name="trial", config={"x": 1})

with trial.profile("some_interval"):
    # Do some work
    time.sleep(1)

print(trial.profiler["some_interval"].time)
Timer.Interval(start=1701796453.1927178, end=1701796454.1938276, kind=wall, unit=seconds)
PARAMETER DESCRIPTION
name

The name of the interval.

TYPE: str

time

The timer kind to use for the trial. Defaults to the default timer kind of the profiler.

TYPE: Kind | Literal['wall', 'cpu', 'process'] | None DEFAULT: None

memory_unit

The memory unit to use for the trial. Defaults to the default memory unit of the profiler.

TYPE: Unit | Literal['B', 'KB', 'MB', 'GB'] | None DEFAULT: None

summary

Whether to add the interval to the summary.

TYPE: bool DEFAULT: False

YIELDS DESCRIPTION
Iterator[None]

The interval measured. Values will be nan until the with block is finished.

Source code in src/amltk/optimization/trial.py
@contextmanager
def profile(
    self,
    name: str,
    *,
    time: Timer.Kind | Literal["wall", "cpu", "process"] | None = None,
    memory_unit: Memory.Unit | Literal["B", "KB", "MB", "GB"] | None = None,
    summary: bool = False,
) -> Iterator[None]:
    """Measure some interval in the trial.

    The results of the profiling will be available in the `.profiler` under
    the name of the interval, and flattened into the `.summary` if
    `summary=True` is passed.

    ```python exec="true" source="material-block" result="python" title="profile"
    from amltk.optimization import Trial
    import time

    trial = Trial(name="trial", config={"x": 1})

    with trial.profile("some_interval"):
        # Do some work
        time.sleep(1)

    print(trial.profiler["some_interval"].time)
    ```

    Args:
        name: The name of the interval.
        time: The timer kind to use for the trial. Defaults to the default
            timer kind of the profiler.
        memory_unit: The memory unit to use for the trial. Defaults to the
            default memory unit of the profiler.
        summary: Whether to add the interval to the summary.

    Yields:
        The interval measured. Values will be nan until the with block is finished.
    """
    with self.profiler(name=name, memory_unit=memory_unit, time_kind=time):
        yield

    if summary:
        profile = self.profiler[name]
        self.summary.update(profile.to_dict(prefix=name))
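
A sketch of summary=True, which additionally flattens the interval into the summary under its name ("fit" is an illustrative interval name):

profile-summary
from amltk.optimization import Trial

trial = Trial(name="demo", config={})

with trial.profile("fit", summary=True):
    sum(range(10_000))  # stand-in for real work

# The interval's entries are now in the summary, prefixed with its name.
print([key for key in trial.summary if key.startswith("fit")])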

def success(**metrics) #

Generate a success report.

success
from amltk.optimization import Trial, Metric

loss_metric = Metric("loss", minimize=True)

trial = Trial(name="trial", config={"x": 1}, metrics=[loss_metric])

with trial.begin():
    # Do some work
    report = trial.success(loss=1)

print(report)
Trial.Report(trial=Trial(name='trial', config={'x': 1}, bucket=PathBucket(PosixPath('unknown-trial-bucket')), metrics=[Metric(name='loss', minimize=True, bounds=None)], seed=None, fidelities=None, summary={}, exception=None, storage=set(), extras={}), status=<Status.SUCCESS: 'success'>, metrics={'loss': 1.0}, metric_values=(Metric.Value(metric=Metric(name='loss', minimize=True, bounds=None), value=1.0),), metric_defs={'loss': Metric(name='loss', minimize=True, bounds=None)}, metric_names=('loss',))
PARAMETER DESCRIPTION
**metrics

The metrics of the trial, where the key is the name of the metric and the value is the value reported for it.

TYPE: float | int DEFAULT: {}

RETURNS DESCRIPTION
Report[I]

The report of the trial.

Source code in src/amltk/optimization/trial.py
def success(self, **metrics: float | int) -> Trial.Report[I]:
    """Generate a success report.

    ```python exec="true" source="material-block" result="python" title="success" hl_lines="7"
    from amltk.optimization import Trial, Metric

    loss_metric = Metric("loss", minimize=True)

    trial = Trial(name="trial", config={"x": 1}, metrics=[loss_metric])

    with trial.begin():
        # Do some work
        report = trial.success(loss=1)

    print(report)
    ```

    Args:
        **metrics: The metrics of the trial, where the key is the name of the
            metric and the value is the value reported for it.

    Returns:
        The report of the trial.
    """  # noqa: E501
    _recorded_values: list[Metric.Value] = []
    for _metric in self.metrics:
        if (raw_value := metrics.get(_metric.name)) is not None:
            _recorded_values.append(_metric.as_value(raw_value))
        else:
            raise ValueError(
                f"Cannot report success without {self.metrics=}."
                f" Please provide a value for the metric.",
            )

    # Need to check if anything extra was reported!
    extra = set(metrics.keys()) - {metric.name for metric in self.metrics}
    if extra:
        raise ValueError(
            f"Cannot report success with extra metrics: {extra=}."
            f"\nOnly {self.metrics=} are allowed.",
        )

    return Trial.Report(
        trial=self,
        status=Trial.Status.SUCCESS,
        metric_values=tuple(_recorded_values),
    )
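
As the source above shows, success() validates what you report: every declared metric must be given a value, and undeclared metrics are rejected. A short sketch:

success-validation
from amltk.optimization import Trial, Metric

loss = Metric("loss", minimize=True)
trial = Trial(name="demo", config={}, metrics=[loss])

with trial.begin():
    pass

try:
    trial.success()  # "loss" is required but missing
except ValueError as e:
    print(e)

try:
    trial.success(loss=0.1, accuracy=0.9)  # "accuracy" was never declared
except ValueError as e:
    print(e)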

def fail(**metrics) #

Generate a failure report.

Non-specified metrics

If you do not specify metrics, this will use the .metrics to determine the .worst value of each metric, using that as the reported result.

fail
from amltk.optimization import Trial, Metric

loss = Metric("loss", minimize=True, bounds=(0, 1_000))
trial = Trial(name="trial", config={"x": 1}, metrics=[loss])

with trial.begin():
    raise ValueError("This is an error")  # Something went wrong

if trial.exception: # You can check for an exception of the trial here
    report = trial.fail()

print(report.metrics)
print(report)
{'loss': 1000.0}
Trial.Report(trial=Trial(name='trial', config={'x': 1}, bucket=PathBucket(PosixPath('unknown-trial-bucket')), metrics=[Metric(name='loss', minimize=True, bounds=(0.0, 1000.0))], seed=None, fidelities=None, summary={}, exception=ValueError('This is an error'), storage=set(), extras={}), status=<Status.FAIL: 'fail'>, metrics={'loss': 1000.0}, metric_values=(Metric.Value(metric=Metric(name='loss', minimize=True, bounds=(0.0, 1000.0)), value=1000.0),), metric_defs={'loss': Metric(name='loss', minimize=True, bounds=(0.0, 1000.0))}, metric_names=('loss',))
RETURNS DESCRIPTION
Report[I]

The result of the trial.

Source code in src/amltk/optimization/trial.py
def fail(self, **metrics: float | int) -> Trial.Report[I]:
    """Generate a failure report.

    !!! note "Non-specified metrics"

        If you do not specify metrics, this will use
        the [`.metrics`][amltk.optimization.Trial.metrics] to determine
        the [`.worst`][amltk.optimization.Metric.worst] value of each metric,
        using that as the reported result.

    ```python exec="true" source="material-block" result="python" title="fail"
    from amltk.optimization import Trial, Metric

    loss = Metric("loss", minimize=True, bounds=(0, 1_000))
    trial = Trial(name="trial", config={"x": 1}, metrics=[loss])

    with trial.begin():
        raise ValueError("This is an error")  # Something went wrong

    if trial.exception: # You can check for an exception of the trial here
        report = trial.fail()

    print(report.metrics)
    print(report)
    ```

    Returns:
        The result of the trial.
    """
    _recorded_values: list[Metric.Value] = []
    for _metric in self.metrics:
        if (raw_value := metrics.get(_metric.name)) is not None:
            _recorded_values.append(_metric.as_value(raw_value))
        else:
            _recorded_values.append(_metric.worst)

    return Trial.Report(
        trial=self,
        status=Trial.Status.FAIL,
        metric_values=tuple(_recorded_values),
    )

def crashed(exception=None, traceback=None) #

Generate a crash report.

Note

You will typically not create these manually; instead, if we don't receive a report from a target function evaluation, but only an error, we assume something crashed and generate a crash report for you.

Non-specified metrics

We will use the .metrics to determine the .worst value of each metric, using that as the reported metrics.

PARAMETER DESCRIPTION
exception

The exception that caused the crash. If not provided, the exception will be taken from the trial. If this is still None, a RuntimeError will be raised.

TYPE: BaseException | None DEFAULT: None

traceback

The traceback of the exception. If not provided, the traceback will be taken from the trial if there is one there.

TYPE: str | None DEFAULT: None

RETURNS DESCRIPTION
Report[I]

The report of the trial.

Source code in src/amltk/optimization/trial.py
def crashed(
    self,
    exception: BaseException | None = None,
    traceback: str | None = None,
) -> Trial.Report[I]:
    """Generate a crash report.

    !!! note

        You will typically not create these manually, but instead if we don't
        receive a report from a target function evaluation, but only an error,
        we assume something crashed and generate a crash report for you.

    !!! note "Non-specified metrics"

        We will use the [`.metrics`][amltk.optimization.Trial.metrics] to determine
        the [`.worst`][amltk.optimization.Metric.worst] value of each metric,
        using that as the reported metrics.

    Args:
        exception: The exception that caused the crash. If not provided, the
            exception will be taken from the trial. If this is still `None`,
            a `RuntimeError` will be raised.
        traceback: The traceback of the exception. If not provided, the
            traceback will be taken from the trial if there is one there.

    Returns:
        The report of the trial.
    """
    if exception is None and self.exception is None:
        raise RuntimeError(
            "Cannot generate a crash report without an exception."
            " Please provide an exception or use `with trial.begin():` to start"
            " the trial.",
        )

    self.exception = exception if exception else self.exception
    self.traceback = traceback if traceback else self.traceback

    return Trial.Report(
        trial=self,
        status=Trial.Status.CRASHED,
        metric_values=tuple(metric.worst for metric in self.metrics),
    )
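
Though usually generated for you, a sketch of producing a crash report manually (the exception and bounds are illustrative):

crashed
from amltk.optimization import Trial, Metric

loss = Metric("loss", minimize=True, bounds=(0, 100))
trial = Trial(name="demo", config={}, metrics=[loss])

report = trial.crashed(exception=RuntimeError("worker died"))
assert report.status is Trial.Status.CRASHED
print(report.metrics)  # worst values given the bounds, here {'loss': 100.0}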

def store(items, *, where=None) #

Store items related to the trial.

store
from amltk.optimization import Trial
from amltk.store import PathBucket

trial = Trial(name="trial", config={"x": 1}, bucket=PathBucket("results"))
trial.store({"config.json": trial.config})

print(trial.storage)
{'config.json'}

You could also specify where= exactly to store the thing

store-bucket
from amltk.optimization import Trial

trial = Trial(name="trial", config={"x": 1})
trial.store({"config.json": trial.config}, where="./results")

print(trial.storage)
{'config.json'}
PARAMETER DESCRIPTION
items

The items to store, a dict from the key to store it under to the item itself. If using a str, Path or PathBucket, the keys of the items should be a valid filename, including the correct extension, e.g. {"config.json": trial.config}

TYPE: Mapping[str, T]

where

Where to store the items.

  • If None, will use the bucket attached to the Trial if any, otherwise it will raise an error.

  • If a str or Path, a bucket will be created at the path, and the items will be stored in a sub-bucket with the name of the trial.

  • If a Bucket, will store the items in a sub-bucket with the name of the trial.

  • If a Callable, will call the callable with the name of the trial and the key-valued pair of items to store.

TYPE: str | Path | Bucket | Callable[[str, Mapping[str, T]], None] | None DEFAULT: None

Source code in src/amltk/optimization/trial.py
def store(
    self,
    items: Mapping[str, T],
    *,
    where: (
        str | Path | Bucket | Callable[[str, Mapping[str, T]], None] | None
    ) = None,
) -> None:
    """Store items related to the trial.

    ```python exec="true" source="material-block" result="python" title="store" hl_lines="5"
    from amltk.optimization import Trial
    from amltk.store import PathBucket

    trial = Trial(name="trial", config={"x": 1}, bucket=PathBucket("results"))
    trial.store({"config.json": trial.config})

    print(trial.storage)
    ```

    You could also specify `where=` exactly to store the thing

    ```python exec="true" source="material-block" result="python" title="store-bucket" hl_lines="7"
    from amltk.optimization import Trial

    trial = Trial(name="trial", config={"x": 1})
    trial.store({"config.json": trial.config}, where="./results")

    print(trial.storage)
    ```

    Args:
        items: The items to store, a dict from the key to store it under
            to the item itself. If using a `str`, `Path` or `PathBucket`,
            the keys of the items should be a valid filename, including
            the correct extension. e.g. `#!python {"config.json": trial.config}`

        where: Where to store the items.

            * If `None`, will use the bucket attached to the `Trial` if any,
                otherwise it will raise an error.

            * If a `str` or `Path`, a bucket will be created at the path,
            and the items will be stored in a sub-bucket with the
            name of the trial.

            * If a `Bucket`, will store the items **in a sub-bucket** with the
            name of the trial.

            * If a `Callable`, will call the callable with the name of the
            trial and the key-valued pair of items to store.
    """  # noqa: E501
    method: Bucket
    match where:
        case None:
            method = self.bucket
            method.sub(self.name).store(items)
        case str() | Path():
            method = PathBucket(where, create=True)
            method.sub(self.name).store(items)
        case Bucket():
            method = where
            method.sub(self.name).store(items)
        case _:
            # Leave it up to supplied method
            where(self.name, items)

    # Add the keys to storage
    self.storage.update(items.keys())
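
A sketch of the Callable form of where=, in which amltk hands you the trial name and the items and leaves persistence entirely to you (my_store is a hypothetical callback):

store-callable
from amltk.optimization import Trial

def my_store(trial_name: str, items) -> None:
    # e.g. push to a database or remote blob store instead of disk
    print(f"storing {sorted(items)} for {trial_name}")

trial = Trial(name="demo", config={"x": 1})
trial.store({"config.json": trial.config}, where=my_store)
print(trial.storage)  # {'config.json'}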

def delete_from_storage(items, *, where=None) #

Delete items related to the trial.

delete-storage
from amltk.optimization import Trial
from amltk.store import PathBucket

bucket = PathBucket("results")
trial = Trial(name="trial", config={"x": 1}, info={}, bucket=bucket)

trial.store({"config.json": trial.config})
trial.delete_from_storage(items=["config.json"])

print(trial.storage)
{'config.json'}

You could also create a Bucket and use that instead.

delete-storage-bucket
from amltk.optimization import Trial
from amltk.store import PathBucket

bucket = PathBucket("results")
trial = Trial(name="trial", config={"x": 1}, bucket=bucket)

trial.store({"config.json": trial.config})
trial.delete_from_storage(items=["config.json"])

print(trial.storage)
{'config.json'}
PARAMETER DESCRIPTION
items

The items to delete, an iterable of keys

TYPE: Iterable[str]

where

Where the items are stored

  • If None, will use the bucket attached to the Trial if any, otherwise it will raise an error.

  • If a str or Path, will lookup a bucket at the path, and the items will be deleted from a sub-bucket with the name of the trial.

  • If a Bucket, will delete the items in a sub-bucket with the name of the trial.

  • If a Callable, will call the callable with the name of the trial and the keys of the items to delete. It should return a mapping from the key to whether it was deleted or not.

TYPE: str | Path | Bucket | Callable[[str, Iterable[str]], dict[str, bool]] | None DEFAULT: None

RETURNS DESCRIPTION
dict[str, bool]

A dict from the key to whether it was deleted or not.

Source code in src/amltk/optimization/trial.py
def delete_from_storage(
    self,
    items: Iterable[str],
    *,
    where: (
        str | Path | Bucket | Callable[[str, Iterable[str]], dict[str, bool]] | None
    ) = None,
) -> dict[str, bool]:
    """Delete items related to the trial.

    ```python exec="true" source="material-block" result="python" title="delete-storage" hl_lines="6"
    from amltk.optimization import Trial
    from amltk.store import PathBucket

    bucket = PathBucket("results")
    trial = Trial(name="trial", config={"x": 1}, info={}, bucket=bucket)

    trial.store({"config.json": trial.config})
    trial.delete_from_storage(items=["config.json"])

    print(trial.storage)
    ```

    You could also create a Bucket and use that instead.

    ```python exec="true" source="material-block" result="python" title="delete-storage-bucket" hl_lines="9"
    from amltk.optimization import Trial
    from amltk.store import PathBucket

    bucket = PathBucket("results")
    trial = Trial(name="trial", config={"x": 1}, bucket=bucket)

    trial.store({"config.json": trial.config})
    trial.delete_from_storage(items=["config.json"])

    print(trial.storage)
    ```

    Args:
        items: The items to delete, an iterable of keys
        where: Where the items are stored

            * If `None`, will use the bucket attached to the `Trial` if any,
                otherwise it will raise an error.

            * If a `str` or `Path`, will lookup a bucket at the path,
            and the items will be deleted from a sub-bucket with the name of the trial.

            * If a `Bucket`, will delete the items in a sub-bucket with the
            name of the trial.

            * If a `Callable`, will call the callable with the name of the
            trial and the keys of the items to delete. It should return a
            mapping from the key to whether it was deleted or not.

    Returns:
        A dict from the key to whether it was deleted or not.
    """  # noqa: E501
    # If not a Callable, we convert to a path bucket
    method: Bucket
    match where:
        case None:
            method = self.bucket
        case str() | Path():
            method = PathBucket(where, create=False)
        case Bucket():
            method = where
        case _:
            # Leave it up to supplied method
            return where(self.name, items)

    sub_bucket = method.sub(self.name)
    return sub_bucket.remove(items)

def copy() #

Create a copy of the trial.

RETURNS DESCRIPTION
Self

The copy of the trial.

Source code in src/amltk/optimization/trial.py
def copy(self) -> Self:
    """Create a copy of the trial.

    Returns:
        The copy of the trial.
    """
    return copy.deepcopy(self)

def retrieve(key, *, where=None, check=None) #

Retrieve items related to the trial.

Same argument for where=

Use the same argument for where= as you did for store().

retrieve
from amltk.optimization import Trial
from amltk.store import PathBucket

bucket = PathBucket("results")

# Create a trial, normally done by an optimizer
trial = Trial(name="trial", config={"x": 1}, bucket=bucket)

trial.store({"config.json": trial.config})
config = trial.retrieve("config.json")

print(config)
{'x': 1}

You could also manually specify where something gets stored and retrieved

retrieve-bucket
from amltk.optimization import Trial
from amltk.store import PathBucket

path = "./config_path"

trial = Trial(name="trial", config={"x": 1})

trial.store({"config.json": trial.config}, where=path)

config = trial.retrieve("config.json", where=path)
print(config)
{'x': 1}
PARAMETER DESCRIPTION
key

The key of the item to retrieve, as listed in .storage.

TYPE: str

check

If provided, will check that the retrieved item is of the provided type. If not, will raise a TypeError. This is only used if where= is a str, Path or Bucket.

TYPE: type[R] | None DEFAULT: None

where

Where to retrieve the items from.

  • If None, will use the bucket attached to the Trial if any, otherwise it will raise an error.

  • If a str or Path, a bucket will be created at the path, and the items will be retrieved from a sub-bucket with the name of the trial.

  • If a Bucket, will retrieve the items from a sub-bucket with the name of the trial.

TYPE: str | Path | Bucket[str, Any] | None DEFAULT: None

RETURNS DESCRIPTION
R | Any

The retrieved item.

RAISES DESCRIPTION
TypeError

If check= is provided and the retrieved item is not of the provided type.

Source code in src/amltk/optimization/trial.py
def retrieve(
    self,
    key: str,
    *,
    where: str | Path | Bucket[str, Any] | None = None,
    check: type[R] | None = None,
) -> R | Any:
    """Retrieve items related to the trial.

    !!! note "Same argument for `where=`"

         Use the same argument for `where=` as you did for `store()`.

    ```python exec="true" source="material-block" result="python" title="retrieve" hl_lines="7"
    from amltk.optimization import Trial
    from amltk.store import PathBucket

    bucket = PathBucket("results")

    # Create a trial, normally done by an optimizer
    trial = Trial(name="trial", config={"x": 1}, bucket=bucket)

    trial.store({"config.json": trial.config})
    config = trial.retrieve("config.json")

    print(config)
    ```

    You could also manually specify where something gets stored and retrieved

    ```python exec="true" source="material-block" result="python" title="retrieve-bucket" hl_lines="11"

    from amltk.optimization import Trial
    from amltk.store import PathBucket

    path = "./config_path"

    trial = Trial(name="trial", config={"x": 1})

    trial.store({"config.json": trial.config}, where=path)

    config = trial.retrieve("config.json", where=path)
    print(config)
    import shutil; shutil.rmtree(path)  # markdown-exec: hide
    ```

    Args:
        key: The key of the item to retrieve, as listed in `.storage`.
        check: If provided, will check that the retrieved item is of the
            provided type. If not, will raise a `TypeError`. This
            is only used if `where=` is a `str`, `Path` or `Bucket`.

        where: Where to retrieve the items from.

            * If `None`, will use the bucket attached to the `Trial` if any,
                otherwise it will raise an error.

            * If a `str` or `Path`, a bucket will be created at the path,
            and the items will be retrieved from a sub-bucket with the
            name of the trial.

            * If a `Bucket`, will retrieve the items from a sub-bucket with the
            name of the trial.

    Returns:
        The retrieved item.

    Raises:
        TypeError: If `check=` is provided and  the retrieved item is not of the provided
            type.
    """  # noqa: E501
    # If not a Callable, we convert to a path bucket
    method: Bucket[str, Any]
    match where:
        case None:
            method = self.bucket
        case str():
            method = PathBucket(where, create=True)
        case Path():
            method = PathBucket(where, create=True)
        case Bucket():
            method = where

    # Store in a sub-bucket
    return method.sub(self.name)[key].load(check=check)
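
A sketch of the check= type guard on retrieval, assuming a local "demo-results" bucket:

retrieve-check
from amltk.optimization import Trial
from amltk.store import PathBucket

trial = Trial(name="demo", config={"x": 1}, bucket=PathBucket("demo-results"))
trial.store({"config.json": trial.config})

# Raises a TypeError if the loaded item is not a dict.
config = trial.retrieve("config.json", check=dict)
print(config)  # {'x': 1}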

def attach_extra(name, plugin_item) #

Attach a plugin item to the trial.

PARAMETER DESCRIPTION
name

The name of the plugin item.

TYPE: str

plugin_item

The plugin item.

TYPE: Any

Source code in src/amltk/optimization/trial.py
def attach_extra(self, name: str, plugin_item: Any) -> None:
    """Attach a plugin item to the trial.

    Args:
        name: The name of the plugin item.
        plugin_item: The plugin item.
    """
    self.extras[name] = plugin_item
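
A small sketch; this is how plugins end up populating .extras (the plugin name and item are illustrative):

attach-extra
from amltk.optimization import Trial

trial = Trial(name="demo", config={})
trial.attach_extra("my-plugin", {"budget": 60})
print(trial.extras)  # {'my-plugin': {'budget': 60}}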

def rich_renderables() #

The renderables for rich for this report.

Source code in src/amltk/optimization/trial.py
def rich_renderables(self) -> Iterable[RenderableType]:  # noqa: C901
    """The renderables for rich for this report."""
    from rich.panel import Panel
    from rich.pretty import Pretty
    from rich.table import Table
    from rich.text import Text

    items: list[RenderableType] = []
    table = Table.grid(padding=(0, 1), expand=False)

    # Predefined things
    table.add_row("config", Pretty(self.config))

    if self.fidelities:
        table.add_row("fidelities", Pretty(self.fidelities))

    if any(self.extras):
        table.add_row("extras", Pretty(self.extras))

    if self.seed:
        table.add_row("seed", Pretty(self.seed))

    if self.bucket:
        table.add_row("bucket", Pretty(self.bucket))

    if self.metrics:
        items.append(
            Panel(Pretty(self.metrics), title="Metrics", title_align="left"),
        )

    # Dynamic things
    if self.summary:
        table.add_row("summary", Pretty(self.summary))

    if any(self.storage):
        table.add_row("storage", Pretty(self.storage))

    if self.exception:
        table.add_row("exception", Text(str(self.exception), style="bold red"))

    if self.traceback:
        table.add_row("traceback", Text(self.traceback, style="bold red"))

    for name, profile in self.profiles.items():
        table.add_row("profile:" + name, Pretty(profile))

    items.append(table)

    yield from items


History#

The History is used to keep a structured record of what occurred with Trials and their associated Reports.

Usage

from amltk.optimization import Trial, History, Metric
from amltk.store import PathBucket

loss = Metric("loss", minimize=True)

def target_function(trial: Trial) -> Trial.Report:
    x = trial.config["x"]
    y = trial.config["y"]
    trial.store({"config.json": trial.config})

    with trial.begin():
        loss = x**2 - y

    if trial.exception:
        return trial.fail()

    return trial.success(loss=loss)

# ... usually obtained from an optimizer
bucket = PathBucket("all-trial-results")
history = History()

for x, y in zip([1, 2, 3], [4, 5, 6]):
    trial = Trial(name="some-unique-name", config={"x": x, "y": y}, bucket=bucket, metrics=[loss])
    report = target_function(trial)
    history.add(report)

print(history.df())
bucket.rmdir()  # markdown-exec: hide

                  status trial_seed ... time:kind time:unit
name                                ...
some-unique-name success            ...      wall   seconds
some-unique-name success            ...      wall   seconds
some-unique-name success            ...      wall   seconds

[3 rows x 20 columns]

You'll often need to perform some operations on a History, so we provide some utility functions here (put to use in the sketch at the end of this section):

  • filter(key=...) - Filters the history by some predicate, e.g. history.filter(lambda report: report.status == "success")
  • groupby(key=...) - Groups the history by some key, e.g. history.groupby(lambda report: report.config["x"] < 5)
  • sortby(key=...) - Sorts the history by some key, e.g. history.sortby(lambda report: report.time.end)

There is also some serialization capabilities built in, to allow you to store your reports and load them back in later:

  • df(...) - Output a pd.DataFrame of all the information available.
  • from_df(...) - Create a History from a pd.DataFrame.

You can also retrieve individual reports from the history by using their name, e.g. history["some-unique-name"] or iterate through the history with for report in history: ....
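
Putting those utilities together in one sketch (the exact return types of filter() and sortby() are assumptions here):

history-utilities
from amltk.optimization import Trial, History, Metric

loss = Metric("loss", minimize=True)
history = History()

for i, x in enumerate([3.0, 1.0, 2.0]):
    trial = Trial(name=f"trial-{i}", config={"x": x}, metrics=[loss])
    with trial.begin():
        pass
    history.add(trial.success(loss=x))

# Sort reports by their reported loss, best first.
best_first = history.sortby(lambda report: report.metrics["loss"])
print([report.name for report in best_first])  # ['trial-1', 'trial-2', 'trial-0']

# Keep only the successes, round-trip through a dataframe, and index by name.
successes = history.filter(lambda report: report.status == "success")
restored = History.from_df(successes.df())
print(restored["trial-1"].metrics)  # {'loss': 1.0}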