Trial

amltk.optimization.trial #

A Trial is typically the output of Optimizer.ask(), indicating what the optimizer would like to evaluate next. We provide a host of convenience methods attached to the Trial to make it easy to save results, store artifacts, and more.

Paired with the Trial is the Trial.Report class, providing an easy way to report back to the optimizer's tell() with a simple trial.success(cost=...) or trial.fail(cost=...) call.
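
As a rough sketch of how these pieces fit together, a typical loop looks something like the one below. The optimizer and target_function names are placeholders for whatever concrete Optimizer and evaluation function you use; only ask(), tell() and the Trial/Report types come from this module.

# Hypothetical ask/evaluate/tell loop; `optimizer` and `target_function`
# stand in for your own optimizer instance and evaluation function.
for _ in range(10):
    trial = optimizer.ask()            # what the optimizer wants evaluated next
    report = target_function(trial)    # evaluate it and build a Trial.Report
    optimizer.tell(report)             # feed the result back to the optimizer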

Trial#

amltk.optimization.trial.Trial dataclass #

Bases: RichRenderable, Generic[I]

A Trial encapsulates some configuration that needs to be evaluated. Typically, this is what is generated by an Optimizer.ask() call.

Usage

To begin a trial, you can use trial.begin(), which will catch exceptions and tracebacks and profile the block of code.

If all went smoothly, your trial was successful and you can use trial.success() to generate a success Report, typically passing what your chosen optimizer expects, e.g., "loss" or "cost".

If your trial failed, you can instead use trial.fail() to generate a failure Report, where any caught exception will be attached to it. Each Optimizer will take care of what to do from here.

from amltk.optimization import Trial, Metric
from amltk.store import PathBucket

cost = Metric("cost", minimize=True)

def target_function(trial: Trial) -> Trial.Report:
    x = trial.config["x"]
    y = trial.config["y"]

    with trial.begin():
        cost = x**2 - y

    if trial.exception:
        return trial.fail()

    return trial.success(cost=cost)

# ... usually obtained from an optimizer
trial = Trial(name="some-unique-name", config={"x": 1, "y": 2}, metrics=[cost])

report = target_function(trial)
print(report.df())

                   status trial_seed  ... time:kind time:unit
name                                  ...
some-unique-name  success       <NA>  ...      wall   seconds

[1 rows x 20 columns]

What you can return with trial.success() or trial.fail() depends on the metrics of the trial. Typically, an optimizer will provide the trial with the list of metrics.

Metrics

amltk.optimization.metric.Metric dataclass #

A metric with a given name, optimal direction, and possible bounds.

Some important properties are that each trial has a unique .name within the optimization run, a candidate .config to evaluate, a possible .seed to use, and an .info object, which holds any optimizer-specific information, if required by you.

Reporting success (or failure)

When using the success() or fail() method, make sure to provide values for all metrics specified in the .metrics attribute. Usually these are set by the optimizer generating the Trial.

Each metric has a unique name, and it's crucial to use the correct names when reporting success; otherwise a ValueError will be raised.

Reporting success for metrics

For example:

from amltk.optimization import Trial, Metric

# Gotten from some optimizer usually, i.e. via `optimizer.ask()`
trial = Trial(
    name="example_trial",
    config={"param": 42},
    metrics=[Metric(name="accuracy", minimize=False)]
)

# Incorrect usage (will raise an error)
try:
    report = trial.success(invalid_metric=0.95)
except ValueError as error:
    print(error)

# Correct usage
report = trial.success(accuracy=0.95)
Cannot report success without self.metrics=[Metric(name='accuracy', minimize=False, bounds=None)]. Please provide a value for the metric 'accuracy'.
Please provide 'accuracy' as `trial.success(accuracy=value)` or rename your metric to`Metric(name="{provided_key}", minimize=False, bounds=None)`

If using Plugins, they may insert some extra objects into the .extras dict.

To profile your trial, you can wrap the logic you'd like to check with trial.begin(), which will automatically catch any errors, record the traceback, and profile the block of code in terms of time and memory.

You can access the profiled time and memory using the .time and .memory attributes. If you've profile()'ed any other intervals, you can access them by name through trial.profiles. Please see the Profiler for more.

Profiling with a trial.

profile
from amltk.optimization import Trial

trial = Trial(name="some-unique-name", config={})

# ... somewhere where you've begun your trial.
with trial.profile("some_interval"):
    for work in range(100):
        pass

print(trial.profiler.df())
               memory:start_vms  memory:end_vms  ...  time:kind  time:unit
some_interval      1.550762e+09      1550761984  ...       wall    seconds

[1 rows x 12 columns]

You can also record anything you'd like into the .summary, a plain dict, or use trial.store() to store artifacts related to the trial.

What to put in .summary?

For large items, e.g. predictions or models, it is highly advised to .store() them to disk, especially if using a Task for multiprocessing.

Further, if serializing the report using report.df(), which returns a single row, or a History with history.df() for a dataframe consisting of many reports, then you'd likely only want to keep things in the summary that are scalar and can be serialised to disk by a pandas DataFrame.

_repr_html_ #

_repr_html_() -> str

Return an HTML representation of the object.

Source code in src/amltk/_richutil/renderable.py
def _repr_html_(self) -> str:
    """Return an HTML representation of the object."""
    return self._repr_pretty_()

_repr_pretty_ #

_repr_pretty_(*_: Any, **__: Any) -> str

Representation for rich printing.

Source code in src/amltk/_richutil/renderable.py
def _repr_pretty_(self, *_: Any, **__: Any) -> str:
    """Representation for rich printing."""
    from io import StringIO

    import rich

    with closing(StringIO()) as buffer:
        rich.print(self.__rich__(), file=buffer)
        return buffer.getvalue()
Report#

amltk.optimization.trial.Trial.Report dataclass #

Bases: RichRenderable, Generic[I2]

The Trial.Report encapsulates a Trial, its status and any metrics/exceptions that may have occurred.

Typically you will not create these yourself, but instead use trial.success() or trial.fail() to generate them.

from amltk.optimization import Trial, Metric

loss = Metric("loss", minimize=True)

trial = Trial(name="trial", config={"x": 1}, metrics=[loss])

with trial.begin():
    # Do some work
    # ...
    report: Trial.Report = trial.success(loss=1)

print(report.df())
        status  trial_seed exception  ... time:duration time:kind  time:unit
name                                  ...                                   
trial  success        <NA>        NA  ...      0.000026      wall    seconds

[1 rows x 19 columns]

These reports are used to report back metrics to an Optimizer with Optimizer.tell() but can also be stored for your own uses.

You can access the original trial with the .trial attribute, and the Status of the trial with the .status attribute.

You may also want to check out the History class for storing a collection of Reports, which makes it easier to convert them to a dataframe or perform some common hyperparameter optimization analysis of the metrics.

_repr_html_ #

_repr_html_() -> str

Return an HTML representation of the object.

Source code in src/amltk/_richutil/renderable.py
def _repr_html_(self) -> str:
    """Return an HTML representation of the object."""
    return self._repr_pretty_()

_repr_pretty_ #

_repr_pretty_(*_: Any, **__: Any) -> str

Representation for rich printing.

Source code in src/amltk/_richutil/renderable.py
def _repr_pretty_(self, *_: Any, **__: Any) -> str:
    """Representation for rich printing."""
    from io import StringIO

    import rich

    with closing(StringIO()) as buffer:
        rich.print(self.__rich__(), file=buffer)
        return buffer.getvalue()

Trial dataclass #

Bases: RichRenderable, Generic[I]

A Trial encapsulates some configuration that needs to be evaluated. Typically, this is what is generated by an Optimizer.ask() call.

Usage

To begin a trial, you can use trial.begin(), which will catch exceptions and tracebacks and profile the block of code.

If all went smoothly, your trial was successful and you can use trial.success() to generate a success Report, typically passing what your chosen optimizer expects, e.g., "loss" or "cost".

If your trial failed, you can instead use trial.fail() to generate a failure Report, where any caught exception will be attached to it. Each Optimizer will take care of what to do from here.

from amltk.optimization import Trial, Metric
from amltk.store import PathBucket

cost = Metric("cost", minimize=True)

def target_function(trial: Trial) -> Trial.Report:
    x = trial.config["x"]
    y = trial.config["y"]

    with trial.begin():
        cost = x**2 - y

    if trial.exception:
        return trial.fail()

    return trial.success(cost=cost)

# ... usually obtained from an optimizer
trial = Trial(name="some-unique-name", config={"x": 1, "y": 2}, metrics=[cost])

report = target_function(trial)
print(report.df())

                   status trial_seed  ... time:kind time:unit
name                                  ...
some-unique-name  success       <NA>  ...      wall   seconds

[1 rows x 20 columns]

What you can return with trial.success() or trial.fail() depends on the metrics of the trial. Typically, an optimizer will provide the trial with the list of metrics.

Metrics

amltk.optimization.metric.Metric dataclass #

A metric with a given name, optimal direction, and possible bounds.
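
For instance, a metric can be declared with or without bounds; a minimal sketch:

from amltk.optimization import Metric

# Maximize accuracy, which is known to lie in [0, 1]
accuracy = Metric("accuracy", minimize=False, bounds=(0.0, 1.0))

# Minimize runtime, with no known bounds
runtime = Metric("runtime", minimize=True)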

Some important properties are that each trial has a unique .name within the optimization run, a candidate .config to evaluate, a possible .seed to use, and an .info object, which holds any optimizer-specific information, if required by you.

Reporting success (or failure)

When using the success() or fail() method, make sure to provide values for all metrics specified in the .metrics attribute. Usually these are set by the optimizer generating the Trial.

Each metric has a unique name, and it's crucial to use the correct names when reporting success; otherwise a ValueError will be raised.

Reporting success for metrics

For example:

from amltk.optimization import Trial, Metric

# Gotten from some optimizer usually, i.e. via `optimizer.ask()`
trial = Trial(
    name="example_trial",
    config={"param": 42},
    metrics=[Metric(name="accuracy", minimize=False)]
)

# Incorrect usage (will raise an error)
try:
    report = trial.success(invalid_metric=0.95)
except ValueError as error:
    print(error)

# Correct usage
report = trial.success(accuracy=0.95)
Cannot report success without self.metrics=[Metric(name='accuracy', minimize=False, bounds=None)]. Please provide a value for the metric 'accuracy'.
Please provide 'accuracy' as `trial.success(accuracy=value)` or rename your metric to`Metric(name="{provided_key}", minimize=False, bounds=None)`

If using Plugins, they may insert some extra objects into the .extras dict.

To profile your trial, you can wrap the logic you'd like to check with trial.begin(), which will automatically catch any errors, record the traceback, and profile the block of code in terms of time and memory.

You can access the profiled time and memory using the .time and .memory attributes. If you've profile()'ed any other intervals, you can access them by name through trial.profiles. Please see the Profiler for more.

Profiling with a trial.

profile
from amltk.optimization import Trial

trial = Trial(name="some-unique-name", config={})

# ... somewhere where you've begun your trial.
with trial.profile("some_interval"):
    for work in range(100):
        pass

print(trial.profiler.df())
               memory:start_vms  memory:end_vms  ...  time:kind  time:unit
some_interval      1.550762e+09      1550761984  ...       wall    seconds

[1 rows x 12 columns]

You can also record anything you'd like into the .summary, a plain dict, or use trial.store() to store artifacts related to the trial.

What to put in .summary?

For large items, e.g. predictions or models, it is highly advised to .store() them to disk, especially if using a Task for multiprocessing.

Further, if serializing the report using report.df(), which returns a single row, or a History with history.df() for a dataframe consisting of many reports, then you'd likely only want to keep things in the summary that are scalar and can be serialised to disk by a pandas DataFrame.
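
A minimal sketch of this split, assuming a Trial with a PathBucket attached; the key names used here are purely illustrative:

from amltk.optimization import Trial
from amltk.store import PathBucket

trial = Trial(name="some-trial", config={"x": 1}, bucket=PathBucket("results"))

# Small scalars go in the summary and end up as "summary:" columns in report.df()
trial.summary["n_features"] = 20

# Larger artifacts are better written to disk through the trial's bucket
trial.store({"large_output.json": {"predictions": [1, 2, 3]}})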

bucket class-attribute instance-attribute #

bucket: PathBucket = field(
    default_factory=lambda: PathBucket(
        "unknown-trial-bucket"
    )
)

The bucket to store trial related output to.

config instance-attribute #

config: Mapping[str, Any]

The config of the trial provided by the optimizer.

exception class-attribute instance-attribute #

exception: BaseException | None = field(
    repr=True, default=None
)

The exception raised by the trial, if any.

extras class-attribute instance-attribute #

extras: dict[str, Any] = field(default_factory=dict)

Any extras attached to the trial.

fidelities class-attribute instance-attribute #

fidelities: dict[str, Any] | None = None

The fidelities at which to evaluate the trial, if any.

info class-attribute instance-attribute #

info: I | None = field(default=None, repr=False)

The info of the trial provided by the optimizer.

memory class-attribute instance-attribute #

memory: Interval = field(repr=False, default_factory=na)

The memory used by the trial, once ended.

metrics class-attribute instance-attribute #

metrics: Sequence[Metric] = field(default_factory=list)

The metrics associated with the trial.

name instance-attribute #

name: str

The unique name of the trial.

profiler class-attribute instance-attribute #

profiler: Profiler = field(
    repr=False,
    default_factory=lambda: Profiler(
        memory_unit="B", time_kind="wall"
    ),
)

A profiler for this trial.

profiles property #

profiles: Mapping[str, Interval]

The profiles of the trial.

seed class-attribute instance-attribute #

seed: int | None = None

The seed to use if suggested by the optimizer.

storage class-attribute instance-attribute #

storage: set[Any] = field(default_factory=set)

Anything stored in the trial; the elements of the set are keys that can be used to retrieve them later, such as a Path.

summary class-attribute instance-attribute #

summary: dict[str, Any] = field(default_factory=dict)

The summary of the trial. These are for summary statistics of a trial and are single values.

time class-attribute instance-attribute #

time: Interval = field(repr=False, default_factory=na)

The time taken by the trial, once ended.

traceback class-attribute instance-attribute #

traceback: str | None = field(repr=False, default=None)

The traceback of the exception, if any.

Report dataclass #

Bases: RichRenderable, Generic[I2]

The Trial.Report encapsulates a Trial, its status and any metrics/exceptions that may have occurred.

Typically you will not create these yourself, but instead use trial.success() or trial.fail() to generate them.

from amltk.optimization import Trial, Metric

loss = Metric("loss", minimize=True)

trial = Trial(name="trial", config={"x": 1}, metrics=[loss])

with trial.begin():
    # Do some work
    # ...
    report: Trial.Report = trial.success(loss=1)

print(report.df())
        status  trial_seed exception  ... time:duration time:kind  time:unit
name                                  ...                                   
trial  success        <NA>        NA  ...      0.000024      wall    seconds

[1 rows x 19 columns]

These reports are used to report back metrics to an Optimizer with Optimizer.tell() but can also be stored for your own uses.

You can access the original trial with the .trial attribute, and the Status of the trial with the .status attribute.

You may also want to check out the History class for storing a collection of Reports, which makes it easier to convert them to a dataframe or perform some common hyperparameter optimization analysis of the metrics.
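
A rough sketch of collecting reports, assuming History is importable from amltk.optimization and exposes add() and df() as described in its own documentation:

from amltk.optimization import History, Metric, Trial

loss = Metric("loss", minimize=True)
history = History()

for i in range(3):
    trial = Trial(name=f"trial-{i}", config={"x": i}, metrics=[loss])
    with trial.begin():
        report = trial.success(loss=float(i))
    history.add(report)

print(history.df())  # one row per report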

bucket property #
bucket: PathBucket

The bucket attached to the trial.

config property #
config: Mapping[str, Any]

The config of the trial.

exception property #
exception: BaseException | None

The exception of the trial, if any.

info property #
info: I2 | None

The info of the trial, specific to the optimizer that issued it.

memory property #
memory: Interval

The memory of the trial.

metric_defs class-attribute instance-attribute #
metric_defs: dict[str, Metric] = field(init=False)

A lookup from metric name to its Metric definition.

metric_names class-attribute instance-attribute #
metric_names: tuple[str, ...] = field(init=False)

The names of the metrics.

metric_values class-attribute instance-attribute #
metric_values: tuple[Value, ...] = field(
    default_factory=tuple
)

The recorded metric values of the trial, each linked to its Metric definition.

metrics class-attribute instance-attribute #
metrics: dict[str, float] = field(init=False)

The metric values of the trial.

name property #
name: str

The name of the trial.

profiles property #
profiles: Mapping[str, Interval]

The profiles of the trial.

status instance-attribute #
status: Status

The status of the trial.

storage property #
storage: set[str]

The storage of the trial.

summary property #
summary: dict[str, Any]

The summary of the trial.

time property #
time: Interval

The time of the trial.

traceback property #
traceback: str | None

The traceback of the trial, if any.

trial instance-attribute #
trial: Trial[I2]

The trial that was run.

df #
df(
    *,
    profiles: bool = True,
    configs: bool = True,
    summary: bool = True,
    metrics: bool = True
) -> DataFrame

Get a dataframe of the trial.

Prefixes

  • summary: Entries will be prefixed with "summary:"
  • config: Entries will be prefixed with "config:"
  • storage: Entries will be prefixed with "storage:"
  • metrics: Entries will be prefixed with "metric:"
  • profile:<name>: Entries will be prefixed with "profile:<name>:"
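
As an illustrative sketch, these prefixes show up as column names in the returned single-row dataframe; the trial below is constructed only for demonstration:

from amltk.optimization import Metric, Trial

loss = Metric("loss", minimize=True)
trial = Trial(name="prefix-demo", config={"x": 1}, metrics=[loss])

with trial.begin():
    report = trial.success(loss=0.5)

row = report.df()
print(row.filter(like="config:"))  # the "config:x" column
print(row.filter(like="metric:"))  # the "metric:loss" column
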
PARAMETER DESCRIPTION
profiles

Whether to include the profiles.

TYPE: bool DEFAULT: True

configs

Whether to include the configs.

TYPE: bool DEFAULT: True

summary

Whether to include the summary.

TYPE: bool DEFAULT: True

metrics

Whether to include the metrics.

TYPE: bool DEFAULT: True

Source code in src/amltk/optimization/trial.py
def df(
    self,
    *,
    profiles: bool = True,
    configs: bool = True,
    summary: bool = True,
    metrics: bool = True,
) -> pd.DataFrame:
    """Get a dataframe of the trial.

    !!! note "Prefixes"

        * `summary`: Entries will be prefixed with `#!python "summary:"`
        * `config`: Entries will be prefixed with `#!python "config:"`
        * `storage`: Entries will be prefixed with `#!python "storage:"`
        * `metrics`: Entries will be prefixed with `#!python "metrics:"`
        * `profile:<name>`: Entries will be prefixed with
            `#!python "profile:<name>:"`

    Args:
        profiles: Whether to include the profiles.
        configs: Whether to include the configs.
        summary: Whether to include the summary.
        metrics: Whether to include the metrics.
    """
    items = {
        "name": self.name,
        "status": str(self.status),
        "trial_seed": self.trial.seed if self.trial.seed else np.nan,
        "exception": str(self.exception) if self.exception else "NA",
        "traceback": str(self.traceback) if self.traceback else "NA",
        "bucket": str(self.bucket.path),
    }
    if metrics:
        for value in self.metric_values:
            items[f"metric:{value.metric}"] = value.value
    if summary:
        items.update(**prefix_keys(self.trial.summary, "summary:"))
    if configs:
        items.update(**prefix_keys(self.trial.config, "config:"))
    if profiles:
        for name, profile in sorted(self.profiles.items(), key=lambda x: x[0]):
            # We log this one seperatly
            if name == "trial":
                items.update(profile.to_dict())
            else:
                items.update(profile.to_dict(prefix=f"profile:{name}"))

    return pd.DataFrame(items, index=[0]).convert_dtypes().set_index("name")
from_df classmethod #
from_df(df: DataFrame | Series) -> Report

Create a report from a dataframe.

See Also
Source code in src/amltk/optimization/trial.py
@classmethod
def from_df(cls, df: pd.DataFrame | pd.Series) -> Trial.Report:
    """Create a report from a dataframe.

    See Also:
        * [`.from_dict()`][amltk.optimization.Trial.Report.from_dict]
    """
    if isinstance(df, pd.DataFrame):
        if len(df) != 1:
            raise ValueError(
                f"Expected a dataframe with one row, got {len(df)} rows.",
            )
        series = df.iloc[0]
    else:
        series = df

    data_dict = {"name": series.name, **series.to_dict()}
    return cls.from_dict(data_dict)
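
A small round-trip sketch, assuming a report produced as in the earlier examples:

from amltk.optimization import Metric, Trial

loss = Metric("loss", minimize=True)
trial = Trial(name="roundtrip-trial", config={"x": 1}, metrics=[loss])

with trial.begin():
    report = trial.success(loss=0.5)

df = report.df()                     # one-row dataframe indexed by the trial name
restored = Trial.Report.from_df(df)  # rebuild a Report from that row
print(restored.status == report.status)
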
from_dict classmethod #
from_dict(d: Mapping[str, Any]) -> Report

Create a report from a dictionary.

Prefixes

Please see .df() for information on what the prefixes should be for certain fields.

PARAMETER DESCRIPTION
d

The dictionary to create the report from.

TYPE: Mapping[str, Any]

RETURNS DESCRIPTION
Report

The created report.

Source code in src/amltk/optimization/trial.py
@classmethod
def from_dict(cls, d: Mapping[str, Any]) -> Trial.Report:
    """Create a report from a dictionary.

    !!! note "Prefixes"

        Please see [`.df()`][amltk.optimization.Trial.Report.df]
        for information on what the prefixes should be for certain fields.

    Args:
        d: The dictionary to create the report from.

    Returns:
        The created report.
    """
    prof_dict = mapping_select(d, "profile:")
    if any(prof_dict):
        profile_names = sorted(
            {name.rsplit(":", maxsplit=2)[0] for name in prof_dict},
        )
        profiles = {
            name: Profile.from_dict(mapping_select(prof_dict, f"{name}:"))
            for name in profile_names
        }
    else:
        profiles = {}

    # NOTE: We assume the order of the objectives are in the right
    # order in the dict. If we attempt to force a sort-order, we may
    # deserialize incorrectly. By not having a sort order, we rely
    # on serialization to keep the order, which is not ideal either.
    # May revisit this if we need to
    raw_metrics: dict[str, float] = mapping_select(d, "metric:")
    _intermediate = {
        Metric.from_str(name): value for name, value in raw_metrics.items()
    }
    metrics: dict[Metric, Metric.Value] = {
        metric: metric.as_value(value)
        for metric, value in _intermediate.items()
    }

    _trial_profile_items = {
        k: v for k, v in d.items() if k.startswith(("memory:", "time:"))
    }
    if any(_trial_profile_items):
        trial_profile = Profile.from_dict(_trial_profile_items)
        profiles["trial"] = trial_profile
    else:
        trial_profile = Profile.na()

    exception = d.get("exception")
    traceback = d.get("traceback")
    trial_seed = d.get("trial_seed")
    if pd.isna(exception) or exception == "NA":  # type: ignore
        exception = None
    if pd.isna(traceback) or traceback == "NA":  # type: ignore
        traceback = None
    if pd.isna(trial_seed):  # type: ignore
        trial_seed = None

    if (_bucket := d.get("bucket")) is not None:
        bucket = PathBucket(_bucket)
    else:
        bucket = PathBucket(f"uknown_trial_bucket-{datetime.now().isoformat()}")

    trial: Trial[None] = Trial(
        name=d["name"],
        config=mapping_select(d, "config:"),
        info=None,  # We don't save this to disk so we load it back as None
        bucket=bucket,
        seed=trial_seed,
        fidelities=mapping_select(d, "fidelities:"),
        time=trial_profile.time,
        memory=trial_profile.memory,
        profiler=Profiler(profiles=profiles),
        metrics=list(metrics.keys()),
        summary=mapping_select(d, "summary:"),
        exception=exception,
        traceback=traceback,
    )
    status = Trial.Status(dict_get_not_none(d, "status", "unknown"))
    _values: dict[str, float] = {m.name: r.value for m, r in metrics.items()}
    if status == Trial.Status.SUCCESS:
        return trial.success(**_values)

    if status == Trial.Status.FAIL:
        return trial.fail(**_values)

    if status == Trial.Status.CRASHED:
        return trial.crashed(
            exception=Exception("Unknown status.")
            if trial.exception is None
            else None,
        )

    return trial.crashed(exception=Exception("Unknown status."))
retrieve #
retrieve(
    key: str,
    *,
    where: str | Path | Bucket[str, Any] | None = None,
    check: type[R] | None = None
) -> R | Any

Retrieve items related to the trial.

Same argument for where=

Use the same argument for where= as you did for store().

retrieve
from amltk.optimization import Trial
from amltk.store import PathBucket

bucket = PathBucket("results")
trial = Trial(name="trial", config={"x": 1}, bucket=bucket)

trial.store({"config.json": trial.config})
with trial.begin():
    report = trial.success()

config = report.retrieve("config.json")
print(config)
{'x': 1}

You could also create a Bucket and use that instead.

retrieve-bucket
from amltk.optimization import Trial
from amltk.store import PathBucket

bucket = PathBucket("results")

trial = Trial(name="trial", config={"x": 1}, bucket=bucket)

trial.store({"config.json": trial.config})

with trial.begin():
    report = trial.success()

config = report.retrieve("config.json")
print(config)
{'x': 1}
PARAMETER DESCRIPTION
key

The key of the item to retrieve as said in .storage.

TYPE: str

check

If provided, will check that the retrieved item is of the provided type. If not, will raise a TypeError. This is only used if where= is a str, Path or Bucket.

TYPE: type[R] | None DEFAULT: None

where

Where to retrieve the items from.

  • If None, will use the bucket attached to the Trial if any, otherwise it will raise an error.

  • If a str or Path, a bucket will be created at the path, and the items will be retrieved from a sub-bucket with the name of the trial.

  • If a Bucket, will retrieve the items from a sub-bucket with the name of the trial.

TYPE: str | Path | Bucket[str, Any] | None DEFAULT: None

RETURNS DESCRIPTION
R | Any

The retrieved item.

RAISES DESCRIPTION
TypeError

If check= is provided and the retrieved item is not of the provided type.

Source code in src/amltk/optimization/trial.py
def retrieve(
    self,
    key: str,
    *,
    where: str | Path | Bucket[str, Any] | None = None,
    check: type[R] | None = None,
) -> R | Any:
    """Retrieve items related to the trial.

    !!! note "Same argument for `where=`"

         Use the same argument for `where=` as you did for `store()`.

    ```python exec="true" source="material-block" result="python" title="retrieve" hl_lines="7"
    from amltk.optimization import Trial
    from amltk.store import PathBucket

    bucket = PathBucket("results")
    trial = Trial(name="trial", config={"x": 1}, bucket=bucket)

    trial.store({"config.json": trial.config})
    with trial.begin():
        report = trial.success()

    config = report.retrieve("config.json")
    print(config)
    ```

    You could also create a Bucket and use that instead.

    ```python exec="true" source="material-block" result="python" title="retrieve-bucket" hl_lines="11"

    from amltk.optimization import Trial
    from amltk.store import PathBucket

    bucket = PathBucket("results")

    trial = Trial(name="trial", config={"x": 1}, bucket=bucket)

    trial.store({"config.json": trial.config})

    with trial.begin():
        report = trial.success()

    config = report.retrieve("config.json")
    print(config)
    ```

    Args:
        key: The key of the item to retrieve as said in `.storage`.
        check: If provided, will check that the retrieved item is of the
            provided type. If not, will raise a `TypeError`. This
            is only used if `where=` is a `str`, `Path` or `Bucket`.
        where: Where to retrieve the items from.

            * If `None`, will use the bucket attached to the `Trial` if any,
                otherwise it will raise an error.

            * If a `str` or `Path`, will store
            a bucket will be created at the path, and the items will be
            retrieved from a sub-bucket with the name of the trial.

            * If a `Bucket`, will retrieve the items from a sub-bucket with the
            name of the trial.

    Returns:
        The retrieved item.

    Raises:
        TypeError: If `check=` is provided and  the retrieved item is not of the provided
            type.
    """  # noqa: E501
    return self.trial.retrieve(key, where=where, check=check)
rich_renderables #
rich_renderables() -> Iterable[RenderableType]

The renderables for rich for this report.

Source code in src/amltk/optimization/trial.py
def rich_renderables(self) -> Iterable[RenderableType]:
    """The renderables for rich for this report."""
    from rich.pretty import Pretty
    from rich.text import Text

    yield Text.assemble(
        ("Status", "bold"),
        ("(", "default"),
        self.status.__rich__(),
        (")", "default"),
    )
    yield Pretty(self.metrics)
    yield from self.trial.rich_renderables()
store #
store(
    items: Mapping[str, T],
    *,
    where: (
        str
        | Path
        | Bucket
        | Callable[[str, Mapping[str, T]], None]
        | None
    ) = None
) -> None

Store items related to the trial.

See: Trial.store()

Source code in src/amltk/optimization/trial.py
def store(
    self,
    items: Mapping[str, T],
    *,
    where: (
        str | Path | Bucket | Callable[[str, Mapping[str, T]], None] | None
    ) = None,
) -> None:
    """Store items related to the trial.

    See: [`Trial.store()`][amltk.optimization.trial.Trial.store]
    """
    self.trial.store(items, where=where)

Status #

Bases: str, Enum

The status of a trial.

CRASHED class-attribute instance-attribute #
CRASHED = 'crashed'

The trial crashed.

FAIL class-attribute instance-attribute #
FAIL = 'fail'

The trial failed.

SUCCESS class-attribute instance-attribute #
SUCCESS = 'success'

The trial was successful.

UNKNOWN class-attribute instance-attribute #
UNKNOWN = 'unknown'

The status of the trial is unknown.
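
A small sketch of branching on the status of a report; the trial here is only for illustration:

from amltk.optimization import Metric, Trial

loss = Metric("loss", minimize=True)
trial = Trial(name="status-demo", config={"x": 1}, metrics=[loss])

with trial.begin():
    report = trial.success(loss=0.5)

if report.status is Trial.Status.SUCCESS:
    print("keep this configuration")
elif report.status in (Trial.Status.FAIL, Trial.Status.CRASHED):
    print(report.exception)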

attach_extra #

attach_extra(name: str, plugin_item: Any) -> None

Attach a plugin item to the trial.

PARAMETER DESCRIPTION
name

The name of the plugin item.

TYPE: str

plugin_item

The plugin item.

TYPE: Any

Source code in src/amltk/optimization/trial.py
def attach_extra(self, name: str, plugin_item: Any) -> None:
    """Attach a plugin item to the trial.

    Args:
        name: The name of the plugin item.
        plugin_item: The plugin item.
    """
    self.extras[name] = plugin_item
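
A minimal sketch of attaching and reading back an extra; the name and value are purely illustrative:

from amltk.optimization import Trial

trial = Trial(name="extras-demo", config={"x": 1})
trial.attach_extra("my_plugin_data", {"worker": 3})

print(trial.extras["my_plugin_data"])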

begin #

begin(
    time: (
        Kind | Literal["wall", "cpu", "process"] | None
    ) = None,
    memory_unit: (
        Unit | Literal["B", "KB", "MB", "GB"] | None
    ) = None,
) -> Iterator[None]

Begin the trial with a contextmanager.

Will begin timing the trial in the with block, attaching the profiled time and memory to the trial once completed, under .profile.time and .profile.memory attributes.

If an exception is raised, it will be attached to the trial under .exception with the traceback attached to the actual error message, such that it can be pickled and sent back to the main process loop.

begin
from amltk.optimization import Trial

trial = Trial(name="trial", config={"x": 1})

with trial.begin():
    # Do some work
    pass

print(trial.memory)
print(trial.time)
Memory.Interval(start_vms=1550761984.0, start_rss=297996288.0, end_vms=1550761984, end_rss=297996288, unit=bytes)
Timer.Interval(start=1706255424.0994244, end=1706255424.0994341, kind=wall, unit=seconds)
begin-fail
from amltk.optimization import Trial

trial = Trial(name="trial", config={"x": -1})

with trial.begin():
    raise ValueError("x must be positive")

print(trial.exception)
print(trial.traceback)
print(trial.memory)
print(trial.time)
x must be positive
Traceback (most recent call last):
  File "/opt/hostedtoolcache/Python/3.10.13/x64/lib/python3.10/site-packages/amltk/optimization/trial.py", line 336, in begin
    yield
  File "<code block: n33; title begin-fail>", line 6, in <module>
ValueError: x must be positive

Memory.Interval(start_vms=1550761984.0, start_rss=297996288.0, end_vms=1550761984, end_rss=297996288, unit=bytes)
Timer.Interval(start=1706255424.1034796, end=1706255424.1037235, kind=wall, unit=seconds)
PARAMETER DESCRIPTION
time

The timer kind to use for the trial. Defaults to the default timer kind of the profiler.

TYPE: Kind | Literal['wall', 'cpu', 'process'] | None DEFAULT: None

memory_unit

The memory unit to use for the trial. Defaults to the default memory unit of the profiler.

TYPE: Unit | Literal['B', 'KB', 'MB', 'GB'] | None DEFAULT: None

Source code in src/amltk/optimization/trial.py
@contextmanager
def begin(
    self,
    time: Timer.Kind | Literal["wall", "cpu", "process"] | None = None,
    memory_unit: Memory.Unit | Literal["B", "KB", "MB", "GB"] | None = None,
) -> Iterator[None]:
    """Begin the trial with a `contextmanager`.

    Will begin timing the trial in the `with` block, attaching the profiled time and memory
    to the trial once completed, under `.profile.time` and `.profile.memory` attributes.

    If an exception is raised, it will be attached to the trial under `.exception`
    with the traceback attached to the actual error message, such that it can
    be pickled and sent back to the main process loop.

    ```python exec="true" source="material-block" result="python" title="begin" hl_lines="5"
    from amltk.optimization import Trial

    trial = Trial(name="trial", config={"x": 1})

    with trial.begin():
        # Do some work
        pass

    print(trial.memory)
    print(trial.time)
    ```

    ```python exec="true" source="material-block" result="python" title="begin-fail" hl_lines="5"
    from amltk.optimization import Trial

    trial = Trial(name="trial", config={"x": -1})

    with trial.begin():
        raise ValueError("x must be positive")

    print(trial.exception)
    print(trial.traceback)
    print(trial.memory)
    print(trial.time)
    ```

    Args:
        time: The timer kind to use for the trial. Defaults to the default
            timer kind of the profiler.
        memory_unit: The memory unit to use for the trial. Defaults to the
            default memory unit of the profiler.
    """  # noqa: E501
    with self.profiler(name="trial", memory_unit=memory_unit, time_kind=time):
        try:
            yield
        except Exception as error:  # noqa: BLE001
            self.exception = error
            self.traceback = traceback.format_exc()
        finally:
            self.time = self.profiler["trial"].time
            self.memory = self.profiler["trial"].memory

copy #

copy() -> Self

Create a copy of the trial.

RETURNS DESCRIPTION
Self

The copy of the trial.

Source code in src/amltk/optimization/trial.py
def copy(self) -> Self:
    """Create a copy of the trial.

    Returns:
        The copy of the trial.
    """
    return copy.deepcopy(self)
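
A small sketch showing that the copy is fully independent of the original trial:

from amltk.optimization import Trial

trial = Trial(name="copy-demo", config={"x": 1})

clone = trial.copy()
clone.summary["note"] = "only on the copy"

print("note" in trial.summary)  # False, the original is untouched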

crashed #

crashed(
    exception: BaseException | None = None,
    traceback: str | None = None,
) -> Report[I]

Generate a crash report.

Note

You will typically not create these manually, but instead if we don't receive a report from a target function evaluation, but only an error, we assume something crashed and generate a crash report for you.

Non-specified metrics

We will use the .metrics to determine the .worst value of each metric, using those as the reported metric values.

PARAMETER DESCRIPTION
exception

The exception that caused the crash. If not provided, the exception will be taken from the trial. If this is still None, a RuntimeError will be raised.

TYPE: BaseException | None DEFAULT: None

traceback

The traceback of the exception. If not provided, the traceback will be taken from the trial if there is one there.

TYPE: str | None DEFAULT: None

RETURNS DESCRIPTION
Report[I]

The report of the trial.

Source code in src/amltk/optimization/trial.py
def crashed(
    self,
    exception: BaseException | None = None,
    traceback: str | None = None,
) -> Trial.Report[I]:
    """Generate a crash report.

    !!! note

        You will typically not create these manually, but instead if we don't
        recieve a report from a target function evaluation, but only an error,
        we assume something crashed and generate a crash report for you.

    !!! note "Non specifed metrics"

        We will use the [`.metrics`][amltk.optimization.Trial.metrics] to determine
        the [`.worst`][amltk.optimization.Metric.worst] value of the metric,
        using that as the reported metrics

    Args:
        exception: The exception that caused the crash. If not provided, the
            exception will be taken from the trial. If this is still `None`,
            a `RuntimeError` will be raised.
        traceback: The traceback of the exception. If not provided, the
            traceback will be taken from the trial if there is one there.

    Returns:
        The report of the trial.
    """
    if exception is None and self.exception is None:
        raise RuntimeError(
            "Cannot generate a crash report without an exception."
            " Please provide an exception or use `with trial.begin():` to start"
            " the trial.",
        )

    self.exception = exception if exception else self.exception
    self.traceback = traceback if traceback else self.traceback

    return Trial.Report(
        trial=self,
        status=Trial.Status.CRASHED,
        metric_values=tuple(metric.worst for metric in self.metrics),
    )
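
A minimal sketch of generating a crash report manually, although normally amltk does this for you; the metric and exception are illustrative:

from amltk.optimization import Metric, Trial

loss = Metric("loss", minimize=True, bounds=(0, 100))
trial = Trial(name="crash-demo", config={}, metrics=[loss])

report = trial.crashed(exception=RuntimeError("worker died"))
print(report.status)   # the crashed status
print(report.metrics)  # worst value of each metric, e.g. {'loss': 100.0}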

delete_from_storage #

delete_from_storage(
    items: Iterable[str],
    *,
    where: (
        str
        | Path
        | Bucket
        | Callable[[str, Iterable[str]], dict[str, bool]]
        | None
    ) = None
) -> dict[str, bool]

Delete items related to the trial.

delete-storage
from amltk.optimization import Trial
from amltk.store import PathBucket

bucket = PathBucket("results")
trial = Trial(name="trial", config={"x": 1}, info={}, bucket=bucket)

trial.store({"config.json": trial.config})
trial.delete_from_storage(items=["config.json"])

print(trial.storage)
{'config.json'}

You could also create a Bucket and use that instead.

delete-storage-bucket
from amltk.optimization import Trial
from amltk.store import PathBucket

bucket = PathBucket("results")
trial = Trial(name="trial", config={"x": 1}, bucket=bucket)

trial.store({"config.json": trial.config})
trial.delete_from_storage(items=["config.json"])

print(trial.storage)
{'config.json'}
PARAMETER DESCRIPTION
items

The items to delete, an iterable of keys

TYPE: Iterable[str]

where

Where the items are stored

  • If None, will use the bucket attached to the Trial if any, otherwise it will raise an error.

  • If a str or Path, will lookup a bucket at the path, and the items will be deleted from a sub-bucket with the name of the trial.

  • If a Bucket, will delete the items in a sub-bucket with the name of the trial.

  • If a Callable, will call the callable with the name of the trial and the keys of the items to delete. It should return a mapping from the key to whether it was deleted or not.

TYPE: str | Path | Bucket | Callable[[str, Iterable[str]], dict[str, bool]] | None DEFAULT: None

RETURNS DESCRIPTION
dict[str, bool]

A dict from the key to whether it was deleted or not.

Source code in src/amltk/optimization/trial.py
def delete_from_storage(
    self,
    items: Iterable[str],
    *,
    where: (
        str | Path | Bucket | Callable[[str, Iterable[str]], dict[str, bool]] | None
    ) = None,
) -> dict[str, bool]:
    """Delete items related to the trial.

    ```python exec="true" source="material-block" result="python" title="delete-storage" hl_lines="6"
    from amltk.optimization import Trial
    from amltk.store import PathBucket

    bucket = PathBucket("results")
    trial = Trial(name="trial", config={"x": 1}, info={}, bucket=bucket)

    trial.store({"config.json": trial.config})
    trial.delete_from_storage(items=["config.json"])

    print(trial.storage)
    ```

    You could also create a Bucket and use that instead.

    ```python exec="true" source="material-block" result="python" title="delete-storage-bucket" hl_lines="9"
    from amltk.optimization import Trial
    from amltk.store import PathBucket

    bucket = PathBucket("results")
    trial = Trial(name="trial", config={"x": 1}, bucket=bucket)

    trial.store({"config.json": trial.config})
    trial.delete_from_storage(items=["config.json"])

    print(trial.storage)
    ```

    Args:
        items: The items to delete, an iterable of keys
        where: Where the items are stored

            * If `None`, will use the bucket attached to the `Trial` if any,
                otherwise it will raise an error.

            * If a `str` or `Path`, will lookup a bucket at the path,
            and the items will be deleted from a sub-bucket with the name of the trial.

            * If a `Bucket`, will delete the items in a sub-bucket with the
            name of the trial.

            * If a `Callable`, will call the callable with the name of the
            trial and the keys of the items to delete. Should a mapping from
            the key to whether it was deleted or not.

    Returns:
        A dict from the key to whether it was deleted or not.
    """  # noqa: E501
    # If not a Callable, we convert to a path bucket
    method: Bucket
    match where:
        case None:
            method = self.bucket
        case str() | Path():
            method = PathBucket(where, create=False)
        case Bucket():
            method = where
        case _:
            # Leave it up to supplied method
            return where(self.name, items)

    sub_bucket = method.sub(self.name)
    return sub_bucket.remove(items)

fail #

fail(**metrics: float | int) -> Report[I]

Generate a failure report.

Non-specified metrics

If you do not specify metrics, this will use the .metrics to determine the .worst value of each metric, using those as the reported result.

fail
from amltk.optimization import Trial, Metric

loss = Metric("loss", minimize=True, bounds=(0, 1_000))
trial = Trial(name="trial", config={"x": 1}, metrics=[loss])

with trial.begin():
    raise ValueError("This is an error")  # Something went wrong

if trial.exception: # You can check for an exception of the trial here
    report = trial.fail()

print(report.metrics)
print(report)
{'loss': 1000.0}
Trial.Report(trial=Trial(name='trial', config={'x': 1}, bucket=PathBucket(PosixPath('unknown-trial-bucket')), metrics=[Metric(name='loss', minimize=True, bounds=(0.0, 1000.0))], seed=None, fidelities=None, summary={}, exception=ValueError('This is an error'), storage=set(), extras={}), status=<Status.FAIL: 'fail'>, metrics={'loss': 1000.0}, metric_values=(Metric.Value(metric=Metric(name='loss', minimize=True, bounds=(0.0, 1000.0)), value=1000.0),), metric_defs={'loss': Metric(name='loss', minimize=True, bounds=(0.0, 1000.0))}, metric_names=('loss',))
RETURNS DESCRIPTION
Report[I]

The result of the trial.

Source code in src/amltk/optimization/trial.py
def fail(self, **metrics: float | int) -> Trial.Report[I]:
    """Generate a failure report.

    !!! note "Non specifed metrics"

        If you do not specify metrics, this will use
        the [`.metrics`][amltk.optimization.Trial.metrics] to determine
        the [`.worst`][amltk.optimization.Metric.worst] value of the metric,
        using that as the reported result

    ```python exec="true" source="material-block" result="python" title="fail"
    from amltk.optimization import Trial, Metric

    loss = Metric("loss", minimize=True, bounds=(0, 1_000))
    trial = Trial(name="trial", config={"x": 1}, metrics=[loss])

    with trial.begin():
        raise ValueError("This is an error")  # Something went wrong

    if trial.exception: # You can check for an exception of the trial here
        report = trial.fail()

    print(report.metrics)
    print(report)
    ```

    Returns:
        The result of the trial.
    """
    _recorded_values: list[Metric.Value] = []
    for _metric in self.metrics:
        if (raw_value := metrics.get(_metric.name)) is not None:
            _recorded_values.append(_metric.as_value(raw_value))
        else:
            _recorded_values.append(_metric.worst)

    return Trial.Report(
        trial=self,
        status=Trial.Status.FAIL,
        metric_values=tuple(_recorded_values),
    )

profile #

profile(
    name: str,
    *,
    time: (
        Kind | Literal["wall", "cpu", "process"] | None
    ) = None,
    memory_unit: (
        Unit | Literal["B", "KB", "MB", "GB"] | None
    ) = None,
    summary: bool = False
) -> Iterator[None]

Measure some interval in the trial.

The results of the profiling will be available in the .summary attribute with the name of the interval as the key.

profile
from amltk.optimization import Trial
import time

trial = Trial(name="trial", config={"x": 1})

with trial.profile("some_interval"):
    # Do some work
    time.sleep(1)

print(trial.profiler["some_interval"].time)
Timer.Interval(start=1706255424.1720717, end=1706255425.1731663, kind=wall, unit=seconds)
PARAMETER DESCRIPTION
name

The name of the interval.

TYPE: str

time

The timer kind to use for the trial. Defaults to the default timer kind of the profiler.

TYPE: Kind | Literal['wall', 'cpu', 'process'] | None DEFAULT: None

memory_unit

The memory unit to use for the trial. Defaults to the default memory unit of the profiler.

TYPE: Unit | Literal['B', 'KB', 'MB', 'GB'] | None DEFAULT: None

summary

Whether to add the interval to the summary.

TYPE: bool DEFAULT: False

YIELDS DESCRIPTION
Iterator[None]

The interval measured. Values will be nan until the with block is finished.

Source code in src/amltk/optimization/trial.py
@contextmanager
def profile(
    self,
    name: str,
    *,
    time: Timer.Kind | Literal["wall", "cpu", "process"] | None = None,
    memory_unit: Memory.Unit | Literal["B", "KB", "MB", "GB"] | None = None,
    summary: bool = False,
) -> Iterator[None]:
    """Measure some interval in the trial.

    The results of the profiling will be available in the `.summary` attribute
    with the name of the interval as the key.

    ```python exec="true" source="material-block" result="python" title="profile"
    from amltk.optimization import Trial
    import time

    trial = Trial(name="trial", config={"x": 1})

    with trial.profile("some_interval"):
        # Do some work
        time.sleep(1)

    print(trial.profiler["some_interval"].time)
    ```

    Args:
        name: The name of the interval.
        time: The timer kind to use for the trial. Defaults to the default
            timer kind of the profiler.
        memory_unit: The memory unit to use for the trial. Defaults to the
            default memory unit of the profiler.
        summary: Whether to add the interval to the summary.

    Yields:
        The interval measured. Values will be nan until the with block is finished.
    """
    with self.profiler(name=name, memory_unit=memory_unit, time_kind=time):
        yield

    if summary:
        profile = self.profiler[name]
        self.summary.update(profile.to_dict(prefix=name))
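
A small sketch of summary=True, assuming the interval's entries use the same "name:" key prefix as the profiles in .df(); the interval name is illustrative:

from amltk.optimization import Trial

trial = Trial(name="summary-profile-demo", config={"x": 1})

with trial.profile("fit", summary=True):
    pass  # do some work here

# The profiled interval is also copied into the summary, prefixed with its name
print([key for key in trial.summary if key.startswith("fit:")])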

retrieve #

retrieve(
    key: str,
    *,
    where: str | Path | Bucket[str, Any] | None = None,
    check: type[R] | None = None
) -> R | Any

Retrieve items related to the trial.

Same argument for where=

Use the same argument for where= as you did for store().

retrieve
from amltk.optimization import Trial
from amltk.store import PathBucket

bucket = PathBucket("results")

# Create a trial, normally done by an optimizer
trial = Trial(name="trial", config={"x": 1}, bucket=bucket)

trial.store({"config.json": trial.config})
config = trial.retrieve("config.json")

print(config)
{'x': 1}

You could also manually specify where things get stored and retrieved.

retrieve-bucket
from amltk.optimization import Trial
from amltk.store import PathBucket

path = "./config_path"

trial = Trial(name="trial", config={"x": 1})

trial.store({"config.json": trial.config}, where=path)

config = trial.retrieve("config.json", where=path)
print(config)
{'x': 1}
PARAMETER DESCRIPTION
key

The key of the item to retrieve as said in .storage.

TYPE: str

check

If provided, will check that the retrieved item is of the provided type. If not, will raise a TypeError. This is only used if where= is a str, Path or Bucket.

TYPE: type[R] | None DEFAULT: None

where

Where to retrieve the items from.

  • If None, will use the bucket attached to the Trial if any, otherwise it will raise an error.

  • If a str or Path, a bucket will be created at the path, and the items will be retrieved from a sub-bucket with the name of the trial.

  • If a Bucket, will retrieve the items from a sub-bucket with the name of the trial.

TYPE: str | Path | Bucket[str, Any] | None DEFAULT: None

RETURNS DESCRIPTION
R | Any

The retrieved item.

RAISES DESCRIPTION
TypeError

If check= is provided and the retrieved item is not of the provided type.

Source code in src/amltk/optimization/trial.py
def retrieve(
    self,
    key: str,
    *,
    where: str | Path | Bucket[str, Any] | None = None,
    check: type[R] | None = None,
) -> R | Any:
    """Retrieve items related to the trial.

    !!! note "Same argument for `where=`"

         Use the same argument for `where=` as you did for `store()`.

    ```python exec="true" source="material-block" result="python" title="retrieve" hl_lines="7"
    from amltk.optimization import Trial
    from amltk.store import PathBucket

    bucket = PathBucket("results")

    # Create a trial, normally done by an optimizer
    trial = Trial(name="trial", config={"x": 1}, bucket=bucket)

    trial.store({"config.json": trial.config})
    config = trial.retrieve("config.json")

    print(config)
    ```

    You could also manually specify where something get's stored and retrieved

    ```python exec="true" source="material-block" result="python" title="retrieve-bucket" hl_lines="11"

    from amltk.optimization import Trial
    from amltk.store import PathBucket

    path = "./config_path"

    trial = Trial(name="trial", config={"x": 1})

    trial.store({"config.json": trial.config}, where=path)

    config = trial.retrieve("config.json", where=path)
    print(config)
    import shutil; shutil.rmtree(path)  # markdown-exec: hide
    ```

    Args:
        key: The key of the item to retrieve as said in `.storage`.
        check: If provided, will check that the retrieved item is of the
            provided type. If not, will raise a `TypeError`. This
            is only used if `where=` is a `str`, `Path` or `Bucket`.

        where: Where to retrieve the items from.

            * If `None`, will use the bucket attached to the `Trial` if any,
                otherwise it will raise an error.

            * If a `str` or `Path`, will store
            a bucket will be created at the path, and the items will be
            retrieved from a sub-bucket with the name of the trial.

            * If a `Bucket`, will retrieve the items from a sub-bucket with the
            name of the trial.

    Returns:
        The retrieved item.

    Raises:
        TypeError: If `check=` is provided and  the retrieved item is not of the provided
            type.
    """  # noqa: E501
    # If not a Callable, we convert to a path bucket
    method: Bucket[str, Any]
    match where:
        case None:
            method = self.bucket
        case str():
            method = PathBucket(where, create=True)
        case Path():
            method = PathBucket(where, create=True)
        case Bucket():
            method = where

    # Store in a sub-bucket
    return method.sub(self.name)[key].load(check=check)

rich_renderables #

rich_renderables() -> Iterable[RenderableType]

The renderables for rich for this report.

Source code in src/amltk/optimization/trial.py
def rich_renderables(self) -> Iterable[RenderableType]:  # noqa: C901
    """The renderables for rich for this report."""
    from rich.panel import Panel
    from rich.pretty import Pretty
    from rich.table import Table
    from rich.text import Text

    items: list[RenderableType] = []
    table = Table.grid(padding=(0, 1), expand=False)

    # Predfined things
    table.add_row("config", Pretty(self.config))

    if self.fidelities:
        table.add_row("fidelities", Pretty(self.fidelities))

    if any(self.extras):
        table.add_row("extras", Pretty(self.extras))

    if self.seed:
        table.add_row("seed", Pretty(self.seed))

    if self.bucket:
        table.add_row("bucket", Pretty(self.bucket))

    if self.metrics:
        items.append(
            Panel(Pretty(self.metrics), title="Metrics", title_align="left"),
        )

    # Dynamic things
    if self.summary:
        table.add_row("summary", Pretty(self.summary))

    if any(self.storage):
        table.add_row("storage", Pretty(self.storage))

    if self.exception:
        table.add_row("exception", Text(str(self.exception), style="bold red"))

    if self.traceback:
        table.add_row("traceback", Text(self.traceback, style="bold red"))

    for name, profile in self.profiles.items():
        table.add_row("profile:" + name, Pretty(profile))

    items.append(table)

    yield from items

store #

store(
    items: Mapping[str, T],
    *,
    where: (
        str
        | Path
        | Bucket
        | Callable[[str, Mapping[str, T]], None]
        | None
    ) = None
) -> None

Store items related to the trial.

store
from amltk.optimization import Trial
from amltk.store import PathBucket

trial = Trial(name="trial", config={"x": 1}, bucket=PathBucket("results"))
trial.store({"config.json": trial.config})

print(trial.storage)
{'config.json'}

You could also specify exactly where to store the items with where=

store-bucket
from amltk.optimization import Trial

trial = Trial(name="trial", config={"x": 1})
trial.store({"config.json": trial.config}, where="./results")

print(trial.storage)
{'config.json'}
PARAMETER DESCRIPTION
items

The items to store, a dict from the key to store it under to the item itself. If using a str, Path or PathBucket, the keys of the items should be a valid filename, including the correct extension, e.g. {"config.json": trial.config}

TYPE: Mapping[str, T]

where

Where to store the items.

  • If None, will use the bucket attached to the Trial if any, otherwise it will raise an error.

  • If a str or Path, a bucket will be created at the path, and the items will be stored in a sub-bucket with the name of the trial.

  • If a Bucket, will store the items in a sub-bucket with the name of the trial.

  • If a Callable, will call the callable with the name of the trial and the key-valued pair of items to store.

TYPE: str | Path | Bucket | Callable[[str, Mapping[str, T]], None] | None DEFAULT: None

Source code in src/amltk/optimization/trial.py
def store(
    self,
    items: Mapping[str, T],
    *,
    where: (
        str | Path | Bucket | Callable[[str, Mapping[str, T]], None] | None
    ) = None,
) -> None:
    """Store items related to the trial.

    ```python exec="true" source="material-block" result="python" title="store" hl_lines="5"
    from amltk.optimization import Trial
    from amltk.store import PathBucket

    trial = Trial(name="trial", config={"x": 1}, bucket=PathBucket("results"))
    trial.store({"config.json": trial.config})

    print(trial.storage)
    ```

    You could also specify `where=` exactly to store the thing

    ```python exec="true" source="material-block" result="python" title="store-bucket" hl_lines="7"
    from amltk.optimization import Trial

    trial = Trial(name="trial", config={"x": 1})
    trial.store({"config.json": trial.config}, where="./results")

    print(trial.storage)
    ```

    Args:
        items: The items to store, a dict from the key to store it under
            to the item itself.If using a `str`, `Path` or `PathBucket`,
            the keys of the items should be a valid filename, including
            the correct extension. e.g. `#!python {"config.json": trial.config}`

        where: Where to store the items.

            * If `None`, will use the bucket attached to the `Trial` if any,
                otherwise it will raise an error.

            * If a `str` or `Path`, will store
            a bucket will be created at the path, and the items will be
            stored in a sub-bucket with the name of the trial.

            * If a `Bucket`, will store the items **in a sub-bucket** with the
            name of the trial.

            * If a `Callable`, will call the callable with the name of the
            trial and the key-valued pair of items to store.
    """  # noqa: E501
    method: Bucket
    match where:
        case None:
            method = self.bucket
            method.sub(self.name).store(items)
        case str() | Path():
            method = PathBucket(where, create=True)
            method.sub(self.name).store(items)
        case Bucket():
            method = where
            method.sub(self.name).store(items)
        case _:
            # Leave it up to supplied method
            where(self.name, items)

    # Add the keys to storage
    self.storage.update(items.keys())

success #

success(**metrics: float | int) -> Report[I]

Generate a success report.

success
from amltk.optimization import Trial, Metric

loss_metric = Metric("loss", minimize=True)

trial = Trial(name="trial", config={"x": 1}, metrics=[loss_metric])

with trial.begin():
    # Do some work
    report = trial.success(loss=1)

print(report)
Trial.Report(trial=Trial(name='trial', config={'x': 1}, bucket=PathBucket(PosixPath('unknown-trial-bucket')), metrics=[Metric(name='loss', minimize=True, bounds=None)], seed=None, fidelities=None, summary={}, exception=None, storage=set(), extras={}), status=<Status.SUCCESS: 'success'>, metrics={'loss': 1.0}, metric_values=(Metric.Value(metric=Metric(name='loss', minimize=True, bounds=None), value=1.0),), metric_defs={'loss': Metric(name='loss', minimize=True, bounds=None)}, metric_names=('loss',))
PARAMETER DESCRIPTION
**metrics

The metrics of the trial, where the key is the name of the metrics and the value is the metric.

TYPE: float | int DEFAULT: {}

RETURNS DESCRIPTION
Report[I]

The report of the trial.

Source code in src/amltk/optimization/trial.py
def success(self, **metrics: float | int) -> Trial.Report[I]:
    """Generate a success report.

    ```python exec="true" source="material-block" result="python" title="success" hl_lines="7"
    from amltk.optimization import Trial, Metric

    loss_metric = Metric("loss", minimize=True)

    trial = Trial(name="trial", config={"x": 1}, metrics=[loss_metric])

    with trial.begin():
        # Do some work
        report = trial.success(loss=1)

    print(report)
    ```

    Args:
        **metrics: The metrics of the trial, where the key is the name of the
            metrics and the value is the metric.

    Returns:
        The report of the trial.
    """  # noqa: E501
    _recorded_values: list[Metric.Value] = []
    for _metric in self.metrics:
        if (raw_value := metrics.get(_metric.name)) is not None:
            _recorded_values.append(_metric.as_value(raw_value))
        else:
            raise ValueError(
                f"Cannot report success without {self.metrics=}."
                f" Please provide a value for the metric '{_metric.name}'."
                f"\nPlease provide '{_metric.name}' as `trial.success("
                f"{_metric.name}=value)` or rename your metric to"
                f'`Metric(name="{{provided_key}}", minimize={_metric.minimize}, '
                f"bounds={_metric.bounds})`",
            )

    # Need to check if anything extra was reported!
    extra = set(metrics.keys()) - {metric.name for metric in self.metrics}
    if extra:
        raise ValueError(
            f"Cannot report success with extra metrics: {extra=}."
            f"\nOnly {self.metrics=} are allowed.",
        )

    return Trial.Report(
        trial=self,
        status=Trial.Status.SUCCESS,
        metric_values=tuple(_recorded_values),
    )