Stochastic Gradient Descent On Multiple Datasets

Example of optimizing a Stochastic Gradient Descent (SGD) classifier across multiple (dataset) instances.

As an alternative to budgets, we here consider instances as the fidelity type (without loss of generality). An instance represents a specific scenario or condition for the algorithm to run on (e.g., a different dataset, subset, or transformation). SMAC then returns the configuration that performed best across all instances. In this example, an instance is a binary classification dataset, e.g., digit-2 vs. digit-3.

If we use instances as our fidelity, we need to initialize the scenario with the `instances` argument. In that case, the target function no longer receives a `budget` argument; instead, it is required to accept an `instance` argument.
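
As a minimal sketch (the names here are illustrative; the full example below uses the `train` method of an `SGD` wrapper class):

def target_function(config: Configuration, instance: str, seed: int = 0) -> float:
    # Evaluate the configuration on the given instance (e.g. the binary task "2-3")
    # and return the cost to be minimized.
    ...

scenario = Scenario(
    configspace,
    instances=["0-1", "0-2", "2-3"],  # each instance names one binary dataset
)

Running the example produces output like the following: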

[WARNING][successive_halving.py:123] The target function is specified to be non-deterministic, but number of seeds to evaluate are set to 1. Consider increasing `n_seeds` from the intensifier.
[INFO][successive_halving.py:197] Using successive halving with budget type INSTANCES, min budget 1, max budget 45 and eta 3.
[INFO][successive_halving.py:217] The argument `incumbent_selection` is ignored because instances are used as budget type.
[INFO][abstract_initial_design.py:133] Using 40 initial design and 0 additional configurations.
[WARNING][abstract_parallel_intensifier.py:93] Hyperband is executed with 1 worker(s) only. However, your system supports up to 2 workers. Consider increasing the workers in the scenario.
[INFO][hyperband_worker.py:165] Finished Hyperband iteration-step 1-1 with initial budget 1.
[INFO][successive_halving_worker.py:396] First run and no incumbent provided. Challenger is assumed to be the incumbent.
[INFO][abstract_intensifier.py:340] Challenger (0.0137) is better than incumbent (0.0164) on 1 trials.
[INFO][abstract_intensifier.py:364] Changes in incumbent:
[INFO][abstract_intensifier.py:367] --- alpha: 0.5488135039273248 -> 0.11827442586893322
[INFO][abstract_intensifier.py:367] --- eta0: 0.0006273927602293597 -> 9.614170248896958e-05
[INFO][abstract_intensifier.py:367] --- l1_ratio: 0.317983179393976 -> 0.24875314351995803
[INFO][successive_halving_worker.py:245] Finished Successive Halving iteration-step 1-1 with budget [1.67 / 45] and 27 evaluated challenger(s).
[INFO][successive_halving_worker.py:245] Finished Successive Halving iteration-step 1-2 with budget [5.00 / 45] and 9 evaluated challenger(s).
[INFO][successive_halving_worker.py:245] Finished Successive Halving iteration-step 1-3 with budget [15.00 / 45] and 3 evaluated challenger(s).
[INFO][successive_halving_worker.py:245] Finished Successive Halving iteration-step 1-4 with budget [45.00 / 45] and 1 evaluated challenger(s).
[INFO][hyperband_worker.py:165] Finished Hyperband iteration-step 1-2 with initial budget 5.
[INFO][successive_halving_worker.py:245] Finished Successive Halving iteration-step 1-1 with budget [5.00 / 45] and 9 evaluated challenger(s).
[INFO][successive_halving_worker.py:245] Finished Successive Halving iteration-step 1-2 with budget [15.00 / 45] and 3 evaluated challenger(s).
[INFO][successive_halving_worker.py:245] Finished Successive Halving iteration-step 1-3 with budget [45.00 / 45] and 1 evaluated challenger(s).
[INFO][hyperband_worker.py:165] Finished Hyperband iteration-step 1-3 with initial budget 15.
[INFO][base_smbo.py:260] Configuration budget is exhausted.
[INFO][abstract_facade.py:325] Final Incumbent: {'alpha': 0.11827442586893322, 'eta0': 9.614170248896958e-05, 'l1_ratio': 0.24875314351995803, 'learning_rate': 'adaptive'}
[INFO][abstract_facade.py:326] Estimated cost: 0.008231389269086701
Default cost: 0.13423167867113728
Incumbent cost: 0.007983889335257179

from __future__ import annotations

import itertools
import warnings

import numpy as np
from ConfigSpace import Categorical, Configuration, ConfigurationSpace, Float
from sklearn import datasets
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

from smac import MultiFidelityFacade, Scenario

__copyright__ = "Copyright 2021, AutoML.org Freiburg-Hannover"
__license__ = "3-clause BSD"


class DigitsDataset:
    def __init__(self) -> None:
        self._data = datasets.load_digits()

    def get_instances(self) -> list[str]:
        """Create instances from the dataset which include two classes only."""
        return [f"{classA}-{classB}" for classA, classB in itertools.combinations(self._data.target_names, 2)]

    def get_instance_features(self) -> dict[str, list[int | float]]:
        """Returns the mean and variance of all instances as features."""
        features = {}
        for instance in self.get_instances():
            data, _ = self.get_instance_data(instance)
            features[instance] = [np.mean(data), np.var(data)]

        return features

    def get_instance_data(self, instance: str) -> tuple[np.ndarray, np.ndarray]:
        """Retrieve data from the passed instance."""
        # We split the dataset into two classes
        classA, classB = instance.split("-")
        indices = np.where(np.logical_or(int(classA) == self._data.target, int(classB) == self._data.target))

        data = self._data.data[indices]
        target = self._data.target[indices]

        return data, target


class SGD:
    def __init__(self, dataset: DigitsDataset) -> None:
        self.dataset = dataset

    @property
    def configspace(self) -> ConfigurationSpace:
        """Build the configuration space which defines all parameters and their ranges for the SGD classifier."""
        cs = ConfigurationSpace()

        # We define a few possible parameters for the SGD classifier
        alpha = Float("alpha", (0, 1), default=1.0)
        l1_ratio = Float("l1_ratio", (0, 1), default=0.5)
        learning_rate = Categorical("learning_rate", ["constant", "invscaling", "adaptive"], default="constant")
        eta0 = Float("eta0", (0.00001, 1), default=0.1, log=True)
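        # log=True samples eta0 on a logarithmic scale, which suits learning rates spanning orders of magnitude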
        # Add the parameters to configuration space
        cs.add_hyperparameters([alpha, l1_ratio, learning_rate, eta0])

        return cs

    def train(self, config: Configuration, instance: str, seed: int = 0) -> float:
        """Creates a SGD classifier based on a configuration and evaluates it on the
        digits dataset using cross-validation."""

        with warnings.catch_warnings():
            warnings.filterwarnings("ignore")

            # SGD classifier using given configuration
            clf = SGDClassifier(
                loss="log",
                penalty="elasticnet",
                alpha=config["alpha"],
                l1_ratio=config["l1_ratio"],
                learning_rate=config["learning_rate"],
                eta0=config["eta0"],
                max_iter=30,
                early_stopping=True,
                random_state=seed,
            )

            # Get the data of the requested instance
            data, target = self.dataset.get_instance_data(instance)

            cv = StratifiedKFold(n_splits=4, random_state=seed, shuffle=True)  # to make CV splits consistent
            scores = cross_val_score(clf, data, target, cv=cv)

        return 1 - np.mean(scores)  # SMAC minimizes cost, so we return the misclassification rate


if __name__ == "__main__":
    dataset = DigitsDataset()
    model = SGD(dataset)

    scenario = Scenario(
        model.configspace,
        walltime_limit=30,  # We want to optimize for 30 seconds
        n_trials=5000,  # We want to try max 5000 different trials
        min_budget=1,  # Use min one instance
        max_budget=45,  # Use max 45 instances (with many instances, we could constrain the number used here)
        instances=dataset.get_instances(),
        instance_features=dataset.get_instance_features(),
    )

    # Create our SMAC object and pass the scenario and the train method
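    # With instances as the fidelity, the MultiFidelityFacade's Hyperband intensifier races
    # configurations on a growing number of instances (cf. the log output above)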
    smac = MultiFidelityFacade(
        scenario,
        model.train,
        overwrite=True,
    )

    # Now we start the optimization process
    incumbent = smac.optimize()

    default_cost = smac.validate(model.configspace.get_default_configuration())
    print(f"Default cost: {default_cost}")

    incumbent_cost = smac.validate(incumbent)
    print(f"Incumbent cost: {incumbent_cost}")

Total running time of the script: ( 0 minutes 37.318 seconds)