.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "examples/40_advanced/example_debug_logging.py"
.. LINE NUMBERS ARE GIVEN BELOW.

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_examples_40_advanced_example_debug_logging.py:

=====================
Logging and debugging
=====================

This example shows how to provide a custom logging configuration to
*auto-sklearn*. We will fit two pipelines and print any INFO-level message to
the console. Even if you do not provide a `logging_config`, *auto-sklearn*
creates a log file in its temporary working directory. This directory can be
specified via the `tmp_folder` argument, as shown below.

This example also highlights additional information about the *auto-sklearn*
internal directory structure.

.. GENERATED FROM PYTHON SOURCE LINES 16-25

.. code-block:: default


    import pathlib

    import sklearn.datasets
    import sklearn.metrics
    import sklearn.model_selection

    import autosklearn.classification


.. GENERATED FROM PYTHON SOURCE LINES 26-29

Data Loading
============
Load the kr-vs-kp dataset from https://www.openml.org/d/3

.. GENERATED FROM PYTHON SOURCE LINES 29-36

.. code-block:: default


    X, y = sklearn.datasets.fetch_openml(data_id=3, return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = sklearn.model_selection.train_test_split(
        X, y, random_state=1
    )


.. GENERATED FROM PYTHON SOURCE LINES 37-42

Create a logging config
=======================
*auto-sklearn* uses a default logging configuration. We will instead create
a custom one as follows:

.. GENERATED FROM PYTHON SOURCE LINES 42-74

.. code-block:: default


    logging_config = {
        "version": 1,
        "disable_existing_loggers": True,
        "formatters": {
            "custom": {
                # More format options are available in the official
                # Python logging documentation
                "format": "%(asctime)s - %(name)s - %(levelname)s - %(message)s"
            }
        },
        # Any INFO-level message will be printed to the console
        "handlers": {
            "console": {
                "level": "INFO",
                "formatter": "custom",
                "class": "logging.StreamHandler",
                "stream": "ext://sys.stdout",
            },
        },
        "loggers": {
            "": {  # root logger
                "level": "DEBUG",
            },
            "Client-EnsembleBuilder": {
                "level": "DEBUG",
                "handlers": ["console"],
            },
        },
    }
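The dictionary above follows the standard ``logging.config.dictConfig``
schema, so it can be extended with any handler the standard library supports.
Below is a minimal sketch (not part of the original example) of a variant that
also writes DEBUG messages to a file; the file name ``custom_debug.log`` is
only a placeholder.

.. code-block:: default


    import copy
    import logging.config

    # Sketch: a copy of the config that additionally logs DEBUG messages
    # to a file. "custom_debug.log" is a placeholder name, not something
    # auto-sklearn expects.
    file_logging_config = copy.deepcopy(logging_config)
    file_logging_config["handlers"]["file"] = {
        "level": "DEBUG",
        "formatter": "custom",
        "class": "logging.FileHandler",
        "filename": "custom_debug.log",
    }
    file_logging_config["loggers"][""]["handlers"] = ["file"]

    # The standard library can validate the dictionary up front;
    # logging.config.dictConfig raises ValueError for a malformed schema.
    logging.config.dictConfig(file_logging_config)

Passing ``file_logging_config`` instead of ``logging_config`` to the
classifier below would additionally write DEBUG-level records from all
loggers to ``custom_debug.log``.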
.. GENERATED FROM PYTHON SOURCE LINES 75-77

Build and fit a classifier
==========================

.. GENERATED FROM PYTHON SOURCE LINES 77-105

.. code-block:: default


    cls = autosklearn.classification.AutoSklearnClassifier(
        time_left_for_this_task=30,
        # The two flags below are provided to speed up calculations.
        # They are not recommended for a real application.
        initial_configurations_via_metalearning=0,
        smac_scenario_args={"runcount_limit": 2},
        # Pass the logging configuration we created above
        logging_config=logging_config,
        # *auto-sklearn* generates temporary files under tmp_folder
        tmp_folder="./tmp_folder",
        # By default tmp_folder is deleted after the run. We preserve it
        # for debugging purposes.
        delete_tmp_folder_after_terminate=False,
    )
    cls.fit(X_train, y_train, X_test, y_test)

    # *auto-sklearn* generates intermediate files which can be of interest:
    #
    # * Dask multiprocessing information, useful on multi-core runs:
    #   tmp_folder/distributed.log
    # * The individual fitted estimators are written to disk in:
    #   tmp_folder/.auto-sklearn/runs
    # * SMAC output is stored in this directory (for more information,
    #   check the SMAC documentation):
    #   tmp_folder/smac3-output
    # * *auto-sklearn* always writes its log messages to:
    #   tmp_folder/AutoML*.log
    for filename in pathlib.Path("./tmp_folder").glob("*"):
        print(filename)


.. rst-class:: sphx-glr-script-out

.. code-block:: none

    Fitting to the training data: 0%| | 0/30 [00:00
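Because ``delete_tmp_folder_after_terminate=False`` keeps the temporary
directory around, the ``AutoML*.log`` file can be inspected after the run.
Below is a minimal sketch (not part of the generated example) that scans it
for WARNING and ERROR records, usually the first thing to check when a run
misbehaves.

.. code-block:: default


    import pathlib

    # Sketch: print WARNING/ERROR records from the auto-sklearn log files
    # left behind in tmp_folder.
    for log_file in sorted(pathlib.Path("./tmp_folder").glob("AutoML*.log")):
        print(f"==== {log_file} ====")
        with log_file.open() as fh:
            for line in fh:
                if "WARNING" in line or "ERROR" in line:
                    print(line.rstrip())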