Scatter plots show the costs of the default and the optimized parameter configuration on each instance. Aggregated cost values in tables lose the detailed information about the cost on each individual instance, so scatter plots provide a more detailed picture. They reveal whether overall performance improvements can be explained by only a few outliers or whether they stem from improvements across the entire instance set. The left plot shows the training data, the right plot the test data.
Creates a scatter plot of the two configurations on the given set of instances and saves the plot to a file.
_plot_scatter(default: ConfigSpace.configuration_space.Configuration, incumbent: ConfigSpace.configuration_space.Configuration, rh: smac.runhistory.runhistory.RunHistory, train: List[str], test: Optional[List[str]], run_obj: str, cutoff, output_dir)
default, incumbent (Configuration) – configurations to be compared
rh (RunHistory) – runhistory to use for cost-estimations
train[, test] (List[str]) – instance names
run_obj (str) – run-objective (time or quality)
cutoff (float) – maximum runtime of the target algorithm
output_dir (str) – output directory
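As a rough illustration of what the scatter plot encodes, each instance contributes one point whose x- and y-coordinates are the default and incumbent cost; points below the diagonal are instances on which the optimized configuration is better. The sketch below uses made-up per-instance costs (real values would come from a RunHistory) and is not CAVE's actual implementation:

```python
# Hypothetical per-instance costs: instance -> (default_cost, incumbent_cost).
# In CAVE these would be estimated from a smac RunHistory; the numbers here
# are illustrative only.
train_costs = {
    "inst-01": (12.0, 4.5),
    "inst-02": (8.0, 7.0),
    "inst-03": (30.0, 31.0),  # slight degradation on this instance
    "inst-04": (20.0, 6.0),
}

def scatter_points(costs):
    """One (x, y) point per instance for a default-vs-incumbent scatter."""
    return list(costs.values())

def improvement_summary(costs):
    """Count points below/above the diagonal, to judge whether gains come
    from the whole instance set or from a few outliers."""
    improved = sum(1 for d, i in costs.values() if i < d)
    degraded = sum(1 for d, i in costs.values() if i > d)
    return improved, degraded

points = scatter_points(train_costs)
improved, degraded = improvement_summary(train_costs)
print(improved, degraded)  # 3 instances improved, 1 degraded
```

If most points lie below the diagonal, the improvement is broad; if only one or two do while the rest sit on it, the aggregate gain is driven by outliers.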
get_html(d=None, tooltip=None) → Tuple[str, str]
General reports in html-format, to be easily integrated into html-code. Also used for bokeh output.
d (Dictionary) – a dictionary that will later be turned into a website
- Returns
script, div – header and body part of the html-code
- Return type
Tuple[str, str]
Depending on the analysis, this creates jupyter-notebook compatible output. This function needs to be called if bokeh-plots are to be displayed in a notebook and saved to a webpage.
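Since the function returns the header (script) and body (div) parts of the html-code separately, embedding a report into a page is a matter of string assembly. The sketch below shows one way to do this; the surrounding template and the sample script/div strings are assumptions, not CAVE output:

```python
def embed_report(script: str, div: str, title: str = "Report") -> str:
    """Assemble a standalone HTML page from a (script, div) pair such as
    the one returned by get_html(). The template is illustrative only."""
    return (
        "<!DOCTYPE html>\n"
        f"<html><head><title>{title}</title>\n"
        f"{script}\n"  # script/resources belong in the head
        "</head><body>\n"
        f"{div}\n"     # the report body is dropped into <body>
        "</body></html>"
    )

# Hypothetical (script, div) pair standing in for real output:
script = "<script>console.log('plot setup');</script>"
div = "<div id='scatter'>...</div>"
page = embed_report(script, div, title="Scatter plot")
```

Keeping script and div separate lets several reports share one page: the scripts are concatenated into the head while each div is placed wherever it belongs in the body.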