# Introduction
## Getting Help
Please use our GitHub and raise an issue at [automl/neps](https://github.com/automl/neps).
## Development Workflow
We use one main branch, `master`, and feature branches for development.
We use pull requests to merge feature branches into `master`.
Versions released to PyPI are tagged with a version number.
Automatic checks are run on every pull request and on every commit to `master`.
## Installation
There are three required steps and one optional:
- Optional: Install miniconda and create an environment
- Install poetry
- Install the neps package using poetry
- Activate pre-commit for the repository
For instructions see below.
### 1. Optional: Install miniconda and create a virtual environment
To manage Python versions, install e.g. miniconda with
```bash
wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh -O install_miniconda.sh
bash install_miniconda.sh -b -p $HOME/.conda  # Change to place of preference
rm install_miniconda.sh
```
Consider running `~/.conda/bin/conda init` or `~/.conda/bin/conda init zsh`.
Then finally create the environment and activate it.
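A minimal sketch; the environment name `neps` and the Python version are placeholders, so pick whatever suits your setup:
```bash
conda create -n neps python=3.10
conda activate neps
```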
### 2. Install poetry
First, install poetry, e.g., via
```bash
curl -sSL https://install.python-poetry.org | python3 -
# or directly into your virtual env using `pip install poetry`
```
Then consider appending the `PATH` export shown below to your `.zshrc` / `.bashrc`, or alternatively simply running it manually.
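On Linux the installer typically places poetry in `~/.local/bin`, so the export is presumably:
```bash
# Adjust the path if the installer reported a different location
export PATH="$HOME/.local/bin:$PATH"
```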
### 3. Install the neps Package Using poetry
Clone the repository, e.g.:
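For example, using the automl/neps repository referenced above:
```bash
git clone https://github.com/automl/neps.git
cd neps
```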
Then, inside the main directory of neps, run:
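With poetry, this is presumably:
```bash
poetry install
```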
This will install the neps package as well as additional dev dependencies.
### 4. Activate pre-commit for the repository
With the Python environment used to install the neps package, run the following in the main directory of neps:
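That is, presumably the standard pre-commit setup command:
```bash
pre-commit install
```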
This installs a set of hooks that will run basic linting and type checking before every commit.
If you ever need to uninstall the hooks, you can do so with `pre-commit uninstall`.
These mostly consist of `ruff` for formatting and linting and `mypy` for type checking.
We highly recommend you install at least `ruff`, either on the command line or in the editor of your choice, e.g. VSCode or PyCharm.
## Checks and Tests
We have set up checks and tests at several points in the development flow:
- At every commit we automatically run a suite of pre-commit hooks that perform static code analysis, autoformatting, and sanity checks. This is set up during our installation process.
- At every commit/push, locally running a minimal suite of integration tests is encouraged. The tests correspond directly to examples in neps_examples and only check for crash-causing errors.
- At every push, all integration tests and regression tests are run automatically using GitHub Actions.
### Linting (Ruff)
For linting we use `ruff` to check code quality. You can install it locally and use it as follows:
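For example (the target path is an assumption; adjust it to what you changed):
```bash
pip install ruff
ruff check neps     # lint
ruff format neps    # format
```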
This will also be run by the `pre-commit` hooks.
To ignore a rule for a specific line, you can add a `# noqa: <rule>` comment at the end of the line, e.g.
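A hypothetical example, silencing the multiple-imports-on-one-line rule (E401) for a single line:
```python
import os, sys  # noqa: E401
```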
The configuration of `ruff` is in the `pyproject.toml` file, and we refer you to the ruff documentation if you require any changes to be made.
There you can find the documentation for all of the rules employed.
### Type Checking (Mypy)
For type checking we use `mypy`. You can install it locally and use it as follows:
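For example (the `neps` package directory is an assumption; check `pyproject.toml` for the project's exact invocation):
```bash
pip install mypy
mypy neps
```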
Types are helpful for making your code more understandable by your editor and tools, allowing them to warn you of potential issues and allowing for safer refactoring. Copilot also works better with types.
To ignore an error you can add `# type: ignore` at the end of the line, e.g.
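A hypothetical example; depending on the installed numpy stubs, mypy may flag the assignment below:
```python
import numpy as np

# mypy infers a numpy floating type here, not the builtin float;
# a targeted ignore keeps the rest of the file checked.
mean: float = np.array([1.0, 2.0]).mean()  # type: ignore
```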
A common place to ignore types is when dealing with numpy arrays, tensors, and pandas, where the type checker cannot be sure of the return type.
In the worst case, please just use `Any` and move on with your life; the type checker is meant to help you catch bugs, not hinder you. However, it will take some experience to know when it's trying to tell you something useful vs. something it just cannot infer properly. A good rule of thumb is that if you're only dealing with simple native Python types or types defined by NePS, there is probably a good reason for a mypy error.
If you have issues regarding typing, please feel free to reach out for help to @eddiebergman.
### Examples and Integration Tests
We use the examples in `neps_examples` as integration tests, which we run from the main directory via:
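Presumably the plain pytest invocation; any default markers or options would live in `pyproject.toml`:
```bash
pytest
```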
If tests fail for you on the master, please raise an issue on GitHub, preferably with some information on the error, the traceback, and the environment in which you are running, i.e. Python version, OS, etc.
### Regression Tests
Regression tests are run on each push to the repository to ensure that the performance of the optimizers doesn't degrade.
Currently, regression runs are recorded on JAHS-Bench-201 data for 2 tasks, `cifar10` and `fashion_mnist`, and only for the optimizers `random_search`, `bayesian_optimization`, and `mf_bayesian_optimization`.
This information is stored in `tests/regression_runner.py` as two lists: `TASKS` and `OPTIMIZERS`.
The recorded results are stored as a json dictionary in the `tests/losses.json` file.
#### Adding new optimizer algorithms
Once a new algorithm is added to the NePS library, we first need to record the performance of the algorithm over 100 optimization runs.
- If the algorithm expects a standard loss function (pipeline) and accepts fidelity hyperparameters in the pipeline space, then recording results only requires adding the optimizer name to the `OPTIMIZERS` list in `tests/regression_runner.py` and running `tests/regression_runner.py` (see the sketch after this list).
- In case your algorithm requires a custom pipeline and/or pipeline space, you can modify the `runner.run_pipeline` and `runner.pipeline_space` attributes of the `RegressionRunner` after initialization (around line #322 in `tests/regression_runner.py`).
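A minimal sketch of the first case; the existing entries mirror the optimizers named above, and the new entry is hypothetical:
```python
# tests/regression_runner.py (sketch)
OPTIMIZERS = [
    "random_search",
    "bayesian_optimization",
    "mf_bayesian_optimization",
    "my_new_optimizer",  # hypothetical: the name of your new optimizer
]
```
Then record the runs, presumably by executing the script from the repository root:
```bash
python tests/regression_runner.py
```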
You can verify the optimizer is recorded by rerunning `regression_runner.py`.
From then on, the regression tests will also run on your new optimizer on every push.
#### Regression test metrics
For each regression test the algorithm is run 10 times to sample its performance, and the samples are then statistically compared to the 100 recorded runs. We use these 3 boolean metrics to define the performance of the algorithm on any task:
- Kolmogorov-Smirnov test for goodness of fit - `pvalue` >= 10%
- Absolute median distance - bounded within the 92.5% confidence range of the expected median distance
- Median improvement - median improvement over the recorded median
Test metrics are run for each `(optimizer, task)` combination separately and then collected.
The collected metrics are then further combined into 2 metrics:
- Task pass - either both the `Kolmogorov-Smirnov test` and the `Absolute median distance` tests pass, or just the `Median improvement`
- Test aggregate - `Sum_over_tasks(Kolmogorov-Smirnov test + Absolute median distance + 2 * Median improvement)`
Finally, a test for an optimizer only passes when, for at least one of the tasks, `Task pass` is true and `Test aggregate` is higher than `1 + number of tasks`.
#### On regression test failures
Regression tests are stochastic by nature, so they might fail occasionally even if the algorithm's performance didn't degrade. In the case of a regression test failure, try running it again first; if the problem persists, you can contact Danny Stoll or Samir. You can also run the tests locally by running:
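Presumably via pytest with the regression marker; the marker name here is an assumption, so check `pyproject.toml` for the markers actually registered:
```bash
pytest -m regression_all
```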
## Disabling and Skipping Checks etc.
### Pre-commit: How to not run hooks?
To commit without running `pre-commit`, use `git commit --no-verify -m <COMMIT MESSAGE>`.
### Mypy: How to ignore warnings?
There are two options:
- Disable the warning locally with a `# type: ignore` comment, as shown below
- Configure mypy for the whole project, e.g. in `pyproject.toml`
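A hypothetical example of the local option, ideally scoped to a specific error code:
```python
def parse(raw: str) -> int | str:
    """Hypothetical helper: returns an int for digits, the raw string otherwise."""
    return int(raw) if raw.isdigit() else raw

# Without the ignore, mypy reports an incompatible assignment (int | str -> int)
n: int = parse("42")  # type: ignore[assignment]
```
For the project-wide option, mypy reads its settings from a `[tool.mypy]` section in `pyproject.toml`.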
## Managing Dependencies
To manage dependencies and for package distribution we use poetry (which replaces pip).
### Add dependencies
To install a dependency use:
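Presumably poetry's add command, with `<dependency>` as a placeholder:
```bash
poetry add <dependency>
```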
and commit the updated `pyproject.toml` to git.
For more advanced dependency management see the examples in `pyproject.toml` or have a look at the poetry documentation.
### Install dependencies added by others
When other contributors have added dependencies to `pyproject.toml`, you can install them via:
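Presumably poetry's standard install command, which syncs your environment with `pyproject.toml`:
```bash
poetry install
```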
## Documentation
We use MkDocs, more specifically Material for MkDocs, for documentation. To support documentation for multiple versions, we use the plugin mike.
Source files for the documentation are under `/docs` and the configuration is at `mkdocs.yml`.
To build and view the documentation run:
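That is, mike's local preview server (referenced just below):
```bash
mike serve
```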
and open the URL shown by the `mike serve` command.
To publish the documentation run:
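Presumably mike's deploy command; the version `0.x.y` and alias `latest` are placeholders, and the exact flags used in this repository may differ:
```bash
mike deploy --push --update-aliases 0.x.y latest
```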
## Releasing a New Version
There are four steps to releasing a new version of neps, preceded by a step zero:
- Understand Semantic Versioning
- Update the Package Version and CITATION.cff
- Commit and Push With a Version Tag
- Update Documentation
- Publish on PyPI
### 0. Understand Semantic Versioning
We follow the semantic versioning scheme.
### 1. Update the Package Version and CITATION.cff
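Presumably via poetry's version bump (pick the bump rule or an explicit version as appropriate):
```bash
poetry version minor  # or: patch / major / an explicit version such as 0.12.0
```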
and manually change the version specified in `CITATION.cff`.
### 2. Commit with a Version Tag
First commit and test:
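A hedged sketch; the commit message and test invocation are illustrative:
```bash
git add pyproject.toml CITATION.cff
git commit -m "Bump version to v0.x.y"
pytest
```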
Then tag and push:
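For example, with `v0.x.y` as a placeholder for the new version:
```bash
git tag v0.x.y
git push && git push --tags
```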
### 3. Update Documentation
First check the documentation for issues by serving it locally:
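Presumably the same local preview as in the Documentation section:
```bash
mike serve
```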
Then look over it in the browser.
Afterwards, publish it via:
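Again presumably via mike, with placeholders for the version and alias:
```bash
mike deploy --push --update-aliases 0.x.y latest
```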
### 4. Publish on PyPI
To publish to PyPI:
- Get publishing rights, e.g., by asking Danny, Maciej, or Neeratyoy.
- Be careful: once a release is on PyPI, we cannot change it.
- Run:
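Presumably poetry's publish command, which builds and uploads the package:
```bash
poetry publish --build
```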
This will ask for your PyPI credentials.