CARL - Contextually Adaptive Reinforcement Learning
Welcome to the documentation of CARL, a benchmark library for Contextually Adaptive Reinforcement Learning. CARL extends well-known RL environments with context, making them easily configurable to test robustness and generalization.
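To illustrate the core idea of a context-configurable environment, here is a minimal, self-contained sketch. It does not use CARL's actual API; the class and parameter names (`ContextualEnv`, `gravity`, `dt`) are purely illustrative, showing how a context dict can parameterize an environment's transition dynamics for robustness and generalization testing.

```python
# Hypothetical sketch of the "context" idea, NOT CARL's real API:
# a toy environment whose physics parameters come from a context dict,
# so the same agent can be evaluated across different dynamics.

DEFAULT_CONTEXT = {"gravity": 9.8, "dt": 0.02}

class ContextualEnv:
    def __init__(self, context=None):
        # Merge user overrides into the default context.
        self.context = {**DEFAULT_CONTEXT, **(context or {})}
        self.position = 0.0
        self.velocity = 0.0

    def reset(self):
        self.position, self.velocity = 0.0, 0.0
        return self.position

    def step(self, action):
        # The transition function depends on the context, so changing
        # `gravity` changes the environment the agent must generalize to.
        g, dt = self.context["gravity"], self.context["dt"]
        self.velocity += (action - g) * dt
        self.position += self.velocity * dt
        reward = -abs(self.position)  # stay near the origin
        return self.position, reward, False, {}

# Two instances of the "same" environment with different contexts:
env_default = ContextualEnv()
env_low_g = ContextualEnv({"gravity": 3.7})  # e.g. Mars-like gravity
```

An agent trained on `env_default` can then be evaluated on `env_low_g` to probe how well its policy transfers across contexts.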
CARL is being developed in Python 3.9.
Feel free to check out our paper and our blog post on CARL!
Contact
CARL is developed by the team at https://www.automl.org/. If you want to contribute or have found an issue, please visit our GitHub page: https://github.com/automl/CARL.