Project: AutoML: scalable meta-learning and continual learning for Bayesian Optimization

Description

Bayesian Optimization is often used in automated machine learning (AutoML) to decide which models to evaluate next. It works by fitting a 'surrogate model' to the models tried so far, which then predicts which candidates are most promising to try next. In all current AutoML systems this surrogate is discarded afterwards: even when the next dataset is similar, the whole search starts from scratch.
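To make this concrete, below is a minimal sketch of such a Bayesian Optimization loop in Python, using scikit-learn's GaussianProcessRegressor as the surrogate and expected improvement to pick the next candidate. The objective `evaluate_model` and the one-dimensional candidate grid are hypothetical stand-ins for training a model configuration and measuring its validation error.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def evaluate_model(x):
    # Hypothetical objective: validation error of a model with hyperparameter x.
    return np.sin(3 * x) + 0.1 * x ** 2

candidates = np.linspace(-3, 3, 200).reshape(-1, 1)
X = candidates[np.random.choice(len(candidates), 3, replace=False)]
y = np.array([evaluate_model(x[0]) for x in X])

surrogate = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
for _ in range(10):
    surrogate.fit(X, y)  # refit the surrogate on all models tried so far
    mu, sigma = surrogate.predict(candidates, return_std=True)
    best = y.min()
    # Expected improvement (for minimization): how much each candidate
    # is expected to improve on the incumbent best score.
    z = (best - mu) / np.clip(sigma, 1e-9, None)
    ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)
    x_next = candidates[np.argmax(ei)]
    X = np.vstack([X, x_next])
    y = np.append(y, evaluate_model(x_next[0]))
```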

Recent work proposed an effective way to leverage prior experience by storing prior surrogate models and reusing them to make predictions for new tasks: https://arxiv.org/abs/1802.02219. However, storing many such models can be very memory-intensive.
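The linked paper (RGPE) combines stored GP surrogates into a ranking-weighted ensemble. The sketch below keeps only its core idea, in a deliberately simplified form: each stored surrogate is weighted by how many pairs of the new task's observations it ranks correctly, and its predictions are mixed with those of the new task's own surrogate. All names are illustrative, and models are assumed to expose a scikit-learn-style `predict`.

```python
import numpy as np

def ranking_weight(model, X_new, y_new):
    # Count correctly ordered pairs: does this stored surrogate rank the
    # new task's observations the same way their true scores do?
    mu = model.predict(X_new)
    correct = sum(
        (mu[i] < mu[j]) == (y_new[i] < y_new[j])
        for i in range(len(y_new))
        for j in range(i + 1, len(y_new))
    )
    return correct + 1e-6  # avoid all-zero weights for tiny samples

def ensemble_predict(prior_models, target_model, X_cand, X_new, y_new):
    models = list(prior_models) + [target_model]
    weights = np.array([ranking_weight(m, X_new, y_new) for m in models])
    weights = weights / weights.sum()
    preds = np.stack([m.predict(X_cand) for m in models])  # (k, n_candidates)
    return weights @ preds  # weighted mean prediction over all surrogates
```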

This assignment aims to resolve this issue by storing these models more efficiently. For instance, when Gaussian Processes are used, we can use sparse GPs instead, which only require a set of 'inducing points' to be stored. Many other ideas can certainly be explored as well. The end goal is to build an AutoML system that can leverage experience from hundreds or even thousands of prior tasks.
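As a concrete illustration of the memory saving, the sketch below implements the subset-of-regressors (SoR) inducing-point approximation in plain NumPy (a library such as GPflow could equally be used). After fitting, only the m inducing points and an m-dimensional weight vector need to be stored for later predictions, rather than all n observations; the kernel, sizes, and data here are illustrative.

```python
import numpy as np

def rbf(A, B, lengthscale=1.0):
    # Squared-exponential kernel matrix between row sets A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def fit_sparse_gp(X, y, Z, noise=0.1):
    # SoR posterior weights: alpha = (Kmm + Kmn Knm / s^2)^-1 Kmn y / s^2
    Kmm = rbf(Z, Z)
    Kmn = rbf(Z, X)
    A = Kmm + Kmn @ Kmn.T / noise ** 2 + 1e-8 * np.eye(len(Z))
    alpha = np.linalg.solve(A, Kmn @ y) / noise ** 2
    return Z, alpha  # everything needed to make predictions later

def predict_mean(x_star, Z, alpha):
    return rbf(x_star, Z) @ alpha

# n = 5000 observations compressed into m = 20 inducing points.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(5000, 1))
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(5000)
Z = np.linspace(-3, 3, 20).reshape(-1, 1)
Z, alpha = fit_sparse_gp(X, y, Z)
print(predict_mean(np.array([[0.5]]), Z, alpha))  # close to sin(0.5)
```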

Details
Supervisor
Joaquin Vanschoren
Interested?
Get in contact