
Virtual presentation / poster accept

Deep Ranking Ensembles for Hyperparameter Optimization

Abdus Salam Khazi · Sebastian Pineda Arango · Josif Grabocka

Keywords: [ Deep Learning and representational learning ] [ hyperparameter optimization ] [ Deep Ensembles ] [ Ranking Losses ] [ meta-learning ]


Abstract:

Automatically optimizing the hyperparameters of Machine Learning algorithms remains one of the primary open problems in AI. Existing work in Hyperparameter Optimization (HPO) trains surrogate models to approximate the response surface of hyperparameters, treating it as a regression task. In contrast, we hypothesize that the optimal strategy for training surrogates is to preserve the ranking of hyperparameter configurations' performances, casting surrogate training as a Learning to Rank problem. Accordingly, we present a novel method that meta-learns neural network surrogates optimized for ranking configurations' performances, while modeling their uncertainty via ensembling. In a large-scale experimental protocol comprising 12 baselines, 16 HPO search spaces, and 86 datasets/tasks, we demonstrate that our method achieves new state-of-the-art results in HPO.
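To make the idea concrete, below is a minimal, hypothetical sketch (not the authors' implementation) of the two ingredients the abstract describes: a neural surrogate trained with a pairwise ranking loss over observed hyperparameter performances, and an ensemble of such surrogates whose score disagreement serves as an uncertainty signal for acquisition. All class and function names (`Surrogate`, `pairwise_ranking_loss`, `fit_ensemble`, `predict`) are illustrative assumptions, and the loss shown is a generic logistic pairwise ranking loss rather than the specific loss used in the paper.

```python
import torch
import torch.nn as nn

# Hypothetical sketch: an MLP surrogate scores hyperparameter configurations and is
# trained with a pairwise logistic ranking loss instead of a regression loss; an
# ensemble of independently initialized surrogates provides an uncertainty estimate.
class Surrogate(nn.Module):
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):                    # x: (n, dim) hyperparameter configs
        return self.net(x).squeeze(-1)       # latent utility score per config


def pairwise_ranking_loss(scores, y):
    """Logistic loss encouraging score_i > score_j whenever performance y_i > y_j."""
    diff_s = scores.unsqueeze(1) - scores.unsqueeze(0)   # (n, n) score gaps
    diff_y = y.unsqueeze(1) - y.unsqueeze(0)             # (n, n) performance gaps
    mask = (diff_y > 0).float()                          # pairs where i outperforms j
    loss = mask * torch.nn.functional.softplus(-diff_s)  # log(1 + exp(-(s_i - s_j)))
    return loss.sum() / mask.sum().clamp(min=1)


def fit_ensemble(x, y, n_members=5, epochs=200, lr=1e-3):
    """Train n_members surrogates independently on the observed (config, performance) data."""
    members = []
    for _ in range(n_members):
        model = Surrogate(x.shape[-1])
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        for _ in range(epochs):
            opt.zero_grad()
            loss = pairwise_ranking_loss(model(x), y)
            loss.backward()
            opt.step()
        members.append(model)
    return members


def predict(members, x_cand):
    """Mean and std of ranking scores across ensemble members, usable by an acquisition function."""
    with torch.no_grad():
        scores = torch.stack([m(x_cand) for m in members])  # (n_members, n_cand)
    return scores.mean(0), scores.std(0)
```

In an HPO loop under these assumptions, one would refit the ensemble on the observations gathered so far, score candidate configurations with `predict`, and pick the next configuration by trading off the mean score against the ensemble disagreement (e.g., via an upper-confidence-bound style acquisition).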
