

Poster

Insights from the Use of Previously Unseen Neural Architecture Search Datasets

Rob Geada · David Towers · Matthew Forshaw · Amir Atapour-Abarghouei · Stephen McGough


Abstract:

The boundless number of possible neural networks that could be used to solve a problem -- each with different performance -- means that a Deep Learning (DL) expert is typically required to identify the best architecture. This runs counter to the hope that DL will remove the need for experts. Neural Architecture Search (NAS) offers a solution by identifying the best architecture automatically. However, to date, NAS work has focused on a small set of datasets, which we argue are not representative of real-world problems. We introduce eight new datasets created for a series of NAS Challenges (further details will be provided after review to maintain anonymity): AddNIST, Language, MultNIST, CIFARTile, Gutenberg, Isabella, GeoClassing, and Chesseract. The datasets, and the challenges they were part of, were developed to direct attention to issues in NAS development and to encourage authors to consider how their models will perform on datasets unknown to them at development time. We present results from experiments using standard Deep Learning methods, as well as the best results achieved by the challenge participants.
