![Cover image](https://deep-paper.org/en/papers/2025-10/1802.03268/images/cover.png)
ENAS: Making Neural Architecture Search 1000x Faster
Designing a high-performing neural network is often described as a dark art. It requires deep expertise, intuition, and a whole lot of trial and error. What if we could automate this process? This is the promise of Neural Architecture Search (NAS), a field that aims to automatically discover the best network architecture for a given task.

The original NAS paper by Zoph & Le (2017) was a landmark achievement. It used reinforcement learning to discover state-of-the-art architectures for image classification and language modeling, surpassing designs created by human experts. But it came with a colossal price tag: the search process required hundreds of GPUs running for several days. For example, NASNet (Zoph et al., 2018) used 450 GPUs for 3–4 days. This level of computational resources is simply out of reach for most researchers, students, and companies. ...