Neural Architecture Search (NAS) is the process of automating architecture engineering, i.e., searching for the best deep learning configuration. One of the main NAS approaches proposed in the literature, Progressive Neural Architecture Search (PNAS), searches for architectures with a sequential model-based optimization strategy: it defines a common recursive structure to generate networks, whose number of building blocks grows across iterations. However, NAS algorithms are generally designed for an ideal setting, without considering the needs and technical constraints imposed by practical applications. In this paper, we propose a new architecture search method named Pareto-Optimal Progressive Neural Architecture Search (POPNAS), which combines the benefits of PNAS with a time-accuracy Pareto optimization. POPNAS adds a time predictor to the existing approach so that time and accuracy are predicted jointly for each candidate neural network, and the search proceeds along the Pareto front. This allows us to trade off accuracy against training time, identifying neural network architectures with competitive accuracy at a drastically reduced training time.
- Danilo Ardagna, Politecnico di Milano, Italy
- Eugenio Lomurno, Politecnico di Milano, Italy
- Matteo Matteucci, Politecnico di Milano, Italy
- Stefano Samele, Politecnico di Milano, Italy
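The search along the Pareto front described in the abstract rests on the notion of Pareto dominance over (training time, accuracy) pairs: a candidate is discarded only if some other candidate is at least as fast and at least as accurate, and strictly better on one of the two objectives. The following is a minimal illustrative sketch of that filtering step; the function name and the candidate tuples are our own, not taken from the paper:

```python
def pareto_front(candidates):
    """Return the non-dominated (training_time, accuracy) pairs.

    A candidate (t, a) is dominated if some other candidate (t2, a2)
    satisfies t2 <= t and a2 >= a, with strict improvement in at
    least one objective.
    """
    front = []
    for t, a in candidates:
        dominated = any(
            t2 <= t and a2 >= a and (t2 < t or a2 > a)
            for t2, a2 in candidates
        )
        if not dominated:
            front.append((t, a))
    return sorted(front)


# Example: (5, 0.85) is dominated by (5, 0.88); the rest survive.
candidates = [(10, 0.90), (5, 0.85), (12, 0.91), (5, 0.88), (20, 0.95)]
print(pareto_front(candidates))
# → [(5, 0.88), (10, 0.90), (12, 0.91), (20, 0.95)]
```

In POPNAS, predicted rather than measured values would populate such a candidate set, with the time predictor supplying the first coordinate and the accuracy predictor the second.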