Designing state-of-the-art deep learning models is an incredibly complex challenge that researchers have been tackling using an approach called Neural Architecture Search (NAS). The goal of NAS is to automate the discovery of optimal neural network architectures for a given task by evaluating thousands of candidate architectures against a performance metric like accuracy on a validation dataset.
However, previous NAS methods faced significant bottlenecks due to the need to extensively train each candidate architecture, making the process extremely computationally expensive and time-consuming. Researchers have proposed various techniques, such as weight sharing, differentiable search spaces, and predictor-based methods, to accelerate NAS, but computational complexity remained a major hurdle.
This paper presents NASGraph (shown in Figure 1), an innovative method that drastically reduces the computational burden of neural architecture search. Instead of fully training each candidate architecture, NASGraph converts them into graph representations and uses graph metrics to estimate their performance efficiently.
Specifically, the neural network is first split into graph blocks containing layers such as convolutions and activations. For each block, the technique measures how strongly each input channel contributes to each output channel using a single forward pass. The inputs are mapped to nodes, and these measured contributions become the weighted edges of the graph representation.
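To make this step concrete, here is a minimal sketch in PyTorch of one way to estimate such channel-to-channel contributions with a single batched forward pass. The block definition, the one-hot channel probes, and scoring by mean activation magnitude are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn

# Illustrative sketch (not the authors' code): estimate how strongly each input
# channel of a graph block contributes to each output channel with one batched
# forward pass, where sample i activates only input channel i.
def channel_contributions(block: nn.Module, in_channels: int, spatial: int = 8) -> torch.Tensor:
    block.eval()
    probes = torch.zeros(in_channels, in_channels, spatial, spatial)
    for i in range(in_channels):
        probes[i, i] = 1.0                       # sample i probes input channel i
    with torch.no_grad():
        out = block(probes)                      # shape: (in_channels, out_channels, H, W)
    # Average activation magnitude per (input channel, output channel) pair:
    return out.abs().mean(dim=(2, 3))            # weighted edges, shape (C_in, C_out)

# A toy graph block: convolution followed by an activation, as described above.
block = nn.Sequential(nn.Conv2d(4, 8, kernel_size=3, padding=1), nn.ReLU())
edges = channel_contributions(block, in_channels=4)
print(edges.shape)  # torch.Size([4, 8]) -> weights from input nodes to output nodes
```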
Once the architecture is represented as a graph, NASGraph computes its average degree (the average number of connections per node) and uses it as a proxy for ranking architecture quality. To accelerate the process further, the researchers also introduce surrogate models with reduced computational requirements.
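The graph construction and scoring can be sketched with networkx as follows; the edge threshold and the random contribution matrix are placeholders for illustration rather than values from the paper.

```python
import networkx as nx
import numpy as np

# Sketch: turn a block's (input channel x output channel) contribution matrix into a
# directed graph and score the architecture by its average degree.
def average_degree(edge_weights: np.ndarray, threshold: float = 1e-3) -> float:
    n_in, n_out = edge_weights.shape
    g = nx.DiGraph()
    g.add_nodes_from(range(n_in + n_out))
    for i in range(n_in):
        for j in range(n_out):
            if edge_weights[i, j] > threshold:   # keep only non-negligible contributions
                g.add_edge(i, n_in + j)
    return sum(dict(g.degree()).values()) / g.number_of_nodes()

contributions = np.random.rand(4, 8)             # e.g. the matrix from the sketch above
print(f"average degree proxy score: {average_degree(contributions):.2f}")
```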
These NASGraph(h, c, m) surrogate models use fewer channels h, fewer search cells c per module, and fewer modules m. A systematic study following the convention of EcoNAS shows that such computationally reduced settings trade a small amount of ranking accuracy for significant speedups.
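As a rough illustration of the NASGraph(h, c, m) notation, a surrogate configuration might be written as below; the specific numbers are hypothetical examples, not the settings reported in the paper.

```python
from dataclasses import dataclass

# Hypothetical illustration of the NASGraph(h, c, m) surrogate notation: a smaller
# proxy model defined by its channel count, cells per module, and module count.
@dataclass
class SurrogateConfig:
    channels: int   # h
    cells: int      # c, search cells per module
    modules: int    # m

full_model = SurrogateConfig(channels=16, cells=5, modules=3)  # illustrative full setting
surrogate = SurrogateConfig(channels=4, cells=1, modules=1)    # reduced NASGraph(4, 1, 1)
```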
To evaluate NASGraph, the team tested it on multiple NAS benchmarks, including NAS-Bench-101, NAS-Bench-201, and TransNAS-Bench-101, comparing the rankings produced by the average degree metric against the ground truth and against other training-free NAS methods. The average degree metric correlated strongly with true architecture performance, outperforming previous training-free NAS methods while exhibiting low bias towards particular operations relative to the ground-truth rankings. Moreover, combining this graph measure with other training-free metrics such as Jacobian covariance boosted the ranking capability further, achieving new state-of-the-art Spearman rank correlations exceeding 0.8 on datasets like CIFAR-10, CIFAR-100, and ImageNet-16-120.
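This evaluation protocol can be mimicked in a few lines: score each architecture, then compute Spearman's rank correlation against ground-truth accuracies. The arrays below are made-up placeholders, and averaging the two rank vectors is just one simple way to combine metrics, not necessarily the paper's exact procedure.

```python
import numpy as np
from scipy.stats import spearmanr

# Placeholder scores for five hypothetical architectures (not benchmark results).
ground_truth_acc = np.array([0.71, 0.88, 0.65, 0.93, 0.80])
avg_degree_score = np.array([3.1, 4.6, 2.8, 5.0, 4.1])
jacov_score = np.array([0.42, 0.55, 0.40, 0.58, 0.49])

# Spearman's rho between the training-free score and true accuracy.
rho_single, _ = spearmanr(avg_degree_score, ground_truth_acc)

# One simple way to combine two training-free metrics: average their rank vectors.
combined_rank = (np.argsort(np.argsort(avg_degree_score))
                 + np.argsort(np.argsort(jacov_score)))
rho_combined, _ = spearmanr(combined_rank, ground_truth_acc)

print(f"avg. degree alone: {rho_single:.2f}, combined with Jacobian covariance: {rho_combined:.2f}")
```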
In conclusion, NASGraph presents a paradigm shift in neural architecture search by leveraging an ingenious graph-based approach. It overcomes a major computational bottleneck plaguing previous NAS methods by circumventing the need for architecture training. With its stellar performance, low bias, data-agnostic nature, and remarkable efficiency, NASGraph could catalyze a new era of rapid neural architecture exploration and discovery of powerful AI models across diverse applications.
Check out the Paper. All credit for this research goes to the researchers of this project.