
Google AI Proposes LayerNAS That Formulates Multi-Objective Neural Architecture Search To Combinatorial Optimization

by Niharika Singh

Neural architecture search (NAS) techniques discover model architectures by searching a constrained portion of the model space, typically defined manually. Various NAS algorithms have been proposed and have found several efficient architectures, including MobileNetV3 and EfficientNet. By reformulating multi-objective NAS as a combinatorial optimization problem, the LayerNAS method significantly reduces its complexity: it substantially cuts the number of model candidates that must be searched and the computation required for multi-trial searches, while still identifying architectures that perform better. Using a search space built on backbones taken from MobileNetV2 and MobileNetV3, LayerNAS discovered models whose top-1 accuracy on ImageNet is up to 4.9% better than existing state-of-the-art alternatives.

LayerNAS is built on search spaces that meet two criteria: (1) an optimal model can be constructed by taking one of the model candidates produced by searching the previous layer and applying one of the search options to the current layer; and (2) if a FLOPs constraint is placed on the current layer, a constraint on the preceding layer can be obtained by subtracting the FLOPs of the current layer. Under these conditions, it is possible to search linearly from layer 1 to layer n, because it is known that changing any previous layer after finding the best option for layer i will not improve the model's performance.
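A minimal sketch of this linear, layer-by-layer search is shown below. The names options_per_layer, flops, and accuracy are illustrative assumptions, not the paper's actual API, and this naive enumeration only prunes candidates that exceed the FLOPs budget; the cost-based pruning described in the next paragraph is what keeps the number of stored candidates bounded.

```python
# Hedged sketch: enumerate architectures layer by layer, pruning any candidate
# whose accumulated FLOPs already exceed the budget (adding layers only adds cost).

def layerwise_search(options_per_layer, flops, accuracy, flops_budget):
    """options_per_layer: list of lists of layer options (one list per layer).
    flops(option) and accuracy(candidate) are assumed callables."""
    candidates = [()]  # each candidate is a tuple of chosen options so far
    for layer_options in options_per_layer:
        next_candidates = []
        for cand in candidates:
            for opt in layer_options:
                new_cand = cand + (opt,)
                if sum(flops(o) for o in new_cand) <= flops_budget:
                    next_candidates.append(new_cand)
        candidates = next_candidates
    # Return the within-budget candidate with the highest accuracy, if any.
    return max(candidates, key=accuracy) if candidates else None
```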

Candidates can then be grouped according to their cost, limiting the number of candidates stored per layer. When two models have the same FLOPs, only the more accurate one is kept, provided that doing so does not change the architecture of the layers below. This layerwise, cost-based approach shrinks the search space dramatically while allowing rigorous reasoning about the algorithm's polynomial complexity. In contrast, a complete treatment would let the search space grow exponentially with the number of layers, because the full range of options is available at each layer. The experimental evaluation demonstrates that the best models can still be found within these constraints.
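The cost-based pruning step might look roughly like the sketch below, which keeps only the most accurate candidate per FLOPs bucket after each layer. The bucket_size quantization and the candidate_flops and accuracy callables are illustrative assumptions, not values or functions from the paper.

```python
# Hedged sketch: among candidates with (roughly) the same FLOPs, keep only the
# most accurate one, bounding how many candidates survive at each layer.

def prune_by_cost(candidates, candidate_flops, accuracy, bucket_size=10**6):
    best_per_bucket = {}
    for cand in candidates:
        bucket = int(candidate_flops(cand) // bucket_size)  # quantize the cost axis
        best = best_per_bucket.get(bucket)
        if best is None or accuracy(cand) > accuracy(best):
            best_per_bucket[bucket] = cand
    return list(best_per_bucket.values())
```

Because the number of buckets is bounded by the cost budget divided by the bucket size, the candidates stored per layer stay bounded, which is what makes the overall search polynomial rather than exponential in the number of layers.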

LayerNAS reduces NAS to a combinatorial optimization problem through this layerwise-cost approach. For each layer i, the cost and reward can be measured after training with a specific search option Si. This yields the following combinatorial problem: how can one option be chosen for each layer, while staying within a cost budget, so that the reward is maximized? There are many ways to solve it, and dynamic programming is one of the simplest. NAS algorithms are compared on three metrics: quality, stability, and efficiency. LayerNAS is evaluated on the standard benchmark NATS-Bench over 100 NAS runs and compared against other NAS algorithms such as random search, regularized evolution, and proximal policy optimization. The paper visualizes the differences between these search algorithms on the metrics above, reporting the average accuracy and its variation for each comparison (variation shown as a shaded region spanning the 25th-to-75th-percentile interquartile range).
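To illustrate the knapsack-style structure of that combinatorial problem, here is a hedged dynamic-programming sketch. It assumes, purely for illustration, that each layer option comes with an integer cost (e.g., MFLOPs) and an additive per-layer reward; in LayerNAS itself the reward is the accuracy of a trained candidate model rather than a sum of per-layer terms.

```python
# Hedged sketch: choose exactly one (cost, reward) option per layer, stay within
# `budget`, and maximize total reward, via dynamic programming over remaining budget.

def best_reward(options_per_layer, budget):
    """options_per_layer: list of lists of (cost, reward) pairs, one list per layer.
    Costs are assumed to be non-negative integers so the DP table stays bounded."""
    dp = {budget: 0.0}  # remaining budget -> best reward achievable so far
    for layer_options in options_per_layer:
        next_dp = {}
        for remaining, reward_so_far in dp.items():
            for cost, reward in layer_options:
                if cost <= remaining:
                    left = remaining - cost
                    if next_dp.get(left, float("-inf")) < reward_so_far + reward:
                        next_dp[left] = reward_so_far + reward
        dp = next_dp
    return max(dp.values()) if dp else None
```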

To avoid searching many unhelpful model designs, LayerNAS formulates the problem differently by separating cost from reward. Model candidates with fewer channels in earlier layers tend to perform better, which explains why LayerNAS discovers better models faster than other methods: it does not waste trials on models with unfavorable cost distributions. In short, LayerNAS addresses the multi-objective NAS challenge with combinatorial optimization, which effectively limits the search complexity to be polynomial.

In summary, the researchers developed LayerNAS, a new method for finding better neural network architectures. They compared it with other NAS methods and found that it performed better, and they used it to find improved models built on MobileNetV2 and MobileNetV3 backbones.

Check out the Paper and Reference Article. Don’t forget to join our 20k+ ML SubReddit, Discord Channel, and Email Newsletter, where we share the latest AI research news, cool AI projects, and more. If you have any questions regarding the above article or if we missed anything, feel free to email us at Asif@marktechpost.com

Check Out 100s of AI Tools in AI Tools Club


