
Researchers from Vanderbilt University and UC Davis Introduce PRANC: A Deep Learning Framework that is Memory-Efficient during both the Learning and Reconstruction Phases

by Adnan Hassan

Researchers from Vanderbilt University and the University of California, Davis, introduced PRANC, a framework demonstrating the reparameterization of a deep model as a linear combination of randomly initialized and frozen deep models in the weight space. During training, local minima within the subspace spanned by these basis networks are sought, enabling significant compaction of the deep model. PRANC addresses challenges in storing and communicating deep models, offering potential applications in multi-agent learning, continual learners, federated systems, and edge devices. PRANC enables memory-efficient inference through on-the-fly generation of layerwise weights.
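To make the core idea concrete, here is a minimal PyTorch sketch (not the authors' implementation) of a layer whose weight is a learned linear combination of frozen, randomly initialized basis weights. The class name, basis count, and seed handling are illustrative assumptions.

```python
import torch
import torch.nn as nn

class PRANCLinear(nn.Module):
    """Sketch of a PRANC-style layer: its effective weight is a learned
    linear combination of k frozen, randomly initialized basis weights."""

    def __init__(self, in_features, out_features, num_basis=64, seed=0):
        super().__init__()
        g = torch.Generator().manual_seed(seed)
        # Frozen random basis weights (never updated during training).
        basis = torch.randn(num_basis, out_features, in_features, generator=g)
        self.register_buffer("basis", basis)
        # Only the mixture coefficients alpha are trainable.
        self.alpha = nn.Parameter(torch.randn(num_basis) / num_basis)

    def forward(self, x):
        # Reconstruct the effective weight as sum_j alpha_j * basis_j.
        weight = torch.einsum("k,koi->oi", self.alpha, self.basis)
        return nn.functional.linear(x, weight)

# To store or transmit this layer, only the seed and the num_basis scalars in
# alpha are needed; the basis can be regenerated from the seed at the receiver.
layer = PRANCLinear(in_features=32, out_features=16, num_basis=64, seed=0)
out = layer(torch.randn(8, 32))
```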

The study reviews prior work on model compression and continual learning that uses randomly initialized networks and subnetworks, and compares compression methods such as hashing, pruning, and quantization, highlighting their limitations. PRANC targets extreme model compression and outperforms these existing methods. In image compression, PRANC is also compared against traditional codecs and learning-based approaches, demonstrating its efficacy. Noted limitations include difficulty in reparameterizing certain model parameters and the computational cost of training large models.

The research challenges the notion that gains in accuracy must come solely from increased model complexity or parameter count. By parameterizing a deep model as a linear combination of frozen random models, PRANC compresses models dramatically for efficient storage and communication, with applications in multi-agent learning, continual learning, federated systems, and edge devices. The study emphasizes the need for extreme compression rates and positions PRANC against other compression methods.

PRANC parametrizes a deep model as a linear combination of randomly initialized basis models in the weight space and learns the mixture coefficients that minimize the task loss within the span of those basis models. Because every basis model can be regenerated from a single scalar seed, only the seed and the learned coefficients need to be stored or transmitted, which sharply reduces communication cost. Optimization uses standard backpropagation, and memory efficiency is improved by chunking the basis models and generating each chunk on the fly with a GPU-based pseudo-random generator. The paper also applies PRANC to image compression and compares its performance with other methods.
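The following is a hypothetical sketch of the reconstruction side under these assumptions: the receiver knows only the scalar seed and the learned coefficients, regenerates the random basis chunk by chunk with a seeded pseudo-random generator, and accumulates the weight vector so the full basis matrix is never materialized. Function names and chunking details are illustrative, not taken from the paper.

```python
import torch

def reconstruct_weights(alpha, num_params, seed=0, chunk_size=8192, device="cpu"):
    """Hypothetical sketch: rebuild a flat weight vector from a scalar seed and
    learned coefficients, generating the frozen random basis in chunks so the
    full (num_basis x num_params) basis matrix is never held in memory."""
    g = torch.Generator(device=device).manual_seed(seed)
    num_basis = alpha.numel()
    weights = torch.zeros(num_params, device=device)
    for start in range(0, num_params, chunk_size):
        end = min(start + chunk_size, num_params)
        # Regenerate this chunk of every basis model from the shared seed;
        # training and reconstruction must use the same chunking order.
        basis_chunk = torch.randn(num_basis, end - start, generator=g, device=device)
        weights[start:end] = alpha @ basis_chunk
    return weights

# Only the seed and alpha need to be stored or communicated. Toy example:
alpha = 0.01 * torch.randn(2_048)   # stands in for learned coefficients
w = reconstruct_weights(alpha, num_params=100_000, seed=42)
print(w.shape)  # torch.Size([100000])
```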

The evaluation covers PRANC’s image classification and image compression performance, showing strong results on both tasks. On image classification, PRANC achieves compression rates nearly 100 times higher than baselines while enabling memory-efficient inference. On image compression, it surpasses JPEG and trained INR methods on PSNR and MS-SSIM across bitrates. Visualizations illustrate images reconstructed from different subsets of the basis models, and comparisons with pruning methods highlight competitive accuracy and parameter efficiency.
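For reference, PSNR (one of the metrics cited above) can be computed as follows. This is the generic definition rather than code from the paper, and it assumes image tensors scaled to [0, max_val].

```python
import torch

def psnr(original, reconstructed, max_val=1.0):
    """Peak signal-to-noise ratio in dB; higher means a closer reconstruction."""
    mse = torch.mean((original - reconstructed) ** 2)
    return 10.0 * torch.log10(max_val ** 2 / mse)

# Toy usage: compare an image against a slightly noisy "reconstruction".
img = torch.rand(3, 64, 64)
recon = (img + 0.02 * torch.randn_like(img)).clamp(0, 1)
print(f"PSNR: {psnr(img, recon):.2f} dB")
```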

In summary, PRANC compresses deep models significantly by parametrizing them as a linear combination of randomly initialized, frozen models. It outperforms baselines in image classification while achieving substantial compression, and its on-the-fly generation of layerwise weights enables memory-efficient inference. In image compression, it surpasses JPEG and trained INR methods on PSNR and MS-SSIM across bitrates. The study suggests PRANC is applicable to lifelong learning and distributed scenarios; remaining limitations are the difficulty of reparameterizing certain model parameters and the computational expense of training large models.

As future directions, the authors suggest extending PRANC to compact generative models such as GANs or diffusion models for efficient parameter storage and communication. Other possibilities include learning the linear mixture coefficients in order of decreasing importance to improve compactness, and optimizing the ordering of basis models to trade accuracy against compactness under communication or storage constraints. The study also proposes exploring PRANC in exemplar-based semi-supervised learning methods, emphasizing its role in representation learning through aggressive image augmentation.

Check out the Paper and Github. All credit for this research goes to the researchers of this project.

