
Researchers at Boston University Release the Platypus Family of Fine-Tuned LLMs: To Achieve Cheap, Fast and Powerful Refinement of Base LLMs

by Tanya Malhotra

Large Language Models (LLMs) have taken the world by storm. These highly capable and efficient models stand as modern marvels of Artificial Intelligence. With the ability to comprehend context, generate text, and converse coherently, they are redefining communication between humans and machines. Researchers have been focusing on improving the performance of base Large Language Models through parameter-efficient fine-tuning (PEFT), in this case by optimizing LLMs on the small but potent Open-Platypus dataset.

Recently, a team of researchers from Boston University introduced Platypus, a family of refined and merged Large Language Models that has attained unmatched performance and currently holds the top spot on HuggingFace’s Open LLM Leaderboard. One cornerstone of the work is the meticulously curated Open-Platypus dataset, which has been released to the public. Carefully selected from a variety of other open datasets, it is a small subset of those larger collections, focused on the elements most crucial for improving LLM performance.

The team’s goal is to retain the strong prior knowledge of pretrained LLMs while injecting domain-specific information, which it does by fine-tuning and merging LoRA (Low-Rank Adaptation) modules. Fine-tuning tailors the model to particular tasks while preserving the broader knowledge amassed during pretraining, and merging LoRA modules brings several specialized components together into a stronger LLM. This synergy helps surface the model’s hidden potential and specialized domain knowledge.
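As a rough illustration of this recipe, the sketch below uses the Hugging Face peft library to attach a LoRA adapter to a base model and later fold the trained adapter back into the base weights. The base model name, target modules, and hyperparameters are illustrative assumptions, not the authors’ exact configuration.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Load a base model (illustrative choice; Platypus builds on LLaMA-family bases).
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-13b-hf")

# LoRA configuration: train small low-rank adapter matrices instead of all weights.
# r, alpha, dropout, and target_modules are generic defaults here, not the paper's values.
lora_cfg = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()  # only a small fraction of parameters are trainable

# ... fine-tune `model` on Open-Platypus with a standard training loop or Trainer ...

# Fold the trained adapter back into the base weights for deployment.
merged = model.merge_and_unload()
merged.save_pretrained("platypus-13b-merged")
```

Because only the low-rank adapter matrices are updated, the memory and compute cost of fine-tuning stays far below that of full-parameter training, which is what makes the single-GPU figures reported later in the article plausible.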

One crucial aspect of the work is the rigorous effort put into verifying the integrity of the test data and identifying potential contamination in the training data. These comprehensive checks support the reliability and accuracy of the Platypus series of models, and the team’s disclosure of its verification procedure can serve as a guide for future work in the field.
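As a concrete, minimal sketch of such a check (with an assumed embedding model, toy questions, and an illustrative similarity cutoff, none of which are taken from the paper), one could flag training questions that are suspiciously close to benchmark test questions:

```python
from sentence_transformers import SentenceTransformer, util

# Small, fast sentence-embedding model (illustrative choice).
encoder = SentenceTransformer("all-MiniLM-L6-v2")

# Toy examples standing in for the training set and a benchmark test set.
train_questions = ["What is the derivative of x**2 with respect to x?"]
test_questions = ["Differentiate x squared with respect to x."]

train_emb = encoder.encode(train_questions, convert_to_tensor=True)
test_emb = encoder.encode(test_questions, convert_to_tensor=True)

# Cosine similarity between every training question and every test question.
sims = util.cos_sim(train_emb, test_emb)

THRESHOLD = 0.8  # illustrative cutoff, not the paper's exact value
flagged = [
    (train_questions[i], test_questions[j], float(sims[i, j]))
    for i in range(sims.shape[0])
    for j in range(sims.shape[1])
    if sims[i, j] > THRESHOLD
]
print(flagged)  # candidate contamination pairs for manual review
```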

The Platypus family of models, which spans a variety of model sizes, delivers exceptional performance on quantitative LLM metrics and sits at the top of the global Open LLM Leaderboard, a feat that attests to the effectiveness of the strategy. The team reports that its models perform on par with other state-of-the-art fine-tuned LLMs while using only a small fraction of the fine-tuning data and computational resources. For instance, a 13B Platypus model can be trained in a remarkable 5 hours on a single A100 GPU with only 25k questions. This efficiency highlights the quality of the Open-Platypus dataset and paves the way for further developments in the area.

The contributions can be summarized as follows:

Open-Platypus, a compact dataset comprising 11 public text datasets, has been introduced to enhance LLMs’ STEM and logic knowledge.

This dataset, consisting mainly of human-designed questions, provides strong performance with minimal fine-tuning time and cost.

The team has also described its process for excluding similar data, which reduces dataset size and redundancy.

The challenge of data contamination in LLM training sets and the data filtering process have been explored. 

An explanation of the selection and merging approach for specialized fine-tuned LoRA modules has been shared, contributing to the overall performance gains of the Platypus models (a minimal sketch follows below).
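Assuming two specialized adapters have already been trained, say one for STEM and one for logic (hypothetical names used here for illustration), a weighted merge of LoRA modules could be sketched with the peft library roughly as follows; the adapter paths, weights, and exact merging call are assumptions and may differ from the authors’ setup and across peft versions.

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Base model plus two hypothetical specialized LoRA adapters saved on disk.
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-13b-hf")
model = PeftModel.from_pretrained(base, "adapters/stem-lora", adapter_name="stem")
model.load_adapter("adapters/logic-lora", adapter_name="logic")

# Combine the adapters into a single weighted merge and make it the active adapter.
model.add_weighted_adapter(
    adapters=["stem", "logic"],
    weights=[0.5, 0.5],
    adapter_name="stem_logic_merge",
)
model.set_adapter("stem_logic_merge")
```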

Check out the Paper and Project. All credit for this research goes to the researchers on this project.

We are super excited to release the Platypus family of finetuned LLMs. Platypus achieves the top score in the Hugging Face Open LLM Leaderboard! The main focus of our work is to achieve cheap, fast and powerful refinement of base LLMs.

— Nataniel Ruiz (@natanielruizg) August 11, 2023

