
This AI Paper from the University of Cambridge and UCL Unveils ‘Blending’: A Breakthrough in Efficiently Achieving ChatGPT-level Performance with Smaller Models

by Vineet Kumar

In the realm of conversational AI, the trend toward ever-larger models, exemplified by behemoths like ChatGPT, Bard, and Gemini, has been unmistakable. The prevailing understanding is that increasing model parameters and training data significantly enhances a language model’s quality and capabilities. However, the computational demands of these colossal models raise concerns about efficiency. Can smaller models, when intelligently combined, match or even outperform their larger counterparts? This is the central question explored in the study at hand.

Reference: https://arxiv.org/pdf/2401.02994.pdf

To test this idea, the authors present Blended (shown in Algorithm 1), a groundbreaking approach demonstrating that randomly selecting responses from a group of base chat AIs yields a highly capable and engaging combined chat AI. Surprisingly, this collaborative model outperforms systems with orders of magnitude more parameters. The blended model appears to embody the “best of all” characteristics, drawing on the strengths of its diverse constituent systems as the conversational history evolves. This fosters more captivating and varied responses, leading to a more engaging user experience. The efficacy of Blended is validated through large-scale A/B tests on real users within the CHAI platform.
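As a rough illustration of Algorithm 1, here is a minimal Python sketch of the core loop: one base model is drawn uniformly at random per turn, and all models share the same conversation history. The model names and the generate interface are placeholders, not the paper’s actual implementation.

```python
import random

# Hypothetical stand-ins for the base chat AIs; in the paper these are
# fine-tuned LLMs such as Pygmalion, the Chai model, and Vicuna.
def make_chat_ai(name):
    def generate(history):
        # Placeholder: a real system would sample a response from the
        # underlying language model, conditioned on the full history.
        return f"[{name}] reply to: {history[-1]}"
    return generate

base_models = [make_chat_ai(n) for n in ("model_a", "model_b", "model_c")]

def blended_respond(history):
    """One turn of Blended: draw a base chat AI uniformly at random and
    let it continue the *shared* conversation history."""
    theta = random.choice(base_models)   # random selection per response
    response = theta(history)
    history.append(response)             # later turns (from any model) see this
    return response

history = ["user: hi there!"]
print(blended_respond(history))
history.append("user: tell me a joke")
print(blended_respond(history))
```

Because every model conditions on responses produced by the others, the blend behaves like a single collaborative system rather than a round-robin of independent bots.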

In the landscape of chat AI, the objective is to create an automatic system that produces engaging and entertaining conversations. A chat AI, parameterized by θ, acts as an implicit language model that models the probability of the next response given the conversational history. These models are typically built with a three-stage pipeline inspired by InstructGPT: fine-tuning a pre-trained language model, training a reward model from human feedback, and using that reward model to improve the original language model.
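Lightly paraphrasing the paper’s notation, a chat AI with parameters θ defines a distribution over the next response given the user turns and responses seen so far:

```latex
% A chat AI \theta is an implicit language model over the next
% response r_k, conditioned on the conversational history
% (user turns u_1..u_k and earlier responses r_1..r_{k-1}):
P(r_k \mid u_1, r_1, \dots, r_{k-1}, u_k;\, \theta)
```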

Designing a chat AI involves many choices, including the base language model, the fine-tuning data, and the nature of the feedback. One can therefore expect different recipes and training seeds to yield highly diverse systems, each demonstrating unique strengths and characteristics. The study proposes combining such chat AIs, represented by parameters {θ1, θ2, …, θN}, through a discrete summation that approximates a continuous integral, in line with Bayesian statistical principles (see the formulation below).
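Sketching the paper’s Bayesian framing: treat the model parameters as a random variable, marginalize over them, and approximate the integral with a uniform discrete distribution over the N available systems:

```latex
% Marginalizing over chat-AI parameters \theta, then approximating
% the integral with a discrete uniform distribution over N systems:
P(r \mid h) = \int P(r \mid h, \theta)\, P(\theta)\, d\theta
            \approx \frac{1}{N} \sum_{n=1}^{N} P(r \mid h, \theta_n)
```

Drawing one system uniformly at random and sampling a response from it, as Blended does each turn, amounts to a Monte Carlo sample from this mixture.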

At the heart of this approach, Blended randomly selects a chat AI θ to generate each response, creating a conversation that blends the strengths of the individual chat AIs. The blending is a collaborative process: because every model conditions on the shared history, previous responses influence the current output, resulting in a more engaging and diverse conversation.

Now, the question arises: how do we evaluate NLG outputs? The traditional gold standard uses human evaluators who score the quality of generated responses, which can be costly. Instead, the study uses user-interaction statistics as meaningful measures of engagement and quality. User retention, measured by the fraction of returning users, and user engagement, measured by the average time spent per user, serve as industry-standard proxy metrics for chat AI quality.
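To make these proxy metrics concrete, here is a small Python sketch of how they could be computed from interaction logs. The data layout (user-to-sessions mapping, per-user time totals) is a hypothetical schema for illustration, not CHAI’s actual pipeline.

```python
from datetime import date, timedelta

def user_retention(sessions_by_user, day):
    """k-day retention: fraction of users in the cohort who return
    exactly `day` days after their first session.
    `sessions_by_user` maps user_id -> list of session-start dates."""
    cohort = {u for u, s in sessions_by_user.items() if s}
    if not cohort:
        return 0.0
    returned = sum(
        1 for u in cohort
        if any(d - min(sessions_by_user[u]) == timedelta(days=day)
               for d in sessions_by_user[u])
    )
    return returned / len(cohort)

def mean_engagement(time_spent_by_user):
    """Average total time spent per user (e.g. seconds per day)."""
    return sum(time_spent_by_user.values()) / max(len(time_spent_by_user), 1)

sessions = {"u1": [date(2024, 1, 1), date(2024, 1, 31)],
            "u2": [date(2024, 1, 1)]}
print(user_retention(sessions, day=30))  # 0.5: one of two users returned
```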

The experiments (shown in Figure 1) involve four base chat AIs: moderately sized open-source LLMs and the state-of-the-art GPT-3.5. Blended (25B total parameters), comprising Pygmalion, the Chai model, and Vicuna, is compared against the individual systems and against GPT-3.5 (175B parameters) through A/B tests on real users. Blended demonstrates significantly higher engagement and user retention, even outperforming GPT-3.5, with a fraction of the parameters and an inference cost equivalent to that of a single 6B/13B system, since only one constituent model generates each response (shown in Figures 2 and 3).


The results challenge the notion that scaling up model size is the only route to quality improvement. Blending smaller open-source systems proves a viable strategy for enhancing conversational experiences without increasing computational burdens. The study concludes by suggesting avenues for further improvement, emphasizing model collaboration over simplistic parameter scaling in the design of successful chat AIs.

Check out the Paper. All credit for this research goes to the researchers of this project.

