Large language models (LLMs) have revolutionized the field of artificial intelligence by performing a wide range of tasks across different domains. These models are expected to work seamlessly in multiple languages, solving complex problems while ensuring safety. However, the challenge lies in maintaining safety without compromising performance, especially in multilingual settings. As AI technologies become globally pervasive, addressing the safety concerns that arise when models trained predominantly in English are deployed across various languages and cultural contexts is essential.
The core issue revolves around balancing performance and safety in LLMs. Safety concerns arise when models produce biased or harmful outputs, particularly in languages with limited training data. Typically, the methods to address this involve fine-tuning models on mixed datasets, combining general-purpose and safety tasks. However, these approaches can lead to undesirable trade-offs. In many cases, increasing the safety measures in LLMs can negatively impact their ability to perform well on general tasks. The challenge, therefore, is to develop an approach that improves both safety and performance in multilingual LLMs without requiring massive amounts of task-specific data.
Current methods used to balance these objectives often rely on data-mixing techniques, in which a single model is trained on datasets drawn from many tasks and languages at once. While these methods achieve some level of multitasking ability, they can leave safety concerns under-addressed in languages other than English. In addition, the complexity of managing numerous tasks simultaneously often reduces the model’s ability to perform well in any one of them. The lack of specialized attention to each task and language limits the model’s capacity to address safety and general performance effectively.
To overcome these limitations, researchers from Cohere AI have introduced an innovative approach based on model merging. Instead of relying on the traditional method of data mixing, where a single model is trained across multiple tasks and languages, the researchers propose merging separate models that have been independently fine-tuned for specific tasks and languages. This method allows for better specialization within each model before merging them into a unified system. By doing so, the models retain their unique capabilities, offering improvements in safety and general performance across diverse languages.
The merging process is conducted through multiple techniques. The primary method examined by the researchers is Spherical Linear Interpolation (SLERP), which blends the weights of two models along a spherical path rather than a straight line, producing smooth transitions between them. This technique helps preserve the distinctive properties of each model, allowing the merged model to handle varied tasks without compromising safety or performance. Another method, TIES (TrIm, Elect Sign & Merge), focuses on resolving conflicts between task-specific fine-tuned models by trimming small parameter changes and aligning the signs of the remaining ones before merging. The merging techniques also include linear merging and DARE-TIES, which further enhance the robustness of the final model by addressing interference between task-specific updates and ensuring that the retained parameters contribute positively to performance.
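To make the contrast concrete, here is a minimal sketch of SLERP versus plain linear merging on flat weight vectors. This is an illustrative pure-Python toy (no ML framework, hypothetical function names); in practice, merging tools apply the same interpolation tensor by tensor across two full model checkpoints:

```python
import math

def linear_merge(w0, w1, t):
    """Plain linear interpolation: a straight line between the two weight sets."""
    return [(1 - t) * a + t * b for a, b in zip(w0, w1)]

def slerp(w0, w1, t):
    """Spherical linear interpolation between two weight vectors.

    Interpolates along the arc between w0 and w1 instead of the chord,
    which better preserves the geometry (e.g. norms) of the endpoints.
    """
    norm0 = math.sqrt(sum(x * x for x in w0))
    norm1 = math.sqrt(sum(x * x for x in w1))
    # Cosine of the angle between the two weight vectors.
    dot = sum(a * b for a, b in zip(w0, w1)) / (norm0 * norm1)
    dot = max(-1.0, min(1.0, dot))  # clamp for numerical safety
    theta = math.acos(dot)
    if theta < 1e-6:
        # Nearly parallel vectors: SLERP degenerates to linear interpolation.
        return linear_merge(w0, w1, t)
    s = math.sin(theta)
    return [
        (math.sin((1 - t) * theta) / s) * a + (math.sin(t * theta) / s) * b
        for a, b in zip(w0, w1)
    ]

# Merge two toy "checkpoints" halfway between them.
merged = slerp([1.0, 0.0], [0.0, 1.0], 0.5)
```

At t=0 or t=1 SLERP returns the original model unchanged; intermediate values of t trade off between the two parents while staying on the arc connecting them, which is why it tends to preserve each parent’s capabilities better than a straight average.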
The results of this research show clear improvements in both general performance and safety. For instance, SLERP merging achieved a 7% improvement in general performance and a 3.1% reduction in harmful outputs compared to traditional data-mixing methods. TIES merging, by contrast, delivered a larger 10.4% reduction in harmful outputs, though at the cost of a 7.4% drop in general performance. These numbers indicate that model merging can significantly outperform data mixing when balancing safety and performance. Moreover, when models were fine-tuned for individual languages and then merged, the researchers observed up to a 6.6% reduction in harmful outputs and a 3.8% improvement on general benchmarks, further supporting the effectiveness of language-specific model merging over multilingual mixed training.
The performance improvements were particularly noteworthy in some languages, with Russian showing the highest reduction in harmful generations (up to 15%) using TIES merging. Spanish, meanwhile, exhibited a 10% improvement in general performance with both SLERP and TIES methods. However, not all languages benefited equally. English models, for example, showed a decline in safety performance when merged, highlighting how outcomes vary with the underlying training data and merging strategy.
The research provides a comprehensive framework for building safer and more effective multilingual LLMs. By merging models fine-tuned for safety and performance on specific tasks and languages, the researchers from Cohere AI demonstrated a more efficient and scalable method for improving LLMs. The approach reduces the need for massive amounts of training data and allows for better alignment of safety protocols across languages, which is critically needed in today’s AI landscape.
In conclusion, model merging represents a promising step forward in addressing the challenge of balancing performance and safety in LLMs, particularly in multilingual settings. This method significantly improves LLMs’ ability to deliver safe and high-quality outputs, especially when applied to low-resource languages. As AI evolves, techniques like model merging could become essential tools for ensuring that AI systems are robust and safe across diverse linguistic and cultural contexts.
Check out the Paper. All credit for this research goes to the researchers of this project.
The post This AI Research from Cohere for AI Compares Merging vs Data Mixing as a Recipe for Building High-Performant Aligned LLMs appeared first on MarkTechPost.