
Salesforce AI Introduces ReGenesis: A Novel AI Approach to Improving Large Language Model Reasoning Capabilities

by Nikhil

Large language models (LLMs) have revolutionized how machines process and generate human language, but their ability to reason effectively across diverse tasks remains a significant challenge. Researchers in AI are working to enable these models to perform not just language understanding but also complex reasoning tasks, such as problem-solving in mathematics, logic, and general knowledge. The focus is on creating systems that can perform reasoning-based tasks autonomously and accurately across various domains.

One of the critical problems faced by AI researchers is that many current methods for enhancing LLM reasoning rely heavily on human intervention. These methods often require meticulously human-designed reasoning examples or supervision from a superior model, both of which are costly and time-consuming. Furthermore, when LLMs are tested on tasks outside their original training domain, their accuracy drops, revealing that current systems are not yet truly general in their reasoning capabilities. This gap in performance across varied tasks is a barrier to creating adaptable, general-purpose AI systems.

Several existing methods aim to tackle this issue. These approaches typically prompt LLMs to generate reasoning steps, often called chain-of-thought (CoT) reasoning, and filter those steps based on outcome correctness or self-consistency. However, methods such as STaR and LMSI have limitations: they rely on small, fixed sets of human-designed reasoning paths, which help the models perform well on tasks similar to those they were trained on but struggle when applied to out-of-domain (OOD) tasks, limiting their overall usefulness. Thus, while these approaches can enhance reasoning in a controlled setting, they fail to generalize and provide consistent performance when faced with new challenges.
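To make the filtering step concrete, here is a minimal sketch of outcome- and self-consistency-based filtering of sampled reasoning paths, the kind of mechanism STaR- and LMSI-style methods rely on. It is a generic illustration under stated assumptions: `sample_cot` is a hypothetical stand-in for an LLM call that returns a reasoning trace and a final answer, not an API from any of these papers.

```python
from collections import Counter
from typing import Callable, List, Optional, Tuple


def filter_reasoning_paths(
    question: str,
    sample_cot: Callable[[str], Tuple[str, str]],  # hypothetical: returns (reasoning, answer)
    n_samples: int = 8,
    ground_truth: Optional[str] = None,
) -> List[Tuple[str, str]]:
    """Sample several chain-of-thought paths and keep only those whose final
    answer matches the ground truth (outcome filtering) or, if no label is
    available, the majority-vote answer (self-consistency)."""
    samples = [sample_cot(question) for _ in range(n_samples)]
    if ground_truth is not None:
        target = ground_truth
    else:
        # Majority vote over the final answers across the sampled paths.
        target = Counter(answer for _, answer in samples).most_common(1)[0][0]
    return [(reasoning, answer) for reasoning, answer in samples if answer == target]
```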

In response to these limitations, researchers from Salesforce AI Research introduced a novel method called ReGenesis. This method allows LLMs to self-improve their reasoning abilities without requiring additional human-designed examples. ReGenesis enables models to synthesize their reasoning paths as post-training data, helping them adapt to new tasks more effectively. By progressively refining reasoning from abstract guidelines to task-specific structures, the method addresses the shortcomings of existing models and helps build a more generalized reasoning capability.

The methodology behind ReGenesis is structured into three key phases. First, it generates broad, task-agnostic reasoning guidelines that are general principles applicable to various tasks. These guidelines are not tied to any particular problem, which allows the model to maintain flexibility in its reasoning. Next, these abstract guidelines are adapted into task-specific reasoning structures, allowing the model to develop more focused reasoning strategies for particular problems. Finally, the LLM uses these reasoning structures to create detailed reasoning paths. Once the paths are generated, the model filters them using ground-truth answers or majority-vote techniques to eliminate incorrect solutions. This process, therefore, enhances the model’s reasoning capabilities without relying on predefined examples or extensive human input, making the entire process more scalable and effective for a range of tasks.
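The sketch below traces these three phases end to end, assuming a generic `llm` callable that maps a prompt to a completion; the prompt wording, the helper name `regenesis_synthesize`, and the output format are illustrative assumptions rather than details taken from the paper.

```python
from typing import Callable, List, Optional


def regenesis_synthesize(
    llm: Callable[[str], str],  # hypothetical: prompt in, completion out
    question: str,
    ground_truth: Optional[str] = None,
    n_guidelines: int = 3,
) -> List[dict]:
    # Phase 1: task-agnostic reasoning guidelines (general principles, no task details).
    guidelines = [
        llm("Propose a general problem-solving guideline, independent of any specific task.")
        for _ in range(n_guidelines)
    ]

    training_examples = []
    for guideline in guidelines:
        # Phase 2: adapt the abstract guideline into a task-specific reasoning structure.
        structure = llm(
            f"Guideline: {guideline}\nTask: {question}\n"
            "Adapt the guideline into a step-by-step reasoning structure for this task."
        )
        # Phase 3: instantiate the structure as a full reasoning path with a final answer.
        path = llm(
            f"Question: {question}\nReasoning structure: {structure}\n"
            "Follow the structure to solve the question. End with 'Answer: <answer>'."
        )
        answer = path.rsplit("Answer:", 1)[-1].strip()
        # Filtering: keep paths whose answer matches the label; without labels,
        # a majority vote across paths could be used instead.
        if ground_truth is None or answer == ground_truth:
            training_examples.append({"prompt": question, "completion": path})
    return training_examples
```

The surviving question-and-reasoning pairs would then serve as post-training (fine-tuning) data for the same model, which is the self-improvement loop described above.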

The results of implementing ReGenesis are impressive. The researchers evaluated the method on both in-domain and out-of-domain tasks and observed that ReGenesis consistently outperformed existing methods. Specifically, ReGenesis delivered a 6.1% improvement on OOD tasks, whereas other models exhibited an average performance drop of 4.6%. In one set of evaluations involving six OOD tasks, including mathematical reasoning and logic, ReGenesis maintained its performance while other models declined significantly after post-training. On in-domain tasks, those the models were originally trained on, ReGenesis also showed superior performance, achieving between 7.1% and 18.9% better results across various tasks, including common-sense reasoning and mathematical problem-solving.

More detailed results further highlight ReGenesis's effectiveness. Across the six OOD tasks, spanning math, logic, and natural language inference, ReGenesis improved accuracy consistently, whereas existing methods such as STaR suffered declines in accuracy when applied to new tasks, making ReGenesis a more robust solution for reasoning generalization. In a separate evaluation involving five in-domain tasks, ReGenesis outperformed five baseline methods by margins of 7.1% to 18.9%, further underscoring its ability to reason effectively across diverse tasks.

In conclusion, ReGenesis from Salesforce AI Research addresses a significant gap in the development of LLMs. By enabling models to self-synthesize reasoning paths from general guidelines and adapt them to specific tasks, ReGenesis provides a scalable way to improve both in-domain and out-of-domain performance. The method's ability to enhance reasoning without relying on costly human supervision or task-specific training data marks an important step toward AI systems that can truly generalize across a wide range of tasks. The performance gains reported on in-domain and out-of-domain tasks make ReGenesis a promising tool for advancing reasoning capabilities in AI.

Check out the Paper. All credit for this research goes to the researchers of this project.


