
A New AI Research Introduces A Method To Answer Questions By Meta-Reasoning over Multiple Chains of Thought

by Tanushree Shenwai

Chain-of-thought (CoT) prompting uses a step-by-step explanation to guide a large language model (LLM) toward an answer, and it has been shown to significantly improve performance on tasks that require complex reasoning. The self-consistency (SC) technique further improves accuracy by sampling multiple chains of thought and returning the majority answer.
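To make the SC step concrete, here is a minimal Python sketch of majority voting over sampled chains. The `sample_cot` helper is a hypothetical stand-in, not something from the paper: it would prompt an LLM with a CoT prompt at nonzero temperature and return the reasoning text along with the final answer.

```python
from collections import Counter

def self_consistency(question, sample_cot, n_chains=5):
    # sample_cot is a hypothetical helper that calls an LLM with a CoT
    # prompt at nonzero temperature and returns (reasoning_text, answer).
    answers = [sample_cot(question)[1] for _ in range(n_chains)]
    # SC discards the reasoning text and returns the majority answer.
    return Counter(answers).most_common(1)[0][0]
```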

SC does improve accuracy, but the method has two flaws. First, when the space of possible answers is large, the sampled chains may each reach a different result, leaving no meaningful majority to vote on. Second, discarding the reasoning that led to each answer throws away relevant intermediate facts.

In their paper, researchers from Tel Aviv University, the Allen Institute for Artificial Intelligence, and Bar-Ilan University present Multi-Chain Reasoning (MCR), in which a large language model (LLM) is instructed to meta-reason across several reasoning chains and generate a final answer together with an explanation. Unlike in SC, the sampled reasoning chains are not used for their final predictions but as a means of gathering evidence from multiple chains. Where SC simply returns the answer most of the chains agree on, MCR combines the intermediate steps from each chain into a single context that is handed, alongside the original question, to a meta-reasoner: a separate LLM prompted to reason over the different lines of reasoning before producing a final answer and justification.
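As a rough illustration of this idea (with hypothetical helpers, not the authors' actual code), the MCR step can be pictured as: gather the intermediate question-answer pairs from every chain, concatenate them into one multi-chain context, and prompt a second LLM with that context plus the original question.

```python
def multi_chain_reasoning(question, sample_chain, meta_llm, n_chains=5):
    # sample_chain is a hypothetical helper returning one chain as a list
    # of (sub_question, intermediate_answer) steps; meta_llm is a plain
    # text-in, text-out call to a separate LLM. Neither is the paper's API.
    lines = []
    for _ in range(n_chains):
        for sub_q, sub_a in sample_chain(question):
            lines.append(f"Q: {sub_q} A: {sub_a}")
    multi_chain_context = "\n".join(lines)
    prompt = (
        multi_chain_context
        + f"\n\nQuestion: {question}\n"
        + "Based on the steps above, give a final answer and explanation."
    )
    # The meta-reasoner reads evidence from all chains, not just their votes.
    return meta_llm(prompt)
```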

The core of MCR consists of three components. A decomposition model and a retriever together generate each reasoning chain. The chains are then combined into a multi-chain context, which is fed to the meta-reasoner.
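Here is a minimal sketch of how one such chain might be generated, assuming hypothetical `decompose`, `retrieve`, and `answer_llm` callables in place of the paper's actual models:

```python
def generate_chain(question, decompose, retrieve, answer_llm, max_steps=5):
    # decompose, retrieve, and answer_llm are hypothetical stand-ins for
    # the paper's decomposition model, retriever, and answering LLM.
    steps = []
    for _ in range(max_steps):
        sub_q = decompose(question, steps)   # propose the next sub-question
        if sub_q is None:                    # decomposition signals completion
            break
        evidence = retrieve(sub_q)           # fetch supporting passages
        sub_a = answer_llm(sub_q, evidence)  # answer using the evidence
        steps.append((sub_q, sub_a))
    return steps  # one chain, later merged into the multi-chain context
```

A chain built this way could play the role of the `sample_chain` helper in the earlier MCR sketch.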

The team tests MCR on several challenging multi-hop QA datasets in an open-domain setting, categorizing questions as either implicit or explicit. As baselines, they compare against SC and against retrieval-augmented variants of Self-Ask and CoT. Using the same number of reasoning chains, MCR consistently beats all baselines. They also assess the quality of the explanations MCR generates through careful manual scoring: according to the findings, MCR produces well-reasoned explanations in more than 82% of cases.

Check out the Research Paper and GitHub Link. Don’t forget to join our 20k+ ML SubReddit, Discord Channel, and Email Newsletter, where we share the latest AI research news, cool AI projects, and more. If you have any questions regarding the above article or if we missed anything, feel free to email us at Asif@marktechpost.com

Check out 100s of AI Tools in AI Tools Club


