
This AI Paper Introduces the Complexity-Impacted Reasoning Score (CIRS): Evaluating the Role of Code Complexity in Enhancing the Reasoning Abilities of Large Language Models

by Aneesh Tickoo

Large language models (LLMs) have become a general-purpose approach to problem-solving in embodied artificial intelligence. LLMs’ reasoning skills are crucial in embodied AI, where agents must understand the semantic nuances of their environment for efficient control. Recent methods, referred to as “program of thought,” use programming languages as an improved prompting mechanism for challenging reasoning tasks. Unlike chain-of-thought prompting, program-of-thought prompting decomposes a problem into executable code segments and deals with them one at a time. However, the relationship between the use of programming languages and the development of LLMs’ reasoning skills has not yet received enough research attention. The crucial question remains: when does program-of-thought prompting work for reasoning?
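To make the contrast concrete, here is an illustrative program-of-thought rationale (a hypothetical example, not taken from the paper): rather than reasoning in free-form text as in chain-of-thought, the model emits executable Python, and running the code yields the answer.

```python
# Hypothetical program-of-thought rationale (illustration, not from the paper).
# Problem: "A store sells pens at $3 each. Alice buys 4 pens and pays with
# a $20 bill. How much change does she receive?"
price_per_pen = 3
pens_bought = 4
payment = 20

total_cost = price_per_pen * pens_bought  # step 1: cost of the pens
change = payment - total_cost             # step 2: change from the $20 bill
print(change)                             # executing the rationale prints 8
```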

This paper proposes the complexity-impacted reasoning score (CIRS), a comprehensive metric for the relationship between code reasoning steps and their effect on LLMs’ reasoning abilities. The authors contend that programming languages are inherently superior to serialized natural language because (1) they better model complex structures and (2) their innate procedure-oriented logic helps in solving problems that require multiple reasoning steps. Their proposed metric therefore assesses code complexity from both a structural and a logical standpoint. For the structural view, they compute the complexity of code reasoning steps (rationales) using the abstract syntax tree (AST). To retain all of the structural information in the AST, which is represented as a tree, their method uses three AST indicators (node count, node type, and depth), yielding a thorough understanding of code structure.
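As an unofficial sketch of how such AST indicators can be computed, the snippet below uses Python’s built-in ast module to extract node count, the number of distinct node types, and tree depth from a code rationale; the exact normalization used in CIRS may differ from this.

```python
import ast

def structural_indicators(code: str) -> dict:
    """Compute the three AST indicators named above: node count,
    distinct node types, and tree depth. A minimal sketch, not the
    authors' implementation."""
    tree = ast.parse(code)
    nodes = list(ast.walk(tree))  # every node in the AST

    def depth(node: ast.AST) -> int:
        children = list(ast.iter_child_nodes(node))
        return 1 + max((depth(c) for c in children), default=0)

    return {
        "node_count": len(nodes),
        "node_types": len({type(n).__name__ for n in nodes}),
        "depth": depth(tree),
    }

print(structural_indicators("x = 1\nfor i in range(3):\n    x += i"))
```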

Researchers from Zhejiang University, Donghai Laboratory, and the National University of Singapore determine logical complexity by combining coding difficulty with cyclomatic complexity, drawing inspiration from Halstead and McCabe. The code’s operators, operands, and control flow can thus all be taken into account, and the complexity of the logic within the code can be calculated explicitly. Through an empirical investigation using the proposed CIRS, they discover that current LLMs have a limited comprehension of symbolic information such as code, and that not all complex code data can be taught to and understood by LLMs. Low-complexity code blocks lack the necessary information, while high-complexity code blocks can be too challenging for LLMs to understand. Only code data with an appropriate level of complexity (in both structure and logic), neither trivial nor overly intricate, effectively improves the reasoning abilities of LLMs.
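The sketch below illustrates this kind of logical-complexity score, again via Python’s ast module: a simplified Halstead difficulty (distinct operators, distinct and total operands) multiplied by McCabe’s cyclomatic complexity (decision points plus one). The operator/operand extraction and the final combination are assumptions made here for illustration; the paper’s exact formula may differ.

```python
import ast

def logical_complexity(code: str) -> float:
    """Simplified Halstead difficulty x cyclomatic complexity.
    A rough sketch of the idea, not the paper's exact formula."""
    tree = ast.parse(code)
    operators, operands, decisions = [], [], 0

    for node in ast.walk(tree):
        if isinstance(node, (ast.BinOp, ast.UnaryOp, ast.BoolOp, ast.AugAssign)):
            operators.append(type(node.op).__name__)  # e.g. Add, Mult
        elif isinstance(node, ast.Compare):
            operators.extend(type(op).__name__ for op in node.ops)
        elif isinstance(node, ast.Name):
            operands.append(node.id)
        elif isinstance(node, ast.Constant):
            operands.append(repr(node.value))
        if isinstance(node, (ast.If, ast.For, ast.While, ast.BoolOp,
                             ast.ExceptHandler)):
            decisions += 1  # control-flow branch point

    n1 = len(set(operators)) or 1  # distinct operators
    n2 = len(set(operands)) or 1   # distinct operands
    N2 = len(operands)             # total operand occurrences
    halstead_difficulty = (n1 / 2) * (N2 / n2)
    cyclomatic = decisions + 1     # McCabe: branches + 1

    return halstead_difficulty * cyclomatic

print(logical_complexity(
    "x = 1\nfor i in range(3):\n    if i % 2 == 0:\n        x += i"))
```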

They also provide a method for automatically synthesizing and stratifying data that can generate samples with the strongest capacity for reasoning and exclude the rest. They apply this approach in two settings: (1) guiding instruction generation for tasks requiring mathematical reasoning, and (2) filtering code data for code-generation tasks; a sketch of the filtering step follows below. Their proposed strategy outperforms baseline models on mathematical reasoning and demonstrates success on code-generation challenges.
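A hedged sketch of the stratify-and-filter idea: score every candidate training sample with a complexity function and keep only a middle band, dropping both trivially simple and overly complex code. The score_fn argument and the quantile thresholds below are hypothetical placeholders; the paper determines the useful complexity band empirically.

```python
def stratify(samples, score_fn, low=0.25, high=0.75):
    """Keep samples whose complexity rank falls between the `low` and
    `high` quantiles. `score_fn` could be, e.g., the logical_complexity
    sketch above; the thresholds here are placeholders, not the paper's."""
    ranked = sorted(samples, key=score_fn)
    lo, hi = int(low * len(ranked)), int(high * len(ranked))
    return ranked[lo:hi]

# Hypothetical usage: filter code rationales to a mid-complexity band.
# kept = stratify(code_rationales, logical_complexity)
```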

The contributions of this publication are:

• They propose CIRS, a novel approach to measuring reasoning complexity for code data. By analyzing code data from both logical and structural angles, their method can precisely measure the relationship between code complexity and reasoning capacity.

• They conduct an empirical analysis of the effects of different complexity levels, identifying the optimal level of code complexity that LLMs can learn from as the key determinant of program-of-thought prompting’s reasoning gains.

• They design an auto-synthesizing and stratifying algorithm and apply their method to code data filtering and instruction generation for tasks requiring mathematical reasoning. Extensive results support the viability of their proposed perspective.

Check out the Paper and GitHub link. All credit for this research goes to the researchers on this project.
