
Google DeepMind Researchers Propose Optimization by PROmpting (OPRO): Large Language Models as Optimizers

by Tanya Malhotra

With constant advancements in the field of Artificial Intelligence, its subfields, including Natural Language Processing, Natural Language Generation, Natural Language Understanding, and Computer Vision, are growing rapidly in popularity. Large language models (LLMs), which have recently attracted a great deal of attention, are now being explored as optimizers: their capacity for natural language comprehension is harnessed to enhance optimization procedures. Optimization has practical implications across many industries and contexts, and derivative-based methods have historically handled a wide variety of such problems well.

These methods come with a key limitation, however: in real-world circumstances, gradients are often unavailable, which makes many problems difficult to tackle. To address this, a team of researchers from Google DeepMind has introduced a unique approach called Optimization by PROmpting (OPRO). By using LLMs as optimizers, OPRO provides a straightforward yet powerful technique. Its main novelty is the use of everyday language to express optimization tasks, which makes the process simpler and more approachable.

OPRO begins with a natural language description of the optimization problem: the task is expressed in plain language rather than convoluted mathematical formulae, making it easier to comprehend. It then proceeds by iterative solution generation. At each optimization step, the LLM creates new candidate solutions from a given natural language prompt. Crucially, this prompt contains details of previously generated solutions and their associated objective values, and these past solutions serve as the starting point for further improvement.
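To make this loop concrete, here is a minimal Python sketch of the prompt construction and generation step. This is not the authors' code: `query_llm` is a hypothetical stand-in for any chat-completion API, and the prompt wording is illustrative.

```python
from typing import Callable, List, Tuple

def query_llm(prompt: str) -> str:
    """Hypothetical LLM call; swap in your provider's chat API."""
    raise NotImplementedError

def build_meta_prompt(task_description: str,
                      history: List[Tuple[str, float]]) -> str:
    # Embed past solutions sorted by ascending score so the best
    # appear last in the prompt, following the idea described above.
    trajectory = "\n".join(
        f"solution: {sol}  score: {score:.3f}"
        for sol, score in sorted(history, key=lambda p: p[1])
    )
    return (
        f"{task_description}\n\n"
        f"Below are previous solutions and their scores:\n{trajectory}\n\n"
        "Propose a new solution with a higher score. "
        "Reply with the solution only."
    )

def opro_step(task_description: str,
              history: List[Tuple[str, float]],
              evaluate: Callable[[str], float]) -> Tuple[str, float]:
    # One iteration: generate a candidate from the prompt containing
    # the solution history, then score it with an external evaluator.
    candidate = query_llm(build_meta_prompt(task_description, history)).strip()
    return candidate, evaluate(candidate)
```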

The newly generated solutions are then evaluated for performance or quality, and once assessed, they are folded back into the prompt for the following optimization step. As this iterative process proceeds, the solutions are progressively improved. Practical examples illustrate OPRO's effectiveness. First, OPRO was applied to two well-known optimization problems: linear regression and the traveling salesman problem. Both are prominent and serve as a standard for assessing the method's efficacy, and OPRO demonstrated its capacity to identify excellent solutions to them.
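As an illustration, the sketch below wires the hypothetical `opro_step` helper above into a toy linear regression task: each candidate solution is a `'w,b'` string, scored by negative mean squared error on synthetic data. The dataset, answer format, and scoring are assumptions made for demonstration, not the paper's exact setup.

```python
import numpy as np

# Synthetic data with ground truth w = 3, b = 1.
rng = np.random.default_rng(0)
xs = rng.uniform(-1.0, 1.0, size=50)
ys = 3.0 * xs + 1.0 + rng.normal(scale=0.1, size=50)

def evaluate(solution: str) -> float:
    """Parse a 'w,b' string and return negative MSE (higher is better)."""
    try:
        w, b = (float(v) for v in solution.split(","))
    except ValueError:
        return float("-inf")  # unparseable proposals rank worst
    return -float(np.mean((w * xs + b - ys) ** 2))

task = ("Find values w,b that minimize the mean squared error of "
        "y = w*x + b on a hidden dataset. Answer in the form 'w,b'.")
history = [("0.0,0.0", evaluate("0.0,0.0"))]
for _ in range(20):
    history.append(opro_step(task, history, evaluate))  # from the sketch above
print("best (w,b):", max(history, key=lambda p: p[1])[0])
```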

Second, OPRO has been applied to prompt optimization, going above and beyond specific optimization problems to optimizing prompts themselves. The goal here is to find instructions that increase a task's accuracy. This matters especially for natural language processing tasks, where the structure and content of the prompt have a big influence on the outcome.
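The same loop can be reused for this setting, as in the sketch below: each "solution" is itself an instruction string, scored by accuracy on a small training split. `answer_with_instruction` is a hypothetical scorer-LLM call, and the seed instruction and tiny training set are illustrative assumptions.

```python
from typing import List, Tuple

def answer_with_instruction(instruction: str, question: str) -> str:
    """Hypothetical: query a scorer LLM with `instruction` prepended."""
    raise NotImplementedError

def accuracy(instruction: str, train_set: List[Tuple[str, str]]) -> float:
    # Fraction of training questions answered correctly when the
    # candidate instruction is prepended to each question.
    hits = sum(answer_with_instruction(instruction, q).strip() == a
               for q, a in train_set)
    return hits / len(train_set)

task = ("Generate an instruction that, prepended to grade-school math "
        "word problems, makes a language model answer them correctly. "
        "Reply with the instruction only.")
train_set = [("Tom has 3 apples and buys 2 more. How many now?", "5")]

seed = "Let's think step by step."
history = [(seed, accuracy(seed, train_set))]
for _ in range(10):
    history.append(opro_step(task, history,
                             lambda ins: accuracy(ins, train_set)))
```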

The team has shown that OPRO-optimized prompts routinely outperform those crafted by humans, improving performance by up to 8% on the GSM8K benchmark and by up to an astonishing 50% on Big-Bench Hard tasks. This demonstrates the substantial potential of OPRO to improve optimization results.

In conclusion, OPRO presents a revolutionary method of optimization that makes use of large language models. By describing optimization tasks in natural language and repeatedly generating and refining solutions, OPRO proves effective at resolving common optimization problems and at improving prompts. The results indicate significant performance gains over conventional approaches, especially when gradient information is unavailable or difficult to obtain.

Check out the Paper. All Credit For This Research Goes To the Researchers on This Project. Also, don’t forget to join our 30k+ ML SubReddit, 40k+ Facebook Community, Discord Channel, and Email Newsletter, where we share the latest AI research news, cool AI projects, and more.

If you like our work, you will love our newsletter.

The post Google DeepMind Researchers Propose Optimization by PROmpting (OPRO): Large Language Models as Optimizers appeared first on MarkTechPost.

