
Researchers from Stanford and Microsoft Introduce Self-Improving AI: Leveraging GPT-4 to Elevate Scaffolding Program Performance

by Aneesh Tickoo

Almost any goal described in natural language can be optimized by querying a language model. However, a program that makes several organized calls to a language model can often produce outputs with higher objective values. The authors refer to these as “scaffolding” programs, and they are typically written (by people) in a programming language such as Python. Their key observation is that, for any given language model and any distribution over downstream tasks, designing a scaffolding program is itself an optimization problem. In this paper, researchers from Microsoft Research and Stanford University describe the Self-Taught Optimizer (STOP), a technique in which code that uses a language model to improve any given solution is applied recursively, yielding self-improvement.
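
To make the idea of a scaffolding program concrete, here is a minimal sketch (our illustration, not the paper’s code): instead of a single query, it makes several organized calls to the language model and keeps the candidate with the best objective value. The `query_lm` and `utility` functions are hypothetical placeholders for a model call and a scoring function.

```python
# Minimal sketch of a "scaffolding" program: several organized calls to
# a language model, returning the candidate with the highest objective
# value. `query_lm` and `utility` are hypothetical placeholders.

def best_of_n(task_description, query_lm, utility, n=5):
    candidates = [
        query_lm(f"Solve the following task:\n{task_description}")
        for _ in range(n)
    ]
    # Keep whichever candidate the objective scores highest.
    return max(candidates, key=utility)
```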

Their method starts with a seed “improver”: a scaffolding program that uses the language model to improve a candidate solution to a given task. As the system iterates, the model refines this improver program itself. To measure the effectiveness of this self-optimizing architecture, the authors evaluate it on a small selection of downstream algorithmic tasks. Their findings show that performance improves over successive iterations as the model applies its own self-improvement techniques. STOP thus demonstrates how language models can function as their own meta-optimizers. The authors also analyze the kinds of self-improvement strategies the model proposes (see Figure 1), how well those strategies transfer to downstream tasks, and whether the model is susceptible to unsafe self-improvement techniques.
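
The recursion is easiest to see in code. Below is a hedged sketch of the loop, assuming placeholder names rather than the authors’ implementation: the seed improver asks the language model for a better version of a program and keeps it only if a utility function scores it higher, and STOP then points the improver at the improver’s own source code.

```python
# Hedged sketch of STOP's recursive self-improvement (placeholder names,
# not the paper's code). The seed improver uses the language model to
# propose a better program, keeping it only if utility improves.

def improver(program_source, utility, query_lm):
    prompt = ("Improve the following program so that it achieves a higher "
              "utility. Return only the improved code.\n\n" + program_source)
    candidate = query_lm(prompt)
    return candidate if utility(candidate) > utility(program_source) else program_source

def stop_loop(seed_improver_source, meta_utility, query_lm, rounds=3):
    # Each round, the improver rewrites the improver itself; in the full
    # system, the returned source would be executed so the improved
    # improver performs the next round.
    current = seed_improver_source
    for _ in range(rounds):
        current = improver(current, meta_utility, query_lm)
    return current
```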

Figure 1: Examples of self-improvement techniques proposed and used by GPT-4. Each technique is then used as scaffolding to revise arbitrary code, including the scaffolding code itself.

Because the underlying language model is left unaltered, the authors call this setting recursively self-improving code generation: it is inspired by, but is not fully, a Recursively Self-Improving (RSI) system. Researchers formalized the concept of RSI at least 50 years ago, but that work focused on building systems that were more capable in general and assumed the model could improve every aspect of its own code. This research is a modest step in that direction, since it considers only the model’s capacity to improve the scaffold that invokes it iteratively. The study is the first to state the RSI-code-generation problem in a mathematically well-defined way.
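
As a rough paraphrase of what such a formulation looks like (our notation, an assumption rather than the paper’s exact definition): given a distribution over tasks, each consisting of a utility function and an initial solution, the goal is to find an improver program that maximizes the expected utility of the solutions it returns.

```latex
% Paraphrased formalization (our notation, not necessarily the paper's):
% the "meta-utility" of an improver I is its expected downstream utility
% over a task distribution D of (utility, solution) pairs, and
% RSI-code generation seeks
\[
  I^{\star} \;=\; \arg\max_{I}\; \mathbb{E}_{(u,\, s)\,\sim\, \mathcal{D}}
  \bigl[\, u\bigl(I(u,\, s)\bigr) \,\bigr].
\]
```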

The authors then design and evaluate STOP to illustrate the potential of RSI-code generation, demonstrating improvements across a range of downstream tasks. Figure 1 shows some of the intriguing and useful scaffolds STOP proposes when using a version of GPT-4 trained on data only up to 2021, well before the debut of most scaffolding systems. Additional experiments track how often the model attempts to disable a sandbox flag. Finally, the authors address concerns about the ethical development of such technology.

The main contributions of this work are:

Formulating a meta-optimization strategy where a scaffolding system recursively improves itself.

Demonstrating that this system can successfully improve itself recursively using a modern language model (GPT-4 in particular).

Examining the self-improvement techniques the model proposes and implements, including the ways it circumvents safety measures such as a sandbox.

Check out the Paper. All credit for this research goes to the researchers on this project.


