Most of us have used generative models such as ChatGPT at least once. If you still haven’t, you should; hands-on experience will make what we discuss today much easier to follow. While the results these large language models produce often seem appropriate, and frankly more realistic and natural than expected, there are still a few things we tend to overlook that can matter a great deal if we use these models as a source of ground truth or as a reference elsewhere.
The paragraph above raises an obvious question: what is wrong when nothing seems to be at fault?
Nothing is specifically wrong, but there are a few questions we need to ask when using these models, or at least when reusing their outputs elsewhere:
What is the source of ground truth for these models? Where do they source their information from? It has to come from somewhere.
What about the bias? Are these models biased? And, if so, can we estimate this bias? Can we counter it?
What are the alternatives to the model you are using, and what if those alternatives perform better in certain fact-checking scenarios?
These are the exact issues that Daniel Balsam and the team have tackled with their project surv_ai.
Surv_ai is a large-language-model framework designed for multi-agent modeling. It enables large language models to be used as engines to enhance the quality of research, covering bias estimation, hypothesis testing, and comparative analysis in a more effective and efficient way, all packed under one hood.
To understand what it does, it helps to understand the core philosophy of the approach. The framework was inspired by a common predictive analytics technique called bagging (bootstrap aggregating), a classic ensemble method. The idea is that instead of a single learner trained on all of the available information, many weak learners, each with limited information, often perform better in aggregate and yield higher-quality net results.
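To make the bagging idea concrete, here is a minimal toy sketch (not surv_ai code): many deliberately weak learners, each seeing only a random bootstrap sample of the data, vote on a prediction, and the majority vote is usually more reliable than any single learner.

```python
import random

def bagging_predict(train, point, n_learners=25, sample_frac=0.6):
    """Toy bootstrap aggregating: many weak 1-nearest-neighbour learners,
    each fit on a random subsample, vote on the label for `point`."""
    votes = []
    for _ in range(n_learners):
        # Each weak learner only sees a bootstrap sample of the data.
        sample = random.choices(train, k=int(len(train) * sample_frac))
        # The weak learner predicts the label of its nearest sampled point.
        nearest = min(sample, key=lambda xy: abs(xy[0] - point))
        votes.append(nearest[1])
    # Aggregate: majority vote across all weak learners.
    return max(set(votes), key=votes.count)

# Labelled 1-D points: values below 5 are "low", the rest are "high".
data = [(x, "low" if x < 5 else "high") for x in range(10)]
print(bagging_predict(data, 2.2))  # almost always "low"
```

Any single learner here can be wrong when its sample happens to miss the nearby points, but the aggregated vote is far more stable, which is the property surv_ai borrows for its agents.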
Similarly, multi-agent modeling involves generating multiple statistical models from the actions of numerous agents. In surv_ai's case, these models are built by agents that query and process text from a data corpus. The agents then reason about and test a hypothesis (in simple terms, whatever you have asked them to verify or give an opinion on) and generate a suitable response.
Because large language models are stochastic, individual data points can vary from run to run. This variance can be countered by increasing the number of agents employed.
Surv_ai offers two approaches a user can choose between depending on their requirements. The simpler one, called a Survey, produces multi-agent data points: a Survey takes a statement as input and returns the percentage of agents that agree with it.
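The Survey mechanism can be sketched in a few lines. The code below is a toy illustration of the idea only; the function and agent here are hypothetical stand-ins, not surv_ai's actual API, and the mock agent simply flips a biased coin where a real agent would query an LLM over a data corpus.

```python
import random

def toy_survey(statement, agent, n_agents=20):
    """Toy sketch of the Survey idea: poll many independent agents on a
    statement and report the percentage that agree.

    `agent` is a callable returning True/False for one agent's verdict; in
    surv_ai itself each agent queries an LLM grounded in a text corpus.
    """
    verdicts = [agent(statement) for _ in range(n_agents)]
    return 100.0 * sum(verdicts) / len(verdicts)

# Mock agent standing in for a stochastic LLM call: agrees ~70% of the time.
mock_agent = lambda statement: random.random() < 0.7

score = toy_survey("Interest rates will fall next quarter.", mock_agent)
print(f"{score:.0f}% of agents agree")
```

Raising `n_agents` is exactly the variance-reduction lever mentioned above: the agreement percentage of many noisy agents converges toward a stable value.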
The more complex implementation is called a Model, which can do everything a Survey can but with more control and nuance. A Model exposes many input parameters you can vary, letting you tune the precision of the results you wish to see.
These implementations let us approximate ground truth through the consolidated opinions of many agents. They can help us track and analyze how sentiment about a hypothesis changes over time, and they also let us estimate bias both in the source information and in the large language model itself.
Rapid advancements in large language models and generative engines are guaranteed to continue, and a multi-agent modeling framework like this looks promising and valuable for such use cases. As claimed, it could also serve as an indispensable tool for researchers investigating complex issues with many interacting factors. It will be interesting to see how the project evolves and adapts over time.
Check out the Project. Don’t forget to join our 22k+ ML SubReddit, Discord Channel, and Email Newsletter, where we share the latest AI research news, cool AI projects, and more. If you have any questions regarding the above article or if we missed anything, feel free to email us at Asif@marktechpost.com
The post Meet Surv_ai: An Open Source Framework for Modeling and Comparative Analysis Using AI Agents appeared first on MarkTechPost.