ChatGPT has made it remarkably easy to produce fluent text on a wide range of topics. But how good is that text, really? Language models are prone to factual errors and hallucinations, so knowing whether such tools have been used to ghostwrite news articles or other informative text helps readers decide whether to trust a source. The advance of these models has also raised concerns about the authenticity and originality of written work, and many educational institutions have restricted the use of ChatGPT because content is so easy to produce with it.
LLMs like ChatGPT generate responses based on patterns and information in the vast amount of text they were trained on. They do not reproduce responses verbatim but generate new content by predicting the most suitable continuation for a given input. However, responses may draw upon and synthesize information from the training data, leading to similarities with existing content. Although LLMs aim for originality and accuracy, they are not infallible; users should exercise discretion and not rely solely on AI-generated content for critical decision-making or situations requiring expert advice.
Many detection frameworks exist, such as DetectGPT and GPTZero, for determining whether an LLM generated a piece of content. However, these frameworks' performance falters on datasets they were not originally evaluated on. Researchers from UC Berkeley present Ghostbuster, a detection method based on structured search and linear classification.
Ghostbuster uses a three-stage training process: probability computation, feature selection, and classifier training. First, it converts each document into a series of vectors by computing per-token probabilities under a series of language models. Then it selects features by running a structured search over a space of vector and scalar functions that combine these probabilities: it defines a set of operations for combining the probability vectors and runs forward feature selection over the results. Finally, it trains a simple classifier on the best probability-based features along with some additional manually selected features.
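The forward feature selection step can be illustrated with a short sketch. This is not Ghostbuster's actual implementation; it is a minimal, generic version of greedy forward selection, assuming scikit-learn is available and using a logistic regression classifier with cross-validated accuracy as the selection criterion:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def forward_feature_selection(X, y, max_features=10):
    """Greedy forward selection: repeatedly add the candidate feature
    that most improves cross-validated accuracy, stopping when no
    candidate improves the score (hypothetical sketch, not the
    paper's exact procedure)."""
    selected, remaining = [], list(range(X.shape[1]))
    best_score = 0.0
    while remaining and len(selected) < max_features:
        scores = []
        for f in remaining:
            cols = selected + [f]
            clf = LogisticRegression(max_iter=1000)
            score = cross_val_score(clf, X[:, cols], y, cv=3).mean()
            scores.append((score, f))
        score, f = max(scores)
        if score <= best_score:
            break  # no remaining feature helps; stop early
        best_score = score
        selected.append(f)
        remaining.remove(f)
    return selected
```

In Ghostbuster's case, the candidate features are not raw columns but scalar functions produced by the structured search over combinations of per-token probability vectors; the greedy add-one-at-a-time loop is the same idea.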
Ghostbuster's classifiers are trained on combinations of the probability-based features chosen through structured search and seven additional features based on word length and the largest token probabilities. These extra features are intended to capture qualitative heuristics observed about AI-generated text.
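To make the handcrafted-feature idea concrete, here is a hypothetical feature extractor in the spirit of those described, built on word-length statistics and the largest per-token probabilities. The specific statistics chosen here are illustrative assumptions, not the paper's actual seven features:

```python
import numpy as np

def handcrafted_features(tokens, token_probs):
    """Illustrative qualitative features for a document:
    word-length statistics plus the largest token probabilities.
    (Assumed feature set for demonstration only.)"""
    lengths = np.array([len(t) for t in tokens], dtype=float)
    probs = np.sort(np.asarray(token_probs, dtype=float))[::-1]
    return np.array([
        lengths.mean(),      # average token length
        lengths.max(),       # longest token in the document
        probs[:3].mean(),    # mean of the 3 largest token probabilities
        probs[0],            # single highest token probability
    ])
```

A vector like this would be concatenated with the probability-based features found by structured search before being fed to the final linear classifier.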
Ghostbuster's performance gains over previous models are robust to the similarity between the training and testing datasets. Ghostbuster achieved 97.0 F1 averaged across all conditions, outperforming DetectGPT by 39.6 F1 and GPTZero by 7.5 F1. It also outperformed the RoBERTa baseline on all domains except out-of-domain creative writing, and RoBERTa's out-of-domain performance was much worse overall. The F1 score is a metric commonly used to evaluate classification models; it combines precision and recall into a single value and is particularly useful when dealing with imbalanced datasets.
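For readers unfamiliar with the metric, F1 is the harmonic mean of precision and recall, computed from true positives (tp), false positives (fp), and false negatives (fn):

```python
def f1_score(tp, fp, fn):
    """F1 is the harmonic mean of precision and recall:
    F1 = 2 * P * R / (P + R)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# A detector with 90 true positives, 10 false positives, and
# 10 false negatives has precision = recall = 0.9, so F1 = 0.9.
```

Because the harmonic mean is pulled toward the smaller of the two values, a model cannot score well on F1 by maximizing precision at the expense of recall or vice versa, which is why it is preferred over raw accuracy on imbalanced data.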
Check out the Paper and Blog Article. All credit for this research goes to the researchers of this project. Also, don’t forget to join our 33k+ ML SubReddit, 41k+ Facebook Community, Discord Channel, and Email Newsletter, where we share the latest AI research news, cool AI projects, and more.
If you like our work, you will love our newsletter.
The post UC Berkeley Researchers Introduce Ghostbuster: A SOTA AI Method for Detecting LLM-Generated Text appeared first on MarkTechPost.