A New Artificial Intelligence Research From Stanford Shows How Explanations Can Reduce Overreliance on AI Systems During Decision-Making

by Khushboo Gupta

The recent boom in artificial intelligence (AI) is closely tied to how much it has improved everyday life by performing tasks faster and with less effort. Hardly any field today goes without AI: it is everywhere, from the agents behind voice assistants such as Amazon Echo and Google Home to the machine learning models that predict protein structure. So it seems reasonable to believe that a human working with an AI system will produce better decisions than either would make acting alone. Is that actually the case, though?

Previous studies have demonstrated that this is not always the case. AI does not always produce the right answer, and these systems must be retrained to correct biases and other issues. A related phenomenon that threatens the effectiveness of human-AI decision-making teams is AI overreliance: people defer to the AI and often accept incorrect predictions without checking whether they are right. This can be quite harmful in critical tasks such as detecting bank fraud or delivering medical diagnoses. Researchers have also shown that explainable AI, in which a model explains why it made a certain decision instead of only providing a prediction, does not reduce this overreliance. Some researchers have even claimed that cognitive biases or uncalibrated trust are the root cause of overreliance, attributing it to the inevitable nature of human cognition.

Yet these findings do not entirely settle whether AI explanations can decrease overreliance. To explore this further, a team of researchers at Stanford University's Human-Centered Artificial Intelligence (HAI) lab argued that people strategically choose whether or not to engage with an AI explanation, and demonstrated that there are situations in which explanations can make people less overreliant. According to their paper, individuals are less likely to lean on AI predictions when the accompanying explanations are easier to understand than the task at hand and when the benefit of engaging with the task is larger (for example, a financial reward). They also showed that overreliance on AI can be considerably reduced when the focus is on getting people to engage with the explanation rather than simply having the AI supply one.

To put their theory to the test, the team formalized this strategic decision in a cost-benefit framework, in which the costs and benefits of actively engaging with the task are weighed against the costs and benefits of relying on the AI. They asked online crowdworkers to work with an AI to solve maze challenges at three distinct levels of difficulty. The AI model offered the solution along with either no explanation or one of several degrees of justification, ranging from a single instruction for the next step to turn-by-turn directions for exiting the entire maze. The results showed that costs, such as task difficulty and explanation difficulty, and benefits, such as monetary compensation, substantially influenced overreliance. For complex tasks where the AI supplied step-by-step directions, overreliance was not reduced at all, because deciphering the generated explanations was just as challenging as clearing the maze alone. Moreover, when it was simple to escape the maze on one's own, most explanations had no impact on overreliance.
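To make this framing concrete, here is a minimal sketch of how such a cost-benefit comparison could be modeled in code. The data class, function names, and the specific effort and reward values below are illustrative assumptions for this article, not the authors' actual model or code.

```python
# Minimal, illustrative sketch of the cost-benefit framing described above.
# All names and numbers are hypothetical; this is not the paper's model.
from dataclasses import dataclass

@dataclass
class TaskContext:
    task_effort: float         # effort to solve the maze yourself
    explanation_effort: float  # effort to verify the AI's explanation
    reward_correct: float      # payoff for working the task out yourself
    reward_defer: float        # payoff for simply accepting the AI's answer

def relies_on_ai(ctx: TaskContext) -> bool:
    """Return True if blindly relying on the AI yields more net benefit
    than engaging (solving the task or verifying the explanation,
    whichever costs less)."""
    engage_cost = min(ctx.task_effort, ctx.explanation_effort)
    net_engage = ctx.reward_correct - engage_cost
    net_rely = ctx.reward_defer   # no verification effort is spent
    return net_rely > net_engage

# Hard maze but easy-to-check explanation: engagement pays off, so this
# toy model predicts less overreliance, matching the paper's qualitative finding.
print(relies_on_ai(TaskContext(task_effort=8.0, explanation_effort=2.0,
                               reward_correct=5.0, reward_defer=3.0)))  # False
```

In this toy model, a person relies on the AI only when accepting its answer yields more net benefit than the cheaper of solving the task or verifying the explanation, which mirrors the paper's argument that easier-to-verify explanations and higher rewards should reduce overreliance.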

The team concluded that explanations can help prevent overreliance when the task is challenging and the explanations are clear. But when the task and the explanations are both difficult, or both simple, explanations have little effect. When tasks are easy, explanations matter little because people can just as readily complete the task themselves rather than depend on explanations to reach a conclusion. When tasks are complex, people face two options: complete the task manually or examine the AI's explanations, which are frequently just as complicated. The main cause is that few explainability techniques produce explanations that take much less effort to verify than doing the task manually, so it is not surprising that people tend to trust the AI's judgment without questioning it or consulting an explanation.

As an additional experiment, the researchers also brought monetary benefit into the equation. They offered crowdworkers the choice of working independently through mazes of varying difficulty for a sum of money, or taking less money in exchange for assistance from an AI, either without explanation or with complicated turn-by-turn directions. The findings showed that workers value AI assistance more when the task is challenging and prefer a straightforward explanation to a complex one. They also found that overreliance decreases as the benefit of engaging with the task increases (in this case, the financial reward).

The Stanford researchers have high hopes that their discovery will provide some solace to academics who have been perplexed by the fact that explanations don’t lessen overreliance. Additionally, they wish to inspire explainable AI researchers with their work by providing them with a compelling argument for enhancing and streamlining AI explanations.

Check out the Paper and Stanford Article. All Credit For This Research Goes To the Researchers on This Project. Also, don’t forget to join our 16k+ ML SubReddit, Discord Channel, and Email Newsletter, where we share the latest AI research news, cool AI projects, and more.
