
Amazon Researchers Introduce Fortuna: An AI Library for Uncertainty Quantification in Deep Learning

by Tanya Malhotra

The recent developments in Artificial Intelligence and Machine Learning have made everyone’s lives easier. With their incredible capabilities, AI and ML are making their way into every industry and solving problems. A key component of Machine Learning is predictive uncertainty, which makes it possible to judge how much a model’s predictions can be trusted. Estimating this uncertainty correctly is essential for building ML systems that are reliable and safe.

Overconfidence is a prevalent issue, particularly in deep neural networks. A model is overconfident when it assigns a class a substantially higher probability than its actual accuracy warrants. Because these predictions feed into real-world judgements and actions, overconfidence is a genuine concern.

A number of approaches for estimating and calibrating uncertainty in ML have been developed, including Bayesian inference, conformal prediction, and temperature scaling. Although these methods exist, putting them into practice remains a challenge: many open-source libraries provide implementations of individual techniques, or generic probabilistic programming languages, but there is no cohesive framework supporting a broad spectrum of state-of-the-art methods.
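To make the calibration idea concrete, here is a minimal sketch of temperature scaling, one of the methods mentioned above. It is written in plain NumPy/SciPy, and the function and variable names are illustrative assumptions rather than part of Fortuna’s API.

# Minimal sketch of temperature scaling; illustrative only, not Fortuna's API.
import numpy as np
from scipy.optimize import minimize_scalar

def softmax(logits, T=1.0):
    z = logits / T
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def fit_temperature(val_logits, val_labels):
    """Find the scalar temperature T > 0 that minimises the negative
    log-likelihood of the validation labels under softmax(logits / T)."""
    def nll(T):
        probs = softmax(val_logits, T)
        return -np.log(probs[np.arange(len(val_labels)), val_labels] + 1e-12).mean()
    return minimize_scalar(nll, bounds=(0.05, 20.0), method="bounded").x

# Usage: rescale test-time probabilities with the fitted temperature.
# T = fit_temperature(val_logits, val_labels)
# calibrated_test_probs = softmax(test_logits, T)

A single scalar fitted on held-out data is often enough to bring a network’s confidence scores closer to its observed accuracy, which is exactly the kind of post-hoc recalibration a unified library aims to make routine.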

To overcome these challenges, a team of researchers has presented Fortuna, an open-source uncertainty quantification library. Fortuna integrates modern, scalable techniques from the literature and makes them available to users through a consistent, intuitive interface. Its main objective is to make sophisticated uncertainty quantification methods easier to apply in regression and classification applications.

The team has highlighted two primary features of Fortuna that greatly improve uncertainty quantification in deep learning.

Calibration techniques: Fortuna supports a number of calibration tools, one of which is conformal prediction. Conformal prediction can be applied to any pre-trained neural network to produce reliable uncertainty estimates, aligning the model’s confidence scores with the actual accuracy of its predictions. This is extremely helpful because it lets users discern the instances in which the model’s predictions are dependable from those in which they are not. The team gives examples such as a doctor deciding whether to trust an AI system’s diagnosis, or a self-driving car judging whether its interpretation of its environment is reliable.
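As a rough illustration of how conformal prediction sits on top of an already trained classifier, here is a generic split-conformal sketch in plain NumPy. It is not Fortuna’s interface, and the variable names are hypothetical.

# Minimal sketch of split conformal prediction for classification;
# illustrative only, not Fortuna's API.
import numpy as np

def conformal_sets(val_probs, val_labels, test_probs, error=0.05):
    """Return a prediction set per test point that contains the true class
    with probability at least 1 - error (marginal coverage guarantee)."""
    n = len(val_labels)
    # Nonconformity score: one minus the probability assigned to the true class.
    scores = 1.0 - val_probs[np.arange(n), val_labels]
    # Finite-sample-corrected quantile of the validation scores.
    q_level = np.ceil((n + 1) * (1 - error)) / n
    q_hat = np.quantile(scores, min(q_level, 1.0))
    # Keep every class whose score falls below the threshold.
    return [np.where(1.0 - p <= q_hat)[0] for p in test_probs]

# Usage with any pre-trained classifier:
# sets = conformal_sets(val_probs, val_labels, test_probs, error=0.1)
# A large prediction set signals that the model is uncertain about that input.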

Scalable Bayesian Inference: In addition to calibration procedures, Fortuna provides scalable Bayesian inference tools, which can be applied when training deep neural networks from scratch. Bayesian inference is a probabilistic approach that incorporates uncertainty over both the model parameters and the predictions. By using scalable Bayesian inference, users can improve a model’s overall accuracy as well as its ability to quantify uncertainty.
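The sketch below shows, in plain NumPy, how approximate Bayesian inference turns into predictive uncertainty: predictions from several sampled sets of weights (for example, an ensemble or posterior samples) are averaged, and the spread is summarised by entropy. It is a generic illustration of the idea, not Fortuna’s API, and the names are assumptions.

# Minimal sketch of a Monte Carlo posterior predictive; illustrative only.
import numpy as np

def posterior_predictive(prob_samples):
    """prob_samples: array of shape (num_samples, num_inputs, num_classes),
    each slice holding softmax outputs for one sampled set of weights."""
    mean_probs = prob_samples.mean(axis=0)  # posterior predictive mean
    entropy = -(mean_probs * np.log(mean_probs + 1e-12)).sum(axis=-1)  # predictive uncertainty
    return mean_probs, entropy

# Usage: stack the softmax outputs of, say, several independently trained
# networks, then flag high-entropy inputs as uncertain.
# mean_probs, entropy = posterior_predictive(np.stack([m.predict(x) for m in models]))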

In conclusion, Fortuna offers a consistent framework for measuring and calibrating uncertainty in model predictions, making it a useful addition to the field of Machine Learning.

Check out the Paper. All credit for this research goes to the researchers of this project.



