Over the past few years, several technological breakthroughs in Artificial Intelligence (AI) have profoundly impacted numerous industries and sectors. AI has significant potential to revolutionize healthcare, transform how businesses operate, and change how individuals interact with technology. As AI adoption continues to grow, however, so does the need for security measures that protect both AI systems and the data they rely on. AI systems depend heavily on training data, which may contain sensitive and personal information. It is therefore crucial for researchers and developers to design robust security measures that prevent attacks on AI systems and ensure that sensitive information is not stolen.
In this context, the security of AI applications has become a major research topic, since it directly affects governments, businesses, and other institutions. Contributing to this wave of research, a team from the Cybersecurity Department at the University of Surrey has created software that can verify how much information an AI system has gleaned from an organization's database. The software can also determine whether an AI system has discovered flaws in software code that could be exploited for malicious purposes; for instance, it can check whether an AI chess player has become unbeatable because of a bug in the code. One of the major use cases the Surrey researchers envision is incorporating the software into a company's online security protocol, so that a business can better determine whether an AI can access its sensitive data. The team's verification work also won the best-paper award at the esteemed 25th International Symposium on Formal Methods.
As AI becomes woven into daily life, these systems must increasingly interact with other AI systems and with humans in complex, dynamic environments. Self-driving cars, for instance, must take input from other vehicles and from sensors in order to make decisions while navigating traffic, while some businesses deploy robots for tasks that require them to work alongside humans. In such settings, securing AI systems is particularly challenging, because the interactions between systems and humans can introduce new vulnerabilities. The first step toward solving this problem is determining what an AI system actually knows. This has been a fascinating research question for the AI community for years, and the researchers at the University of Surrey have now made a significant advance on it.
The verification software developed by the Surrey researchers can determine how much AI systems learn from their interactions and whether they know enough, or too much, to compromise privacy. To specify precisely what an AI system knows, the researchers defined a "program epistemic" logic, which also supports reasoning about future events. By using this one-of-a-kind software to evaluate what an AI has learned, the researchers hope businesses will be able to adopt AI into their systems more securely. The University of Surrey's research represents an important step toward ensuring the confidentiality and integrity of training datasets, and their efforts should accelerate research into trustworthy and responsible AI systems.
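To give a flavor of the kind of reasoning an epistemic logic captures (this is a generic, minimal sketch of standard possible-worlds semantics, not the Surrey tool or its actual logic; all names here are illustrative), an agent's knowledge can be modeled as the set of worlds it cannot yet rule out: after each observation, inconsistent worlds are discarded, and the agent "knows" a fact only if that fact holds in every remaining world.

```python
# Minimal possible-worlds sketch of epistemic reasoning (illustrative only,
# not the University of Surrey verification software).

def observe(worlds, observation, value):
    """Keep only the worlds consistent with the observed value."""
    return {w for w in worlds if observation(w) == value}

def knows(worlds, fact):
    """An agent knows `fact` iff it holds in every world it still considers possible."""
    return all(fact(w) for w in worlds)

# Worlds: possible values of a secret (here, just the integers 0-7).
worlds = set(range(8))

# The agent observes the secret's parity: it is even.
worlds = observe(worlds, lambda w: w % 2, 0)  # remaining worlds: {0, 2, 4, 6}

print(knows(worlds, lambda w: w % 2 == 0))  # True: the parity is now known
print(knows(worlds, lambda w: w == 4))      # False: the exact value is still hidden
```

In this style of model, a verifier can ask whether an agent's accumulated observations are enough to pin down a sensitive fact, which is the flavor of question the Surrey software is designed to answer about real AI systems.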
Check out the Paper and Reference. All credit for this research goes to the researchers on this project.
The post A New Software Developed by the University of Surrey Researchers can Verify How Much Information AI Actually Knows appeared first on MarkTechPost.