Did you ever imagine that a person who has lost the ability to speak and can no longer utter words clearly might still be able to communicate what they want to say?
This has come true thanks to remarkable advances in Artificial Intelligence and Machine Learning. New research is constantly being carried out, and new algorithms are being developed to meet different human needs. Researchers at Stanford University have built one such interface: a brain-computer interface that can help a person with a condition such as paralysis convey their thoughts and communicate at 62 words per minute.
A Brain-Computer Interface (BCI) is a device that lets users interact with computers through brain activity alone. It is a direct pathway through which the electrical activity of the brain communicates with an external device, most often a computer or a robotic limb. A BCI measures the activity of the central nervous system (CNS) and converts it into an artificial output that replaces or augments the natural CNS output, thereby changing how the CNS interacts with its external environment.
The Stanford researchers built the Brain-Computer Interface around a Recurrent Neural Network (RNN), making it capable of decoding speech from signals captured in a patient's brain. Compared with previously existing BCI approaches to speech decoding, this method lets a person communicate at 62 words per minute, 3.4 times faster than the prior state of the art. With Artificial Intelligence stepping into every field, including healthcare and medicine, this new speech-to-text interface can help people who are unable to produce clear speech communicate effectively.
The researchers shared that the system was demonstrated in a person who lost intelligible speech due to amyotrophic lateral sclerosis (ALS). At the heart of the system is an RNN, specifically a Gated Recurrent Unit (GRU) model, trained for this task. Intracortical microelectrode arrays implanted in the patient's brain captured neural activity as she attempted to speak; these arrays record signals at single-neuron resolution. The recorded signals were then fed to the GRU model to decode the attempted speech.
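The article does not include the decoder itself, but the gating mechanism that makes a GRU suited to sequential neural data can be sketched in plain Python. Everything below — the weight initialization, the feature dimensions, the fake input sequence — is illustrative, not taken from the Stanford model:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def matvec(W, v):
    # W: list of rows, v: vector -> matrix-vector product
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def vadd(a, b):
    return [x + y for x, y in zip(a, b)]

def vmul(a, b):
    return [x * y for x, y in zip(a, b)]

def gru_step(x, h, p):
    """One GRU time step.

    The update gate z decides how much of the old hidden state h to keep;
    the reset gate r decides how much of h feeds the candidate state.
    """
    z = [sigmoid(v) for v in vadd(vadd(matvec(p["Wz"], x), matvec(p["Uz"], h)), p["bz"])]
    r = [sigmoid(v) for v in vadd(vadd(matvec(p["Wr"], x), matvec(p["Ur"], h)), p["br"])]
    h_cand = [math.tanh(v) for v in
              vadd(vadd(matvec(p["Wh"], x), matvec(p["Uh"], vmul(r, h))), p["bh"])]
    # Element-wise convex combination of the old state and the candidate.
    return [(1 - zi) * hi + zi * ci for zi, hi, ci in zip(z, h, h_cand)]

def random_params(n_in, n_hid, rng):
    def mat(rows, cols):
        return [[rng.uniform(-0.1, 0.1) for _ in range(cols)] for _ in range(rows)]
    return {"Wz": mat(n_hid, n_in), "Uz": mat(n_hid, n_hid), "bz": [0.0] * n_hid,
            "Wr": mat(n_hid, n_in), "Ur": mat(n_hid, n_hid), "br": [0.0] * n_hid,
            "Wh": mat(n_hid, n_in), "Uh": mat(n_hid, n_hid), "bh": [0.0] * n_hid}

rng = random.Random(0)
n_in, n_hid = 4, 8  # e.g. 4 neural features per time bin (made-up sizes)
params = random_params(n_in, n_hid, rng)

h = [0.0] * n_hid
sequence = [[rng.gauss(0, 1) for _ in range(n_in)] for _ in range(20)]
for x in sequence:  # run the GRU over a fake feature sequence
    h = gru_step(x, h, params)
```

In a real decoder, each time step's input would be binned spike counts from the microelectrode arrays, and the hidden state would feed a classifier over speech units; here the loop only shows how the gates carry context forward across time bins.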
The researchers mentioned that when the RNN was trained on a limited vocabulary of 50 words, the BCI system achieved an error rate of 9.1 percent. When the vocabulary was expanded to 125,000 words, the error rate rose to 23.8%, and it improved to 17.4% when a language model was added to the decoder. In total, the team collected 10,850 sentences for training by showing the patient a few hundred sentences each day to attempt to speak. The microelectrodes captured the neural signals as the patient mouthed each sentence.
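The 9.1%, 23.8%, and 17.4% figures are word error rates, the standard metric for speech decoding. As a rough illustration (this is not the paper's evaluation code), word error rate is the word-level edit distance between the decoded sentence and the reference sentence, divided by the reference length:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level edit distance (substitutions + insertions + deletions)
    divided by the number of words in the reference."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # match / substitution
    return d[len(ref)][len(hyp)] / len(ref)

# One substitution (cat -> bat) and one deletion (the) over 6 reference words.
print(word_error_rate("the cat sat on the mat", "the bat sat on mat"))  # -> 0.333...
```

A language model lowers this metric by steering the decoder toward plausible word sequences, which is consistent with the drop from 23.8% to 17.4% reported above.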
This system is definitely a breakthrough in the field of BCIs, where a great deal of research focuses on deciphering brain activity. The development could greatly help patients with paralysis, stroke, and similar conditions. At 3.4 times the speed of currently existing approaches, this system can work wonders.
Check out the Paper. All Credit For This Research Goes To the Researchers on This Project. Also, don’t forget to join our 14k+ ML SubReddit, Discord Channel, and Email Newsletter, where we share the latest AI research news, cool AI projects, and more.
The post <strong>Stanford Researchers Develop An Incredible Brain-Computer Interface </strong>(<strong>BCI) System That Can Convert Speech-Related Neural Activity Into Text At 62 Words Per Minute</strong> appeared first on MarkTechPost.