Researchers at Google have undertaken the formidable task of enhancing the independence of individuals with visual impairments through Project Guideline. This initiative seeks to empower people who are blind or have low vision by leveraging on-device machine learning (ML) on Google Pixel phones, enabling them to walk or run independently. The project revolves around a waist-mounted phone, a guideline painted on a pedestrian pathway, and a combination of audio cues and obstacle detection that guides users safely through the physical world.
Project Guideline emerges as a groundbreaking solution for computer vision accessibility technology. Departing from conventional methods that often rely on external guides or guide animals, the project uses on-device ML tailored for Google Pixel phones. The researchers behind Project Guideline have devised a comprehensive method that employs ARCore to track the user's position and orientation, a segmentation model based on DeepLabV3+ to detect the guideline, and a monocular depth ML model to identify obstacles. This approach allows users to independently navigate outdoor paths marked with a painted line, a significant advance in assistive technology.
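To make that architecture concrete, here is a minimal C++ sketch of how such a per-frame flow could be organized. Every name in it (Frame, trackPose, segmentLine, estimateDepth) is an illustrative assumption rather than the actual Project Guideline API; the stubs simply mark where ARCore tracking and the two ML models would run.

```cpp
#include <cstdint>
#include <vector>

// Illustrative stand-ins for the components the article describes; the
// names and signatures are our assumptions, not the project's real API.
struct Frame {};                                   // camera image
struct Pose { float x = 0, y = 0, heading = 0; };  // from ARCore tracking
struct Mask { std::vector<uint8_t> px; };          // DeepLabV3+ line mask
struct Depth { std::vector<float> metres; };       // monocular depth map

Pose trackPose(const Frame&)      { return {}; }   // ARCore-style tracking
Mask segmentLine(const Frame&)    { return {}; }   // line segmentation
Depth estimateDepth(const Frame&) { return {}; }   // obstacle depth

// Per-frame orchestration: track, segment, estimate depth, then hand the
// results to the navigation and audio stages (omitted here).
void onCameraFrame(const Frame& frame) {
    Pose pose   = trackPose(frame);
    Mask line   = segmentLine(frame);
    Depth depth = estimateDepth(frame);
    (void)pose; (void)line; (void)depth;  // consumed by the control system
}
```

In the real system each of these stages runs as an on-device model or ARCore call, and their outputs feed the navigation and audio stages described below.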
Delving into the intricacies of Project Guideline's technology reveals a sophisticated system at work. The core platform is written in C++ and integrates essential libraries such as MediaPipe. ARCore, a fundamental component, estimates the user's position and orientation as they traverse the designated path. Simultaneously, a segmentation model processes each camera frame, generating a binary mask that outlines the guideline. Points from this mask are projected into world coordinates using the camera pose and aggregated across frames into a 2D map of the guideline's trajectory, giving the system a stateful representation of the user's environment.
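As a rough illustration of that aggregation step, the sketch below projects guideline pixels from a binary mask onto the ground plane using the tracked pose and appends them to a world-frame map. The flat-ground, fixed metres-per-pixel projection is a simplifying assumption made for brevity; the actual system derives its geometry from the full camera intrinsics and ARCore poses.

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

// Illustrative types; the real Project Guideline code defines its own.
struct Pose { float x, y, heading; };   // world frame: metres, radians
struct Point2D { float x, y; };

// Accumulate guideline points into a world-frame 2D map. mask holds one
// byte per pixel (non-zero = guideline). The ground-plane projection is a
// crude approximation: image column -> lateral offset, row -> forward
// distance, both scaled by a constant metresPerPixel.
void accumulateLinePoints(const std::vector<uint8_t>& mask,
                          int width, int height,
                          const Pose& pose, float metresPerPixel,
                          std::vector<Point2D>& worldMap) {
    for (int v = 0; v < height; ++v) {
        for (int u = 0; u < width; ++u) {
            if (!mask[v * width + u]) continue;
            float lateral = (u - width / 2.0f) * metresPerPixel;
            float forward = (height - v) * metresPerPixel;
            // Rotate the camera-frame offset by the user's heading and
            // translate by their position to land in the world frame.
            float c = std::cos(pose.heading), s = std::sin(pose.heading);
            worldMap.push_back({pose.x + c * forward - s * lateral,
                                pose.y + s * forward + c * lateral});
        }
    }
}
```

Because points from many frames accumulate in the same world-frame map, the line's trajectory persists even when individual frames are noisy or the line briefly leaves the camera's view.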
The control system dynamically selects target points on the line, producing a navigation signal that accounts for the user's current position, velocity, and direction. This look-ahead approach filters out noise caused by irregular camera movements during activities like running, offering a more reliable user experience. Obstacle detection, powered by a depth model trained on a diverse dataset known as SANPO, adds an extra layer of safety. The model is adept at discerning the depth of various obstacles, including people, vehicles, and posts. As with the line segmentation output, the depth maps are converted into 3D point clouds, forming a comprehensive picture of the user's surroundings. The entire system is rounded out by a low-latency audio engine that delivers guidance cues in real time.
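The sketch below shows one plausible form such a controller could take: pick the mapped line point nearest a velocity-scaled look-ahead position, compute the signed lateral error, and convert it to a stereo pan for the audio cue. The look-ahead distance, the gain, and every name here are assumptions made for illustration, not the project's actual control law.

```cpp
#include <algorithm>
#include <cmath>
#include <limits>
#include <vector>

struct Point2D { float x, y; };
struct UserState { Point2D pos; Point2D vel; };  // world frame: m, m/s

// Map the user's deviation from the line to a stereo pan in [-1, 1]
// (-1 = cue hard left, +1 = cue hard right). Looking further ahead at
// higher speed is our guess at how camera jitter could be smoothed; the
// real control law may differ.
float steeringPan(const std::vector<Point2D>& lineMap,
                  const UserState& user) {
    if (lineMap.empty()) return 0.0f;  // no line detected yet

    float speed = std::hypot(user.vel.x, user.vel.y);
    float lookahead = 1.0f + 0.5f * speed;  // assumed: 1 m + 0.5 s of travel

    // Unit heading from velocity; fall back to +x when standing still.
    Point2D h = speed > 1e-3f
        ? Point2D{user.vel.x / speed, user.vel.y / speed}
        : Point2D{1.0f, 0.0f};
    Point2D ahead{user.pos.x + h.x * lookahead,
                  user.pos.y + h.y * lookahead};

    // Target = mapped line point nearest the look-ahead position.
    Point2D target = lineMap.front();
    float best = std::numeric_limits<float>::max();
    for (const Point2D& p : lineMap) {
        float d = std::hypot(p.x - ahead.x, p.y - ahead.y);
        if (d < best) { best = d; target = p; }
    }

    // Signed lateral error via a 2D cross product: positive means the
    // target lies to the user's right, so pan the cue right.
    float ex = target.x - user.pos.x, ey = target.y - user.pos.y;
    float lateral = ex * h.y - ey * h.x;
    return std::clamp(lateral / 2.0f, -1.0f, 1.0f);  // assumed: 2 m = full pan
}
```

Driving the audio cue from a target point ahead of the user, rather than from the raw per-frame line detection, is what keeps the signal stable when the camera bounces during a run.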
In conclusion, Project Guideline represents a transformative stride in computer vision accessibility. The researchers' meticulous approach addresses the challenges faced by individuals with visual impairments, offering a holistic solution that combines machine learning, augmented reality technology, and audio feedback. The decision to open-source Project Guideline further underscores a commitment to inclusivity and innovation. This initiative not only enhances users' autonomy but also sets a precedent for future advances in assistive technology. As technology evolves, Project Guideline serves as a beacon, illuminating the path toward a more accessible and inclusive future.
Check out the GitHub and Blog. All credit for this research goes to the researchers of this project.