Virtual and augmented reality solutions could significantly lower the barriers people with visual impairments currently encounter when accessing everyday services, mobility training, workplaces and education.
Extended reality (XR) technology largely revolves around offering immersive experiences through rich visuals and audio. Put on a headset and you're suddenly transported into a lush rainforest, soaring over the landscape on a paraglider, or diving into an entirely imaginary world. But the technology could also be put to other uses.
As researchers at the Nuremberg Institute of Technology in Germany found, advanced tactile representation of virtual objects, terrain and alerts, combined with realistic spatial audio for rooms and other virtual spaces, could significantly enhance the usefulness of XR solutions for people who are blind or visually impaired.
Visually impaired individuals grapple with various accessibility challenges on a daily basis, from navigating urban environments and buildings, to understanding educational materials that heavily rely on visual learning methods. Multisensory virtual reality solutions could lower those barriers.
Take, for instance, Canetroller, a recent Microsoft research project. It pairs detailed haptic feedback with rich spatial soundscapes so that someone with a visual impairment can explore a virtual environment using a simulated white cane.
Sweeping the virtual cane, a blind person can grasp the spatial relations and dimensions of the objects around them, equipping themselves to navigate the same environment in reality later. A multisensory solution of this kind could offer even more ways to learn orientation and mobility skills.
By giving computers a clear understanding of our environment, we allow them to translate it into different mediums. Equipped with these technologies, a blind person could navigate a city while wearing an augmented reality headset and clothing capable of providing tactile, physical feedback.
This gear could inform the wearer of their location based on GPS data, alert them to badly parked bicycles on the pavement using computer vision, identify an approaching bus by reading the line information printed on it, and even find an empty seat using lidar and computer vision solutions.
Many of these technologies already exist in various apps. However, to access all these features today, a visually impaired person would need to hold a smartphone, often pausing their journey to send images of their surroundings to a remote server, where an AI model processes them for hazard recognition. Consolidating these technologies into one wearable device could greatly enhance an individual's independence.
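To make the idea concrete, here is a minimal sketch in Python of how such a wearable might fuse detections from different sensors into prioritised tactile feedback. Everything here, from the class names to the thresholds and labels, is illustrative; it describes no real device or API.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """One object spotted by the wearable's sensors (hypothetical)."""
    label: str          # e.g. "bicycle", "bus", "pole"
    distance_m: float   # estimated distance from depth or lidar data
    bearing_deg: float  # -180..180, relative to the wearer's heading

def haptic_intensity(d: Detection, max_range_m: float = 10.0) -> float:
    """Closer obstacles produce stronger vibration, scaled to 0.0-1.0."""
    if d.distance_m >= max_range_m:
        return 0.0
    return round(1.0 - d.distance_m / max_range_m, 2)

def prioritise(detections: list[Detection]) -> list[Detection]:
    """Announce the nearest hazards first."""
    return sorted(detections, key=lambda d: d.distance_m)

if __name__ == "__main__":
    scene = [
        Detection("bus", 25.0, 10.0),
        Detection("bicycle", 2.0, -30.0),
        Detection("pole", 5.0, 5.0),
    ]
    for det in prioritise(scene):
        print(f"{det.label}: intensity {haptic_intensity(det)}")
```

The design choice worth noting is the separation of concerns: detection (from GPS, computer vision or lidar), prioritisation, and output rendering are independent stages, so the same pipeline could drive vibration motors, spatial audio cues, or both.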
For those with low vision, identifying obstacles often poses a challenge. Obstructions such as trees and poles can be difficult to notice, depending on the surrounding lighting, colours and background. An augmented reality solution could enhance the visual appearance of one's surroundings, drawing attention to hazards based on the wearer's specific type of low vision.
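One simple way to picture such an enhancement is a per-pixel contrast stretch that a hypothetical AR renderer might apply, with the gain tuned to the wearer's contrast sensitivity. The function and its parameters are assumptions for illustration, not part of any shipping system.

```python
def enhance_contrast(luminance: float, gain: float = 2.5, pivot: float = 0.5) -> float:
    """Stretch luminance values away from a mid-grey pivot (illustrative only).

    A higher `gain` exaggerates the difference between an obstacle and its
    background; the result is clamped to the displayable [0, 1] range.
    """
    return min(1.0, max(0.0, pivot + gain * (luminance - pivot)))

# Example: a grey pole at luminance 0.4 against a 0.5 background differs by
# only 0.1; after enhancement the gap widens, making the pole easier to spot.
```

In a real system the pivot and gain would be adapted to the individual, and enhancement might target only detected hazards rather than the whole scene.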
In a nutshell, the emerging technologies of virtual and augmented reality are primed to transform the everyday experiences of visually impaired individuals in unprecedented ways.
While many challenges remain, ongoing research, collaboration and innovation can take us towards an environment where technology becomes a seamless extension of one's senses, empowering visually impaired individuals to navigate the world independently and confidently.