The VISOR

Vision For The Future

The ViSOR project is an effort to develop a solution for people with low vision and complete blindness.

More than 3 million Americans have low vision (National Advisory Eye Council, 1998), and approximately 12 million are estimated to have some form of visual impairment that cannot be corrected by glasses (National Advisory Eye Council, 1998).

The devices currently in development are the RP ViSOR (a.k.a. the Noah series) and the ViSOR 2.0.

 

RP ViSOR

The RP ViSOR is designed to alleviate the effects of tunnel vision caused by retinitis pigmentosa, glaucoma, detached retinas, stroke, brain damage, or other visual trauma.

There is currently no cure for tunnel vision, and the most reliable aids so far have been the white cane and the guide dog. While effective, these solutions are decidedly low-tech. The RP ViSOR uses technology to condense the world down to the scale that people with tunnel vision can see. The RP ViSOR also provides solutions for patients who have trouble with glare and night blindness (nyctalopia).

 

ViSOR 2.0

The visual information simulated optical reflector (V.I.S.O.R.) is a device that translates real-world information into digital elements and sensory information in real time. In short, the V.I.S.O.R. allows users to perceive the surrounding environment through tonal representations and spoken words transmitted to the central nervous system via bone conduction, creating a new sense of “blind” sight and augmented reality.

How does it work?

In principle, the visor projects a narrow band of light outside the visible spectrum in a structured pattern onto three-dimensional surfaces; distortions in the pattern, registered from an offset perspective by a depth-sensing CMOS sensor, are used to build a geometric reconstruction of the surrounding environment. That information is translated into signals that the wearer can use to discern the location, size, shape, distance, and color of nearby objects. Object, facial, and text recognition run on open-source software, and the device includes a GPS locator. The resulting sound is passed to the user, who can use the assorted tones to identify complex objects and even read sentences.
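The depth-recovery step described above reduces, in its simplest form, to triangulation: a pattern feature projected from one position and observed from another shifts horizontally by an amount that depends on surface depth. The sketch below illustrates that geometry for an idealized calibrated projector–camera pair; the focal length, baseline, and disparity values are illustrative assumptions, not ViSOR specifications.

```python
# Minimal sketch of structured-light depth recovery via triangulation.
# Assumes a calibrated projector-camera pair: focal length f (in pixels)
# and baseline b (in metres) are known; 'disparity' is the horizontal
# shift (in pixels) of a projected pattern feature between its expected
# and observed position.

def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth (metres) of a surface point from observed pattern disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# A feature shifted by 40 px with f = 800 px and b = 0.1 m lies 2 m away.
print(depth_from_disparity(800.0, 0.1, 40.0))  # 2.0
```

Repeating this computation for every detected pattern feature yields the point cloud from which the geometric reconstruction is assembled.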

The VISOR is relatively easy to use – many people can identify objects after a “brief period of training”. It is hoped that over time, users will learn to interpret a large stream of information from the algorithm’s soundscape.
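The ViSOR’s actual soundscape algorithm is not published, so as a purely hypothetical illustration, the sketch below uses a mapping common in sensory-substitution research: scan a brightness grid left to right, with row mapped to pitch, column to time, and pixel value to loudness. Every parameter here is an assumption for illustration only.

```python
# Hypothetical sonification sketch: scan a small brightness grid left to
# right, mapping row -> pitch, column -> time step, brightness -> loudness.
# This mirrors common sensory-substitution schemes, not the ViSOR's own
# (unpublished) algorithm.

def grid_to_tones(grid, f_low=200.0, f_high=2000.0):
    """Return a list of (time_step, frequency_hz, amplitude) tone events."""
    rows = len(grid)
    events = []
    for col in range(len(grid[0])):
        for row in range(rows):
            level = grid[row][col]
            if level == 0:
                continue  # silent pixel
            # Higher rows map to higher pitch, on a logarithmic scale.
            frac = 1.0 - row / max(rows - 1, 1)
            freq = f_low * (f_high / f_low) ** frac
            events.append((col, freq, level))
    return events

# A bright pixel in the top-right corner becomes a loud, high, late tone.
events = grid_to_tones([[0, 1.0],
                        [0.5, 0]])
print(events)  # [(0, 200.0, 0.5), (1, 2000.0, 1.0)]
```

Learning to “hear” shapes under such a mapping is exactly the kind of skill the training period described above is meant to build.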

Perhaps the most intriguing part is that the sounds activate the otherwise dormant visual cortices of congenitally blind people. The visual cortex organizes data into two parallel pathways – the ventral occipito-temporal pathway (which deals with form, identity, and color) and the dorsal occipito-parietal pathway (which focuses on object location and coordinates visual data with motor function).

MRI scans showed that in blind people using the sensory device, these previously dormant pathways were active just as they would be with normal vision. For the blind population of the world, this next-generation technology could soon deliver massive benefits.

 

Any complications?

Reflective or transparent surfaces still raise difficulties. Shiny surfaces reflect the projected light either away from the camera or straight into its optics; in both cases, the dynamic range of the camera can be exceeded. Transparent or semi-transparent surfaces also cause major difficulties. In these cases, coating the surfaces with a thin opaque lacquer just for measuring purposes is a common practice. For measuring entirely reflective surfaces, the alternative method of fringe reflection has been implemented, and other optical techniques have been proposed for handling perfectly transparent and specular objects.

There have also been some issues when people lose their VISOR.

How are signals transferred to the user?

Sounds are conducted through the bone via transducers positioned on the zygomatic arch, temporal bone, or a bone anchor. The electromechanical transducer converts electrical signals into mechanical vibrations, sending sound to the inner ear by bone conduction.
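From the device electronics’ point of view, driving a bone-conduction transducer is like driving an ordinary speaker: the firmware produces an audio waveform, and the transducer turns it into skull vibration. As a minimal sketch under that assumption, the snippet below generates the PCM samples for one pure tone; the sample rate and frequency are illustrative, not ViSOR specifications.

```python
import math

# Sketch of the signal fed to a bone-conduction transducer: the device
# only needs to synthesize an ordinary audio waveform; the transducer
# converts it into mechanical vibration. Values are illustrative.

def sine_pcm(freq_hz: float, duration_s: float, sample_rate: int = 16000):
    """Generate mono PCM samples in [-1.0, 1.0] for one pure tone."""
    n = int(duration_s * sample_rate)
    return [math.sin(2 * math.pi * freq_hz * t / sample_rate) for t in range(n)]

samples = sine_pcm(440.0, 0.01)  # 10 ms of a 440 Hz tone -> 160 samples
```

In practice such samples would be streamed to a DAC driving the transducer, with the tone parameters supplied by the sonification stage.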

How long does it take to learn?

Our current research studies involve 10–12 hours of participation*. Within minutes of introduction, users may understand where in space stimulation arises (up, down, left, and right) and the direction of movement. Within an hour of practice, users can generally identify and reach for nearby objects, and point to and estimate the distance of objects out of reach. With additional training, subjects can identify and recognize landmark information when using the device in a mobile scenario.

What does the V.I.S.O.R. do?

The V.I.S.O.R. provides situational orientation, increased mobility, and object identification by enabling perception of visual and digital information in the form of sound, including color and commonly recognizable objects (e.g., numbers, words, etc.).

View from the V.I.S.O.R. at the BDNT

When these sensors work in concert, they produce a symphony of sonocular perception.
