This Sensory Substitution Device uses the camera to gather visual data and then uses a rather nifty computer algorithm to translate this data into sound. With a little practice, blind users can identify complex objects, and even read words.
The device is the invention of Hebrew University of Jerusalem's Dr. Amir Amedi, who you can see modeling it in the picture up top. Amedi says that with only a relatively brief period of training, users can learn how to interpret a ton of information from the "soundscapes" created by the computer algorithm, including the nature of complex everyday objects, the location and posture of people in a room, and even written letters and words.
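The article doesn't spell out how Amedi's algorithm builds its soundscapes, but sensory substitution devices of this kind commonly scan an image left to right, mapping each pixel's vertical position to pitch and its brightness to loudness. Here's a minimal sketch of that general idea; the function name and parameters are hypothetical, not Amedi's actual implementation:

```python
import numpy as np

def image_to_soundscape(image, duration=1.0, sample_rate=8000,
                        f_min=200.0, f_max=4000.0):
    """Turn a grayscale image (2-D array, values 0-1) into an audio sweep.

    The image is scanned column by column, left to right. Each row is
    assigned a sine-wave frequency (top row = highest pitch), and each
    pixel's brightness sets the amplitude of its row's sine within that
    column's time slice.
    """
    n_rows, n_cols = image.shape
    # Exponentially spaced frequencies, so pitch steps sound roughly even.
    freqs = f_max * (f_min / f_max) ** (np.arange(n_rows) / (n_rows - 1))
    samples_per_col = int(duration * sample_rate / n_cols)
    t = np.arange(samples_per_col) / sample_rate
    slices = []
    for c in range(n_cols):
        col = image[:, c]  # brightness of each row in this column
        # Sum of sinusoids, each weighted by its pixel's brightness.
        tone = (col[:, None] * np.sin(2 * np.pi * freqs[:, None] * t)).sum(axis=0)
        slices.append(tone)
    signal = np.concatenate(slices)
    peak = np.abs(signal).max()
    return signal / peak if peak > 0 else signal

# A bright diagonal line becomes a tone that steps downward in pitch
# as the sweep moves left to right.
audio = image_to_soundscape(np.eye(8))
```

Listening to sweeps like this repeatedly is, roughly, the "brief period of training" Amedi describes: the user learns which sound textures correspond to which shapes.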
What makes this particularly cool is that the sounds being created actually activate the otherwise dormant visual cortices of congenitally blind people. Previous research had indicated that the visual cortex organizes data into two parallel pathways. The ventral occipito-temporal pathway, called the "what" pathway, deals with form, identity, and color, while the dorsal occipito-parietal pathway, or the "where/how" pathway, focuses on object location and coordinates visual data with motor function.
MRI scans revealed that blind people using this device activated these pathways just as people with normal vision would, indicating that the visual cortex doesn't actually require visual information to function properly. In a statement, Amedi argued that this means that "The brain is not a sensory machine, although it often looks like one; it is a task machine."
This is one of a few recent studies suggesting that actual visual, auditory, or tactile data aren't necessary for the brain to interpret what is going on around it. The various pathways of the brain seem to stand ready to interpret data, even if the corresponding sensory organs or receptors don't actually work. It seems, as far as the visual cortex is concerned, no experience is required.
Via Cerebral Cortex. Image courtesy of Hebrew University of Jerusalem.