A company called Vicarious sent out a press release last week claiming it had a technology that could be the bedrock of a true artificial intelligence system. Is it a breakthrough or a bunch of hype?
Right now, Vicarious' technology is just an algorithm that breaks CAPTCHAs, those online tests that require you to read some distorted words to prove you're not a bot. The algorithm does visual recognition, and Vicarious' founders claim it can recognize those distorted letters as well as a human would. They say it's evidence that their algorithm can pass a Turing test, as demonstrated in a video they've released.
Critics say there are two problems: 1) Vicarious won't release its algorithm for scientists to examine; and 2) the algorithm doesn't work in many situations, including when the letters are on a checkerboard background or are not in the Roman alphabet.
Science's John Bohannon sums up:
What does all this have to do with the human brain?
Vicarious calls its algorithm the Recursive Cortical Network™. The reference to the human brain is built right into the name, as is the commercial nature of this research (note the trademark). Whether it really has anything to do with how cortical neurons process information remains to be seen.
Breaking CAPTCHA wasn’t the goal, Phoenix says. “It was just a sanity check. We believe that higher level intelligences are all built on the somatosensory system. So that’s why we started with vision.” The company plans to hook up this visual system to robots. The benchmark then will be, for example, “Preparing a meal in an arbitrary kitchen.”
So does it really work?
Vicarious was concerned when I sent the company an e-mail describing its claim as “unsubstantiated,” so [Vicarious founders Scott] Phoenix and [Dileep] George offered to do a demonstration over Skype. I sent them CAPTCHAs off the Internet. They were able to solve the first one, from a Paypal website, immediately. But the algorithm was stumped by two others. One had Cyrillic characters. “We haven’t trained our system on other languages yet,” Phoenix said. And it also failed on a CAPTCHA that used alternating patches of black and white like a chess board.
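The Cyrillic failure is what you'd expect from any recognizer limited to its training alphabet. As a toy illustration (this is not Vicarious' Recursive Cortical Network, just a minimal template matcher with made-up 3x3 glyphs), a system trained only on Roman letters simply has no hypothesis to offer for a character outside its training set:

```python
# Toy sketch only -- NOT Vicarious' algorithm. A template matcher that
# knows only Roman letters cannot label a glyph outside its training set.
# Glyphs are 3x3 binary bitmaps flattened to 9-character strings.
TEMPLATES = {
    "T": "111010010",  # rows: 111 / 010 / 010
    "L": "100100111",  # rows: 100 / 100 / 111
}

def hamming(a, b):
    """Count positions where two equal-length bit strings differ."""
    return sum(x != y for x, y in zip(a, b))

def classify(glyph, max_dist=2):
    """Return the closest template's label, or None if nothing is close enough."""
    best = min(TEMPLATES, key=lambda k: hamming(glyph, TEMPLATES[k]))
    return best if hamming(glyph, TEMPLATES[best]) <= max_dist else None

print(classify("111010010"))  # a clean Roman "T" -> "T"
print(classify("111101101"))  # a Cyrillic-style glyph (rows 111/101/101) -> None
```

The same logic, scaled up, is why "we haven't trained our system on other languages yet" is a perfectly ordinary limitation rather than evidence against the approach.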
There have been dozens of challenges to CAPTCHA over the years, with many different groups claiming they'd invented algorithms that could read them. So far, none has held up as a general-purpose solution. We'll have to wait for Phoenix and George to release more data, perhaps in a scientific paper, before we know if this is just another example of science-by-press-release or something more.
Read more via Science