What Happens When Artificial Intelligence Is Taught To See Like Humans

Nov 7, 2017
Originally published on November 7, 2017 6:17 pm
Copyright 2017 NPR. To see more, visit http://www.npr.org/.

ROBERT SIEGEL, HOST:

When a website needs to make sure that you are really human, it might use a system known as CAPTCHA.

KELLY MCEVERS, HOST:

It's like a puzzle. On screen you see a series of squiggly numbers or letters, and you're asked to type those symbols out to let the owner of the website know you are not a robot.

SIEGEL: If you are a robot, please stop listening to this story now.

MCEVERS: The CAPTCHA system is based on the way our brains recognize different variations of a shape, like knowing that a cursive B is the same as a typed B.

SIEGEL: We mention all this to tell you that recently we learned that the robots won. The artificial intelligence company Vicarious has developed technology that can defeat one CAPTCHA system. The company is trying to teach robots to see the way humans see.

SCOTT PHOENIX: And to give them that skill, we need to design a visual system that works a lot like the human brain.

SIEGEL: Scott Phoenix is co-founder of Vicarious.

PHOENIX: And we humans are able to do tasks and recognize objects with only, you know, very few examples, sometimes just one example. And that's part of what gives us our superpowers.

MCEVERS: Before this, scientists had to feed thousands of examples into a machine to get it to understand what something looked like, even something as simple as a letter of the alphabet.

SIEGEL: So if AI can figure out a puzzle only humans are supposed to crack, does this mean that websites will be more vulnerable to bots? Scott Phoenix says no. He says big tech firms think about a lot more than CAPTCHAS when they design security features for their sites.

PHOENIX: What geographic location the IP address of the computer is coming from and what other websites it tends to visit. And is it acting like a human or is it acting like a robot?

MCEVERS: And New York-based technology writer Charles Choi says the security fears stirred up by Vicarious' breakthrough actually miss the point.

CHARLES CHOI: My view of technology like this isn't, like, oh, no, people are going to find a way to crack these things 'cause people are always going to find a way to crack these things. But it's more like, how much better can we make computers at acting human?

SIEGEL: This does not mean the end of CAPTCHA as a security tool. The ones that ask us to pick similar sorts of objects out of an array of photos, like cars or street signs - only humans can figure those out for now.

(SOUNDBITE OF YPPAH'S "GUMBALL MACHINE WEEKEND")