So, a program finally passed the Turing Test. Cool.
The Verge: Computer passes Turing Test for first time by convincing judges it is a 13-year-old boy
Some will say this is a cheat along the lines of Parry (an early chatbot that pretended to be a paranoid person, which gave it a built-in excuse whenever it diverged radically from sensible answers), but this is a far more significant milestone.
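To see why the Parry approach counts as a cheat, it helps to see how little is underneath it: a handful of keyword-triggered responses plus an in-character deflection for everything else. Here's a minimal sketch in Python (the patterns and replies are my own toy inventions, not Parry's actual rules):

```python
import random
import re

# A Parry-style chatbot: a few keyword-triggered responses, plus a
# persona-appropriate deflection whenever nothing matches. The paranoid
# persona itself excuses the non sequiturs.
RULES = [
    (re.compile(r"\bwhy\b", re.I), "You ask too many questions. Why do you want to know?"),
    (re.compile(r"\b(police|mafia)\b", re.I), "I don't want to talk about them. They're watching me."),
    (re.compile(r"\bfeel\b", re.I), "How I feel is my business."),
]

# In-character dodges: a paranoid speaker can "plausibly" evade anything.
DEFLECTIONS = [
    "I'd rather not get into that.",
    "Who told you to ask me that?",
    "Let's change the subject.",
]

def reply(utterance: str) -> str:
    for pattern, response in RULES:
        if pattern.search(utterance):
            return response
    return random.choice(DEFLECTIONS)

if __name__ == "__main__":
    print(reply("Why are you in the hospital?"))
    print(reply("Tell me about quantum mechanics."))  # deflected, "in character"
```

The judge reads the deflection as paranoia rather than as a failure to understand, and that's the whole trick.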
That, plus the gigantic amount of work going into speech synthesis at the commercial level:
The Verge: How Siri Found Its Voice
should add up to intelligent-sounding programs soon.
There will be funny sidelights along the way:
xkcd: Turing Test
This does not mean we have AI yet. What we have are programs that, with a ton of information and clever programming, can interact with humans in natural ways. But that’s huge. Programs like Siri are already useful, and we will probably see voice-driven devices become the norm rather than the exception within the next 10 years.
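The "ton of information plus clever programming" shape is worth making concrete: transcribe the speech, match the text against a set of known intents, and look the answer up in a big curated store. A toy sketch, assuming hypothetical intents and data (nothing here reflects Siri's actual design):

```python
# A toy voice-assistant pipeline: no understanding, just matching plus lookup.
# Real systems use statistical models at each stage, but the shape is similar.
from datetime import datetime

# Stand-in for the curated "ton of information" behind a real assistant.
FACTS = {
    "the capital of france": "Paris",
    "the speed of light": "299,792,458 m/s",
}

def classify_intent(text: str) -> str:
    text = text.lower()
    if "time" in text:
        return "ask_time"
    if text.startswith("what is"):
        return "ask_fact"
    return "unknown"

def handle(text: str) -> str:
    intent = classify_intent(text)
    if intent == "ask_time":
        return datetime.now().strftime("It's %H:%M.")
    if intent == "ask_fact":
        key = text.lower().removeprefix("what is ").rstrip("?")
        return FACTS.get(key, "I don't know that one.")
    return "Sorry, I didn't catch that."

print(handle("What is the capital of France?"))  # -> Paris
print(handle("What time is it?"))
```

Useful? Absolutely. Intelligent? Not in any interesting sense.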
The big question is – when will we actually have human-level artificial intelligence? There’s a hard-core minority that thinks it will never happen because insert-reason-here: John Searle is one of those very vocal people, and his Chinese room argument is still quoted today (hint: it’s nonsense – really):
JOHN R. SEARLE’S CHINESE ROOM
There’s another hard-core minority that thinks “any day now”. This camp is represented by people like Ray Kurzweil – see his latest book “How to Create a Mind” as an example of this thinking, which boils down to “OK, we finally get it: we just connect a bunch of insert-thing-here and intelligence springs out of it automatically”:
How to Create a Mind
Every single “AI is not possible” argument boils down to “because”. We (humans) are the existence proof – there’s no magic involved in constructing a human; we just don’t know how to do it. However, we’ve been underestimating just how hard intelligence is to create from scratch for almost 3000 years.
I studied AI in college, and I even named my first company as an AI homage. However, I have to agree with people like Scott Aaronson, who, when asked, said “a few thousand years”:
intelligence.org: When Will AI Be Created?
I don’t know if I agree with the “few thousand years” timeframe, but on the other hand I think it’s very possible that we don’t understand anything meaningful about intelligence yet. One reason I feel comfortable saying that is that most people’s answers about how to create AI quickly turn into things like “emergent behavior” and “we just need 100 billion artificial neurons”. In other words, “magic happens”.
You don’t make things with magic. Really.
When do I think we could have AI? Honestly, there’s no good way to estimate it. I can give a counter-estimate, which is “absolutely not in the next 100 years”. One of the things that holds us back is that we actually can’t define it yet. This means that as milestones happen, the rabid pro-AI crowd claims: “look, we have AI now” and the rabid anti-AI crowd says: “No you don’t, it’s a trick; intelligence means new-definition-inserted-here”. Remember, in the 1950s, we thought that a chess-playing program would be AI. Of course, it turned out that all you need is a brute-force search program with some clever optimizations and a large memory, and you can even beat the world champion. We didn’t learn anything positive about intelligence from writing chess programs.
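That "brute-force search with clever optimizations" is worth seeing, because it makes the point vivid: nothing intelligence-like is happening. Here's the core of a classic game engine, minimax with alpha-beta pruning; the game callbacks are hypothetical placeholders, and a real chess engine would layer opening books, transposition tables, and hand-tuned evaluation on top of exactly this loop:

```python
# Generic minimax with alpha-beta pruning: the skeleton of a classic chess
# engine. moves/apply_move/evaluate are game-specific callbacks supplied by
# the caller; all the "cleverness" lives in pruning and evaluation.

def alphabeta(state, depth, alpha, beta, maximizing, moves, apply_move, evaluate):
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state, maximizing)
    if maximizing:
        best = float("-inf")
        for m in legal:
            best = max(best, alphabeta(apply_move(state, m), depth - 1,
                                       alpha, beta, False,
                                       moves, apply_move, evaluate))
            alpha = max(alpha, best)
            if beta <= alpha:  # the opponent would avoid this line: prune it
                break
        return best
    best = float("inf")
    for m in legal:
        best = min(best, alphabeta(apply_move(state, m), depth - 1,
                                   alpha, beta, True,
                                   moves, apply_move, evaluate))
        beta = min(beta, best)
        if beta <= alpha:
            break
    return best

if __name__ == "__main__":
    # Demo on a toy subtraction game: take 1 or 2 tokens; taking the last wins.
    moves = lambda n: [m for m in (1, 2) if m <= n]
    apply_move = lambda n, m: n - m
    # A player facing an empty pile has lost (the opponent took the last
    # token), so score dead positions against whoever is on the move.
    evaluate = lambda n, maximizing: -1 if maximizing else 1
    print(alphabeta(4, 10, float("-inf"), float("inf"), True,
                    moves, apply_move, evaluate))  # -> 1: first player wins
```

Scale up the depth, add a huge memory and a tuned evaluation function, and you get Deep Blue-class play – with no insight into intelligence to show for it.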
I think we have lots of wonderful technology as a result of our quest for AI. But at best we have intelligence-amplifying programs, not intelligent programs.
I’m happy to be proven wrong. Unlike some people, I won’t feel my sense of being threatened by an artificially intelligent program. At best, it will have the same effect as knowing that really smart humans exist: some jealousy and envy, but I go on with my life and do stuff.
I for one welcome our new robot overlords. But they’ll be a very long time in coming, and when they get here, they almost certainly won’t be interested in being our overlords, just like I don’t want to be the ruler of a kingdom full of two-year-olds.