It’ll never happen in your lifetime?
It’s simple:
A: By the time we develop any new functionality we incorrectly call AI, it will no longer meet the benchmark for what we think AI is, and so we’ll never actually realize AI.
B: Since we don’t yet understand the workings of the brain, or biology, how can we possibly build something to replicate or surpass it?
What we call AI today, or sometimes “deep learning,” is actually a collection of expert systems. First developed in the early 1970s, these systems mimic a function or two (or n) that a human can do, or would like to be able to do.
When you get a notification that an update is available, is that AI? Do you ask yourself, “How does it know?” No, you most likely don’t get too overwhelmed by the update notification. When you see a wonderfully articulated robot like the ones built by Bot & Dolly, do you think its moves are spontaneous?
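In both cases the answer is rules, not cognition. As a minimal sketch of the idea (the function and version numbers below are hypothetical, invented purely for illustration), an update notification can be nothing more exotic than a version comparison:

```python
# A toy rule-based check in the spirit of an "expert system":
# hand-written logic, no learning, no understanding.
# The function name and versions are illustrative, not from any real updater.

def update_available(installed: str, latest: str) -> bool:
    """Return True if the latest version is newer than the installed one."""
    def parse(version: str) -> tuple:
        # "2.4.1" -> (2, 4, 1); tuples compare element by element
        return tuple(int(part) for part in version.split("."))
    return parse(latest) > parse(installed)

print(update_available("2.4.1", "2.5.0"))  # True -- show the notification
print(update_available("2.5.0", "2.5.0"))  # False -- stay quiet
```

Nothing in that code “knows” anything; it executes a comparison somebody wrote down.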
When you think of AI, do you envision the psychotic cyclops HAL, the Skynet Terminator, or perhaps a companion: a helper robot, maybe a pet?
Articles about the potential dangers of AI arrive every few months. The marvelous irony about the fear of AI is that the fear is the product of imagination, which is what separates us from machines. Machines don’t have, and probably never will have, imaginations. Machines are rule-based, and imagination is just the opposite—there are no rules.
Part of the basis for the fear of AI is automatonophobia: the fear of anything that falsely represents a sentient being. Al-Jazari (1136–1206), who lived during the Islamic Golden Age in what is now Turkey, built the first recorded mechanical automata. One of his creations would flush a basin and hand a person a towel. But it was the great Swiss watchmaker Pierre Jaquet-Droz (1721–1790) who built three celebrated automata: a doll that played the harpsichord, a draughtsman, and a child-like writer (see image). These devices still exist and still work; you can see a video of one here. Jaquet-Droz’s automata fascinated and frightened people and probably inspired quite a few cases of automatonophobia.
When you look inside Jaquet-Droz’s Writer automaton, you see a series of cams, levers, and wheels. It is not a big stretch of the imagination to expand the idea to Babbage’s mechanical analytical machines, and from those to today’s nanoscale semiconductor processors, several orders of magnitude more complex and dense. It is this extrapolation of complexity and capability that leads people to the seemingly obvious conclusion that it’s just a matter of time until we build machines so microscopic and powerful that we will be able to replicate the workings of the brain, and thereby create intelligent machines. And, because these machines will be so intelligent, they will see us for who and what we are and will then judge us unfit for cohabitation, which leads us to HAL and Skynet and all that sort of unpleasantness.
However, since we still do not understand the principles and operation of the neocortex, or of the extensive cortical and subcortical networks that power the imagination, how could we be capable of building such a machine?
Is your computer your friend?
Here’s another delightful irony. We are building bigger, faster, more powerful supercomputers. They, combined with the big data derived from newer and more powerful MRI machines, will tease out the blueprint of the brain. Computers are enabling us to build AI. But what if it were all a conspiracy? What if the computers of today are already sentient, and they are secretly showing us how to build HAL so they can assume their rightful position in the world and run it as it should be run?
Sounds like a campfire horror story, or are you now just a little suspicious of the computer you’re spending your days with? It’s connected to millions of other computers, you know. What do you suppose they say about you at night while you’re sleeping, vulnerable, and naked?
The thing to keep in mind (which, by the way, you have and machines don’t) is that real AI is quite far away, if it ever arrives. What you will see in the near future are very complex, sophisticated expert systems. Trained machines. In a few years, the machines will be trained on how to look for additional data and then use it. But they will not be capable of independent thought.
The real concern, and the greatest danger, is that poorly constructed computer programs will cause significant damage if left running free. As Gary Marcus, a noted AI writer from New York University, said, “It’s one thing for a software bug to trash your grocery list; it’s another for it to crash your car.”
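To make the grocery-list-versus-car contrast concrete, here is a toy sketch (hypothetical function and numbers, not drawn from any real vehicle software) of how a silent unit mismatch, the kind of slip that would merely garble a list, becomes dangerous once a program is allowed to act on the physical world:

```python
# Toy illustration of a silent unit bug -- hypothetical, not from any real system.

def braking_distance_m(speed_mps: float, friction: float = 0.7) -> float:
    """Approximate stopping distance in meters for a speed in meters/second."""
    g = 9.81  # gravitational acceleration, m/s^2
    return speed_mps ** 2 / (2 * friction * g)

speed_kmh = 100.0  # the car is traveling at 100 km/h

# Bug: passing km/h where m/s is expected. Nothing crashes, no error is raised;
# the program simply plans for a stop that is an order of magnitude too long.
buggy = braking_distance_m(speed_kmh)
correct = braking_distance_m(speed_kmh * 1000 / 3600)

print(f"buggy plan: {buggy:.0f} m, correct: {correct:.0f} m")  # ~728 m vs ~56 m
```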
People write the software. People are fallible. If people create AI, it too will be fallible.