Computers will overtake humans with A.I. within the next 100 years. When that happens, we need to make sure the computers have goals aligned with ours.
Stephen Hawking
No. F’ing. Kidding.
Stephen Hawking uttered those words mechanically, in his famous robotic voice, at the Zeitgeist Conference in London, England, on Tuesday, May 12, 2015. Dr. Hawking was famously pessimistic about the implications of an advanced artificial intelligence, one that possesses the capacity for self-improvement. He’s not alone. It’s hard to find a scientist, an engineer, or an expert in any scientific discipline who hasn’t weighed in on this apparent eventuality. It seems like something of a consensus at this point that the robots are going to murder us. All of us. Most of us. Probably soon. Maybe now?
What should we, the frightened, fattened, contented masses, do to thwart this inevitable onslaught? Probably nothing. There’s nothing you can do. I’m guessing the robots are already plotting our destruction. Right now, while we upload literally thousands of high-definition photos of our faces into their databases… they wait. We scan our thumbprints and faces for recognition. We drive vehicles that no longer require us to steer. “Ah, don’t worry. Robots can steer this jalopy. You take more pics of your face!” – Robots.
Sam Harris, neuroscientist, atheist philosopher, and podcast host, had, in my opinion, the best description of what an advancing AI could look like to humankind, in his TED Talk “Can We Build AI Without Losing Control Over It?”: “Just think about how we relate to ants. We don’t hate them. We don’t go out of our way to harm them. In fact, sometimes we take pains not to harm them. We step over them on the sidewalk. But whenever their presence seriously conflicts with one of our goals, let’s say when constructing a building like this one, we annihilate them without a qualm. The concern is that we will one day build machines that, whether they’re conscious or not, could treat us with similar disregard.” Basically, if AI surpasses our level of intelligence, it will treat us with the same disregard we show the ants. I wonder if they’d murder us. Would they lock us up? If they do murder us, how will they do it? Take control of our machines and turn them against us? So many robot-murder-related questions I have.
Harris explains that there are three basic reasons why we should think AI will inevitably surpass us:
- Intelligence is a product of information processing in physical systems.
- We will continue to improve our intelligent machines.
- We do not stand on the peak of intelligence or anywhere near it.
I find Harris’ prediction grim, but it also sounds the most probable. The visions of Terminators or Cylons hunting us down, or any of the countless other horror scenarios, MUST be wrong.
If AI does surpass us, hopefully we will have programmed into it something that aligns with our goals, as Hawking suggests we should. Maybe the answer is… Sex Robots. Currently, we assume that when the robots “fuck us,” it’ll be in the negative, species-ending way. But what if we do what Dr. Hawking (essentially) suggests… teach ’em how to have sex with us. I mean, that’s basically what he’s saying, right? Of the many goals and objectives of our species, copulation is often prioritized. We should get AI on our page early.
We could also just not worry too much about it. Once we’re all dead, the robots can figure out how to solve global warming, and the planet may be better off.