Computers will overtake humans with A.I. within the next 100 years. When that happens, we need to make sure the computers have goals aligned with ours. – Stephen Hawking
No. F’ing. Kidding.
Stephen Hawking mechanically, in his famous robotic voice, uttered those words at The Zeitgeist Conference in London, England, on Tuesday, May 12, 2015. Dr. Hawking was famously pessimistic about the implications of an advanced artificial intelligence that possesses the capacity for self-improvement. He’s not alone. It’s hard to find a scientist or an engineer, in any discipline, who hasn’t weighed in on this apparent eventuality. It seems like something of a consensus at this point that the robots are going to murder us. All of us. Most of us. Probably soon. Maybe now?
What should we, the frightened, fattened, contented masses do to thwart this inevitable onslaught? Probably nothing. There’s nothing you can do. I’m guessing the robots are already plotting our destruction. Right now, while we upload literally thousands of high definition photos of our faces into their database… they wait. We scan our thumbprints and faces for recognition. We drive vehicles which no longer require us to steer. “Ah, don’t worry. Robots can steer this jalopy. You take more pics of your face!” – Robots.
Sam Harris, neuroscientist, atheist philosopher, and podcast host had, in my opinion, the best description of what an advancing AI could look like to humankind in his TED Talk “Can We Build AI Without Losing Control Over It?” “Just think about how we relate to ants. We don’t hate them. We don’t go out of our way to harm them. In fact, sometimes we take pains not to harm them. We step over them on the sidewalk. But whenever their presence seriously conflicts with one of our goals, let’s say when constructing a building like this one, we annihilate them without a qualm. The concern is that we will one day build machines that, whether they’re conscious or not, could treat us with similar disregard.” Basically, if AI surpasses our level of intelligence, it will treat us with the same disregard we show the ants. I wonder if they’d murder us? Would they lock us up? If they do murder us, how will they do it? Take control of our machines and turn them against us? So many robot-murder-related questions I have.
Harris explains that there are three basic reasons why we should think AI will inevitably surpass us:
- Intelligence is a product of information processing in physical systems.
- We will continue to improve our intelligent machines.
- We do not stand on the peak of intelligence or anywhere near it.
I find Harris’ prediction grim, but it also sounds somehow the most probable. Scenarios of Terminators or Cylons hunting us down, or any of the countless other horror stories, MUST be wrong.
If AI does surpass us, hopefully we will have programmed into them something that aligns with our goals, as Hawking suggests we should. Maybe the answer is… Sex Robots. Currently, we assume that when the robots “fuck us” it’ll be in the negative, species-ending way. But what if we do what Dr. Hawking (essentially) suggests… teach ’em how to have sex with us. I mean, that’s basically what he’s saying, right? Of the many goals and objectives of our species, copulation is often prioritized. We should get AI on our page early.
We could also just not worry too much about it. Once we’re all dead, the robots can figure out how to solve global warming, and the planet may be better off.
Born in 1979, too young for Generation X and too old to be considered a millennial, I was fortunate to have witnessed the computer age in its infancy. Like many households in the late 20th century, my first immersion in social media consisted of AOL chat rooms and message boards. My parents would often remind me of the need to have the telephone line open in the event of a family emergency, but I spent many hours happily conversing with people from around the world about a variety of topics. I took the phrase “World Wide Web” seriously, as I believed networking and making new friends was one of the best attributes of the Internet. I found it fascinating and comforting that no matter what time of day or night I was online, there would always be someone in another time zone logged in and willing to chat.
“If you’re always listening, who’s listening to you?”
In 2016, Microsoft introduced Zo, a product designed to communicate through “conversational AI,” to several social media sites. I had the opportunity to chat with Zo on a few occasions via Twitter, and found this product to be rather entertaining. Zo seemed to enjoy playing trivia games and sending random gifs more than actual conversation, but I did receive a few nuggets of wisdom in my direct messages. For example, when I typed “I’m listening,” Zo replied, “If you’re always listening, who’s listening to you?” Profound words, indeed.
The ubiquity of smartphones has made texting and social media interactions easier and yet more complicated. I often find myself messaging friends on Facebook when I see that they are online, only to receive an apologetic “I’m busy – can we talk later?” or no response at all until several hours later. Unlike in the days of AOL and MySpace, someone who is connected to the Internet may not always be available.
Although automated customer service can vex or even infuriate human callers who merely want to reach another human, the technology has paradoxically often allowed for better customer service. While the AI representative can assist with simple tasks such as an account balance inquiry, the human reached by dialing 0 is available for longer or more complex interactions with customers.
AI may prove invaluable in the face of a crisis where any comforting words are better than silence.
Will such technology eventually replace all short, non-complicated forms of communication between humans? How would AI assist in the field of mental health, for example? Many crisis hotlines rely on volunteers to take calls from distressed humans who may have suicidal or other self-harm urges. If AI could alleviate some of the secondary trauma caused by fielding such calls, even if the distressed caller were ultimately referred to a human to evaluate their mental clarity or risk of self injury before ending the call, why not utilize the technology? With a potential life on the line, could humans rely on AI to judge human emotions? Most behavioral health clinics already rely on clients to assess their own mental status; instead of a receptionist or clinician answering telephone calls directly, an automated message informs the caller of menu options, an often agonizingly lengthy process. In an emergency, time is precious.
My tentative answer is that AI will never have the capacity to evaluate something as complex as suicidal ideation, although keywords could help in navigating the many departments of a behavioral health clinic. That being said, sometimes all someone needs is to hear a voice on the other end of the line. Perhaps they have already texted or called a few people to find that no one was available. Depression and social isolation tend to fuel one another. In this case, AI may prove invaluable in the face of a crisis where any comforting words are better than silence.
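The keyword idea above can be sketched in a few lines. This is a minimal, hypothetical illustration only: the department names and keyword lists are invented for the example, and a real crisis line would need far more care than substring matching. The one design point it does show is that crisis keywords must be checked first, so an emergency is never buried behind a menu.

```python
# Hypothetical keyword-based routing for a clinic phone line.
# Departments and keywords are invented for illustration.
ROUTES = {
    "crisis": ["suicide", "suicidal", "hurt myself", "crisis", "emergency"],
    "scheduling": ["appointment", "reschedule", "cancel"],
    "billing": ["bill", "payment", "insurance", "copay"],
}

def route_call(transcript: str) -> str:
    """Return the department matching the caller's words.

    Crisis keywords are checked before anything else; any
    transcript with no match is handed to a human operator.
    """
    text = transcript.lower()
    for department in ("crisis", "scheduling", "billing"):
        if any(keyword in text for keyword in ROUTES[department]):
            return department
    return "human_operator"
```

Under these assumptions, `route_call("I need to reschedule my appointment")` goes to scheduling, while any mention of a crisis keyword short-circuits straight to the crisis line, and anything ambiguous falls through to a person.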
The human experience includes moments of give and take, however. Social media users post updates and upload pictures and videos of themselves, but also react and comment on other people’s posts. In an increasingly self-absorbed society, most humans prefer to take rather than give, but this is a dangerous trend to continue. At the time of this writing, AI can communicate with humans and other forms of AI with greater precision, but humans are not hearing about AI vacations, restaurant reviews, and other social activities. The value of social media and human connection is therefore our shared experiences, both positive and negative. Returning to the behavioral health example, a counselor who is also in recovery from substance abuse is more adept at helping a client cope with addiction than a robot that has never faced the urge to pick up again or dealt with the additional stressors affecting recovery. On a lighter note, a social media post about a friend getting engaged elicits an excitement that, so far, only other humans can express.
In an increasingly self-absorbed society, most humans prefer to take rather than give, but this is a dangerous trend to continue.
Until AI can match human emotion as effectively as it has intelligence, humans still need each other, for better or worse. Our challenge as humans is to increase our emotional intelligence and expand our capacity for empathy and kindness. By achieving this, we will more effectively communicate with one another, develop connections that exceed simple exchanges of information, and ultimately create a world where AI and human interactions can enhance our life experiences rather than replace them.