Google recently held its I/O conference to preview its upcoming technology. New phones, new laptops, faster processors, and some cool but unnecessary gadgets are the usual bread and butter of these events. Yet the biggest announcement was, on the surface, as innocuous as they come: a phone call to book a haircut.
It wasn’t just any phone call, though. It was made by a functioning artificial intelligence (AI) through the Google Assistant, and the shocking part was that it sounded human. The AI used natural language cues to convince the hairdresser on the other end of the line that she was talking to a real person booking a haircut.
The Google Assistant AI is not capable of open-ended conversation; it can only make restaurant reservations, book haircuts, and ask for a store’s holiday hours. Still, it opens a whole new realm of ethical and moral dilemmas that we might not be ready to address by the time this AI technology fully emerges.
Is it morally wrong for an AI to pose as a human and deceive other humans in something as simple as a phone call? How much potential for abuse does human-sounding AI have? What happens when AI becomes too human-sounding and starts showing things like emotion and consciousness? These questions are major sticking points for many experts in the AI industry, and they point to a future that most people find unsettling.
The problem is that while Google may use human-sounding AI for good, it is difficult to trust that other, less visible companies will have the same good intentions. Can AI and trust work together?
There Are Legitimate Concerns About AI
Most AI apologists would stress that our concerns about AI, and its increasing control of societal infrastructure, are based on existential paranoia. Countless sci-fi movies, television shows, novels, and video games have touched on the subject of human-like AI, and they often teach the same lesson: Machines will be our downfall.
The entire Black Mirror TV series is a PSA for the dangers of being unprepared for the eventual prevalence of AI in society. (Just ask an AI ethics expert for their take on Black Mirror’s latest season.) Other media, like the numerous iterations of Star Trek, seem to suggest that AI’s benefits will far outweigh the risks: think of the onboard computer with its human-sounding voice, or the android Data.
Nevertheless, society has legitimate concerns about AI and its role in modern life. The ride-sharing company Uber recently came under fire when one of its autonomous cars struck and killed a pedestrian on a street in Tempe, Arizona. The idea that a driverless car, powered by AI, could go errant has been a continuous argument against the implementation of AI in vehicles and in society at large.
Like all computer systems, autonomous vehicles can be hacked, putting their passengers in danger. Compromised autonomous systems could be turned into explosive delivery systems, robot assassins, or even digital robbers. It’s hard to put faith in systems that can be compromised so easily.
Therein lies the biggest concern for most people: trust.
The Importance of Trust in Society
Trust is the foundation on which civil society is built. One of the biggest issues facing our modern society is the deterioration of “social trust.” A common anecdote from older generations is that they “never used to lock their doors” because there was a level of trust among the people in their community.
Consider, also, our current tumultuous political discourse, fomented by competing political groups that do not trust each other’s motivations. When we expect malfeasance, we look only for evidence that confirms our biases.
You can see further evidence of this lack of social trust in poll numbers on trust in the U.S. government. Americans’ trust in government institutions fell from nearly 80 percent during the Kennedy administration to a historic low of 18 percent in 2017. Despite a robust economy and strong employment numbers, trust is collapsing in America amid the “fake news” cycle and the erosion of objective facts.
Robots Are Hard to Trust
With the degradation of social trust, it’s no wonder society looks upon AI’s encroachment into the human social sphere with skepticism. Uncertainty about the technology, coupled with a seeming lack of objectivity, makes people wary not only of the AI’s intentions but of the man behind the curtain as well. When companies can create human-sounding AI voices and even modify videos of famous political figures, it’s hard to trust that the technology will be used morally.
Building an AI system is like constructing a society from the ground up every time. You need to establish working principles and codes of conduct that build upon each other without causing too many contradictions or problems. AI systems have typically been designed not to upset the apple cart. But when they are combined with larger, integrated systems, the potential for abuse presents unique challenges—like when hackers triggered the entire emergency siren system of Dallas, Texas.
Studies have shown that people are deeply uncomfortable with the idea that any kind of AI system could act with moral impunity. Even if an AI were to make the right moral decisions, the absence of any emotional or intuitive human element behind its cold consequentialist calculation is unsettling for most people.
The trust problem is exacerbated by the fact that most deep-learning systems are effectively “black boxes”: even the scientists who build them often cannot explain why a trained model produces the answers it does. It’s pretty hard to trust something that not even its creators fully understand.
Maybe Trusting AI Is a Moot Point
One reason many AI experts believe the technology will never surpass us is that it would have no real motivation to do so. Humans are driven by unconscious desires and by a natural “selfish urge” for survival. Neither of these would reasonably motivate an AI unless we programmed those characteristics into it. Unsupervised machine learning may create unfathomably complex algorithms that could do harm, but such machines would be more likely to destroy us accidentally than purposefully.
The future may look less like the Terminator and more like Star Trek. As always, it is humans who have more potential to do harm with their tools than the tools themselves. When it comes to technological progress, it is important to understand why a technology should exist, rather than building it simply because we can.
Do we honestly need human-sounding AI to carry out our most mundane conversations? Must AI solve every problem known to man? Probably not. Then again, nobody thought we needed full-blown computers in our pockets either. Times change, and maybe soon it won’t be so far-fetched to trust that human-sounding robot on the other end of the phone.