Human-like AI is dangerous for society

Software that mimics humans is a marketing stunt with dangerous potential

[Image: Sophia the robot. Credit: Isaac Lawrence/AFP/Getty Images]

The voice on the other end of the phone sounded just a little too human.

In May, Google shocked the world with a demo of Duplex, its AI robocall assistant for accomplishing real-world tasks. The system can do things that you, as a busy person, might have little time or patience for, like booking a hair appointment or a restaurant reservation. But with its authentic-sounding "hmms" and "uhs," the system raised some serious concerns, because the humans who answered the phone calls did not seem to realize they were talking to a piece of software. And indeed, this should worry us. Convincing human-like AI could be deployed for dubious reasons with disastrous consequences.

As more and more people come in contact with autonomous systems like Duplex, the danger is not that these systems will suddenly wake up and take over the world, despite the hysterical portrayals in the media and pop culture. Instead, the real danger is that humans will be reduced to passive data points in the design of those systems, to disastrous ends.


Artificial intelligence is meant to be a tool for humans, to make our lives easier and find solutions to everyday problems. It is not meant to replace us. And yet, we design it to replicate human-ness with eerie fidelity. We don't do this with other tools — hammers look like hammers, not people — so why do we do this with AI?

The answer is simple: because it makes great marketing.

When machines accommodate and gesture toward the nuances of our own behavior, we are much more willing to integrate them into our lives. Things that look and sound like us trigger our admirable human capacity for empathy. In the case of Duplex, the closer a voice sounds to human, the more reluctant the receiver of a robocall might be to hang up. But the human-ness of artificial intelligence could easily mask a dubious attempt to sell you something. Indeed, it could become all too easy to commoditize our trust. For example, we might be prone to read friendly intent into a bank chatbot that makes warm and witty banter, even if its purpose is to push students toward taking out unnecessary loans.

There are other concerning examples of AI being anthropomorphized and used as a marketing ploy. Last October, Saudi Arabia made headlines by "granting citizenship" to a talking robot named Sophia. This was a marketing stunt meant to signal the country's focus on technological innovation. But if we look more closely, this move should be considered especially cruel in a country that only allowed real human women to drive last year, and where women still require a male guardian to make financial and legal decisions. A robot, it seems, can breezily be granted more rights than half of the population of that country, all for a short-term spot in the news cycle.

Perhaps this seems like an overreaction. But I assure you, it is not. Talk of AI and personhood at the level of nations, even if it starts as a marketing stunt, can have far-reaching repercussions. A proposal now before the European Union would grant robots a distinct status of "electronic persons," a move that attempts to answer the question of who is to blame when AI behavior has lethal consequences. "The more autonomous robots are, the less they can be considered simple tools in the hands of other actors (such as the manufacturer, the owner, the user, etc.)," the proposal reads. "This, in turn, makes the ordinary rules on liability insufficient and calls for new rules which focus on how a machine can be held — partly or entirely — responsible for its acts or omissions."

This kind of language, if enacted, is a disastrous misappropriation of human responsibility. When we are unwilling to hold humans accountable for the systems they create and release into the world, and instead allow an "electronic person" to bear the blame, we are no better than children who point the finger at an invisible "friend" for the bowl they themselves have broken. The resulting laws warp people's notion of AI in ways that make society less free and less able to benefit from the cutting-edge technologies that are within our reach.

"Pull back the curtain of AI and there are millions of exploited people," writes the technology pioneer Jaron Lanier in his book, Dawn of the New Everything: "The only way to reduce the harm is to stop believing in AI as a new type of creature, but instead evaluate algorithms as tools to be used by people."

It is time to think more critically about the impact of anthropomorphizing software. The first step is to see anthropomorphized AI for the marketing ploy it is. But the real antidote is to actively look for ways non-technical people can be involved in the design of the systems that affect their lives.

A recent project at Columbia University, the Gang Intervention and Computer Science Project, provides an example of how this might be done. William Frey and his colleagues describe how formerly gang-involved young people in Chicago were hired as subject matter experts in the development of a system to monitor and prevent gang violence. Such social media monitoring systems are already used by police, but without the important context and nuance that former members of those communities provide, the systems can be wrong, and innocent people can be arrested and criminalized based on little more than a tweet.

While a number of big companies have launched internal programs to implement ethical guardrails around the use of AI, these remain largely opaque processes enacted at the executive levels of private corporations. As for Google's Duplex, a second demo in June explicitly highlighted ways the system might disclose its status as a machine at the outset of a robocall.

Still, there are good questions to ask of software made to mimic us: What aspect of the human is being imitated, and to what end? Does the specific product or system have to be anthropomorphic in its design to be effective? How we answer these questions could have broad implications for how we interact with AI in the future. We can move through the digital age in deference to our robotic overlords, or we can be savvy enough to realize that we are the ones in charge.


Kat Mustatea is a playwright, technologist, and contributor at Forbes, whose work focuses on the intersection of technology and the arts. Her forthcoming TED talk is about the role of artist and audience in the age of machine intelligence.