A.I. experts warn of loss of free will, need for morality

Pew Research Center asked 979 technology experts, business and policy leaders, scientists, science-minded activists and the like just how they thought artificial intelligence would impact humans by the year 2030 — and while 63 percent waxed positive, another 37 percent warned of the negatives.

And one of their biggest concerns?

That machines would eventually become so ingrained in society, so enmeshed with daily living, as to swallow the concept of human self-reliance and, eventually, humans’ ability to exercise free will.

It’s really only common sense. Think about it. Convenience is indeed a time-saver. But it can also serve as a lesson in laziness.

As Judith Donath, faculty fellow at Harvard University’s Berkman Klein Center for Internet & Society said, in her response to the Pew survey: “By 2030, most social situations will be facilitated by bots. … At home, parents will engage skilled bots to help kids with homework and catalyze dinner conversations. At work, bots will run meetings. A bot confidant will be considered essential for psychological well-being, and we’ll increasingly turn to such companions for advice ranging from what to wear to whom to marry.”

The convenience comes with a cost, however.

Soon, technology may very well lead us down paths we may not be ready to take.

The more humans cede decision-making authority to machines, the more reliant humans become on those machines — leading to an eventual and habitual ceding of self-reliance, individual choice and, ultimately, free will.

Human choice will go the way of the washboard.

“Every time we program our environments, we end up programming ourselves and our interactions. Humans have to become more standardized, removing serendipity and ambiguity from our interactions. And this ambiguity and complexity is what is the essence of being human,” said Marina Gorbis, executive director of the Institute for the Future.

Gorbis may not have meant it this way — but think of Smart Growth and zoning policies to illustrate her point. At root, these are government’s means of preserving the environment while simultaneously providing for the needs of humanity. Right?

But the more government implements zoning laws and development restrictions, and the more it shepherds humans into preferred and allowed living and working locations, the less ability humans have to choose. The fewer unfettered rights humans have to decide where they live, where they recreate, where and how they shop, travel and commute to work.

Some may say the saving graces of conservation are worth these curbs on human choice. Some may argue otherwise. But the overall point, in the words of Gorbis, is this: Programming the environment results in a standardization of human living.

That’s how Smart Growth works and, as Gorbis noted, that’s what A.I. could very well bring.

And more.

“AI and related technologies … can virtually eliminate global poverty, massively reduce disease and provide better education to almost everyone on the planet,” said Erik Brynjolfsson, director of the MIT Initiative on the Digital Economy, to Pew. “That said, AI and ML [machine learning] can also be used to increasingly concentrate wealth and power, leaving many people behind, and to create even more horrifying weapons.”

Ouch. So — solutions, anyone?

It comes down to humanity’s ability to rein in machinery with morals.

“If we are fortunate,” wrote Barry Chudakov, founder and principal of Sertain Research, “we will … work toward ‘not undirected intelligence but beneficial intelligence.’ Akin to nuclear deterrence stemming from mutually assured destruction, AI and related technology systems constitute a force for a moral renaissance.”

In other words: Either run the A.I. race with an eye on morality first, technological advancement second — or face the gloom-and-doom outcome of a lost humanity, stripped of much of what makes humans human.

This is sound logic, and it’s logic that the 37 percent told Pew can be put into practice in three ways: Keep “global good” at the forefront of A.I. pursuits; “develop policies to assure AI will be directed at humanness” and concern for the greater good; and “prioritize people” above the quest for money, fame or personal glory.

Or, in a word: Morals.

Scientists, researchers, developers, engineers, A.I. and machine learning masters need to be constrained by solid morals, virtues and values.

Pay attention to these Pew 37 percenters. They bring forth a terrific and timely message, one that goes like this: Science without a moral compass is bound to harm, not help, humanity.

Morality should be the standard by which all A.I. is judged.

First appeared at The Washington Times.
