To promote his book about thriving in the age of robots, professor gushes over how great automated weapons are

Reading “Robot Weapons: What’s the Harm?” by Jerry Kaplan, who supposedly teaches the ethics of artificial intelligence at a university, made me wonder what the heck the term “ethics of artificial intelligence” even means.

I would think that artificial intelligence—like any other technology—has no ethics, which is to say, the ethics of using artificial intelligence are as dependent on the goals of the user as are the ethics of using fire, bows and arrows, the wheel, the printing press, nylon, airplanes and nuclear energy. We bring our ethics to the use of technology. If we do something good with the technology, that’s ethical. If we do something that harms others, that’s not ethical. The fact that it’s artificial intelligence doesn’t change that ethical equation.

“Robot Weapons: What’s the Harm?” is by Jerry Kaplan of Stanford University, whose book, Humans Need Not Apply: A Guide to Wealth and Work in the Age of Artificial Intelligence, hit the real and electronic bookstands less than two weeks before the New York Times published the article, coincidence of coincidences.

Kaplan’s article is a Pollyanna view of automated weapons that seems quaintly old-fashioned, as if it would have been more at home 20 years ago, before governments everywhere, but especially in the United States, started sinking billions of dollars into the development of automated weapons, which are weapons that decide to fire, bomb or crush without human intervention. Much of Kaplan’s article takes a futuristic approach, as if automated weapons were still on the drawing board and not already in production and being tested. He makes all the standard arguments of those who support developing robot weapon systems: automated weapons will protect civilians by enabling the army to target only soldiers, and they will make war safer for our soldiers because they will wage it from a long distance.

For someone who studies ethics, Kaplan certainly misses a lot in his discussion. He doesn’t consider whether having automated weapons will make it easier for governments to justify leading their countries into war. He doesn’t discuss how to assign blame when the weapons fail and kill civilians, or even our own soldiers, by mistake. Assigning blame is one of the central tasks of ethics, so I would expect someone who studies ethics to explore a situation in which the assignment of blame is inherently murky.

Kaplan’s biggest failing in his encomium to robotic weapons systems is his neglect of the basic issue: is it ethical or moral to develop any technology into weaponry? His one comment on the question is a quote from someone he identifies as a “philosopher.” No, it’s not Plato, Aristotle, Lucretius, Thomas Aquinas, Kant, Clausewitz, Trotsky or Bertrand Russell. These and other luminaries might have had an opinion on whether it is ever ethical to employ technology to kill people. But Kaplan’s philosopher is B. J. Strawser, an assistant professor of philosophy at the U.S. Naval Postgraduate School. Kaplan claims Strawser says that leaders have an ethical duty to do whatever they can to protect their soldiers. This logical stretch to justify robot weaponry ignores the fact that Strawser recently co-authored a long paper that seems to conclude that the development of automated weapons is immoral. It’s a particularly odious example of trying to prove an argument by selecting experts who agree with you, or in this case, one out-of-context statement by a minor expert who actually disagrees with you.

To understand how truly offensive Kaplan’s article justifying the development of automated weapons systems is, you have to read the description of his new book on Amazon: “Driverless cars, robotic helpers, and intelligent agents that promote our interests have the potential to usher in a new age of affluence and leisure — but as Kaplan warns, the transition may be protracted and brutal unless we address the two great scourges of the modern developed world: volatile labor markets and income inequality. He proposes innovative, free-market adjustments to our economic system and social policies to avoid an extended period of social turmoil.”

It sounds as if Kaplan is both a believer in using artificial intelligence to continue automating the workplace and home and a humanistic, left-leaning capitalist of the Clinton-Obama-Biden school.

But doesn’t it make you wonder? Other than a book review in the Times, Wall Street Journal or New Yorker, few publication credits help the sales of a new book more than an article by the author on the Times opinion page. Kaplan could have written his short article on social policies that help society adjust to greater automation. He could have launched a discussion of the ethics of government support for the development of technologies that replace jobs. He could have philosophized on the meaning of work in the new era. He could have painted a gee-whiz, Jetsons-like world of the future. He could have come out in favor of or against automating any number of industries, such as teaching, healthcare or entertainment.

He and his publicists rejected all these ideas and decided that the most attractive topic for selling books was to make the same tired and morally suspect arguments in favor of developing new ways to kill people more efficiently.

What’s the harm? the headline asks. As if we don’t already know what harm comes from war: death, maiming, destruction, homelessness, economic ruin and wasted resources that could be dedicated to positive social ends.
