Is the military the source of A.I.'s greatest dangers?
Or is it the nature of society, people and politics?
I’m perplexed by this opinion piece, “To See One of A.I.’s Greatest Dangers, Look to the Military.” Here’s a personal guest link to it, from the cache of 10 that The New York Times gives me every month. Use it to read beyond the excerpts.
Excerpt:
“Rogue artificial intelligence versus humankind is a common theme in science fiction. It could happen, I suppose. But a more imminent threat is human beings versus human beings, with A.I. used as a lethal weapon by both sides. That threat is growing rapidly because there is an international arms race in militarized A.I.”
The first sentence is a good hook. It exposes a fear many people share, whether they articulate it or not.
The central hook is FUD (often accompanied by its cousin, FOMO: Fear Of Missing Out). FUD is like the three-headed dog the Greeks called Cerberus. Its three heads are Fear, Uncertainty, and Doubt, and fear is the strongest driver of the three. FUD draws readers.
The fragment “…a more imminent threat is human beings versus human beings, with A.I. used as a lethal weapon by both sides…” would be more on target if it were rephrased as “…a most imminent threat is humans versus humans and their proclivities toward warfare, no matter the weapon technologies of choice.” A.I. is just the latest whipping boy.
Why the escalated fear now? Existential fears have existed for centuries! I wrote about this topic on the Analyst Syndicate website in January 2019; it’s archived here. People have been automating weapons and making them autonomous for decades, as that 2019 note describes.
Now people are layering speculation about what the latest generation of A.I. might be able to do to improve automated autonomous weapons. Why stop there? What about quantum computing’s impact? (I’ve seen some of that speculation too.) We also have ongoing existential fears of nuclear weapons (and of biological weapons, which emerge from hiding every several years).
There doesn’t appear to be any way to get people to agree not to build and use autonomous automated weapons. The genies are already escaping from their bottles and will continue to do so virtually forever, feeding the ever-escalating potential capabilities of these weapons.
The commercial value that firms (like OpenAI, Microsoft, Google, Amazon, and hundreds of others) expect from AI development will not be set aside because of existential fears about automated autonomous weapons.
The commercial drive for AI development will spill over into the enhancement of automated autonomous weapons, and that spillover will be far larger than what we saw for nuclear or biological weapons (which come with their own existential fears).
What to do about it?
Open source? This is an interesting avenue: the more broadly open-source AI technologies evolve, the easier it is to examine their details (and contemplate military applications), but also the easier they are to copy. Closed-source (proprietary) development will be more obscure (and more fear-inducing).
Regulation of development? Regulating core AI technologies as a balm for these existential fears won’t stop the development of new, disruptive core AI technologies, but it will slow the formation of new business entities and patterns of commerce. I am unsure whether regulation will deliver more benefit than harm.
The real focus should be on investments that amplify the social benefits of diffusing advanced AI technologies as adjuncts and assistants to people. We should focus there: on making people more effective.
The more we focus on those benefits, the more the existential risks will recede into the background … until there’s yet another wave of disruptive innovation that people don’t understand.