Facebook's "Out-of-Control" A.I.?

Here is a somewhat strange but morally relevant media story that broke in the last few days: 

Many news outlets reported that Facebook had shut down an A.I. research project because two A.I. systems started talking to each other in a "new language" that only the A.I.s understood. Many people across the Web interpreted this to mean that the A.I. systems had gone haywire, and that Facebook had panicked and quickly pulled the plug.

Naturally, this evokes images of killer robots and creepy HAL 9000-like A.I. systems coming to get us. Unsurprisingly, people on social media reacted to the story with a mix of comedy and terror. 

But in the ensuing hours, word came out that the headlines had mischaracterized what really happened with the Facebook project. It was an experiment by Facebook A.I. researchers in which two chatbots had begun communicating in garbled sentences, and according to the researchers this was a fairly predictable, non-scary side effect of the experiment. Facebook apparently stopped the experiment simply because the incomprehensible communication made the chatbots hard to track, not because anyone feared the bots were out of control or nefarious. 

And, it should be noted, these A.I. systems fall under the category of what researchers call "narrow A.I." (systems focused on a limited range of tasks) rather than "general A.I." (a theoretical, not-yet-existent level of A.I. in which a machine can perform any intellectual task a human can). 

One Facebook A.I. researcher seemed pretty pissed about the way the story was reported, and he accused media outlets of writing clickbait headlines. I think that's a fair point. But I still think the Facebook story, however mischaracterized, points to a very real moral concern about A.I. safety. Simply put: out-of-control A.I. systems could cause the end of human civilization within our children's lifetimes. 

There are many possible apocalyptic scenarios involving A.I.: an arms race between governments or corporations competing to be first to major breakthroughs, massive income inequality as most jobs are replaced by A.I., and the general possibility that we could create a superintelligent A.I. whose goals diverge from our own.

When I expressed this concern in a Facebook post, a friend of mine responded by saying that humans don't need biological systems per se, and that a kind of humanness or human legacy could persist in A.I. computer mainframes, making A.I. simply the next step in "evolution." He seemed to be suggesting something like "The Singularity," a wild nerd-fantasy in which people become one and the same with their computer systems. He also, strangely, seemed somewhat unsentimental about human beings. 

But to me, the concern is not "humanness" but consciousness itself. I'm defining consciousness here as a creature's inner subjective experience, something that, as far as we can tell, only humans and animals have. In other words, your sense that there is a quality to your experience: the sounds, sensations, thoughts, and moods that you experience as having an internal character. It's possible, maybe, that a sufficiently advanced computer system could also have this inner experience we call consciousness, but at this point we simply don't know. 

If A.I. improves the quality of experience for conscious systems (biological or computer), whether by working in tandem with us as biological creatures, by allowing us to upload our conscious minds to the mainframe and thereby giving us eternal life (i.e., "The Singularity"), or by some combination of the two, then A.I. could be good, and I support that project. 

On the other hand, if A.I. development extinguishes consciousness itself, either because it ends human life or because it is not in fact conscious the way we are, that would be bad. It would leave a planet run by pure machinery with no inner life, no subjective experience of joy or sorrow or creativity or bliss or love or any other conscious state worth having. 

That sounds bad, doesn't it? 

So was it wrong for the media to blow up a semi-fake story about Facebook's out-of-control A.I.? Maybe. But concern about the safety of these powerful machines is still the just, right, and resoundingly moral thing to feel.