[Please Watch] Can we build AI without losing control of it?

Dear friends, 

Our attention spans are increasingly short in this digital carnival, so my job here is to convince you to watch this video. 

Like global warming or nuclear war, certain issues are of civilizational importance. They are, quite literally, about saving the world. And, by definition, that makes them the most important issues we can pay attention to. 

One such issue is the safety risk posed by artificial intelligence. For just a moment, try to set aside your cinematic understanding of robots, born of bad sci-fi movies. The real issue is like those movies, only subtler and scarier.

The basic idea is this: One day soon, quite possibly within the next few decades, we will create superintelligent AI. Think of it as 10,000 Stephen Hawkings combined into one mind, except it won't be human. What would such an entity do? Would we have control over it? We don't know.

The ideas presented here are trippy as hell. But they are worth thinking through and, I think, need to be part of a global conversation on par with the one about climate change.

If you have children or grandchildren, or if you work in the tech industry, you should care about this topic. 

Watch this video and let it sink into your bones: 

http://www.ted.com/talks/sam_harris_can_we_build_ai_without_losing_control_over_it?utm_source=tedcomshare&utm_medium=referral&utm_campaign=tedspread