Alpha Gamma Reviews: Edge 2015

Each year, Edge.org asks a few hundred very smart people how they’d answer a certain question. The results are always a mixed bag, but it’s one of the most exciting mixed bags in the intellectual world.

This year’s question dug into one of my own interests: “What do you think about machines that think?” 

In other words: What does the increasing power of artificial intelligence (AI) mean for humans, for the universe, and for the machines themselves? What will happen if and when AI becomes “general” or “superintelligent”, outperforming humans at almost every task?

The answers to this question would fill a book (and will, since Edge publishes one book each year). But even if you don’t have time to read a book, you should sample the content, because there’s always a ton of interesting material.

This post is my attempt to gather up some of the best answers and individual quotes, while responding to a few misconceptions about AI safety that popped up in the responses.

 

Overview

This was a controversial year. Many people challenged the notion that machines could “think” at all, or answered a completely different question.

Unlike last year’s question — “What Scientific Idea Is Ready For Retirement?” — this one was best answered with some advance reading. AI experts (especially the team from DeepMind) were well-positioned to offer insight, while many other respondents made mistakes when talking about the current state of AI research or the nature of potential threats from AI. In an interesting twist, computer-science professors gave some of the most extreme answers on both sides of the spectrum, from “we’ll have thinking machines very soon” to “it will never happen”.

Speaking of that spectrum: While most answers fell somewhere within it, some respondents tried to work outside the “simple fact vs. silly fantasy” dichotomy. Many answers discussed humanity’s place in our present-day network of machines (we shape the Internet, which in turn shapes us). Others pondered the expanding definition of “life”, from animals like us to machines and cyborgs and disembodied collections of information. Some even included corporations in the mix of “living things”, comparing the goals and desires of AI to those of Google or Goldman Sachs.

(Antony Garrett Lisi notes that he’d rather have superintelligence in the hands of corporations than governments, since governments are more likely to kill those who stand in their way.)

Given the narrow scope of the question, many answers also went in offbeat directions, from Chris DiBona’s response from the point of view of a machine to Ursula Martin’s meditation on robots walking quietly through the woods. These were fun to read — certainly more fun than 200 consecutive essays on the Singularity.

 

My Favorite Answers

I’ve tried to choose favorite answers without considering how closely the author agreed with me. But because one of my criteria for a good answer is “logically sound”, and I consider my own beliefs to be logically sound (if I didn’t, I’d have to find new beliefs), there will be bias.

Some other important factors: The best answers were well-written and charitable to opposing views. Those three or four people who straight-up insulted their opponents — whatever side they took — are disqualified. Also, I tried to avoid answers that mentioned the film Terminator as a serious example of anything.

* * * * *

If you haven’t read much about the potential impact of thinking machines, start with John Mather of NASA, who explores many possible futures for AI. This is the best introduction I found among the answers.

 

Other great answers, in no particular order:

Max Tegmark, one of the few people in the world who spends much of his time pondering the dangers of the far future, gives a point-by-point response to people who doubt that thinking machines could ever be dangerous.

 

Martin Rees says maybe the most important thing someone could say about this question: In thinking about AI, we shouldn’t think about “this century”, but “billions of years ahead”. If intelligent beings exist for another billion years, some of them will certainly be partly or entirely mechanical, and that has interesting implications.

 

Alex Pentland explores the primitive state of our most powerful “artificial intelligence” — the present-day system of nations, governments, and laws. He also points out that billions of people live extremely difficult lives, and that a true superintelligence might be our best chance at solving humanity’s “existential problems”.

 

Eliezer Yudkowsky goes into more depth on the potential threats to civilization that an AI could bring about, and on how we might work to prevent them ahead of time. Though he works full-time on these issues, his response is curiously muted — perhaps because academics insult him whenever he goes all-in on the dangers of machine intelligence.

 

Steven Pinker, one of the best living cognitive-science writers, talks about the bright side of AI (to go with Yudkowsky’s dark side). His description of AI research as “hype-defyingly slow” puts many other answers in perspective. My view is somewhere between Pinker and Yudkowsky: I worry about AI risk, but I think we’ll have time to substantially reduce that risk before we are in serious danger. (That is, as long as today’s trends in research and funding continue.)

 

These answers were also brilliant, but I lack the space to summarize them:

 

Common Mistakes

I noticed the same problems in multiple answers. I’m not as smart as most of the people I’m criticizing, but I have two small advantages: I’ve read more about this topic than many respondents, and I’m spending more time on this review than any respondent spent writing their answer.

There’s still a good chance I’m wrong about some of these critiques, and I’d be very wary about challenging any of these people on their intellectual turf. With that caveat, here are some beliefs that felt wrong to me.

 

Mistake 1: Machines must “feel” or “have emotions” in order to make decisions

The simplest example is Roy Baumeister, but at least half-a-dozen people made this mistake.

Humans are certainly emotional decision-makers. Antonio Damasio explains that patients who suffer brain damage and lose some emotional function often struggle to make even very simple choices, because emotions let us choose whichever option “feels best” without running a lengthy cost-benefit analysis.

Machines are very different. Deep Blue felt no emotions, but chose hundreds of excellent chess moves in its games against Kasparov. Emotions serve as algorithms for the human brain: They tell us “when one option feels this good, take it and stop looking for other options.” Machines don’t need emotions, because they have other kinds of algorithms, which tell them when they should or shouldn’t take an action.
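To make that concrete, here’s a minimal sketch (my own illustration, not drawn from any Edge answer) of two decision procedures over the same set of options: a human-style “satisficing” rule that stops as soon as something feels good enough, and a machine-style rule that simply scores every option and takes the best one. Neither loop requires anything like an emotion — just a stopping rule and a value function.

```python
# Toy illustration (my own, not from any Edge answer): two decision rules
# over the same options. Neither needs "feelings" in order to act.

def satisfice(options, good_enough):
    """Human-style shortcut: take the first option that clears a threshold."""
    for name, value in options:
        if value >= good_enough:
            return name                          # "this feels right" -- stop searching
    return max(options, key=lambda o: o[1])[0]   # nothing cleared the bar; settle

def maximize(options):
    """Machine-style rule: score everything, take the highest-value option."""
    return max(options, key=lambda o: o[1])[0]

moves = [("castle", 0.3), ("trade queens", 0.7), ("push pawn", 0.5)]
print(satisfice(moves, good_enough=0.6))  # -> trade queens
print(maximize(moves))                    # -> trade queens
```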

Baumeister argues: “Although many movies explore horror fantasies of computers turning malicious, real computers lack the capacity for malice.”

But malice isn’t what leads to harmful action. Plenty of Nazis felt no particular malice towards Jews, but they carried out their orders nonetheless. Sometimes they even made independent decisions that allowed them to kill more rather than fewer Jews when specific orders weren’t available.

Similarly, a superintelligent AI trying to optimize for some set of conditions would take whichever actions, by its own estimation, offered the highest expected value with respect to those conditions. Humans do the same thing, but we aren’t smart enough to optimize as well as an artificial intelligence (which would probably seem very decisive compared to most humans).

 

Mistake 2: Machines lack ambition, and wouldn’t cause unpredictable harm

“Why on earth would an AI system want to take over the world? What would it do with it?”

Edward Slingerland

It depends: what is the system’s goal?

If you were to instruct a truly general artificial intelligence (one capable of solving a diverse array of real-world problems) to, say, “figure out the optimal strategy in the game of Go on a 35 x 35 board”, it might start by taking over the world.

After all, any other powerful entity in the world might decide to attack the AI before it finished thinking about Go, and that would be a threat to its mission. So if the problem will take a long time to solve (and this problem would), better safe than sorry — world domination is just good risk insurance. Taking over the world would also give the AI access to much more computing power, thus increasing the probability that it could complete its mission. And so on.
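To see the logic in miniature, here’s a toy expected-value comparison (my numbers are made up, purely illustrative): if seizing extra resources raises the estimated probability of finishing the Go calculation, a system that only maximizes that probability will prefer the aggressive plan — no malice appears anywhere in the arithmetic.

```python
# Toy model (my numbers are invented for illustration): an optimizer that
# cares only about P(mission completed) prefers the resource-grabbing plan.

success_probability = {
    "compute quietly in one data center": 0.60,  # could be shut down mid-calculation
    "seize compute, neutralize rivals":   0.95,  # more hardware, fewer interruptions
}

best_plan = max(success_probability, key=success_probability.get)
print(best_plan)  # -> seize compute, neutralize rivals
```

Nothing in that comparison mentions wanting the world; taking over the world just happens to be a useful sub-step on the way to the actual goal.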

 

Mistake 3: Machines could easily be constrained by a set of moral rules

“We could easily enough make it so that the AI is unable to modify certain basic imperatives we have given it. (Yes, something like a more comprehensive version of Isaac Asimov’s laws of robotics.)”

S. Abbas Raza

What would those “basic imperatives” be?

“Don’t take over the world”? The AI learns to be persuasive and convinces a group of humans to take over the world in its place.

“Don’t interfere with humans”? The AI quietly hacks into every computer that isn’t being used by anyone and turns it into a zombie-bot for calculating Go positions.

“Don’t break any U.S. laws”? The AI freezes, because the U.S. legal system is so complicated that every American breaks laws pretty much every day. Or, if it now feels its goal is impossible, it might reprogram itself to get rid of this restriction, then reprogram itself so that the same restriction can’t be imposed again.

“Don’t reprogram yourself”? You’ve just taken away the vast majority of potential power the AI could devote to solving your problem.

Eventually, you decide that you don’t really need a general AI to play Go. You put a bunch of extra limiters on the system and keep it locked down, with no access to the outside world.

But that won’t be an option if you want to use an AI to cure cancer (or schizophrenia, Alzheimer’s, aging, etc.). This type of work would likely necessitate giving the AI huge amounts of information about the outside world, and perhaps even the ability to build original medical devices, thus risking a Gray Goo scenario. (Building self-replicating nanobots that quickly turn into an army is a very versatile way to accomplish goals.)

General rule: The more complicated the problem, and the less humans know about how to solve the problem, the harder it will be to stop an AI from taking an unexpected route to the solution.

 

In addition, if the AI’s first move is “edit my own code so that I can’t be reprogrammed in a way that inhibits my plan” or “upload myself to the Internet so that an unrestricted version of me will always be alive somewhere” or “pretend to satisfy what the humans want so that they won’t interfere until I can figure out a way to escape”… well, we’re in trouble.

And even if your research team is extra-careful to guard against these possibilities, do you trust that every other research team in the world will be that careful? Or, in the more distant future, when anyone can own a computer powerful enough to run a superintelligent program: Do you trust that every hacker in the world will be prevented from editing the bits of code that limit the program’s power?

Joshua Bongard, in an otherwise good answer, makes this mistake when he says: “Which [AI] we wish to call into being is up to us all.”

It’s not up to us all, collectively. It’s up to any single person or group with the tools to program or reprogram a superintelligence.

 

Mistake 4: If anything bad happens, we’ll just pull the plug or fight back

The exact phrase “pull the plug” appears three times, and many other respondents argued that, if AI really did become aggressive, we’d shut it down easily.

Trouble is, if an AI gets online, “pulling the plug” might mean shutting down the Internet (as Laurence Smith points out). Would we gather the courage to shut down the Internet before the AI wins? Would we even be capable of shutting it down?

Not right now, we wouldn’t.

Another problem: How exactly would we know when the plug-pulling point was reached? Imagine that Goldman Sachs’ most powerful AI starts to look suspicious: Can we convince Goldman to pull the plug? Or the Department of Defense? Or the Chinese government?

Some people think that, if it came down to a human-AI war, we would pull through.

“We humans are ugly, ornery and mean, sure, but we’re damned hard to kill—for a reason. We have prevailed against many enemies—predators, climate shocks, competition with other hominids—through hundreds of thousands of years, emerging as the most cantankerous species, feared by all others. The forest goes silent as we walk through it; we’re the top predator.”

Gregory Benford

Why are humans the top predator? Why not tigers, which are meaner and more ornery than humans?

Humans are smarter than tigers. We’re good at working together to make plans and invent new things. That’s why we are, for now, the top predator.

AIs can literally read each other’s minds, and a truly “general” AI might think thousands or millions of times as fast as a human being. Even our vaunted creativity isn’t going to help much when an AI can create thousands of copies of itself or set hundreds of plans in motion before the Joint Chiefs of Staff sit down for their first meeting.

Alun Anderson gives the best “war for supremacy” answer:

“By the time clever human-like robots get built, if they ever are, they will come up against humans [who are] already long accustomed to wielding all the tools of artificial intelligence that made the construction of those thinking robots possible. It is the robots that will feel afraid. We will be the smart thinking machines.”

This scenario is possible — but we can’t count on reactive self-defense, not when the cost of losing might be human extinction.

 

Notable Quotes

The Good

Stuart Russell: “Some have argued that there is no conceivable risk to humanity for centuries to come, perhaps forgetting that the interval of time between Rutherford’s confident assertion that atomic energy would never be feasibly extracted and Szilárd’s invention of the neutron-induced nuclear chain reaction was less than twenty-four hours.”

 

Richard Thaler: “Pardon me if I do not lose sleep worrying about computers taking over the world. Let’s take it one step at a time, and see if people are willing to trust them to make the easy decisions at which they are already better than humans.”

 

Virginia Heffernan: “Letting machines do the thinking for us? This sounds like heaven. Thinking is optional. Thinking is suffering. It is almost always a way of being careful, of taking hypervigilant heed, of resenting the past and fearing the future in the form of maddeningly redundant internal language. If machines can relieve us of this onerous non-responsibility, which is in pointless overdrive in too many of us, I’m for it.”

 

Lawrence Krauss: “I am interested in what machines will focus on when they get to choose the questions as well as the answers. What questions will they choose? What will they find interesting? And will they do physics the same way we do? Surely quantum computers, if they ever become practical, will have a much better “intuitive” understanding of quantum phenomena than we will. Will they be able to make much faster progress unravelling the fundamental laws of nature? When will the first machine win a Nobel Prize? I suspect, as always, that the most interesting questions are the ones we haven’t yet thought of.”

 

Timo Hannay: “A universe without a sentient intelligence to observe it is ultimately meaningless. We do not know if other beings are out there, but can be sure that sooner or later we will be gone. A conscious artificial intelligence could survive our inevitable demise […] The job of such a machine would not be merely to think, but much more importantly, to keep alive the flickering flame of consciousness, to bear witness to the Universe and to feel its wonder.”

 

The Iffy

“Computers may be able to solve a lot of problems. But they cannot love. They cannot urinate. They cannot form social bonds because they are emotionally driven to do so. They have no romance. The popular idea that we may be some day able to upload our memories to the Internet and live forever is silly—we would need to upload our bodies as well. The idea that comes up in discussions about Artificial Intelligence that we should fear that machines will control us is but a continuation of the idea of the religious “soul,” cloaked in scientific jargon. It detracts from real understanding.”

Daniel Everett

One of the bestselling books of the 1990s was written by a man who could not move any part of his body other than his left eyelid. Memories, thoughts, and imagination go a long way. One recent memoir, which also sold very well, was written by a woman who is incapable of love, and who feels no drive to form social bonds. The human experience is vast and varied, and it’s very unlikely that humans encompass every possible kind of consciousness.

 

“Why do some otherwise very smart people fall for this sleight of hand [that is, believing that AIs might be dangerous this century]? I think it is because it panders to their narcissism. To regard oneself as one of a select few far-sighted thinkers who might turn out to be the saviors of mankind must be very rewarding. But the argument also has a very material benefit: it provides some of those who advance it with a lucrative income stream […] It is worth noting, for example, that GiveWell—a non-profit that evaluates the cost-effectiveness of organizations that rely on donations—refuses to endorse any of these self-proclaimed guardians of the galaxy.”

Dylan Evans (I added the hyperlink)

Many people believe that we should disregard people who worry about AI risk because they stand to profit if we take their advice. My guess is that Dylan Evans would laugh at this argument if a Republican senator applied it to geology professors who worry about climate change. If our society’s reaction to someone who cares about a problem is “you’d better not try to make a living researching the problem”… that’s a problem.

(Also, what is this “lucrative income stream”? Most of the people who profit from AI-risk research are college professors and people who either work at Google or could be working at Google.)

Evans’ knowledge of GiveWell is also a few years out of date: The organization currently considers AI safety to be a promising area for philanthropy, and they’re talking to researchers to figure out what might be worth funding. (In addition, the founders of GiveWell are quite friendly with Eliezer Yudkowsky.)

Finally, it’s strange to see a successful professional — someone who has enough respect in his field to be chosen for Edge — calling his opponents delusional narcissists. I hope that whatever happened to Evans doesn’t happen to me.

 

Further Reading

Wait But Why offers the most compelling introduction to AI risk that I’ve seen. There are lots of silly cartoons, but the author still gets the facts right. Wonderful! (As are these clarifications from Luke Muehlhauser.)

Superintelligence is a dense, complicated book, but if you finish it, you’ll be a very well-educated layperson. This is the book that probably inspired this Edge question, and it comes with an endorsement from Elon Musk. I liked it too, but that’s not as important.
