Annotate the Web: Phil Libin and Ezra Klein on Artificial Intelligence

Web annotation is one of the year’s better inventions.

Right now — right this moment — you can turn any web page into a cross between a Kindle book and a page of lyrics on Rap Genius. Other people can read your annotations alongside the article, and add their own comments.

I plan to use this invention often. It’s the best way to deal with the fact that someone is always wrong on the internet.

Below is the first article I’ve “annotated” in this way:

* * * * *

Ezra Klein and Phil Libin are both smart people. But I think they make some mistakes in depicting how experts on artificial intelligence think about the risks of this powerful technology.

I’ve written about this topic before, and care a lot about it. Here’s a quick summary of my views:

  1. Artificial intelligence has a lot of potential, and will become far more powerful as time goes on. (100% sure)
  2. Eventually, whether it takes 50 years or 150 years, at least one AI will exist that is, in most important dimensions, more powerful than humans. (Really rough definition: Presented with a goal, the AI can achieve that goal more quickly than any human or group of humans.) (~90% sure)
  3. Sometime after (2) happens, an AI will exist that humans cannot interfere with — we simply won’t be capable of stopping it from achieving its goals. (~70% sure)

If an AI ever exists that can pursue its goals without interference, we’ll want to make sure those goals are ones that take our safety and happiness into account. But programming behavior as advanced as “keep humans safe and happy”, and ensuring that the AI never stops following that behavior, will be quite complicated.

This is a problem we can’t afford to get wrong. If an ultra-powerful AI is released into the world and has values that don’t properly support our welfare, we’ll be in a lot of danger.

I think this is a risk we can control, and we have a lot of time to solve the problem. I’m not even 100% certain that there will be a problem. But the difference between “good outcome” and “bad outcome” is very dramatic, and the questions we need to answer are very difficult. We should be devoting a lot of money and brainpower to the problem now — much more than we currently are.

Unfortunately, when people hear “AI risk”, they tend to think about ridiculous movies like Terminator or Avengers: Age of Ultron, which have nothing to do with actual AI risk research. It’s very easy for pop culture and snark to dominate the conversation. That’s why I write articles like this one, trying to correct misconceptions and oversimplifications.

(Luke Muehlhauser writes a lot of similar articles, with more detail and precision.)
