I run a small polling group on Facebook, where ~150 of my friends and followers give snap judgments on questions that interest me, or create their own questions for the group.
(If this sounds like fun, you’re welcome to join!)
Recently, I asked the group:
I’m part of the effective altruism (EA) movement. We’re people who share a few beliefs:
- Value the lives of all people equally, no matter what they look like or where they come from.
- When you do something for the sake of other people, try to do the most good you can.
- Use research and evidence to make decisions. Support causes and programs with a lot of good evidence behind them.
- When you have a choice, compare different options. Don’t just do something because it’s a good idea — make sure there’s no obvious better thing you could be doing instead.
In practice, we give a lot of money to charity — usually to charities that work in countries where people are very poor, like India, Ghana, or Kenya, rather than the United States or Britain or Japan. We think other people should also do this.
(I’ll skip the complications for now. I’ve been satisfied by the responses I’ve heard to my objections against EA, and I’ll assume that any reader of this piece is at least neutral toward the central ideas of the movement.)
This is a collection of ways to explain EA, or argue that EA is a good idea, in 60 seconds or less. Many are based on real conversations I’ve had. Ideally, you could use them at a party. I plan to, when I move out of Verona to a city with more parties.
Why do we disagree with each other?
This is a stupid question. But it’s not quite as stupid as it sounds. One winner of the Nobel Prize in Economics is famous for proving that people should never disagree with each other.
Okay, okay, it isn’t quite that easy. There are conditions we need to meet first.
The best informal description I’ve heard of Aumann’s Agreement Theorem:
Mutually respectful, honest and rational debaters cannot disagree on any factual matter once they know each other’s [beliefs]. They cannot “agree to disagree”, they can only agree to agree.
Sadly, when Robert Aumann says “rational”, he refers to a formal definition of rationality that applies to zero real humans.
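For the curious, the formal statement behind that informal description is short. This is a sketch in standard notation, where each agent $i$ conditions on their own private information $\mathcal{I}_i$:

```latex
% Aumann's Agreement Theorem (1976), sketched.
% Setup: agents 1 and 2 share a common prior P, and each forms a
% posterior probability for an event A given their own information.
\[
  q_1 = P(A \mid \mathcal{I}_1), \qquad q_2 = P(A \mid \mathcal{I}_2)
\]
% Conclusion: if the values q_1 and q_2 are common knowledge
% between the two agents, then they must be equal.
\[
  \text{common knowledge of } q_1 \text{ and } q_2 \;\implies\; q_1 = q_2
\]
```

The heavy lifting is done by the assumptions: a *common prior* and *common knowledge* of the posteriors — the same formal conditions the informal description above compresses into "mutually respectful, honest and rational."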
But I think we can make his theorem simpler: instead of requiring that "both people are perfectly rational", we can require that "both people have the same value system".
“Hello. You’ve reached the disembodied voice of Aaron Gertler. Aaron’s body isn’t here right now, but if you leave a message, it will get back to you soon.”
It is not easy to make me angry, and it is harder still to make me angry enough that I feel the need to write about how angry I am. This is, I think, the first time I’ve written anything angry on this blog.
But GQ recently did a really good job of making me angry.
Not the entire magazine, but this story, which has inspired me to write my first post with a tag of “outrage”:
I annotated the story with the Genius Web Annotator, so you can see my notes in the original context, though the context doesn’t make the story any less terrible.
I think that the year’s most amazing invention is Genius.it.
Right now — right this moment — you can turn any web page into a cross between a Kindle book and a page of lyrics on Rap Genius. Other people can read your annotations alongside the article, and add their own comments.
I plan to use this invention a lot. It’s the best way to deal with the fact that someone is always wrong on the internet.
Below is the first article I’ve “annotated” in this way. Read, upvote, and comment!
* * * * *
Ezra Klein and Phil Libin are both remarkably smart people. But I think that they make some mistakes in their depiction of how experts on artificial intelligence think about the risk posed by this powerful technology.