I run a small polling group on Facebook, where ~150 of my friends and followers give snap judgments on questions that interest me, or create their own questions for the group.
(If this sounds like fun, you’re welcome to join!)
Recently, I asked the group:
It’s very cheap to experiment on people these days.
For ~$100 and ~5 hours of my time, I used Google Consumer Surveys (GCS) to collect over 800 responses to the following question:
Famine threatens Ethiopia. Thousands of lives are at risk, but U.S. help could save them. How important is it that the U.S. give $9 million in food aid to these Ethiopians?
This wasn’t just curiosity. This was an experiment. My question had three possible endings:
- …food aid to these Ethiopians?
- …food aid to these men and women?
- …food aid to these human beings?
We all know that Ethiopians are human beings, of course. But do our actions reflect that knowledge?
At least one study found evidence that we’ll donate more money to help rescue someone from our country than someone from another country. (Kogut & Ritov, 2007)
I have a similar question: Are we more willing to help foreigners when they are framed as our fellow humans, rather than as people from some other country?
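To make the comparison concrete: with responses split across three randomly assigned endings, one simple analysis is a two-proportion z-test between arms. The sketch below uses made-up counts and assumes answers were coded as "very important" or not — these numbers are purely illustrative, not the survey's actual data.

```python
# Hedged sketch: comparing two framing arms of a split survey with a
# two-proportion z-test. All counts below are invented for illustration.
from math import sqrt, erf

def two_prop_z(successes_a, n_a, successes_b, n_b):
    """Two-sided two-proportion z-test; returns (z, p_value)."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Normal CDF via erf; two-sided p-value.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical "very important" counts out of ~270 respondents per arm:
arms = {
    "Ethiopians":    (120, 270),
    "men and women": (135, 265),
    "human beings":  (150, 268),
}

z, p = two_prop_z(*arms["Ethiopians"], *arms["human beings"])
print(f"z = {z:.2f}, p = {p:.3f}")
```

With three arms, a chi-square test over the full 3×2 table would be the more standard first step; the pairwise z-test above just shows the basic logic.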
This post attempts to answer two questions:
- If you could spend a few weeks being Barack Obama, what would you learn about his life and the world in which he lives?
- How would this experience change the way you think about the man, his policies, and the American presidency?
Are You Smarter Than a Coin-Flipping Monkey?
Thirty years ago, a man named Philip Tetlock decided to figure out whether the people we pay to make predictions about politics were actually good at predicting things.
He picked two hundred and eighty-four people who made their living “commenting or offering advice on political and economic trends,” and he started asking them to assess the probability that various things would or would not come to pass, both in the areas of the world in which they specialized and in areas about which they were not expert. Would there be a nonviolent end to apartheid in South Africa? Would Gorbachev be ousted in a coup? Would the United States go to war in the Persian Gulf?
–Louis Menand, Everybody’s An Expert
Tetlock’s discovery: On average, the commentators were slightly less accurate than a monkey flipping a coin with “yes” printed on one face and “no” on the other. They’d have been better off if they’d made completely random predictions!
What’s more, being an expert on a topic didn’t help much. Past a certain point, additional expertise was actually associated with worse predictions.
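It's worth seeing why the coin-flipping monkey is a meaningful baseline. On binary yes/no questions, uniformly random guesses average 50% accuracy no matter how lopsided the true outcomes are — and a guesser who simply follows the base rate does better. The simulation below is illustrative only; it uses invented outcome frequencies, not Tetlock's data.

```python
# Illustrative simulation of the "coin-flipping monkey" baseline.
# A world where 70% of questions resolve "yes", invented for illustration.
import random

random.seed(0)
outcomes = [random.random() < 0.7 for _ in range(100_000)]

# The monkey: fair-coin yes/no guesses.
monkey = [random.random() < 0.5 for _ in outcomes]
monkey_acc = sum(g == o for g, o in zip(monkey, outcomes)) / len(outcomes)

# A lazy base-rate strategy: always predict "yes".
always_yes_acc = sum(outcomes) / len(outcomes)

print(f"monkey accuracy: {monkey_acc:.3f}")
print(f"always-yes accuracy: {always_yes_acc:.3f}")
```

The monkey lands near 0.5 while the base-rate guesser lands near 0.7 — so scoring *below* the monkey, as Tetlock's experts did on average, is a genuinely poor showing.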
Can We Do Any Better?
There are lots of reasons we make bad guesses about the future. But Philip Tetlock’s particular interest was in figuring out how to do better.
Prediction, after all, is one of the most important things a person can ever do: Will I divorce this person if I marry them? Will I be happy in a year if I accept this job offer? It’s also an important skill for governments: How much will the Iraq War cost? Will this gun-control bill really lower the crime rate?
But if political experts aren’t good at prediction, who is?
Update: Charity Science, an organization whose work I admire, has added my thesis to their page on charitable giving research. I highly recommend their site for more information on the topics discussed here.
* * * * *
I haven’t written a blog post for nearly a full season.
One-third of this phenomenon is the fault of my senior thesis:
Charitable Fundraising and Smart Giving: How can charities use behavioral science to drive donations?
It’s a very long thesis, and you probably shouldn’t read the whole thing. I conducted my final round of editing over the course of 38 hours in late April, during which I did not sleep. It’s kind of a slog.
Here’s a PDF of the five pages where I summarize everything I learned and make recommendations to charities:
The Part of the Thesis You Should Actually Read
In the rest of this post, I’ve explained my motivation for actually writing this thing, and squeezed my key findings into a pair of summaries: one a hundred words long, the other quite a bit longer.
Each year, Edge.org asks a few hundred very smart people how they’d answer a certain question. The results are always a mixed bag, but it’s one of the most exciting mixed bags in the intellectual world.
This year’s question dug into one of my own interests: “What do you think about machines that think?”
In other words: What does the increasing power of artificial intelligence (AI) mean for humans, for the universe, and for the machines themselves? What will happen if and when AI becomes “general” or “superintelligent,” outperforming humans at almost every task?
The answers to this question would fill a book (and will, since Edge publishes one book each year). But even if you don’t have time to read a book, you should sample the content, because there’s always a ton of interesting material.
This post is my attempt to gather up some of the best answers and individual quotes, while addressing some of the mistakes that many different thinkers made.
I recently got the chance to interview Joshua Greene, Harvard psychologist and author of Moral Tribes, one of the more interesting pop-psychology books I’ve seen. Greene gets interviewed a lot, so I tried to ask questions he hadn’t heard before, and it worked out pretty well!