Imagine that an all-knowing genie manifests in your bedroom.
The genie tells you that sometime in the next ten years, you will have a chance to save a total stranger from dying by performing CPR.
But you don’t know when it will happen, and there’s no guarantee you’ll succeed when the time comes.
How would you respond? How would your life change from that moment on?
I’m part of the effective altruism (EA) movement. We’re people who share a few beliefs:
- Value the lives of all people equally, no matter what they look like or where they come from.
- When you do something for the sake of other people, try to do the most good you can.
- Use research and evidence to make decisions. Support causes and programs with a lot of good evidence behind them.
- When you have a choice, compare different options. Don’t just do something because it’s a good idea — make sure there’s no obvious better thing you could be doing instead.
In practice, we give a lot of money to charity. Usually charities that work in countries where people are very poor, like India, Ghana, or Kenya — not the United States or Britain or Japan. We think other people should also do this.
(I’ll skip the complications for now. I’ve been satisfied by the responses I’ve heard to my objections against EA, and I’ll assume that any reader of this piece is at least neutral toward the central ideas of the movement.)
This is a collection of ways to explain EA, or argue that EA is a good idea, in 60 seconds or less. Many are based on real conversations I’ve had. Ideally, you could use them at a party. I plan to, when I move out of Verona to a city with more parties.
It’s very cheap to experiment on people these days.
For ~$100 and ~5 hours of my time, I used Google Consumer Surveys (GCS) to collect over 800 responses to the following question:
Famine threatens Ethiopia. Thousands of lives are at risk, but U.S. help could save them. How important is it that the U.S. give $9 million in food aid to these Ethiopians?
This wasn’t just curiosity. This was an experiment. My question had three possible endings:
- …food aid to these Ethiopians?
- …food aid to these men and women?
- …food aid to these human beings?
We all know that Ethiopians are human beings, of course. But do our actions reflect that knowledge?
At least one study found evidence that we’ll donate more money to help rescue someone from our country than someone from another country. (Kogut & Ritov, 2007)
I have a similar question: Are we more willing to help foreigners when they are framed as our fellow humans, rather than as people from some other country?
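To make the comparison concrete: with three survey arms, one simple analysis is to compare the average importance rating across framings. A rough sketch in Python, using made-up ratings purely for illustration (these are not the survey's actual responses):

```python
from statistics import mean

# Hypothetical importance ratings on a 1-5 scale, one list per framing arm.
# These numbers are invented for the sketch, not taken from the real survey.
responses = {
    "Ethiopians":    [3, 4, 2, 5, 3, 4, 3],
    "men and women": [4, 4, 3, 5, 4, 3, 4],
    "human beings":  [5, 4, 4, 5, 3, 5, 4],
}

# If the "human beings" framing really boosts support, its mean rating
# should come out higher across a sufficiently large sample.
for framing, ratings in responses.items():
    print(f"{framing}: mean importance = {mean(ratings):.2f}")
```

In a real analysis you would also want a significance test on the difference between arms, not just the raw means, since 800 responses split three ways leaves a fair amount of sampling noise.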
My last two posts for Applied Sentience are up:
In them, I discuss some recent thoughts on the problems with empathy, and why doing good in difficult situations requires another layer of moral feeling on top of it, for which I borrow the term “heroic responsibility” from Eliezer Yudkowsky.
The posts total about 2500 words, but this post provides a brief summary.
Update: Charity Science, an organization whose work I admire, has added my thesis to their page on charitable giving research. I highly recommend their site for more information on the topics discussed here.
* * * * *
I haven’t written a blog post for nearly a full season.
One-third of this phenomenon is the fault of my senior thesis:
Charitable Fundraising and Smart Giving: How can charities use behavioral science to drive donations?
It’s a very long thesis, and you probably shouldn’t read the whole thing. I conducted my final round of editing over the course of 38 hours in late April, during which I did not sleep. It’s kind of a slog.
Here’s a PDF of the five pages where I summarize everything I learned and make recommendations to charities:
The Part of the Thesis You Should Actually Read
In the rest of this post, I explain my motivation for writing this thing and squeeze my key findings into a pair of summaries: one a hundred words long, one quite a bit longer.
I recently got the chance to interview Joshua Greene, Harvard philosopher and author of Moral Tribes, one of the more interesting pop-psychology books I’ve seen. Greene gets interviewed a lot, so I tried to ask questions he hadn’t heard before. It worked out pretty well!
I’m currently enrolled in a moral psychology class. We spend a lot of time talking about human moral instincts — the ways we think about moral situations when we haven’t had time to reflect on the consequences.
Sometimes, our instincts are excellent; they help us save people from oncoming trains when there’s no time to think about alternatives. But other times, they lead us down strange paths.