Imagine that an all-knowing genie manifests in your bedroom.
The genie tells you that sometime in the next ten years, you will have a chance to save a total stranger from dying by performing CPR.
But you don’t know when it will happen, and there’s no guarantee you’ll succeed when the time comes.
How would you respond? How would your life change from that moment on?
I’m part of the effective altruism (EA) movement. We’re people who share a few principles:
- Value the lives of all people equally, no matter what they look like or where they come from.
- When you do something for the sake of other people, try to do the most good you can.
- Use research and evidence to make decisions. Support causes and programs with a lot of good evidence behind them.
- When you have a choice, compare different options. Don’t just do something because it’s a good idea — make sure there’s no obvious better thing you could be doing instead.
In practice, we give a lot of money to charity, usually to charities that work in countries where people are very poor, like India, Ghana, or Kenya, rather than in the United States, Britain, or Japan. We think other people should do this too.
(I’ll skip the complications for now. I’ve been satisfied by the responses I’ve heard to my objections against EA, and I’ll assume that any reader of this piece is at least neutral toward the central ideas of the movement.)
This is a collection of ways to explain EA, or argue that EA is a good idea, in 60 seconds or less. Many are based on real conversations I’ve had. Ideally, you could use them at a party. I plan to, when I move out of Verona to a city with more parties.
It’s very cheap to experiment on people these days.
For ~$100 and ~5 hours of my time, I used Google Consumer Surveys (GCS) to collect over 800 responses to the following question:
Famine threatens Ethiopia. Thousands of lives are at risk, but U.S. help could save them. How important is it that the U.S. give $9 million in food aid to these Ethiopians?
This wasn’t just curiosity. This was an experiment. My question had three possible endings:
- …food aid to these Ethiopians?
- …food aid to these men and women?
- …food aid to these human beings?
We all know that Ethiopians are human beings, of course. But do our actions reflect that knowledge?
At least one study found evidence that we’ll donate more money to help rescue someone from our country than someone from another country. (Kogut & Ritov, 2007)
I have a similar question: Are we more willing to help foreigners when they are framed as our fellow humans, rather than as people from some other country?
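GCS handled the random assignment for me, but the underlying design is easy to illustrate. Here's a minimal Python sketch (the function name and seed are mine, purely for illustration) of how respondents might be split at random across the three question endings before comparing their answers:

```python
import random
from collections import defaultdict

# The three alternate endings to the survey question.
ENDINGS = [
    "food aid to these Ethiopians?",
    "food aid to these men and women?",
    "food aid to these human beings?",
]

def assign_framings(n_respondents, seed=0):
    """Randomly assign each respondent to one of the three endings.

    Returns a dict mapping each ending to the list of respondent IDs
    who saw that version of the question.
    """
    rng = random.Random(seed)
    groups = defaultdict(list)
    for respondent_id in range(n_respondents):
        groups[rng.choice(ENDINGS)].append(respondent_id)
    return dict(groups)

groups = assign_framings(800)
# Each framing ends up with roughly a third of the respondents;
# comparing mean "importance" ratings across the three groups then
# tests whether the framing itself moves people's answers.
```

With random assignment, any systematic difference in ratings between the three groups can be attributed to the wording rather than to who happened to answer.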
(Faithful readers: You can now subscribe to this blog!)
My last two posts for Applied Sentience are up:
Within, I discuss some thoughts I’ve had recently on the problems with empathy, and how we need another layer of moral feeling on top of empathy — for which I borrow the term “heroic responsibility” from Eliezer Yudkowsky — if we want to do good in difficult situations.
The posts total about 2500 words, but this post provides a brief summary.
Update: Charity Science, an organization whose work I admire, has added my thesis to their page on charitable giving research. I highly recommend their site for more information on the topics discussed here.
* * * * *
I haven’t written a blog post for nearly a full season.
One-third of this phenomenon is the fault of my senior thesis:
Charitable Fundraising and Smart Giving: How can charities use behavioral science to drive donations?
It’s a very long thesis, and you probably shouldn’t read the whole thing. I conducted my final round of editing over the course of 38 hours in late April, during which I did not sleep. It’s kind of a slog.
Here’s a PDF of the five pages where I summarize everything I learned and make recommendations to charities:
The Part of the Thesis You Should Actually Read
In the rest of this post, I explain my motivation for writing this thing in the first place, and squeeze my key findings into a pair of summaries: one a hundred words long, one quite a bit longer.
I recently got the chance to interview Joshua Greene, Harvard philosopher and author of Moral Tribes, one of the more interesting pop-psychology books I’ve seen. Greene gets interviewed a lot, so I tried to ask questions he hadn’t heard before. It worked out pretty well!
I’m currently enrolled in a moral psychology class. We spend a lot of time talking about human moral instincts — the ways we think about moral situations when we haven’t had time to reflect on the consequences.
Sometimes, our instincts are excellent; they help us save people from oncoming trains when there’s no time to think about alternatives. But other times, they lead us down strange paths.
Last December, I wrote a post about a concept I call “belated philanthropy”.
In summary: When someone solicits me on the street, asking for money, I don’t give it. Instead, I make a note of the incident in my mind. Later, I donate to a charity based on how many people have asked me for money since my last “belated” donation.
Update: This post is out-of-date. YEA now has its own website, where updates will be posted on various things we do. The website is also out-of-date, but to a lesser extent.
I’m starting a club!
The name of the club is “Yale Effective Altruists”, or “YEA”. It exists for three big reasons:
- To help college students use their time to make other people’s lives better, as effectively as possible.
- To introduce more college students to the ideas and methods of the “effective altruism” (EA) movement.
- To help the wider EA movement complete more projects and put more ideas into practice, for the good of humanity.
Members of YEA will:
- Meet to discuss the current state of the world, and realistic ways we might improve it
- Plan and develop projects that might improve the world (more on that later)
- Talk to cool people who like improving the world, some of whom might be famous
- Learn how to persuade people (useful in general) and get expert advice on choosing classes, careers, and more
There will be one recommended meeting each week (30 minutes or less), plus a variety of projects to work on and talks to attend if you’d like to be more involved. We’ll also hang out together (for more, see “good parties” below).
If you’re already curious, you can sign up to learn more!
(I’ll also give you the link at the end of this post.)
Note: This brief report reflects the way I felt shortly after the CFAR workshop. My feelings haven’t changed much since then, but if you’d like an update — or have questions this post doesn’t answer — please let me know! I’m always happy to talk about applied rationality.
In April 2014, I spent four days working to improve my life with the help of the Center for Applied Rationality (CFAR). It was a good experience, and I’d recommend it highly for most of the people reading this post.
If you’d rather skip the summary, or have questions afterwards, send me an email and tell me what you want to know.
CFAR teaches participants to better understand their minds, plan their actions, and achieve their goals. It does so through a series of small, hands-on seminars, run by some of the best teachers I’ve ever seen at work. It also introduces you to a community of other self-improvement-minded people, many of whom will become your friends.
The workshop is a lot like your best semester of college, but it happens in four days, costs a lot less, and is more likely to give you knowledge that will help you ten years down the road.
Some representative moments of my CFAR experience: