Google Experiment #1: Shared Humanity and Willingness to Help

It’s very cheap to experiment on people these days.

For ~$100 and ~5 hours of my time, I used Google Consumer Surveys (GCS) to collect over 800 responses to the following question:

Famine threatens Ethiopia. Thousands of lives are at risk, but U.S. help could save them. How important is it that the U.S. give $9 million in food aid to these Ethiopians?

This wasn’t just curiosity. This was an experiment. My question had three possible endings:

  1. …food aid to these Ethiopians?
  2. …food aid to these men and women?
  3. …food aid to these human beings?

 

We all know that Ethiopians are human beings, of course. But do our actions reflect that knowledge?

At least one study found evidence that we’ll donate more money to help rescue someone from our own country than someone from another country (Kogut & Ritov, 2007).

I have a similar question: Are we more willing to help foreigners when they are framed as our fellow humans, rather than as people from some other country?

 

The commonsense answer is “yes”. Advocates of global philanthropy often encourage donors to think of themselves as members of a single species, rather than as members of groups separated by national borders or cultural norms.

(I’m thinking especially of writers in the effective altruism movement, though I’d guess the same trend exists in other movements with similar goals.)

But in the course of writing a thesis on experimental attempts to increase charitable giving, I didn’t see any direct study of this effect. The study I cited above found that Israelis were more willing to help fellow Israelis than they were to help Indians, but it didn’t ask the Israelis about just helping “people” or “human beings”.

What if you could make people more generous towards “foreigners” by encouraging them to see those people as similar to themselves — connected by membership in the human species?

This is an important question for anyone who wants to save lives or reduce suffering. We can often help more people by giving to poor countries, where supplies are cheaper and people have less money to solve their own problems. But to many people in wealthy countries, this isn’t as appealing as “helping your neighbors first” or “helping out at home”. If we learn to increase the appeal of foreign aid and foreign charity by framing foreigners as “fellow humans”, we could use this knowledge to help a lot of people in desperate need.

 

Thanks to Google, I was able to see how a diverse group of Americans answered each version of my question. No one saw more than one version, so I can be reasonably sure that any difference in the answers was caused by the difference in wording.

Which brings us to the question: Was there a difference in the answers?

 

The Results

Each question had five possible answers:

How important is it that the U.S. give $9 million in food aid to these ______?

  1. Extremely important
  2. Very important
  3. Moderately important
  4. Slightly important
  5. Not at all important

I scored the first answer as “5 points”, the second as “4 points”, and so on. The higher a question’s average score, the better it was at prompting people to support foreign aid.

(This is a very rough measure, and may not be sound statistical methodology, but it’s the best I could do given the limits of Google Surveys.)

Here are the answers and the “average scores”:

“Ethiopians” (331 responses)

  1. Extremely important: 44
  2. Very important: 84
  3. Moderately important: 74
  4. Slightly important: 42
  5. Not at all important: 87

Average score: 2.87

“Human Beings” (177 responses)

  1. Extremely important: 27
  2. Very important: 53
  3. Moderately important: 32
  4. Slightly important: 20
  5. Not at all important: 45

Average score: 2.98

“Men and Women” (239 responses)

  1. Extremely important: 55
  2. Very important: 65
  3. Moderately important: 54
  4. Slightly important: 30
  5. Not at all important: 35

Average score: 3.31

Average score (“Human Beings” + “Men and Women”): 3.17
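The averages above are just response-weighted means of the 1-to-5 scores. As a quick check on the arithmetic, here’s a short Python sketch (my own, with the counts copied from the tables above):

```python
# Response counts for each prompt, copied from the tables above.
# Keys are scores: "Extremely important" = 5 ... "Not at all important" = 1.
counts = {
    "Ethiopians":    {5: 44, 4: 84, 3: 74, 2: 42, 1: 87},
    "Human Beings":  {5: 27, 4: 53, 3: 32, 2: 20, 1: 45},
    "Men and Women": {5: 55, 4: 65, 3: 54, 2: 30, 1: 35},
}

def average_score(c):
    """Response-weighted mean of the 1-to-5 scores."""
    n = sum(c.values())
    return sum(score * k for score, k in c.items()) / n

for name, c in counts.items():
    print(f"{name}: n={sum(c.values())}, average={average_score(c):.2f}")

# Pooled average for the two "shared humanity" prompts.
pooled = {s: counts["Human Beings"][s] + counts["Men and Women"][s]
          for s in range(1, 6)}
print(f"Pooled: average={average_score(pooled):.2f}")
```

Running this reproduces the averages reported above (2.87, 2.98, 3.31, and 3.17 for the pooled prompts).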

 

I’ll add some actual math soon, but in this basic data, we see two possible effects:

First, bringing up the foreignness of the people you want to help may be a bad idea. The average score for the “Ethiopians” prompt was ~10% lower than the average score for the other versions. And that’s despite the question naming Ethiopia two sentences earlier!
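The “actual math” isn’t in yet, but the counts alone allow a rough check. The sketch below (my own, not part of the original analysis) treats the 1-to-5 answers as interval data — a strong assumption — and runs a Welch-style two-sample test comparing the “Ethiopians” group against the other two groups pooled:

```python
import math

def summarize(c):
    # c maps score (1-5) -> number of respondents
    n = sum(c.values())
    mean = sum(s * k for s, k in c.items()) / n
    var = sum(k * (s - mean) ** 2 for s, k in c.items()) / (n - 1)
    return n, mean, var

ethiopians = {5: 44, 4: 84, 3: 74, 2: 42, 1: 87}
# "Human Beings" and "Men and Women" counts pooled together.
pooled = {5: 27 + 55, 4: 53 + 65, 3: 32 + 54, 2: 20 + 30, 1: 45 + 35}

n1, m1, v1 = summarize(ethiopians)
n2, m2, v2 = summarize(pooled)

# Welch t statistic; with groups this large, a normal approximation
# to the two-sided p-value is close enough for a sanity check.
t = (m2 - m1) / math.sqrt(v1 / n1 + v2 / n2)
p = 2 * (1 - 0.5 * (1 + math.erf(abs(t) / math.sqrt(2))))
print(f"t = {t:.2f}, two-sided p ~ {p:.4f}")
```

On these numbers the gap looks larger than pure sampling noise, though that says nothing about the design caveats discussed later in this post.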

Second, “men and women” outperformed the other two prompts by a fair margin. I included this prompt in the first place because “human beings” is an unusual, formal way to refer to a group of people. In case respondents got hung up on the strangeness of that phrase, “men and women” was my backup — evoking our shared humanity without sounding clinical.

If this effect is real, and not a statistical fluke, I can think of two reasons that participants cared more about “men and women” than “human beings”:

  1. “Men and women” sounds more natural and (ironically) human than “human beings”.
  2. “Men and women” is a visual phrase. When I hear “men and women”, I have an easier time forming a mental picture of that group than when I hear “human beings”.

I tried to limit the impact of mental visualization by including an actual photo below each question:

Survey Photo

But the photo included men and women, which may have been a mistake. I wouldn’t be surprised if participants subconsciously assumed that the phrase “these men and women” applied to the specific people in the photo.

 

A Brief Guide to Google Consumer Surveys

Google has a one-minute introductory video you should watch.

With GCS, I’m hiring websites to give my questions to their readers. I pay $0.10 for each response (I’d pay more for a longer survey), and buy a certain number of responses at a time. When I’ve gotten the responses I bought for a given question, Google sends me a spreadsheet with all the data.

The respondents are a solid mix of Americans from every region of the country and every income group. Glancing through the detailed survey data, I didn’t see any obvious difference between the demographics of the people answering each question.

 

Be Suspicious of These Results

This isn’t a very good experiment. Any results we see are going to be weak, for the following reasons:

Sample size: 747 participants spread across three groups isn’t terrible, but it isn’t enough to give us much confidence in the results. Another thousand responses could easily erase the effect.

Skewed sample: Our participants were a balanced mix of men and women, and they took the survey in every part of the U.S. They also had a wide range of incomes. This is good: We avoided the problems that can arise when you only survey college students.

On the other hand, all our participants were willing to answer a question about U.S. foreign policy as “payment” to read an article. They may have been more interested in global issues than the average web user, or different in other ways from the 80% of people who chose to ignore the survey and skip the article.

Suspect scale: The possible answers ranged from “extremely important” to “not at all important”. This is totally arbitrary — I could have chosen four answers, or six, and gotten a split of responses leading to a different effect. I also could’ve scored the answers differently; it might have been better to use a 0-to-4 scale instead of 1-to-5.
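One small consolation on the rescoring worry: a linear rescoring like 0-to-4 shifts every group’s mean by the same constant, so it can’t change which prompt comes out ahead. The real arbitrariness lies in the number and wording of the answer options, not in the numbers attached to them. A quick check (counts copied from the results above):

```python
# Counts copied from the results above; keys are 1-to-5 scores.
counts = {
    "Ethiopians":    {5: 44, 4: 84, 3: 74, 2: 42, 1: 87},
    "Human Beings":  {5: 27, 4: 53, 3: 32, 2: 20, 1: 45},
    "Men and Women": {5: 55, 4: 65, 3: 54, 2: 30, 1: 35},
}

def mean(c, shift=0):
    """Weighted mean after adding `shift` to every score."""
    n = sum(c.values())
    return sum((s + shift) * k for s, k in c.items()) / n

for name, c in counts.items():
    # The 0-to-4 mean is always exactly the 1-to-5 mean minus 1,
    # so the gap between any two prompts is unchanged.
    print(name, round(mean(c), 2), round(mean(c, shift=-1), 2))
```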

This scoring system is the only one I’ve used. I didn’t commit a cardinal sin of science and fiddle with the scores to get better results. Still, there’s only a very weak link from these scores to “people actually helping Ethiopians”. Speaking of which…

Silly Scenario: I wrote my senior thesis on studies similar to this one. I was dubious about the value of studies that measured “willingness to help”, rather than actual monetary donations. Talk is cheap, and people who say they’d like the U.S. to fund foreign aid may not be thrilled about paying taxes to make it happen. The answers participants gave to my questions may have no bearing at all on how a “shared humanity” frame would affect their personal giving (or willingness to call their Congressperson and rant about famine relief).

But even if you see participants’ answers as just a loose measure of how “friendly” or “generous” they feel toward Ethiopians, this study could still be a very small step toward a world where Americans give 50% of their charitable donations to overseas causes, rather than 5%.

 

Other notes

I tried to buy the same number of responses for each question, but ended up with very different counts. I’m not sure why; perhaps the response rates differed across the three versions.

I chose “Ethiopia” because it is a country with a history of famine but no recent famines in the news. I wanted people to understand the question easily, but I didn’t want them to be influenced by news stories about an actual aid program.

I chose “$9 million” because I wanted a number that would sound like “enough money to make a difference, but not a lot of money by U.S. government standards”.

 

If you know of any existing research on this effect, or a similar effect, I’d love to hear about it! Leave a comment or send me an email.

I began this research for a class I took in the Yale School of Management (“Consumer Behavior”, with Shane Frederick). Another student, Tom Levene, helped me word the questions and paid for some of the survey responses.

 

Raw Data

Results for “Ethiopians” prompt

Results for “Men and Women” Prompt

Results for “Human Beings” Prompt

 

 
