Talking About Effective Altruism at Parties

I’m part of the effective altruism (EA) movement. We’re people who share a few beliefs:

  1. Value the lives of all people equally, no matter what they look like or where they come from.
  2. When you do something for the sake of other people, try to do the most good you can.
  3. Use research and evidence to make decisions. Support causes and programs with a lot of good evidence behind them.
  4. When you have a choice, compare different options. Don’t just do something because it’s a good idea — make sure there’s no obvious better thing you could be doing instead.

In practice, we give a lot of money to charity. Usually charities that work in countries where people are very poor, like India, Ghana, or Kenya — not the United States or Britain or Japan. We think other people should also do this.

(I’ll skip the complications for now. I’ve been satisfied by the responses I’ve heard to my objections against EA, and I’ll assume that any reader of this piece is at least neutral toward the central ideas of the movement.)

 

Party Conversation

This is a collection of ways to explain EA, or argue that EA is a good idea, in 60 seconds or less. Many are based on real conversations I’ve had. Ideally, you could use them at a party. I plan to, when I move out of Verona to a city with more parties.


Aaron’s Disagreement Principle

Summary

This post is too long, so before you read the rest, here’s a two-sentence summary:

“We like to think that people who disagree with us know less than we do. But we should be careful to remember that they may know more than we do, or simply have different value systems for generating opinions from beliefs.”

Why do we disagree with each other?

This is a stupid question. But it’s not quite as stupid as it sounds. One winner of the Nobel Prize in Economics is famous for proving that people should never disagree with each other.

Okay, okay, it isn’t quite that easy. There are conditions we need to meet first.

The best informal description I’ve heard of Aumann’s Agreement Theorem:

Mutually respectful, honest and rational debaters cannot disagree on any factual matter once they know each other’s [beliefs]. They cannot “agree to disagree”, they can only agree to agree.

Sadly, when Robert Aumann says “rational”, he refers to a formal definition of rationality that applies to zero real humans.

But I think we can simplify his theorem: instead of requiring that “both people are perfectly rational”, we can require that “both people have the same value system”.

 

Value systems: A “value system” is a strategy for turning information into opinions.

A rational person (in the Aumann sense) uses a mathematical formula to form new opinions. But even if most of us don’t use math, we do have value systems. Two Yankees fans, for example, might have similar systems for turning “information about the Yankees’ record” into “opinions about the state of this world”.

With rationality off the table, you could create an absurd value system — e.g. “I value never changing my opinion” — that disproves the simplified theory. But I think it holds for most real value systems.

Beliefs: I’ll use “beliefs”, “information”, and “knowledge” in this essay. They all mean “stuff you think is true”. This is also kind of what “opinions” means. So when a value system “turns information into opinions”, it really “takes stuff you think is true and uses it to generate more true-seeming stuff”.

The language here isn’t very consistent, but I think it’s understandable. Let me know if you disagree!

 

What does it look like when two people with the same value system and set of beliefs disagree with one another? Let’s find out!

 

Hillary Clinton and the Two Monks

This story shows that two people with the same information, and the same value system, cannot disagree.

Two Buddhist monks, Yong and Zong, get into an argument. The monks are twin brothers. They share all the same values. You could ask them an endless series of moral questions, and they wouldn’t disagree on a single answer.

So what are they arguing about? In this case, it’s the Democratic primary elections. Yong plans to vote for Hillary. Zong supports Bernie.

Why are the brothers disagreeing? If they have exactly the same value system, whatever drives Yong to support Hillary should have the same effect on Zong. But at the same time, the fire Berning in Zong’s heart should also be present in the heart of Yong!

The only explanation, says Aumann (well, my Aumann-shaped sock puppet), is that Yong believes something Zong doesn’t believe, or vice-versa.

Here’s what happened: The brothers were watching TV. Zong went to the bathroom. While he was gone, Yong watched a Hillary Clinton campaign commercial. He learned something about Hillary’s time in the Senate, and decided he’d vote for her in the Minnesota primary.

(Yong and Zong live in Minnesota.)

The brothers are no longer in perfect agreement. Discord has crept into their relationship. How can they fix the problem?

Fortunately, the brothers abide by Aumann’s other rules: They are honest and respectful. Yong will not lie to Zong, nor Zong to Yong. And when one brother speaks, the other pays close attention.

As Yong lists his beliefs, one by one, Zong soon discovers what happened:

Yong: Did you know that Hillary Clinton was a senator once?

Zong: No, I did not!

Yong: Ah! I see that we had different knowledge. Do you believe me when I tell you this?

Zong: Of course! We do not lie to each other.

Yong: Will you now vote for Hillary?

Zong: Yes, I will.

A value system is like a machine for turning beliefs into opinions.

Zong had a collection of beliefs about Hillary Clinton that, when fed into the machine, turned into the opinion: “Vote for Bernie!” When Yong added a new belief, the machine did something new and created a pro-Hillary opinion. Since the brothers have the same value system (the same “machine”), they’ll always deal with new beliefs the same way (by forming the same set of opinions).
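The machine metaphor can be made literal with a toy sketch. Here, a value system is a pure function from a set of beliefs to an opinion. All the names and beliefs below are illustrative inventions for this story, not any real model:

```python
def value_system(beliefs: frozenset) -> str:
    """The shared 'machine': same beliefs in, same opinion out."""
    if "Hillary was a senator" in beliefs:
        return "Vote for Hillary!"
    return "Vote for Bernie!"

yong_beliefs = frozenset({"Hillary is running", "Hillary was a senator"})
zong_beliefs = frozenset({"Hillary is running"})

# Different beliefs produce different opinions, even with the same machine.
print(value_system(yong_beliefs))  # Vote for Hillary!
print(value_system(zong_beliefs))  # Vote for Bernie!

# Once Zong learns what Yong knows, agreement is automatic.
zong_beliefs |= {"Hillary was a senator"}
print(value_system(zong_beliefs))  # Vote for Hillary!
```

The point of writing it as a pure function: if the function (values) and the inputs (beliefs) are identical, the outputs (opinions) cannot differ. Disagreement has to come from one of the two.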

 

Again: Why do we disagree with each other?

Now we can answer the question. If two people disagree, they must have different knowledge, different values, or both.

They might also have the same knowledge and the same values, but disagree because they lie to each other or simply don’t listen. This is very sad when it happens, but it doesn’t happen very often.

 

The Terrible Education Debate

I started to think about disagreement because of an argument I watched online. It was a one-sided argument: Vinod Khosla wrote an essay about education, and Kieren McCarthy mocked him.

Neither essay was very good, and I don’t recommend them. Here’s a simple summary:

  • Khosla thinks that education should generally focus on math, science, and current events.
  • McCarthy thinks that education should generally focus on literature and history.

The “sciences vs. humanities” debate is very old, and is one of the best examples I’ve seen of two sides simply talking past one another. It often goes like this:

Sciences: “Einstein is cool! You need science to understand the world! Therefore, children should learn more about math and science. Those Humanities people don’t know that science is important, or else they’d agree with us.”

Humanities: “Shakespeare is cool! You need history to understand the world! Therefore, children should learn more about history and literature. Those Sciences people don’t know that history is important, or else they’d agree with us.”

Most of the loudest voices in this debate belong to reasonable college professors, so I think that nearly everyone on both sides would agree that Shakespeare and Einstein are both cool, and that you need both history and science to understand the world.

So what’s happening? My theory: the two sides simply have different values. On the whole, the scientists believe that a rational/scientific approach to the world is more conducive to students’ well-being than a more humanities-driven approach. The humanities people believe otherwise.

Perhaps Khosla would genuinely prefer a world filled with young scientists to a world filled with young historians, while McCarthy would shudder at the very thought of such a future. If they knew that they had, over the course of their long, full lives, developed totally different worldviews, perhaps they’d simply agree to disagree.

(Not that it’s fair to assume that McCarthy doesn’t know that Khosla has different values. I’m sure he does. But I wouldn’t be surprised if McCarthy thought that Khosla’s values only differ from his own because Khosla didn’t read enough Shakespeare as a child.)

 

Notably, both Khosla and McCarthy were writing essays meant to be read by a collection of (presumably neutral) readers. They weren’t trying to persuade their opponents — they were trying to persuade strangers.

And if the point of your argument is to persuade some neutral third party, it’s a really sharp tactic to pretend you know something the other side doesn’t.

People who know less than you are ignorant fools, and who wants to agree with an ignorant fool? Besides, the ignorant fools must agree with you that school should teach important subjects. If you could only get them into a (history/science) class, they’d learn how important (history/science) is, and then they’d agree with you!

 

More Knowledge, Better Values

There are two good ways to convince a third party that you are on the right side of an argument:

  1. Persuade them that you know more than the other side.
  2. Persuade them that you have “better values” than the other side.

The second one is hard to do, because “better values” are subjective, especially when you don’t know the values of the third party. You don’t want to claim that your opponent is motivated by selfishness if there’s a risk your third party thinks Atlas Shrugged is the greatest book of all time.

The first one is easy to do, because “more knowledge” is generally objective. There are a lot of “value A vs. value B” debates where both sides have a lot of supporters. A debate between “more knowledge” and “less knowledge” tends to be rather one-sided.

I saw this all over the place when I was in college, especially during debates about abortion.

I’d thought of the two sides of that debate as very value-driven: “Sanctity of life” vs. “freedom of choice”. But the students I knew were very thoughtful people, and they knew that pro-choice advocates did not hate babies. They knew that pro-life advocates did not hate freedom.

So instead, I’d see arguments about knowledge.

A pro-choicer would post a link to a study from the Guttmacher Institute with lots of happy numbers about pro-choice healthcare policy. “You can’t argue with the facts!”

Then they would get comments from pro-life friends linking to studies from the Family Research Council with very different numbers: “Facts? What facts were you talking about? Now, these facts here, these are facts.”

It was as though both sides were standing on the roof of the dining hall, shouting: “We know more than they do! They are ignorant fools! If they only knew more, they would surely join us!”

 

The Cheeseburger Mystery

This even happens when people argue about personal habits.

“Did you know that beef production is responsible for (enormous number) percent of our greenhouse gas emissions?”

“Yep.” (Takes bite of cheeseburger)

“Did you know that cows are smarter than (friendly household pet, plural)?”

“Yep.” (Sips from glass of milk, takes bite of cheeseburger)

“Did you know that cows are basically tortured until they die before you eat them?”

“Mhm.” (Finishes chewing) “You know, I was a vegetarian for two years, until I ran into some really serious health issues that went away when I started eating a little bit of red meat each week.”

This is a clear case of a difference in values (personal health vs. sustainability vs. animal suffering). We also had a difference in knowledge, but in this (hypothetical) case, it ran the wrong way for the vegetarian: the meat-eater knew just as much about cows as the vegetarian did, plus some extra knowledge (that not eating cows made them sick).

 

Aaron’s Disagreement Principle

No two people will ever know exactly the same things. And no two people will ever hold exactly the same value system.

Thanks to Aumann, we now know that no two people will ever agree about everything. But if we’re going to disagree, we should at least know why we are disagreeing. Are we really that much smarter, more knowledgeable, better-read than the people who disagree with us? Or have we, over the course of our lives, just developed different values, different “machines” for processing our beliefs?

This leads me to what I’ll call Aaron’s Disagreement Principle:

Just because you disagree with someone, don’t assume you know more than they do.

 

Of course, if we read over that early description of Aumann again, we’ll see something we almost ignored the first time around:

Mutually respectful, honest and rational debaters cannot disagree on any factual matter once they know each other’s opinions. They cannot “agree to disagree”, they can only agree to agree.

If “rational” means “having exactly the same values”, we can’t do it. But we can be respectful and honest when we disagree with someone. If we listen hard enough, and lie seldom enough, we might even start agreeing more.

In my value system, that’s a good thing.

 

GQ Magazine Made Me Very Angry And Now I Am Complaining

It is not easy to make me angry, and it is harder still to make me angry enough that I feel the need to write about how angry I am. This is, I think, the first time I’ve written anything angry on this blog.

But GQ recently did a really good job of making me angry.

Not the entire magazine, but this story, which has inspired me to write my first post with a tag of “outrage”:

https://genius.it/www.gq.com/story/sugar-daddies-explained?

I annotated the story with the Genius Web Annotator, so you can see my notes in the original context, though the context doesn’t make the story any less terrible.


Annotate the Web: Phil Libin and Ezra Klein on Artificial Intelligence

Genius.it is one of the year’s better inventions.

Right now — right this moment — you can turn any web page into a cross between a Kindle book and a page of lyrics on Rap Genius. Other people can read your annotations alongside the article, and add their own comments.

I plan to use this invention often. It’s the best way to deal with the fact that someone is always wrong on the internet.

Below is the first article I’ve “annotated” in this way:

http://genius.it/8074392/www.vox.com/2015/8/12/9143071/evernote-artificial-intelligence?

* * * * *

Ezra Klein and Phil Libin are both smart people. But I think that they make some mistakes in their depiction of how experts on artificial intelligence think about the risks of this powerful technology.


Privileging the Story (Or: Do I Trust Journalism?)

My friend Jack Newsham, a reporter for The Boston Globe, asked a good question on Facebook the other day:

Question for my non-journalist friends: why don’t you trust us? (“Us” being journalists in general. Because poll after poll shows that the overwhelming majority of you don’t.)

My answer turned out long enough for a blog post.

I trust journalists. That is, I trust most people, and I don’t see journalists as being very different from most people on average. I would trust a journalist to watch my laptop in a cafe while I used the bathroom, or to water my plants while I was away on vacation.

Journalism isn’t a person. It is a product, produced by journalists. And as it is practiced, I only half-trust journalism.


Twenty-Four Quotations About The Yale Book Of Quotations

The Yale Daily News Magazine just published my glowing review of The Yale Book of Quotations. I also profiled the book’s creator, Fred Shapiro. This is my last piece of original journalism for any Yale publication.

The article includes an interesting call to action. Fred needs help writing the next edition. If you’d like your favorite quote to end up in a book that sells tens of thousands of copies, read until the end, or just read the pitch right now.

 

Twenty-Four Quotations About the Yale Book of Quotations

“Some books are to be tasted, others to be swallowed, and some few to be chewed and digested.”

 –Francis Bacon, Of Studies

“Dictionary, n. A malevolent literary device for cramping the growth of language and making it hard and inelastic. This dictionary, however, is a most useful work.”

–Ambrose Bierce, The Cynic’s Word Book

 

The Yale Book of Quotations (YBQ) is a magnificent beast of a tome, a rare creature found only in libraries and the homes of the most devoted litterateurs. Most books have one or two quotable lines. The YBQ has over twelve thousand. And though it is 1,100 pages long, it remains, fundamentally, the project of a single man: Fred Shapiro, a librarian at the Yale Law School.


Teach To The Future

I’ve started a new series of blog posts on Applied Sentience: “Teach To The Future”.

Through these posts, I cover subjects like teaching people (especially kids) to write for an online audience:

http://appliedsentience.com/2015/01/09/teach-to-the-future-part-1-how-to-write-for-the-internet

Or teaching people to see through the eyes of other people, in a rigorous and practical way:

http://appliedsentience.com/2015/03/09/school-of-the-future-pt-2-seeing-through-other-eyes/

I care a lot about education, especially since I’ve just received 17 straight years of the stuff. But I think we spend too much time on some subjects and not enough on… well, the subjects I cover in these posts. I don’t know much about pedagogy, but I try to stick to skills I do know. As always, let me know if you have thoughts on how to develop these ideas further.

Bonus: If you teach children and want help figuring out a curriculum based on any of the subjects or lesson plans I describe, I’m happy to help!