Aaron’s Disagreement Principle

Summary

This post is too long, so before you read the rest, here’s a two-sentence summary:

“We like to think that people who disagree with us know less than we do. But we should be careful to remember that they may know more than we do, or simply have different value systems for generating opinions from beliefs.”

Why do we disagree with each other?

This is a stupid question. But it’s not quite as stupid as it sounds. One winner of the Nobel Prize in Economics is famous for proving that people should never disagree with each other.

Okay, okay, it isn’t quite that easy. There are conditions we need to meet first.

The best informal description I’ve heard of Aumann’s Agreement Theorem:

Mutually respectful, honest and rational debaters cannot disagree on any factual matter once they know each other’s [beliefs]. They cannot “agree to disagree”, they can only agree to agree.

Sadly, when Robert Aumann says “rational”, he refers to a formal definition of rationality that applies to zero real humans.
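(For the mathematically inclined: here’s a rough sketch of what the formal result says, as I understand it. This is my paraphrase of Aumann’s 1976 paper, not a quotation, and the symbols are just the standard probability ones.)

    % A paraphrase of Aumann's "agreeing to disagree" result (1976):
    % two agents share a common prior P, each conditions on their own
    % private information I_1 or I_2, and their posterior probabilities
    % for some event E are
    \[
      q_1 = P(E \mid \mathcal{I}_1), \qquad q_2 = P(E \mid \mathcal{I}_2).
    \]
    % If q_1 and q_2 are common knowledge between the two agents,
    % the theorem says the posteriors must coincide:
    \[
      q_1 = q_2.
    \]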

But I think we can make his theorem simpler: Instead of “both people are perfectly rational”, we can say that “both people have the same value system”.

 

Value systems: A “value system” is a strategy for turning information into opinions.

A rational person (in the Aumann sense) uses a mathematical formula to form new opinions. But even if most of us don’t use math, we do have value systems. Two Yankees fans, for example, might have similar systems for turning “information about the Yankees’ record” into “opinions about the state of this world”.

With rationality off the table, you could create an absurd value system — e.g. “I value never changing my opinion” — that disproves the simplified theory. But I think it holds for most real value systems.

Beliefs: I’ll use “beliefs”, “information”, and “knowledge” in this essay. They all mean “stuff you think is true”. This is also kind of what “opinions” means. So when a value system “turns information into opinions”, it really “takes stuff you think is true and uses it to generate more true-seeming stuff”.

The language here isn’t very consistent, but I think it’s understandable. Let me know if you disagree!

 

What does it look like when two people with the same value system and set of beliefs disagree with one another? Let’s find out!

 

Hillary Clinton and the Two Monks

This story illustrates the claim that two people with the same information and the same value system cannot disagree.

Two Buddhist monks, Yong and Zong, get into an argument. The monks are twin brothers. They share all the same values. You could ask them an endless series of moral questions, and they wouldn’t disagree on a single answer.

So what are they arguing about? In this case, it’s the Democratic primary elections. Yong plans to vote for Hillary. Zong supports Bernie.

Why are the brothers disagreeing? If they have exactly the same value system, whatever drives Yong to support Hillary should have the same effect on Zong. But at the same time, the fire Berning in Zong’s heart should also be present in the heart of Yong!

The only explanation, says Aumann (well, my Aumann-shaped sock puppet), is that Yong believes something Zong doesn’t believe, or vice-versa.

Here’s what happened: The brothers were watching TV. Zong went to the bathroom. While he was gone, Yong watched a Hillary Clinton campaign commercial. He learned something about Hillary’s time in the Senate, and decided he’d vote for her in the Minnesota primary.

(Yong and Zong live in Minnesota.)

The brothers are no longer in perfect agreement. Discord has crept into their relationship. How can they fix the problem?

Fortunately, the brothers abide by Aumann’s other rules: They are honest and respectful. Yong will not lie to Zong, nor Zong to Yong. And when one brother speaks, the other pays close attention.

As Yong lists his beliefs, one by one, Zong soon discovers what happened:

Yong: Did you know that Hillary Clinton was a senator once?

Zong: No, I did not!

Yong: Ah! I see that we had different knowledge. Do you believe me when I tell you this?

Zong: Of course! We do not lie to each other.

Yong: Will you now vote for Hillary?

Zong: Yes, I will.

A value system is like a machine for turning beliefs into opinions.

Zong had a collection of beliefs about Hillary Clinton that, when fed into the machine, turned into the opinion: “Vote for Bernie!” When Yong added a new belief, the machine did something new and created a pro-Hillary opinion. Since the brothers have the same value system (the same “machine”), they’ll always deal with new beliefs the same way (by forming the same set of opinions).
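If it helps to see the metaphor spelled out, here’s a tiny sketch in Python. Everything in it is made up for illustration: shared_value_system is a stand-in for the brothers’ shared “machine”, and the belief strings are just labels. The only point is that one machine plus two different belief sets produces two different opinions, and the gap closes the moment the beliefs are shared.

    def shared_value_system(beliefs):
        """The brothers' identical 'machine': a set of beliefs in, one opinion out."""
        if "Hillary served in the Senate" in beliefs:
            return "Vote for Hillary!"
        return "Vote for Bernie!"

    yong_beliefs = {"the primary is coming up", "Hillary served in the Senate"}  # saw the ad
    zong_beliefs = {"the primary is coming up"}                                  # was in the bathroom

    print(shared_value_system(yong_beliefs))  # Vote for Hillary!
    print(shared_value_system(zong_beliefs))  # Vote for Bernie!

    # Yong shares what he learned; same machine, same beliefs, same opinion.
    zong_beliefs |= yong_beliefs
    print(shared_value_system(zong_beliefs))  # Vote for Hillary!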

 

Again: Why do we disagree with each other?

Now we can answer the question. If two people disagree, they must have different knowledge, different values, or both.

They might also have the same knowledge and the same values, but disagree because they lie to each other or simply don’t listen. This is very sad when it happens, but it doesn’t happen very often.

 

The Terrible Education Debate

I started to think about disagreement because of an argument I watched online. It was a one-sided argument: Vinod Khosla wrote an essay about education, and Kieren McCarthy mocked him.

Neither essay was very good, and I don’t recommend them. Here’s a simple summary:

  • Khosla thinks that education should generally focus on math, science, and current events.
  • McCarthy thinks that education should generally focus on literature and history.

The “sciences vs. humanities” debate is very old, and is one of the best examples I’ve seen of two sides simply talking past one another. It often goes like this:

Sciences: “Einstein is cool! You need science to understand the world! Therefore, children should learn more about math and science. Those Humanities people don’t know that science is important, or else they’d agree with us.”

Humanities: “Shakespeare is cool! You need history to understand the world! Therefore, children should learn more about history and literature. Those Sciences people don’t know that history is important, or else they’d agree with us.”

Most of the loudest voices in this debate belong to reasonable college professors, so I think that nearly everyone on both sides would agree that Shakespeare and Einstein are both cool, and that you need both history and science to understand the world.

So what’s happening? My theory: the two sides simply have different values. On the whole, the scientists believe that a rational/scientific approach to the world is more conducive to students’ well-being than a more humanities-driven approach. The humanities people believe otherwise.

Perhaps Khosla would genuinely prefer a world filled with young scientists to a world filled with young historians, while McCarthy would shudder at the very thought of such a future. If they knew that they had, over the course of their long, full lives, developed totally different worldviews, perhaps they’d simply agree to disagree.

(That’s not to say McCarthy doesn’t know that Khosla has different values. I’m sure he does. But I wouldn’t be surprised if McCarthy thought that Khosla’s values only differ from his own because Khosla didn’t read enough Shakespeare as a child.)

 

Notably, both Khosla and McCarthy were writing essays meant to be read by a collection of (presumably neutral) readers. They weren’t trying to persuade their opponents — they were trying to persuade strangers.

And if the point of your argument is to persuade some neutral third party, it’s a really sharp tactic to pretend you know something the other side doesn’t.

People who know less than you are ignorant fools, and who wants to agree with an ignorant fool? Besides, the ignorant fools must agree with you that school should teach important subjects. If you could only get them into a (history/science) class, they’d learn how important (history/science) is, and then they’d agree with you!

 

More Knowledge, Better Values

There are two good ways to convince a third party that you are on the right side of an argument:

  1. Persuade them that you know more than the other side.
  2. Persuade them that you have “better values” than the other side.

The second one is hard to do, because “better values” are subjective, especially when you don’t know the values of the third party. You don’t want to claim that your opponent is motivated by selfishness if there’s a risk your third party thinks Atlas Shrugged is the greatest book of all time.

The first one is easy to do, because “more knowledge” is generally objective. There are a lot of “value A vs. value B” debates where both sides have a lot of supporters. A debate between “more knowledge” and “less knowledge” tends to be rather one-sided.

I saw this all over the place when I was in college, especially during debates about abortion.

I’d thought of the two sides of that debate as very value-driven: “Sanctity of life” vs. “freedom of choice”. But the students I knew were very thoughtful people, and they knew that pro-choice advocates did not hate babies. They knew that pro-life advocates did not hate freedom.

So instead, I’d see arguments about knowledge.

A pro-choicer would post a link to a study from the Guttmacher Institute with lots of happy numbers about pro-choice healthcare policy. “You can’t argue with the facts!”

Then they would get comments from pro-life friends linking to studies from the Family Research Council with very different numbers: “Facts? What facts were you talking about? Now, these facts here, these are facts.”

It was as though both sides were standing on the roof of the dining hall, shouting: “We know more than they do! They are ignorant fools! If they only knew more, they would surely join us!”

 

The Cheeseburger Mystery

This even happens when people argue about personal habits.

“Did you know that beef production is responsible for (enormous number) percent of our greenhouse gas emissions?”

“Yep.” (Takes bite of cheeseburger)

“Did you know that cows are smarter than (friendly household pet, plural)?”

“Yep.” (Sips from glass of milk, takes bite of cheeseburger)

“Did you know that cows are basically tortured until they die before you eat them?”

“Mhm.” (Finishes chewing) “You know, I was a vegetarian for two years, until I ran into some really serious health issues that went away when I started eating a little bit of red meat each week.”

This is a clear case of a difference in values (personal health vs. sustainability vs. animal suffering). There was also a difference in knowledge, but in this (hypothetical) case it wasn’t the one the vegetarian assumed: the meat-eater knew just as much about cows as the vegetarian did, and also knew something extra (that not eating cows made them sick).

 

Aaron’s Disagreement Principle

No two people will ever know exactly the same things. And no two people will ever hold exactly the same value system.

Thanks to Aumann, we now know that no two people will ever agree about everything. But if we’re going to disagree, we should at least know why we are disagreeing. Are we really that much smarter, more knowledgeable, better-read than the people who disagree with us? Or have we, over the course of our lives, just developed different values, different “machines” for processing our beliefs?

This leads me to what I’ll call Aaron’s Disagreement Principle:

Just because you disagree with someone, don’t assume you know more than they do.

 

Of course, if we read over that early description of Aumann again, we’ll see something we almost ignored the first time around:

Mutually respectful, honest and rational debaters cannot disagree on any factual matter once they know each other’s opinions. They cannot “agree to disagree”, they can only agree to agree.

If “rational” means “having exactly the same values”, we can’t do it. But we can be respectful and honest when we disagree with someone. If we listen hard enough, and lie seldom enough, we might even start agreeing more.

In my value system, that’s a good thing.

 
