A Futile Attempt To Review The Book of Disquiet

“This is my most-highlighted book of the year. It is about a man who avoids interacting with other people whenever possible, lives for the sake of his daydreams, and would rather not be alive at all — less because he feels depressed than because life is boring.

“I… still don’t understand why I like this book as much as I do.”

Aaron Gertler, The Best Books of My 2015

 

The Book of Disquiet is remarkably difficult to talk about. And yet, when a stranger messaged me on Facebook because they’d seen that I was a fan, we wound up talking about it for an hour, stumbling around in circles trying to explain the way we felt.

(Reviewing the book is like trying to make up a new language in the middle of a conversation.)

 

The book’s Goodreads entry features nothing but four- and five-star reviews on the first page. The second page, along with lots of additional praise, contains:

  • A single one-star review, which appears to be ironic (“it is the very fact of its valuelessness that gives it its value”).
  • A three-star review where the reviewer becomes furious at Pessoa for writing only half of a brilliant book, when — like a loving parent — they know he could have done better.

It would seem that, for any common definition of “hate”, The Book of Disquiet is almost impossible to hate. And that seems right. Can you hate the air you breathe? Can you hate the ground on which you walk? Can you hate sleep?

Continue reading

CPR: A Heroic Thought Experiment

Imagine that an all-knowing genie manifests in your bedroom.

The genie tells you that sometime in the next ten years, you will have a chance to save a total stranger from dying by performing CPR.

But you don’t know when it will happen, and there’s no guarantee you’ll succeed when the time comes.

How would you respond? How would your life change, from that moment?

Continue reading

Roseites and Bostromites

Epistemic status: Speculation. Grasping at a distinction that might or might not be useful. Playing around with a dichotomy to see what happens.

 

The venture capitalist David Rose once told a group of students (I was there; I don’t think the speech was published) to think about things that “will have to happen” as technology develops, and to create businesses that will enable those things.

For example: If the Internet allows a store to have a near-infinite selection, someone will have to found Amazon.

I recently realized that Rose’s way of thinking parallels the way philosopher Nick Bostrom thinks about the future. As an expert on global catastrophic risk, he asks people to figure out which things will have to not happen in order for humanity to develop, and to create organizations that will prevent those things from happening.

For example: If nuclear war would wipe out civilization, someone (or many someones) will have to ensure that no two nuclear-armed groups ever engage in all-out war.

 

If you were to divide people into two groups — the followers of David Rose, and those of Nick Bostrom — you’d get what I call “Roseites” and “Bostromites”.

Roseites try to make new things exist, to grow the economy, and to enhance civilization.

Bostromites try to study the impact of new things, to prevent the economy’s collapse, and to preserve civilization.

Continue reading

Aaron’s Disagreement Principle

Summary

This post is too long, so before you read the rest, here’s a two-sentence summary:

“We like to think that people who disagree with us know less than we do. But we should be careful to remember that they may know more than we do, or simply have different value systems for generating opinions from beliefs.”

Why do we disagree with each other?

This is a stupid question. But it’s not quite as stupid as it sounds. One winner of the Nobel Prize in Economics is famous for proving that people should never disagree with each other.

Okay, okay, it isn’t quite that easy. There are conditions we need to meet first.

The best informal description I’ve heard of Aumann’s Agreement Theorem:

Mutually respectful, honest and rational debaters cannot disagree on any factual matter once they know each other’s [beliefs]. They cannot “agree to disagree”, they can only agree to agree.

Sadly, when Robert Aumann says “rational”, he refers to a formal definition of rationality that applies to zero real humans.

But I think we can make his theory simpler: Instead of “both people are perfectly rational”, we can say that “both people have the same value system”.
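
(For the curious, here’s a compact paraphrase of the formal result, in my wording rather than Aumann’s: if two people share a common prior probability P, and their posterior probabilities for some event A, q1 = P(A | the first person’s information) and q2 = P(A | the second person’s information), are common knowledge between them, then q1 = q2. “Same value system” is my looser stand-in for those formal conditions.)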

 

Value systems: A “value system” is a strategy for turning information into opinions.

A rational person (in the Aumann sense) uses a mathematical formula to form new opinions. But even if most of us don’t use math, we do have value systems. Two Yankees fans, for example, might have similar systems for turning “information about the Yankees’ record” into “opinions about the state of this world”.

With rationality off the table, you could create an absurd value system — e.g. “I value never changing my opinion” — that disproves the simplified theory. But I think it holds for most real value systems.

Beliefs: I’ll use “beliefs”, “information”, and “knowledge” in this essay. They all mean “stuff you think is true”. This is also kind of what “opinions” means. So when a value system “turns information into opinions”, it really “takes stuff you think is true and uses it to generate more true-seeming stuff”.

The language here isn’t very consistent, but I think it’s understandable. Let me know if you disagree!

 

What does it look like when two people with the same value system and set of beliefs disagree with one another? Let’s find out!

 

Hillary Clinton and the Two Monks

This story shows that two people with the same information, and the same value system, cannot disagree.

Two Buddhist monks, Yong and Zong, get into an argument. The monks are twin brothers. They share all the same values. You could ask them an endless series of moral questions, and they wouldn’t disagree on a single answer.

So what are they arguing about? In this case, it’s the Democratic primary elections. Yong plans to vote for Hillary. Zong supports Bernie.

Why are the brothers disagreeing? If they have exactly the same value system, whatever drives Yong to support Hillary should have the same effect on Zong. But at the same time, the fire Berning in Zong’s heart should also be present in the heart of Yong!

The only explanation, says Aumann (well, my Aumann-shaped sock puppet), is that Yong believes something Zong doesn’t believe, or vice-versa.

Here’s what happened: The brothers were watching TV. Zong went to the bathroom. While he was gone, Yong watched a Hillary Clinton campaign commercial. He learned something about Hillary’s time in the Senate, and decided he’d vote for her in the Minnesota primary.

(Yong and Zong live in Minnesota.)

The brothers are no longer in perfect agreement. Discord has crept into their relationship. How can they fix the problem?

Fortunately, the brothers abide by Aumann’s other rules: They are honest and respectful. Yong will not lie to Zong, nor Zong to Yong. And when one brother speaks, the other pays close attention.

As Yong lists his beliefs, one by one, Zong soon discovers what happened:

Yong: Did you know that Hillary Clinton was a senator once?

Zong: No, I did not!

Yong: Ah! I see that we had different knowledge. Do you believe me when I tell you this?

Zong: Of course! We do not lie to each other.

Yong: Will you now vote for Hillary?

Zong: Yes, I will.

A value system is like a machine for turning beliefs into opinions.

Zong had a collection of beliefs about Hillary Clinton that, when fed into the machine, turned into the opinion: “Vote for Bernie!” When Yong added a new belief, the machine did something new and created a pro-Hillary opinion. Since the brothers have the same value system (the same “machine”), they’ll always deal with new beliefs the same way (by forming the same set of opinions).
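
If it helps to see that metaphor spelled out, here’s a toy sketch in Python (entirely my own illustration; the beliefs and the rule are invented for the example, not anything Aumann wrote): the “machine” is just a function, and two people who run the same function on the same beliefs cannot end up with different opinions.

# A toy "value system": an ordinary function from a set of beliefs to an opinion.
def monk_value_system(beliefs):
    """The shared machine: turns a set of beliefs into a primary vote."""
    if "Hillary was a senator" in beliefs:
        return "Vote for Hillary!"
    return "Vote for Bernie!"

yong_beliefs = {"The primary is in Minnesota", "Hillary was a senator"}
zong_beliefs = {"The primary is in Minnesota"}

print(monk_value_system(yong_beliefs))  # Vote for Hillary!
print(monk_value_system(zong_beliefs))  # Vote for Bernie!

# Once the brothers honestly share what they know, the disagreement disappears:
zong_beliefs |= yong_beliefs
print(monk_value_system(zong_beliefs))  # Vote for Hillary!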

 

Again: Why do we disagree with each other?

Now we can answer the question. If two people disagree, they must have different knowledge, different values, or both.

They might also have the same knowledge and the same values, but disagree because they lie to each other or simply don’t listen. This is very sad when it happens, but it doesn’t happen very often.

 

The Terrible Education Debate

I started to think about disagreement because of an argument I watched online. It was a one-sided argument: Vinod Khosla wrote an essay about education, and Kieren McCarthy mocked him.

Neither essay was very good, and I don’t recommend them. Here’s a simple summary:

  • Khosla thinks that education should generally focus on math, science, and current events.
  • McCarthy thinks that education should generally focus on literature and history.

The “sciences vs. humanities” debate is very old, and is one of the best examples I’ve seen of two sides simply talking past one another. It often goes like this:

Sciences: “Einstein is cool! You need science to understand the world! Therefore, children should learn more about math and science. Those Humanities people don’t know that science is important, or else they’d agree with us.”

Humanities: “Shakespeare is cool! You need history to understand the world! Therefore, children should learn more about history and literature. Those Sciences people don’t know that history is important, or else they’d agree with us.”

Most of the loudest voices in this debate belong to reasonable college professors, so I think that nearly everyone on both sides would agree that Shakespeare and Einstein are both cool, and that you need both history and science to understand the world.

So what’s happening? My theory: the two sides simply have different values. On the whole, the scientists believe that a rational/scientific approach to the world is more conducive to students’ well-being than a more humanities-driven approach. The humanities people believe otherwise.

Perhaps Khosla would genuinely prefer a world filled with young scientists to a world filled with young historians, while McCarthy would shudder at the very thought of such a future. If they knew that they had, over the course of their long, full lives, developed totally different worldviews, perhaps they’d simply agree to disagree.

(Not that it’s fair to assume that McCarthy doesn’t know that Khosla has different values. I’m sure he does. But I wouldn’t be surprised if McCarthy thought that Khosla’s values only differ from his own because Khosla didn’t read enough Shakespeare as a child.)

 

Notably, both Khosla and McCarthy were writing essays meant to be read by a collection of (presumably neutral) readers. They weren’t trying to persuade their opponents — they were trying to persuade strangers.

And if the point of your argument is to persuade some neutral third party, it’s a really sharp tactic to pretend you know something the other side doesn’t.

People who know less than you are ignorant fools, and who wants to agree with an ignorant fool? Besides, the ignorant fools must agree with you that school should teach important subjects. If you could only get them into a (history/science) class, they’d learn how important (history/science) is, and then they’d agree with you!

 

More Knowledge, Better Values

There are two good ways to convince a third party that you are on the right side of an argument:

  1. Persuade them that you know more than the other side.
  2. Persuade them that you have “better values” than the other side.

The second one is hard to do, because “better values” are subjective, especially when you don’t know the values of the third party. You don’t want to claim that your opponent is motivated by selfishness if there’s a risk your third party thinks Atlas Shrugged is the greatest book of all time.

The first one is easy to do, because “more knowledge” is generally objective. There are a lot of “value A vs. value B” debates where both sides have a lot of supporters. A debate between “more knowledge” and “less knowledge” tends to be rather one-sided.

I saw this all over the place when I was in college, especially during debates about abortion.

I’d thought of the two sides of that debate as very value-driven: “Sanctity of life” vs. “freedom of choice”. But the students I knew were very thoughtful people, and they knew that pro-choice advocates did not hate babies. They knew that pro-life advocates did not hate freedom.

So instead, I’d see arguments about knowledge.

A pro-choicer would post a link to a study from the Guttmacher Institute with lots of happy numbers about pro-choice healthcare policy. “You can’t argue with the facts!”

Then they would get comments from pro-life friends linking to studies from the Family Research Council with very different numbers: “Facts? What facts were you talking about? Now, these facts here, these are facts.”

It was as though both sides were standing on the roof of the dining hall, shouting: “We know more than they do! They are ignorant fools! If they only knew more, they would surely join us!”

 

The Cheeseburger Mystery

This even happens when people argue about personal habits.

“Did you know that beef production is responsible for (enormous number) percent of our greenhouse gas emissions?”

“Yep.” (Takes bite of cheeseburger)

“Did you know that cows are smarter than (friendly household pet, plural)?”

“Yep.” (Sips from glass of milk, takes bite of cheeseburger)

“Did you know that cows are basically tortured until they die before you eat them?”

“Mhm.” (Finishes chewing) “You know, I was a vegetarian for two years, until I ran into some really serious health issues that went away when I started eating a little bit of red meat each week.”

This is a clear case of a difference in values (personal health vs. sustainability vs. animal suffering). We also had a difference in knowledge — but in this (hypothetical) case, it ran the opposite way from what the vegetarian assumed. The meat-eater knew just as much about cows as the vegetarian did, plus one extra piece of knowledge (that not eating cows made them sick).

 

Aaron’s Disagreement Principle

No two people will ever know exactly the same things. And no two people will ever hold exactly the same value system.

Thanks to Aumann, we now know that no two people will ever agree about everything. But if we’re going to disagree, we should at least know why we are disagreeing. Are we really that much smarter, more knowledgeable, better-read than the people who disagree with us? Or have we, over the course of our lives, just developed different values, different “machines” for processing our beliefs?

This leads me to what I’ll call Aaron’s Disagreement Principle:

Just because you disagree with someone, don’t assume you know more than they do.

 

Of course, if we read over that early description of Aumann again, we’ll see something we almost ignored the first time around:

Mutually respectful, honest and rational debaters cannot disagree on any factual matter once they know each other’s opinions. They cannot “agree to disagree”, they can only agree to agree.

If “rational” means “having exactly the same values”, we can’t do it. But we can be respectful and honest when we disagree with someone. If we listen hard enough, and lie seldom enough, we might even start agreeing more.

In my value system, that’s a good thing.

 

Alpha Gamma Gives A Tour Of The Metropolitan Museum Of Art

Welcome to the Met! My name is Aaron, and I’ll be your tour guide today.

Come again?

Oh! It’s kind of a funny story, actually. I was supervising Finger-Painting Day last week, and this four-year-old spilled yellow paint all over my uniform! It’s still at the cleaners.

Of course they have spare uniforms. But they don’t fit me very well. I have an unusual hip-to-waist ratio. Also, broad shoulders.

Anyway, let’s get started!

Continue reading

Dogs and Existentialism

I have a Tumblr now! I’m still experimenting with using the platform for short essays and thought nuggets. Here’s an essay cross-posted from that Tumblr:

 

The Melancholy of Retrievers

(Wandering philosophy. Not attached to most of these opinions.)

I’m staying for a few weeks in the home of relatives who own a Labrador Retriever. I’ve spent a lot of time around this dog in the last few weeks, after many years of not living with a pet. As a result, everything about the notion of “owning a dog” – or the very existence of domesticated dogs – has become strange to me.

The dog, Jasper, lives to play fetch. When he isn’t sleeping or eating or drinking, he picks up anything he can find and brings it to you so that you can throw it. If you don’t throw it, he’ll try another person. If no one else is around, he’ll pant and whine at you and shove his head between your legs to stare sadly into your eyes until you give up and play fetch.

I’m sure this is normal dog behavior, and it’s the sort of silly thing that people love about dogs. But it makes me wonder how it feels to be Jasper.

Continue reading

Empathy and Heroic Responsibility

(Faithful readers: You can now subscribe to this blog!)

 

My last two posts for Applied Sentience are up:

http://appliedsentience.com/2015/05/29/moral-heroism-pt-1-empathys-faults-heroism-to-the-rescue/

http://appliedsentience.com/2015/07/06/moral-heroism-pt-ii-how-to-become-a-hero-or-at-least-get-started/

Within, I discuss some thoughts I’ve had recently on the problems with empathy, and how we need another layer of moral feeling on top of empathy — for which I borrow the term “heroic responsibility” from Eliezer Yudkowsky — if we want to do good in difficult situations.

The posts total about 2500 words, but this post provides a brief summary.

Continue reading

Alpha Gamma Reviews: Edge 2015

Each year, Edge.org asks a few hundred very smart people how they’d answer a certain question. The results are always a mixed bag, but it’s one of the most exciting mixed bags in the intellectual world.

This year’s question dug into one of my own interests: “What do you think about machines that think?” 

In other words: What does the increasing power of artificial intelligence (AI) mean for humans, for the universe, and for the machines themselves? What will happen if and when AI becomes “general” or “superintelligent”, outperforming humans at almost every task?

The answers to this question would fill a book (and will, since Edge publishes one book each year). But even if you don’t have time to read a book, you should sample the content, because there’s always a ton of interesting material.

This post is my attempt to gather up some of the best answers and individual quotes, while responding to a few misconceptions about AI safety that popped up in the responses.

Continue reading

Interview: Joshua Greene, Moral Tribes

I recently got the chance to interview Joshua Greene, Harvard psychologist and author of Moral Tribes, one of the more interesting pop-psychology books I’ve seen. Greene gets interviewed a lot, so I tried to ask questions he hadn’t heard before. It worked out pretty well!

http://appliedsentience.com/2015/02/20/exploring-our-moral-tribes-interview-w-harvard-psychologist-joshua-greene/

Continue reading