CPR: A Heroic Thought Experiment

Imagine that an all-knowing genie manifests in your bedroom.

The genie tells you that sometime in the next ten years, you will have a chance to save a total stranger from dying by performing CPR.

But you don’t know when it will happen, and there’s no guarantee you’ll succeed when the time comes.

How would you respond? How would your life change, from that moment?


Hacking LinkedIn For Fun (But Not Profit)

In the summer of 2014, I worked at a recruiting firm. This meant that I was on LinkedIn for most of the day, reading thousands of profiles.

LinkedIn profiles aren’t much fun, unless they belong to someone you can’t hire.

(Exhibit 1: The programmer who is so confident and secure in his job that he’s formatted his profile as a Dungeons and Dragons character sheet.)


I can be hired. Sometimes, I even want to be hired. So I can’t totally sabotage my own profile. Still, I wanted to have some fun with LinkedIn.


Roseites and Bostromites

Epistemic status: Speculation. Grasping at a distinction that might or might not be useful. Playing around with a dichotomy to see what happens.


The venture capitalist David Rose once told a group of students (I was there: I don’t think the speech was published) to think about things that “will have to happen” as technology develops, and to create businesses that will enable those things.

For example: If the Internet allows a store to have a near-infinite selection, someone will have to found Amazon.

I recently realized that Rose’s way of thinking parallels the way philosopher Nick Bostrom thinks about the future. As an expert on global catastrophic risk, he asks people to figure out which things will have to not happen in order for humanity to develop, and to create organizations that will prevent those things from happening.

For example: If nuclear war would wipe out civilization, someone (or many someones) will have to ensure that no two nuclear-armed groups ever engage in all-out war.


If you were to divide people into two groups — the followers of David Rose, and those of Nick Bostrom — you’d get what I call “Roseites” and “Bostromites”.

Roseites try to make new things exist, to grow the economy, and to enhance civilization.

Bostromites try to study the impact of new things, to prevent the economy’s collapse, and to preserve civilization.


How To Journal Every Day

I’ve been keeping a journal for the last eight years.

This is one of my best habits: The journal compensates for my awful memory and helps me feel like a complete person with a deep and meaningful history. It reminds me that I’ve spent the last 24 years actually existing, 24 hours at a time. It shows me all the friends I’ve ever had, and all the bad days I’ve put behind me. It’s also fun to read (once enough time has passed, and transient emotions like embarrassment are mostly gone).


Until recently, it was also a pain in the ass.


The Problem

The Microsoft Word file that stores one-sixth of all the words I’ve ever written is called “Daily Journal”. But it’s been a long time since I’ve really kept a daily journal.

Why? It’s not that my life is boring. Well, it is — objectively speaking — but I find it exciting.

One problem is Microsoft Word, which doesn’t perform well with 750,000-word, 1000-page documents, at least on my old machine.

The bigger problem is motivation. Without some kind of external prompt, I found myself forgetting the journal, or skipping it in favor of something more fun — sometimes for weeks at a time.


The Solution

Last year, I switched to an email system. This eliminates the loading times and makes it very easy to finish daily entries. I’ve also begun to ask myself questions, to mitigate the menace of the blank page.

If you’ve ever wanted to journal, or to resume journaling, you can set up this hyper-efficient, automatic system yourself. In ten minutes.
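(To give a flavor of what that can look like: the sketch below, scheduled with cron, emails you a prompt every evening, and your reply becomes the day’s entry. This is an illustration rather than the exact setup from the full post; the address, prompts, and SMTP details are all placeholders.)

```python
# Minimal sketch: email yourself a daily journal prompt.
# Schedule it, e.g. with cron:  0 21 * * * python3 journal_prompt.py
import smtplib
from email.message import EmailMessage

ADDRESS = "me@example.com"  # placeholder address
PROMPTS = [
    "What did you do today?",
    "What surprised you?",
    "What do you want to remember about today?",
]

msg = EmailMessage()
msg["Subject"] = "Daily journal"
msg["From"] = ADDRESS
msg["To"] = ADDRESS
msg.set_content("\n".join(PROMPTS))

# Assumes an SMTP server on localhost; swap in your provider's
# host, port, and credentials if you use Gmail or similar.
with smtplib.SMTP("localhost") as server:
    server.send_message(msg)
```

Since the entries live in your inbox, there’s no thousand-page Word file to load, and the daily email doubles as the external prompt.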


Self-Congratulation and Self-Criticism

Sometimes, I do a good thing. Not a great act of heroism, but a simple, fundamentally decent thing that helps someone else.

When that happens, I congratulate myself for doing the right thing.

Then I criticize myself, since I don’t deserve congratulation for doing the “right thing”. After all, everyone should do the right thing.

Then I congratulate myself for being so humble and morally strict.

Then I criticize myself for bragging about my own humility.

My record for this is four cycles. I almost always stop on self-criticism.

Perhaps there are two kinds of people in the world: People who usually stop at self-congratulation, and people who usually stop at self-criticism.

Which kind of person are you?

20 Alternatives to Punching Nazis


I won’t rehash the Nazi-punching debate that rolled over America last week. Good sources include this, this, and this.

After reading way too many articles on the topic, I still don’t endorse Nazi-punching.

When punching “the right people” becomes an option, the punchers often end up punching a lot of other people. And punching Richard Spencer in particular gives him much more publicity — even sympathy, in some cases — than he’d receive otherwise.

But it’s not helpful just to claim that people shouldn’t do something to Nazis, or to certain other groups who endorse ideas the would-be punchers see as existential threats.*

My views here are closest to those of Darth Oktavia, a longtime anti-fascist who writes:

“The nazis love getting into fights with antifas, because that’s their home territory. What nazis hate is parody […] they could save face with a traditional fight, but they cannot save face by starting a fight with people who are only showing what huge jokes they are.”

So, in the spirit of parody: here are some ideas for bothering Nazis, turning Nazis into laughingstocks, and making Nazis feel terrible — all without leaving bruises, and hopefully without running the risk of a felony assault charge.**



Talking About Effective Altruism at Parties

I’m part of the effective altruism (EA) movement. We’re people who share a few beliefs:

  1. Value the lives of all people equally, no matter what they look like or where they come from.
  2. When you do something for the sake of other people, try to do the most good you can.
  3. Use research and evidence to make decisions. Support causes and programs with a lot of good evidence behind them.
  4. When you have a choice, compare different options. Don’t just do something because it’s a good idea — make sure there’s no obvious better thing you could be doing instead.

In practice, we give a lot of money to charity. Usually charities that work in countries where people are very poor, like India, Ghana, or Kenya — not the United States or Britain or Japan. We think other people should also do this.

(I’ll skip the complications for now. I’ve been satisfied by the responses I’ve heard to my objections against EA, and I’ll assume that any reader of this piece is at least neutral toward the central ideas of the movement.)


Party Conversation

This is a collection of ways to explain EA, or argue that EA is a good idea, in 60 seconds or less. Many are based on real conversations I’ve had. Ideally, you could use them at a party. I plan to, when I move out of Verona to a city with more parties.


Aaron’s Disagreement Principle

Summary

This post is too long, so before you read the rest, here’s a two-sentence summary:

“We like to think that people who disagree with us know less than we do. But we should be careful to remember that they may know more than we do, or simply have different value systems for generating opinions from beliefs.”

Why do we disagree with each other?

This is a stupid question. But it’s not quite as stupid as it sounds. One winner of the Nobel Prize in Economics is famous for proving that people should never disagree with each other.

Okay, okay, it isn’t quite that easy. There are conditions we need to meet first.

The best informal description I’ve heard of Aumann’s Agreement Theorem:

Mutually respectful, honest and rational debaters cannot disagree on any factual matter once they know each other’s [beliefs]. They cannot “agree to disagree”, they can only agree to agree.

Sadly, when Robert Aumann says “rational”, he refers to a formal definition of rationality that applies to zero real humans.
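(For the curious, here’s roughly what the formal version says, in my paraphrase rather than Aumann’s own notation: suppose two people share a common prior $P$ and hold private information $\mathcal{I}_1$ and $\mathcal{I}_2$. If their posterior probabilities for some event $A$,

$$q_1 = P(A \mid \mathcal{I}_1), \qquad q_2 = P(A \mid \mathcal{I}_2),$$

are common knowledge between them, then $q_1 = q_2$. The “common prior” and “common knowledge” conditions are where real humans fall short.)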

But I think we can make his theory simpler: Instead of “both people are perfectly rational”, we can say that “both people have the same value system”.


Value systems: A “value system” is a strategy for turning information into opinions.

A rational person (in the Aumann sense) uses a mathematical formula to form new opinions. But even if most of us don’t use math, we do have value systems. Two Yankees fans, for example, might have similar systems for turning “information about the Yankees’ record” into “opinions about the state of this world”.

With rationality off the table, you could create an absurd value system — e.g. “I value never changing my opinion” — that disproves the simplified theory. But I think it holds for most real value systems.

Beliefs: I’ll use “beliefs”, “information”, and “knowledge” interchangeably in this essay. They all mean “stuff you think is true”. This is also kind of what “opinions” means. So when a value system “turns information into opinions”, it really “takes stuff you think is true and uses it to generate more true-seeming stuff”.

The language here isn’t very consistent, but I think it’s understandable. Let me know if you disagree!


What does it look like when two people with the same value system and set of beliefs disagree with one another? Let’s find out!


Hillary Clinton and the Two Monks

This story shows that two people with the same information, and the same value system, cannot disagree.

Two Buddhist monks, Yong and Zong, get into an argument. The monks are twin brothers. They share all the same values. You could ask them an endless series of moral questions, and they wouldn’t disagree on a single answer.

So what are they arguing about? In this case, it’s the Democratic primary elections. Yong plans to vote for Hillary. Zong supports Bernie.

Why are the brothers disagreeing? If they have exactly the same value system, whatever drives Yong to support Hillary should have the same effect on Zong. But at the same time, the fire Berning in Zong’s heart should also be present in the heart of Yong!

The only explanation, says Aumann (well, my Aumann-shaped sock puppet), is that Yong believes something Zong doesn’t believe, or vice-versa.

Here’s what happened: The brothers were watching TV. Zong went to the bathroom. While he was gone, Yong watched a Hillary Clinton campaign commercial. He learned something about Hillary’s time in the Senate, and decided he’d vote for her in the Minnesota primary.

(Yong and Zong live in Minnesota.)

The brothers are no longer in perfect agreement. Discord has crept into their relationship. How can they fix the problem?

Fortunately, the brothers abide by Aumann’s other rules: They are honest and respectful. Yong will not lie to Zong, nor Zong to Yong. And when one brother speaks, the other pays close attention.

As Yong lists his beliefs, one by one, Zong soon discovers what happened:

Yong: Did you know that Hillary Clinton was a senator once?

Zong: No, I did not!

Yong: Ah! I see that we had different knowledge. Do you believe me when I tell you this?

Zong: Of course! We do not lie to each other.

Yong: Will you now vote for Hillary?

Zong: Yes, I will.

A value system is like a machine for turning beliefs into opinions.

Zong had a collection of beliefs about Hillary Clinton that, when fed into the machine, turned into the opinion: “Vote for Bernie!” When Yong added a new belief, the machine did something new and created a pro-Hillary opinion. Since the brothers have the same value system (the same “machine”), they’ll always deal with new beliefs the same way (by forming the same set of opinions).
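If it helps to see the machine literally, here’s a toy version as code. The voting rule is made up purely for illustration; nothing below comes from Aumann.

```python
# Toy model: a "value system" is a function from beliefs to an opinion.
def value_system(beliefs):
    # Made-up rule: vote for whichever candidate more beliefs mention.
    pro_hillary = sum(1 for b in beliefs if "Hillary" in b)
    pro_bernie = sum(1 for b in beliefs if "Bernie" in b)
    return "Vote for Hillary!" if pro_hillary > pro_bernie else "Vote for Bernie!"

yong = {"Bernie fights for the poor",
        "Hillary was a senator",
        "Hillary got things done in the Senate"}
zong = {"Bernie fights for the poor"}

print(value_system(yong))  # Vote for Hillary!
print(value_system(zong))  # Vote for Bernie!

# Yong shares what he saw on TV. Same machine, same inputs, same output:
zong |= yong
print(value_system(zong))  # Vote for Hillary!
```

Identical machines fed identical beliefs cannot produce different opinions; that’s the whole content of the simplified theory.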


Again: Why do we disagree with each other?

Now we can answer the question. If two people disagree, they must have different knowledge, different values, or both.

They might also have the same knowledge and the same values, but disagree because they lie to each other or simply don’t listen. This is very sad when it happens, but it doesn’t happen very often.


The Terrible Education Debate

I started to think about disagreement because of an argument I watched online. It was a one-sided argument: Vinod Khosla wrote an essay about education, and Kieren McCarthy mocked him.

Neither essay was very good, and I don’t recommend them. Here’s a simple summary:

  • Khosla thinks that education should generally focus on math, science, and current events.
  • McCarthy thinks that education should generally focus on literature and history.

The “sciences vs. humanities” debate is very old, and is one of the best examples I’ve seen of two sides simply talking past one another. It often goes like this:

Sciences: “Einstein is cool! You need science to understand the world! Therefore, children should learn more about math and science. Those Humanities people don’t know that science is important, or else they’d agree with us.”

Humanities: “Shakespeare is cool! You need history to understand the world! Therefore, children should learn more about history and literature. Those Sciences people don’t know that history is important, or else they’d agree with us.”

Most of the loudest voices in this debate belong to reasonable college professors, so I think that nearly everyone on both sides would agree that Shakespeare and Einstein are both cool, and that you need both history and science to understand the world.

So what’s happening? My theory: the two sides simply have different values. On the whole, the scientists believe that a rational/scientific approach to the world is more conducive to students’ well-being than a more humanities-driven approach. The humanities people believe otherwise.

Perhaps Khosla would genuinely prefer a world filled with young scientists to a world filled with young historians, while McCarthy would shudder at the very thought of such a future. If they knew that they had, over the course of their long, full lives, developed totally different worldviews, perhaps they’d simply agree to disagree.

(That’s not to say McCarthy doesn’t know that Khosla has different values; I’m sure he does. But I wouldn’t be surprised if McCarthy thought that Khosla’s values only differ from his own because Khosla didn’t read enough Shakespeare as a child.)


Notably, both Khosla and McCarthy were writing essays meant to be read by a collection of (presumably neutral) readers. They weren’t trying to persuade their opponents — they were trying to persuade strangers.

And if the point of your argument is to persuade some neutral third party, it’s a really sharp tactic to pretend you know something the other side doesn’t.

People who know less than you are ignorant fools, and who wants to agree with an ignorant fool? Besides, the ignorant fools must agree with you that school should teach important subjects. If you could only get them into a (history/science) class, they’d learn how important (history/science) is, and then they’d agree with you!


More Knowledge, Better Values

There are two good ways to convince a third party that you are on the right side of an argument:

  1. Persuade them that you know more than the other side.
  2. Persuade them that you have “better values” than the other side.

The second one is hard to do, because “better values” are subjective, especially when you don’t know the values of the third party. You don’t want to claim that your opponent is motivated by selfishness if there’s a risk your third party thinks Atlas Shrugged is the greatest book of all time.

The first one is easy to do, because “more knowledge” is generally objective. There are a lot of “value A vs. value B” debates where both sides have a lot of supporters. A debate between “more knowledge” and “less knowledge” tends to be rather one-sided.

I saw this all over the place when I was in college, especially during debates about abortion.

I’d thought of the two sides of that debate as very value-driven: “Sanctity of life” vs. “freedom of choice”. But the students I knew were very thoughtful people, and they knew that pro-choice advocates did not hate babies. They knew that pro-life advocates did not hate freedom.

So instead, I’d see arguments about knowledge.

A pro-choicer would post a link to a study from the Guttmacher Institute with lots of happy numbers about pro-choice healthcare policy. “You can’t argue with the facts!”

Then they would get comments from pro-life friends linking to studies from the Family Research Council with very different numbers: “Facts? What facts were you talking about? Now, these facts here, these are facts.”

It was as though both sides were standing on the roof of the dining hall, shouting: “We know more than they do! They are ignorant fools! If they only knew more, they would surely join us!”


The Cheeseburger Mystery

This even happens when people argue about personal habits.

“Did you know that beef production is responsible for (enormous number) percent of our greenhouse gas emissions?”

“Yep.” (Takes bite of cheeseburger)

“Did you know that cows are smarter than (friendly household pet, plural)?”

“Yep.” (Sips from glass of milk, takes bite of cheeseburger)

“Did you know that cows are basically tortured until they die before you eat them?”

“Mhm.” (Finishes chewing) “You know, I was a vegetarian for two years, until I ran into some really serious health issues that went away when I started eating a little bit of red meat each week.”

This is a clear case of a difference in values (personal health vs. sustainability vs. animal suffering). There was also a difference in knowledge, but it ran in the opposite direction from the one the vegetarian assumed: the meat-eater knew just as much about cows as the vegetarian did, plus something extra (that not eating cows made them sick).


Aaron’s Disagreement Principle

No two people will ever know exactly the same things. And no two people will ever hold exactly the same value system.

Thanks to Aumann, we now know that no two people will ever agree about everything. But if we’re going to disagree, we should at least know why we are disagreeing. Are we really that much smarter, more knowledgeable, better-read than the people who disagree with us? Or have we, over the course of our lives, just developed different values, different “machines” for processing our beliefs?

This leads me to what I’ll call Aaron’s Disagreement Principle:

Just because you disagree with someone, don’t assume you know more than they do.


Of course, if we read over that early description of Aumann again, we’ll see something we almost ignored the first time around:

Mutually respectful, honest and rational debaters cannot disagree on any factual matter once they know each other’s opinions. They cannot “agree to disagree”, they can only agree to agree.

If “rational” means “having exactly the same values”, we can’t do it. But we can be respectful and honest when we disagree with someone. If we listen hard enough, and lie seldom enough, we might even start agreeing more.

In my value system, that’s a good thing.