20 Alternatives to Punching Nazis


I won’t rehash the Nazi-punching debate that rolled over America last week. Good sources include this, this, and this.

After reading way too many articles on the topic, I still don’t endorse Nazi-punching.

When punching “the right people” becomes an option, the punchers often end up punching a lot of other people. And punching Richard Spencer in particular gives him much more publicity — even sympathy, in some cases — than he’d receive otherwise.

But it’s not helpful just to claim people shouldn’t do something to Nazis. Or to certain other groups of people who endorse ideas they see as existential threats.*

My views here are closest to those of Darth Oktavia, a longtime anti-fascist who writes:

“The nazis love getting into fights with antifas, because that’s their home territory. What nazis hate is parody […] they could save face with a traditional fight, but they cannot save face by starting a fight with people who are only showing what huge jokes they are.”

So, in the spirit of parody: here are some ideas for bothering Nazis, turning Nazis into laughingstocks, and making Nazis feel terrible — all without leaving bruises, and hopefully without running the risk of a felony assault charge.**



The Best Books of My 2016

This was a good year for reading, since I spent it sitting with my Kindle on airplanes. (Kindles are great — like tablets, but without all those fussy little apps that distract you from reading.)

Of the ~150 books I read this year, these are the ones that come to mind when I think of the word “best”. They are very different, and you won’t like all of them, but they all do something well.

For a list of every book I remember reading, check my Goodreads account.

Best List of All the Books

In no particular order, save for the first four, which I liked most of all.

  1. Remembrance of Earth’s Past (series, all three books)
  2. The Steerswoman (series, all four books)
  3. Chasing the Scream
  4. Rationality: From AI to Zombies
  5. The Last Samurai
  6. Axiomatic
  7. The Fifth Season
  8. The Found and the Lost
  9. The Future and Its Enemies
  10. Evicted
  11. On the Run
  12. Conundrum
  13. The Thrilling Adventures of Lovelace and Babbage
  14. The Partly Cloudy Patriot
  15. Sustainable Energy – Without the Hot Air
  16. Machete Season
  17. How to Get Filthy Rich in Rising Asia

 


Daniel Radcliffe Memorizes the Lyrics to “Alphabet Aerobics”

This is a work of fiction. All characters appearing in this work are fictitious. Any resemblance to persons living or dead is completely intentional. Except for Emma Watson, who seems like a perfectly nice woman. Inspired by One More Thing.

Azkaban

“Artificial amateurs aren’t at all amazing. Analytically, I assault and amaze…”

Daniel Radcliffe pressed “pause”, then “back”. He glared balefully at his iPod.

“No! That’s not right.”

He pressed “play”. The song began again:

“Now it’s time for our wrap-up. Let’s give it everything we’ve got!”

Daniel nodded in time with the beat. This time, he thought, I’ll get past “D”.


Talking About Effective Altruism at Parties

I’m part of the effective altruism (EA) movement. We’re people who share a few beliefs:

  1. Value the lives of all people equally, no matter what they look like or where they come from.
  2. When you do something for the sake of other people, try to do the most good you can.
  3. Use research and evidence to make decisions. Support causes and programs with a lot of good evidence behind them.
  4. When you have a choice, compare different options. Don’t just do something because it’s a good idea — make sure there’s no obvious better thing you could be doing instead.

In practice, we give a lot of money to charity. Usually charities that work in countries where people are very poor, like India, Ghana, or Kenya — not the United States or Britain or Japan. We think other people should also do this.

(I’ll skip the complications for now. I’ve been satisfied by the responses I’ve heard to my objections against EA, and I’ll assume that any reader of this piece is at least neutral toward the central ideas of the movement.)

 

Party Conversation

This is a collection of ways to explain EA, or argue that EA is a good idea, in 60 seconds or less. Many are based on real conversations I’ve had. Ideally, you could use them at a party. I plan to, when I move out of Verona to a city with more parties.


Annotate the Web: March 2016

I use Genius to add comments and context to the articles I read. This is a monthly round-up of articles I did the most Genius-ing on. To see all my annotations, follow me on Genius!

If you like to think while you read, you should get an account and add the Chrome extension. The Internet needs thoughtful people like you!

(Also, without the extension, you may not see the annotations on these articles.)

 

Articles of Note

80 years ago, Harvard had a “Jewish quota”. They used rhetoric about “character” to limit the number of Jews they admitted, in favor of students who weren’t as book-smart but fit the Harvard ideal. Today, the same thing is happening to Asians, for the same reasons.

Controlling for other variables […] Asians need SAT scores 140 points higher than whites, 270 points higher than Hispanics, and an incredible 450 points higher than blacks (out of 1,600 points) to get into these schools. 

If you want to see some ridiculously offensive statements from MIT’s Dean of Admissions, this is the article for you!


Aaron’s Disagreement Principle

Summary

This post is too long, so before you read the rest, here’s a two-sentence summary:

“We like to think that people who disagree with us know less than we do. But we should be careful to remember that they may know more than we do, or simply have different value systems for generating opinions from beliefs.”

Why do we disagree with each other?

This is a stupid question. But it’s not quite as stupid as it sounds. One winner of the Nobel Prize in Economics is famous for proving that people should never disagree with each other.

Okay, okay, it isn’t quite that easy. There are conditions we need to meet first.

The best informal description I’ve heard of Aumann’s Agreement Theorem:

Mutually respectful, honest and rational debaters cannot disagree on any factual matter once they know each other’s [beliefs]. They cannot “agree to disagree”, they can only agree to agree.

Sadly, when Robert Aumann says “rational”, he refers to a formal definition of rationality that applies to zero real humans.

But I think we can make his theorem simpler: Instead of “both people are perfectly rational”, we can say that “both people have the same value system”.

 

Value systems: A “value system” is a strategy for turning information into opinions.

A rational person (in the Aumann sense) uses a mathematical formula to form new opinions. But even if most of us don’t use math, we do have value systems. Two Yankees fans, for example, might have similar systems for turning “information about the Yankees’ record” into “opinions about the state of this world”.

With rationality off the table, you could create an absurd value system — e.g. “I value never changing my opinion” — that disproves the simplified theory. But I think it holds for most real value systems.

Beliefs: I’ll use “beliefs”, “information”, and “knowledge” in this essay. They all mean “stuff you think is true”. This is also kind of what “opinions” means. So when a value system “turns information into opinions”, it really “takes stuff you think is true and uses it to generate more true-seeming stuff”.

The language here isn’t very consistent, but I think it’s understandable. Let me know if you disagree!

 

What does it look like when two people with the same value system and set of beliefs disagree with one another? Let’s find out!

 

Hillary Clinton and the Two Monks

This story shows that two people with the same information, and the same value system, cannot disagree.

Two Buddhist monks, Yong and Zong, get into an argument. The monks are twin brothers. They share all the same values. You could ask them an endless series of moral questions, and they wouldn’t disagree on a single answer.

So what are they arguing about? In this case, it’s the Democratic primary elections. Yong plans to vote for Hillary. Zong supports Bernie.

Why are the brothers disagreeing? If they have exactly the same value system, whatever drives Yong to support Hillary should have the same effect on Zong. But at the same time, the fire Berning in Zong’s heart should also be present in the heart of Yong!

The only explanation, says Aumann (well, my Aumann-shaped sock puppet), is that Yong believes something Zong doesn’t believe, or vice-versa.

Here’s what happened: The brothers were watching TV. Zong went to the bathroom. While he was gone, Yong watched a Hillary Clinton campaign commercial. He learned something about Hillary’s time in the Senate, and decided he’d vote for her in the Minnesota primary.

(Yong and Zong live in Minnesota.)

The brothers are no longer in perfect agreement. Discord has crept into their relationship. How can they fix the problem?

Fortunately, the brothers abide by Aumann’s other rules: They are honest and respectful. Yong will not lie to Zong, nor Zong to Yong. And when one brother speaks, the other pays close attention.

As Yong lists his beliefs, one by one, Zong soon discovers what happened:

Yong: Did you know that Hillary Clinton was a senator once?

Zong: No, I did not!

Yong: Ah! I see that we had different knowledge. Do you believe me when I tell you this?

Zong: Of course! We do not lie to each other.

Yong: Will you now vote for Hillary?

Zong: Yes, I will.

A value system is like a machine for turning beliefs into opinions.

Zong had a collection of beliefs about Hillary Clinton that, when fed into the machine, turned into the opinion: “Vote for Bernie!” When Yong added a new belief, the machine did something new and created a pro-Hillary opinion. Since the brothers have the same value system (the same “machine”), they’ll always deal with new beliefs the same way (by forming the same set of opinions).
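The “machine” metaphor can be sketched in a few lines of code. This is a toy model, not anything from Aumann’s actual proof: the belief names and the voting rule are invented for illustration, and a “value system” is reduced to a pure function from a set of beliefs to an opinion.

```python
# Toy model: a "value system" is a pure function from beliefs to an opinion.
# The belief names and the rule below are invented for illustration.

def value_system(beliefs):
    """The shared 'machine' both brothers use."""
    if "hillary_was_a_senator" in beliefs:
        return "Vote for Hillary!"
    return "Vote for Bernie!"

yong_beliefs = {"hillary_was_a_senator"}  # Yong saw the campaign ad
zong_beliefs = set()                      # Zong was in the bathroom

# Different beliefs fed into the same machine: the brothers disagree.
assert value_system(yong_beliefs) != value_system(zong_beliefs)

# Yong honestly shares what he knows, and Zong listens.
zong_beliefs |= yong_beliefs

# Same beliefs, same machine: agreement is forced.
assert value_system(yong_beliefs) == value_system(zong_beliefs)
```

Because the function is deterministic, identical inputs can only produce identical outputs, which is the whole point of the story.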

 

Again: Why do we disagree with each other?

Now we can answer the question. If two people disagree, they must have different knowledge, different values, or both.

They might also have the same knowledge and the same values, but disagree because they lie to each other or simply don’t listen. This is very sad when it happens, but it doesn’t happen very often.

 

The Terrible Education Debate

I started to think about disagreement because of an argument I watched online. It was a one-sided argument: Vinod Khosla wrote an essay about education, and Kieren McCarthy mocked him.

Neither essay was very good, and I don’t recommend them. Here’s a simple summary:

  • Khosla thinks that education should generally focus on math, science, and current events.
  • McCarthy thinks that education should generally focus on literature and history.

The “sciences vs. humanities” debate is very old, and is one of the best examples I’ve seen of two sides simply talking past one another. It often goes like this:

Sciences: “Einstein is cool! You need science to understand the world! Therefore, children should learn more about math and science. Those Humanities people don’t know that science is important, or else they’d agree with us.”

Humanities: “Shakespeare is cool! You need history to understand the world! Therefore, children should learn more about history and literature. Those Sciences people don’t know that history is important, or else they’d agree with us.”

Most of the loudest voices in this debate belong to reasonable college professors, so I think that nearly everyone on both sides would agree that Shakespeare and Einstein are both cool, and that you need both history and science to understand the world.

So what’s happening? My theory: the two sides simply have different values. On the whole, the scientists believe that a rational/scientific approach to the world is more conducive to students’ well-being than a more humanities-driven approach. The humanities people believe otherwise.

Perhaps Khosla would genuinely prefer a world filled with young scientists to a world filled with young historians, while McCarthy would shudder at the very thought of such a future. If they knew that they had, over the course of their long, full lives, developed totally different worldviews, perhaps they’d simply agree to disagree.

(Not that it’s fair to assume that McCarthy doesn’t know that Khosla has different values. I’m sure he does. But I wouldn’t be surprised if McCarthy thought that Khosla’s values only differ from his own because Khosla didn’t read enough Shakespeare as a child.)

 

Notably, both Khosla and McCarthy were writing essays meant to be read by a collection of (presumably neutral) readers. They weren’t trying to persuade their opponents — they were trying to persuade strangers.

And if the point of your argument is to persuade some neutral third party, it’s a really sharp tactic to pretend you know something the other side doesn’t.

People who know less than you are ignorant fools, and who wants to agree with an ignorant fool? Besides, the ignorant fools must agree with you that school should teach important subjects. If you could only get them into a (history/science) class, they’d learn how important (history/science) is, and then they’d agree with you!

 

More Knowledge, Better Values

There are two good ways to convince a third party that you are on the right side of an argument:

  1. Persuade them that you know more than the other side.
  2. Persuade them that you have “better values” than the other side.

The second one is hard to do, because “better values” are subjective, especially when you don’t know the values of the third party. You don’t want to claim that your opponent is motivated by selfishness if there’s a risk your third party thinks Atlas Shrugged is the greatest book of all time.

The first one is easy to do, because “more knowledge” is generally objective. There are a lot of “value A vs. value B” debates where both sides have a lot of supporters. A debate between “more knowledge” and “less knowledge” tends to be rather one-sided.

I saw this all over the place when I was in college, especially during debates about abortion.

I’d thought of the two sides of that debate as very value-driven: “Sanctity of life” vs. “freedom of choice”. But the students I knew were very thoughtful people, and they knew that pro-choice advocates did not hate babies. They knew that pro-life advocates did not hate freedom.

So instead, I’d see arguments about knowledge.

A pro-choicer would post a link to a study from the Guttmacher Institute with lots of happy numbers about pro-choice healthcare policy. “You can’t argue with the facts!”

Then they would get comments from pro-life friends linking to studies from the Family Research Council with very different numbers: “Facts? What facts were you talking about? Now, these facts here, these are facts.”

It was as though both sides were standing on the roof of the dining hall, shouting: “We know more than they do! They are ignorant fools! If they only knew more, they would surely join us!”

 

The Cheeseburger Mystery

This even happens when people argue about personal habits.

“Did you know that beef production is responsible for (enormous number) percent of our greenhouse gas emissions?”

“Yep.” (Takes bite of cheeseburger)

“Did you know that cows are smarter than (friendly household pet, plural)?”

“Yep.” (Sips from glass of milk, takes bite of cheeseburger)

“Did you know that cows are basically tortured until they die before you eat them?”

“Mhm.” (Finishes chewing) “You know, I was a vegetarian for two years, until I ran into some really serious health issues that went away when I started eating a little bit of red meat each week.”

This is a clear case of a difference in values (personal health vs. sustainability vs. animal suffering). There was also a difference in knowledge, but in this (hypothetical) case it ran the opposite way from what the vegetarian assumed: the meat-eater knew just as much about cows as they did, plus something extra (that not eating cows made them sick).

 

Aaron’s Disagreement Principle

No two people will ever know exactly the same things. And no two people will ever hold exactly the same value system.

Thanks to Aumann, we now know that no two people will ever agree about everything. But if we’re going to disagree, we should at least know why we are disagreeing. Are we really that much smarter, more knowledgeable, better-read than the people who disagree with us? Or have we, over the course of our lives, just developed different values, different “machines” for processing our beliefs?

This leads me to what I’ll call Aaron’s Disagreement Principle:

Just because you disagree with someone, don’t assume you know more than they do.

 

Of course, if we read over that early description of Aumann again, we’ll see something we almost ignored the first time around:

Mutually respectful, honest and rational debaters cannot disagree on any factual matter once they know each other’s opinions. They cannot “agree to disagree”, they can only agree to agree.

If “rational” means “having exactly the same values”, we can’t do it. But we can be respectful and honest when we disagree with someone. If we listen hard enough, and lie seldom enough, we might even start agreeing more.

In my value system, that’s a good thing.

 

Google Experiment #1: Shared Humanity and Willingness to Help

It’s very cheap to experiment on people these days.

For ~$100 and ~5 hours of my time, I used Google Consumer Surveys (GCS) to collect over 800 responses to the following question:

Famine threatens Ethiopia. Thousands of lives are at risk, but U.S. help could save them. How important is it that the U.S. give $9 million in food aid to these Ethiopians?

This wasn’t just curiosity. This was an experiment. My question had three possible endings:

  1. …food aid to these Ethiopians?
  2. …food aid to these men and women?
  3. …food aid to these human beings?
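Here is one way the three framings could be compared once the responses come back. The ratings below are invented placeholders, not the actual GCS results; the real analysis would use the ~800 collected responses.

```python
# Sketch of comparing the three question framings.
# The ratings below are invented placeholders, NOT the real survey data.
from statistics import mean

# Hypothetical importance ratings, 1 ("not important") to 5 ("very important")
responses = {
    "these Ethiopians":    [3, 4, 2, 5, 3],
    "these men and women": [4, 4, 3, 5, 4],
    "these human beings":  [5, 4, 4, 5, 3],
}

means = {framing: mean(ratings) for framing, ratings in responses.items()}
for framing, m in sorted(means.items(), key=lambda kv: -kv[1]):
    print(f"{framing}: mean importance = {m:.2f}")
```

With real data you would also want a significance test across the three groups, since a gap in sample means this small could easily be noise.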

 

We all know that Ethiopians are human beings, of course. But do our actions reflect that knowledge?

At least one study found evidence that we’ll donate more money to help rescue someone from our country than someone from another country. (Kogut & Ritov, 2007)

I have a similar question: Are we more willing to help foreigners when they are framed as our fellow humans, rather than as people from some other country?
