Imagine that an all-knowing genie manifests in your bedroom.
The genie tells you that sometime in the next ten years, you will have a chance to save a total stranger from dying by performing CPR.
But you don’t know when it will happen, and there’s no guarantee you’ll succeed when the time comes.
How would you respond? How would your life change, from that moment?
I don’t get cold anymore.
Epistemic status: Speculation. Grasping at a distinction that might or might not be useful. Playing around with dichotomy to see what happens.
The venture capitalist David Rose once told a group of students (I was there; I don’t think the speech was published) to think about things that “will have to happen” as technology develops, and to create businesses that will enable those things.
For example: If the Internet allows a store to have a near-infinite selection, someone will have to found Amazon.
I recently realized that Rose’s way of thinking parallels the way philosopher Nick Bostrom thinks about the future. As an expert on global catastrophic risk, he asks people to figure out which things will have to not happen in order for humanity to develop, and to create organizations that will prevent those things from happening.
For example: If nuclear war would wipe out civilization, someone (or many someones) will have to ensure that no two nuclear-armed groups ever engage in all-out war.
If you were to divide people into two groups — the followers of David Rose, and those of Nick Bostrom — you’d get what I call “Roseites” and “Bostromites”.
Roseites try to make new things exist, to grow the economy, and to enhance civilization.
Bostromites try to study the impact of new things, to prevent the economy’s collapse, and to preserve civilization.
Why do we disagree with each other?
This is a stupid question. But it’s not quite as stupid as it sounds. One winner of the Nobel Prize in Economics is famous for proving that people should never disagree with each other.
Okay, okay, it isn’t quite that easy. There are conditions we need to meet first.
The best informal description I’ve heard of Aumann’s Agreement Theorem:
Mutually respectful, honest and rational debaters cannot disagree on any factual matter once they know each other’s [beliefs]. They cannot “agree to disagree”, they can only agree to agree.
Sadly, when Robert Aumann says “rational”, he refers to a formal definition of rationality that applies to zero real humans.
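For readers who want the precise version: the theorem is usually stated in terms of common priors and common knowledge, roughly as follows (my paraphrase of the standard formulation, not Aumann’s exact wording):

```latex
% Aumann's Agreement Theorem (informal-formal sketch)
% Suppose agents 1 and 2 are Bayesian reasoners who share a common
% prior probability measure $P$. Each agent updates on her own
% private information. If, for some event $E$, it is common knowledge
% that agent 1's posterior probability of $E$ is $q_1$ and agent 2's
% posterior probability of $E$ is $q_2$, then:
%
\[
    q_1 = q_2
\]
% In words: rational agents with the same prior cannot knowingly
% hold different posteriors about the same event. The heavy lifting
% is done by the assumptions -- a shared prior, perfect Bayesian
% updating, and genuine common knowledge of each other's posteriors.
```

The conclusion follows only when all three assumptions hold, which is why the theorem describes “zero real humans.”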
But I think we can make his theorem simpler: instead of requiring that “both people are perfectly rational”, we can require that “both people have the same value system”.
Welcome to the Met! My name is Aaron, and I’ll be your tour guide today.
Oh! It’s kind of a funny story, actually. I was supervising Finger-Painting Day last week, and this four-year-old spilled yellow paint all over my uniform! It’s still at the cleaners.
Of course they have spare uniforms. But they don’t fit me very well. I have an unusual hip-to-waist ratio. Also, broad shoulders.
Anyway, let’s get started!
I have a Tumblr now! I’m still experimenting with using the platform for short essays and thought nuggets. Here’s an essay cross-posted from that Tumblr:
The Melancholy of Retrievers
(Wandering philosophy. Not attached to most of these opinions.)
I’m staying for a few weeks in the home of relatives who own a Labrador Retriever. I’ve spent a lot of time around this dog, after many years of not living with a pet. As a result, everything about the notion of “owning a dog” – or the very existence of domesticated dogs – has become strange to me.
The dog, Jasper, lives to play fetch. When he isn’t sleeping or eating or drinking, he picks up anything he can find and brings it to you so that you can throw it. If you don’t throw it, he’ll try another person. If no one else is around, he’ll pant and whine at you and shove his head between your legs to stare sadly into your eyes until you give up and play fetch.
I’m sure this is normal dog behavior, and it’s the sort of silly thing that people love about dogs. But it makes me wonder how it feels to be Jasper.
(Faithful readers: You can now subscribe to this blog!)
My last two posts for Applied Sentience are up:
Within, I discuss some thoughts I’ve had recently on the problems with empathy, and how we need another layer of moral feeling on top of empathy — for which I borrow the term “heroic responsibility” from Eliezer Yudkowsky — if we want to do good in difficult situations.
The posts total about 2500 words, but this post provides a brief summary.