Roseites and Bostromites

Epistemic status: Speculation. Grasping at a distinction that might or might not be useful. Playing around with a dichotomy to see what happens.


The venture capitalist David Rose once told a group of students (I was there; I don’t think the speech was published) to think about things that “will have to happen” as technology develops, and to create businesses that will enable those things.

For example: If the Internet allows a store to have a near-infinite selection, someone will have to found Amazon.

I recently realized that Rose’s way of thinking parallels the way philosopher Nick Bostrom thinks about the future. As an expert on global catastrophic risk, he asks people to figure out which things will have to not happen in order for humanity to develop, and to create organizations that will prevent those things from happening.

For example: If nuclear war would wipe out civilization, someone (or many someones) will have to ensure that no two nuclear-armed groups ever engage in all-out war.


If you were to divide people into two groups — the followers of David Rose, and those of Nick Bostrom — you’d get what I call “Roseites” and “Bostromites”.

Roseites try to make new things exist, to grow the economy, and to enhance civilization.

Bostromites try to study the impact of new things, to prevent the economy’s collapse, and to preserve civilization.


Annotate the Web: March 2016

I use Genius to add comments and context to the articles I read. This is a monthly round-up of articles I did the most Genius-ing on. To see all my annotations, follow me on Genius!

If you like to think while you read, you should get an account and add the Chrome extension. The Internet needs thoughtful people like you!

(Also, without the extension, you may not see the annotations on these articles.)


Articles of Note

Eighty years ago, Harvard had a “Jewish quota”: the school used rhetoric about “character” to limit the number of Jews it admitted, in favor of students who weren’t as book-smart but fit the Harvard ideal. Today, the same thing is happening to Asian applicants, for the same reasons.

Controlling for other variables […] Asians need SAT scores 140 points higher than whites, 270 points higher than Hispanics, and an incredible 450 points higher than blacks (out of 1,600 points) to get into these schools. 

If you want to see some ridiculously offensive statements from MIT’s Dean of Admissions, this is the article for you!


Annotate the Web: Phil Libin and Ezra Klein on Artificial Intelligence

Genius.it is one of the year’s better inventions.

Right now — right this moment — you can turn any web page into a cross between a Kindle book and a page of lyrics on Rap Genius. Other people can read your annotations alongside the article, and add their own comments.

I plan to use this invention often. It’s the best way to deal with the fact that someone is always wrong on the Internet.

Below is the first article I’ve “annotated” in this way:

http://genius.it/8074392/www.vox.com/2015/8/12/9143071/evernote-artificial-intelligence?

* * * * *

Ezra Klein and Phil Libin are both smart people. But I think they make some mistakes in depicting how experts on artificial intelligence think about the risks of this powerful technology.


Empathy and Heroic Responsibility

(Faithful readers: You can now subscribe to this blog!)


My last two posts for Applied Sentience are up:

http://appliedsentience.com/2015/05/29/moral-heroism-pt-1-empathys-faults-heroism-to-the-rescue/

http://appliedsentience.com/2015/07/06/moral-heroism-pt-ii-how-to-become-a-hero-or-at-least-get-started/

In them, I discuss some recent thoughts on the problems with empathy, and on how we need another layer of moral feeling on top of it — for which I borrow the term “heroic responsibility” from Eliezer Yudkowsky — if we want to do good in difficult situations.

The posts total about 2,500 words, but this post provides a brief summary.


Alpha Gamma Reviews: Edge 2015

Each year, Edge.org asks a few hundred very smart people how they’d answer a certain question. The results are always a mixed bag, but it’s one of the most exciting mixed bags in the intellectual world.

This year’s question dug into one of my own interests: “What do you think about machines that think?” 

In other words: What does the increasing power of artificial intelligence (AI) mean for humans, for the universe, and for the machines themselves? What will happen if and when AI becomes “general” or “superintelligent”, outperforming humans at almost every task?

The answers to this question would fill a book (and will, since Edge publishes each year’s answers as a book). But even if you don’t have time to read a book, you should sample the content, because there’s always a ton of interesting material.

This post is my attempt to gather up some of the best answers and individual quotes, while responding to a few misconceptions about AI safety that popped up in the responses.


Ten Big Questions

At a recent symposium, social scientists gathered to create a list of “big questions” that might serve as a driving focus for academics in the years to come — inspired in part by David Hilbert’s (largely successful) use of this technique to guide mathematicians with the 23 problems he posed in 1900.

More on the symposium here. The final list of questions is highly informal, but gives us a good idea of what problems are on the minds of very smart people:

1. How can we induce people to look after their health?

2. How do societies create effective and resilient institutions, such as governments?

3. How can humanity increase its collective wisdom?
