I use Genius to add comments and context to the articles I read. This is a monthly round-up of articles I did the most Genius-ing on. To see all my annotations, follow me on Genius!
If you like to think while you read, you should get an account and add the Chrome extension. The Internet needs thoughtful people like you!
(Also, without the extension, you may not see the annotations on these articles.)
Articles of Note
80 years ago, Harvard had a “Jewish quota”. They used rhetoric about “character” to limit the number of Jews they admitted, in favor of students who weren’t as book-smart but fit the Harvard ideal. Today, the same thing is happening to Asians, for the same reasons.
Controlling for other variables […] Asians need SAT scores 140 points higher than whites, 270 points higher than Hispanics, and an incredible 450 points higher than blacks (out of 1,600 points) to get into these schools.
If you want to see some ridiculously offensive statements from MIT’s Dean of Admissions, this is the article for you!
I think that the year’s most amazing invention is Genius.it.
Right now — right this moment — you can turn any web page into a cross between a Kindle book and a page of lyrics on Rap Genius. Other people can read your annotations alongside the article, and add their own comments.
I plan to use this invention a lot. It’s the best way to deal with the fact that someone is always wrong on the internet.
Below is the first article I’ve “annotated” in this way. Read, upvote, and comment!
* * * * *
Ezra Klein and Phil Libin are both remarkably smart people. But I think they make some mistakes in depicting how experts on artificial intelligence think about the risks posed by this powerful technology.
(Faithful readers: You can now subscribe to this blog!)
My last two posts for Applied Sentience are up:
In them, I discuss some recent thoughts on the problems with empathy, and how doing good in difficult situations requires another layer of moral feeling on top of empathy, for which I borrow the term "heroic responsibility" from Eliezer Yudkowsky.
The posts total about 2,500 words; this post provides a brief summary.
Each year, Edge.org asks a few hundred very smart people how they’d answer a certain question. The results are always a mixed bag, but it’s one of the most exciting mixed bags in the intellectual world.
This year’s question dug into one of my own interests: “What do you think about machines that think?”
In other words: What does the increasing power of artificial intelligence (AI) mean for humans, for the universe, and for the machines themselves? What will happen if and when AI becomes “general” or “superintelligent”, outperforming humans at almost every task?
The answers to this question would fill a book (and will, since Edge publishes one book each year). But even if you don’t have time to read a book, you should sample the content, because there’s always a ton of interesting material.
This post is my attempt to gather up some of the best answers and individual quotes, while addressing some of the mistakes that many different thinkers made.
At a recent symposium, social scientists gathered to create a list of “big questions” that might serve as a driving focus for academics in the years to come—inspired in part by David Hilbert’s (largely successful) use of this technique to guide mathematicians.
More on the symposium here. The final list of questions is highly informal, but gives us a good idea of what problems are on the minds of very smart people:
1. How can we induce people to look after their health?
2. How do societies create effective and resilient institutions, such as governments?
3. How can humanity increase its collective wisdom?