If you like to think while you read, you should get an account and add the Chrome extension. The Internet needs thoughtful people like you!
(Also, without the extension, you may not see the annotations on these articles.)
Articles of Note
80 years ago, Harvard had a “Jewish quota”. They used rhetoric about “character” to limit the number of Jews they admitted, in favor of students who weren’t as book-smart but fit the Harvard ideal. Today, the same thing is happening to Asians, for the same reasons.
Controlling for other variables […] Asians need SAT scores 140 points higher than whites, 270 points higher than Hispanics, and an incredible 450 points higher than blacks (out of 1,600 points) to get into these schools.
If you want to see some ridiculously offensive statements from MIT’s Dean of Admissions, this is the article for you!
It’s very cheap to experiment on people these days.
For ~$100 and ~5 hours of my time, I used Google Consumer Surveys (GCS) to collect over 800 responses to the following question:
Famine threatens Ethiopia. Thousands of lives are at risk, but U.S. help could save them. How important is it that the U.S. give $9 million in food aid to these Ethiopians?
This wasn’t just curiosity. This was an experiment. My question had three possible endings:
- …food aid to these Ethiopians?
- …food aid to these men and women?
- …food aid to these human beings?
We all know that Ethiopians are human beings, of course. But do our actions reflect that knowledge?
At least one study found evidence that we’ll donate more money to help rescue someone from our country than someone from another country. (Kogut & Ritov, 2007)
I have a similar question: Are we more willing to help foreigners when they are framed as our fellow humans, rather than as people from some other country?
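Once the responses come back, the natural analysis is a chi-squared test of independence: do answer distributions differ across the three framings? Here's a minimal sketch in pure Python — all of the counts below are made up for illustration, not my actual survey results:

```python
# Chi-squared test of independence on a framing-by-answer contingency
# table. Counts are hypothetical placeholders, NOT real survey data.

def chi_squared(table):
    """Chi-squared statistic for a contingency table (list of row lists)."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (observed - expected) ** 2 / expected
    return stat

# Rows: the three framings; columns: hypothetical answer buckets
# ("very important", "somewhat important", "not important").
counts = [
    [120, 110, 50],  # "...these Ethiopians?"
    [130, 105, 45],  # "...these men and women?"
    [140, 100, 40],  # "...these human beings?"
]

print(round(chi_squared(counts), 3))  # → 3.126
```

With a 3×3 table there are (3−1)×(3−1) = 4 degrees of freedom, so you'd compare the statistic against the chi-squared critical value for 4 df (about 9.49 at p = 0.05) — the fake numbers above wouldn't clear that bar. In practice `scipy.stats.chi2_contingency` does the same computation and returns a p-value directly.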
Each year, Edge.org asks a few hundred very smart people how they’d answer a certain question. The results are always a mixed bag, but it’s one of the most exciting mixed bags in the intellectual world.
This year’s question dug into one of my own interests: “What do you think about machines that think?”
In other words: What does the increasing power of artificial intelligence (AI) mean for humans, for the universe, and for the machines themselves? What will happen if and when AI becomes “general” or “superintelligent”, outperforming humans at almost every task?
The answers to this question would fill a book (and will, since Edge publishes one book each year). But even if you don’t have time to read a book, you should sample the content, because there’s always a ton of interesting material.
This post is my attempt to gather up some of the best answers and individual quotes, while responding to a few misconceptions about AI safety that popped up in the responses.
Sometimes, it’s not just math. It’s personal.
I try to use raw statistics to get a sense of what life is like in other places. This helps me avoid the selective nature of stories, though stories have their place after the numbers are in.
Here, a startling overview from Chris Blattman et al., from a survey of young Liberian men thought to be engaged in criminal behavior:
“On average the men were age 25, had nearly eight years of schooling, earned about $40 in the past month working 46 hours per week (mainly in low skill labor and illicit work), and had $34 saved. 38% were members of an armed group during the two civil wars that ravaged the country between 1989 and 2003. 20% reported selling drugs, 44% reported daily marijuana use, 15% reported daily use of hard drugs, 53% reported stealing something in the past two weeks, and 24% reported they were homeless.”
The entire paper is worth reading, and quite readable. Turns out that people are very honest in answering survey questions about “sensitive” behaviors when those behaviors are the norm within their social groups.
(The paper also provides a good lens for looking at cash transfers. In the hands of a man with $34 in the bank, who earns $40 a month, $500 might be enough to prevent multiple acts of theft or purchase a stable home. On the other hand, I’d guess that these men are more likely to spend some of the money on hard drugs than are families in rural villages.)
At a recent symposium, social scientists gathered to create a list of “big questions” that might serve as a driving focus for academics in the years to come—inspired in part by David Hilbert’s (largely successful) use of this technique to guide mathematicians.
More on the symposium here. The final list of questions is highly informal, but gives us a good idea of what problems are on the minds of very smart people:
1. How can we induce people to look after their health?
2. How do societies create effective and resilient institutions, such as governments?
3. How can humanity increase its collective wisdom?
This is mostly a plug for the wonderful but seemingly abandoned blog Ten Hundred Words of Science, which features academics explaining everything from volcanoes to advanced mathematics using only the thousand most common words in the English language. (“Thousand” is not one of those words.) The whole thing is based on this webcomic.
I recently submitted a new entry, but I don’t think it will ever be published, so I’ve posted it here instead. These 191 words of science are brought to you by the Clinical and Affective Neuroscience Lab.
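The constraint itself is easy to check mechanically: flag every word that isn't on the allowed list. A rough sketch — the tiny `ALLOWED` set here is a stand-in, since I'm not reproducing the real thousand-word list:

```python
# "Up-goer five" style checker: report words not on the allowed list.
# ALLOWED is a placeholder subset; a real checker would load the full
# thousand-most-common-words list.

import re

ALLOWED = {
    "the", "of", "and", "a", "to", "in", "is", "you", "that", "it",
    "he", "was", "for", "on", "are", "as", "with", "his", "they", "i",
    "water", "up", "down", "hot", "not", "very",
}

def disallowed_words(text):
    """Return the distinct words in `text` that are not on the allowed list."""
    words = re.findall(r"[a-z']+", text.lower())
    return sorted({w for w in words if w not in ALLOWED})

print(disallowed_words("The water is very hot, not like a volcano."))
# → ['like', 'volcano']
```

(The webcomic's own word list also allows simple inflections of the base words; handling those would take a small stemming step on top of this.)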
I’ve been part of the Clinical and Affective Neuroscience Lab for the last 14 months.
In that time, I’ve made lots of mistakes—and most of them weren’t even unique, interesting mistakes like discovering penicillin or inventing the chocolate-chip cookie. Mostly they were “should’ve asked more questions”-type mistakes.
That’s kind of embarrassing, so I’ve embarked upon my typical response to mistakes: writing an 18-page guide (unnecessary warning: 18 pages long) to avoiding them, filled with footnotes and jokes and sub-par MS Word design choices.
I also wrote out a one-page version that gives you the most useful information much faster.
I’d like to update both of these documents at some point. I think it’s likely that a great deal of time is wasted on science that doesn’t work because newbies have a tough time adjusting to the laboratory environment, and it would be nice to have a collection of stories from young researchers explaining how to avoid the most avoidable mistakes.
But for now, the guide is extremely specific to my own limited lab experience, and is mostly about filtering through papers rather than conducting physical science. Read it if you’re curious, and stop reading if you stop being curious.
Meanwhile: If you’ve ever done research in any kind of lab, from computer science to chemistry to canine cognition, you should email me and tell me about all the mistakes you made, so I can add them to the next version! (Especially canine cognition. There are no puppies in the current version of the Guide, and there should be at least three.)
You can also tell me about someone else’s mistakes! I will attach no names to anything unless the person who made the mistake wants their name attached for some reason.