My latest post for the humanist blog Applied Sentience is up:
It’s a pretty strange post, but I think the issues it raises around the utility monster problem are important. If you care more about a randomly selected human than about a randomly selected chicken (and I think you should), then you already accept the existence of utility monsters: thinking beings that warrant greater moral consideration than other thinking beings.
Right now, humans are the world’s reigning utility monsters. That may not be true forever.
I think we are likely to eventually create machines that possess a kind of consciousness deeper and richer in certain ways than our own. Whatever metrics we use to measure the “value” of a human life (and we all have them), we know of no reason that advanced computers won’t eventually score higher on those metrics than we do, whether in 50 years or 500.
And before we can decide how to react to this situation, or whether we should work to prevent it in the first place, I think we should do our best to understand what it might be like to be a superhuman utility monster. Empathy shouldn’t be reserved for beings with lesser mental capabilities than our own.