Tuesday, May 8, 2012

How Will Robots Treat Us?

I've been thinking a lot about robot intelligence lately, and a quote from Pete Mandik brought something to the forefront of my mind and crystallized my thoughts for me. You can read the full interview here but the part I'm interested in is this:
I think our best guide to what we should think about any future beings that surpass us is to think about our current attitudes to beings that already surpass us. On the individual level, I’m not bothered, that is, I don’t feel the value sucked out of my life, by the knowledge that there are lots of individuals that are smarter than me. On the species level, I don’t feel that humans are devalued by the knowledge that other species are faster runners, better swimmers, etc. I think, then, by analogy, we should try to take similar attitudes to any post-humans (mechanical or biological) that outperform us. We should continue to value our own lives on our own terms. And also, you know, root for them, since they’ll be our children.
I think his suggestion, that in considering superior beings' potential relationships with us we should think about our relationships with the beings already around us, is a great one. But Pete seems a bit too eager to meet those superior beings for my tastes. I'd like to turn his suggestion in a different direction. Let's think about how those superior beings might treat us, their inferiors, based on how we treat the inferior species we're surrounded by. Yeah, you can see where this is going, right? We squish many types of bugs without a second thought. We subject cows, pigs, and chickens to conditions tantamount to torture in preparation for eating them. Our closest and most respected relatives, the apes, we send into space and subject to medical procedures we wouldn't dream of performing on other humans. Perhaps the best we could hope for is that whatever superior beings we encounter treat us the way we treat cats and dogs: as curiosities to be domesticated and pampered.

I don't think the possibilities are all bad, however. It's possible, especially if our superiors are robots we've programmed, that they won't be able to do anything other than what we ask of them. They might also recognize our strong points and wish to cooperate with us. But I don't like the odds. And I think anyone who longs to hurry along the creation of robots that rival or exceed our abilities is a fool whom we ought to stop with all our power. Unfortunately, I have to agree with Pete when he suggests that it's going to be very, very hard for us as a society to muster the collective wisdom to slow down the research leading in this direction, especially when that research is backed by a lot of people (and corporations, or are they people too?) with very deep pockets.