A couple of days ago, I was toying around with my DVD player, watching a rather wonderful movie titled 2010. For those who haven’t seen it, I heartily recommend it. It is the sort of hard science fiction that is very rarely seen on the screen in this day and age.
What caught my attention this time, however, was a particular scene involving HAL (a sentient computer). Two spaceships are in danger, and the Discovery (the vessel HAL is on) is going to be used as a booster rocket to get the other ship to safety. Right beforehand, the crew has an involved debate over whether or not to tell HAL what is really going on. As one character, Doctor Chandra, puts it, just because HAL is a synthetic life form doesn’t mean he doesn’t have rights (no doubt the fact that HAL also controls the life support systems adds some weight to the matter).
At the time the film was made, this seemed like pure speculation. After all, who would be able to create a true artificial life form? The technology seemed decades into the future, if it were even possible.
Until a couple of months ago, I agreed with that assessment. Now, I’m not so certain. You see, I had the opportunity to play a new game that has taken strategy gamers by storm: Black & White. The premise is quite simple. You are a god, and you have to impress your worshipers (if there has ever been a concept designed for the egomaniac, this must be it). Early in the game, you get a pet creature to do your bidding.
When I actually saw what this creature could do, I was frightened. The coding for simulating its AI was flawless. For all intents and purposes, it acted like a real pet. It started out needing training, which was accomplished by applications of pain and pleasure. It was unpredictable, doing the sorts of things that a real pet would do. It ate, it slept, it even defecated. It was as close to being a true Artificial Intelligence as I have ever seen. I half expected it to wake me up late at night looking for treats.
I think we are now standing on a line that has only been dreamed of. And, I fear what happens should we cross it.
The big question is this: at what point does a simulated AI become so real that it crosses the threshold into being a true AI, an actual artificial life form? And, once that line is crossed, what rights does that life form have?
Imagine a role playing game with such an AI. All the monsters are truly alive and self-aware. It might be an RPGer’s greatest dream; after all, think of the challenge! Every creature having the intelligence of the player! But, it would also be a moralist’s worst nightmare.
If such a creature were created, even though it would consist of ones and zeroes, would it have a soul? Would the slaughter of such creatures constitute mass murder? And what right would we have to hold another intelligent being in computerized slavery?
I would like to think that these questions are still matters for SF authors such as Gregory Benford and Robert J. Sawyer to tinker with. But the existence of the pet creature from Black & White leaves me uncertain. With the computer game industry’s relentless drive towards a perfect AI to challenge the player, such a threshold might easily be crossed, causing the deaths of monsters in a game like Diablo II to become honest-to-God homicide.
However, we haven’t crossed that bridge yet, and hopefully game designers will shy away from leaping over the thin line of AI sentience. For the very first time, I am actually happy that, every now and then, one of the monsters in Diablo II acts like a computerized puppet. After all, the alternative is truly frightening.
Disclaimer: Garwulf’s Corner is written by Robert B. Marks and hosted by Diii.net. The views expressed in this column are those of the author, and are not necessarily the opinions of Diii.net.