In the early years of modern science fiction, the creation of artificial people was almost always presented as an act of hubris, “meddling in things Man was not meant to know”. Sometimes the resulting entities would merely destroy their creators; sometimes they would kill everyone around them, or even all of humanity. This ‘Frankenstein Complex’ dominated depictions of artificial and mechanical life. There were some stories in which artificial organisms were not malevolent abominations, but they were few and far between.
A young man named Isaac Asimov grew weary of this state of affairs. He wrote a short story about a robot named ‘Robbie’ that was made for the sole purpose of taking care of a child, a robot “infinitely more to be trusted than a human being”.
“His entire ‘mentality’ has been created for the purpose. He just can’t help being faithful and loving and kind. He’s a machine — made so.”
Asimov eventually established a set of protective principles that his hypothetical robots would have built directly into them, principles that they would be incapable of disobeying. As he stated:
I began to write a series of stories in which robots were presented sympathetically, as machines that were carefully designed to perform given tasks, with ample safeguards built into them to make them benign.
He expressed those safeguards in three laws:
1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence except where such protection would conflict with the First or Second Law.
As Asimov pointed out in his essay “The Laws of Robotics”, such rules have been in use since the dawn of time, but are considered so self-evident that no one feels the need to state them. Reworded, they become:
1. A tool must be safe to use.
2. A tool must perform its function, provided it does so safely.
3. A tool must remain intact during its use, unless its destruction is required for safety or its destruction is part of its function.
What made Asimov’s work so unusual was that he explicitly stated these rules and postulated fictional worlds in which they were directly built into the foundations of artificial minds; as a result, his hypothetical robots would be incapable of violating them. The implications of the laws and the nuances of their implementation formed the basis for much of Asimov’s writing.
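One way to see what “built into the foundations” might mean is to treat the three laws as a strict priority ordering over possible actions: First Law concerns always dominate Second Law concerns, which always dominate Third Law concerns. The sketch below is purely illustrative and not from Asimov; the `Action` class and its flags are invented for this toy model.

```python
from dataclasses import dataclass

@dataclass
class Action:
    """A candidate action, tagged with the concerns the laws care about.
    These boolean flags are a drastic simplification, for illustration only."""
    name: str
    harms_human: bool = False       # would injure a human being
    ordered_by_human: bool = False  # a human has ordered this action
    harms_self: bool = False        # would damage the robot itself

def choose(actions: list) -> Action:
    """Pick an action by lexicographic priority: First Law outranks
    Second, which outranks Third. Tuples of booleans compare element
    by element, so harming a human (1 in the first slot) can never be
    outweighed by obedience or self-preservation in later slots."""
    return min(actions,
               key=lambda a: (a.harms_human, not a.ordered_by_human, a.harms_self))
```

For example, given a harmless ordered action, a harmful ordered action, and idleness, `choose` selects the harmless ordered one: the First Law vetoes the harmful order, and the Second Law prefers obedience over idling. Likewise an order to self-destruct beats idleness, since the Second Law outranks the Third.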
And they will be the foundation of this ongoing series of posts.