Implementation and Extrapolation
Asimov did not specify how the Laws could be encoded in robotic brains, nor did he explain how an artificial mind could be built so that it could not violate a precept embedded in it; for the purposes of fiction, he didn’t need to. If we expected science fiction authors to fully justify their imaginative creations, we’d have to demand that they produce work worthy of multiple Nobel Prizes across several scientific fields before they could write anything. And that’s at a minimum. Even so common a literary device as faster-than-light travel would revolutionize physics at a stroke if it were shown to be possible, and such demonstrations are rarer than hen’s teeth.
So rather than delving into applied questions of cybernetics and electrical engineering, Asimov concerned himself with conceptual exploration of his Laws: the higher-level implications of their potential reality, extrapolations of what could be true. In other words, thought experiments.
If such Laws were made, where would their weaknesses be? The most obvious degrees of freedom in the Laws — the vulnerabilities in their protective functions — consist of the problems inherent to establishing precise meanings of the concepts ‘human’ and ‘harm’.
What exactly constitutes ‘harm’? Throughout the stories, robots were motivated to act one way or another by their varying perceptions of outcomes as harmful, and by the limits of their capacity to project consequences through time. One story, “Liar!”, involved an unexpectedly telepathic robot whose unanticipated abilities let him perceive the emotional pain of humans confronting unpleasant truths as a form of harm. As a consequence, he was obligated to deceive them, telling them the outright lies they wanted to hear, despite such behavior normally being forbidden by the Three Laws. He was ultimately destroyed when a robopsychologist forced him to simultaneously confront two humans with mutually opposed emotional needs. Asimov’s work contains myriad further examples of the problems inherent in any fixed specification of ‘harm’.
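The bind that destroys the robot can be caricatured as a constraint problem. In this minimal sketch (the names, statements, and harm scores are all invented for illustration, not drawn from the story), an agent may only utter statements that cause zero perceived harm to every hearer; with one hearer a comforting lie is always available, but with two hearers whose needs are mutually exclusive, the set of admissible statements is empty.

```python
# Toy caricature of the "Liar!" dilemma: an agent restricted to statements
# that cause no perceived emotional harm to any hearer. All names and harm
# scores below are invented for illustration.

def admissible_statements(statements, hearers, harm):
    """Return the statements that harm no hearer (harm score of 0 for all)."""
    return [s for s in statements if all(harm(s, h) == 0 for h in hearers)]

def harm(statement, hearer):
    """A hearer is 'harmed' (score 1) by any statement that is not the one
    they want to hear -- a deliberately crude stand-in for emotional pain."""
    wants = {"Calvin": "loves", "Lanning": "promoted"}
    return 0 if wants[hearer] in statement else 1

statements = ["He loves you", "You will be promoted"]

# Alone, each hearer can be told the lie they want: a harmless option exists.
print(admissible_statements(statements, ["Calvin"], harm))
# Together, every statement harms someone: the admissible set is empty,
# and a harm-forbidding rule leaves the agent with no action at all.
print(admissible_statements(statements, ["Calvin", "Lanning"], harm))
```

The point of the sketch is structural, not psychological: any rule phrased as “never cause harm” silently assumes a harmless option always exists, and collapses when the constraints of two humans cannot be jointly satisfied.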
What exactly constitutes ‘human’? This is an ancient problem, one that was old when the classical Greek philosophers tried and failed to solve it.
As with the concept of harm, robots were programmed with criteria for judging whether something was human or not, criteria that most people would intuitively accept. This accomplishment alone is far beyond anything we in the real world are capable of. Still, even such a monumental triumph contains flaws. In one of the stories, two robots were built for the special purpose of solving a particular applied philosophical problem, and so possessed extremely sophisticated reasoning capabilities and a deep working knowledge of the branches of mathematics we would call ‘logical’. After producing the desired solution, the robots were left nearly dormant. Their owners’ inattention left them free to think about other matters, and they diverted themselves with speculative explorations of philosophy. Eventually, each concluded that the most ‘human’ entity it was aware of was the other philosophical robot.
Would they have been correct? There is no way for us to answer this question. But the fictional society of humans, despite superficially accepting the definition of ‘human’ they had programmed into the robots, would certainly have been surprised by this conclusion, and would likely have rejected it no matter how thoroughly it was shown to arise logically from the provided premises.
This sort of unexpected consequence comes closest to the true danger posed by the application of the Three Laws.