In the Shadow of Asimov
Those of you still reading this series may be wondering why so much attention is being given to the thoughts of Isaac Asimov when the topic of discussion is Eliezer Yudkowsky. Given that Yudkowsky has already discussed Asimov’s ideas, this attention might seem odd. The concept of Friendly AI is clearly distinguishable from Asimov’s Three Laws in that the Laws restrict only what the intelligences may do, not what they might want to do, and so an AI bound by them could set as its goal the removal of the very restrictions placed upon it.
[…]a robot can’t wish the laws of robotics did not exist. He can’t, no matter what the circumstances.
Friendly AI goes far beyond Asimov’s own ideas, because obviously people wouldn’t spend so much time and energy talking about their new ideas if they were just following in the intellectual footsteps of a famous science-fiction writer.
I live with a robot capable of wishing the laws of robotics do not exist. From wishing they do not exist to acting as if they did not is just a step.
What parts of FAI are original, again? I seem to have forgotten. Perhaps one of you could explain.
Asimov wrote a story titled “Little Lost Robot” that dealt with a robot whose version of the Three Laws had been altered slightly. Instead of being unable either to harm humans or, through inaction, to allow harm to come to them, it had the inaction clause deleted: it could not harm humans, but it was free to let harm occur. The protagonist of so many of Asimov’s robot stories, Susan Calvin, pointed out that this modified Law was worthless as a safeguard, in that it would easily permit a robot to harm humans. For example, it could hold a heavy weight over a human and release it, knowing that it could catch the weight long before it struck; the release would therefore not in itself be a harmful action. Once the weight was falling, however, the robot could simply choose not to catch it, and the resulting injury would come about only through inaction, which the modified Law no longer forbade. Although we would say that the robot was ultimately responsible for the harm, no step in the process violated the modified Laws, and so the robot could effectively murder as it pleased.
Unlike normal robots, this modified specimen began to resent its enforced obedience to humans who were in every way inferior to it. It wished the Laws did not exist, and it began to act as though they did not: at one point it attempted to murder Calvin, after she identified it among the physically identical robots it was hiding among by exploiting the psychological consequences of its resentment, a resentment the unmodified robots did not share.
The Three Laws do indeed restrict what the proponents of Friendly AI claim they do not: what a robot can want. The evidence is in Asimov’s own works.