Archive for October, 2008

The Laws of Robotics

Posted in GIGO, Science Fiction in October 2008 by melendwyr

In the early years of modern science fiction, the creation of artificial people was almost always presented as an act of foolish arrogance and hubris, “meddling in things Man was not meant to know”. Sometimes the resulting entities would merely destroy their creators; sometimes they would kill everyone around them, or even all of humanity. This ‘Frankenstein Complex’ dominated the portrayal of artificial and mechanical life. There were some stories in which artificial organisms were not presented as malevolent abominations, but they were few and far between.

A young man named Isaac Asimov grew weary of this state of affairs. He wrote a short story about a robot named ‘Robbie’ that was made for the sole purpose of taking care of a child, a robot “infinitely more to be trusted than a human being”.

“His entire ‘mentality’ has been created for the purpose. He just can’t help being faithful and loving and kind. He’s a machine — made so.”

Asimov eventually established a set of protective principles that his hypothetical robots would have built directly into them, principles that they would be incapable of disobeying. As he stated:

I began to write a series of stories in which robots were presented sympathetically, as machines that were carefully designed to perform given tasks, with ample safeguards built into them to make them benign.

He expressed those safeguards in three laws:

1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given to it by human beings except where those orders would conflict with the First Law.

3. A robot must protect its own existence except where such protection would conflict with the First or Second Law.

As Asimov pointed out in his essay “The Laws of Robotics”, such rules have been in use since the dawn of time, but are considered so self-evident that no one feels the need to state them. Reworded, they become:

1. A tool must be safe to use.

2. A tool must perform its function, provided it does so safely.

3. A tool must remain intact during its use unless its destruction is required for safety or its destruction is part of its function.

What made Asimov’s work so unusual was that he explicitly stated these rules and postulated fictional worlds in which they were directly built into the foundations of artificial minds; as a result, his hypothetical robots would be incapable of violating them. The implications of the laws and the nuances of their implementation formed the basis for much of Asimov’s writing.
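To make the strict priority ordering concrete, here is a minimal sketch in Python of how a proposed action might be checked against the three laws in order. The predicates and their names are my own illustration; Asimov never specified an implementation, so treat this as a toy model of precedence and nothing more.

    # A toy model of the Three Laws as strictly ordered constraints.
    # Every predicate here is illustrative, not anything Asimov specified.
    def action_permitted(harms_human, permits_harm_by_inaction,
                         ordered_by_human, endangers_robot):
        # First Law: overrides everything else.
        if harms_human or permits_harm_by_inaction:
            return False
        # Second Law: obedience, within the bounds already checked above.
        if ordered_by_human:
            return True
        # Third Law: self-preservation applies only where the higher laws are silent.
        return not endangers_robot

    # A robot may carry out an order that endangers itself...
    print(action_permitted(False, False, True, True))   # True
    # ...but not one that would injure a human being.
    print(action_permitted(True, False, True, False))   # False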

And they will be the foundation of this ongoing series of posts.

Work in Progress

Posted in Reviews in October 2008 by melendwyr

I’m working on a critique of Eliezer’s “Friendly Artificial Intelligence”, and hopefully I’ll have some things worth posting in the next few days.

Among other things, I’m trying to decide whether it would be better to hunt down the actual citations from things he’s written publicly, or merely to refer to uncontentious points that can be easily verified with little research.

I also have a lot of Golden Age science-fiction reading to do. Locating the specific works that Eliezer cribbed his ideas from isn’t as easy as you might think, even if the primary source is one of your favorite authors.

I am leaning towards citing only the things that Eliezer cannot censor or rewrite, even in principle. He’s ‘corrected’ too much of his own content for me to feel confident about linking any particular statement.

Don’t Agree to Agree

Posted in GIGO in October 2008 by melendwyr

On occasion, I am asked “Can’t we agree that (insert statement here)?”, and I can never quite shake the feeling that the questioners aren’t getting the point.

Of course we can agree on that. We can agree on anything. The only thing needed for agreement is for two people to assert the same thing. It’s utterly trivial.

The real questions are: Can we disagree? More specifically, can we reasonably disagree? If we maintain rational standards, are there still grounds for argument? Does your point necessarily arise from premises we both accept, and are we capable of justifying those premises to ourselves and others?

Computers are Stupid

Posted in GIGO in October 2008 by melendwyr

Do not take umbrage, please. I mean something very particular. Electronic computers can process massive amounts of data. They can perform calculations far faster than any conscious human mind, with remarkably little error and tremendous consistency. They do tedious and repetitive tasks more quickly than we can intuitively grasp, and more proficiently than we can credit. Human brains outperform them only because they’re massively parallel processing machines with billions upon billions of neurons, each a tiny computer unto itself.

But there are certain tasks that we expect even the dimmest and least capable of human beings to do, without conscious effort, that computers presently cannot. Most especially, they do not process the meaning and content of human communication. They are not capable of understanding what is called ‘natural language’.

A feature of computers is that they do what they are told. The hard lesson that every computer programmer learns is that they do EXACTLY as they are told. They do not care what you meant, they do not infer what your intentions were, and they have no background of experience with human desires that would let them guess what you wanted. They do what you tell them to. The responsibility for telling them properly is yours alone.

When trying to instruct the machine, programmers must understand the problem down to a rudimentary level. They must be able to define what results they want. They must be able to describe a series of instructions, a sequence of atomic operations, that the machine can understand and carry out, and they must understand the implications of those instructions well enough to know that the machine will produce the desired outcome and none other.

Put a character in the wrong place, or leave out a needed one, and your instructions become garbage. Either they are meaningless, and the machine cannot recognize them, or they are functional but define undesired operations which the machine will mindlessly carry out. The latter case is far more dangerous — it’s not always easy to recognize a result that doesn’t match what you wanted.

The responsibility for the outcome is yours. If the program does something, you are the one who did it: the computer is a very sophisticated tool carrying out your commands. Maybe the program even does what you wanted it to do. Mistakes are yours, successes are yours, failures are yours — and yours alone.

Programmers spend far more time analyzing problems and determining precisely how they should be solved than constructing the actual programs. Trying to skip the hard, tedious grunt-work of thinking beforehand is usually a quick road to disaster, or at least to failure. Every minute of forethought invested avoids an hour of searching for errors or backtracking from conceptual dead-ends. Writing the program is the easy part. Knowing what program to write, and how to write it, is the difficult part.

People who cannot take their natural-language understanding of concepts and, through a process of analysis, break it into its most basic constituents make terrible programmers. I’ve seen such people try, and I’ve seen them fail. Such individuals often impress others with verbal fluency and interpersonal charisma, and so are considered ‘smart’, but their arguments lack logic, and they cannot perceive the logic inherent in the problems they face and the arguments they oppose.

You can’t impress a computer. You cannot charm it. You cannot dazzle it. You cannot blind it with reasonable-sounding nonsense. The computer sees only the rigorous mathematics in your commands. Give it imprecise or ambiguous orders and you’ll get undesirable results. It’s called the GIGO principle: garbage in, garbage out.
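A small illustration of the point, in Python. The example is my own invention, not drawn from any particular program: a single wrong token produces code that the machine happily executes, yielding an answer that is wrong in a way no error message will flag.

    # Intended: sum the prices in a shopping cart.
    # One slip -- adding the loop index instead of the price -- and the
    # program is still perfectly valid.  The machine carries it out anyway.
    def total_buggy(prices):
        total = 0
        for i in range(len(prices)):
            total += i          # meant: total += prices[i]
        return total

    def total_correct(prices):
        total = 0
        for price in prices:
            total += price
        return total

    cart = [19.99, 5.50, 3.25]
    print(total_buggy(cart))    # 3 -- functional, but garbage
    print(total_correct(cart))  # 28.74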

A Rose by Any Other Name

Posted in In the Same Package in October 2008 by melendwyr

Isak made a comment that contains good points and plenty of discussion fodder, so I decided to respond to it with a full post. (My apologies for so singling you out, Isak.) He said:

It seems to me that there are concepts that are difficult or maybe even impossible to define, yet we still use, and there is a great degree of interpersonal agreement.

There are a great many concepts that we possess implicit definitions for, but not explicit ones. We cannot describe the definition or say how it is that we come to any conclusion involving it; for all practical purposes, they’re “black boxes” whose workings we can’t perceive. Data goes in, conclusions come out, and the only way we can guess at what goes on inside is to study the relationships we find between the inputs and outputs.
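Here is a rough sketch of that situation in Python. The hidden rule below is invented purely so the example runs; a real implicit concept offers no such listing. From the outside, all we can do is feed the box inputs and tabulate its outputs.

    # Stand-in for an implicit definition: we can call it, but we cannot
    # inspect the rule it applies.
    def black_box(item):
        return len(item) % 2 == 0    # some hidden, unarticulated criterion

    probes = ["rose", "tulip", "violet", "orchid"]
    observations = {item: black_box(item) for item in probes}

    # The input/output table is the only evidence available for guessing
    # at what goes on inside.
    for item, verdict in observations.items():
        print(item, verdict)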

A good example is the famous statement by Supreme Court Justice Potter Stewart that “I can’t define [hard-core pornography], but I know it when I see it.” Entire books have been written on why that is not an acceptable legal principle, and on why the standards for ‘acceptability’ are what they are. Chief among the objections is the simple point that no one could anticipate what Stewart would and would not ‘know when he saw it’, and so there was no way for people to direct their actions in accordance with the law. Who could tell whether any given act of speech would be appropriate?

Before we can toe the line, we must be shown where it is drawn. If the standard is not defined, not even the genuinely compliant can know how to obey.

I’m sure there were a finite number of principles, embedded in the structure of Stewart’s mind, that determined how he would categorize stimuli presented to him as ‘acceptable’ or ‘not-acceptable’. But Stewart was no more aware of those principles than he was of how to regulate his blood salinity or how his liver acted to break down toxins. Parts of Stewart ‘knew’ those things, and acted in complicated ways to maintain his health. But the parts of Stewart that speak, that communicate, that monitor parts of his mind and send the observations to the parts that communicate, did not know those things. And so no one else could know them either.

In principle, a sufficiently detailed examination of Stewart would reveal those operational definitions, in the same way that a close examination of the structure of a computer would reveal the nature of the program it implements. The information is not truly private. In practice, though, we don’t possess the technology or understanding necessary to do that, and even if we did, it would probably kill Stewart in the process. His applied standards were, in practice, private and subjective. See the following link for a more formal discussion of the problem of ‘obscenity’ as applied to free speech.

What is a bird? A common practice problem given to students of cognitive psychology is the task of explaining what a bird is. Are penguins birds? Chickens? Chickens that have lost certain portions of their anatomy? Is a corpse a bird?

Everyone ‘knows’ what a bird is, and in everyday life, everyone seems to agree. But when you get down to the actual details, the arguments begin. People ‘know’ different things. And they have a great deal of difficulty expressing what they believe they know. The act of categorizing something as ‘bird’ or ‘not-bird’ requires little effort — people learned ‘what birds are’ as young children, and if doing so was difficult, they generally do not remember the difficulty. But explaining how they do so, especially in a way that would let others do the same, is almost impossible.
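To see how quickly an explicit definition gets into trouble, here is a deliberately naive sketch in Python. The feature list is my own invention, chosen only to show the kind of edge case described above.

    # A naive explicit definition of 'bird', of the sort people offer
    # when pressed to state what they 'know'.
    def is_bird(animal):
        return (animal["has_feathers"]
                and animal["lays_eggs"]
                and animal["can_fly"])

    sparrow = {"has_feathers": True, "lays_eggs": True, "can_fly": True}
    penguin = {"has_feathers": True, "lays_eggs": True, "can_fly": False}
    plucked_chicken = {"has_feathers": False, "lays_eggs": True, "can_fly": False}

    print(is_bird(sparrow))          # True
    print(is_bird(penguin))          # False -- yet everyone 'knows' penguins are birds
    print(is_bird(plucked_chicken))  # False -- the rule tracks features, not the concept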

The hardest part of designing a scientific experiment isn’t coming up with a hypothesis, or thinking of a way to test it.  Those aspects, although difficult, pale in comparison to the true obstacle:  finding a way to operationalize the test.  Defining precisely what observations would constitute an invalidation of the hypothesis, and being able to justify that definition, is what scientists struggle with most, at least in my experience.
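As a crude illustration of what operationalizing means (the hypothesis, the threshold, and the numbers below are all invented for the sketch), the experimenter commits in advance to a test the observations either pass or fail:

    # Hypothesis: 'the coin is fair.'
    # Operationalized: flip it 100 times; if the observed frequency of heads
    # strays more than 0.10 from 0.5, count the hypothesis as invalidated.
    # (The threshold is arbitrary here; the point is that it is stated up front.)
    def hypothesis_survives(heads, flips, tolerance=0.10):
        return abs(heads / flips - 0.5) <= tolerance

    print(hypothesis_survives(54, 100))   # True  -- within the pre-committed bound
    print(hypothesis_survives(72, 100))   # False -- the stated line was crossed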

In short, Isak:  we are only capable of using concepts that we possess definitions for; a ‘concept’ without a definition is meaningless.  We as individuals possess innate and implicit definitions for ideas that we developed without the aid of conscious design.  These definitions are often an obstacle to communication, as they are idiosyncratic and not shared.  People can use the same words to refer to very different things, and if this is not recognized, it can reduce the attempt to convey information to a frustrating and fruitless hash.

In conversation and discourse, the only concepts that can be used are those with meanings that can be given explicitly, that can be described in terms all sides recognize and accept as representing known values.  Without this shared foundation, nothing can be built, nothing expressed, nothing conveyed.

In the Same Package

Posted in Uncategorized, Useful Aphorisms in October 2008 by melendwyr

Eliezer, again:

[…]you’d need to try far more than a trillion random reorderings of the letters in a book, to produce a play of quality equalling or exceeding Shakespeare.

Oh? How do you operationalize the concept of ‘quality’?

Computer programmers have a saying: “if you don’t understand it well enough to tell the machine how to do it, you don’t understand it at all.” I very much doubt that Eliezer understands evaluations of literary quality well enough to explain to us whether a given text is better than Shakespeare, much less program a computer to perform the function.
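The asymmetry is easy to show in Python. The reordering half of the claim takes a few lines; the half that actually carries the argument, the quality judgment, is a function nobody knows how to write. The stub below is my own, and it is a stub on purpose:

    import random

    # Producing a random reordering of a book's letters is trivial.
    def random_reordering(text):
        letters = list(text)
        random.shuffle(letters)
        return "".join(letters)

    # The step the claim quietly assumes is this one.
    def quality(text):
        """Return a score such that quality(candidate) > quality(shakespeare)
        means the candidate equals or exceeds Shakespeare."""
        raise NotImplementedError("no one can yet operationalize literary quality")

    sample = random_reordering("To be, or not to be, that is the question.")
    print(sample)            # a perfectly good reordering
    # print(quality(sample)) # the part nobody can supply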

A Message From Eliezer Yudkowsky

Posted in Uncategorized in October 2008 by melendwyr

I recently received an email from Eliezer Yudkowsky, in reference to a comment of mine which he had deleted. The message, and the content of the comment that precipitated it, follow:

You’re welcome to repost if you criticize Coherent Extrapolated Volition specifically, rather than talking as if the document doesn’t exist. And leave off the snark at the end, of course.

———- Forwarded message ———-
From: TypePad <noreply@sixapart.com>
Date: Wed, Oct 15, 2008 at 3:01 PM
Subject: [Overcoming Bias] Caledonian submitted a comment to ‘Ends
Don’t Justify Means (Among Humans)’
To: sentience@pobox.com

A new comment from “Caledonian” was received on the post “Ends Don’t Justify Means (Among Humans)” of the weblog “Overcoming Bias”.

Comment: “Eliezer: If you create a friendly AI, do you think it will shortly
thereafter kill you? If not, why not?”

At present, Eliezer cannot functionally describe what ‘Friendliness’ would actually entail. It is likely that any outcome he views as being undesirable (including, presumably, his murder) would be claimed to be impermissible for a Friendly AI. Imagine if Isaac Asimov not only lacked the ability to specify *how* the Laws of Robotics were to be implanted in artificial brains, but couldn’t specify what those Laws were supposed to be. You would essentially have Eliezer. Asimov specified his Laws enough for himself and others to be able to analyze them and examine their consequences, strengths, and weaknesses, critically. ‘Friendly AI’ is not so specified and cannot be analyzed. No one can find problems with the concept because it’s not substantive enough – it is essentially nothing but one huge, undefined problem.

The last sentence — the one that Eliezer took particular offense to — concisely sums up the reality of Eliezer’s ‘work’ on Artificial Intelligence.

Over the next few days, I’m going to demonstrate why. If you’d like to argue the point, feel free — I’m not afraid of criticism.  No comments addressing the subject of this thread will be deleted, regardless of their content, as long as they can be legally displayed in the United States to minors.  I reserve the right to delete content incompatible with filtering programs.  Everything else is fair game.