A Message From Eliezer Yudkowski

I recently received an email from Eliezer Yudkowski, in reference to a comment of mine which he had deleted. The message, along with the comment that precipitated it, follows:

You’re welcome to repost if you criticize Coherent Extrapolated Volition specifically, rather than talking as if the document doesn’t exist. And leave off the snark at the end, of course.

———- Forwarded message ———-
From: TypePad <noreply@sixapart.com>
Date: Wed, Oct 15, 2008 at 3:01 PM
Subject: [Overcoming Bias] Caledonian submitted a comment to ‘Ends
Don’t Justify Means (Among Humans)’
To: sentience@pobox.com

A new comment from “Caledonian” was received on the post “Ends Don’t Justify Means (Among Humans)” of the weblog “Overcoming Bias”.

Comment: “Eliezer: If you create a friendly AI, do you think it will shortly
thereafter kill you? If not, why not?”

At present, Eliezer cannot functionally describe what ‘Friendliness’ would actually entail. It is likely that any outcome he views as undesirable (including, presumably, his murder) would be claimed to be impermissible for a Friendly AI. Imagine if Isaac Asimov not only lacked the ability to specify *how* the Laws of Robotics were to be implanted in artificial brains, but couldn’t specify what those Laws were supposed to be. You would essentially have Eliezer. Asimov specified his Laws well enough for himself and others to analyze them and critically examine their consequences, strengths, and weaknesses. ‘Friendly AI’ is not so specified and cannot be analyzed. No one can find problems with the concept because it’s not substantive enough – it is essentially nothing but one huge, undefined problem.

The last sentence — the one that Eliezer took particular offense to — concisely sums up the reality of Eliezer’s ‘work’ on Artificial Intelligence.

Over the next few days, I’m going to demonstrate why. If you’d like to argue the point, feel free — I’m not afraid of criticism.  No comments addressing the subject of this thread will be deleted, regardless of their content, as long as they can be legally displayed in the United States to minors.  I reserve the right to delete content incompatible with filtering programs.  Everything else is fair game.


8 Responses to “A Message From Eliezer Yudkowski”

  1. Wow… it seems things are worse than I had thought over at OB. I often want to respond, at length, to Eliezer’s crazy arguments and streams of thought, but have decided that it would require too much effort – seeing as he somehow manages to write lengthy ones quite rapidly.

    That, and OB would never post a refutation of Eliezer’s arguments, on the grounds that the commenting threads are enough and that they don’t want to confuse their readers. So it would essentially require me to start a blog titled Overcoming Eliezer just to combat his campaign. And while what he’s doing is messed up (cloaking psychotic ideas in “rationality”, deliberately and admittedly targeting young people), I just don’t have the energy and feel it would be fruitless anyway. My thoughts about the future of humanity are not as superficially appealing as his.

    Anyway, what I’d really like to see are posts actually *on* Overcoming Bias, not just in the comments thread, in opposition to Eliezer’s views. The site is like a personal Captain’s Log for him, as is.

  2. Robin Hanson has disagreed with Eliezer in some posts.

    There was already a commenter who went by the title “Overcoming Cryonics”.

  3. There are often people complaining about Eliezer’s posts — either they keep changing their names, or they rarely stay long enough to comment more than a few times.

    But if you do more than complain or state that the arguments are silly — if you actually point out the holes in the reasoning and show *why* they’re silly — your comments will be deleted.

    Far more of my comments have been erased than I’ve managed to archive elsewhere, and many more I’ve found edited after the fact. And the more I stay away from mere ‘snark’ and try to offer serious, thoughtful criticism, the more frequently my comments are edited or deleted.

  4. Z. M. Davis Says:

    Peter, you write that Overcoming Bias would never post a refutation of Eliezer–but I don’t think that’s true. OB is technically a group blog–why not write up one post, email it to Robin, and see what happens? Personally, I’ve found Eliezer’s ideas about rationality and the Singularity very enlightening, but if you think you have a compelling case that they’re actually “psychotic,” I’d be glad to take a look at it, and I’m sure many other readers feel the same.

    Caledonian, I agree that your comment should not have been deleted, but you have such a long history of (you will forgive me, but this is how you are perceived, by myself and many others) being an asshole that it’s not exactly surprising that you’re not given much leeway, even when you’re not being that much of an asshole. Again, sorry, but that’s really how you in fact come off.

    Here’s what I would have written:

    “CEV is underspecified. Without the actual code, human verbal philosophy about what we would want if we thought faster, &c., is not productive. It may be said that the mathematical formalization is a work in progress, but that being the case, serious discussion of what a Friendly AI would or would not do can only take place after the work is completed.”

    Also, it’s “Yudkowsky.” With a y.

  5. Z. M. Davis Says:

    The next day, it occurs to me that I really shouldn’t post while sleepy. I’m sorry.

  6. If the software permitted me to do so, I would embellish that final ‘i’. Perhaps a tiny heart in place of the dot?

  7. I am completely indifferent to how you feel about me personally. I care only whether you evaluate my arguments accurately.

    In fact, I am liable to respect you more if you strongly dislike me but acknowledge the validity of my points. That requires that you actively commit yourself to rationality. Agreeing with me because you liked me would demonstrate nothing.

  8. Of course, if you *really* wanted to impress me, you would hate me personally, but locate a logical flaw in my arguments and explain it in such a manner that no one, including myself, could deny the validity of your reasoning.

    There’s nothing I love more than being corrected *correctly*. I’ve never been able to understand (on an emotional level) why people hate having their errors corrected. I dislike my errors and want to get rid of them, and learning about an error I knew nothing about is extremely valuable.
