Specific Criticism: Watchmakers

For a change, an Overcoming Bias criticism not specifically related to the problems I have with ‘Friendly AI’.

Regarding this thread:

I have no idea what the machine is doing.  I don’t even have a hypothesis as to what it’s doing.  Yet I have recognized the machine as the product of an alien intelligence.

That’s dangerously close to the “Watchmaker Argument” in favor of the existence of God.

Are beaches the product of an alien intelligence?  Some of them are – the ones artificially constructed and maintained by humans.  What about the ‘naturally-occurring’ ones, constructed and maintained by entropy?  Are they evidence for intelligence?  Those grains of sand don’t wear down, and they’re often close to spherical.  Would a visiting UFO pause in awe to recognize beaches as machines with unknown purposes?

My previous comment on the thread was deleted immediately.  We’ll have to see if this version stays up.


15 Responses to “Specific Criticism: Watchmakers”

  1. Z. M. Davis Says:

    Artificial beaches are specifically created to imitate natural beaches, whereas natural geological processes don’t create anything imitative of skyscrapers.

    The case of artificial beaches only demonstrates that you can’t tell the difference between emergent, evolved and engineered things (PDF) with certainty. But you can still make a pretty good guess. [See the first sketch after the responses.]

  2. Your link is broken. It has an extra http.

  3. “whereas natural geological processes don’t create anything imitative of skyscrapers.”

    Not *precisely* true. Both the Giant’s Causeway and geothermal vent towers are rather skyscraper-like in shape if not exactly in size.

    The skyscrapers are a sign of intelligence only because we already know a lot about Earth’s geological processes, and we know there are no unintelligent organisms that make structures out of relatively pure metal alloys.

  4. And *again*, the section of my comment pointing out that this reasoning is dangerously close to the Watchmaker Argument for God has been deleted.

    Rationalists have long acknowledged that order and improbability are not, by themselves, an argument for design. There is no way to tell that something was made by ‘intelligence’ merely by looking at it; it takes an extensive body of knowledge about its environment to determine whether it is likely to have arisen through simple processes.

    A pile of garbage seems obviously unnatural to us only because we know a lot about nature on Earth. Even so, it’s not a machine. Aliens concluding that it is a machine with an unknown purpose would be mistaken.

  5. I think the beach example is a little obscure, unless you were trying to allude to the argument Eliezer made earlier about being able to distinguish a pebble in Half Moon Bay from a pebble found somewhere else.

    Shame about the deletions.

  6. I haven’t yet had time to read that whole post, so correct me if I’m wrong, but it seems that he was able to make a judgment that it appeared to be born of alien intelligence, and that he wasn’t claiming to have any proof that it was certainly so.

  7. It doesn’t have to be a beach. Any process that increases entropy seems to fit his definition of ‘intelligence’.

    Which means there are a great many things that we would have to conclude were produced by ‘intelligence’, making the term virtually useless.

    “but it seems that he was able to make a judgment that it appeared to be born of alien intelligence”

    He claims to be able to detect whether the thing is a machine, without speculating as to what the machine was intended to do. Which is, quite frankly, nonsense.

    See: “Can I recognize this machine as being in any sense well-designed, if I have no idea what the machine is intended to accomplish?”

    No, you can’t. “Intelligent” processes (by his standards) are responsible for the generation of dungpiles. Dungpiles are not generally designed, and are not commonly intended to accomplish anything.

    To speculate about whether some arbitrarily-chosen thing accomplishes something well, we’d first have to speculate about what it was intended to do. If you didn’t know that the object of a game of darts is to hit the center of the target, you wouldn’t be able to determine whether any dart stuck in the board was well-thrown or not. If the darts were instead intended to mimic the shape of the Big Dipper, clustering them at the center would indicate a failure. [See the second sketch after the responses.]

  8. Eliezer: “Consider yourself lucky that he’s still on the blog. I’m tired of putting up with his Stephen J. Gould-like attempts to pretend that various issues have never been discussed here and that he’s inventing them all on his own.”

    I’m certainly not the first person to object to these points. I am possibly the only person stubborn enough to continue to do so after all criticism has been deleted.

    The argument is fallacious. Preventing people from pointing this out does not make it valid, any more than killing anyone who notices the Emperor is a bit underdressed grants him a magical new wardrobe.

    It is impossible to determine whether something was well-designed without speculating as to its intended function. Bombs are machines, machines whose function is to fly apart; they generally do not last long when used. Does that make them poorly made?

    If the purpose of a collection of gears was to fly apart and transmit force that way, sticking together would be a sign of bad design. Saying that the gears must have been well-designed *because* they stick together is *speculating as to their intended function*.

    I do not see what is gained by labeling blind entropy-increasing processes as ‘intelligence’, nor do I see any way in which we can magically infer quality design without having criteria by which to judge configurations.

  9. You are right. You have to speculate about goals at least a little bit.

    Even in his example, with gears, he is at least speculating about sub-goals. He may not know what the whole machine does, but he knows what gears do.

  10. Nick Tarleton Says:

    “You are right. You have to speculate about goals at least a little bit. Even in his example, with gears, he is at least speculating about sub-goals.”

    …which EY says outright.

  11. “…which EY says outright.”

    So it doesn’t bother you that he contradicts himself?

    The fact that Eliezer tries to both make a claim and contradict it does not constitute a defense against criticisms of that claim.

    It also opens an entirely new avenue of criticism.

  12. Nick Tarleton Says:

    If you mean the statement quoted in the original post, that’s an oversimplification and literally false, but not really problematic; I understand it to mean that he still has a high-entropy distribution over supergoals, just not a max-entropy one. [See the third sketch after the responses.]

  13. Nick Tarleton Says:

    Also, given that statement’s location early in the post, (a) it’s hard to see how it could have been clarified while still fitting in well, and (b) it’s not even clear that it’s being presented as a true statement, rather than as naive and ultimately false.

  14. You are right, that is what he means. Today he linked to the article with the words “Just as recognizing intelligence requires at least some belief about that intelligence’s goals, however abstract.” I have to admit I sometimes skim his posts, because he has a tendency to explain in five paragraphs what could be explained in less than one.

    But you’re wrong about (a). It would have been very easy to clarify what he meant. Instead of:

    “I have no idea what the machine is doing. I don’t even have a hypothesis as to what it’s doing. Yet I have recognized the machine as the product of an alien intelligence.”

    How about:

    “Ostensibly, I have no idea what the machine is doing, not even a hypothesis. Yet I have recognized the machine as the product of an alien intelligence.”

    It’s shorter, yet much clearer. To be fair, he does have a “seems” in the previous paragraph, but it wasn’t entirely clear how far that “seems” applied.

  15. “that’s an oversimplification and literally false, but not really problematic; I understand it to mean”

    There’s the problem.

    When evaluating arguments, what we can “understand them to mean” is utterly irrelevant.

    What they DO mean is completely relevant.

    If you eliminate the errors in a statement by glossing over them, then the glossed-over statement contains no errors. That’s trivial. The whole point is to evaluate what the argument says, not what you think it would say if it were made reasonable and error-free.
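
To make the “pretty good guess” from response 1 concrete, here is a minimal Bayesian sketch in Python. Every number in it is invented for illustration; the priors, feature names, and likelihoods come from nowhere in the thread or the linked paper.

    # A toy Bayesian guess about an object's origin. All numbers are
    # hypothetical stand-ins for background knowledge about what each
    # kind of process tends to produce.
    PRIOR = {"emergent": 0.70, "evolved": 0.25, "engineered": 0.05}

    # P(feature | origin) -- invented likelihoods.
    LIKELIHOOD = {
        "regular_geometry": {"emergent": 0.10, "evolved": 0.30, "engineered": 0.90},
        "pure_metal_alloy": {"emergent": 0.01, "evolved": 0.01, "engineered": 0.80},
    }

    def posterior(features):
        """Multiply the prior by each feature likelihood, then normalize."""
        scores = dict(PRIOR)
        for f in features:
            for origin in scores:
                scores[origin] *= LIKELIHOOD[f][origin]
        total = sum(scores.values())
        return {origin: s / total for origin, s in scores.items()}

    print(posterior(["regular_geometry", "pure_metal_alloy"]))
    # {'emergent': 0.019, 'evolved': 0.020, 'engineered': 0.961} (approx.)

Under these made-up numbers, “engineered” ends up around 0.96: a strong guess, but never the certainty a Watchmaker-style argument would require, and the answer depends entirely on the background knowledge encoded in the likelihoods.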
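Response 7’s darts example can be put in the same terms: identical throws score well or badly depending entirely on which goal you assume. A minimal sketch, with invented coordinates and goals:

    import math

    # Three throws clustered near the centre of the board (made-up data).
    darts = [(0.1, 0.0), (0.0, 0.2), (-0.1, -0.1)]

    def score_bullseye(throws):
        """Goal A: hit the centre -- reward small distance from (0, 0)."""
        return -sum(math.hypot(x, y) for x, y in throws)

    def score_pattern(throws, target):
        """Goal B: reproduce a target pattern -- reward proximity to it."""
        return -sum(math.hypot(x - tx, y - ty)
                    for (x, y), (tx, ty) in zip(throws, target))

    # A hypothetical stand-in for "the shape of the Big Dipper".
    big_dipper = [(1.0, 1.0), (0.8, 1.2), (0.6, 1.1)]

    print(score_bullseye(darts))             # near zero: "well thrown"
    print(score_pattern(darts, big_dipper))  # strongly negative: a "failure"

The throws haven’t changed between the two lines; only the assumed intention has. Without fixing a goal, there is nothing for “well-thrown” to mean.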
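And response 12’s distinction between a high-entropy and a max-entropy distribution over supergoals, made explicit. The four goal hypotheses and their probabilities are, again, invented:

    import math

    def entropy(dist):
        """Shannon entropy in bits of a discrete distribution."""
        return -sum(p * math.log2(p) for p in dist if p > 0)

    # Hypothetical supergoals for the alien machine, with made-up weights.
    uniform  = [0.25, 0.25, 0.25, 0.25]  # max-entropy: no idea at all
    informed = [0.40, 0.30, 0.20, 0.10]  # high entropy, but not max

    print(entropy(uniform))   # 2.00 bits: complete ignorance of the goal
    print(entropy(informed))  # ~1.85 bits: still very uncertain, just less so

On this reading, recognizing the gears as a machine moves you from the first distribution to something like the second: far from knowing the goal, but no longer maximally ignorant of it.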
