Complexity and AI

Sherlock Holmes once told Watson that Watson’s suggestions were extremely useful to him. Once Watson had examined a situation and come up with an explanation, Holmes could immediately tell that it was a false trail; the result was that Watson eliminated many avenues of thought that Holmes would otherwise have had to explore for himself.

Yudkowsky might not be a good Holmes, but he’s a great Watson:

Me: If you are ignorant of a phenomenon, that is a fact about your state of mind, not a fact about the phenomenon itself. Therefore, your ignorance of how neural networks are solving a specific problem cannot be responsible for making them work better.

Him: Huh?

Me: If you don’t know how your AI works, that is not good. It is bad.

As has been said by smarter people than myself, if the world were simple enough for us to understand, we wouldn’t be in it.

If a computational system’s behavior is simple enough for us to understand it well, the system is likely too simple to qualify as an intelligent entity. We certainly do not understand ourselves, and although much progress can be made on this front, there are limits to how much reflective understanding we can possess.


2 Responses to “Complexity and AI”

  1. The analogy he usually gives is that of mediocre chess players creating Deep Blue.

    I don’t see why there needs to be a hard cut-off point for “intelligent entity”. Eliezer’s main concern is a self-modifying intelligence, which is a more specific notion than mere intelligence.

  2. (edited for grammar)

    Self-modifying intelligences are harder to make than non-self-modifying ones, not least because they have to remain relatively stable.

    Specifying self-modification makes understanding harder, not easier. A human mind certainly cannot understand the totality of the functioning of a brain that could host it, although it can understand the simple principles that make up that functioning.

    Rejecting neural networks whose operations we do not understand, simply because we don’t understand them, is stupid. Not understanding something doesn’t make it work better, nor does it make it work worse. Given that AI is not understood at present, it’s a fair bet that looking for it among the things we already understand well is pointless.

    (edited to add)

    Rejecting methods whose results are not well understood, for that reason alone, is stupid. Rejecting methods of potentially generating AIs because we don’t understand how their results work is equally stupid.
