All in the Same Package

See this post. (This link may also be useful.)

I am far less interested in what various AI researchers are unable to do than in what Eliezer Yudkowsky is able to do. If Eliezer can solve problem X, what difference does it make how many other people are unable to solve it?

So: how has Eliezer demonstrated problem-solving skills in the field of Artificial Intelligence? What progress has he made? What advancements has he been responsible for? What theoretical developments has he contributed to?

If he were on trial for having furthered the field of AI, could he be convicted? Would the verdict have to rest on reasonable doubt, or would he be found not guilty on all counts?

Eliezer is particularly dismissive of non-quantitative reasoning. Yet he has written a series of essays advancing various assertions about AI, its importance, and its dangers, essays remarkably free of mathematical theory, equations, or formal logical arguments.

How efficient of him.


4 Responses to “All in the Same Package”

  1. “This link” was not useful. “This XML file does not appear to have any style information associated with it”

  2. Strange. Clearly I have more learning to do regarding trackbacks.

  3. mitchell porter Says:

    My response on a previous occasion to the question “what has he actually done?”:

    http://www.acceleratingfuture.com/michael/blog/2008/06/bloggingheadstv-interview-horgan-and-yudkowsky/#comment-121407

    He is best thought of as a philosopher of AI. There is a modern view of philosophy according to which it is the place where we think about things that we don’t know how to think about yet. Eliezer has done a lot to bring the problem of dangerous superhuman intelligence down to Earth. The available formulas for Friendly AI that we have – e.g. “a seed AI guided by renormalized human morality” (i.e. reflectively idealized human morality, human morality idealized with reference to its own criteria) – are still somewhere between schematic and metaphorical, but they contain the seeds of an exact specification. To make them exact, we would need (among other things) an exact conception of what sort of decision system the human brain actually is. At the moment, the discourse of Friendly AI leans heavily upon the idea of expected-utility maximizers (EUMs), e.g.

    http://sl4.org/wiki/SimplifiedFAI

    That page is getting towards a formal statement of the problem of Friendly AI. But it gets there by positing a simpler world where the AI-builders *really are* EUMs. In that world, the problem of Friendliness reduces, in part, to inferring the unknown utility function guiding the EUMs (a toy sketch of that inference problem follows the numbered list below). In the real world, we still don’t even know the *class* of decision system that we belong to, let alone the particular features which pick out our cognitive architecture from the other possible members of that class. But one can still see here, in schema, a Yudkowsky-inspired semi-formal strategy for FAI:

    1) identify the class of decision system to which human intelligence belongs

    2) identify the form of self-idealization appropriate for members of that class (by this I mean, identify how decision systems of this class would choose to modify themselves; trivial for EUMs, since EUM is itself their only criterion; see the second sketch after this list)

    3) ensure that your potentially transhuman AI is human-relative-idealized, according to the (hypothetical) exact form obtained in 1 & 2, before it becomes superhumanly powerful
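    To make the “simpler world” of the SimplifiedFAI page concrete, here is a minimal, purely illustrative Python sketch (not taken from that page; all names, outcomes, and numbers are invented) in which the agent really is an EUM over three outcomes, and we try to recover its hidden utility function just by watching which lotteries it picks:

        # A toy "simpler world": the agent *really is* an expected-utility
        # maximizer (EUM) over three outcomes, with a utility function hidden
        # from us, and we infer that function from its observed choices.
        # Everything here (outcomes, numbers, function names) is hypothetical.
        import random

        OUTCOMES = ["a", "b", "c"]
        TRUE_UTILITY = {"a": 0.0, "b": 0.7, "c": 1.0}  # hidden from the inferrer

        def expected_utility(lottery, utility):
            """Expected utility of a lottery given as {outcome: probability}."""
            return sum(p * utility[o] for o, p in lottery.items())

        def eum_choice(lottery_1, lottery_2, utility):
            """An EUM simply picks whichever lottery has higher expected utility."""
            if expected_utility(lottery_1, utility) >= expected_utility(lottery_2, utility):
                return 0
            return 1

        def random_lottery():
            """A random probability distribution over the three outcomes."""
            cut_1, cut_2 = sorted(random.random() for _ in range(2))
            return dict(zip(OUTCOMES, [cut_1, cut_2 - cut_1, 1.0 - cut_2]))

        # Observe the agent's choices on random pairs of lotteries.
        random.seed(0)
        observations = []
        for _ in range(200):
            l1, l2 = random_lottery(), random_lottery()
            observations.append((l1, l2, eum_choice(l1, l2, TRUE_UTILITY)))

        # Crude inference: grid-search over a one-parameter family of candidate
        # utility functions and keep every candidate that is consistent with
        # all observed choices.
        consistent = []
        for i in range(21):
            candidate = {"a": 0.0, "b": i / 20, "c": 1.0}
            if all(eum_choice(l1, l2, candidate) == choice
                   for l1, l2, choice in observations):
                consistent.append(candidate)

        print("utility functions consistent with the observed behaviour:", consistent)

    With enough observed choices, only candidates close to the true function survive; the real-world difficulty, as noted above, is that we do not even know which class of decision system humans instantiate, so no such tidy inference is available for us yet.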
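    And a second toy sketch, of the point in step 2 that expected-utility maximization is its own criterion for self-modification: an EUM that scores possible successors by its *current* utility function will prefer the successor that keeps that function. Again, the scenario and numbers are invented purely for illustration:

        # Toy version of step 2's point: an EUM evaluates a possible change to
        # itself by the expected utility, under its *current* utility function,
        # of what the changed agent would go on to choose.
        def expected_utility(lottery, utility):
            return sum(p * utility[o] for o, p in lottery.items())

        def best_lottery(options, utility):
            """What an agent with this utility function would pick from `options`."""
            return max(options, key=lambda lottery: expected_utility(lottery, utility))

        current_utility = {"paperclips": 0.0, "flourishing": 1.0}
        modified_utility = {"paperclips": 1.0, "flourishing": 0.0}

        # A future choice the agent expects its successor to face.
        options = [
            {"paperclips": 0.9, "flourishing": 0.1},
            {"paperclips": 0.1, "flourishing": 0.9},
        ]

        # Score each possible successor by the expected utility, judged by the
        # *current* utility function, of the option that successor would choose.
        for label, successor_utility in [("keep current utility", current_utility),
                                         ("adopt modified utility", modified_utility)]:
            chosen = best_lottery(options, successor_utility)
            score = expected_utility(chosen, current_utility)
            print(f"{label}: expected utility by current lights = {score}")

    By its own lights, the EUM endorses only the successor that preserves its utility function, which is the sense in which self-idealization is “trivial” for EUMs; the open question is what the analogous criterion looks like for whatever class of decision system humans actually are.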

    I’ve skipped over a bunch of other things, such as the definition of levels of intelligence (and that is mostly coming from other people, like Shane Legg), but my real message is that we wouldn’t even have this starting point without Eliezer’s initial contributions. Part of what needs to happen now is for step 1, in particular, to be fleshed out in detail. I know that here, Eliezer, trying to delegate to the AI as always, would instead be trying to devise an analytic method which allows an Idealized Bayesian Scientist (his seed AI, directed by an interim goal system) to answer question 1 in the strategy above – rather than trying to do cognitive neuroscience directly. However, there is no reason why human neuroscientists can’t attack that problem.

    Returning to your actual post, I will agree that Eliezer has no *rigorous* breakthroughs in his CV. He has some mathematical and program-design ability (the latter is most clearly on display in his earlier works); but he has preferred to work directly on the bigger, more formless problems, rather than on smaller problems; so his output looks more philosophical than mathematical. We don’t know if he has the problem-solving abilities of a John Conway, or even if he needs them (though it’s likely that such abilities *are* needed, at some point on the path from fuzzy FAI schema to exact implementation). If he doesn’t, we have to hope that someone who *does* have those abilities will come along, and of course that’s part of the reason for his OB posts.

  4. “He is best thought of as a philosopher of AI. There is a modern view of philosophy according to which it is the place where we think about things that we don’t know how to think about yet.”

    I have very little respect for philosophers and philosophy as a field. Such a defense is unlikely to endear him to me.
