Truth and Absolutes

Those of us trying to understand the crazy world we live in spend a lot of effort sifting through assertions of truth. There are even more people spending a lot of effort forwarding their favorite assertions and trying to get others to accept them as true.

At some point, it occurred to me that trying to determine what’s true, in an absolute universal sense, is a waste of time. If you roll blocks covered with letters of the alphabet, you’ll eventually produce syntactically valid assertions. And while it’s pretty unlikely that you’ll stumble upon a true statement by searching randomly, some of those assertions might happen to be true.

But, so what?

We have no particular reason to accept randomly-generated statements as true; nor do we have any reason to assert that they’re false. They’re there, that’s all.

Likewise, we can reach a conclusion through careful analysis of the totality of available evidence and accept it as provisionally true. But later evidence might show this ‘truth’ to be false.

Again: so what?

Most people have an unhealthy craving for absolute certainty, things they can rely upon to be true without the possibility of doubt. I suspect this is a shortcut to reduce the cognitive overhead of thought. We don’t actually like thinking; research shows that for the vast majority of Homo sapiens, engaging in mental effort is strongly aversive. We’ll go far out of our way, and do a lot of extra work, to avoid having to do mental work.

Instead of worrying about what’s true, we should focus on what we can justifiably claim to be true. It’s the justification that matters.


10 Responses to “Truth and Absolutes”

  1. I didn’t see your comment criticizing Eliezer (except in a backhanded way) in that thread. Did he delete it?

  2. Speaking of which: comment archival.

    “Caledonian, I look forward to being able to downvote your comments instead of deleting them.”

    What, the software forces you to delete my comments? Someone’s holding a gun to your head?

    I look forward to your forming a completely closed memetic sphere around yourself, instead of this partially-closed system you’ve already established.

  3. Speaking of which, more comment archival:

    “If Eliezer’s so far beyond saving, what’s your rationale here?”

    A problem with this line of argument: it assumes that I believe a particular thing about Eliezer.

    But to answer your question: people could stumble into the event horizon without realizing it, and there should be some visible warning before the abyss; I believe important topics should never be shielded from skeptical analysis.

    Most importantly, though, I dislike it when people linger in the twilight zone between true corruption and virtue. When I find such a person, I challenge them in such a way as to force their ambiguity to collapse and to become a pure essence of one type or the other. They must be forced to become — what they become is up to them.

    Eliezer was in such a superposition of potentialities. He is rapidly sliding out of that state and into a stable, defined configuration, not least because of my own goading. This is a desirable end.

  4. Why do you prefer that people be truly corrupted rather than in the twilight?

  5. We don’t have to worry about whether the damned can be redeemed. We can get straight to the business of letting them destroy themselves.

    It’s also rather tidy. Plus, it’s just. Destruction is not imposed from without but created from within. Fatal feedback.

    In Eliezer’s case, it’s far preferable that he fall into a completely closed memetic sphere, because (among other things) it greatly reduces the chances of his managing to achieve some ubercontrolling computer system. By limiting the availability of resources capable of distinguishing between his propaganda and reality, he denies himself the level of rationality necessary to make such an impressive error.

    Basically, it traps him at the level of small errors, rather than letting him rise to the level of potentially large and very costly (for us) ones.

    It would be very difficult to get him to recognize the flaws in his FAI plans, and he might be capable of achieving something like them if his thought was only partially corrected.

  6. The main problem with his plan for AI seemed to be that he didn’t really have one, so he wasn’t really in danger of creating one and causing anyone harm.

  7. A maniac without a shotgun isn’t a tenth as dangerous as one with one, but that’s no reason not to act to prevent his acquiring one.
