The Tortoise and the Hare

When I first learned about Turing’s work on Computational Equivalence, ever so long ago, I was too stunned by the obvious implications to consider how it applied to, say, cognition. Eventually, though, my dazzlement dissipated enough for a thorough examination of the consequences.

If any sufficiently powerful computational system can emulate any other system, given enough time and available memory, then people whose minds are slower or less adept than others’ should be able to solve any problem the smarter people can – it should just take longer and perhaps require external memory aids like writing. But this doesn’t actually seem to be the case with real-life human beings. Dumb people aren’t merely slower to deal with certain types of problems; they simply can’t deal with them at all, and it doesn’t matter how much time they’re given.

Turing was clearly right. So why was intelligence relevant to whether categories of problems could be dealt with, instead of merely affecting how quickly and efficiently they could be solved?

Our minds aren’t designed to function as truth-producing devices – they’re built for effective survival, with the hardwired tendencies useful for bringing that about, even when those tendencies produce grossly incorrect content. I think it very likely that we’ve evolved to generate a response within a given (and very brief) amount of time, no matter how much that impairs the quality of the reaction. Even a wrong response might often be better than doing nothing while remaining caught up in the computation.

As a consequence, I think that many of our cognitive processes are set to terminate after a certain number of computational cycles – whatever we have by then, we run with. That would have significant implications, among them a possible reason why raw computational power would make a difference in reasoning itself rather than merely in speed. If the brain couldn’t complete a complex line of reasoning before the allotted number of cycles ran out, it would never be able to produce the right answer no matter how much time it was given. And even a small increase in time efficiency would result in a detectable increase in the complexity of problems the system could handle.
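The idea can be sketched as a toy computation (all names and numbers here are hypothetical, chosen only for illustration): an iterative solver that must halt after a fixed step budget and return whatever estimate it has. With a budget below what the problem needs, no amount of re-running with the same cap ever yields the right answer – only raising the budget does.

```python
def budgeted_sqrt(x, budget):
    """Approximate sqrt(x) by bisection, but halt unconditionally after
    `budget` steps and return whatever estimate we have -- right or wrong.
    This stands in for a cognitive process with a fixed cycle allotment."""
    lo, hi = 0.0, max(x, 1.0)
    for _ in range(budget):
        mid = (lo + hi) / 2
        if mid * mid < x:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# A generous budget converges on the true value; a tight one never can,
# no matter how many times the same capped computation is repeated.
print(abs(budgeted_sqrt(2.0, 50) - 2**0.5) < 1e-9)  # ample budget: accurate
print(abs(budgeted_sqrt(2.0, 3) - 2**0.5) < 1e-9)   # cap too small: never accurate
```

The point of the sketch is that the cap, not the clock, is the binding constraint: giving the three-step version more wall-clock time changes nothing, while a faster processor that fits more bisection steps into the same allotment handles strictly harder accuracy targets.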

Is there a way to test this speculation? I’m not sure yet. But it could be very important to training human beings to become rational thinkers.


4 Responses to “The Tortoise and the Hare”

  1. A simple algorithm often cannot solve things a complex one can. Less intelligent people also tend to have less memory, or at least cannot “chunk” information as well.

    I think you are right about it being natural to give up after some time has passed.

  2. “Less intelligent people also tend to have less memory, or at least cannot ‘chunk’ information as well.”

    Ah, but external aids like writing ought to reduce the importance of innate memory ability. But they do not.

    “I think you are right about it being natural to give up after some time has passed.”

    Certainly, but keep in mind that the amount of time I’m talking about is probably a fraction of a second – and the problems aren’t the high-level ones we experience in our daily lives, but elemental and basic ones we’re not normally aware of.

    • Did you see Idiocracy, and if so do you remember the scene where the hospital has a pictorial punch-pad? I think such aids do help.

      • Yes, and yes.

        Not for the tasks I’m thinking of – primarily because people will not use them. Deciding whether to use such aids – and how – is one of the tasks that smart people do differently than the dumb.
