Monday, February 20, 2006

A Profound Threshold

What is the most powerful computational device on the planet? If you answered "the human brain" you would have been right . . . until a few months ago.

As you may know, the brain is exquisitely designed to process information. Its array of 100 billion neurons, interconnected in a lattice of 100 trillion synapses (connections), is capable of processing an estimated 100 trillion pieces of information every second. This is an unfathomably large amount of computation.
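These figures hang together as simple back-of-the-envelope arithmetic. A minimal sketch (the ~1,000 synapses per neuron and the rate of roughly one operation per synapse per second are assumptions implied by the numbers above, not measured facts):

```python
# Back-of-the-envelope check on the brain's processing estimate.
# Assumed inputs: 100 billion neurons, ~1,000 synapses per neuron,
# and roughly one operation per synapse per second -- chosen to
# match the oft-quoted 100-trillion-ops/sec figure.

neurons = 100e9                  # 10^11 neurons
synapses_per_neuron = 1000       # assumed average connectivity
synapses = neurons * synapses_per_neuron
print(f"synapses: {synapses:.0e}")     # 1e+14, i.e. 100 trillion

ops_per_synapse_per_sec = 1      # assumed
ops_per_sec = synapses * ops_per_synapse_per_sec
print(f"ops/sec:  {ops_per_sec:.0e}")  # 1e+14
```

Published estimates of per-synapse rates vary by a couple of orders of magnitude, so treat the result as an order-of-magnitude figure rather than a measurement.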

When I took a computer class back in college in the 1980s, no man-made computer could come close to that sort of processing capacity - not even one millionth as much! It was impossible to believe that a computer could ever rival our cerebral 'hardware'. But I remember one of my professors telling us to watch out for Moore's Law - the observation that computer processing power doubles roughly every 12 to 24 months. Thanks to inexorable advances in technology, computers just keep getting faster and more powerful, with no end in sight.

As a result, the world crossed a remarkable threshold last year . . .

For the first time in history, the human brain was supplanted as the most powerful computer on earth. That distinction is now held by an IBM supercomputer known as Blue Gene/L, which clocked in this past October at an astonishing 280 trillion operations per second. That's about three times the estimated processing capacity of the human brain! (Sadly, it's being used at Lawrence Livermore National Laboratory to help develop our nuclear arsenal - your tax dollars at work.)

Does that mean that computers will soon be exhibiting superhuman intelligence? Well, in some ways, of course, they already are. Garry Kasparov, widely regarded as the best chess player in history, can no longer beat the best computer chess programs.

But it will still be many years before computers can accomplish several of the computational feats that we take for granted. The ability to understand and generate language looks to be the toughest 'artificial intelligence' problem - and computers at present are nowhere close to solving it. This is partly because artificial intelligence researchers and programmers usually don't have super-powerful computers at their disposal . . . most of them use the same desktop PCs that you and I do, which have less than 1/100,000th of the processing power of a human brain.

This will change in the decades ahead, however. Because of Moore's Law, it's fairly safe to assume that a good desktop PC in the year 2020 will have about the processing power of a human brain . . . enough to do a creditable job simulating human language.
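The projection is easy to sketch. Taking the post's own numbers (a 2006 desktop at roughly 1/100,000th of a brain, doubling every 12 months), fourteen doublings by 2020 bring a PC within an order of magnitude of the brain estimate - about as precise as these extrapolations get:

```python
import math

# Rough Moore's Law extrapolation, using the figures quoted above.
brain_ops = 1e14        # estimated brain throughput, ops/sec
desktop_2006 = 1e9      # ~1/100,000th of a brain, per the post
doubling_years = 1.0    # the optimistic 12-month doubling period

years = 2020 - 2006
growth = 2 ** (years / doubling_years)      # 2^14 = 16,384x
desktop_2020 = desktop_2006 * growth

print(f"growth factor by 2020:  {growth:,.0f}x")
print(f"projected 2020 desktop: {desktop_2020:.1e} ops/sec")
print(f"fraction of a brain:    {desktop_2020 / brain_ops:.0%}")
```

With these assumptions the 2020 machine lands at about 16% of a brain, reaching full parity a couple of years later; with slower (18- to 24-month) doubling, parity slips further out.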

While there is certainly much to be concerned about with such developments, in some future post I'll discuss some of the potentially positive implications regarding our understanding and treatment of mental illness.

8 comments:

Dr. Deborah Serani said...

Great post!

Psych Pundit said...

Thanks, Deb! It's kind of 'trippy' stuff, but well worth talking about, I think.

GM Roper said...

When I was in grad school in the early '70s, I did a paper on artificial intelligence. I got an A, but the professor thought it so much "science fiction." I wonder what he would have thought of your sentence: "For the first time in history, the human brain was supplanted as the most powerful computer on earth. That distinction is now held by an IBM supercomputer known as Blue Gene/L, which clocked in this past October at an astonishing 280 trillion operations per second."

It truly is astonishing.

RunningRoach said...

Great Post!
Well I guess I can finally throw away my long trusted punch card compiler deck!

What's beyond exponential?

Anonymous said...

If we're going to have nuclear weapons, anyway, at least we should use the best tools available so as to avoid mistakes.

Raymond said...

The tech to duplicate the brain existed long ago (though that's not to say we have the ability to make one).

It would be analog, not digital.

As a loose example, consider the roboticist who makes walking robot bugs with only a few analog devices.

The problem is that a neuron is not a fixed object: it must be able to self-adjust its gain based on traffic, and the cue to make new interconnects is still not fully understood.

Modeling neural nets in software is very inefficient ...

However, a workable software-based neural net could eventually be used to model an analog expression.

The supercomputer you're talking about is too loosely coupled between nodes to work as a single machine the way, well, a single machine does, and it only works well on problems that can be broken into as many pieces as there are CPUs so they can be worked on in parallel.

An analog neural net, however, would handle its throughput in parallel naturally.

Eventually we will be able to make such devices, perhaps using synthetic diamond as a substrate, the device built up layer after layer to create a part that is as tall as it is long and wide ... a cube-shaped analog 3D IC, three inches on a side, might do it ...

Don't look for such a thing to be very human, however ...

The reason I loved my big engine had to do with hormones made between my legs and the effect they had on my brain.

How to model that properly would seem a more difficult
problem ...

Psych Pundit said...

Raymond,

Thanks for the interesting and provocative comments. It's certainly true that supercomputers like Blue-Gene/L process information in a manner that's very different from that of our own neural networks.

However, there's an important point that you may be missing: ultimately, information processing is substrate neutral. That is, the information is the same regardless of how it's represented in matter. So, for example, I could (in principle) run my word processing program on any number of radically different physical architectures - Mac, PC, Cray, vacuum tubes, etc. The crucial thing is representation of the critical information and the algorithms that underlie its transformation across time.

Likewise, we now have a pretty good upper-bound estimate on the amount of information represented and processed in the brain - an upper bound that has recently been surpassed by a supercomputer. So, even though the task of specifying the precise information and algorithms embedded in neural tissue is a massive and daunting one, in principle it's one that - once completed - lends itself to translation to a supercomputer architecture.

Even if this translation process is somewhat inefficient (e.g., due to the hybrid digital-analog nature of neural function), Moore's Law suggests that we'll soon have plenty of teraflops to spare.

Best wishes,
Psych Pundit

Anonymous said...
This comment has been removed by a blog administrator.