Re: Thoughts on rapid breakout in AI over the mid term
Posted: Sun Jan 30, 2011 3:20 am
Where I'm sitting, a lot of problems have come from Egypt cutting off their internet connections. The spillover has affected lots of routes and there were WEIRD things happening with DNS for a while.
The forum may need to have its old posts archived or the database reindexed, though.
Vince, it always helps to keep something in mind when thinking about computing power and the exponential growth of same.
During WWII, the most advanced computer in the world used vacuum tubes. When it was revealed to the world just after the war, one of the first questions asked was whether a machine as complex as a human brain could ever be built.
Someone back then actually sat down, estimated the number of neural connections in the brain, compared it to the size of a typical bank of circuits in that machine (ENIAC, IIRC), and pronounced that a machine that complex would need a building larger than the Empire State Building to house it and the entire Columbia River to carry off the waste heat.
By most estimates, the brain performs about a trillion operations per second. A reasonable comparison with modern computers, then, puts the brain at roughly a teraflop (a trillion floating-point operations per second), though some object that a single neuron-to-neuron pulse may carry more computation than a single floating-point operation. That seems doubtful to me; the evidence for it is on the order of the 19th-century "evidence" that no organic chemical would ever be synthesized by science, that organic chemistry was the realm of GOD and GOD ALONE! Until it wasn't.
(I can't "prove" that a thinking computer can be created until one is created, and I'll bet you ten dollars against a wad of used chewing gum that a LARGE group of people will flatly deny that a computer that talks and reasons can be "aware". OFC, they can't prove they themselves are "aware" either, but never mind THAT; the onus is always on the other party. BTDT, didn't like them much.)
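The teraflop figure above is the kind of number you can get from a crude back-of-envelope. The inputs here are coarse, commonly cited ballpark figures (roughly 10^11 neurons, average firing rates on the order of 10 Hz), not measurements; the point is only that they multiply out to the trillion-operations-per-second scale:

```python
# Back-of-envelope estimate of the brain's raw "operations per second".
# Both inputs are rough textbook ballparks, not measurements.
neurons = 1e11          # approximate neuron count in a human brain
firings_per_sec = 10    # rough average firing rate per neuron, in Hz

ops_per_sec = neurons * firings_per_sec
print(f"{ops_per_sec:.0e} ops/sec")  # prints "1e+12 ops/sec", i.e. the teraflop scale
```

Treating one firing as one "operation" is itself a modeling choice, which is exactly where the objection mentioned above comes in.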
As computers progressed through transistors into integrated circuits, the pronouncement was repeated: a machine with as much processing power as the human brain would be incredibly complex and huge. It was worth noting, though, that the projected size dropped from skyscraper to warehouse to house. Then they quit talking about "impossible" hardware and started talking about "impossible" software.
The first machine capable of sustained teraflop operation actually arrived at the end of 1996: Intel's ASCI Red supercomputer at Sandia. That was the first machine that could plausibly be compared to the human brain in raw processing power.
Did you buy a PS3 when they first came out? They launched in late 2006. Sony's claimed peak for the original PS3, running flat out, was about 2 teraflops, most of that from the RSX graphics chip alongside the Cell.
In about a decade, then, a teraflop of processing power moved from supercomputer status to game-machine status. Think about that! That's the kind of development speed John is talking about, and it gets more compressed timewise as the machines themselves get more involved in the designs.
And the next round of supercomputer chips looks like it will come from NVIDIA in the short term. The advances in CUDA and other massively parallel programming languages are really emulations of the massively parallel processing apparent in natural computers; IOW, brains.
Thinking about it, that's no coincidence. The brain came to be in its present state because examining the physical world requires massive parallel processing to manipulate that quantity and type of data. Our computers are moving in that direction BECAUSE we design them to interface with the real world, which demands EXACTLY the same requirements for examining and manipulating that same data. We've done the "computers which interact only with logic and math" design, and found it inadequate for the task of associating with the real world.
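For a feel of the programming style CUDA encourages, here's a minimal Python sketch of SAXPY (the classic first GPU example, not anything from the post itself): one tiny "kernel" function applied independently to every array element. On a GPU, each index would map to a hardware thread and thousands would run at once; the serial loop here just stands in for that parallelism.

```python
# Data-parallel thinking, GPU-style: one kernel, one element per "thread".
def saxpy_kernel(i, a, x, y):
    # One thread's worth of work: exactly one array element, no
    # dependence on any other element, so all of them can run at once.
    return a * x[i] + y[i]

a = 2.0
x = [1.0, 2.0, 3.0, 4.0]
y = [10.0, 20.0, 30.0, 40.0]

# On a GPU each index below would be a separate hardware thread;
# here we simply map over the indices serially.
result = [saxpy_kernel(i, a, x, y) for i in range(len(x))]
print(result)  # [12.0, 24.0, 36.0, 48.0]
```

The key property is independence between elements; that's what makes the workload "embarrassingly parallel", and it's the same property sensory data (pixels, audio samples, touch receptors) tends to have.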
If you like looking at the details of this, try http://gpgpu.org/
Though this is more amusing. http://nexus404.com/Blog/2010/05/07/son ... -clusters/
As computers become more powerful, the intent of the designers is for them to be able to interact more effectively with the real world. And that requires massive parallel processing, exactly as the brain processes data, for the same reasons.
I think ten teraflops will be available for under $1,000 by 2014. It may not be on the desktop proper (or it may sit in the video processors, hidden from the user), but it will be available.
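As a quick sanity check on that prediction: if you assume roughly 1 teraflop per $1,000 in consumer hardware as of 2011 (my assumption, purely illustrative), then hitting 10 teraflops per $1,000 by 2014 requires price/performance to double faster than once a year:

```python
import math

# What doubling time does "10x in 3 years" imply?
factor = 10.0           # assumed improvement: ~1 TFLOPS/$1,000 now -> 10 TFLOPS/$1,000
years_available = 3.0   # 2011 -> 2014

doublings_needed = math.log2(factor)                    # ~3.32 doublings
implied_doubling_time = years_available / doublings_needed
print(f"{implied_doubling_time:.2f} years per doubling")  # prints "0.90 years per doubling"
```

That's faster than the classic 18-to-24-month Moore's law cadence, which is the real content of the claim: GPU price/performance has been outrunning the general curve.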
The rollercoaster ride to the future is still going up that steep incline, but we are getting close to the top. And it's a helluva ride after that.