psCargile wrote:
The computers I bought recently were about 1.8 GHz and the ones I bought about 10 years ago were also. They are still increasing the number of transistors on a chip, so it is possible that Moore's Law is still working some. And GPUs are amazing. However, I am less sure the Singularity is coming soon than I used to be.
The photolithography necessary to print ever-larger numbers of transistors on silicon continues to improve, albeit at a slower rate, so the capacity in terms of transistors per device continues to increase. But understand that the numbers are prodigious: 20% of 5 million transistors is a million transistors, whereas in the past 20% of 100 thousand transistors was only 20 thousand. The question remains, as it has for a very long time: what do you create with those new transistors?

The maximum clock speed of silicon is close to being reached, and it is dictated by semiconductor physics - that is the reason processor clock speeds have remained "stable" over the last 10 - 12 years, and they are likely to change little in the immediate future. The processing units are now all horizontally microcoded, with each clock tick performing at minimum a single processor instruction and, depending upon the incoming instruction stream, possibly executing multiple processor instructions per clock cycle. Memory is still an area where some improvement can take place, but even with multiple levels of caching, that benefit is reaching the limits of what is possible - again, physics. The result is that the days of seeing vast differences in performance between succeeding generations of processors are gone; you will see performance increases, but they will be on the order of double-digit percentages.
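To make the transistor arithmetic concrete, here is a tiny sketch; the 100 thousand and 5 million figures are the illustrative numbers from the paragraph above, not counts for any particular processor.

```python
# Illustrative arithmetic only; the transistor counts are the example
# figures from the paragraph above, not data for a specific chip.
old_count = 100_000      # transistors on an older device
new_count = 5_000_000    # transistors on a newer device
growth = 0.20            # the same 20% generational improvement

print(f"20% of {old_count:,} is {int(old_count * growth):,} new transistors")
print(f"20% of {new_count:,} is {int(new_count * growth):,} new transistors")
# Same percentage, but the absolute budget of new transistors - and the
# design question of what to build with them - is fifty times larger.
```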
There is no equivalent of Moore's Law, nor has there ever been, for software development. And while semiconductor physics is the same for all silicon devices, the same uniformity doesn't hold for software development. Hardware engineers are continually creating new processors, so software engineers are continually having to reinvent the wheel (the languages to be executed by those processors) for their newest Frankenstein (that would be the processor if you're a hardware engineer, and the software if you're a SW guy).

All processors are still stored-program machines, each operating, as in the earliest days of stored-program computers, from a physical instruction stream originating out of memory. At power-on they come up "dumb" and are sent to a resident area of memory to execute read-only-memory code that will, hopefully, be a coherent boot program, one that loads a significantly more complex program from secondary (disk) storage. For every "new" processor (processor du jour, if you will), at some point early in the processor's development some poor grunt got saddled with writing the first code for the new "baby" - that "baby" which everyone believes will become the world's next great gift. While hardware complexity has made for more complex, higher-level instructions, most of these high-level additions to a processor's instruction set are of an esoteric nature and of limited general value - that is, they serve very specific niche applications. Everything is still, essentially, quantified in terms of the machine's instruction set and its ability to process an instruction stream out of memory (either cache or main memory).

With a modest bit of change in languages, software is still done the way - by hand - that it was done in the 1950s. When the Obamacare web site debacle was under discussion, John noted a software development productivity number for programmers of 6 - 10 lines of code per day. That is a very old number, and anyone who has written any appreciable amount of software will not argue too much with it. While the code is different, and each line of code may generate many more executable machine instructions, productivity in terms of program lines of code per programmer per day hasn't changed all that much in the last 65 years. What has changed fairly dramatically is the number of people generating those 6 - 10 lines of code per day.
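A rough back-of-envelope sketch of that last point; the 6 - 10 lines/day figure is the one quoted above, and the team sizes are made-up examples, not historical data.

```python
# Back-of-envelope sketch: per-programmer output is flat, so total output
# only grows by adding people. The headcounts below are hypothetical.
loc_per_programmer_per_day = 8     # midpoint of the 6 - 10 range quoted above
team_then = 10                     # hypothetical small early project
team_now = 1_000                   # hypothetical large modern project

print("Small team: ", team_then * loc_per_programmer_per_day, "lines/day")
print("Large project:", team_now * loc_per_programmer_per_day, "lines/day")
# The multiplier comes entirely from headcount, not from any
# Moore's-Law-like improvement in the craft itself.
```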
Much of the hardware/software development over the last 30 years has gone into fail-soft multiple-processor systems that have the ability to run non-stop: systems that verify the hardware integrity of their elements on an almost continuous basis, with all elements being hot-swappable. Additionally, much effort has gone into very specific peripheral devices that are more significant and more capable than prior devices. The question is, as it has always been: how do you coordinate asynchronous operations from multiple asynchronous processing units, and who's in charge? The trade-off is whether we create high-level, high-capability general processing units (cores) operating from serial operating system and application streams, or an ever-increasing number of limited, targeted, self-contained smart peripheral processors (GPUs). It is almost a revisit of the CISC vs. RISC argument of yore. You can certainly do magnificent things with parallel processing elements to service a video screen, but somebody has to control what is fed to whom, from where, and when. The same might be said of networking stream processors. What is happening is that the level of complexity is going up very rapidly, and it is rising nonlinearly. Looking at a finished system's level of complexity, there are two temptations: the first is to be astounded that all the stuff cobbled together - some call it engineering - works at all; the second, when viewing the system's remarkable flexibility, is to ask: "...is it alive? ...does it get hungry, or go to lunch?"
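The coordination problem can be sketched in a few lines; this is a toy illustration only, with a made-up work function standing in for a GPU kernel, stream processor, or smart peripheral.

```python
# Minimal sketch of "who's in charge": one host thread hands work to
# asynchronous workers and then reassembles the results in order.
from concurrent.futures import ThreadPoolExecutor, as_completed

def process_chunk(chunk_id):
    # Stand-in for a GPU kernel or peripheral processor doing real work.
    return chunk_id * 2

with ThreadPoolExecutor(max_workers=4) as pool:
    # The host decides what is fed to whom, from where, and when...
    futures = {pool.submit(process_chunk, i): i for i in range(8)}
    # ...and must also gather the asynchronous results back into order.
    results = {futures[f]: f.result() for f in as_completed(futures)}

print([results[i] for i in range(8)])
```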
The magic will be sentience - but is that possible? I don't know, perhaps one day, but I don't think that day is near. While I haven't been close to the semi industry for a while, for most of my adult life the average product life was 18 - 24 months. What that translates to is a continually changing panorama, that is, an environment that is never stable long enough to do anything too enduring, or to do too much damage. That's a good and a bad thing: bad choices, both HW and SW, are left behind, and new innovation is incorporated. When a machine is created that self-generates those 6 - 10 lines of code 24/7 and can be directed in what it creates, then everything has changed; then is the time to get scared. When that machine is given the capacity to set its own developmental direction, everything will be different. The magic will be sentience. When a stored-program machine becomes self-aware, should that ever happen, everything will change. When program bug fixes cease, and only enhancements become available, then is the time to start to worry.