2000 Intel Pentium 4 1.5 GHz: PassMark score of 171, $819 at launch.
2008 Intel Atom Z510 1.10 GHz: PassMark score of 186, $45 at launch.
2010 Intel Core i7 970 3.20 GHz: PassMark score of 9954, $579.99 at launch.
GHz used to be a performance mark in itself; now things are more nuanced. I'd say Intel still did right by their estimate.
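A quick bit of back-of-the-envelope arithmetic makes that comparison concrete (the score-per-dollar framing is mine, using the scores and launch prices listed above):

```
# PassMark score per launch dollar for the three chips above.
chips = {
    "Pentium 4 1.5 GHz (2000)": (171, 819.00),
    "Atom Z510 1.10 GHz (2008)": (186, 45.00),
    "Core i7 970 3.20 GHz (2010)": (9954, 579.99),
}
for name, (score, price) in chips.items():
    print(f"{name}: {score / price:.2f} PassMark points per dollar")
```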
> Moore's Law states that the number of transistors in a common microprocessor will double every 18 months. As well, it can be applied to processor speed and many other computing/technology metrics.
As Gropo pointed out, there is in fact a point where Moore's law breaks down. So far since about 1975 it has held, and it will continue to until about 2004 or 2005. At that point, we run into an actual physical-laws-of-nature barrier.
As silicon-based transistors decrease in size, obviously, all parts of the transistor have to shrink. This includes the gap at the PN-junction. Once this gap reaches a certain size (currently estimated at approximately the width of five silicon atoms), quantum effects (strong force, weak force, et al) begin to overtake the electromagnetic force that allows the transistor to transist. In other words… it's no longer a transistor, just a really small piece of doped silicon that doesn't do much.
That is an absolute, no-way-around-it limit. After that we have only two choices: More transistors (bigger chips) or new technology.
Adding more transistors has the problem of adding heat, which means slowing the clock. And there will be a finite maximum for number of transistors as well…. these things have to operate in sync with each other, and at very high clock rates, propagation delay becomes an issue… that is, the information created on one side of the chip cannot be transmitted all the way across the chip within the space of a single clock cycle. Also, the areas of the chip near the clock generator will receive their clock pulses sooner than those far away. If the near-the-clock pieces rely on data produced by the far-from-clock pieces, your chip is in trouble. This is called “clock skew” and is a major design consideration for any chip built today… it only gets worse as clock speeds increase.
The point is, within ten years, we won't be using silicon-based computers. They'll be made obsolete by DNA/protein type bio-computers or maybe molecular computers.
- by MonkeyMan
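The propagation-delay claim in that comment is easy to sanity-check. A rough sketch (my numbers, not the commenter's, assuming on-chip signals move at about half the speed of light):

```
# How far can a signal travel in one clock cycle?
c = 3.0e8                       # speed of light, m/s
signal_speed = 0.5 * c          # assumed effective on-chip signal speed
for ghz in (1, 3, 5, 10):
    period_s = 1.0 / (ghz * 1e9)
    print(f"{ghz} GHz: ~{signal_speed * period_s * 1000:.0f} mm per cycle")
```

A desktop die is on the order of 10-20 mm across, so at several GHz a signal genuinely can fail to cross the whole chip within one cycle, which is why clock distribution and skew eat so much design effort.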
The first comment does, but the responses are more prescient:
Pullleeeeaze! What makes Moores 'Law' the end all and be all for computer advancements? It's not like the Law of gravity or thermodynamics for goodness sakes! Moores 'Law' has to break down at some point. There needs to be a whole new paradigm of microporcessor technonogy before we get to speeds of 128GB. It will either be optical or wave based (the rest of the thread is filled with snark about his "GB" typo)
then again,
By 2011 we will have implementation of Quantum Processing that will make the xHZ debate look like the colonists debating over sucession from the UK.
or how about
true, clunky beige boxes will be out of style. I was thinking more like 3 in. cube.
(would you settle for a MacBook Air?) Or there's also
As Gropo pointed out, there is in fact a point where Moore's law breaks down. So far since about 1975 it has held, and it will continue to until about 2004 or 2005. At that point, we run into an actual physical-laws-of-nature barrier.
As silicon-based transistors decrease in size, obviously, all parts of the transistor have to shrink. This includes the gap at the PN-junction. Once this gap reaches a certain size (currently estimated at approximately the width of five silicon atoms), quantum effects (strong force, weak force, et al) begin to overtake the electromagnetic force that allows the transistor to transist. In other words… it's no longer a transistor, just a really small piece of doped silicon that doesn't do much
which is true apart from the bit about 2005, because we're still very far away from having any components which are only 5 Si atoms across.
The point is, within ten years, we won't be using silicon-based computers. They'll be made obsolete by DNA/protein type bio-computers or maybe molecular computers
[takes a drink]
Do you know there is something called carbon nanotube? The nanotube is only 1 to 10 nanometer(1/1000 of a micron) in diameter and might be the second best conductor( next to superconductor) ever know to human beings. And yes, it can operate under room temperature. Also scientists are working on quantum computers. Just ask a scientist and he/she will tell you that a quantum computer will at least 10000 times faster than a current computer. Who cares about Intel's 10 GHz microprocessor?
C'mon, that's not fair. Sure, they haven't delivered on the speed front, but they've overcome it with multicore tech, which in the end means the same thing to the users.
At the cost of significantly increased work for the programmer, yeah.
I must admit I've lost track. What is the reason that clock speeds have stalled at 3 GHz? There was some fundamental physics problem happening, but I forget what it was.
I believe I have read, though please correct me if I'm wrong, that there is a problem with current printing methods. The way microchips are built is sort of like those overhead projectors we used to have in class: you black out certain places, and light shines through the rest. The spots where the light shines get etched away, and you have your transistor layout.
Only now the features are so close together that the light gets tangled up in quantum effects. So there is a scattering effect, and we can't make the transistors any smaller because the light is out of focus. The blurry edges caused by quantum dynamics seem to be holding us back.
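The "blurriness" being described is diffraction. As a rough illustration (my sketch, not the commenter's), the usual rule of thumb for the smallest printable feature is the Rayleigh criterion, CD ≈ k1·λ/NA:

```
# Minimum printable feature size, CD ~= k1 * wavelength / NA
# (k1 = 0.25 is close to the theoretical limit; numbers are illustrative).
def min_feature_nm(wavelength_nm, numerical_aperture, k1=0.25):
    return k1 * wavelength_nm / numerical_aperture

print(min_feature_nm(193, 1.35))   # ~36 nm: 193 nm immersion lithography
print(min_feature_nm(13.5, 0.33))  # ~10 nm: EUV
```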
That's one of the reasons why we're having difficulty making 'em smaller (though semiconductor engineers are very clever and keep coming up with new methods on that front), but it doesn't have much to do with making 'em faster. We've been making 'em smaller and smaller all the time, but clock speed hasn't gone up since ~2004.
And yet the latest i7s run circles around the CPUs of that time period, even when running code that is strictly single-threaded.
There have been tons of performance wins over the last 10 years that aren't just from adding more cores, and Intel's predictions were more or less right on the money, if you overlook the marketing mistake, common at that time, of using GHz as a shorthand for overall chip performance.
> At the cost of significantly increased work for the programmer, yeah.
It's more a language issue. Parallelism is tricky because of all the unexpected interactions between multiple executions referring to the same data. Functional programming (FP) eases that.
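A minimal sketch of that point (my example, not the commenter's): when the worker function is pure and nothing shares mutable state, the pieces can run on separate cores without locks and the result stays deterministic.

```
from concurrent.futures import ProcessPoolExecutor

def score(chunk):
    # Pure function: the result depends only on the input chunk.
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, workers=4):
    chunks = [data[i::workers] for i in range(workers)]  # no shared state
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(score, chunks))

if __name__ == "__main__":
    print(parallel_sum_of_squares(range(1_000_000)))
```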
The physics problem is heat dissipation. When you double the clock, the heat output increases 4x (IIRC, it's been a long time and, by the time I was graduating, we thought Moore's Law would hit us at 100 MHz).
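For reference, the relation usually quoted for dynamic (switching) power is P ≈ α·C·V²·f, and since a higher clock generally also needs a higher supply voltage, heat climbs much faster than the clock does. A sketch with purely illustrative numbers:

```
# Dynamic switching power: P ~= alpha * C * V^2 * f
# (alpha = activity factor, C = switched capacitance, V = supply voltage,
#  f = clock frequency; the values below are made up for illustration.)
def dynamic_power(alpha, C, V, f):
    return alpha * C * V**2 * f

base     = dynamic_power(0.2, 1e-9, 1.0, 2e9)   # 2 GHz at 1.0 V
same_v   = dynamic_power(0.2, 1e-9, 1.0, 4e9)   # 4 GHz, same voltage
higher_v = dynamic_power(0.2, 1e-9, 1.3, 4e9)   # 4 GHz with a voltage bump
print(same_v / base, higher_v / base)            # 2.0 and ~3.4
```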
I'm not an anything (maybe a hacker), but my understanding is that it's centred on power and current materials. I understand it as follows:
Increasing clock speed means increasing power usage. Smaller designs mean that quantum tunnelling increases the probability of electron leakage, which requires even more power; conversely, more power makes electrons more likely to leak, making CPUs less efficient and hotter. Performing non-reversible computation increases entropy as information is converted to heat, and more heat means less efficiency and less reliability. So with current materials, lots of smaller transistors operating at high clock speeds and performing non-reversible computations mean a lot of power and heat. My take is that they are trading away clock speed while keeping the smaller sizes and a slightly more 3D architecture to keep the gains coming.
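The "information converted to heat" bit is Landauer's principle (my gloss; the commenter doesn't name it): erasing one bit dissipates at least k_B·T·ln 2 of energy. A quick sketch of how tiny that floor is:

```
import math

k_B = 1.380649e-23            # Boltzmann constant, J/K
T = 300.0                     # roughly room temperature, K
min_energy_per_bit = k_B * T * math.log(2)
print(min_energy_per_bit)     # ~2.9e-21 J per erased bit
```

Real chips dissipate many orders of magnitude more than that per bit, so today's heat problem is switching power, not the Landauer floor.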
I'm no expert on this subject, but I think Moore's Law is more or less still true, as we are getting more and more cores. Note that Moore's Law never stated that clock speed doubles every 18 months. In fact, more transistors can cause a decrease in clock speed because of more gate delays.
I find it amusing that they predicted 5 GHz chips in 2005, but that pretty much tanked with their inability to make a chip that could both run above 4 GHz and not desolder its socket from the motherboard.
If you recall, however, the last time Intel became obsessed with clock speed, it gave AMD a chance to deliver the Opteron and a 64-bit x86 extension, so by all means I'm egging Intel on! :-)
Read "The Pentium Chronicles" by Bob Colwell. The Pentium 4 is the time frame when marketing took over Intel and fed this Ghz myth when many knew it was a doomed long term path.
It depends on what you want. If you want to move your furniture from house A to house B, I grant you I can move it in half the time with twice as many cars.
If the workload can be divided, multi-core is a solution.
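The usual way to put a number on "if the workload can be divided" is Amdahl's law (not named in the thread): the speedup is capped by whatever fraction of the job stays serial.

```
def amdahl_speedup(parallel_fraction, cores):
    # The serial part runs as-is; only the parallel fraction gets divided.
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

print(amdahl_speedup(0.95, 2))    # ~1.9x with two cars/cores
print(amdahl_speedup(0.95, 64))   # ~15x, nowhere near 64x
```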
I wouldn't rule that out with that Intel "3D transistor" thingie. Power dissipation goes down, clock can go up. And the device also gets smaller, so more cores can be put on the same chip.
Not only that, but we are only starting to play with memristors. Silicon still has some mileage.
I would rule it out. The 3D transistor design buys us just a few percent, but really, we're starting to hit the walls of physics with our current technology.