Hacker News | past | comments | ask | show | jobs | submit | login
Intel predicts 10GHz chips by 2011 (from 2000) (geek.com)
21 points by biotech on May 29, 2011 | hide | past | favorite | 30 comments


    2000 Intel Pentium 4 1.5 GHz: Passmark score of 171
    Price of $819 at launch.

    2008 Intel Atom Z510 1.10GHz: Passmark score of 186
    Price of $45 at launch.

    2010 Intel Core i7 970 3.20GHz: Passmark score of 9954
    Price of $579.99 at launch.
GHz used to be a performance mark in itself; now things are more nuanced. I'd say Intel still did right by their estimate.
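A back-of-envelope comparison using only the Passmark scores and launch prices quoted above (a rough sketch, not a benchmark methodology) makes the point about clock speed vs. performance:

```python
# Score-per-GHz and score-per-dollar from the figures quoted above.
# Score-per-GHz shows how little clock rate alone says about performance.
chips = {
    "Pentium 4 1.5 GHz (2000)": {"score": 171, "ghz": 1.5, "price": 819.00},
    "Atom Z510 1.1 GHz (2008)": {"score": 186, "ghz": 1.1, "price": 45.00},
    "Core i7 970 3.2 GHz (2010)": {"score": 9954, "ghz": 3.2, "price": 579.99},
}

for name, c in chips.items():
    per_ghz = c["score"] / c["ghz"]
    per_dollar = c["score"] / c["price"]
    print(f"{name}: {per_ghz:.0f} points/GHz, {per_dollar:.2f} points/$")
```

The i7 delivers roughly 27x more points per GHz than the Pentium 4, on barely twice the clock.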


> Moore's Law states that the number of transistors in a common microprocessor will double every 18 months. As well, it can be applied to processor speed and many other computing/technology metrics.

That's not Moore's law any longer.


One guy had his head on straight:

As Gropo pointed out, there is in fact a point where Moore's law breaks down. So far since about 1975 it has held, and it will continue to until about 2004 or 2005. At that point, we run into an actual physical-laws-of-nature barrier. As silicon-based transistors decrease in size, obviously, all parts of the transistor have to shrink. This includes the gap at the PN-junction. Once this gap reaches a certain size (currently estimated at approximately the width of five silicon atoms), quantum effects (strong force, weak force, et al) begin to overtake the electromagnetic force that allows the transistor to transist. In other words… it's no longer a transistor, just a really small piece of doped silicon that doesn't do much.

That is an absolute, no-way-around-it limit. After that we have only two choices: More transistors (bigger chips) or new technology.

Adding more transistors has the problem of adding heat, which means slowing the clock. And there will be a finite maximum for number of transistors as well…. these things have to operate in sync with each other, and at very high clock rates, propagation delay becomes an issue… that is, the information created on one side of the chip cannot be transmitted all the way across the chip within the space of a single clock cycle. Also, the areas of the chip near the clock generator will receive their clock pulses sooner than those far away. If the near-the-clock pieces rely on data produced by the far-from-clock pieces, your chip is in trouble. This is called “clock skew” and is a major design consideration for any chip built today… it only gets worse as clock speeds increase.

The point is, within ten years, we won't be using silicon-based computers. They'll be made obsolete by DNA/protein type bio-computers or maybe molecular computers. - by MonkeyMan

Well... except for that last part.
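The clock-skew and propagation-delay point in the quote above is easy to put numbers on. A minimal sketch, assuming on-chip signals travel at roughly half the speed of light (a rough assumption; real wire delay is worse):

```python
# How far can a signal travel in one clock cycle? Even at the vacuum
# speed of light, a 10 GHz period covers only 3 cm; at an assumed
# ~0.5c on-chip, the reachable distance shrinks to ~15 mm, comparable
# to the size of the die itself.
C = 3.0e8           # speed of light in vacuum, m/s
SIGNAL_SPEED = 0.5  # fraction of c for an on-chip signal (rough assumption)

def reach_per_cycle_mm(clock_hz: float) -> float:
    """Distance a signal can cover in one clock period, in millimetres."""
    period_s = 1.0 / clock_hz
    return C * SIGNAL_SPEED * period_s * 1000.0

for ghz in (1, 3, 10):
    print(f"{ghz} GHz: {reach_per_cycle_mm(ghz * 1e9):.1f} mm per cycle")
```

At 10 GHz a signal reaches only ~15 mm per cycle under these assumptions, which is why cross-chip synchronization gets harder as clocks rise.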


Intel beat that limit with their (newly announced) vertical or "3D" transistor tech, which has taken about 10 years to develop: http://www.anandtech.com/show/4313/intel-announces-first-22n... and a silly video to go with it http://www.youtube.com/watch?v=YIkMaQJSyP8


I like the comments after the article: many people suggesting that Intel is selling itself short with this prediction.


The first comment does, but the responses are more prescient:

Pullleeeeaze! What makes Moores 'Law' the end all and be all for computer advancements? It's not like the Law of gravity or thermodynamics for goodness sakes! Moores 'Law' has to break down at some point. There needs to be a whole new paradigm of microporcessor technonogy before we get to speeds of 128GB. It will either be optical or wave based (the rest of the thread is filled with snark about his "GB" typo)

then again,

By 2011 we will have implementation of Quantum Processing that will make the xHZ debate look like the colonists debating over sucession from the UK.

or how about

true, clunky beige boxes will be out of style. I was thinking more like 3 in. cube.

(would you settle for a MacBook Air?) Or there's also

As Gropo pointed out, there is in fact a point where Moore's law breaks down. So far since about 1975 it has held, and it will continue to until about 2004 or 2005. At that point, we run into an actual physical-laws-of-nature barrier. As silicon-based transistors decrease in size, obviously, all parts of the transistor have to shrink. This includes the gap at the PN-junction. Once this gap reaches a certain size (currently estimated at approximately the width of five silicon atoms), quantum effects (strong force, weak force, et al) begin to overtake the electromagnetic force that allows the transistor to transist. In other words… it's no longer a transistor, just a really small piece of doped silicon that doesn't do much

which is true apart from the bit about 2005, because we're still very far away from having any components which are only 5 Si atoms across.
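For scale, a quick sketch, taking ~0.235 nm as the approximate Si-Si bond length (an assumption for illustration): five silicon atoms span only about a nanometre, while 2011's smallest production node was still over an order of magnitude larger.

```python
# Rough scale check: how wide is a "five silicon atom" gap?
SI_SPACING_NM = 0.235  # approximate Si-Si bond length in nm (assumption)

five_atom_gap_nm = 5 * SI_SPACING_NM
print(f"Five-atom gap: ~{five_atom_gap_nm:.2f} nm")  # ~1.2 nm

# Intel's newest process node in 2011, for comparison:
node_nm = 22
print(f"The 22 nm node is ~{node_nm / five_atom_gap_nm:.0f}x larger")
```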

The point is, within ten years, we won't be using silicon-based computers. They'll be made obsolete by DNA/protein type bio-computers or maybe molecular computers

[takes a drink]

Do you know there is something called carbon nanotube? The nanotube is only 1 to 10 nanometer(1/1000 of a micron) in diameter and might be the second best conductor( next to superconductor) ever know to human beings. And yes, it can operate under room temperature. Also scientists are working on quantum computers. Just ask a scientist and he/she will tell you that a quantum computer will at least 10000 times faster than a current computer. Who cares about Intel's 10 GHz microprocessor?

[takes another drink]


C'mon, that's not fair. Sure, they haven't delivered on the speed front, but they've overcome it with multicore tech, which in the end means the same thing to the users.


At the cost of significantly increased work for the programmer, yeah.

I must admit I've lost track. What is the reason that clock speeds have stalled at 3 GHz? There was some fundamental physics problem happening, but I forget what it was.


I believe I have read, though please correct me if I'm wrong, that there is a problem with current printing methods. The way microchips are built is a bit like those overhead projectors we used to have in class: you black out certain places, and light shines through the rest. The spots where the light shines get etched away, and you have your transistor layout.

Only now the features are so close together that the light diffracts and scatters, so we can't make the transistors any smaller: the light is out of focus. The blurry edges seem to be holding us back.
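That blur is diffraction, and the usual rule of thumb for it is the Rayleigh criterion: minimum feature ≈ k1 · λ / NA. A sketch with typical 193 nm immersion-lithography numbers (the k1 and NA values here are ballpark assumptions, not a specific fab's figures):

```python
# Rayleigh criterion: the smallest printable feature scales with the
# light's wavelength divided by the numerical aperture of the optics,
# times a process-dependent factor k1.
def min_feature_nm(wavelength_nm: float, k1: float, na: float) -> float:
    return k1 * wavelength_nm / na

# ArF excimer laser (193 nm) with water-immersion optics (NA ~1.35)
# and an aggressive process factor k1 ~0.3 -- ballpark figures.
cd = min_feature_nm(193, k1=0.3, na=1.35)
print(f"~{cd:.0f} nm minimum printable feature")  # roughly 43 nm
```

Getting below that takes tricks like double patterning, which is part of why shrinks kept coming despite the wavelength staying put.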


That's one of the reasons why we're having difficulty making 'em smaller (though semiconductor engineers are very clever and keep coming up with new methods on that front), but it doesn't have much to do with making 'em faster. We've been making 'em smaller and smaller all the time, but clock speed hasn't gone up since ~2004.


"but clock speed hasn't gone up since ~2004."

And yet the latest i7s run circles around the CPUs of that time period, even when running code that is strictly single-threaded.

There have been tons of performance wins over the last 10 years that aren't just from adding more cores, and Intel's predictions were more or less right on the money, if you overlook the marketing mistake, common at that time, of using GHz as shorthand for overall chip performance.
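Single-threaded performance is roughly instructions-per-cycle times clock rate, and most of the post-2004 gains came from IPC. A sketch with purely illustrative IPC values (assumptions for the comparison, not measured figures):

```python
# Performance ~ IPC x clock. The clock barely moved between a late
# Pentium 4 and a 2011 Core i7, but IPC improved several-fold.
# The IPC values below are illustrative assumptions, not measurements.
def perf(ipc: float, clock_ghz: float) -> float:
    return ipc * clock_ghz  # effective billions of instructions per second

p4 = perf(ipc=0.8, clock_ghz=3.4)   # deep pipeline, frequent stalls
i7 = perf(ipc=2.5, clock_ghz=3.4)   # wider issue, better caches and prediction

print(f"speedup with no clock change: {i7 / p4:.1f}x")
```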


> At the cost of significantly increased work for the programmer, yeah.

It's more a language issue. Parallelism is tricky because of all the unexpected interactions between multiple executions referring to the same data. Functional programming (FP) eases that.

The physics problem is heat dissipation. When you double the clock, the heat output increases 4x (IIRC, it's been a long time and, by the time I was graduating, we thought Moore's Law would hit us at 100 MHz).
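The scaling usually quoted is dynamic power P ≈ C·V²·f, and pushing the clock higher generally forces the voltage higher too, so power grows much faster than linearly with frequency. A minimal sketch, assuming voltage scales in proportion to frequency (an illustrative assumption):

```python
# Dynamic CMOS power: P = C * V^2 * f, with C the switched capacitance.
# If reaching a higher clock also requires proportionally higher voltage
# (an illustrative assumption), doubling f costs ~8x the power.
def dynamic_power(c_farads: float, v_volts: float, f_hz: float) -> float:
    return c_farads * v_volts**2 * f_hz

base = dynamic_power(1e-9, 1.0, 2e9)     # baseline: 1.0 V at 2 GHz
doubled = dynamic_power(1e-9, 2.0, 4e9)  # 2x clock, 2x voltage
print(f"power ratio: {doubled / base:.0f}x")  # 8x
```

That cubic-ish growth in heat, with no cheap way to remove it, is the wall clock speeds ran into.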


I'm not an expert (maybe a hacker), but my understanding is that it's centred on power and current materials. I understand it as follows:

Increasing clock speed means increasing power usage. Smaller designs mean that quantum tunnelling increases the probability of electron leakage, requiring even more power; conversely, more power makes electrons more likely to leak, making CPUs less efficient and hotter. Performing irreversible computation increases entropy as information is converted to heat, and more heat means less efficiency and less reliability. So with current materials, more and smaller transistors operating at high clock speeds performing irreversible computations means a lot of power and heat. My take is that they're trading away clock speed, keeping the size shrinks, and using a slightly more 3D architecture to keep the gains up.


I believe the reason is memory latency, and the sequential nature of CPUs. http://en.wikipedia.org/wiki/SDRAM_latency

I'm no expert on this subject, but I think Moore's Law is more or less still true, as we are getting more and more cores. Note that Moore's Law never stated that clock speed doubles every 18 months. In fact, more transistors can cause a decrease in clock speed because of more gate delays.

http://en.wikipedia.org/wiki/Gate_delay http://en.wikipedia.org/wiki/Cpu#Clock_rate

Please correct me if I'm wrong.


I find it amusing that they predicted 5 GHz chips in 2005, but that pretty much tanked with their inability to make a chip that could run above 4 GHz without desoldering its socket from the motherboard.

If you will recall, however, the last time Intel became obsessed with clock speed, it gave AMD a chance to deliver us the Opteron and a 64-bit x86 extension, so by all means I'm egging Intel on! :-)


To be fair, the best thing about 10 Ghz chips is that there's 10 of them.


Read "The Pentium Chronicles" by Bob Colwell. The Pentium 4 is the time frame when marketing took over Intel and fed this GHz myth, when many knew it was a doomed long-term path.


An estimate over 11 years that's off by a factor of three (or two, for IBM's z196 processor at 5 GHz) is good.


Not when they were already making 1.5 GHz chips in 2000 (http://en.wikipedia.org/wiki/Pentium_4, see Willamette).


It has four cores. I would count it as a 20 GHz unit.

Seriously, even the "mid 2011: 128 GHz" is not that far off: how many cores are in a gaming-heavy GPU?


I wouldn't, because not all parallel algorithms yield a linear speed-up with respect to cores.

http://en.wikipedia.org/wiki/Parallelization#Amdahl.27s_law_...
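Amdahl's law quantifies this: with parallel fraction p of the work and n cores, the speedup is 1 / ((1 - p) + p/n), so counting cores as multiplied gigahertz only works when p is essentially 1. A quick sketch:

```python
# Amdahl's law: speedup from n cores when a fraction p of the work
# parallelizes perfectly and the remaining (1 - p) stays serial.
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

# Even 99%-parallel code falls short of a 4x speedup on 4 cores.
for p in (0.5, 0.9, 0.99):
    print(f"p={p}: 4 cores -> {amdahl_speedup(p, 4):.2f}x")
```

And as n grows, the speedup caps out at 1 / (1 - p) no matter how many cores you add.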


No, you don't go twice as fast with two cars.


It depends on what you want. If you want to move your furniture from house A to house B, I grant you I can move it in half the time with twice as many cars.

If the workload can be divided, multi-core is a solution.


Will 10 GHz be possible in 2020?


I wouldn't rule that out with that Intel "3D transistor" thingie. Power dissipation goes down, clock can go up. And the device also gets smaller, so more cores can be put on the same chip.

Not only that, but we are only starting to play with memristors. Silicon still has some mileage.


I would rule it out. The 3D transistor design buys us just a few percent but really, we're starting to hit the walls of physics with our current technology.


IBM already has 5 GHz chips. Do they know some physics Intel doesn't?


Not really. They do have some cool fabrication tech though. They also run a different ISA and different pipeline which can make a world of difference.

Really though, I wouldn't expect them to hit 10GHz anytime in the foreseeable future.


Don't their chips use lower voltages? i.e., lower heat, but less distance within the circuit path that a given cycle can take.


We got to 5GHz, that's pretty close in my book.



