
What on earth is this article talking about? Sparsity and control flow are horrendous and basically the two things a gpu wants nothing to do with. And graph algorithms generally run on CPUs in supercomputers, not GPUs, for precisely those reasons.

The only way any of this makes sense is if the article is talking specifically about graph processors (e.g. the Cray Eldorado, which, to be fair, is mentioned), but there are very few commercial graph processors and even those are shrinking in importance. The number one parallel processor is the GPU, the only one with any chance of making it into the mainstream, and even that is far from mainstream. And the GPU hates control flow and sparsity.
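To make the control-flow objection concrete, here is a toy model (mine, not from the article) of why a lockstep SIMD/GPU warp pays for both sides of a divergent branch instead of just the one each lane takes:

```python
import numpy as np

def simd_branch_cost(predicates, cost_if=10, cost_else=10):
    """Cycles a lockstep 'warp' spends on an if/else.

    In a SIMD group, if any lane takes a path, the whole group
    marches through that path (inactive lanes are masked off), so
    divergent branches cost the SUM of both paths, not the max.
    """
    p = np.asarray(predicates, dtype=bool)
    cost = 0
    if p.any():       # some lane takes the 'if' path
        cost += cost_if
    if (~p).any():    # some lane takes the 'else' path
        cost += cost_else
    return cost

uniform = simd_branch_cost([True] * 32)         # all lanes agree -> 10
divergent = simd_branch_cost([True, False] * 16)  # lanes disagree -> 20
```

Real hardware has predication and reconvergence machinery, but the serialization of divergent paths is the basic cost.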



if you have machines that do grids well (vectors, simd), and you have a problem that you can get into that form, then clearly you use them. it's actually kind of stunning how many things you wouldn't think could be phrased that way can be.
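a tiny example of the kind of rephrasing i mean (numpy standing in for the vector hardware; my sketch, not from the talk):

```python
import numpy as np

# scalar version: a data-dependent branch per element
def relu_loop(xs):
    out = []
    for x in xs:
        out.append(x if x > 0 else 0.0)
    return out

# grid version: the branch disappears into one vector operation,
# exactly the shape a simd machine wants
def relu_vec(xs):
    return np.maximum(np.asarray(xs, dtype=float), 0.0)
```

same answer, but the second form has no per-element control flow left for the hardware to choke on.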

i'm extrapolating from the talk, but i think the point is that architectures that can tolerate dynamic flow control (mimd) are more generally performant than ones that can't. and while no, they aren't mainstream, I think the historical jury is still out.

since the cost of keeping an instruction pointer per unit is (largely) absorbed by hardware, it's pretty easy to argue that they should win in the limit. unfortunately there are other costs: the memory interface is vastly different, and caching becomes interesting and potentially prohibitive.

but is mimd dead forever because gpus can be used for some things and are clearly the cheapest ops you can buy today?


No, MIMD is not dead, for example, see the Rex Neo or the Adapteva Epiphany, two of my favorite processors. However, when you say that something works well on "massively parallel processors", you have an obligation to specify that you mean MIMD processors and/or graph processors, and SPECIFICALLY NOT A GPU.

Because the programming principles espoused in this article are antithetical to what you should do on a GPU, which is definitely the de facto "massively parallel processor" that would first come to people's mind.


ok, so we're just arguing about terminology and expectation. maybe that reinforces Feo's point.

it is worth noting that the El Dorado started out as a general purpose computer, then a general purpose scientific computer, and then later as a "graph processor" in an XT frame because the MTA on its own was market-dead. and that Feo has been a big supporter since at least stage 2.

maybe it's exactly this 'parallelism means GPU' assumption that he's trying to argue against. I agree the overall arc of that point didn't really follow through in his talk, or in someone's summary of that talk.

'mpp' was certainly a label adopted by TMC's CM-1 and CM-2, which were pretty damn simd. it was also used for the cray T3E, which was markedly not. the mta had 128 threads per cpu, and a large enough instantiation could have had a few thousand hardware threads. later unbuilt designs backed the thread state in dram and could have gotten pretty damn fat. arguably 'massive' should be squishily about concurrency width.


Well the article is from 2010, which was just the beginning of the GPU boom, and does not mention GPUs at all...


GPU -> parallelism -> fast linear algebra

graphs -> linear algebra


the second line could use some explanation, assuming I don't know either very well.
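a rough sketch of the 'graphs -> linear algebra' arrow (my example, not from the thread): one BFS level is a matrix-vector product over the graph's adjacency matrix, with plus/times playing the role of or/and:

```python
import numpy as np

# adjacency matrix: A[i, j] = 1 means an edge i -> j
A = np.array([[0, 1, 0, 0],
              [0, 0, 1, 1],
              [0, 0, 0, 0],
              [0, 0, 0, 0]])

frontier = np.array([True, False, False, False])  # BFS from vertex 0
visited = frontier.copy()
while frontier.any():
    # neighbors of the current frontier, via a matrix-vector product
    frontier = ((A.T @ frontier) > 0) & ~visited
    visited |= frontier
# visited now marks every vertex reachable from 0
```

swap in a sparse A and a boolean semiring and this is roughly the GraphBLAS view of graph algorithms, which is how graphs end up as (sparse) linear algebra.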



