Hacker Times | trentnelson's comments

WaitForMultipleObjects is fascinating behind the scenes. A single thread can wait on up to 64 independent events, which is done by plumbing the KTHREAD data structure with literally 64 slots for dispatcher wait blocks, plus all the supporting Ke/dispatcher logic in the kernel.

There’s never been a POSIX equivalent to this. It requires sophisticated kernel support, and the same semantics can’t be achieved in user space alone.


Yeah I was wondering if some native Linux apps might want to use it, since it is clearly useful and hard to emulate.


Linux's native semaphores are enough; Linux has managed to be very performant without it. That feature seems way too over-engineered for little gain.

This comes up often, but what can it do that poll can't?


Reading the linked thread (https://hackertimes.com/item?id=47511778), I believe the answer is "atomically acquire multiple objects". The link states they try to emulate it by performing a poll and then a read, but the gap between those results in a race, which is a terrible thing to have in a synchronisation primitive.

There was also something about needing to back out if any of the reads fails to acquire, which also sounds nasty.


Great post.

Ah, interesting, so WFMO does both the wait and the acquire!

When using eventfd it is indeed annoying having to both poll and later read to disarm the object (there are epoll tricks that can be used but are not generalizable).

Signal+wait is also a primitive that is hard to implement atomically on POSIX.


When the PDIMMs were used with an appropriate file system + kernel, it was pretty cool. NTFS + DAX + kernel support yielded a file system where mmap’ing didn’t page fault: the file content is already there, instantly.

So if you had mmap heavy read/write workloads… you could do some pretty cool stuff.


Well now I’m curious how they did it in the 90s. Some poor schmo doing pixel by pixel font creation?


Vector tracing was a thing way back, and from there, it was probably some simple programming to make a font out of a number of vector glyph images.


If you’ve got an existing paragraph written that you just know could be rephrased more eloquently, and can describe the type of rephrasing/restructuring you want… LLMs absolutely slap at that.


I mean to be fair, WSL1 and WSL2 are extremely successful engineering efforts by Microsoft. I can’t imagine having to go back to the Cygwin days.


I'm one of the few I think who really liked Cygwin. Far from perfect of course, but I even still prefer it to WSL depending on what I'm doing.


I finished this article in February this year, just before joining NVIDIA. It didn't get officially published then for... reasons. Posting now despite some of the information being a little out of date as I still think the content might be useful to others.

Tried to make the article as readable as possible on mobile, tablet, and desktop. Mobile necessitated a smaller font size for the code to obviate the need for horizontal scrolling.

Light/dark mode is supported, and the images are even cognizant of the selected mode!

I am doing a talk at PyData Seattle this year (Nov 7-9) focused on this topic, so any feedback regarding additional areas of interest would be appreciated.


Loving your work, as always!


Oh man, the Abit motherboards! That takes me back. How much did this cost and at what time? Presume very late 90s.


Looks like ‘97. Unfortunately I can’t find the receipts! If I had to guess, I think I’d say somewhere around $2000 in all. That computer lasted me a LONG time. When it was done being a desktop, it became a Linux server until I sold it around 2009. Sometimes I wish I’d kept it: the Pentium 2 was such an important CPU and I smile every time I see the one on display in the Computer History Museum.


I remember my first job in 2000, straight out of 1.5 years of college, getting to play directly with Digital UNIX and Alpha processors! The Alpha 21264 was a beast at the time.


Based on an earlier comment, I think the person you're replying to is the author of aider.


It’s insane how hard hovering is. I had about 35 hours of fixed wing time, and treated myself to a helicopter lesson for my birthday.

Hovering was so humbling! You’d be stable for a few seconds and then oops now we’re suddenly crabbing backwards whilst rolling laterally whilst exacerbating everything with pilot-induced oscillations in every conceivable axis of movement.

Having to constantly adjust three inputs whenever the external environment changes (ie wind, gusts), or any time any one of the three inputs changes… it absolutely requires some new neural pathways to be forged!

I flew with Patty Wagstaff many years later and even she admitted hovering was so hard, to the point it looked like she wasn’t going to be able to proceed with her rotor license (before it all clicked).


> so hard, to the point it looked like she wasn’t going to be able to proceed with her rotor license (before it all clicked)

Yeah, I think we were all convinced that we were going to wash out of flight school in the first few weeks. Hovering was not something that you could see yourself gradually getting better at, so it felt impossible right up until it wasn't. It really did just "click" one day. Almost two decades later, I still have a vivid memory of the very moment that I realized I had full control of the aircraft when picking it up from the ground.

When my buddy and I were telling an instructor pilot one day how we felt (like we'd never be able to hover), he wisely pointed out that the flight school syllabus had a certain number of hours for a reason. It had been refined over the past 50 years, so they knew exactly how many hours were needed, and if things "clicked" for us ahead of that schedule it would mean that time and money were being wasted.

