
Not surprised though, with AMD being able to patch the Linux kernel and scheduler to its liking in order to squeeze out the maximum performance for its architecture, versus depending on Microsoft to have the good will to figure it out on their own dime.

I'd be curious whether the businesses buying these $20k+, 96-core HEDT workstations actually run Windows on them, making this a big enough problem for Microsoft to be worth addressing, or whether it's Linux all the way for them and Windows isn't even on the radar.

Anyone here in the know?

Also, obligatory: "But can it run Crysis?" (in software render)



I've worked at companies where we buy these types of workstations for FEA/CFD/ML.

If the company has an overbearing IT department, we run Windows for "security". At places where we move fast and break things, we run Ubuntu and IT tells us to manage it ourselves.

For anyone wondering why we don't use the cloud or a server: we do that too, but model setup, licensing, and small jobs are easier and quicker to do locally.


> "...depending on Microsoft to have the good will to figure it out on their own dime."

They don't have to figure it out on their own; everybody maintains close contact. For example, Intel has a field office literally one block away from Microsoft's main campus. One would assume AMD also has a field office similarly nearby. (EDIT: they do, a couple of blocks farther.)


One of the major markets for these would be visual art studios. They aren't generally Linux shops.


Funny you mention this, because a lot of big visual arts studios use Macs for artwork in the Adobe ecosystem and are heavily invested in Linux for their 3D rendering pipelines as they moved away from SGI/BSD, with Windows almost nowhere on the radar.

I'd estimate that big-name shops running Windows are a minority, with only smaller indie studios like Corridor Digital being all-in on Windows because it's a jack-of-all-trades, can-run-anything OS, and managing that one ecosystem without sysadmins and an internal IT department is a lot cheaper and easier for small businesses of amateur-professionals.


The majority of professional movie and television VFX work is now done on Windows and Linux computers, with Linux especially being used for the rendering portion of the process.

Mac was the dominant platform for a long time, but Windows caught up and zoomed past Mac about a decade ago. Same or better performance, cheaper hardware, cheaper software, easier to upgrade/repair, and more choice for all of the above.

There's a reason that Apple has to pay big bucks to Olivia Rodrigo and others to use Apple products to film and edit their music videos: it's because Apple fell out of favor with creatives and Apple is trying to buy its way back in.


I think Apple lost a lot of goodwill in the market with the disaster that was the initial release of Final Cut Pro X (and probably rightly so).

And now I see quite a few people moving away from Premiere to Resolve, which seems to take a pretty platform-agnostic stance.


My experience is exclusively with small studios, which were Windows shops.


Yup - a while back (not sure if it's changed much since), Weta Digital people would talk about being mostly Linux for modelling/rendering/etc. and heavily Mac for audio. Very little Windows.


"a lot" is nowhere near "all"


And none is nowhere near "a lot".


None is nowhere, a lot is somewhere, and all is everywhere. Sort of.


The improvements coming to Wayland's HDR/color management are likely to help with that: the features they're aiming for appear to beat Windows' more slapdash implementation, offering per-window color management where window contents are accurately tonemapped and composited within the widest color space the monitor supports.

Adobe would need to be incentivized to port their suite over for it to be taken seriously, but maybe Wine could bridge that gap at first.

Ah, the mythical Linux Desktop. One day...


Will Wayland be able to render mixed HDR and SDR content correctly, with e.g. an HDR video on YouTube rendering with extended range while the rest of the screen renders as normal?

Currently only macOS can do that. With Windows you have to choose between SDR and HDR display modes, which affects everything on screen regardless of content type and makes SDR content look dingy in HDR mode.


On that subject, does anyone know why shifting to HDR dims everything that way? My mental model is that SDR brightness goes from 0 to 100 and HDR brightness goes from -100 to 100, and that turning on HDR moves everything not HDR-aware down to the bottom of the brightness space.

I could look this up, but never think about it outside of conversations like this and figure it might be more fun to talk about it.


Consider that your display can only do 0-100 brightness (not really, but for the sake of argument).

In SDR, you map the full SDR range (also 0-100) to that 0-100.

When you add HDR, you’re now adding levels above 100 (let’s say 0-200).

If your display can only do up to 100, you now need to squeeze all the 0-100 stuff into 0-50. Or you get a display that can actually show 0-200.

Very few computer displays can go beyond a standard SDR range of around 500 nits.
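The squeeze described above can be sketched with toy numbers (the function name and all ranges here are illustrative assumptions from the comment, not from any display standard):

```python
# Toy model of SDR content being compressed when HDR headroom appears.

def map_to_display(signal, content_peak, display_peak):
    """Linearly scale a signal in 0..content_peak into 0..display_peak."""
    return signal * display_peak / content_peak

# SDR-only mode: SDR white (100) uses the display's full range.
sdr_only = map_to_display(100, content_peak=100, display_peak=100)    # 100.0

# Mixed mode: content now spans 0-200 but the display still tops out
# at 100, so the same SDR white lands at 50 and looks dimmer.
sdr_in_hdr = map_to_display(100, content_peak=200, display_peak=100)  # 50.0
```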


But why isn't SDR scaled to 0-200, then? That is, why isn't "fully bright SDR" mapped to "fully bright HDR"?


Well, it's pretty arbitrary. What's missing from most image (or sound) data is metadata saying what physical intensity the signal represents, i.e. how many nits (or decibels) should be emitted for a given value.

AFAIK, most encoding standards only define a relative interpretation of the values within one data set. And even if standards did have a way to pin absolute physical ranges, many applications and content producers would likely ignore this and do their own normalization anyway, based on preconceptions about typical consumers or consumer equipment.

To do "correct" mixing, you would need all content to be labeled with its intended normalization, so you can apply the right gain to place each source into the output. And of course there might be a need for user policy to adjust the mix. I think an HDR compositor ought to have per-stream gain adjustments, just like the audio mixer layer does.
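A sketch of that per-stream gain idea, in the spirit of an audio mixer. The function, the 203-nit SDR reference white, and the 1000-nit peaks are all assumptions for illustration, not any compositor's actual API:

```python
# Each stream declares what physical intensity its "1.0" is meant to be;
# the compositor scales it into the output's nit range, with a per-stream
# user gain knob, much like channel faders in an audio mixer.

def composite(streams, output_peak_nits):
    mixed = []
    for pixels, intended_peak_nits, user_gain in streams:
        scale = intended_peak_nits / output_peak_nits * user_gain
        mixed.append([p * scale for p in pixels])
    return mixed

streams = [
    ([0.5, 1.0], 203.0, 1.0),   # SDR source: its "1.0" is meant as ~203 nits
    ([0.5, 1.0], 1000.0, 1.0),  # HDR source: its "1.0" is meant as 1000 nits
]
mixed = composite(streams, output_peak_nits=1000.0)
# SDR white lands at 0.203 of the output range; HDR white at 1.0
```

Without the per-stream metadata (the intended peak), there is no principled way to pick the scale factor, which is the point the comment makes.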


What context are you talking about?

In single mode where it’s just SDR, it’s mapped to take the full range of the display up to whatever is deemed a comfortable cap.

In a mixed HDR/SDR mode, the H is the range above the S, so it doesn't make sense to scale it up. I prefer Apple's terminology of Extended Dynamic Range because it makes clearer that it's the range above the SDR range.

Now you could say that you intend for that SDR to be treated as HDR, but without extra information you don’t know what the scaling should be. Doing a naive linear scale will always look wrong.


Because Windows maps SDR content to the sRGB color space, which nobody except designers uses. Most monitors today ship with a much brighter, higher-contrast, vivid color profile by default. If you switch your monitor to its sRGB profile, you should see colors that look very similar to SDR content in Windows' HDR mode.

I don't like it either, but there is no way for Microsoft to choose a color profile that matches what every monitor looks like with HDR off, given how many monitor manufacturers there are. Choosing the safest option, sRGB, is understandable.


Hopefully, DCI-P3 will be the standard in the future.

I have a monitor with 187% sRGB, 129% Adobe RGB and 133% DCI-P3 gamut volume. But to get correct sRGB colors with maximum coverage on this monitor, I need to clamp the ICC profile via novideo_srgb. Without it, sRGB content looks oversaturated in the orange part of the spectrum.


It's more like SDR goes from 0 to 255 and HDR goes from 0 to 1024. In SDR mode, 255 = (say) 500 nits, while in HDR mode 1024 = 1000 nits and thus 255 ≈ 250 nits, so SDR content looks dimmer.
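Working through those illustrative numbers, following the comment's linear simplification (real HDR transfer functions like PQ are nonlinear):

```python
# Same code value, two modes: in HDR mode the code space stretches to a
# higher peak, so a fixed SDR code value maps to fewer nits.

def code_to_nits(code, code_max, peak_nits):
    return code / code_max * peak_nits

sdr_mode = code_to_nits(255, code_max=255, peak_nits=500)    # 500 nits
hdr_mode = code_to_nits(255, code_max=1024, peak_nits=1000)  # ~249 nits
```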


A better way to describe it, IMO:

SDR goes from 1 to 100. HDR goes from 0.01 to 100: twice as many orders of magnitude from bottom to top. So if you peg the top to max brightness in both cases, HDR looks brighter because the contrast is bigger.

(Note that this is an analogy. In other words, it's wrong, but it's one way of looking at it.)
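The "twice as many orders of magnitude" arithmetic, using the analogy's illustrative ranges:

```python
import math

# Dynamic range measured in orders of magnitude (log10 of top/bottom).
sdr_orders = math.log10(100 / 1)     # 2 orders of magnitude
hdr_orders = math.log10(100 / 0.01)  # ~4: twice as many
```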


I am super excited for this, Wayland seems extremely promising. I'll probably try Fedora out again soon - my last experience with Wayland was GNOME and it was super nice.


We sell systems based on these high-end workstations and yes they do run Linux, Rocky Linux 8 at the moment.

Still stuck on Xorg because reasons, but for HDR monitoring you're usually using a display interface card with SDI or HDMI outputs anyway.


Pretty sure Microsoft is very cooperative about working with AMD and others on this stuff; it may just take a little longer in this case.


> versus depending on Microsoft to have the good will to figure it out on their own dime.

Cutler stated in a recent interview that he had a 96-core machine as one of his two daily drivers. I wondered at the time if he pre-announced that CPU since the press about it seemed to reach me a few days later.


Epyc has had 96-core CPUs out for a bit over a year now, so probably that.


Epic Games mostly runs Windows on Threadrippers for Unreal Engine development. Compiling Unreal faster (or anything really, even Windows itself) is a compelling argument.


The portion running a hypervisor would also be interesting to know. That’s one hell of a processor.



