Not surprising though, given that AMD can patch the Linux kernel and scheduler to its liking to squeeze the maximum performance out of its architecture, versus depending on Microsoft to have the goodwill to figure it out on their own dime.
I'd be curious if the businesses buying these $20k+, 96-core, HEDT workstations actually do run Windows on them in order for this to be that big of a problem for Microsoft worth addressing, or if it's Linux all the way anyway for them so Windows isn't even on the radar.
Anyone here in the know?
Also, obligatory: "But can it run Crysis?" (in software render)
I've worked at companies where we buy these types of workstations for FEA/CFD/ML.
If the company has an overbearing IT department, we run Windows for "security". At places where we move fast and break things, we run Ubuntu and IT tells us to manage it ourselves.
For anyone wondering why we don't use the cloud or a server: we do that too, but model setup, licensing, and small jobs are easier and quicker to do locally.
> "...depending on Microsoft to have the good will to figure it out on their own dime."
They don't have to figure it out on their own; everybody maintains close contact. For example, Intel has a field office literally one block away from Microsoft's main campus.
One would assume AMD also has a field office similarly nearby. (EDIT: they do, a couple of blocks further.)
Funny you mention this, because a lot of big visual-arts studios use Macs for artwork in the Adobe ecosystem and are heavily invested in Linux for the 3D rendering pipelines, having moved away from SGI/BSD with Windows almost nowhere on the radar.
I'd estimate the big-name shops running Windows are a minority, with only smaller indie studios like Corridor Digital being all-in on Windows because it's a jack-of-all-trades, can-run-anything OS, and managing that one ecosystem without sysadmins and an internal IT department is a lot cheaper and easier for small businesses of amateur-professionals.
The majority of professional movie and television VFX work is now done on Windows and Linux computers, with Linux especially being used for the rendering portion of the process.
Mac was the dominant platform for a long time, but Windows caught up and zoomed past Mac about a decade ago. Same or better performance, cheaper hardware, cheaper software, easier to upgrade/repair, and more choice for all of the above.
There's a reason that Apple has to pay big bucks to Olivia Rodrigo and others to use Apple products to film and edit their music videos: it's because Apple fell out of favor with creatives and Apple is trying to buy its way back in.
Yup - A while back (not sure if changed much since), Weta Digital people would talk about being pretty much mostly Linux for modelling/rendering/etc and heavily Mac for audio. Very little Windows.
The improvements coming to Wayland's HDR/color management are likely to help with that: the features they're aiming for appear to beat Windows' more slapdash implementation, with per-window color management where window contents are accurately tonemapped and composited within the widest color space the monitor supports.
Adobe would need to be incentivized to port their suite over for it to be taken seriously, but maybe Wine could bridge that gap at first.
Will Wayland be able to render mixed HDR and SDR content correctly, with e.g. an HDR video on YouTube rendering with extended range while the rest of the screen renders as normal?
Currently only macOS can do that. With Windows you have to choose between SDR and HDR display modes which affects everything on screen regardless of type, which makes SDR content look dingy in HDR mode.
On that subject, anyone know why shifting to HDR dims everything that way? My mental model of it is that SDR brightness goes from 0 to 100, and HDR brightness goes from -100 to 100, and that turning on HDR moves everything not HDR-aware down to the bottom of the brightness space.
I could look this up, but never think about it outside of conversations like this and figure it might be more fun to talk about it.
Well, it's pretty arbitrary. What is missing from most image (or sound) data is metadata to say what physical intensity is represented by the signal. I.e. how many nits (or decibels) should be emitted for this signal.
AFAIK, most encoding standards only define a relative interpretation of the values within one data set. And even if standards did have a way to pin absolute physical ranges, many applications and content producers would likely ignore this and do their own normalization anyway, based on preconceptions about typical consumers or consumer equipment.
To have a "correct" mixing, you would need all content to be labeled with its intended normalization, so that you can apply the right gain to place each source into the output. And of course there might be a need for user policy to adjust the mix. I think an HDR compositor ought to have gain adjustments per stream, just like the audio mixer layer.
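The per-stream gain idea the parent describes can be sketched in a few lines. This is a hypothetical toy, not any real compositor's API: `reference_nits` stands in for the missing normalization metadata, and `gain` is the hypothetical per-stream slider, analogous to an audio mixer channel.

```python
def composite(streams, display_peak_nits=1000.0):
    """Map each stream's relative signal into absolute nits,
    applying a per-stream gain, then clip to the display's range.

    streams: list of (signal, reference_nits, gain) tuples, where
      signal is a relative value in [0, 1] from the source,
      reference_nits is what the source claims 1.0 should mean, and
      gain is a user/policy adjustment, like a volume slider.
    """
    frame = []
    for signal, reference_nits, gain in streams:
        absolute = signal * reference_nits * gain
        frame.append(min(absolute, display_peak_nits))
    return frame

# An SDR app (reference white 203 nits) next to an HDR video
# mastered for 1000 nits, with the HDR stream turned down a bit:
print(composite([(1.0, 203.0, 1.0), (1.0, 1000.0, 0.8)]))
# → [203.0, 800.0]
```

Each stream lands at its own intended brightness instead of both being stretched to the display's full range, which is exactly why the labeling has to exist in the first place.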
In single mode where it’s just SDR, it’s mapped to take the full range of the display up to whatever is deemed a comfortable cap.
In a mixed HDR/SDR mode, the H is the range above the S, so it doesn't make sense to scale it up. I prefer Apple's terminology of Extended Dynamic Range because it's clearer that it's the range above the SDR range.
Now you could say that you intend for that SDR to be treated as HDR, but without extra information you don’t know what the scaling should be. Doing a naive linear scale will always look wrong.
Because Windows maps SDR content to the sRGB color space, which nobody except designers uses. Most monitors today ship with a much brighter, higher-contrast, vivid color profile by default. If you toggle your monitor to its sRGB profile, you should see colors that look very similar to SDR content in Windows' HDR mode.
I don't like it either, but there is no way for Microsoft to choose a color profile that matches what you saw before toggling HDR on, given how many monitor manufacturers are in the market. I think choosing the safest option, sRGB, is understandable.
I have a monitor with 187% sRGB, 129% Adobe RGB and 133% DCI-P3 gamut volume. But to get correct sRGB colors with maximum coverage on this monitor, I need to clamp the ICC profile via novideo_srgb. Without it, sRGB content looks oversaturated in the orange spectrum.
It's more like SDR goes from 0 to 255 and HDR goes from 0 to 1023. In SDR mode, 255 = (say) 500 nits, while in HDR mode 1023 = 1000 nits, and thus 255 ≈ 250 nits, so SDR content looks dimmer.
SDR goes from 1-100. HDR goes from 0.01 to 100. Twice as many orders of magnitude difference from bottom to top. So if you peg the top to max brightness in both cases, the HDR looks brighter because the contrast is bigger.
(Note that this is an analogy. In other words, it's wrong, but a way of looking at it)
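Putting some numbers on the sibling comment's analogy (the nit values and the linear mapping are illustrative only; real displays use the PQ transfer curve, not a straight line):

```python
SDR_MAX_CODE, HDR_MAX_CODE = 255, 1023
SDR_MODE_PEAK_NITS = 500.0    # what code 255 means in SDR mode
HDR_MODE_PEAK_NITS = 1000.0   # what code 1023 means in HDR mode

def nits_in_sdr_mode(code):
    # SDR mode: full code range stretched to the SDR peak
    return code / SDR_MAX_CODE * SDR_MODE_PEAK_NITS

def nits_in_hdr_mode(code):
    # HDR mode: same code now sits low in a much taller range
    # (naive linear mapping for illustration)
    return code / HDR_MAX_CODE * HDR_MODE_PEAK_NITS

# The same full-white SDR pixel:
print(nits_in_sdr_mode(255))   # 500.0 nits in SDR mode
print(nits_in_hdr_mode(255))   # ~249 nits in HDR mode: looks dingy
```

So nothing is being "moved down" explicitly; SDR white just stops being pegged to the display's peak once the mode switch reinterprets what the top of the range means.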
I am super excited for this, Wayland seems extremely promising. I'll probably try Fedora out again soon - my last experience with Wayland was GNOME and it was super nice.
> versus depending on Microsoft to have the good will to figure it out on their own dime.
Cutler stated in a recent interview that he had a 96-core machine as one of his two daily drivers. I wondered at the time if he pre-announced that CPU since the press about it seemed to reach me a few days later.
Epic Games runs mostly Windows Threadrippers for Unreal Engine development. Compiling Unreal faster (or anything really, even Windows itself) is a compelling argument.