The specular highlights on faces definitely look wrong to me, though I struggle to articulate why. Shadows and diffuse lighting are a different story. Look at how it completely deletes the shadow of the steeple on the right-hand side [1], or how it eliminates the shadows on this guy's face and jacket [2]. Overcast lighting is an easy cheat for hyper-realism [3], and almost every scene shown has softened or absent shadows and more diffuse light.
As an aside, I'm starting to wonder if they're modifying engine settings when switching it on and off. There's clearly some amount of temporal accumulation it has to do, and it's impossible to frame-by-frame a video of a monitor, but in [1] the first frame snaps from a dynamic shadow of the steeple to a generic small blob shadow, which is then entirely eliminated on the next frame.
Hmm, I do see the shadows being removed in the links you posted, and I've noticed the backgrounds look like they're lit differently from the original, but I was wondering if that's just because the AI lights things differently? They did say these AI effects are done with the actual 3D assets themselves and aren't just some type of filter run over the existing images, so I could see how the lighting could change quite a bit.
Yeah, maybe the fact that they're lit differently from the original is what's turning people off. Understandable. For me, I still find it impressive, and think the level of detail in the faces and clothing is a full step up in capability.
> they did say that these AI effects are done with the actual 3d assets themselves and is not just some type of filter that run over the existing images
That was essentially just Jensen Huang lying during his Q&A. DLSS 5 uses the same input data as earlier DLSS versions: screen-space color data and motion vectors. From NVIDIA's announcement: "DLSS 5 takes a game's color and motion vectors for each frame as input, and uses an AI model to infuse the scene with photoreal lighting and materials that are anchored to source 3D content and consistent from frame to frame."
I agree, every shot has something to like, especially in fine details, but I question the feasibility of fixing the issues while running the model on a consumer GPU in realtime. Getting similar improvements without falling back to diffuse lighting would require the model to infer a huge amount of information about off-screen light sources and objects. I'm much more excited about putting my tensor cores and vram towards neural textures since they can actually add detail at the geometry level.
Hmm, I actually heard on a podcast that this works with the 3D assets, and even the statement you quoted says "anchored to source 3D content." Although that could mean a lot of things, and it's still early on, so it could just be a pass at the end by an AI model. Yeah, I'll stay on the fence until more details are released. And I should mention, I'm no graphics expert, and am only giving my opinion as a fan of good graphics on what the results look like :)
[1] https://youtu.be/4ZlwTtgbgVA?t=435, [2] https://youtu.be/4ZlwTtgbgVA?t=326, [3] Cyberpunk hyper-realism mod: https://www.youtube.com/watch?v=_toA8lErAHg