Hacker News | new | past | comments | ask | show | jobs | submit | login

I can't imagine cloud gaming will be viable for VR anytime soon. It's far too sensitive to latency.


An alliance like this is a decades-long proposition.


Recently I saw a talk where John Carmack said he is working with camera manufacturers to find ways to reduce firmware overhead on the image coming out of the sensor for VR/AR applications to reduce latency. A lot can happen in a few decades, but this is a very hard problem to solve.


It's possible.

The trick is to render the pictures using the latest position of the headset. Include depth maps to make the rendering more accurate. Use ML to in-paint the gaps.
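A minimal sketch of the depth-based step described above: forward-warp each pixel of the server-rendered frame to the headset's latest pose using the per-pixel depth, leaving disoccluded pixels as holes for a later in-painting pass. Everything here (the intrinsics `K`, the `delta_pose` transform, the hole marker) is an illustrative assumption, not anything from the thread.

```python
import numpy as np

def reproject(depth, rgb, K, delta_pose):
    """Forward-warp an RGB frame rendered at the old head pose to a new
    pose, using per-pixel depth. Disocclusion gaps are left as zeros,
    to be filled by a separate in-painting step.

    depth:      (H, W) depth per pixel (old camera frame)
    rgb:        (H, W, 3) rendered colors
    K:          3x3 camera intrinsics (assumed known)
    delta_pose: 4x4 rigid transform, old camera frame -> new camera frame
    """
    h, w = depth.shape
    # Pixel grid in homogeneous coordinates, row-major to match rgb.
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).astype(float)
    # Back-project to 3D points in the old camera frame.
    pts = (np.linalg.inv(K) @ pix.T) * depth.reshape(-1)
    pts_h = np.vstack([pts, np.ones(pts.shape[1])])
    # Move points into the new camera frame and project back to pixels.
    proj = K @ (delta_pose @ pts_h)[:3]
    uv = (proj[:2] / proj[2]).round().astype(int)
    out = np.zeros_like(rgb)  # zeros mark holes for in-painting
    ok = (uv[0] >= 0) & (uv[0] < w) & (uv[1] >= 0) & (uv[1] < h)
    out[uv[1][ok], uv[0][ok]] = rgb.reshape(-1, rgb.shape[-1])[ok]
    return out
```

With an identity `delta_pose` the warp is a no-op, which is a convenient sanity check; real ML-based in-painting of the zero-marked holes is out of scope here.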


But the rendering happens at the server. You'd still need a local GPU to do reprojection for the headset.


Yes, but you need a GPU to decode 4k video anyways.


Reprojection for VR is typically applied to the most recently rendered frame, so the additional camera movement it has to account for spans only 1/60th to 1/120th of a second. Here you'd be trying to compensate for 30+ ms, so the result is going to be a lot lower in quality.

Also, for a moving scene you'd need to send a depth (or position) buffer and a velocity buffer with each frame, meaning something like twice the bandwidth of the video alone. Probably more, since I can't imagine how you would compress that information well: any compression artifacts are going to give weird results in the reprojected image.
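A back-of-envelope check on why the larger latency window hurts: the angular gap the reprojection must hide grows linearly with latency. The ~100 degrees/second head-turn rate below is an assumed illustrative figure, not from the thread.

```python
# Assumed: a quick head turn of ~100 deg/s (illustrative, not measured).
HEAD_TURN_DEG_PER_S = 100.0

def angular_gap_deg(latency_s):
    """Degrees the view direction drifts during the given latency."""
    return HEAD_TURN_DEG_PER_S * latency_s

local = angular_gap_deg(1 / 120)  # local reprojection window (~8 ms)
cloud = angular_gap_deg(0.030)    # 30 ms network round trip
print(f"local: {local:.2f} deg, cloud: {cloud:.2f} deg")
# prints "local: 0.83 deg, cloud: 3.00 deg"
```

Roughly a 4x larger angular gap to in-paint, before adding any motion of objects in the scene.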



