Hacker News

a) Deep Learning Super Sampling (DLSS) allows upscaling to 8K at relatively low rendering cost.
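To put a number on "low cost": Nvidia's DLSS "Ultra Performance" mode upscales 3x per axis, which is the mode marketed for 8K output. A quick sketch of the arithmetic (the exact internal resolution varies by title and settings):

```python
# DLSS-style upscaling arithmetic: shade few pixels internally, output many.
# The 3x-per-axis factor matches DLSS "Ultra Performance" (marketed for 8K).
TARGET = (7680, 4320)  # 8K output resolution
SCALE = 3              # upscale factor per axis
internal = (TARGET[0] // SCALE, TARGET[1] // SCALE)  # internal render resolution
# Ratio of output pixels to internally shaded pixels:
shading_work_ratio = (TARGET[0] * TARGET[1]) / (internal[0] * internal[1])
```

So the GPU shades roughly 1/9th of the output pixels (a 2560x1440 internal render), which is why 8K output becomes plausible at all.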

b) VR headsets use variable-rate shading (foveated rendering) because visual acuity is much lower in peripheral vision.
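The idea can be sketched as a lookup from eccentricity (angular distance from the gaze point) to a coarse-shading block size. The tier boundaries and rates below are illustrative, not taken from any real headset:

```python
# Toy foveated-shading lookup: shading rate drops with distance from gaze.
# Tier boundaries (10 and 30 degrees) and block sizes are made up for illustration.
def shading_rate(eccentricity_deg):
    """Return the (w, h) block of pixels covered by one shading sample."""
    if eccentricity_deg < 10:    # fovea: full rate, one sample per pixel
        return (1, 1)
    elif eccentricity_deg < 30:  # mid-periphery: one sample per 2x2 block
        return (2, 2)
    else:                        # far periphery: one sample per 4x4 block
        return (4, 4)
```

Real implementations (e.g. Vulkan's fragment shading rate or hardware eye-tracked foveation) work on the same principle: spend fragment-shader invocations where the eye can actually resolve detail.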

c) VR rendering typically "shares" a significant amount of the work between the two viewports. E.g., one set of "commands" is submitted once and rendered simultaneously into two buffers with different view transforms. Textures and meshes are cached once and used for both eyes, so the bandwidth requirements aren't actually doubled.
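A minimal sketch of that single-submission, two-view-transforms idea (function names and the head pose here are hypothetical; real APIs do this via multiview extensions):

```python
# Toy single-pass stereo: the draw list is built ONCE, then each eye only
# differs by a view transform (here simplified to an eye-position offset).
IPD = 0.064  # interpupillary distance in meters (a typical adult value)

def eye_view_offsets(center_pos, ipd=IPD):
    """Return left/right eye positions offset along the head's x-axis."""
    x, y, z = center_pos
    half = ipd / 2.0
    return (x - half, y, z), (x + half, y, z)

def render_stereo(draw_calls, center_pos):
    """Pair the SAME draw list with each eye's view; no per-eye scene rebuild."""
    left, right = eye_view_offsets(center_pos)
    return {
        "left":  [(call, left) for call in draw_calls],
        "right": [(call, right) for call in draw_calls],
    }
```

The point is that scene traversal, culling, and resource uploads happen once; only the per-eye view transform (and the rasterized output) is duplicated.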

d) Display stream compression (DSC) and similar technologies would work well for VR because the viewport is always in motion at a high refresh rate. One could even imagine sending an H.265-compressed stream wirelessly at a mere gigabit, which is a fantastically high bitrate for video but well within current WiFi capabilities.
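Back-of-envelope arithmetic on how much compression that gigabit link actually needs, assuming a 4K-class panel at 90 Hz with 8-bit RGB:

```python
# Bitrate check for the "H.265 over ~1 Gbit/s WiFi" claim.
w, h, fps, bpp = 3840, 2160, 90, 24   # 4K-class panel, 90 Hz, 24 bits/pixel
raw_bps = w * h * fps * bpp           # uncompressed video bitrate in bit/s
link_bps = 1e9                        # the hypothetical 1 Gbit/s wireless link
compression_needed = raw_bps / link_bps
```

That comes out to roughly 17.9 Gbit/s uncompressed, so only about an 18:1 compression ratio is needed to fit the link; H.265 routinely achieves far more than that on natural video.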

e) There will be future developments as well; we're not stuck with current technology. Keep in mind that current-era flagship GPUs are manufactured on silicon processes that are about three generations old! By the time this VR kit hits the mainstream market, GPUs could be manufactured on a 3 nm TSMC process and easily put out 90 fps at 8K resolution.
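For scale, the raw pixel throughput "8K at 90 fps" demands, and how much of it an upscaling approach like (a) would actually have to shade (assuming a 3x-per-axis upscale; the internal resolution is an assumption):

```python
# Pixel-throughput arithmetic behind the "90 fps at 8K" target.
def gpix_per_s(w, h, fps):
    """Pixels shaded per second, in gigapixels."""
    return w * h * fps / 1e9

native_8k = gpix_per_s(7680, 4320, 90)  # shading every output pixel natively
upscaled  = gpix_per_s(2560, 1440, 90)  # internal render with 3x-per-axis upscale
```

Native 8K90 is about 3 Gpix/s of shading; with upscaling the internal load drops to roughly a third of a gigapixel per second, which is comfortably in reach of current hardware, let alone a 3 nm part.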



Regarding (d): if you have a Quest or Quest 2, there is no need to imagine; this is how WiFi + Virtual Desktop already works for PCVR connectivity, and it's excellent. (Not sure if it tops out at a gigabit, but it's much more than enough for a great experience at the Quest 2's near-4K resolution.)


Especially if Apple keeps pushing their custom GPUs (which are already at 3 nm). The current M1 GPU sits somewhere between a GTX 1050 and a 1070, so it's still a few generations behind. With a bit more focus on the GPU part (and a slightly larger power budget), they might be able to pull it off.



