Nvidia researchers post findings on how to boost ray tracing efficiency on future GPUs

Nvidia put on a real show back in 2018 when it first showed off real-time ray tracing on its RTX 20 Series graphics cards, but it didn’t take long for most gamers to realize the tech was ahead of its time in terms of usability. That’s why it’s exciting to hear that the company is making more progress in improving the efficiency of ray tracing on GPUs for future generations of hardware. As spotted by @0X22H on Twitter and reported by Tom’s Hardware, an Nvidia research team aided by Sana Damani of the Georgia Institute of Technology recently published its findings on the topic. As it happens, the researchers settled on quite the abstract name for the technique: introducing “Subwarp Interleaving.”

The publication is, unsurprisingly, very technical and delves into levels of GPU architecture we’re not even going to attempt to fully explain. Don’t take our word for it though; here’s an excerpt from just the introduction: “Subwarp Interleaving exploits thread divergence to hide pipeline stalls in divergent sections of low warp occupancy workloads. Subwarp Interleaving allows for fine-grained interleaved execution of diverged paths within a warp with the goal of increasing hardware utilization and reducing warp latency.” You can of course read the paper for yourself for better context, and it will make more sense than the excerpt alone. Still, good luck with that.
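To make that quoted idea a bit more concrete, here is a minimal CUDA sketch of the problem the paper targets: a warp whose threads take different sides of a branch. On current hardware the diverged paths are executed one after the other, so the cheap path sits idle while the expensive path stalls; Subwarp Interleaving, as the paper describes it, would let the hardware interleave those diverged paths to hide the stalls. The kernel name, loop counts, and values below are our own illustration, not code from the paper, and the actual technique is a hardware-level change, not something you write in a kernel.

```cuda
#include <cstdio>

// Illustrative kernel with a divergent branch inside each warp.
// Odd-indexed threads take the "hit" path, even-indexed ones the "miss" path.
__global__ void divergent_shade(const int *hit, float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    if (hit[i]) {
        // "Hit" path: stand-in for an expensive shading routine whose
        // long-latency work stalls the warp.
        float acc = 0.0f;
        for (int k = 0; k < 64; ++k)
            acc += sinf(out[i] + k);
        out[i] = acc;
    } else {
        // "Miss" path: a cheap background value. On today's GPUs these
        // threads wait while the hit path runs; the paper's technique aims
        // to fill those bubbles with work from the other diverged path.
        out[i] = 0.1f;
    }
}

int main()
{
    const int n = 1024;
    int *hit;
    float *out;
    cudaMallocManaged(&hit, n * sizeof(int));
    cudaMallocManaged(&out, n * sizeof(float));
    for (int i = 0; i < n; ++i) { hit[i] = i % 2; out[i] = 0.5f; }

    // Alternating hit/miss flags guarantee divergence within every warp.
    divergent_shade<<<(n + 255) / 256, 256>>>(hit, out, n);
    cudaDeviceSynchronize();
    printf("out[0]=%f out[1]=%f\n", out[0], out[1]);

    cudaFree(hit);
    cudaFree(out);
    return 0;
}
```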

Here’s the takeaway we gathered from it. This new “technique,” as the paper refers to it, improves real-time ray tracing efficiency by an average of 6.8%, with best-case results of up to 20%. That’s not phenomenal when considering the massive performance impact of toggling RTX on, but it is progress to be sure, and it will matter as hardware performance improves overall. Nvidia seems determined to push on with ray tracing in games, and the results can be visually impressive.

Wondering when we could see this in products?

The paper goes on to note that the technique requires changes at the architectural level of the hardware to work. This means gamers won’t be able to reap the benefits of improved ray tracing performance through an Nvidia driver update, but the option could be on the table for unreleased products. Seeing as Nvidia only published this as an academic effort, we shouldn’t expect anything in the immediate future.

The good news, though, is that as Nvidia continues to add more RT cores, Tensor cores, and other improvements to its graphics cards, this technique will further push real-time ray tracing toward becoming a practical technology in mainstream games.

Will we see more support for fully ray-traced lighting soon? Probably not from most developers, but ray-traced shadows are less intensive to run and do have worthwhile visual payoffs. We just might see more of that in the near future, especially since it could save on development costs by allowing developers to forgo the manual process of creating prebaked shadows. Better visuals for less effort and at a minor performance cost? Ray tracing is sure to find more widespread adoption sooner or later.

If you’re curious about more visual and performance-related technologies coming to gaming in the foreseeable future, check out our CES 2022 coverage. Nvidia showed off DLDSR, an interesting spin that applies deep learning to downscaling from a higher rendering resolution, while AMD revealed a universal image upscaler called Radeon Super Resolution that works with all RDNA graphics cards.
