Nvidia and Microsoft announced on Thursday that they would be adding neural shading support to the Microsoft DirectX preview this April. Neural shading will use cooperative vectors and Nvidia's Tensor cores (matrix math units) to speed up graphics rendering in games that support the technology. It will enable the generic use, via HLSL (High-Level Shading Language), of traditional rendering techniques alongside AI enhancements.
While real-time computer graphics and graphics processing units (GPUs) have come a long way, the graphics rendering pipeline itself has evolved more slowly than the hardware. In particular, while Nvidia's GPUs have featured Tensor cores (primarily aimed at AI compute) for over seven years now, so far they have only been used for features like upscaling (Nvidia's DLSS), denoising and ray reconstruction (DLSS 3.5), and frame generation (DLSS 3 and later).
This is set to change with so-called neural rendering, a broad term that describes a real-time graphics rendering pipeline enhanced with new methods and capabilities enabled by AI.
A specific subset of neural rendering focused on enhancing the shading process in graphics is called neural shading. Its main purpose is to improve the appearance of materials, lighting, shadows, and textures by integrating AI into the shading stage of the graphics pipeline. The addition of cooperative vectors — which let small neural networks run in different shader stages, like within a pixel shader, without monopolizing the GPU — is a key enabler for neural shading.
Cooperative vectors rely on matrix-vector multiplication, so they need specialized hardware, such as Nvidia's Tensor cores, to operate. They are not limited to Nvidia, though: Intel's XMX engines can run them as well. Intel also released a statement saying cooperative vector support will be provided on Arc A- and B-series dedicated GPUs as well as the built-in Arc GPUs found in Core Ultra (Series 2) processors; in other words, every GPU from Intel that includes XMX support.
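To make the idea concrete, here is a minimal sketch in plain Python/NumPy (not DirectX or HLSL code) of what a tiny per-pixel neural network looks like. Each layer is a matrix-vector multiply plus a bias, which is exactly the operation cooperative vectors hand off to matrix units such as Nvidia's Tensor cores or Intel's XMX engines. The layer sizes, inputs, and activation below are illustrative assumptions, not part of the announced API.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative tiny 2-layer network: 8 input features per pixel
# (e.g. UVs, normal, view direction) -> 16 hidden units -> RGB output.
# Sizes and fp16 precision are assumptions for the sketch.
W1 = rng.standard_normal((16, 8)).astype(np.float16)
b1 = np.zeros(16, dtype=np.float16)
W2 = rng.standard_normal((3, 16)).astype(np.float16)
b2 = np.zeros(3, dtype=np.float16)

def neural_shade(features: np.ndarray) -> np.ndarray:
    """Evaluate the tiny network for a single pixel.

    Each line is a matrix-vector multiply plus bias; in a real neural
    shader this work would run inside a shader stage (such as a pixel
    shader) and be accelerated by the GPU's matrix hardware.
    """
    hidden = np.maximum(W1 @ features + b1, 0)  # hidden layer with ReLU
    return W2 @ hidden + b2                     # output color (RGB)

# One pixel's input features (placeholder values).
pixel_features = rng.standard_normal(8).astype(np.float16)
print(neural_shade(pixel_features))
```

In a game, this evaluation would repeat for millions of pixels per frame, which is why offloading the matrix-vector math to dedicated units instead of the regular shader ALUs matters.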