AMD and Sony have jointly announced three core innovations for the next-generation RDNA architecture - Neural Array, Radiance Core, and Universal Compression - and confirmed that these technologies will appear in future GPUs and custom SoCs. As the gaming industry pushes ray tracing, AI rendering, and path tracing forward in parallel, the three additions are expected to redefine how GPUs work and to form the basis of future PlayStation hardware and high-end Radeon graphics cards.

First up is the Radiance Core, a dedicated ray traversal hardware module integrated into the next-generation RDNA architecture that handles the most computationally demanding part of path tracing. Ray tracing requires the GPU to perform two kinds of work at once: geometric calculations, which determine where rays intersect the millions of triangles in a scene, and traditional shading operations such as lighting, reflections, and texture sampling. Until now, both have relied on the same general-purpose shader units, creating contention for resources and performance bottlenecks. The Radiance Core takes over the ray traversal logic on a separate hardware-accelerated path, allowing the GPU's main compute units to focus on shading, which should dramatically improve real-time ray tracing and path tracing performance (a rough sketch of this split follows below). According to Jack Huynh, senior vice president at AMD, the architecture is regarded internally as a "whole new rendering pipeline" that handles light transport in a cleaner, more efficient way. Mark Cerny, lead architect of the PS5, added that the Radiance Core gives developers, for the first time, a flexible and extensible hardware-level ray tracing solution, and that dynamic lighting and global illumination in games will move closer to cinematic visuals.

The Neural Array, introduced alongside the Radiance Core, is another forward-looking design. Whereas the compute units (CUs) in traditional GPUs typically operate independently, the Neural Array lets them share data and cooperate at a low level, operating as a single AI engine. AMD describes the result as a "distributed machine learning engine" inside the GPU that improves inference speed and resource utilization. The goal, AMD says, is to integrate machine learning directly into the GPU rendering pipeline, giving future versions of FSR (FidelityFX Super Resolution) and the Redstone upgrades more headroom for efficiency and image quality. Traditional compute units have struggled to meet real-time performance targets as model sizes grow and frame-generation algorithms become more complex; through its shared architecture, the Neural Array can handle more demanding AI rendering tasks at the same power consumption. According to Cerny, this will reshape neural rendering, allowing future upscaling, denoising, and ray reconstruction techniques to run in real time on the GPU. AMD added that the Neural Array is also the foundation of FSR Redstone, which will support Neural Radiance Caching and AI-accelerated frame interpolation on next-generation GPUs.
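To make the Radiance Core's division of labor concrete, here is a minimal CPU-side sketch of a single path-tracing bounce written as two decoupled stages, traversal and shading. Every type and function name here is a hypothetical stand-in for illustration; the announcement describes hardware, not a programming interface, and a real implementation would walk a BVH on dedicated silicon rather than in C++.

```cpp
// Conceptual sketch only: illustrates the split between ray traversal and
// shading that a dedicated traversal unit (as AMD describes the Radiance
// Core) would take off the general-purpose shader cores. All types and
// functions are hypothetical stand-ins, not an AMD API.
#include <cstdint>
#include <vector>

struct Ray      { float origin[3]; float dir[3]; };
struct Hit      { uint32_t triangle; float t; bool valid; };
struct Radiance { float rgb[3]; };

// Stage 1: pure traversal work - walking the BVH and intersecting triangles.
// On current GPUs this competes with shading for the same compute units;
// the announcement describes moving it onto dedicated hardware.
Hit traverse_bvh(const Ray& ray) {
    (void)ray;
    Hit hit{};            // placeholder: real traversal walks a BVH here
    hit.valid = false;
    return hit;
}

// Stage 2: shading work - lighting, reflections, texture sampling.
// This is what the general-purpose compute units would keep doing.
Radiance shade(const Ray& ray, const Hit& hit) {
    (void)ray; (void)hit;
    return Radiance{{0.f, 0.f, 0.f}};  // placeholder shading
}

// A single path-tracing bounce expressed as the two decoupled stages.
std::vector<Radiance> trace_bounce(const std::vector<Ray>& rays) {
    std::vector<Radiance> out;
    out.reserve(rays.size());
    for (const Ray& r : rays) {
        Hit h = traverse_bvh(r);     // would run on the dedicated traversal unit
        out.push_back(shade(r, h));  // stays on the shader cores
    }
    return out;
}

int main() {
    std::vector<Ray> rays(4);
    return trace_bounce(rays).size() == rays.size() ? 0 : 1;
}
```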
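The Neural Array idea can likewise be pictured as several workers cooperating on one inference task instead of each running its own. The sketch below splits a small matrix-vector product, the core operation of a neural-network layer, across threads that all read the same shared input; the thread count, sizes, and values are arbitrary assumptions used only to illustrate the shared-data, single-engine concept, not AMD's hardware or API.

```cpp
// Conceptual sketch only: several "compute units" (here, threads) cooperate
// on one matrix-vector product, each producing a slice of the output while
// reading the same shared weights and input. Illustrates the "operate as a
// single AI engine" idea, not the Neural Array hardware itself.
#include <cstdio>
#include <thread>
#include <vector>

int main() {
    const int units = 4;           // stand-in for cooperating compute units
    const int rows = 8, cols = 8;  // a tiny "layer" for illustration

    std::vector<float> weights(rows * cols, 0.5f);  // shared weights
    std::vector<float> input(cols, 1.0f);           // shared activations
    std::vector<float> output(rows, 0.0f);

    // Each "unit" computes a contiguous slice of output rows, reading the
    // same shared input rather than keeping a private copy.
    auto worker = [&](int unit) {
        const int per_unit = rows / units;
        for (int r = unit * per_unit; r < (unit + 1) * per_unit; ++r) {
            float acc = 0.0f;
            for (int c = 0; c < cols; ++c)
                acc += weights[r * cols + c] * input[c];
            output[r] = acc;
        }
    };

    std::vector<std::thread> pool;
    for (int u = 0; u < units; ++u) pool.emplace_back(worker, u);
    for (auto& t : pool) t.join();

    std::printf("output[0] = %.1f (expected %.1f)\n", output[0], 0.5f * cols);
    return 0;
}
```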

The third technology, Universal Compression, focuses on improving bandwidth utilization. As resolution and texture complexity continue to rise, memory bandwidth has become a critical limit on GPU performance. AMD's Universal Compression logic evaluates every type of data stream inside the GPU, including textures, geometry, and intermediate buffers, and dynamically selects the best compression method for each, sharply reducing bandwidth usage without compromising image quality. This hardware-level module is said to cut data transfers by up to 30 percent, which translates into higher performance and lower power consumption. For players, that means faster texture loading and smoother frame rates; for developers, it means more complex scenes can be realized within a limited memory bandwidth budget (a back-of-the-envelope sketch of the idea appears at the end of this article).

Together, the three technologies outline the direction of AMD's next-generation RDNA architecture - deep integration of specialized hardware modules with AI compute to transform the GPU from a traditional renderer into an intelligent compute platform. The Radiance Core delivers path-tracing-class ray accuracy, the Neural Array gives the GPU real-time machine learning capability, and Universal Compression supplies the data efficiency all of these computations depend on. They will not only reshape the performance landscape for Radeon cards but also serve as key components of the graphics units in future custom SoCs such as the PlayStation 6. AMD has not announced a specific release date, but these innovations are widely expected to debut in the next-generation RDNA architecture powering high-end graphics cards and next-generation console chips. Given that RDNA 4 has already delivered significant gains in ray-tracing efficiency, the next-generation architecture could reach the consumer market around 2026. By then, Neural Array-driven upscaling and Radiance Core-accelerated ray rendering could take gaming graphics into a new generation.
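As a back-of-the-envelope illustration of how per-stream compression translates into bandwidth savings, the sketch below assigns an assumed compression ratio to each kind of GPU data stream and totals the result. The stream sizes and ratios are invented for the example; only the structure of the calculation reflects the announcement, and with these made-up numbers the savings land near the quoted "up to 30 percent".

```cpp
// Conceptual sketch only: picks an assumed compression ratio per data stream
// (texture, geometry, intermediate buffers) and estimates the bandwidth
// saved. The ratios and sizes are illustrative assumptions, not measurements
// of AMD's Universal Compression.
#include <cstdio>
#include <string>
#include <vector>

struct Stream { std::string kind; double megabytes; };

// Hypothetical per-stream compression ratio chosen by the hardware.
double assumed_ratio(const std::string& kind) {
    if (kind == "texture")  return 0.65;  // e.g. block-compressed texels
    if (kind == "geometry") return 0.75;  // e.g. quantized vertex data
    return 0.85;                          // intermediate render targets etc.
}

int main() {
    std::vector<Stream> frame = {
        {"texture", 900.0}, {"geometry", 300.0}, {"intermediate", 400.0}};

    double raw = 0.0, compressed = 0.0;
    for (const auto& s : frame) {
        raw        += s.megabytes;
        compressed += s.megabytes * assumed_ratio(s.kind);
    }
    // With these made-up ratios, transfers drop by roughly 28 percent, in the
    // same ballpark as the "up to 30 percent" figure in the announcement.
    std::printf("raw %.0f MB, compressed %.0f MB, saved %.1f%%\n",
                raw, compressed, 100.0 * (1.0 - compressed / raw));
    return 0;
}
```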