Hybrid Rendering

Hybrid Rendering Technology for Ray Tracing

The advent of ray-traced content authoring for current and upcoming platforms is creating an opportunity for all platform developers to adopt this new technology. It requires many platforms to upgrade their GPUs from raster graphics technology, which only draws triangles and shades them with artist-authored content, to full 3D modeling with automatic shading driven by ray-traced lighting. This is an exciting step forward for graphics technology in general, and it provides a simplified, screen-independent approach to generating content. One key advantage of ray tracing that is not fully recognized is the benefit to content developers: it removes the need for artistic shading libraries and engines that paint in approximations of what natural lighting might produce. Instead, by maintaining an accurate 3D model and materials, the lighting result is automatic; it demands a great deal of computation, but it is deterministic, using ray tracing algorithms to generate a naturally lit scene without any input from a graphics artist.

Content developers cannot make this transition all at once, since it would require a graphics processing unit (GPU) that supports ray tracing for everyone viewing their content. The majority of legacy content requires raster graphics support, which means ray tracing has to be added alongside existing raster capabilities for ray-traced content to be accepted in the market. To meet this requirement, the industry has settled on a hybrid approach that combines ray tracing with existing raster graphics solutions. A hybrid rendering architecture offers backwards compatibility while ray tracing features are added to the platform. Hybrid rendering can be performed in the cloud, on the client, or in a combination of both; the partitioning leverages common graphics API support and depends on the implementation approach. Other hybrid techniques include partial ray-traced rendering for those portions of a scene that require, or can truly benefit from, advanced lighting, mixing raster and ray-traced rendering for content, sometimes within a single scene. Another approach, for fully ray-traced content, is to reduce the resolution and frame rate of the ray-traced render to match the client's ray tracing performance, then restore higher resolution using spatial interpolation with AI-based filters (such as GANs) and/or restore the frame rate using temporal interpolation with motion estimation.
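To make the mixed raster/ray-tracing idea concrete, here is a minimal sketch of per-object pass partitioning. The object model (`SceneObject`, the `reflective`/`refractive` flags) is hypothetical, not SiliconArt's API: objects whose materials genuinely benefit from accurate light transport are routed to the ray-tracing pass, and everything else stays on the legacy raster path.

```python
from dataclasses import dataclass

@dataclass
class SceneObject:
    name: str
    reflective: bool = False
    refractive: bool = False

def partition_hybrid(objects):
    """Split a scene into raster and ray-traced work lists."""
    raster, ray_traced = [], []
    for obj in objects:
        if obj.reflective or obj.refractive:
            ray_traced.append(obj)   # needs accurate light transport
        else:
            raster.append(obj)       # legacy raster shading suffices
    return raster, ray_traced

scene = [SceneObject("floor"), SceneObject("mirror", reflective=True),
         SceneObject("glass", refractive=True), SceneObject("wall")]
raster, rt = partition_hybrid(scene)
print([o.name for o in raster])  # objects for the raster pass
print([o.name for o in rt])      # objects for the ray-tracing pass
```

A real engine would partition on material properties, screen coverage, or a per-frame ray budget rather than simple boolean flags, but the composition step is the same: both passes write into one frame.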

Many platforms commonly in use today lack the graphics performance, screen capabilities, or computing performance to produce these kinds of photorealistic images. The advantage of cloud computing is that scene complexity is not limited by the host device, since the cloud can store and manage the details of the 3D virtual world being modeled. The cloud can also provide nearly unlimited computational performance, limited only by the business model's compute and data transmission costs to the client. In fact, several companies are already deploying cloud computing to deliver ray tracing performance.

There are challenges in delivering high-performance graphics for gaming applications from the cloud, particularly for twitch games that demand low latency. An interesting solution to the latency problem for a cloud-based gaming application is to break the game down into static and dynamic scene generation. In static scene generation, the 3D objects are positioned statically relative to each other, so the overall model can be computed once as a static scene. Add dynamic 3D objects and a dynamic viewport driven by the user, and substantial computation is needed to track the moving objects and the light scattering their movement generates. Decomposing a 3D graphics scene this way yields static and dynamic scene components. The very large static model can be computed in the cloud, in parallel across cloud servers, and delivered as a composite scene to the game application. The dynamic scene components, which receive inputs from the game controller, can be computed locally by modifying the associated static model and incorporating any local movement into the calculations.
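The static/dynamic split above can be sketched as a simple compositing pipeline. All of the function names and the string-based "rendering" here are hypothetical placeholders: the static layer stands in for an expensive, parallelizable cloud render, while the dynamic layer is computed locally from controller input and composited over it each frame.

```python
def render_static_in_cloud(static_objects):
    # Stand-in for the cloud rendering the unchanging parts of the
    # 3D world, in parallel across servers, ahead of time.
    return {obj: "lit" for obj in static_objects}

def render_dynamic_locally(dynamic_objects, controller_input):
    # Stand-in for the client updating moving objects per frame,
    # driven directly by controller input to keep latency low.
    return {obj: f"lit@{controller_input}" for obj in dynamic_objects}

def composite(static_layer, dynamic_layer):
    # Dynamic results overwrite static ones where objects overlap.
    frame = dict(static_layer)
    frame.update(dynamic_layer)
    return frame

static_layer = render_static_in_cloud(["terrain", "buildings"])
dynamic_layer = render_dynamic_locally(["player", "vehicle"], "stick_left")
frame = composite(static_layer, dynamic_layer)
print(sorted(frame))
```

The design point is that only the dynamic layer sits on the controller-to-display latency path; the static layer can tolerate network round trips because it changes slowly, if at all.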

SiliconArt's technology enhances an existing raster GPU, upgrading the graphics platform to execute ray-traced content. This hybrid rendering approach preserves legacy graphics operation while using the extended Vulkan RT APIs to support ray tracing primitives.

For cloud-based ray tracing solutions, from game streaming to cloud visualization applications, SiliconArt's technology dramatically accelerates the calculation of scene viewports and the determination of which incident rays intersect the view.

See our hybrid rendering options for ray tracing

Contact us for a demonstration of hybrid rendering