Hybrid Rendering for Ray Tracing

Hybrid Rendering for the Cloud

Graphics in the cloud has been an area of increasing interest, with the largest game companies hosting virtual worlds that users subscribe to in order to enter those worlds and interact within a game's theme. The theme of a game is up to the creativity of its author, but delivering a realistic, immersive experience is often difficult for the developer or artist. Developers will reach a tipping point where they recognize that with ray tracing it is easier to create once and publish to many platforms than with raster technology, and that the end result is arguably superior to raster graphics approximations.

This leaves many product developers needing to provide a graphics processing unit (GPU) that can support ray tracing. However, there is a significant amount of legacy in the development, support, and maintenance of their existing GPU technology, not just their own, but likely an entire ecosystem of IP, tools, drivers, software, and application solutions built around their current legacy GPU technology. To address this dilemma, SiliconArts proposes a hybrid GPU architecture in which the raster GPU is retained for backwards compatibility and platform support, while ray tracing capabilities are added by incorporating the SiliconArts GPU co-processors into the existing GPU solution, extending support for ray tracing authoring while preserving legacy support and capabilities for existing applications.

Many platforms in common use today lack the graphics performance, screen capabilities, or computing performance to produce these types of photo-realistic video images. The advantage of cloud computing is that scene complexity is not limited by the host device, since the cloud can store and manage the details of the 3D virtual world being modeled. The cloud can also provide nearly unlimited computational performance, limited only by the business model's compute costs and the data transmission costs to the client. In fact, several companies already deploy cloud computing to provide ray tracing performance; notably, Amazon Games has released a game whose cloud ray tracing computations are streamed to a web browser, enabling photo-realistic gaming without a state-of-the-art GPU in the device. There are challenges to delivering high-performance graphics for gaming applications from the cloud, particularly for twitch games that demand short latencies.
There is an interesting solution to latency concerns for a cloud-based gaming application if we break the game down into static and dynamic scene generation. In static scene generation, the 3D objects are positioned fixedly relative to each other, so the overall model can be computed as a static scene. Add dynamic 3D objects and a dynamic viewport controlled by the user, however, and substantial computation is required to track the moving 3D objects and the light scattering that their movement generates.

If we look at the decomposition of a 3D graphics scene, we can divide it into static and dynamic components. The very large overall model can be computed in the cloud as a static implementation, performed in parallel across cloud servers and delivered as a composite scene to the game application. The dynamic scene components, which receive inputs from the game controller, can be computed locally by modifying the associated static model and incorporating any dynamic local movements into the calculations.
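As a minimal sketch of this decomposition, the following Python fragment splits a scene into a static layer (rendered once, cloud-side, and cached) and a dynamic layer (re-rendered locally each frame from controller input), then composes the two into a frame. All names here (`Scene`, `render_static_layer`, and so on) are illustrative placeholders, not part of any SiliconArts or RayCore API.

```python
from dataclasses import dataclass

@dataclass
class Scene:
    static_objects: list   # positioned once; ray traced in the cloud and cached
    dynamic_objects: list  # driven by controller input; rendered locally

def render_static_layer(objects):
    # Stand-in for the cloud-side pass: an expensive, parallelizable
    # ray-traced render of the unchanging geometry, computed once.
    return {"layer": "static", "objects": list(objects)}

def render_dynamic_layer(objects, controller_input):
    # Stand-in for the client-side pass: re-render only the objects
    # whose positions changed this frame, applying controller offsets.
    return {"layer": "dynamic",
            "objects": [(name, controller_input.get(name, (0, 0, 0)))
                        for name in objects]}

def compose_frame(static_layer, dynamic_layer):
    # Combine the cached cloud result with the fresh local result.
    return {"static": static_layer, "dynamic": dynamic_layer}

scene = Scene(static_objects=["terrain", "buildings"],
              dynamic_objects=["player", "vehicle"])
static_layer = render_static_layer(scene.static_objects)  # once, cloud-side
frame = compose_frame(static_layer,
                      render_dynamic_layer(scene.dynamic_objects,
                                           {"player": (1, 0, 0)}))
```

The key design point is that `static_layer` is computed once and reused across frames, so per-frame latency is governed only by the local dynamic pass and the final composition.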

See How We Speed Intel Server Ray Tracing Rendering Performance with Our RayCore

Request a Demonstration Today!