Delivering high-performance graphics for cloud-hosted games is challenging, particularly for twitch games with tight latency budgets. One promising way to attack the latency problem is to decompose the game's rendering into static and dynamic scene generation. In the static component, the 3D objects are positioned at fixed locations relative to one another, so the overall model can be computed once as a static scene. Dynamic 3D objects and a moving user viewport, by contrast, create substantial computational demands: the engine must track the motion of those objects and the changes in light scattering that the motion produces.

Viewed this way, a 3D graphics scene decomposes into static and dynamic components. The very large static model can be computed in the cloud, in parallel across servers, and delivered to the game application as a composite scene. The dynamic components, which respond to inputs from the game controller, can be computed locally by modifying the associated static model and incorporating any local movement into the calculations.
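The split described above can be sketched as a per-pixel depth compare between a cloud-rendered static layer and a locally rendered dynamic layer. This is a minimal illustrative sketch, not a real renderer: the `Layer` class, the `composite` function, and the one-scanline example data are all hypothetical names invented here, and a production system would operate on GPU depth and color buffers rather than Python lists.

```python
from dataclasses import dataclass

@dataclass
class Layer:
    # One entry per pixel; smaller depth means closer to the viewer.
    color: list
    depth: list

def composite(static_layer: Layer, dynamic_layer: Layer) -> Layer:
    """Merge a cloud-rendered static layer with a locally rendered
    dynamic layer using a per-pixel depth test (z-compare)."""
    out_color, out_depth = [], []
    for (sc, sd), (dc, dd) in zip(
        zip(static_layer.color, static_layer.depth),
        zip(dynamic_layer.color, dynamic_layer.depth),
    ):
        if dd < sd:  # local dynamic object occludes the static scene
            out_color.append(dc)
            out_depth.append(dd)
        else:        # static background shows through
            out_color.append(sc)
            out_depth.append(sd)
    return Layer(out_color, out_depth)

# Hypothetical 4-pixel scanline: "bg" arrives from the cloud as the
# static scene; "obj" is a dynamic object rendered on the local device.
static = Layer(color=["bg"] * 4, depth=[10.0] * 4)
dynamic = Layer(color=[None, "obj", "obj", None],
                depth=[float("inf"), 3.0, 3.0, float("inf")])
frame = composite(static, dynamic)
# frame.color -> ["bg", "obj", "obj", "bg"]
```

The key property for latency is that only `dynamic` must be recomputed every frame in response to controller input; the static layer can be refreshed from the cloud at a slower cadence.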