With the advent of 3D, it becomes more and more important to decide where rendering happens. One can either render on the server and stream video data to the client, or stream 3D data and render on the client.
Overview and terminology
In case you don’t know at all how Spice works, you can watch the following introduction video, courtesy of DevConf.cz:
Difference between 2D and 3D case
Like VNC or RDP before it, Spice in 2D does the rendering on the server, and then sends bits of pixel data to the client. The only function the client really needs is some kind of bit-blitting.
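The 2D model boils down to a blit: the server sends a rectangle of rendered pixels plus a destination, and the client copies it into its framebuffer. Here is a minimal sketch of that idea; the names and data layout are illustrative, not the actual Spice protocol.

```python
# Minimal sketch of the 2D "server renders, client blits" model.
# All names and the flat-list framebuffer are illustrative, not Spice's wire format.

def blit(framebuffer, width, rect_x, rect_y, rect_w, rect_h, pixels):
    """Copy a server-rendered rectangle of pixels into the client framebuffer."""
    for row in range(rect_h):
        dst = (rect_y + row) * width + rect_x
        src = row * rect_w
        framebuffer[dst:dst + rect_w] = pixels[src:src + rect_w]

# Client framebuffer: 8x4 pixels, initially black (0).
fb = [0] * (8 * 4)
# Server sends a 3x2 white (255) rectangle to be placed at (2, 1).
blit(fb, 8, 2, 1, 3, 2, [255] * 6)
```

The client never needs to understand what was drawn, only where to copy the pixels, which is why such thin clients work on almost any device.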
As illustrated in the video above, this approach does not work very well for 3D or video, so it is much better to encode a video stream and play it back on the client. This is a great solution in the sense that it is relatively easy to implement, requires very little from the client (which could be a mere mobile device), and is very faithful.
Streaming 3D data is another option, but it is unclear at this point whether it makes sense. In theory, you could reduce bandwidth by sending the bulkiest data, like textures, ahead of time, and then streaming only vertex coordinates. Textures could also be sent at multiple resolutions, starting with the lowest ones.
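A back-of-the-envelope comparison shows why this could pay off: once textures are cached on the client, each frame only needs the vertex data that changed. All the numbers below are made-up assumptions for illustration, not measurements of any real workload.

```python
# Rough, illustrative bandwidth comparison. Every number here is an assumption.

FPS = 30

# Video streaming: assume ~8 Mbit/s for a reasonable 1080p encode.
video_bits_per_sec = 8_000_000

# 3D streaming: textures already sent ahead of time, so per frame we only
# send animated vertices. Assume 1,000 of them, 3 floats (x, y, z) of
# 4 bytes each.
vertices = 1_000
vertex_bits_per_frame = vertices * 3 * 4 * 8
vertex_bits_per_sec = vertex_bits_per_frame * FPS

print(f"video:    {video_bits_per_sec / 1e6:.2f} Mbit/s")
print(f"vertices: {vertex_bits_per_sec / 1e6:.2f} Mbit/s")
```

Under these assumptions the geometry stream is a fraction of the video bandwidth, but the comparison flips quickly as the amount of animated geometry grows, which is exactly why hard data is needed.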
It is well known that the maximum bandwidth required for video can be constrained, but it's unclear whether the same can be done with 3D data, or what the effects would be in terms of image quality. Therefore, if we decide to stream some 3D data in the future, switching between 3D and video streaming should be dynamic, depending on the amount of data to send in either case.
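Such a dynamic switch could be as simple as estimating, per frame, the cost of each encoding and picking the cheaper one. The heuristic and all its numbers below are hypothetical, purely to make the idea concrete.

```python
# Hypothetical per-frame heuristic for switching between 3D and video
# streaming. The cost model and thresholds are invented for illustration.

def choose_stream(changed_vertices, video_frame_bits):
    """Pick whichever encoding has the smaller estimated frame size."""
    BITS_PER_VERTEX = 3 * 4 * 8          # x, y, z as 32-bit floats
    geometry_bits = changed_vertices * BITS_PER_VERTEX
    return "3d" if geometry_bits < video_frame_bits else "video"

# A camera-only move touches little geometry: cheap to send as 3D data.
print(choose_stream(changed_vertices=500, video_frame_bits=250_000))     # 3d
# A scene full of animated geometry: cheaper as an encoded video frame.
print(choose_stream(changed_vertices=50_000, video_frame_bits=250_000))  # video
```

The interesting (and open) part is that video frame size can be bounded by the encoder, while the geometry side cannot, which is what makes the fallback to video necessary.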
Data to justify streaming 3D
We need to collect some hard data to figure out if 3D streaming makes sense at all. If you have any such data, please share.
Two examples out there give me a good feeling about 3D streaming: World of Warcraft and Google Earth.
Both applications include an enormous amount of 3D data, which they stream dynamically to the client. However, as far as I can tell, the 3D rendering itself is still done on the client, which gives better reactivity and response times, but also, at least in theory, higher-quality graphics, with wider dynamic color ranges, sharper edges, etc.
Making 3D applications “remote aware”
So the big benefits I can see to streaming 3D are:
- Rendering quality
- Response time
However, to take full advantage of these benefits, applications have to be aware that they are being displayed remotely. Ideally, it should be possible for an application to tell the remote client: “Hey client, here is what to do if the user clicks on the right arrow.” The world as seen by the user could then spin in a way that requires only minimal network traffic.
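One way to picture this: the application registers, ahead of time, what a given input should do, and the client applies that update locally, with no round-trip to the server. Everything below is a hypothetical sketch, not an existing Spice API.

```python
# Hypothetical sketch of a "remote aware" interaction: the application
# pre-registers what a key press does, and the client runs it locally.

class RemoteView:
    def __init__(self):
        self.camera_yaw = 0.0     # degrees
        self.handlers = {}        # key name -> local update function

    def register(self, key, handler):
        """Server side: the application declares, ahead of time, what a key does."""
        self.handlers[key] = handler

    def on_key(self, key):
        """Client side: apply the pre-registered update with no network round-trip."""
        if key in self.handlers:
            self.handlers[key](self)

view = RemoteView()
# "Hey client, here is what to do if the user clicks on the right arrow."
view.register("Right", lambda v: setattr(v, "camera_yaw", v.camera_yaw + 15.0))

view.on_key("Right")
view.on_key("Right")
print(view.camera_yaw)   # 30.0
```

The network only carries the registration once and perhaps an occasional state sync, rather than a stream of rendered frames for every key press.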
Tao3D and WebGL have demonstrated that such an approach can work. Whether it can reasonably be done with APIs that don’t require a full-blown language running on the client is a bit unclear at the moment. Whether it belongs in the Spice project is another matter entirely 😉