
Saturday, February 07, 2004

Rethinking the 6-sided box

The driving problem
The real problem with 3D modeling applications of the time was that none had really given much thought to the artists using them. Using a 3D modeling package to do anything more than create a picture of a shiny ball required a high degree of specialized training in that particular application. To create complex shapes required a very complex mind that could think several steps ahead - and still often resulted in failure. Artists rarely think that far ahead when creating. I saw a need for an environment that fostered more direct manipulation of geometry.
A good example here would be adding a texture to an object. A texture is an image that's applied to the geometry of an object - like a decal. Then-current 3D applications required you to switch to Adobe Photoshop, create an image of your texture there, and come back to the 3D environment. There you would attempt to fit your image to the shape of the object - no small task. Often there was no way in hell that the texture would line up correctly with the object.
There was no reason you couldn't paint directly on your object, other than that nobody had broken the existing paradigm.
The goal of Event Horizon was to break that barrier and provide tools that artists could use.

A Starting Point
To create an application that allowed direct manipulation of 3D geometry was arguably beyond the technology of 1996. At the time, 3D hardware accelerators were just beginning to come to desktop computers. ATI had only recently started shipping the Rage/Rage II chipsets, and there wasn't much software out there to support hardware acceleration. OpenGL existed, of course, but was largely unknown in the desktop world - only high-end workstations, like those from SGI, used OpenGL.
3D applications of the day were very limited. Infini-D, RayDream, Strata, and others were, on the surface, all the same. They supported a limited number of primitives - box, sphere, cone, cylinder - and a few spline-based tools like a lathe tool. All of those applications used software rendering, not a hardware accelerator card, and all of them relied on a 3+1 view of the modeling scene to present a 3D world to the viewer. The work area was divided into 4 panes, each viewing the 3D scene from a different perspective: one a top view, another a side view, another the front, and the last usually a "camera" view or orthogonal projection. It was left to the user's brain to synthesize 3 dimensions from 4 different views. This led to a process of construction that was tedious and frustrating. Often this was done in wireframe, with no depth information; the user could never be certain whether the surface they were editing was on the front or the back of the object, leading to many lost hours and lost hair.

Soft Shadows: The allure of radiosity
One rendering technique that was in the spotlight was radiosity. Conventional ray tracing did not handle soft shadows or diffuse indirect shading at all, which is why many computer-rendered images look very artificial. Radiosity could be used to complement ray tracing by computing the soft shadows and altering the actual geometry of the scene: in places of soft shadow, the geometry was broken up into ever smaller triangles, each with a different shading value, which when rendered would give you nice soft shadows.
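
For reference, the quantity being balanced across all those little patches is the textbook radiosity equation - this is the standard graphics-text form, nothing specific to Event Horizon:

    B_i = E_i + \rho_i \sum_j F_{ij} B_j

Here B_i is the light leaving patch i, E_i the light it emits on its own, \rho_i its reflectivity, and F_{ij} the form factor describing how much of the light leaving one patch arrives at the other. Solving that system for every patch is exactly why the triangle counts balloon.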

Of course, with each surface of the scene's geometry broken up into many more triangles, you were giving the rendering pipeline much more work to do - more triangles to process. Some games got around this problem by computing the radiosity solution, pre-rendering the scene and using that as a texture on the original, simple geometry.

The rendering pipeline that Event Horizon used supported very, very high numbers of triangles for its time - in fact, more than I knew what to do with. So not only could it support the high number of triangles in a radiosity-cooked scene, in many cases it could do so in real time. At the time, radiosity and other soft shadow systems broke a barrier in the suspension of disbelief for users - with both hard (ray tracing) and soft (radiosity) shadows covered, it was far easier for users to believe that a scene was a photograph, for instance, rather than an artificial environment.

In the end, after a number of experiments variously named "Sunburn" and "Shake n Bake", radiosity was only one of the techniques used to solve soft shadows. Some of the experiments made for dramatic demos, though - things like moving light sources producing soft shadows impressed programmers. But it wasn't convincing enough for the uninitiated. It still came out looking very sterile and artificial, like a photo out of a magazine, a bad movie, or Martha Stewart. In many ways it was too perfect.

Spatial Audio: SoundSprocket
Using the Apple GameSprockets API was one of the best decisions I had made. Besides the InputSprocket layer which gave me access to consumer game input devices, the real gem of the GameSprockets was SoundSprocket. SoundSprocket was a 3D audio library that was very well thought out and implemented - the data structures for working with SoundSprocket were very similar to those used for QuickDraw3D, which made the two very easy to use together. OpenAL and OpenGL, on the other hand, are often speaking in two very different dialects of the same tongue, which makes them difficult to use together.
You would think that the designers of those two APIs would assume developers would want to use OpenAL to attach sound to OpenGL scenes, but I guess not.

At any rate, SoundSprocket let me attach sound to 3D objects, and served as a 3D audio renderer. You could pick a point in 3D space and put your sound there, or link the position of a QuickDraw3D object with a sound source. Very cool, and very useful. Spatial audio was a huge advantage for users. If you've played a game with 3D audio you can understand - in a game you can hear things sneaking up behind you, rockets whizzing by. In Event Horizon, your objects and actions had sounds attached to them that travelled in 3D space.
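
To make that concrete, here is roughly what the flow looked like. I'm reconstructing the SoundSprocket calls from memory of the old headers, so take the exact names and signatures as approximate rather than gospel:

    /* Rough sketch of tying a QuickDraw3D object's position to a SoundSprocket
       source. Names reconstructed from memory of SoundSprocket.h - approximate. */
    #include <QD3D.h>
    #include <Sound.h>
    #include <SoundSprocket.h>

    static void AttachSoundToObject(SndChannelPtr chan, const TQ3Point3D *objPosition)
    {
        SSpListenerReference  listener;
        SSpSourceReference    source;
        SSpLocalizationData   locData;

        SSpListener_New(&listener);               /* one listener, usually the camera */
        SSpSource_New(&source);                   /* one source per sounding object   */

        /* The source position is just the QuickDraw3D point of the object -
           the "same dialect" convenience mentioned above. */
        SSpSource_SetPosition(source, objPosition);

        /* Work out how the sound should be panned/filtered for this listener... */
        SSpSource_CalcLocalization(source, listener, &locData);

        /* ...and hand the result to the Sound Manager channel playing the sound. */
        SndSetInfo(chan, siSSpLocalization, &locData);
    }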

QuickTime provided a built-in synthesizer that was adequate for generating some kinds of sounds and tones on the fly (it was better with musical instruments than with raw FM tones). Generating a sound was not a big deal; generating the sound you actually wanted was trickier - it would often come out sounding like static instead. But it meant that the program could create sounds from nothing - or from the parameters of the environment itself.
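
The synthesizer in question was, as far as I can reconstruct, the QuickTime Music Architecture's built-in software synth. Playing a note through its note allocator looked roughly like this - again, the exact calls and struct fields are from memory, so treat them as approximate:

    /* Minimal sketch of coaxing a note out of QuickTime's built-in synthesizer
       through the note allocator component. From memory of the QuickTime Music
       Architecture headers - approximate, not authoritative. */
    #include <string.h>
    #include <Components.h>
    #include <QuickTimeMusic.h>

    static void PlayMiddleC(void)
    {
        NoteAllocator  na;
        NoteChannel    nc = NULL;
        NoteRequest    nr;

        na = OpenDefaultComponent(kNoteAllocatorComponentType, 0);
        if (na == NULL)
            return;

        memset(&nr, 0, sizeof(nr));
        nr.info.polyphony = 2;                    /* a couple of simultaneous notes  */
        nr.info.typicalPolyphony = 0x00010000;    /* Fixed-point 1.0                 */
        NAStuffToneDescription(na, 1, &nr.tone);  /* GM instrument 1: acoustic piano */

        NANewNoteChannel(na, &nr, &nc);
        if (nc != NULL) {
            NAPlayNote(na, nc, 60, 100);          /* middle C, velocity 100          */
            /* ...let it ring for a bit, then: */
            NAPlayNote(na, nc, 60, 0);            /* velocity 0 = note off           */
            NADisposeNoteChannel(na, nc);
        }
        CloseComponent(na);
    }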

Audio Shaders: Sound as Texture
I had read an interesting paper on "Sound As Texture" a few months before this point, and hadn't thought much of it at the time. I went searching for the article again and didn't find it until long after I had implemented audio shaders, but what I ended up with differed significantly from what the authors had described anyway, so I guess this was a good thing.
Objects in 3D environments have properties attached to them that define their appearance, called shaders. The hair on Sulley in Monsters, Inc. was a complex shader; the pattern on Nemo's scales in Finding Nemo was a simpler one. Shaders can be very simple - setting a color and a few things like how reflective or transparent your object is - or they can be complex and define how the surface responds to the environment and to the object it is attached to (very convincing clouds are often well-written complex shaders, for example). So a shader can be as simple as a color, or as involved as a program in a language of its own. The most famous language for writing shaders, RenderMan, came from Pixar in the late 1980s and is what they and the rest of the world still use today.
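To make "shader as a program" concrete, here's a toy procedural shader in plain C rather than real RenderMan - every name in it is made up for the example:

    /* Toy illustration of "a shader is a program": a hypothetical procedural
       shader that turns a surface point into a color. Not RenderMan, just a sketch. */
    #include <math.h>

    typedef struct { float r, g, b; } Color;
    typedef struct { float x, y, z; } Point3D;

    static Color CheckerShader(Point3D p, Color a, Color b, float cellSize)
    {
        /* Alternate between two colors depending on which cell of a 3D grid
           the surface point lands in - a classic procedural pattern. */
        long cell = (long)floorf(p.x / cellSize)
                  + (long)floorf(p.y / cellSize)
                  + (long)floorf(p.z / cellSize);
        return ((cell & 1) == 0) ? a : b;
    }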
Whole programs can be written in a shader language. So the idea here was to have shaders that could produce audio - primarily so that when two objects interacted in a scene, their shaders would create the appropriate sounds. A large, hollow object would make a different sound than a small, solid one when touched or struck - and the sounds would be governed by the shaders and objects interacting with each other.
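Stripped of everything else, an audio shader boiled down to something like the sketch below - these names are invented for illustration, not lifted from the real Event Horizon code:

    /* Hypothetical sketch of the audio-shader idea: map an object's physical
       properties plus the force of an interaction into synthesis parameters. */
    typedef struct {
        float size;        /* rough bounding size of the object      */
        float hollowness;  /* 0.0 = solid, 1.0 = hollow              */
        float hardness;    /* 0.0 = soft and rubbery, 1.0 = rigid    */
    } AudioShaderParams;

    typedef struct {
        float pitchHz;     /* fundamental frequency to synthesize    */
        float decaySecs;   /* how long the sound rings out           */
        float amplitude;   /* 0.0 - 1.0                              */
    } SynthNote;

    static SynthNote AudioShade(AudioShaderParams obj, float impactForce)
    {
        SynthNote note;

        /* Bigger objects ring lower; hollow objects ring longer. */
        note.pitchHz   = 2000.0f / (1.0f + obj.size);
        note.decaySecs = 0.05f + obj.hollowness * 0.8f;

        /* Harder objects and harder hits are louder (clamped at 1.0). */
        note.amplitude = obj.hardness * impactForce;
        if (note.amplitude > 1.0f)
            note.amplitude = 1.0f;

        return note;
    }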
In practice, more often than not you got rasping noises. I couldn't pin down whether it was a synthesizer problem (I wasn't as good with sound programming at that level as I thought I was) or a shader problem (the system may not have registered objects hitting each other as hard as it should have). When it worked, though, it worked well. Most of the time, however, you'd want to use a prerecorded sound rather than a programmed shader, since the results with shaders were unpredictable. There was code to support mixing the two - like putting distortion or a "whammy bar" on a recorded sound in memory from inside a shader. That was designed to support some user interaction features that never got implemented. Unfortunately, when a user shaped geometry with their "hands", the system couldn't quite manage the squeaks of a balloon animal being twisted.
