Another amazing feat from the production teams at Pixar Animation Studios in Emeryville, California, “made possible by modern technology” (William*). With processors and memory available in such large quantities, more computing can be done in less time; and since time is money and films have limited budgets, that means higher-quality films can be made at lower special-effects costs.
Simply put: How?
The film WALL·E uses a derivative of the ray tracing technique: light rays are projected from the viewpoint through the image plane into the digital environment and reflected off surfaces until they either hit a set number of bounces or reach a light source. This creates a very realistic-looking shot, with the realism roughly proportional to the number of reflections: one bounce casts shadows but doesn't produce any ambience; three looks only just plausible but will be too dark; eight would be acceptable for daytime television; and a full sixteen or more are used in motion pictures.
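The bounce loop described above can be sketched as a toy recursion. This is nothing like Pixar's actual renderer: the scene geometry is stubbed out as a hypothetical list of surface reflectivities the ray will hit in order, with `None` standing in for "reached a light source". It only shows the shape of the technique: accumulate colour per bounce until a finishing condition or the bounce limit.

```python
# Toy sketch of recursive ray bouncing (not Pixar's renderer).
# Each bounce attenuates the light by the surface's reflectivity,
# stopping at a light source or at the bounce limit.

MAX_BOUNCES = 16  # feature-film quality per the text; 1 bounce = shadows only

def trace(surfaces, bounce=0):
    """Return accumulated brightness along one ray path.

    `surfaces` is a hypothetical ordered list of reflectivities the ray
    will strike; None stands in for "the ray reached a light source".
    """
    if bounce >= MAX_BOUNCES or not surfaces:
        return 0.0                       # ran out of bounces: contributes nothing
    reflectivity = surfaces[0]
    if reflectivity is None:             # finishing condition: hit a light
        return 1.0
    # Remember this surface's contribution and keep following the ray
    return reflectivity * trace(surfaces[1:], bounce + 1)

# A ray that bounces off two 80%-reflective surfaces, then hits a light:
print(trace([0.8, 0.8, None]))  # 0.8 * 0.8 * 1.0 = 0.64
```

A ray that never finds a light within sixteen bounces contributes nothing, which is why low bounce counts come out too dark.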
As you can imagine, each bounce has to be remembered, along with the colour information of the reflecting surface and the distance to the next one, until the ray meets a finishing condition; and this has to be done for every pixel. A rough idea of film resolution is 2048 by 1152: that's 2,359,296 light rays (about 2.4 megapixels) every 1/24 of a second. An awful lot to remember for just one frame out of 129,600 in a 90-minute feature.
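The numbers quoted above are easy to check: one primary ray per pixel at a 2K-ish resolution, 24 frames per second, 90 minutes of footage.

```python
# Back-of-envelope figures from the paragraph above.
width, height = 2048, 1152
rays_per_frame = width * height   # one primary ray per pixel
frames = 90 * 60 * 24             # 90 minutes at 24 frames per second

print(rays_per_frame)  # 2359296 (~2.4 megapixels)
print(frames)          # 129600 frames in the feature
```

And that count is only the primary rays; every bounce after the first multiplies the bookkeeping further.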
Is there a simpler way?
Ray casting functions in a similar way to ray tracing, except there are no bounces: once a ray reaches a surface, the colour and shading are faked. With less information to remember the process is a lot quicker, but it also has more inaccuracies; if the shading and colouring aren't done proficiently, the entire shot looks fake.
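The "faked" shading can be sketched in the same toy style. One common stand-in (an assumption here, not necessarily what any given film used) is a single Lambert term: stop at the first surface and shade it by how directly it faces the light, with no further bounces.

```python
# Toy ray-casting shade: first hit only, shading faked with one
# Lambert-style dot product instead of following more bounces.

def cast(surface_colour, surface_normal, light_dir):
    """Shade one hit: colour * max(0, N . L). No recursion, no bounces."""
    dot = sum(n * l for n, l in zip(surface_normal, light_dir))
    return surface_colour * max(0.0, dot)

# Surface facing straight up, light shining from directly above:
print(cast(0.9, (0.0, 1.0, 0.0), (0.0, 1.0, 0.0)))  # 0.9

# Same surface, light from below: it faces away, so it shades to black:
print(cast(0.9, (0.0, 1.0, 0.0), (0.0, -1.0, 0.0)))  # 0.0
```

Compare this single function call per pixel with the recursion ray tracing needs, and the speed difference is obvious, as is why the result only looks right when the fake is tuned well.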
There you have it: the basics of how light and shadows are produced digitally. Mathematical equations work out the path a real light ray might take, complicated stuff made possible by advances in technology. As it stands, Pixar aim for each frame (1/24 of a second) to render in about 3 minutes, which makes a whole film take close to a year, so it's safe to say we're a long way off being able to create photorealistic digital environments in real time. When that happens I would worry: if we could create a near-perfect environment in a simulator and you went into that simulator, how would you know if you really left?
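That year-long figure follows from the 3-minutes-per-frame budget quoted above, assuming the frames render one after another:

```python
# Rendering budget: ~3 minutes per frame, 90-minute feature at 24 fps.
frames = 90 * 60 * 24            # 129,600 frames
render_minutes = frames * 3      # 388,800 minutes of rendering
render_days = render_minutes / 60 / 24

print(round(render_days))        # 270 days of non-stop rendering
```

About nine months of continuous computation, before any re-renders or revisions, which is how "close to a year" for a whole film comes about.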