In order to understand how current 3D imagery methods work, and why they are limited, it is important to have some basic knowledge of how the human visual system perceives depth. By depth I mean how far away an object is perceived to be. There are primary and secondary depth cues, or indicators, that the human visual system uses. I will cover the secondary depth cues first, as these are the ones already commonly used in games and in the movies.
As a very general and often inaccurate rule, larger objects are closer. The more of your field of view an object takes up, the closer it is. This particularly applies if you have two of the same object. As shown below, one monitor looks closer than the other because it is larger. This example works well because we have two instances of the same object, which your brain assumes are the same size.
The above example causes our brain to make a couple of assumptions: First, we assume that both monitors are the same size. Second, we assume that they are standing the right way up, not upside down, and that they are being viewed from above. Now let's break these assumptions by changing the assumed viewing angle.
We now see that the image doesn't look right. Your brain is confused because it can interpret the above image in two different ways: either there is one very large monitor behind a much smaller monitor that is closer to you, or there are still two monitors of the same size but they are hanging from the ceiling (or the larger one is floating in the air in front of us).
Here is a perspective drawing. The lines on the road fill more of your field of view closer to you and grow narrower and narrower until they reach a single point in the distance. This point is called a vanishing point and is labeled "vp" on the drawing. You probably did pictures like these at school.
Your brain assumes that the road, the fence, and the lights are all uniform in size, so it interprets the narrowing of them towards the center of the picture as an effect of distance. Here is a Quake screen shot where I have highlighted the main vanishing point.
Size and perspective alone can be effective at representing a three dimensional object, although they are usually used in conjunction with secondary methods such as shading.
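The size and perspective cues above boil down to one relationship: under a simple pinhole projection, an object's on-screen size is proportional to its real size divided by its distance from the viewer. Here is a minimal sketch of that idea; the function name and the focal length value are my own illustrative choices, not from the article.

```python
def projected_height(real_height, distance, focal_length=1.0):
    """On-screen height of an object under a pinhole projection."""
    if distance <= 0:
        raise ValueError("object must be in front of the viewer")
    return focal_length * real_height / distance

# Two identical monitors at different distances: the nearer one
# fills more of the field of view, so the brain reads it as closer.
near = projected_height(0.4, 1.0)   # a 0.4 m tall monitor, 1 m away
far  = projected_height(0.4, 2.0)   # the same monitor, 2 m away
```

Doubling the distance halves the projected height, which is exactly why the road markings in the perspective drawing narrow toward the vanishing point.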
Overlaying, Parallax, and Speed
When one picture overlays another, the eye assumes the overlapping picture is on top or in front. Illustrated below is an example where one man appears to be standing behind the other even though both are drawn the same height, so there is no perspective cue.
Overlaying occurs in games too. If a wall were overlaid on top of a person standing in front of it, things would look very strange. You sometimes see these sorts of anomalies if a game messes up its z-buffer or you have your graphics card clocked too high. A common flavour of overlaying is parallax scrolling, which is typically seen in horizontal and vertical scrolling shoot-em-ups and platform games.
The following animated GIF will demonstrate (it may run slow on some browsers, but you get the idea). In this example, it's obvious that the pillars are in front of the wall behind them. The brain doesn't have to perceive it this way, but with parallax scrolling, it's nearly impossible to persuade your brain to perceive it in any other way.
Also notice that the pillars are moving faster across the field of view than the wall behind them. This is another secondary cue that the human eye uses. As a general rule, if something moves fast across your field of view, then it's closer to you than something that is moving slower.
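A parallax scroller exploits this speed cue directly: each layer is scrolled by an amount inversely related to its assumed distance, so nearer layers sweep across the field of view faster. A minimal sketch, where the layer names and depth values are my own illustrative assumptions:

```python
def layer_offset(camera_x, depth):
    """Horizontal scroll offset of a layer at the given assumed depth.

    Nearer layers (small depth) move farther per unit of camera
    movement, i.e. faster across the field of view.
    """
    return camera_x / depth

camera_x = 100.0
pillars_offset = layer_offset(camera_x, depth=1.0)  # near layer: moves fast
wall_offset    = layer_offset(camera_x, depth=4.0)  # far layer: moves slowly
```

Drawing the far layer first and the near layer on top combines this speed cue with the overlaying cue described above.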
Camera Focus And Depth Of Field
The human eye, like a low cost camera, can only focus on one area at a time. If you are focused on one point, then objects closer or further away from you will be out of focus, or blurry. To see this for yourself, close one eye and hold your right thumb in front of you at arm's length. Next, hold your left thumb in front of you close to your nose. Now if you focus on your right thumb, the left thumb goes blurry. If you focus on your left thumb, then your right thumb goes blurry. See it? I asked you to close one eye because if you don't, the other thumb will not only go blurry but you will see double (which relates to a topic I will discuss later).
The human eye is an active system: even when you think that you are looking at one thing, your eye is bouncing around analyzing the surrounding environment. By seeing which objects blur, and by how much, when you focus on another object, your eye gets a secondary cue of depth. By deliberately putting one part of the image out of focus, we can try to convince the viewer that it is farther away than, or closer than, the part we hope they are concentrating on, as these examples show.
This technique is often used in films and photography. It has very limited application to computer games because it restricts the viewer to focusing on one particular part of the image. Imagine playing Quake when all of a sudden your PC decided that the guy immediately in front of you with a machine gun should be your focal point and everything else, including the guy with the railgun behind him, went blurry. In an interactive environment you really can't force the user to focus on one particular spot. Depth of field remains a useful tool in films and photography where you want to draw attention to a particular part, and we may see it in real-time generated cut-scenes in future games.
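As a rough sketch of how a renderer could fake this cue: blur each object by an amount that grows with its distance from the chosen focal plane. The linear falloff, the `strength` factor, and the depth values are illustrative assumptions of mine, not a description of any real engine.

```python
def blur_radius(object_depth, focal_depth, strength=2.0):
    """Blur radius (in pixels) for an object, growing linearly with
    its distance from the focal plane."""
    return strength * abs(object_depth - focal_depth)

focal_depth = 5.0
in_focus  = blur_radius(5.0, focal_depth)  # on the focal plane: sharp
near_blur = blur_radius(2.0, focal_depth)  # nearer than focus: blurry
```

A real implementation would use the computed radius to drive a blur filter over each region of the frame.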
Lighting And Shadows
By correctly rendering the lighting and shadows in an environment (light maps, etc.), a scene can look much more three-dimensional. Here is a picture of a cube and a sphere, before and after shading and shadowing are applied. Here I have assumed that the light is being cast from the upper left hand corner of the screen.
By shading the polygons with a linear gradient we can create an illusion of distance, not necessarily from the user, but from the light source (unless the user is at the light source).
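The standard way to compute this kind of shading is the diffuse (Lambert) term: a face's brightness is the dot product of its surface normal with the direction to the light, clamped at zero. This is a minimal sketch with hand-picked vectors, not the article's own renderer.

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def diffuse_brightness(normal, to_light):
    """Brightness in [0, 1], given unit-length normal and light vectors.

    Faces turned toward the light are bright; faces turned away
    fall to black (the max() clamp)."""
    return max(0.0, dot(normal, to_light))

# A face pointing straight at the light versus one facing away from it.
facing_light = diffuse_brightness((0.0, 0.0, 1.0), (0.0, 0.0, 1.0))
facing_away  = diffuse_brightness((0.0, 0.0, -1.0), (0.0, 0.0, 1.0))
```

Applied per polygon (or per light-map texel), this gradient is what makes the shaded cube and sphere read as solid objects.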
Haze, Fog and Atmospheric Distortion
Haze refers to atmospheric distortion. The atmosphere we breathe isn't totally clean; there are little bits of dust and the like floating around in it. If a mountain is really 10 miles away then there will be some atmospheric distortion between you and it. The mountain will look slightly less detailed, with slightly less color definition. Haze is essential for giving a computer generated (Bryce, Vista, etc.) landscape that extra touch that makes it look realistic and believable. This Falcon 4 screen shot shows haze being used on the horizon.
A variation of haze is fog, which is the same as haze, only closer and more opaque. Some games simulate darkness by using black fog. With fogging, closer items are more visible, and objects further away are hidden in the fog. Racing games like Motorhead (shown below) often use fogging.
Fogging is also a convenient way of introducing objects into the immediate surroundings of the player within a game. Without fogging, racing games like Motorhead would either have to render the surrounding world all the way to the horizon, which would be very CPU intensive, or have objects simply pop into existence when you got close to them.
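The usual way to implement this is a per-pixel blend toward the fog color by a factor that grows with distance, reaching full fog at some far plane (past which nothing needs to be drawn at all). A minimal linear-fog sketch; the start/end distances and colors are illustrative assumptions:

```python
def fog_blend(color, fog_color, distance, fog_start=10.0, fog_end=100.0):
    """Linearly blend a color into the fog color with distance.

    Before fog_start the color is untouched; at fog_end and beyond
    it is entirely fog, so geometry there can simply be skipped."""
    t = (distance - fog_start) / (fog_end - fog_start)
    t = max(0.0, min(1.0, t))  # clamp the blend factor to [0, 1]
    return tuple((1 - t) * c + t * f for c, f in zip(color, fog_color))

red = (1.0, 0.0, 0.0)
grey_fog = (0.5, 0.5, 0.5)
near_pixel = fog_blend(red, grey_fog, 10.0)   # unfogged
far_pixel  = fog_blend(red, grey_fog, 100.0)  # fully fogged
```

Using a black `fog_color` gives the simulated-darkness effect mentioned above.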
There are two old fashioned techniques for viewing the two photos taken from a stereo camera. These techniques have been known since the 1830s, when Sir Charles Wheatstone created a stereo viewing machine. These pictures can be seen with the naked eye, and I will show you how.
Cross-Eyed Viewing Technique
This technique I find the easiest. Below is a cross-eyed pair; it's the same Quake picture that we saw earlier.
To assist you, I have placed a yellow dot in each picture. The idea is to make yourself go cross-eyed so that the two dots join and form a third dot in between them which is closer to you. Below is a diagram of what you should make your eyes do.
Go as cross-eyed as you can, then gradually let it out until both dots meet. Tilt your head slightly left or right until they are perfectly aligned and then allow your eyes to adjust and relax in the new position. After the two dots are perfectly aligned try and relax - let your eyes roam around the picture, but only a little, and keep your focus on the center dot.
Your eyes are now converged as if the object is closer to you, and they will also try to focus as if the object is closer. That isn't what you want, because the images are really on the surface of the screen. You want your eyes to be pointed as if the object is closer to you, but focused on the screen farther away. The focusing of each eye is mostly a subconscious process. Just keep the middle dot aligned, relax as best you can, and wait. You should see the stereo image come into focus within a couple of minutes. If you still can't see anything, rest your eyes before trying again, as going cross-eyed like this can cause eye strain and even temporary short-sightedness.
Parallel Viewing Technique
Here is a parallel pair of images. It's the same Quake picture we saw earlier, only the two images are swapped. To see this, the viewer does not go cross-eyed. Instead the right eye looks at the right picture and the left eye looks at the left picture.
To see a single 3D image you must force your eyes to focus on a focal point beyond the screen. To prevent eye-strain, I recommend that you print this picture off if you intend to spend any time on this technique. Below is a diagram of what you need to make your eyes do.
You will probably find this very difficult. For assistance, try putting a piece of paper between the two pictures and up to your nose; this will prevent each eye from looking at the wrong picture. You can also use the yellow dots as guides. You must point your eyes beyond the screen to get the two dots to split and join in the middle, forming a third dot in the distance. Your eyes will want to focus in the distance too, but you don't want that to happen; instead you want them to focus on the screen, as this is where the actual picture is.
If you still can't get it, then don't worry, I can't either, and I had my eyes checked and the optician said they were perfect. This is just a very tricky thing to do because you are asking your visual system to perceive an object as being both far away and close at the same time.
One limitation of this technique is that the content of the two pictures can't be too far apart, and therefore the pictures must remain small. If they are too far apart then your eyes will have to do this...
I saw one guy on TV who could make his eyes do this. I don't recommend practicing it though :). By the way, viewing machines for these kinds of images contain lenses to magnify the image so it doesn't look so tiny.
Stereograms, which I will mention later, are a variation of this.
There are numerous different types of machines that show a stereo pair of images to the viewer. The most popular kind is probably the "View-Master", which most of you have probably seen in toy stores.
These machines are basically an assisted version of the parallel viewing method. They contain lenses to magnify the image and make sure that each eye only looks at the image it was meant to see.
On TV and in the movies we see a whole different kind of 3D imaging. Programs like Star Trek often show a holographic image actually projected into the middle of the room, and the users walk around the object, viewing it from different angles.
Actually Star Trek takes things even further and allows the user to touch the holographic images, fight holographic ninjas, and bed holographic women. Of course what we see in Star Trek is just special effects, but why can't we generate holograms for real?
Note: A lot of this is just speculation and ideas.
Experts in the field of holography are often asked to create a holographic object in the center of a room that people can walk around and observe from all angles. Such a thing is presently impossible because of one fundamental problem: how can you make light reflect off nothing? Light is only visible if the photons actually reach your eye. An object is only visible if some light strikes it, bounces off, and some of the reflected light reaches your retina.
Creating a hologram in thin air is impossible because there is nothing to bounce the light off. I have seen TV shows and commercials where they project the image by focusing several laser beams in the center of the room. This wouldn't work either: a laser beam is only visible if it reflects off something. This misconception is quickly dispelled for anyone who plays with a laser pointer for the first time. They expect to see some Star Wars style laser beam shine across the room and instead see nothing but a small red dot projected on the wall.
Now, you may have seen laser beams at night clubs and the like. In these places the beams are visible. Why? Because the rooms are filled with artificial fog, not to mention tobacco smoke. The laser light bounces off all the little particles of fog and smoke, and it's the illuminated particles that you see as the "laser beam". If you have a laser pointer, try it outside on a foggy morning; it looks really cool.
If this were a movie we could generate a subspace field, or project the photons directly into the middle of the room over a transporter beam. But this is real life, and I intend to discuss this realistically. Still, let's humor the idea of the laser based system for the moment and look at some of its other limitations and considerations.
All with a dot
We can't create a holographic image with beams of light as it would look something like this...
Instead we need dots of light to construct an image. If one dot could be projected in three dimensional space then we would have it made, because it would be a relatively simple task to adapt that into rendering a complete image made up of millions of points of light. Your monitor works by moving a single point across the screen like so...
The beam scans across the surface of your screen and is varied in strength as it does so. When this is done 75 times a second (on a cheap monitor) the result is an apparently solid image. The same process could work with a dot projected in 3D space, which we call a voxel. The voxel would move through 3D space in a scanline fashion in three dimensions, something like this...
By varying the voxel's brightness as it scans we could render a 3D image. Directing the beams in a scanline pattern would be easy; you could just use a set of rotating mirrors. The laser tubes themselves wouldn't need to move.
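The scan order itself is just the 3D analogue of a monitor's raster scan: visit every (x, y, z) position in scanline order and set a brightness for each. A small sketch of that idea; the volume size and the `brightness_at` callback are hypothetical names of mine, standing in for whatever drives the beam strength.

```python
def scan_volume(width, height, depth, brightness_at):
    """Visit every voxel in scanline order, recording its brightness,
    the way a monitor's beam sweeps a 2D screen but with an extra axis."""
    frame = {}
    for z in range(depth):           # sweep plane by plane
        for y in range(height):      # then row by row within the plane
            for x in range(width):   # then voxel by voxel along the row
                frame[(x, y, z)] = brightness_at(x, y, z)
    return frame

# Render a tiny 2x2x2 volume in which only one corner voxel is lit.
frame = scan_volume(
    2, 2, 2,
    lambda x, y, z: 1.0 if (x, y, z) == (0, 0, 0) else 0.0,
)
```

Repeated fast enough, as with the monitor's 75 sweeps per second, the eye would fuse the moving dot into a solid volumetric image.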
Making The Dot
Now that we have established that we could create a 3D image with a single dot in such a manner, how do we project that dot? As I said before, light isn't visible unless it reflects off something. But let's say that we can fill a three dimensional space with particles of matter, e.g. fog. Then we could focus several laser beams onto one spot. Each laser beam would be too weak to be seen individually, but where they all meet at a particular point in space, the particles in that region would reflect sufficient light to be seen as a dot...
The problem with filling a room with fog or mist is that it would be impossible with current technology to keep the density of the fog consistent and uniform. Even if you could, it would be distorted as soon as someone in the fog-filled room moved, talked, breathed, or broke wind; anything that caused a little draft would upset the placement of all the little fog particles. The same would apply to a liquid.
But what about a solid? It might be possible to construct a three dimensional cube of a glass like material with reflective pieces of matter uniformly placed throughout its structure. Furthermore the placement of the reflective pieces could be ordered in such a way that a laser beam could strike any piece without any other pieces getting in the way.
So if this were possible, what would you have? Well, you would have a cube of glass-like material within which a three dimensional image could be projected. This is called a volumetric display. Using laser beams in this way is just my idea; there are probably other (and better) ways of creating a volumetric display.
Our imaginary laser based 3D system has another major flaw. The image it projects is transparent. If we try to project a block it may look like this.
We can see that all sides of the image have been rendered, so it is truly 3D, but the downside is that we can see far parts of the image that should be obscured by the closer parts. The image is just made of light; there is no real matter there to block the light from the far side of the image. So how could we fix this? Well, if we only had one viewer, then he or she could wear a simple tracking device that allowed the computer to see where the viewer is, and hence which parts of the image should be visible and which should not. Such tracking devices are already used in virtual reality.
Real Life Volumetric Displays
Well, as we have seen, we cannot project a hologram into the middle of the room like R2-D2. But it is at least feasible to create a 3D image within a confined space, or volumetric display. No volumetric displays are currently available, but there is some research going on in this field. The details of how such systems work are usually kept secret, so information is hard to come by. However, they usually fall into two types: Swept Volume and Static Volume displays.
Swept Volume displays rely upon a surface that moves so fast that the eye perceives it as one solid display. You can see simple examples of these displays in science or novelty stores. One in particular is a wand with a row of LEDs (small red lights) on it; when you wave the wand back and forth rapidly through the air you see a message.
More complex versions of this technique can render multi-layered 3D images.
Static Volume displays are like the laser based idea we discussed. They contain no moving parts. Here is a quote directly from an article on the subject.
"Whereas most volumetric systems under development at present are of the swept-volume type, static-volume displays, which do not employ a moving component to sweep out the display volume, are also under development. One means of generating isolated voxels in a static volume is to utilize a stepwise excitation of fluorescence processes at the intersection of two invisible (usually infrared) laser beams. This requires that the display volume contain atoms, ions or molecules that exhibit this behavior with suitable quantum conversion efficiencies, output fluorescence frequency and decay times. The most promising medium at present employs rare-earth ions doped into an infrared-transparent glass. An alternative static-volume technique has the display volume composed of a 3D array of individually addressable voxel elements. This method may thus achieve ultimate parallelism, with every voxel addressable each refresh."
Errm, yeah, in English? Well, basically they are using laser beams in a similar way to what I discussed earlier, although their idea is better. The infrared laser beams are invisible, but the elements within the volume radiate visible light when struck by the intersecting infrared beams. Since I have no idea what quantum conversion efficiencies are, I will stop writing here before I get any further into a big pile of you know what :).
In this section I will cover the techniques I know of for displaying a stereo image that is "natural". By natural I mean that no special glasses are required, and you don't need to train your eyes to do weird things to see the image. This is the Holy Grail of stereo imaging. With a natural technique you can just look at the picture and instantly see a 3D image.
What Are Holograms?
Holograms were invented in 1947 by Dr. Dennis Gabor. A hologram is like a regular photograph, but instead of being a photo of the general mixed up ambient light reflecting from an object, it is a recording of the actual patterns of light, their phase and amplitude, reflecting from an object. When light reflects off the resulting hologram, it reproduces the same patterns of light as the original object.
Creating a hologram first requires a source of coherent light. Normal light would just mix together and the actual light patterns could not be photographed; a source of coherent light is required, and that means a laser. Laser stands for "Light Amplification by Stimulated Emission of Radiation". I am not going to go into detail on how lasers work; all you need to know is that the photons (light particles) in laser light are all travelling in an ordered way in the same direction. Some cheap laser pointers can even be used to generate a holographic image (click here to read more on the subject).
Light, although consisting of particles called photons, also travels as a wave. A hologram is made using two waves of laser light, but the waves must be in sync, and therefore come from the same laser. The laser beam is split in two using a piece of glass: some of the beam goes through the glass, and some of it reflects off it. Each laser beam is too fine and narrow to illuminate anything larger than a pin-head, so each is widened using a lens. The first wave of laser light, called the reference beam, goes straight from the laser onto a piece of high quality photographic plate. The second wave of laser light is directed at the object to be photographed; the laser light reflects off the object and onto the photographic plate. Where the two waves of laser light meet, they create the interference pattern that produces the final hologram.
There are two types of hologram. If the reference beam hits the photographic plate on the same side as the object beam, this creates a transmission hologram. If the reference beam hits the opposite side of the photographic plate, this creates a reflection hologram.
Transmission holograms are the most common and are best illuminated from behind. Normally this isn't practical, so a reflective piece of foil is attached behind them. Transmission holograms also give a rainbow effect to the color of the picture.
Reflection holograms are lit from the front, and are typically the more expensive and better looking kind that you see in novelty stores. Reflection holograms are normally just one color, but it is possible for a hologram of this kind to contain two or three colors. Still not a full color image though.
Places like Yoshikawa Labs at Nihon University are working on holographic video and computer generated holograms...
"The pixel numbers are 10,240 x 6,144 and the size of the final hologram is 35 mm x 21 mm. When observed about 1 m from the hologram, one can recognize binocular parallax."
That's a lot of pixels for an image a little over an inch across! We may see real-time computer generated holographic displays in our lifetime, but we may have a little while to wait yet.
This is another example of 3D imagery that has been distributed in cereal packets :). A lenticular picture consists of several pictures interleaved together. A piece of plastic with a prism-like structure is attached to the surface of the picture. The plastic refracts different lines of the image depending on the angle you view it from. Lenticular pictures were first used to display simple animations like this...
Later, as the technology improved, it started to be used for displaying 3D images. After all, the two eyes of the viewer are looking at the picture from slightly different angles anyway, so why not show a stereo pair of pictures?
Lenticular pictures can display a stereo 3D image and/or a few frames of animation. Lenticular images have been around since the 1940s and were first mass produced by a company called Variview. They are cheap and easy to produce; with a good enough printer and some lenticular sheet (which you can buy here) you can even make your own. Lenticular images are also usually clearer and more defined than holograms. Nowadays you often see them on the covers of video tapes (Independence Day springs to mind).
One of the exciting things about lenticular displays is that real-time computer generated lenticular video is now possible. Instead of having a static picture behind the lenticular screen, we have a high resolution liquid crystal display. It's fairly simple to alter a game or other piece of software to render the scene twice and interleave the two images together before displaying them on the screen. With any luck this technology will be noticed by the big industry giants and we will see lenticular displays become available to the average Joe within the next few years.
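The interleaving step itself is straightforward: alternate columns of the output come from the left-eye and right-eye renders, so the lens sheet can steer each set of columns to the matching eye. A minimal sketch, with images reduced to tiny lists of pixel-column labels purely for illustration:

```python
def interleave(left_columns, right_columns):
    """Build a lenticular image: even output columns come from the
    left-eye image, odd output columns from the right-eye image."""
    out = []
    for i in range(len(left_columns)):
        out.append(left_columns[i])
        out.append(right_columns[i])
    return out

# Two-column renders of the left-eye and right-eye views.
combined = interleave(["L0", "L1"], ["R0", "R1"])
```

In a real display the column pitch of the interleaved image has to match the pitch of the lenticular lens sheet, otherwise the eyes receive a scrambled mix of both views.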
This page will discuss the current techniques of showing a stereo image to the viewer with the aid of special glasses. These glasses all use different techniques to achieve the same thing: to deliver two separate images of the same scene, one to each eye, that the brain sees as a stereo (3D) image.
"Analglyph" refers to the red/blue or red/green glasses that most of us have seen in comic books and in cereal packets etc. The glasses consist of nothing more than one piece of transparent blue/green plastic and one piece of transparent red plastic. These glasses are easy to manufacture and have been around since the 1850s. You can even make your own with the right colored candy wrappers.
An anaglyph stereo picture starts as a normal stereo pair of images: two images of the same scene, shot from slightly different positions. One image is then made all green/blue and the other is made all red, and the two are then added together. The diagram below illustrates this; click on it for a larger view.
When the image is viewed through the glasses, the red parts are seen by one eye and the green/blue parts are seen by the other. This effect is fairly simple to achieve with photography, and extremely easy to do on a PC; it can even be hand-drawn. The main limitation of this technique is that because color is used in this way, the true color content of the image is usually lost and the resulting images are in black and white. A few images can retain their original color content, but the photographer has to be very selective with color and picture content. Click here to visit a page with some examples of anaglyph 3D pictures with full color content.
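On a PC the composition step is just channel selection: take the red channel from the left-eye image and the green/blue channels from the right-eye image, so the red filter passes one view and the cyan filter the other. A minimal per-pixel sketch; the pixel values are illustrative.

```python
def anaglyph_pixel(left_rgb, right_rgb):
    """Combine one pixel of a stereo pair into an anaglyph pixel:
    red channel from the left view, green and blue from the right."""
    return (left_rgb[0], right_rgb[1], right_rgb[2])

left_pixel  = (200, 50, 50)    # a pixel from the left-eye image
right_pixel = (60, 180, 170)   # the same pixel from the right-eye image
combined = anaglyph_pixel(left_pixel, right_pixel)
```

Applying this to every pixel of the pair produces the full anaglyph image; the loss of true color mentioned above falls directly out of discarding two channels of one view and one channel of the other.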
Pulfrich 3D Glasses
These glasses are based on a phenomenon discovered by a guy called Carl Pulfrich.
Pulfrich discovered that the eye responds more slowly to the dim image seen through a dark lens than to the bright image seen through a clear one. Not by much, but by a few milliseconds, just enough for a frame of video to effectively arrive one frame late at the eye the dark lens covers. What use is this? Well, if the video being watched shows an object moving horizontally across the screen, and one eye sees a frame earlier than the other eye, then you will see the image in two locations.
With a ball moving horizontally across the screen like this, each eye sees a different image, and the disparity between the two images is perceived as depth information. The brain assumes both frames belong to the same object and your eyes focus on the object as if it were closer than it is. The faster the object moves, the more separation there is between the time delayed images and the closer the object appears.
The fact that faster objects appear closer than slower objects also coincides with the principles of motion parallax. The parallax scrolling animation that I showed earlier looks pretty good through these sorts of glasses.
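The depth relationship here is simple to state: the disparity between the two eyes' views is the distance the object travels during the delay, so it grows with the object's speed. A minimal sketch of that relationship, with made-up pixel speeds and a one-frame delay as assumptions:

```python
def pulfrich_disparity(speed_px_per_frame, delay_frames=1):
    """Horizontal offset between the two eyes' views of a moving
    object, given the effective delay of the darkened eye."""
    return speed_px_per_frame * delay_frames

slow_ball = pulfrich_disparity(2.0)   # small disparity: appears far
fast_ball = pulfrich_disparity(10.0)  # large disparity: appears near
```

This is why the effect vanishes for stationary objects and reverses when motion changes direction, as described below.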
In high quality Pulfrich glasses the second lens is not required, or is just a clear piece of plastic. Since these ones of mine are cheap, the dark lens actually has a purplish color to it, which offsets the color balance of the image being viewed. To compensate, a yellow/green tint has been given to the other lens to try to correct the color balance.
Films made for the Pulfrich method are perfectly watchable without any special glasses (minus the 3D effect). The limitation of this technique is that it only works when things are moving horizontally and in the same direction. Films made to take advantage of these glasses must contain lots of horizontal tracking shots or rotational panning shots to create the effect.
The only types of games that would benefit from these glasses are horizontal shoot-em-ups or platform games where the player is always heading in the same direction. If the player changes direction then everything in the background will look closer than everything in the foreground, at least until you take off your glasses and put them on back to front, which corrects the order. A few old arcade games like R-Type and Nemesis work well with these glasses.
Polarized 3D Glasses
Polarized glasses are probably the kind most commonly used in amusement parks and the like. Each lens is polarized at an opposing 45 degree angle.
Displaying an effect like this requires two projectors. Each projector also has a polarizing lens over it, each at opposing 45 degree angles, like the glasses. A polarizing lens basically lines up all the light waves so they are in one orientation. These oriented light waves can only pass through a lens that is polarized at the same angle; if the polarization of the lens is different then it won't let that light through. Light coming from the projector oriented at -45 degrees will be seen by one eye, and light coming from the other projector oriented at +45 degrees will be seen by the other eye. Hence, you can display a stereo pair of images at the same spot and the viewer will see a single 3D image.
The above diagram shows two projectors projecting polarized light onto a screen. The orientation of the lines (vertical or horizontal) indicates the orientation of the light. The color of the light is for illustration purposes only; polarized 3D doesn't colorize the light like that. When the two images reflect off the screen, a mix of light (in both orientations) reaches the 3D glasses. The lenses of the 3D glasses only let the matching polarized light through, so the light is split back into its separate images before it reaches the user's eyes.
Optical equipment like this is expensive; even the glasses can cost up to 3 dollars apiece when ordered in bulk lots of 50. The projectors and the lenses cost a lot more. The cost is the main reason why you normally only see polarized 3D at special events and amusement parks.
This 3D technique could possibly be adapted for computer games. We can already buy projectors that you can hook a PC up to. I don't see why you couldn't have a PC with two graphics cards, two projectors, and a couple of polarizing lenses to project the image. All you would need then is some sort of patch for your favorite game to render each frame as a stereo pair, one frame per graphics card. I don't know of anyone trying this though; it would likely involve several thousand dollars worth of equipment.
Purchasing note: Anaglyph, Pulfrich, and Polarized glasses can all be purchased in bulk from the Rainbow Symphony Store... no, I don't get any commission from these guys, I just thought I'd mention a place where you can buy them since they are sometimes hard to find.
Chromadepth 3D Glasses
The flat plastic that makes up the lenses is practically colorless, and both lenses look identical. Here is a picture that works well with Chromadepth glasses.
At first I couldn't work out how these glasses worked, but after looking at the example pictures for a short while I noticed a pattern: all red objects appear close to you, all blue objects appear far away, and the other colors make up the depths in between according to their location on the spectrum. Colored light that hits the lenses gets redirected according to its hue, like so...
As you can see, blue light passes almost straight through, green light gets angled slightly, yellow light more, and red light is angled the most. The other lens does the same thing except that it refracts light in the opposite direction; it's the same lens, just attached back-to-front. By changing the angle of light from a colored object before it reaches the eye, we can make the eyes see the object as being at a different distance than it actually is. Hopefully this diagram will explain...
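The net effect is a mapping from hue to perceived depth, with red nearest and blue farthest. Here is a minimal sketch of such a mapping; the hue scale (0 degrees = red through 240 degrees = blue) and the depth range are my own illustrative assumptions, not measured optics.

```python
def hue_to_depth(hue_degrees, near=0.0, far=10.0):
    """Map a hue (0 = red ... 240 = blue) onto a perceived depth
    range, red appearing nearest and blue farthest."""
    t = max(0.0, min(1.0, hue_degrees / 240.0))  # clamp to [0, 1]
    return near + t * (far - near)

red_depth  = hue_to_depth(0)     # red: nearest to the viewer
blue_depth = hue_to_depth(240)   # blue: farthest away
```

An artist making a Chromadepth picture effectively runs this mapping in reverse, choosing each object's color from the depth they want it to appear at.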
The limitation of this technique is that the colors of the objects in a scene must be chosen accordingly: if you wanted to put a man in the background, he would have to be wearing blue. If that man were to walk towards you, his shirt would have to change through all the colors of the rainbow until it became red when he was up against the screen. In a game you would have to change the colors of all the scenery as you moved around in it. This could be trippy, but I am sure the novelty wouldn't last forever.
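Recoloring a scene this way is just a matter of mapping each object's distance onto the spectrum. A minimal sketch of that mapping, with an arbitrary assumed depth range of 1 to 50 units:

```python
import colorsys

# Sketch: recoloring scenery by distance for Chromadepth viewing.
# Near objects become red (hue 0), far objects blue (hue 2/3),
# with the rest of the spectrum in between. The near/far range
# here is an arbitrary assumption.

def chromadepth_color(distance, near=1.0, far=50.0):
    """Map a distance in the scene to an (r, g, b) color in 0..1."""
    # Clamp to the visible range, then normalize to 0..1.
    t = (min(max(distance, near), far) - near) / (far - near)
    hue = t * (2.0 / 3.0)  # 0 = red (close) ... 2/3 = blue (far)
    return colorsys.hsv_to_rgb(hue, 1.0, 1.0)

print(chromadepth_color(1.0))   # close object: pure red
print(chromadepth_color(50.0))  # distant object: essentially pure blue
```

As the man walked towards you, the game would feed his shrinking distance through a function like this every frame, sweeping his shirt from blue through green and yellow to red.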
A funny quirk of Chromadepth glasses is that after using them for a while, some people start to see a slight 3D effect from colored images even without the glasses. This isn't just imagination: the eye really does focus different colors at slightly different distances, because the eye's lens bends each wavelength by a slightly different amount (the same chromatic aberration you get in a cheap camera lens). The brain normally compensates for this, but I can still consciously convince my brain to perceive a Chromadepth picture as three dimensional. The effect is only slight, and isn't as pronounced as when wearing the glasses, but it is noticeable.
You can purchase the glasses from Chromatek; a pack of 10 costs about 10 bucks. Chromadepth glasses are fun to play with, and ideal for keeping kids amused at parties, since it's easy for them to create their own 3D pictures with a few colored crayons.
The idea behind shutter glasses has probably been around for a while, but only recently has the technology become available to implement it effectively. The video alternates between two images, one intended for each eye, while shutters cover and uncover the viewer's eyes in turn, allowing one eye to see, then the other. The shutters are timed with the video so that each eye only sees the images intended for it. Of course, the shutters aren't mechanical devices nowadays; instead, each lens turns opaque when an electric current is passed through it, probably a derivative of liquid crystal display technology. Here is a picture of some Crystal Eyes (tm) shutter glasses, manufactured by Stereo Graphics Corporation.
Because shutter glasses only expose each eye to every other frame, the effective refresh rate of the displayed video is cut in half. On a TV this would give each eye 30 frames per second (NTSC) or 25 frames per second (PAL). That would be extremely flickery and hard on the eyes, so this technique will never take off for TV viewing until new TV standards come out. PC monitors, however, are beginning to support refresh rates of 120 Hz and higher, which gives an effective 60 Hz per eye, just about sufferable. If you have a monitor that can do 140 Hz then each eye gets 70 Hz, and at that point the flicker starts to disappear and the glasses can be comfortably used for long periods of time.
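The timing logic is simple enough to sketch in a few lines; the function names here are just illustrative:

```python
# Sketch of the shutter timing: the display alternates left/right
# frames, and each shutter opens only on its own frames, so each
# eye effectively sees half the display's refresh rate.

def eye_for_frame(frame_index):
    """Which eye's shutter is open on a given frame."""
    return "left" if frame_index % 2 == 0 else "right"

def per_eye_rate(refresh_hz):
    """Effective refresh rate seen by each eye."""
    return refresh_hz / 2.0

print([eye_for_frame(i) for i in range(6)])
for hz in (60, 120, 140):
    print(hz, "Hz display ->", per_eye_rate(hz), "Hz per eye")
```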
With a good monitor, the only remaining drawback of shutter glasses is the hassle of having to wear them: they are heavy (much more so than regular spectacles) and you usually have a dangling wire that can get in the way.
"Virtual Reality" was a phrase we heard a lot of in the early-mid 90s, around the time of movies like "Lawnmower Man" and "Jonny Neumonic". The idea of VR was to create a totally submersive virtual experience. Back-lit Liquid Crystal Displays pretty much kicked off this craze, it allowed the construction of light weight (relatively) headsets with a separate display over each eye.
A back-lit color LCD was much lighter than a traditional cathode ray tube, and strapping two TVs to your head probably wasn't such a good idea. With two LCD displays it was possible for a computer to render a 3D environment and display it as a stereo image to the user. Additionally, motion sensors could be built into the headset so that when users moved their heads, the angle of view within the computer-generated environment moved with them. To add to the experience, one or two "data gloves" could be worn, allowing the user to pick up and interact with objects in the virtual world.
A company called Virtuality was the first to attempt to mass-market this sort of technology to the public. They built the first VR arcade machines, where for a ridiculous price you could experience a VR fighter jet in all its flat-shaded polygon wonder.
Nowadays VR seems to have gone out of fashion; it made a brief foray into PC gaming (there is a VR version of Quake 2). One of its problems may have been that it came too soon. It became a craze when PC gaming and LCD technologies were still in their infancy, at a time when dinosaurs like four-eight-sixicus ess-exicus roamed the earth. The 3D in these machines featured little or no texture mapping and very slow frame rates, and the display generated by the bulky pre-TFT LCDs was blurry and low-res. Had this technology been thought of today, with the power of the Quake 3, UT or DMZG game engines behind it, things might have been very different.
There were two other limitations of this technology. Firstly, if you moved your eyes the illusion would be ruined: the system could only track movement of the head, not of the eyes, so you had to get into the habit of staring straight ahead and moving just your head to look around. Secondly, the headsets were very bulky and heavy. Today's models are improved but are still too much of a hassle to wear for most people to really get used to.
These techniques are similar to the stereo viewing techniques in that no glasses are required, but you do have to train your eyes to view them. Some sort of special processing is required to create such an image.
In 1844 Sir David Brewster discovered that a three-dimensional effect can be seen in repeating patterns with small differences. But it wasn't until 1979 that the first stereogram was produced, by Christopher Tyler on an Apple II. Stereograms were commercialized in the USA in 1990 and quickly became a craze, one which has since all but died out.
Stereograms come in two flavors: the pattern stereogram, which looks like wallpaper, and the random dot stereogram, which doesn't look like anything. Stereograms are just pictures with their content scattered in such a way that they look 3D when one eye looks through one particular part of the image and the other eye looks through another part. The full algorithm for generating a stereogram is long and complex, and I won't bother describing it here.
Above is an example pattern stereogram from Magic Eye Inc. Like most stereograms, it is viewed with the parallel viewing method (you look through the image). Pick out two objects in the picture that are standing next to each other. Now make your eyes focus through the screen so that your two objects merge into one. If one object is above the other, tilt your head slightly left or right. Once the objects are perfectly aligned, then with any luck your eyes will focus correctly on the 3D image.
Don't be surprised if you can't see it; I have great difficulty getting it myself. If you can align the objects but the image still looks blurry, it is because your eyes are pointing at the right angles but your pupils are not focusing correctly. This is your conscious mind getting in the way. Try to convince yourself that the blurry image is actually farther away from you than the screen, imagine focusing on something far away, and hopefully your eyes will bring the blurry 3D image into sharp focus.
The repeating pattern of these stereograms gives you a clue as to how they work. Really the pattern need only repeat once: as long as you have two copies of an image, you can look through them and see a single 3D image in the distance.
Hidden Image Stereogram
Here is another example; this time the dots that make up the stereo image appear random when viewed normally, but if you focus through the picture the dots will resolve into a stereo image. These images are generated on a computer using a depth map to determine how to scatter the dots; only computing power makes this sort of stereogram practical to produce.
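The core trick is simpler than the finished algorithms suggest: each dot in a row is a copy of the dot one "separation" to its left, and the separation shrinks where the depth map is raised, which the eyes read as "nearer". Here is a heavily simplified single-row sketch, assuming parallel viewing; the separation values are arbitrary, and real generators also handle hidden surfaces and edge artifacts:

```python
import random

# Heavily simplified random-dot stereogram generation, one row at
# a time, for parallel viewing. Where the depth map is raised,
# matching dots are placed closer together, so that part of the
# image appears to float in front of the background.

def stereogram_row(depth_row, base_separation=60, depth_scale=20):
    """depth_row: values in 0..1 (0 = far background, 1 = nearest)."""
    row = [0] * len(depth_row)
    for x in range(len(depth_row)):
        sep = base_separation - int(depth_scale * depth_row[x])
        if x >= sep:
            row[x] = row[x - sep]          # copy the matching dot
        else:
            row[x] = random.randint(0, 1)  # seed with random dots
    return row

# Flat background with a raised square in the middle of the image:
depth = [0.0] * 100 + [1.0] * 56 + [0.0] * 100
image = [stereogram_row(depth) for _ in range(64)]
```

With a constant depth map the output degenerates into a repeating "wallpaper" pattern, which is exactly the pattern-stereogram flavor described above.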
Did you see it? No? Don't worry, I can't see those things either. I still remember a British comedian called Jasper Carrott talking about stereograms, how he couldn't see them, and how left out it made him feel around his friends...
Friend: Can you see it?
Carrott: Errrr
Friend: Come on, you must be able to see it..
Carrott: Errrmmm
Friend: Come on, are you stupid or something? It's the Statue of Liberty, can't you see it?
Carrott: Errr.. umm.. yeah.. err yes I can see it.
Friend: You can?
Carrott: Errr, yeah yeah, it's the Statue of Liberty.
Friend: Can you see the taxi?
Carrott: Err, yeah yeah.
Friend: There isn't one! hahahahah
The text on the Magic Eye books claims that 85-90% of people can see them. Hmmm. Magic Carpet is the only PC game I know of that has a real-time stereogram option, and I don't know of anyone who actually got used to playing it that way. The problem with stereograms is that they are simply too much of a hassle to learn to view.