

Common 3D display technology (posted Wednesday, 19 May 2010 at 13:52)


3D imaging dates back to the beginning of photography. In 1844, Scottish inventor and writer David Brewster introduced the Stereoscope, a device for viewing photographic pictures in 3D. It was later improved by Louis Jules Duboscq, and a famous picture of Queen Victoria was displayed at The Great Exhibition in 1851. In 1855 the Kinematoscope, a stereo animation camera, was invented. The first anaglyph movie was produced in 1915, and in 1922 the first public 3D movie, The Power of Love, was shown. By the Second World War, stereoscopic (3D) cameras for personal use were already fairly common. In 1935 the first 3D color movie was produced.

In the fifties, when TV became popular in the United States, many 3D movies were produced. The first such movie was Bwana Devil from United Artists, which could be seen all across the US in 1952. One year later, in 1953, came the 3D movie House of Wax, which also featured stereophonic sound. Alfred Hitchcock originally shot his film Dial M for Murder in 3D, but to maximize profits it was released in 2D, because not all cinemas were able to display 3D films. The Soviet Union also developed 3D films, with Robinson Crusoe being their first full-length 3D movie, released in 1947.

There are several techniques to produce and display 3D moving pictures.

Common 3D display technologies for projecting stereoscopic image pairs to the viewer include[1]:

* Anaglyphic 3D (with passive red-cyan glasses; see the sketch after this list)
* Polarization 3D (with passive polarized glasses)
* Alternate-frame sequencing (with active shutter glasses/headgear)
* Autostereoscopic displays (without glasses/headgear)
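Of these, the anaglyph approach is the simplest to illustrate in software. The following is a minimal Python sketch, not taken from the cited source, assuming two pre-aligned, equally sized RGB frames; the file names are placeholders:

```python
# Minimal red-cyan anaglyph composition from two pre-aligned stereo frames.
# "left.png" and "right.png" are hypothetical input files.
from PIL import Image
import numpy as np

left = np.asarray(Image.open("left.png").convert("RGB"))
right = np.asarray(Image.open("right.png").convert("RGB"))

# The red channel carries the left view, green and blue carry the right view,
# so red-cyan glasses route each view to the matching eye.
anaglyph = np.empty_like(left)
anaglyph[..., 0] = left[..., 0]      # red            <- left-eye image
anaglyph[..., 1:] = right[..., 1:]   # green and blue <- right-eye image

Image.fromarray(anaglyph).save("anaglyph.png")
```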

Single-view displays project only one stereo pair at a time. Multi-view displays either use head tracking to change the view depending on the viewing angle, or simultaneously project multiple independent views of a scene for multiple viewers (automultiscopic); such multiple views can be created on the fly using the 2D plus depth format.

Various other display techniques have been described, such as holography, volumetric display and the Pulfrich effect, which was used by Doctor Who for Dimensions in Time in 1993, by 3rd Rock From The Sun in 1997, and by the Discovery Channel's Shark Week in 2000, among others. Real-Time 3D TV (YouTube video) is essentially a form of autostereoscopic display.

Stereoscopy is the most widely accepted method for capturing and delivering 3D video. It involves capturing stereo pairs in a two-view setup, with cameras mounted side by side and separated by the same distance as a person's pupils.

If we imagine projecting an object point in a scene along the line of sight (for each eye in turn) to a flat background screen, we can describe the location of this point mathematically with simple algebra. In rectangular coordinates with the screen lying in the Y-Z plane (the Z axis upward and the Y axis to the right) and the viewer centered along the X axis, the screen coordinates are simply the sum of two terms, one accounting for perspective and the other for binocular shift. Perspective scales the Z and Y coordinates of the object point by a factor of D/(D-x), while the binocular shift contributes an additional term (to the Y coordinate only) of s*x/(2*(D-x)), where D is the distance from the chosen origin to the viewer (midway between the eyes), s is the eye separation (about 7 centimeters), and x is the true X coordinate of the object point. The binocular shift is positive for the left-eye view and negative for the right-eye view.

For very distant object points, the eyes look along essentially the same line of sight; for very near objects, they may become excessively "cross-eyed". However, for scenes in the greater portion of the field of view, a realistic image is readily achieved by superposition of the left and right images (using the polarization method or the synchronized shutter-glasses method), provided the viewer is not too near the screen and the left and right images are correctly positioned on the screen. Digital technology has largely eliminated the inaccurate superposition that was a common problem in the era of traditional stereoscopic films.[2][3]
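As a concrete illustration of the two terms above, here is a small Python sketch (not from the cited references); the function name and the sample values for D, s and the object point are assumptions chosen only for the example:

```python
# Projection of an object point (x, y, z) onto a screen in the Y-Z plane,
# with the viewer on the X axis at distance D from the origin.
# Valid for points in front of the viewer, i.e. x < D.

def screen_coords(x, y, z, D=300.0, s=7.0):
    """Return ((Y_left, Z_left), (Y_right, Z_right)), all lengths in cm."""
    persp = D / (D - x)              # perspective factor D/(D - x)
    shift = s * x / (2 * (D - x))    # binocular shift s*x/(2*(D - x)), Y only
    left = (y * persp + shift, z * persp)    # shift is positive for the left eye
    right = (y * persp - shift, z * persp)   # and negative for the right eye
    return left, right

# Example: a point 50 cm in front of the screen plane, slightly right and up,
# seen from 3 m away with a 7 cm eye separation.
print(screen_coords(x=50.0, y=10.0, z=20.0))
```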

Multi-view capture uses arrays of many cameras to capture a 3D scene through multiple independent video streams. Plenoptic cameras, which capture the light field of a scene, can also be used to capture multiple views with a single main lens.[4] Depending on the camera setup, the resulting views can either be displayed on multi-view displays or passed on for further image processing.

After capture, stereo or multi-view image data can be processed to extract 2D plus depth information for each view, effectively creating a device-independent representation of the original 3D scene. This data can be used to aid inter-view image compression or to generate stereoscopic pairs for multiple different view angles and screen sizes.
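To make the idea concrete, here is a rough Python sketch of synthesizing a stereo pair from a 2D-plus-depth frame by horizontal pixel shifting. The depth encoding (8-bit, brighter means nearer), the maximum disparity and the function name are illustrative assumptions, not a format defined in the references:

```python
import numpy as np

def synthesize_view(image, depth, max_shift=16, sign=+1):
    """Shift each pixel horizontally in proportion to its depth value.

    image: (H, W, 3) uint8 RGB frame; depth: (H, W) uint8 map (255 = nearest).
    """
    h, w = depth.shape
    disparity = (depth.astype(np.float32) / 255.0 * max_shift).astype(int)
    out = np.zeros_like(image)
    cols = np.arange(w)
    for row in range(h):
        target = np.clip(cols + sign * disparity[row], 0, w - 1)
        out[row, target] = image[row, cols]
    # Occlusion "holes" are left black here; a real converter would inpaint them.
    return out

# left  = synthesize_view(rgb, depth, sign=+1)   # rgb, depth: hypothetical inputs
# right = synthesize_view(rgb, depth, sign=-1)
```

Because all of the apparent depth comes from these horizontal shifts, objects keep their flat texture, which is one reason single-view conversions tend toward the "cardboard" look mentioned below.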

2D plus depth processing can be used to recreate 3D scenes even from a single view and to convert legacy film and video material to a 3D look, though a convincing effect is harder to achieve and the resulting image will likely look like a cardboard miniature.
