8/8/2019 23618894 3d Tv Technology Seminar
An Assessment of
3DTV
Technologies
By:
Aniket Singh
B.Tech final year
EC A
Roll No. 0703231020
CONTENTS:
3DTV
INTRODUCTION
ARCHITECTURE
MULTIVIEW AUTOSTEREOSCOPIC
DISPLAY
3D DISPLAY
CONCLUSION
CHAPTER-1
3D TV
Creation of 3D Television
Tokyo - Imagine watching a football match on a TV that not only
shows the players in three dimensions but also lets you experience the
smells of the stadium and maybe even pat a goal scorer on the back.
Japan plans to make this futuristic television a commercial
reality by 2020 as part of a broad national project that will bring
together researchers from the government, technology companies and
academia.
The targeted "virtual reality" television would allow people to
view high definition images in 3D from any angle, in addition to
being able to touch and smell the objects being projected upwards
from a screen to the floor.
"Can you imagine hovering over your TV to watch Japan versus
Brazil in the finals of the World Cup as if you are really there?" asked
Yoshiaki Takeuchi, who oversees development at Japan's Ministry of
Internal Affairs and Communications.
While companies, universities and research institutes around the
world have made some progress on reproducing 3D images suitable
for TV, developing the technologies to create the sensations of touch
and smell could prove the most challenging, Takeuchi said in an
interview with Reuters.
Researchers are looking into ultrasound, electric stimulation and
wind pressure as potential technologies for touch.
Such a TV would have a wide range of potential uses. It could
be used in home-shopping programs, allowing viewers to "feel" a
handbag before placing their order, or in the medical industry,
enabling doctors to view or even perform simulated surgery on 3D
images of someone's heart.
The future TV is part of a larger national project under which
Japan aims to promote "universal communication," a concept
whereby information is shared smoothly and intelligently regardless
of location or language.
Takeuchi said an open forum covering a broad range of
technologies related to universal communication, such as language
translation and advanced Web search techniques, could be established
by the end of this year.
Researchers from several top firms, including Matsushita
Electric Industrial Co. Ltd. and Sony Corp., contributed to a report
on the project last month.
The ministry plans to request a budget of more than 1 billion yen to
help fund the project in the next fiscal year starting in April 2006.
CHAPTER-2
INTRODUCTION
Three-dimensional TV is expected to be the next revolution in
TV history. The researchers implemented a 3D TV prototype system with
real-time acquisition, transmission, and 3D display of dynamic scenes.
They developed a distributed, scalable architecture to manage the high
computation and bandwidth demands. The 3D display shows high-
resolution stereoscopic color images for multiple viewpoints without
special glasses. This is the first real-time end-to-end 3D TV system with
enough views and resolution to provide a truly immersive 3D
experience.
2.1 Why 3D TV
The evolution of visual media such as cinema and television is
one of the major hallmarks of our modern civilization. In many ways,
these visual media now define our modern life style. Many of us are
curious: what is our life style going to be in a few years? What kind of
films and television are we going to see? Although cinema and
television both evolved over decades, there were stages, which, in
fact, were once seen as revolutions:
1) at first, films were silent, then sound was added;
2) cinema and television were initially black-and-white, then color
was introduced;
3) computer imaging and digital special effects have been the latest
major novelty.
So the question is: what is the next revolution in cinema and
television going to be?
If we look at these stages closely, we can notice that all types
of visual media have been evolving closer to the way we see things in
real life. Sound, color and computer graphics brought a good part of
it, but in real life we constantly see objects around us at close range,
we sense their location in space, and we see them from different angles
as we change position. This has not been possible in ordinary cinema.
Movie images lack true dimensionality and limit our sense that what
we are seeing is real.
Nearly a century ago, in the 1920s, the great film director Sergei
Eisenstein said that the future of cinematography was the 3D motion
picture. Many other cinema pioneers thought the same way. Even
the Lumière brothers experimented with three-dimensional
(stereoscopic) images using two films tinted in red and blue (or
green) colors and projected simultaneously onto the screen. Viewers
saw stereoscopic images through glasses tinted in the opposite
colors. But the resulting image was black-and-white, like in the first
feature stereoscopic film "Power of Love" (1922, USA, dir. H.
Fairall).
CHAPTER-3
ARCHITECTURE OF 3D TV
Figure 5.1 shows a schematic representation of the 3D TV system.
Fig.5.1 3D TV System
The whole system consists mainly of three blocks:
1. Acquisition
2. Transmission
3. Display Unit
The system consists mostly of commodity components that are
readily available today. Note that the overall architecture of the system
accommodates different display types. Let's understand the three
blocks one after another.
5.1 Acquisition
The acquisition stage consists of an array of hardware-
synchronized cameras. Small clusters of cameras are connected to
producer PCs. The producers capture live, uncompressed video
streams and encode them using standard MPEG coding. The
compressed video is then broadcast on separate channels over a
transmission network, which could be digital cable, satellite TV or the
Internet.
As explained above, each camera captures progressive high-
definition video in real time. The prototype uses 16 Basler
A101fc color cameras with 1300×1030, 8-bits-per-pixel CCD sensors.
Two questions naturally arise: what are CCD image sensors, and
what is MPEG coding?
5.1.1 CCD Image Sensors
Charge-coupled devices (CCDs) are electronic devices that are capable of
transforming a light pattern (image) into an electric charge pattern (an
electronic image). The CCD consists of several individual elements
that have the capability of collecting, storing and transporting
electrical charge from one element to another. This, together with the
photosensitive properties of silicon, is used to design image sensors.
Each photosensitive element then represents a picture element
(pixel). With semiconductor technologies and design rules, structures
are made that form lines, or matrices of pixels. One or more output
amplifiers at the edge of the chip collect the signals from the CCD.
An electronic image can be obtained by applying, after having exposed
the sensor to a light pattern, a series of pulses that transfer the
charge of one pixel after another to the output amplifier, line after
line. The output amplifier converts the charge into a voltage. External
electronics transform this output signal into a form suitable for
monitors or frame grabbers. CCDs have extremely low noise figures.
Figure 5.2 shows a CCD image sensor.
Fig.5.2 CCD Image Sensor
CCD image sensors can be color sensors or monochrome
sensors. In a color image sensor, an integral RGB color filter array
provides color responsivity and separation. A monochrome image
sensor senses only in black and white. An important environmental
parameter to consider is the operating temperature.
5.1.2 MPEG-2 Encoding
MPEG-2 is an extension of the MPEG-1 international standard
for digital compression of audio and video signals. MPEG-2 is
directed at broadcast formats at higher data rates; it provides extra
algorithmic 'tools' for efficiently coding interlaced video, supports a
wide range of bit rates and provides for multichannel surround sound
coding. MPEG-2 aims to be a generic video coding system
supporting a diverse range of applications. Different algorithmic
'tools', developed for many applications, have been integrated into the
full standard. To implement all the features of the standard in all
decoders is unnecessarily complex and a waste of bandwidth, so a
small number of subsets of the full standard, known as profiles and
levels, have been defined. A profile is a subset of algorithmic tools
and a level identifies a set of constraints on parameter values (such as
picture size and bit rate). A decoder, which supports a particular
profile and level, is only required to support the corresponding subset
of the full standard and set of parameter constraints.
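To make the profile/level idea concrete, a decoder's conformance check can be thought of as a simple table lookup. The numeric limits below are approximate, illustration-only values for the MPEG-2 Main Profile levels; consult the standard for the exact figures:

```python
# Approximate, illustrative limits for MPEG-2 Main Profile levels:
# (max width, max height, max bit rate in Mbit/s).
MAIN_PROFILE_LEVELS = {
    "Low":       (352,  288,  4),
    "Main":      (720,  576,  15),
    "High-1440": (1440, 1152, 60),
    "High":      (1920, 1152, 80),
}

def fits_level(width, height, mbit_per_s, level):
    """Return True if a stream stays within a level's parameter limits."""
    max_w, max_h, max_rate = MAIN_PROFILE_LEVELS[level]
    return width <= max_w and height <= max_h and mbit_per_s <= max_rate

print(fits_level(720, 576, 9, "Main"))     # SD broadcast fits Main Level
print(fits_level(1920, 1080, 19, "Main"))  # HDTV does not; needs High Level
```

A decoder advertising Main Profile @ Main Level is only required to handle streams for which such a check passes.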
Now, the cameras are connected by the IEEE-1394 High
Performance Serial Bus to the producer PCs. The maximum
transmitted frame rate at full resolution is 12 frames per second. Two
cameras each are connected to one of the eight producer PCs. All PCs
in this prototype have 3 GHz Pentium 4 processors and 2 GB of RAM, and
run Windows XP.
They chose the Basler cameras primarily because they have an
external trigger that allows for complete control over the video
timing. They built a PCI card with a complex programmable logic
device (CPLD) that generates the synchronization signal for all the
cameras. So, what is a PCI card?
5.1.3 PCI Card
The power and speed of computer components have increased at
a steady rate since desktop computers were first developed decades
ago. Software makers create new applications capable of utilizing the
latest advances in processor speed and hard drive capacity, while
hardware makers rush to improve components and design new
technologies to keep up with the demands of high-end software.
Fig.5.3 PCI Card
There's one element, however, that often escapes notice - the
bus. Essentially, a bus is a channel or path between the components in
a computer. Having a high-speed bus is as important as having a good
transmission in a car. If you have a 700-horsepower engine combined
with a cheap transmission, you can't get all that power to the road.
There are many different types of buses. Here we concentrate on
the bus known as the Peripheral Component Interconnect (PCI): what
PCI is, how it operates and how it is used.
All 16 cameras are individually connected to the card, which is
plugged into one of the producer PCs. Although it is possible to
use software synchronization, they consider precise hardware
synchronization essential for dynamic scenes. Note that the price of
the acquisition cameras can be high, since they will be used mostly in
TV studios.
They arranged the 16 cameras in a regularly spaced linear array, as
shown in Fig.5.4.
Fig.5.4 Arrays of 16 Cameras
The optical axis of each camera is roughly perpendicular to a
common camera plane. It is impossible to align multiple cameras
precisely, so they use standard calibration procedures to determine the
intrinsic and extrinsic camera parameters. In general, the cameras can
be arranged arbitrarily, because light field rendering is used on the
consumer side to synthesize new views. A densely spaced array
provides the best light field capture, but high-quality reconstruction
filters could be used if the light field is undersampled.
5.2 Transmission
Transmitting 16 uncompressed video streams with 1300×1030
resolution and 24 bits per pixel at 30 frames per second requires 14.4
Gb/s of bandwidth, which is well beyond current broadcast
capabilities. For compression and transmission of dynamic multiview
video data there are two basic design choices. Either the data from
multiple cameras is compressed using spatial or spatio-temporal
encoding, or each video stream is compressed individually using
temporal encoding. The first option offers higher compression, since
there is a lot of coherence between the views. However, it requires
that a centralized processor compress multiple video streams. This
compression-hub architecture is not scalable, since the addition of
more views will eventually overwhelm the internal bandwidth of the
encoder. So, they decided to use temporal encoding of individual
video streams on distributed processors.
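The raw-bandwidth figure quoted above is easy to check; note that the 14.4 number corresponds to binary gigabits (dividing by 2^30), while decimal gigabits give about 15.4:

```python
# Raw (uncompressed) bandwidth of the 16-camera acquisition array.
cameras = 16
width, height = 1300, 1030   # pixels per frame
bits_per_pixel = 24          # RGB, 8 bits per channel
fps = 30

bits_per_second = cameras * width * height * bits_per_pixel * fps
print(round(bits_per_second / 1e9, 1))    # ~15.4 decimal Gb/s
print(round(bits_per_second / 2**30, 1))  # ~14.4 binary Gb/s (Gib/s)
```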
This strategy has other advantages. Existing broadband
protocols and compression standards do not need to be changed for
immediate real-world 3D TV experiments. This system can plug into
today's digital TV broadcast infrastructure and co-exist in perfect
harmony with 2D TV.
Since they did not have access to digital broadcast equipment, they
implemented the modified architecture shown in Fig.5.5.
Gigabit Ethernet is an extension of Ethernet (the most widely installed
LAN technology) that can provide data transfer rates of about 1 gigabit
per second (Gbps).
Gigabit Ethernet provides the capacity for server
interconnection, campus backbone architecture and the next
generation of super user workstations with a seamless upgrade path
from existing Ethernet implementations.
5.3 Decoder & Consumer Processing
The receiver side is responsible for generating the appropriate
images to be displayed. The system needs to be able to provide all
possible views to the end users at every instant. The decoder
receives a compressed video stream, decodes it, and stores the current
uncompressed source frame in a buffer, as shown in Fig.5.6. Each
consumer has a virtual video buffer (VVB) with data from all current
source frames (i.e., all acquired views at a particular time instant).
Fig.5.6 Block Diagram of Decoder and Consumer processing
The consumer then generates a complete output image by
processing image pixels from multiple frames in the VVB. Due to
bandwidth and processing limitations, it would be impossible for each
consumer to receive the complete source frames from all the
decoders. This would also limit the scalability of the system.
The simplest approach is a one-to-one mapping between cameras
and projectors. But it is not very flexible. For example, the cameras
need to be equally spaced, which is hard to achieve in practice.
Moreover, this method cannot handle the case when the number of
cameras and projectors is not the same.
Another, more flexible approach is to use image-based
rendering to synthesize views at the correct virtual camera positions.
They use unstructured lumigraph rendering on the consumer
side, choosing as the focal plane the plane that is roughly in the
center of the depth of field. The virtual viewpoints for the projected
images are chosen at even spacing. Now focus on the processing for
one particular consumer, i.e., one particular view. For each pixel
o(u, v) in the output image, the display controller can determine the
view number v and the position (x, y) of each source pixel s(v, x, y)
that contributes to it.
To generate output views from incoming video streams, each
output pixel is a linear combination of k source pixels:

    o(u, v) = Σ_{i=1..k} w_i s(v_i, x_i, y_i)          ............ (1)
The blending weights w_i can be precomputed by the controller
based on the virtual view information. The controller sends the
positions (x, y) of the k source pixels to each decoder v for pixel
selection. The index c of the requesting consumer is sent to the
decoder for pixel routing from decoders to the consumer. Optionally,
multiple pixels can be buffered in the decoder for pixel-block
compression before being sent over the network. The consumer
decompresses the pixel blocks and stores each pixel in VVB number v
at position (x, y). Each output pixel requires pixels from k source
frames. That means that the maximum bandwidth on the network to
the VVB is k times the size of the output image times the number of
frames per second (fps). This can be substantially reduced if pixel-
block compression is used, at the expense of more processing. To
provide scalability, it is important that this bandwidth is independent
of the total number of transmitted views. The processing requirements
in the consumer are extremely simple: it needs to compute equation
(1) for each output pixel. The weights are precomputed and stored in
a lookup table. The memory requirements are k times the size of the
output image. Assuming simple pixel-block compression, consumers
can easily be implemented in hardware. That means decoders,
networks, and consumers could be combined on one printed circuit
board. Let's move on to the different types of display.
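The consumer's per-pixel blending can be sketched minimally as follows. The frames, weights, and source positions are made up for illustration; in the real system the (weight, view, x, y) tuples would come from the controller's precomputed lookup table:

```python
# Sketch of equation (1): each output pixel o(u, v) is a weighted sum of
# k source pixels s(v_i, x_i, y_i) taken from different views in the VVB.

def blend_pixel(vvb, contributions):
    """vvb: {view_number: 2D frame}; contributions: list of
    (weight, view, x, y) tuples for one output pixel o(u, v)."""
    return sum(w * vvb[view][y][x] for w, view, x, y in contributions)

# Two tiny 2x2 single-channel source frames (views 0 and 1), made up.
vvb = {
    0: [[10, 20], [30, 40]],
    1: [[50, 60], [70, 80]],
}
# Hypothetical lookup-table entry for one output pixel: k = 2 source
# pixels, with blending weights summing to 1.
lut_entry = [(0.75, 0, 1, 0),   # 75% from view 0, pixel (x=1, y=0)
             (0.25, 1, 1, 0)]   # 25% from view 1, pixel (x=1, y=0)

print(blend_pixel(vvb, lut_entry))  # 0.75*20 + 0.25*60 = 30.0
```

The per-pixel work is exactly this weighted sum, which is why the consumer is simple enough to implement in hardware, with VVB bandwidth proportional to k times the output image size times the frame rate.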
CHAPTER-4
MULTIVIEW AUTOSTEREOSCOPIC DISPLAY
6.1 Holographic Displays
It is widely acknowledged that Dennis Gabor invented the
hologram in 1948, while he was working on an electron microscope. He
coined the word and received a Nobel Prize for inventing holography
in 1971. A holographic image is truly three-dimensional: it can be
viewed from different angles without glasses. This innovation could
bring a new revolution: a new era of holographic cinema and of
holographic media as a whole.
Holographic techniques were first applied to image display by
Leith and Upatnieks in 1962. In holographic reproduction, interference
fringes on the holographic surface diffract light from an illumination
source to reconstruct the light wave front of the original object. A
hologram that displays a continuous analog light field has long been
considered the holy grail of 3D TV. The most recent device, the Mark-II
Holographic Video Display, uses acousto-optic modulators, beam
splitters, moving mirrors and lenses to create interactive holograms. In
more recent systems, moving parts have been eliminated by replacing
the acousto-optic modulators with LCDs, focused light arrays,
optically addressed spatial light modulators, and digital micromirror
devices. Figure 6.1 shows a holographic image.
Fig.6.1 Holographic Image
All current holo-video devices use single-color laser light. To
reduce the amount of display data, they provide only horizontal
parallax. The display hardware is very large in relation to the size of
the image, so holographic TV cannot yet be done in real time.
6.2 Holographic Movies
We have developed the world's first holographic equipment with
the capability of projecting genuine 3-dimensional holographic films,
as well as holographic slides and real objects, for multiple viewers
simultaneously. Our holographic technology was primarily
designed for cinema. However, it has many uses in advertising and
show business as well.
At the same time we have developed a new 3D digital image
processing and projection technology. It can be used for creating
modern 3D digital movie theaters and for computer modeling of 3D
virtual realities as well. On the same principle we have already tested
a 3D color TV system. In all cases the audience can see colorful 3D
images without inconvenient accessories.
Developed in the Holographic Laboratories of Professor Victor
Komar (NIKFI), these technologies have received worldwide
recognition, including an Oscar for Technical Achievement in
Hollywood, a Nika Film Award in Moscow, endorsement from MIT's
Media Lab and many others.
6.2.1 Volumetric Displays
Volumetric displays use a medium to fill or scan a three-dimensional
space and individually address and illuminate small voxels. However,
volumetric systems produce transparent images that do not provide a fully
convincing three dimensional experience. Furthermore, they cannot
correctly reproduce the light field of a natural scene because of
their limited color reproduction & lack of occlusions. The design of
large size volumetric displays also poses some difficult obstacles.
6.2.2 Parallax Displays
Parallax displays emit spatially varying directional light. Much
of the early 3D display research focused on improvements to
Wheatstone's stereoscope. In 1903, F. Ives used a plate with vertical
slits as a barrier over an image with alternating strips of left-eye/right-eye
images. The resulting device is called a parallax stereogram. To
extend the limited viewing angle and restricted viewing position of the
stereogram, Kanolt and H. Ives used narrower slits and smaller pitch
between the alternating image strips. These multiview images are
called parallax panoramagrams.
Stereograms and panoramagrams provide only horizontal parallax.
Lippmann proposed using an array of spherical lenses instead of slits.
This is frequently called a "fly's eye" lens sheet, and the resulting image
is called an integral photograph. An integral photograph is a true planar
light field with directionally varying radiance per pixel. Integral
photographs sacrifice significant spatial resolution in both dimensions
to gain full parallax. Researchers in the 1930s introduced the lenticular
sheet, a linear array of narrow cylindrical lenses called lenticules.
Lenticular images found widespread use in advertising, CD covers and
postcards. To improve the native resolution of the display, H. Ives
invented the multi-
projector lenticular display in 1931. He painted the back of a
lenticular sheet with diffuse paint and used it as a projection surface for
39 slide projectors. The high output resolution, the large number of
views and the large physical dimensions of our display lead to a very
immersive 3D display. Other research in parallax displays includes
time-multiplexed and tracking-based systems. In time multiplexing,
multiple views are projected at different time instants using a sliding
window or LCD shutter. This inherently reduces the frame rate of the
display and may lead to noticeable flickering. Head-tracking designs
are mostly used to display stereo images, although they could also be
used to introduce some vertical parallax in multiview lenticular
displays. Today's commercial autostereoscopic displays use
variations of parallax barriers or lenticular sheets placed on top of
LCD or plasma screens. Parallax barriers generally reduce some of
the brightness and sharpness of the image. In contrast, this projector-
based 3D display currently has a native resolution of 12 million pixels.
Fig.6.2 Images of a scene from the viewer side of the display (top row) and
as seen from some of the cameras (bottom row).
6.2.3 Multiprojector Displays
Multiprojector displays offer very high resolution, flexibility,
excellent cost-performance, scalability and large-format images.
Graphics rendering for multiprojector systems can be efficiently
parallelized on clusters of PCs using, for example, the Chromium API.
Projectors also provide the necessary flexibility to adapt to non-planar
display geometries. Precise manual alignment of the projector array is
tedious, and becomes downright impossible for more than a handful of
projectors or non-planar screens. Some systems use cameras in the
loop to automatically compute relative projector poses for automatic
alignment. Here, a static camera is used for automatic image
alignment and brightness adjustment of the projectors.
CHAPTER-5
3D DISPLAY
This is a brief explanation that we hope sorts out some of the
confusion about the many 3D display options available today.
We'll tell you how they work, and what the relative tradeoffs of each
technique are. Those of you who are just interested in comparing
different liquid crystal shutter glasses techniques can skip to the
section at the end.
Figure 7.1 shows a diagram of the multi-projector 3D display with
lenticular sheets.
Fig.7.1 Projection-type lenticular 3D displays
They use 16 NEC LT-170 projectors with 1024×768 native
output resolution. This is less than the resolution of the acquired and
transmitted video, which has 1300×1030 pixels. However, HDTV
projectors are much more expensive than commodity projectors, and
commodity projectors have a compact form factor. Out of eight
consumer PCs, one is dedicated as the controller. The consumers are
identical to the producers except for a dual-output graphics card that
is connected to two projectors. The graphics card is used only as an
output device.
For the rear-projection system shown in the figure, two lenticular
sheets are mounted back-to-back with optical diffuser material in the
center. The front-projection system uses only one lenticular sheet, with
a retroreflective front-projection screen material made from flexible
fabric mounted on the back. The photographs show the rear and front
projection setups.
Fig.7.2 Rear Projection and Front Projection
The projection-side lenticular sheet of the rear-projection
display acts as a light multiplexer, focusing the projected light as thin
vertical stripes onto the diffuser. A close-up of the lenticular sheet is
shown in the figure. Considering each lenticule to be an ideal
pinhole camera, the stripes capture the view-dependent radiance
of a three-dimensional light field. The viewer-side lenticular sheet acts
as a light demultiplexer and projects the view-dependent radiance back
to the viewer. The single lenticular sheet of the front-projection screen
both multiplexes and demultiplexes the light.
The two key parameters of lenticular sheets are the field of view
(FOV) and the number of lenticules per inch (LPI). The system uses
72"×48" lenticular sheets with a 30-degree FOV and 15 LPI. The optical
design of the lenticules is optimized for multiview 3D display. The
number of viewing zones of a lenticular display is related to its FOV:
for example, a FOV of 30 degrees leads to 180/30 = 6 viewing zones.
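The viewing-zone arithmetic can be written out as a quick check. The figures come from the text above; the lenticule-count line is just the product of the two quoted sheet parameters:

```python
# Viewing zones: the 180-degree half-space in front of the screen
# divided by the lenticular sheet's field of view.
fov_degrees = 30
print(180 // fov_degrees)  # 6 viewing zones

# Number of lenticule columns across the 72-inch-wide, 15 LPI sheet.
lenticules_per_inch = 15
sheet_width_inches = 72
print(lenticules_per_inch * sheet_width_inches)  # 1080 columns
```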
7.1 3D TV for the 21st Century
Interest in 3D has never been greater. The amount of research
and development on 3D photographic, motion picture and television
systems is staggering. Over 1000 patent applications have been filed
in these areas in the last ten years. There are also hundreds of
technical papers and many unpublished projects.
I have worked with numerous systems for 3D video and 3D
graphics over the last 20 years and have developed and marketed
many products. In order to give some historical perspective,
I'll start with an account of my 1985 visit to Exposition '85 in
Tsukuba, Japan. I spent a month in Japan visiting with 3D researchers
and attending the many 3D exhibits at the Tsukuba Science
Exposition. The exposition was one of the major film and video
events of the century, with a good chunk of its 2 1/2 billion dollar cost
devoted to state-of-the-art audiovisual systems in more than 25
pavilions. There was the world's largest IMAX screen, Cinema-U (a
Japanese version of IMAX), OMNIMAX (a dome projection version
of IMAX using fisheye lenses) in 3D, numerous 5, 8 and 10
perforation 70mm systems - several with fisheye lens projection onto
domes and one in 3D, single, double and triple 8 perforation 35mm
systems, live high definition (1125 line) TV viewed on HDTV sets
and HDTV video projectors (and played on HDTV video discs and
VTRs), and giant outdoor video screens culminating in Sony's 30-meter diagonal Jumbotron (also presented in 3D). Included in the 3D
feast at the exposition were four 3D movie systems, two 3DTV
systems (one without glasses), a 3D slide show, a Pulfrich
demonstration (synthetic 3D created by a dark filter in front of one
eye), about 100 holograms of every type, size and quality (the
Russians were best), and 3D slide sets, lenticular prints and
embossed holograms for purchase. Most of the technology, from a
robot that read music and played the piano to the world's largest
tomato plant, was developed in Japan in the two years before the
exposition, but most of the 3D hardware and software was the result
of collaboration between California and Japan. It was the chance of a
lifetime to compare practically all of the state of the art 2D and 3D
motion picture and video systems, tweaked to perfection and running
12 hours a day, seven days a week. After describing the systems at
Tsukuba, I will survey some of the recent work elsewhere in the
world and suggest likely developments during the next decade.
CHAPTER-6
CONCLUSION
Most of the key ideas for the 3D TV system presented in this
paper have been known for decades, such as lenticular screens,
multi-projector 3D displays, and camera arrays for acquisition.
This system, however, is the first to provide enough viewpoints
and enough pixels per viewpoint to produce an immersive and
convincing 3D experience. One area of future research is to
improve the optical characteristics of the 3D display
computationally; this concept is known as the computational
display. Another area of future research is precise color
reproduction of natural scenes on multiview displays.