
Holographic and Light-Field Imaging as Future 3-D Displays

By Jung-Young Son, Hyoung Lee, Beom-Ryeol Lee, and Kwang-Hoon Lee

ABSTRACT | Light-field imaging and holographic imaging are currently the two most investigated 3-D imaging technologies because of their potential to create a viewing environment conforming to the natural viewing condition. The basic optical geometries for image display in these imaging methods are not different from that of integral photography. The images in the two types of imaging are a set of different view images. These images are arranged as a 2-D point image array, and each point image is expanded with a certain angle to form a viewing zone. The differences between the two types of imaging are the number of point images in the array and the physical entities forming the images. Holographic imaging has many more point images than light-field imaging, and each image in the array consists of coherent light rays from different positions of an object. In light-field imaging, an array of pixels represents a directional view of the object. Despite these differences, they share the same goal of providing a continuous parallax to viewers and require display panels of almost the same characteristics. It is expected that in the future these two imaging techniques will be integrated into the same flat panel along with the plane image.

KEYWORDS | Continuous parallax; holographic imaging; light-field imaging; multiview imaging; point image array

Manuscript received June 30, 2016; revised November 1, 2016; accepted December 4, 2016. This work was supported by "The Cross-Ministry Giga KOREA Project" Grant from the Ministry of Science, ICT, and Future Planning, South Korea.
J.-Y. Son and H. Lee are with the Biomedical Engineering Division, Konyang University, Nonsan, Chungnam 32992, South Korea (e-mail: jyson@konyang.ac.kr; hyoung0708@gmail.com).
B.-R. Lee is with the Next Generation Visual Computing Research Section, Electronics and Telecommunications Research Institute, Daejeon 34129, South Korea (e-mail: lbr@etri.re.kr).
K.-H. Lee is with the 3D Convergence Research Center, Korea Photonics Technology Institute, Gwangju 61007, South Korea (e-mail: geniuspb@kopti.re.kr).
Digital Object Identifier: 10.1109/JPROC.2017.2666538

I. INTRODUCTION

Three-dimensional imaging has been developed to bring the image that we perceive in our everyday life with as few alterations as possible, and it is considered to be the main imaging technology of the future. The 3-D imaging methods developed before the late 1980s are stereoscopic imaging for theatrical use, volumetric imaging for special purposes, and holographic imaging for art and metrological applications. The 3-D imaging methods developed since the late 1980s are mainly for home use, and they have mostly been realized on a flat panel display since the mid-1990s [1], [2]. The advantage of using the flat panel display is that the 3-D image can be displayed together with the plane image, which serves as the reference plane image of a 3-D image. Hence, 2-D/3-D-compatible displays based on a flat panel display, and electroholography based on a digital display chip such as an SLM, DMD, or LCD, are currently major research subjects in 3-D imaging. Some of the methods have almost matured technically, but they are still developing slowly in the market. Stereoscopic TVs based on polarization and shutter glasses, and mobile phones based on the parallax barrier, are already on the market, and stereoscopic TV has already been broadcast [3]–[5]. But decreasing interest in stereoscopic TV has reduced it to an extra imaging function of the flat panel display for plane images, and multiview 3-D displays did not even have a chance to be produced on mass production lines. There are many reasons for this, but the major ones are, for stereoscopic TV, the unnatural viewing conditions such as restrictions on viewing position changes, the inconvenience of wearing glasses, unnatural depth sense, and image distortions accompanying the viewer's viewing position changes [6], [7], and, for multiview displays, the inferior image quality compared with the plane image. In fact, the continuously increasing panel size, higher resolution, and brighter and more uniform brightness distribution of flat panel displays have continuously improved the quality of plane images and strengthened the psychologically induced depth sense in images. The quality of 3-D images has also been improved by flat panel displays, but the improvement speed is much lower than that of plane images. This speed difference makes the relative image quality of 3-D images much worse than that of plane images. But this is not the only reason why 3-D imaging still struggles to find a way to stand alone in the market, even with the large number of 3-D imaging methods within the matured technologies. The more plausible reason is that 3-D imaging has not fully utilized its only merit over the plane image: the depth sense. The current 3-D imaging can only provide a virtual depth sense, which can cause discomfort and eye fatigue induced by the vergence–accommodation (V–A) conflict [8], except for holographic and light-field imaging. Three-dimensional imaging should provide a depth sense conforming to natural viewing conditions. This is what the holographic and light-field displays aim for.
Natural viewing conditions can also be achieved with the stereoscopic imaging method, wearing the polarization glasses, by covering two pinholes separated by a short distance with different color filters on the polarizer of each eye side [9]. But the necessity of wearing eyeglasses and using color filters is a problem for the method. Holographic imaging is a shorter name for electroholographic imaging, which uses a digital display chip to display a hologram. This imaging has been known as the most complete 3-D imaging method ever devised [10]. It can create a reconstructed image with a spatial position and a real entity. The imaging can provide images conforming to a natural viewing condition. But with the currently available digital display chips, holographic imaging cannot demonstrate its actual capability. On the contrary, the light-field display is one of the multiview displays, which can provide a continuous parallax and a very wide focusable depth range, but its name has been used very confusingly so far. This makes it difficult to imagine what light field actually means. In fact, the light fields formed by visible light rays of various wavelengths from objects and scenes in our environments enable humans to view and recognize them. The lights from a display also form light fields in its front space. Hence, any display can be called a light-field display. However, the term "light field" has been used to express the light rays in different spatial positions [11]. It has been adopted in computer graphics for the purpose of representing the rendering process involved with light rays [12]. Since then, it has been consistently used in computer graphics to represent the process. But in 3-D imaging, the term "light-field display" has been used to name several types of 3-D displays, such as those: 1) utilizing a 360° rotating imager [13], [14]; 2) utilizing multiview images [15]; and 3) utilizing typical IP [16]–[18]. The term "light-field display" has also been used by Holografika for its 3-D displays [19], [20]. But in this review, the light-field display is used as another term for the super-multiview display [21]. Light-field imaging and electroholographic imaging are considered to be very promising 3-D imaging methods. They are promising because of their ability to provide a continuous parallax, which is a core requirement for fulfilling the natural viewing condition, and a large focusable image depth that at least exceeds the focusable depth range allowed by the depth of field (DOF) of viewers' eyes when viewing a stereoscopic image. This large depth guarantees no eye fatigue from the V–A conflict.

In this review, the parameters defining the image quality of 3-D imaging are defined, and the development histories of the holographic and light-field images are described along with a brief history of other 3-D images. The characteristics of the two types of imaging are compared in detail, and the future perspective of the two types of imaging is also described.

II. PARAMETERS DEFINING THE QUALITY OF 3-D IMAGES

In 3-D imaging, the parameters for estimating the image quality are as follows: 1) the focusable image depth range causing no eye fatigue; 2) smoothness in parallax changes; 3) the viewing zone angle range with no image distortion and brightness decrease; 4) the range of the viewing zone in the depth direction; 5) image clearness and sharpness; 6) various image distortions and crosstalk in both transversal and depth directions; 7) image resolution in both transversal and depth directions; and 8) image brightness. The quality will be better if these parameter values are higher, except for the distortions and crosstalk, which should be kept as small as possible. All the parameters except 1), 2), and 7) are also applicable to plane images, and their values for 3-D images should be comparable to those of the plane images, though depth resolution is applicable only to 3-D images. But none of the currently developed 3-D imaging methods can provide parameter values comparable to those of the plane images or satisfy their own requirements.

Among the above parameters, the focusable image depth range is defined by the depth of field of viewers' eyes, i.e., the depth range between the front and rear spaces of the display panel/screen where viewers can accommodate and verge their eyes without losing image clearness [22]. It is known that when the diopter D is defined as 1/r, where r is the viewing distance from the image panel/screen expressed in meters [23], the DOF is approximately given as D ± 0.3D for stereoscopic images [24]. So, the floating images within the DOF can be viewed without the V–A conflict. Hence, the focusable depth range should be extended beyond the distance range specified by r/1.3 to r/0.7, as illustrated in Fig. 1, which is a geometrical presentation of the focusable depth range. The depth range can be extended to the point at which the distance from viewers is enough to make them fuse the images from the display panel/screen. The extended focusable image depth range is specified as the super-multiview (SM) zone in Fig. 1. The extension is achieved by projecting at least two different view images at the same time into the pupil of each of the viewer's eyes. In holographic images, the reconstructed image has a real spatial position and a certain volume. Hence, the focusable image depth range can be extended to the distance set by the available display space. If this range increases more and more, the eye fatigue induced by the V–A conflict will be reduced further.

Fig. 1. Graphical illustration of DOF and SM zone.
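The DOF relation above can be turned into numbers. The following short Python sketch (illustrative only; the 0.3-diopter margin is the value quoted in the text, and the example viewing distances are assumptions) converts a viewing distance into the approximate focusable depth range r/1.3 to r/0.7.

```python
# Approximate focusable depth range (DOF) for stereoscopic viewing,
# following D = 1/r and DOF ~ D +/- 0.3D as quoted in the text.

def focusable_range(r_m: float, margin: float = 0.3):
    """Return (near, far) distances in meters for a screen at r_m meters."""
    D = 1.0 / r_m                      # screen distance in diopters
    near = 1.0 / (D * (1.0 + margin))  # = r / 1.3
    far = 1.0 / (D * (1.0 - margin))   # = r / 0.7
    return near, far

if __name__ == "__main__":
    for r in (0.5, 1.0, 2.0):          # example viewing distances in meters
        near, far = focusable_range(r)
        print(f"r = {r:.1f} m -> DOF from {near:.2f} m to {far:.2f} m")
    # For r = 1.0 m this gives roughly 0.77 m to 1.43 m; images placed
    # outside this band need the SM-zone extension described above.
```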
The smoothness in parallax changes implies that the 3-D image should provide as many different view images as possible to each of a viewer's eyes at the same time, corresponding to the natural viewing condition.
The natural viewing allows each of the viewer's eyes to get a continuous parallax, i.e., an almost infinite number of different view images at the same time. The holographic image can probably provide enough different view images to meet the condition, but for other 3-D images the condition can hardly be realized because, in practice, it is impossible to display an infinite number of different view images through the display panel. It is not known how many different view images are projected to the pupil of each of a viewer's eyes in the light-field images claimed so far [25]–[27], but it is four images at most [28]. The continuous parallax is what makes the light-field images comparable to the holographic images and different from the current multiview images. The viewing zone angle range defines the angle range that allows viewers standing in front of the display panel to move around to perceive 3-D images from the image on a display panel/screen. This angle is related to the field of view (FOV) of the viewing zone forming optics (VZFO), such as the parallax barrier/lenticular/microlens array plate used in multiview imaging [29]–[31], and is different from the viewing angle, which is related to the panel/screen size [32]. It is within the angle range 0°–180°. For an ideal 3-D display it should be close to 180° to cover almost the entire front space of the display panel/screen, as the flat panel display does. However, this range will hardly be attained with 3-D images because the depth range is achieved by sacrificing the angle. In electroholography based on SLMs or DMDs [33]–[35], the angle can be less than 30° even with an SLM of 1-μm pixel size.
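The pixel-size limit on the holographic viewing zone angle can be estimated with the usual diffraction relation for a pixelated hologram, where the maximum diffraction angle is set by the smallest recordable fringe period (twice the pixel pitch). The sketch below is a rough estimate under that common approximation; the wavelengths are illustrative and not taken from the paper.

```python
# Rough diffraction-limited viewing zone angle of a pixelated hologram:
# theta ~ 2 * arcsin(lambda / (2 * p)), with p the pixel pitch.
import math

def viewing_zone_angle_deg(wavelength_um: float, pixel_pitch_um: float) -> float:
    return 2.0 * math.degrees(math.asin(wavelength_um / (2.0 * pixel_pitch_um)))

if __name__ == "__main__":
    p = 1.0  # 1-um pixel pitch, as mentioned above for a fine-pitch SLM
    for lam in (0.45, 0.52, 0.63):  # assumed blue, green, red wavelengths in um
        print(f"lambda = {lam} um -> about {viewing_zone_angle_deg(lam, p):.1f} degrees")
    # This lands around 26-37 degrees across the visible range, i.e., on the
    # order of 30 degrees, in line with the angle quoted above.
```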
The range of the viewing zone in the depth direction defines the viewing distance range within the viewing zone. This range is not well defined in multiview 3-D images but is defined clearly in holographic images. The image resolution of 3-D images in the transversal direction is worse than in the plane image because 3-D images share pixels for different view images. For holographic images, the resolution is much worse due to the presence of astigmatism. Astigmatism also deteriorates the depth resolution, which is unique to 3-D images. Resolution depends on the pixel size and the camera distance in the camera array for the multiview images in multiview displays [36]. The other parameters are related to the presence of the continuous parallax, and to pixel size and resolution. As the pixel size decreases and the resolution increases, the parameter values will increase.

III. HISTORY OF 3-D IMAGING: FROM STEREO IMAGING TO LIGHT-FIELD DISPLAYS

The principle of achieving the depth sense with a stereo image pair was known before the 18th century [37]. This principle was actualized as a stereoscope [38] in the mid-19th century. Since then, several stereoscopic imaging methods based on special eyeglasses, such as anaglyph, polarization, gray level difference, i.e., the Pulfrich effect [39], and the interlaced on–off shutter [40], and those based on VZFO such as the parallax barrier [41] and lenticular plates, have been introduced [42]. Along with these stereoscopic imaging methods, holographic and volumetric imaging methods have also been developed since the 1960s [43], [44]. The volumetric imaging methods allow displaying 3-D images with a real volume, but they need either a translating or rotating screen/panel, an imaging chamber, or a layered screen/panel to create their image spaces where images can be displayed. The image spaces require a physical volume filled with a special gas [45] or other phase materials [46], with a moving mechanism of an active or passive screen [47], [48], or with layered functional plates [49]. The difficulties of creating the image spaces are the bottleneck of further progress in this imaging technique. The eyeglasses-based stereoscopic imaging methods are still popular, but they are not too friendly to viewers because they induce fatigue in viewers' eyes and cause the other problems mentioned before. While the VZFO-based stereoscopic images have been applied to mobile phones, and there is a mobile phone equipped with a stereo camera [50], [51], they impose a strong restriction on the viewers' eye positions. To ease the problems in stereoscopic images, multiview and electroholographic imaging methods have been developed since the late 1980s [52]–[54]. The multiview imaging methods allow viewers to change their viewing positions without wearing special glasses while minimizing the image distortions accompanying the changes by providing both binocular and motion parallaxes to viewers. The methods were realized in many different ways [1], but currently the contact type, which has the structure of a flat display panel layered with one of the VZFOs mentioned above or with a 2-D microlens array, which was introduced in the early 20th century [55], on top of the panel as shown in Fig. 2, is mainstream. Fig. 2 shows the panel structure of the contact-type 3-D displays when a lenticular or microlens array is used as the VZFO. The VZFO has a finite thickness, which comprises the glass plate protecting the liquid crystal layer. The contact-type multiview imaging methods utilizing the parallax barrier or the lenticular plate as the VZFO and a multiview camera array as the source of the multiview images were called multiview (MV), and when the microlens array was used as both the VZFO and the source of the multiview images with an image detector, they were called integral photography (IP) [55].

Fig. 2. Typical structure of contact-type multiview 3-D displays.
The microlens array combined with the image detector has been developed into the plenoptic camera [56]–[58]. Also, the differences between MV and IP have diminished because the lenticular plate was also used as the VZFO in IP under the name of 1-D IP [59]. However, there are still differences between them because their optical configurations and image compositions on the display panel are different [60]. The MV has a radial configuration and the IP has a parallel configuration. This configuration difference originated in the relative size of a pixel cell (elemental image), which is the basic image unit of MV (IP) [61], to that of an elemental optic composing the VZFO. In MV (IP), the sizes of the pixel cell (the elemental image) in both horizontal and vertical directions are slightly bigger than (the same as) those of the elemental optic, as shown in Fig. 3. Fig. 3 compares the optical geometries for forming viewing zones in MV [Fig. 3(a)] and IP [Fig. 3(b)]. Each pixel cell (elemental image) consists of six pixels (eight pixels) and is projected to its front space by its front elemental optic to form the viewing zones. When considering a pixel cell (elemental image) and its front elemental optic, they work like a projector. Hence, the projectors are radially aligned in MV and in parallel in IP to form the viewing zone.

Fig. 3. Viewing zone forming geometries of contact-type multiview 3-D displays having (a) radial (MV) and (b) parallel configurations.

The viewing zone is defined as the space where the magnified images of all pixel cells (elemental images) cross together. The boundary of this viewing zone is defined by the crossing between the leftmost (topmost) and rightmost (bottommost) pixel cells (elemental images). The viewing zone in MV is given as the diamond-shaped space near the center of the geometry, but in IP it is given as the pyramidal-shaped space at the bottom of the geometry. The viewing zone of IP will appear at a much longer distance from the panel as the number of elemental images increases, and its internal pattern matches completely the upper part of MV's viewing zone when the viewing zone is divided into two by the line representing the viewing zone cross section (VZCS) [62]. The incomplete viewing zones are the places where only a part of the pixel cells (elemental images) is viewed. When the chief rays from each pixel cell (elemental image) are considered, as shown in Fig. 3, it is possible to assume that the pixel cell (elemental image) is focused at the center of each elemental optic and starts to expand from the focused point. It is possible to represent the viewing zone forming geometry of MV and IP by an array of point images corresponding to the array of pixel cells (elemental images). Each point image is located at the center of each elemental optic in the VZFO. However, the optical configuration difference is not the main feature discriminating MV and IP, because IP has also adopted the radial optical configuration [63] due to the difficulties of forming the viewing zone corresponding to the optimum viewing distance in IP. In fact, since the difference between the pixel cell and the elemental optic is just a few micrometers [64], the optical configuration difference between MV and IP is also not very prominent. Hence, the remaining difference between them is the image composition in the pixel cell and the elemental image. The image composition of an elemental image is the inverted image of a view image viewed at the position of an elemental optic in the VZFO, and that of a pixel cell is the same position pixel from each of the multiview images, aligned in the inverted order of the cameras in the multiview camera array that generates the multiview images. The display panels for MV and IP are composed of pixel cells in the order of pixels in a view image and of elemental images in the order of the elemental optics in the VZFO, respectively [65].
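The pixel-cell composition described for MV can be made concrete with a small array manipulation. The following Python sketch is a minimal illustration for a 1-D (horizontal-parallax) VZFO with made-up array sizes; it is not the authors' code, but it builds a panel whose pixel cells collect the same-position pixel from each view image in inverted camera order, as stated above.

```python
# Minimal sketch of MV pixel-cell composition for a 1-D (horizontal-parallax) VZFO.
# views: K view images of size (H, W); panel: (H, W*K), one K-pixel cell per lenslet column.
import numpy as np

def compose_mv_panel(views: np.ndarray) -> np.ndarray:
    K, H, W = views.shape
    panel = np.zeros((H, W * K), dtype=views.dtype)
    for x in range(W):                       # one pixel cell per elemental optic column
        for k in range(K):
            # same-position pixel from each view, in inverted camera order
            panel[:, x * K + k] = views[K - 1 - k, :, x]
    return panel

if __name__ == "__main__":
    K, H, W = 6, 4, 8                        # six views, tiny test size (illustrative)
    views = np.stack([np.full((H, W), k, dtype=np.uint8) for k in range(K)])
    panel = compose_mv_panel(views)
    print(panel.shape)                       # (4, 48)
    print(panel[0, :K])                      # first pixel cell holds views 5,4,3,2,1,0
```

For IP, the panel would instead be tiled with whole elemental images, each an inverted view seen from its elemental optic, as described above.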
As indicated before, the multiview 3-D imaging methods improved on some problems in stereoscopic imaging, but the presence of the VZFO introduced other problems such as reduced brightness, reduced resolution for each view image, and annoyances in viewing. In addition, these methods are not free from eye fatigue and image jumps caused by inappropriate parallax changes between multiview images, incurred by the finite number of different view images, when viewers change their viewing positions. To resolve these problems, the active parallax barrier [66], lenticular [67], LCD lens array [68], and electrowetting lens array [69] have been introduced. Instead of improving the quality of the VZFO, a method of eliminating the VZFO with the use of a point light source array has also been introduced [70].

The problem of the point light source array method is that the output aperture of each light source should be smaller than the basic resolution element of the display panel, i.e., a pixel or subpixel, to ensure the full use of the panel's pixel resource. With the improvements in the VZFO, the multiview imaging methods were further developed into super-multiview imaging [71]. The basic idea of super-multiview is projecting at least two different view images at the same time to each of the viewers' eyes through different pupil positions without overlapping, as shown in Fig. 4. Fig. 4 shows the concept of super-multiview imaging. For stereoscopic imaging, only one view image is projected to each eye, but in super-multiview imaging two, three, four, or more view images will be projected to each eye. This idea is to simulate the natural viewing condition to achieve a continuous parallax. This condition represents a typical phenomenon appearing at the input aperture of an imaging device such as a camera, a microscope, or a binocular, since every point of the input aperture of the device carries a view image formed by a ray from each point of an object/scene. Hence, there are an infinite number of different view images at the aperture. The pupil of each of the viewer's eyes is the same as the aperture. The continuous parallax is induced by these infinite numbers of different view images, and it can possibly induce a monocular depth sense as well [72]. This means that accommodation is also obtained with the continuous parallax.

Fig. 4. Concept of super-multiview imaging.

The super-multiview concept was introduced in the mid-1990s with a display called the focused light array (FLA) [73]. The equivalent optical configuration and image composition of the FLA are, in principle, the same as those of 1-D IP and MV, respectively. Each of the pixel cells composing a frame of an image is made as a focused image, and then the image points are displayed on a directional diffusive screen by the scanning action of an x-y scanner. The FLA introduced 45 different view images in the horizontal direction, but no evidence was found that the FLA provides a continuous parallax within its viewing region [74]. Following the FLA, a display called Holovizio was introduced [75]. This display employed an array of projectors aligned as the IP's elemental image array at the VZFO. Hence, the image from a projector was shifted a certain distance, corresponding to the period of the projectors in the array, from its adjacent projectors' images. It was claimed that the display could provide a continuous parallax. However, the basic principle of this display is the same as that of IP. The compositions of the perceived image from Holovizio are very similar to those from the FLA. This display was followed by a display called the multiple images display (MID), which displays more than 200 different view images [27]. The equivalent optical configuration and image composition of MID are not different from those of the FLA, except for the number of different view images and their multiplexing schemes. The multiview images in MID are presented based on a spatial-multiplexing scheme, but those of the FLA are presented based on a time-multiplexing scheme. The images in MID are horizontally aligned and each of them is collimated and then projected radially to a diffusive plate. All images have the same size as the diffusive plate and are aligned such that the center of each image is matched to that of the plate with the same angular distance from its adjacent images. In this way, it can be considered that the same-position pixels in the images will overlap one another at the diffuser plate. Hence, the composition of the overlapped pixels is the same as that of a pixel in the FLA. MID claimed that it can extend the depth of field to the SM zone, as shown in Fig. 1 [76]. However, whether MID conforms to the super-multiview concept has not been verified, because it was not known how many different view images are getting into the pupil of each of the viewers' eyes simultaneously. In Fig. 5, the image projection geometries of Holovizio, and of the FLA and MID, are compared. In the FLA and MID, the multiview images are collimated and projected on the diffuser screen with a small angle difference, as shown in Fig. 5(a). A 2-D array of point images is formed at the screen by the matching pixels from different images. The 2-D array of point images formed at the screen has the same number of pixels, and each point image expands in parallel with the same expanding angle. In Holovizio, the images are projected without collimation by shifting one after another for a small distance in the horizontal direction, as shown in Fig. 5(b). The images also form a 2-D array of point images on the screen by overlapping pixels from different images as in Fig. 5(a). The number of pixels in a point image reduces toward both edge sides, and the expanding angles of the point images are not the same and their propagation directions are also different. The projection geometry difference between them induces just small differences, as described above. The FLA, MID, and Holovizio have the same image projection geometry as the 1-D IP, but the pixel composition of each point image is the same as the pixel cell in MV. The IP image projection geometry is found only in the 2-D stereo-hologram such as the Zebra hologram [77]. The image recorded on the 2-D stereo-hologram is a 2-D multiview image array. When a reconstruction light illuminates the hologram, the 2-D point hologram array turns into a 2-D point image array as in IP.

The hologram has the same multiview image arrangement and image cell structure as those of IP, though the number of image points and the resolution of each image point of the hologram are much larger than those of IP, and the number and the size of the image cells are larger and smaller, respectively, than those in IP. This is why the hologram has not been displayed on a flat panel display so far. But the reconstructed image from this hologram will be the best subject for testing the presence of a continuous parallax and a focusable depth range extension. However, no results in this direction with the hologram have been reported.

Fig. 5. Image projection geometries of (a) FLA and MID; and (b) Holovizio.

From the continuous parallax point of view, the super-multiview and the holographic images will not be different, because the natural viewing condition is considered as an inherent property of a hologram. The hologram is recorded at the input aperture plane of a camera's objective, and it contains an infinite number of different view images. Hence, the hologram is considered as providing the continuous parallax, but it is difficult to verify its presence in the current electroholography based on a digital display chip due to its small viewing zone angle. The idea of electroholography, i.e., displaying a hologram on a display device, was created around 1965 by a group of researchers from AT&T Bell Labs [78]. However, this idea could not be realized due to the lack of a display medium for a hologram. The idea was revived under the name of holographic video in the mid-1980s with the use of an acousto-optic modulator (AOM) made from TeO2 [79]. The holographic video continued until the end of the 1990s [80]–[82], but it was eventually stopped due to difficulties in finding a better AOM to display a hologram for bigger and more resolved reconstructed images with a wider viewing zone angle, and in aligning moving components to stop image flowing. The high cost of achieving a frame of a hologram and the incompatibility with a flat panel display also contributed to its being stopped. In this period, a new method of displaying a hologram on LCDs was also developed [83], though it was hard to view the reconstructed image because of the LCD's large pixel size, which exceeded 30 μm. This LCD-based electroholography paved the way for the current electroholography based on the digital display chips mentioned before. However, electroholography has many problems incurred by the chip's small size for displaying a hologram, the astigmatism involved with the reconstructed image [84], and the small viewing zone angle caused by a large pixel size [85], along with the diffraction effect induced by the digital pixel structure of the chips. But these will be solvable problems in the future because a display chip with 1-μm pixel size [86] has already been introduced. In the digital projection chips, the number of different view images that can be recorded on the chip is smaller than that on a photographic plate/film. Since each pixel can record a view image, each chip can record different view images corresponding to its pixel resolution. However, since each pixel has only a single recording layer, the number of recordable object points on the pixel will be much lower than that on the emulsion layer of a photographic plate/film, which has a finite thickness [87]. Hence, for the continuous parallax, the pixel size and resolution need to be as small and as high as possible, respectively, and the pixel should have a multilayer structure to record phase information.

IV. THE CONDITION TO BE A LIGHT-FIELD DISPLAY: THE TRANSITION FROM INTEGRAL PHOTOGRAPHY TO LIGHT-FIELD DISPLAY

Our living space is filled with electromagnetic waves of various wavelengths from nature and man-made radiation sources. We are living within the electromagnetic field formed by the waves. The visible light is the part of the waves that allows us to see our surrounding space through the light field formed by the waves. In displays, the visible lights from a display panel/screen form a light field in its front space. A viewer who watches a display panel at a certain location of the front space can only perceive the lights forming the light field at this position. This is shown in Fig. 6, which compares the light fields formed in the front space of a display panel for a plane image, MV, and IP.
Fig. 6. Light fields formed in the front space of a display panel for plane image, MV, and IP geometries.

The fields are shown for seven distances of 100, 150, 250, 330, 450, 500, and 750 mm from the panel when different color strips are displayed on the panel as the different view images. For MV (IP), a pixel cell (elemental image) consists of red, green, blue, and yellow strips, and for the plane image, two color combinations, i.e., the same color arrangement as MV and blue–green strips, as shown in Fig. 6, are displayed. The images are those on a diffuser plate located at each distance specified above. For the plane images, the green and blue (R, G, B, and yellow) strips on the panel are diffused together, and the light field has already turned to a cyan (white) color at 100 mm from the panel. It is well known that the mixings of green and blue, and of the R, G, B primary colors, create cyan and white colors, respectively. But the presence of yellow in the R, G, B, and yellow strips is not noticed. The light field does not change its color with different distances and directions, except for its brightness. The brightness reduces as the distance increases. The viewer's two eyes will get the same color, i.e., the same scene. This means that no depth sense will be perceived with the plane image. For MV and IP, the R, G, B, and yellow strips in each pixel cell (elemental image) are expanded radially (in parallel), as shown in Fig. 3, by the elemental optics in the VZFO while keeping their colors and relative positions to the other color strips, and mixing with the expanding color strips from other pixel cells (elemental images). The mixing indicates that different view images will be mixed [88]. In IP, the mixed color strips will be separated again, and the same color strips from different pixel cells (elemental images) will overlap one another as the distance increases. This separation occurs when a color strip in each pixel cell (elemental image) is expanded more than the horizontal size of the image on the panel. The images at 450, 500, and 750 mm show that the area of each color strip increases as the distance increases. This indicates that the same color strips from different elemental images are overlapped more and more. The image at 750 mm shows that the red, green, blue, and yellow color strips from left to right appear almost continuously, and they repeat to show the presence of side viewing zones. This is because each elemental image is projected not only by its immediate front elemental optic but also by adjacent elemental optics to the front [89]. For MV, the color strips from different pixel cells are mixed, but they are separated again when the clear overlapping of the same color strips from different pixel cells appears at 250 mm due to its radial projection arrangement. After the separation, they are mixed more and more as the distance increases. The color orders are the same as in the image at 750 mm in IP, and they repeat as mentioned above. The color strips from different pixel cells are mixed again as the distance increases. The same color overlapping is also shown in the images at 150–330 mm, though they show color mixings at the boundaries of different color strips. Hence, this range can be considered as the viewing space where different view images are separately viewed. This range corresponds to the diamond-shaped white regions along the line indicating the VZCS and their vicinities in Fig. 3(a). In this range, the light field is divided into the number of different view images on the panel, and each divided light field is formed by the lights mostly coming from a view image converged to the light-field location. Outside this range within the viewing zone, the light fields are formed by mixing between the lights coming from pixels of different view images.
The light-field images of MV and IP indicate that, if each color represents a view image, each image will be separately projected to the corresponding eye if the width of each color sector does not exceed the viewer's interocular distance. When it is assumed that a viewer locates his/her left eye in the green color light fields and the right eye in the red color light fields, the left and right eyes see only green and red strips, respectively, in all pixel cells (elemental images) in the panel. This is the typical way of perceiving the depth sense in the 3-D imaging methods having the parallax as their main depth cue. Viewers need to accommodate their eyes within the focusable depth range surrounding the panel to see the image corresponding to each eye more clearly. This is why the perceived image has a virtual depth. If the width is smaller than the pupil diameter of the viewer's eye, more than two different view images can get into each of the viewer's eyes simultaneously without overlapping. The separation between different color strips is represented by each smaller viewing space specified as the viewing region within the viewing zone in Fig. 3. This smaller viewing space is formed by the crossings between the pixels composing the leftmost and rightmost pixel cells/elemental images. The number of smaller viewing spaces in the viewing zone is equal to the square of the number of different view images on the panel for MV, and to the number of pixels within the elemental image plus a half of the number obtained by squaring the pixels within an elemental image for IP [90]. The pixel composition of the light field at each smaller viewing space is one pixel or one subpixel shifted from that of its adjacent light fields. So, the pixel composition is an image unit that the viewers will perceive. Hence, each of the smaller viewing spaces is called an image cell for its pixel/subpixel composition representing a view image [91]. If the size of each image cell in the horizontal direction is smaller than the pupil size of the viewer, more than two image cells, i.e., more than two different images with the minimal disparity between them, can get into the viewer's eyes. In that case, he/she will perceive a continuous parallax. This is the claim made by MID, FLA, and Holovizio. However, to get two image cells into the pupil, the pixel size p of the display panel should be less than 5 μm for the case when the viewing distance d is 1000 mm, the focal length of the elemental optics f is 3 mm, and the pupil diameter of the viewer's eye is 3 mm, by the relationship pd/f ≤ (3.0/2) mm. This relationship is obtained by considering that the panel is at the focal plane of the VZFO, as in the typical contact-type 3-D displays [92]. As the panel size increases, the viewing distance will also move away from the panel. This means that the pixel size should be smaller than 5 μm. If more than two different view images getting into the pupil are required, the pixel size should be much smaller than 5 μm.
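The pixel-size requirement quoted above follows directly from pd/f ≤ (pupil diameter)/2. The short sketch below just evaluates that inequality for the example numbers in the text (viewing distance, focal length, and pupil diameter as stated; the additional mobile viewing distance is illustrative).

```python
# Maximum pixel size for projecting at least two image cells into one pupil,
# from p * d / f <= pupil / 2 (panel at the focal plane of the VZFO).

def max_pixel_size_um(d_mm: float, f_mm: float, pupil_mm: float = 3.0) -> float:
    return (pupil_mm / 2.0) * f_mm / d_mm * 1000.0  # result in micrometers

if __name__ == "__main__":
    print(max_pixel_size_um(1000.0, 3.0))  # ~4.5 um, i.e., below 5 um as stated
    print(max_pixel_size_um(600.0, 3.0))   # ~7.5 um for a closer (mobile) viewing distance
    # For more than two view images per pupil, divide the pupil by the desired
    # number of image cells instead of 2, giving a proportionally smaller pixel.
```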
Figs. 3 and 5 also show that super-multiview imaging can be realized more easily in MV than in IP, because the sizes of the image cells are much smaller in MV than in IP due to the converging action of the radial configuration. The display conforming to the super-multiview concept is called the super-multiview display [93], i.e., the light-field display.

V. THE DIFFERENCE IN LIGHT-FIELD CHARACTERISTICS BETWEEN HOLOGRAPHIC AND LIGHT-FIELD IMAGES

The light fields in a holographic display are different from those in IP and MV, as shown in Fig. 7. Fig. 7 shows the light fields from an on-axis hologram on a DMD with a pixel resolution of 1280 × 800 and a pixel size of 7.637 μm [94], along the propagation direction of the reconstructed image, when a collimated laser beam normally illuminates the hologram. The hologram is a computer-generated hologram (CGH) [95]. The light-field images are for a six-point object aligned in the horizontal direction. The points are located at 500 mm from the hologram. The number in each image represents the distance from the hologram in millimeters.

Fig. 7. Light fields formed in the front space of a DMD for holographic imaging.

The images show the process of forming the reconstructed image: the hologram reconstructs six Fresnel zone patterns (FZPs) corresponding to the six points when it is illuminated by the reconstruction beam. Each of these FZPs independently converges to form a focused image of its corresponding point at the original location of the point. The images at 260, 300, and 390 mm show the convergence because the size of each FZP is reduced as the distance increases. The reconstructed image appears at 470 mm and is the least circle of confusion (LCC) image [96], because it accompanies the focused images in the 0° direction at 455 mm and in the 90° direction at 490 mm. The focused image is not a point but has a finite area.
After focusing, the images expand more and more as the distance increases until they recover their initial FZPs, as shown in the image at 645 mm, and then disappear. Since the rays from each FZP will trace back along their original paths to reconstruct the image of the corresponding object point, the FZP should converge to the point. But the presence of astigmatism causes the rays to hardly focus on a point. This means that the reconstructed images cannot be formed by ideal light points. This is why it is difficult to find a focused image position. As a result, the resolution, including in the depth direction, of the reconstructed image in electroholography is strongly related to astigmatism. The reason for the presence of astigmatism in the DMD is considered to be the finite size of each pixel. In addition, the digital nature of the device and the operating principle of the DMD can be other contributing reasons for astigmatism.
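To make the FZP picture concrete, the following Python sketch computes a simple point-cloud Fresnel CGH for six points at 500 mm on a DMD-like grid, encoded as an interference pattern with an on-axis plane reference wave. It is only a minimal sketch of the standard point-source CGH idea: the grid matches the resolution and pixel pitch quoted above, but the wavelength, point spacing, and binarization are assumptions, and this is not the authors' actual CGH method.

```python
# Minimal point-cloud Fresnel CGH sketch (illustrative, not the authors' method).
import numpy as np

NX, NY = 1280, 800          # DMD resolution quoted in the text
PITCH = 7.637e-6            # pixel pitch in meters, as quoted
LAM = 532e-9                # assumed green laser wavelength
Z = 0.5                     # object points at 500 mm from the hologram

x = (np.arange(NX) - NX / 2) * PITCH
y = (np.arange(NY) - NY / 2) * PITCH
X, Y = np.meshgrid(x, y)

# Six points aligned horizontally (spacing assumed for illustration).
xs = (np.arange(6) - 2.5) * 0.5e-3
field = np.zeros((NY, NX), dtype=np.complex128)
for x0 in xs:
    r = np.sqrt((X - x0) ** 2 + Y ** 2 + Z ** 2)   # distance from point to each pixel
    field += np.exp(1j * 2 * np.pi * r / LAM) / r  # spherical wave (one FZP each)

# Interference with an on-axis plane reference, then binarization for a DMD.
hologram = (np.real(field) > 0).astype(np.uint8)
print(hologram.shape, hologram.mean())
```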
Fig. 7 shows that the image at 260 mm has much lower brightness than the images at other distances. This means that each light field is formed by the coherent addition of rays from each pixel, i.e., each ray has phase information. This is what causes the reconstructed image from the hologram to have a spatial position and a real entity. Each image point composing the reconstructed image is formed by a ray from each pixel in the DMD, and thus from all pixels. The number of rays composing each image point corresponds to the pixel resolution of the DMD, because each pixel has a size of less than 10 μm and comprises a view image of the object. Hence, each image point of the reconstructed image emits rays in the same way as the point images in Fig. 5, though the image point arrangement is different from Holovizio and FLA/MID because the arrangement forms the object image. The number of rays from each pixel can be more than the pixels in each pixel cell or elemental image, and the reconstructed image is viewed through a very narrow viewing zone angle. This is why the reconstructed image will probably provide a continuous parallax when it is viewed at the viewing zone. When a hologram is displayed on a display panel, each pixel of the panel works as a point hologram of a view image of an object seen at its position in the panel. Hence, the hologram in the display panel is a kind of 2-D stereo-hologram. In fact, all analog holograms on photographic plates/films can also be considered as a 2-D point hologram array, i.e., a 2-D stereo-hologram. But the distance between adjacent points is too small to discriminate between them. In the analog hologram, the image distance is comparable to its grain sizes. This is much smaller than the current pixel size of the display chips. Since the distance between images corresponds to a pixel size and the pixel size in the current display chips is in the micrometer range, the disparity between neighboring images may be small enough to be barely recognized. However, this micrometer range is still too big compared with the image distance on our pupils in the natural environment and the grain size in the photographic plate/film, which is in the tens of nanometers range [97]. The point hologram represents an image point similar to the focused rays at the center of each elemental optic in MV and IP, as shown in Fig. 3. This is why it is considered that a 2-D multiview image array as in IP and MV will be formed when the hologram on the display panel is reconstructed. However, the characteristics of each image from the hologram and from MV/IP are not the same, because the images from the hologram consist of rays from object points and have their own phase information, but those from IP and MV have an array of pixels with no phase information. The light fields at different distances in Fig. 7 are formed by the coherent addition of each view image, which is reconstructed by the rays from a pixel, with those from other pixels. Each of the fields has a spatial position and a real entity. But in a light-field display, the 3-D image is simply perceived through the light field from the image on the display panel. Only the image on the panel has a spatial position and a real entity. As mentioned before, the viewing zone angle is a parameter determining the quality of 3-D images. In holographic imaging, the angle can hardly exceed 30°, even with a high pixel density panel with a pixel size as small as 1 μm. Demagnifying optics can be used to increase the angle, but the presence of the optics will make the imaging system bulkier and the image size much smaller than the panel size. Light-field imaging does not need the demagnifying optics and the displayable image may not be smaller than the panel size, but it is still not free from the angle problem. To project more than two different view images to each eye of viewers, either the FOV of each elemental lens or each projector objective, or the scanning angle, should be minimized, or the number of pixels within a pixel cell/elemental image should be increased. To increase the number of pixels, the pixel size should be made smaller, as described by the relationship in Section IV. When the pixel size is considered to be 1.0 μm, there will be 1000 pixels within an elemental optic with a pitch of 1.0 mm in the horizontal direction. If each of these pixels creates a viewing space with a 1.5-mm width in the horizontal direction, 200 pixels can create a 300-mm-wide VZCS. If this VZCS appears at around a viewing distance of 1000 mm, the viewing zone angle will be around 17°. But for the holographic display, it will be less than 6°. The viewing zone angle of the light-field display is bigger than that of the holographic display, but the values are still very small compared with that of the plane image, which is near 180°.
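The 17° figure quoted for the light-field case follows from the simple geometry of the viewing zone cross section (VZCS); the sketch below reproduces it with the numbers given above (1.0-mm lens pitch, 1.0-μm pixels, 1.5-mm viewing spaces, 1000-mm viewing distance), all taken from the text.

```python
# Viewing zone angle of a light-field display from the VZCS width and viewing distance.
import math

lens_pitch_mm = 1.0
pixel_um = 1.0
pixels_per_lens = int(lens_pitch_mm * 1000 / pixel_um)   # 1000 pixels per elemental optic
used_pixels = 200                                         # pixels contributing to the VZCS
space_width_mm = 1.5                                      # width of one viewing space
vzcs_mm = used_pixels * space_width_mm                    # 300-mm-wide VZCS
distance_mm = 1000.0

angle = 2 * math.degrees(math.atan((vzcs_mm / 2) / distance_mm))
print(pixels_per_lens, vzcs_mm, round(angle, 1))          # 1000, 300.0, ~17.1 degrees
```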
In light-field displays, a more serious problem is to find a VZFO which can resolve any number of pixels and any pixel size within a pixel cell/elemental image. It is expected that the difficulty of resolving pixels or subpixels with a VZFO will grow as the pixel size decreases. As described before, the required pixel size for the light-field display is not different from that for the holographic display. This means that both holographic and light-field imaging can be displayed on the same display panel. However, the reconstructed image size of the hologram cannot be as large as the light-field image. The size will be much smaller compared with the panel size.
This is because the viewing zone angle depends not only on the pixel size but also on the relative size of the reconstructed image to the panel and on the image distance from the hologram [98]. As mentioned before, the focusable depth range and the extension of the viewing zone in the depth direction are different for the two types of imaging. This means that the image spaces of the two types of imaging will also be different. In the light-field display, the image space will be defined as the focusable image depth range in the front and rear spaces of the panel. The range can be extended up to near the pupil of the viewer's eye by adjusting the number of simultaneously projected images [28], but by how much is not known yet. Hence, the image space in front of the display panel can be extended to near the end of the viewing zone, because the images perceived at the image cells where each of the viewer's eyes is located determine the depth of the perceived images. So, the image space for MV (IP) will be determined by its corresponding viewing zone forming geometry as in Fig. 3. The difference is its extension to the rear space of the panel, as shown in Fig. 8, which represents the image space of the light-field display, having a diamond-shaped space. This geometry comes from the radial-type viewing zone forming geometry.

Fig. 8. Image space in a light-field display.

In holographic imaging, the image space for a real image is defined as the pyramidal-shaped space specified in Fig. 9, which represents the real image reconstructing geometry of the holographic display. The rays from each point hologram, i.e., each pixel, are converged to the common space where the reconstructed image will appear. Hence, the geometry is very similar to the radial-type viewing zone forming geometry as in Fig. 3(a). However, the viewing zone in Fig. 9 corresponds to the incomplete viewing zone in Fig. 3(a). Fig. 9 shows that the viewing zone will appear at a farther distance from the reconstructed image as the image size increases. When the size is bigger than or equal to the panel size, no viewing zone will be formed. The reconstructed image will not be identifiable near the hologram because the diffraction effect of the digital chip causes the reconstructed images to be accompanied by different order beams that interfere with each other. This interference keeps its status until the diffracted beams reach the plane where the different orders of diffraction beams are completely separated, because each diffracted beam accompanies a reconstructed image. So, the space between the hologram and this plane will not allow identifiable reconstructed images to appear.

Fig. 9. Image space in a holographic display.

For the case of the viewing zone extension in the depth direction, holographic imaging has a much more extended viewing zone compared with light-field imaging. In holographic imaging, the viewing zone can theoretically be extended to infinity from the starting position of the viewing zone. However, for light-field imaging, there is no solid criterion for determining the amount of extension, but it is expected that the extension will be greater as the number of different view images increases. The differences in the quality parameters between holographic and light-field imaging are summarized in Table 1.

Table 1. Comparisons of the Quality Parameters Between Holographic and Light-Field Imaging

VI. PROBLEMS AND CURRENT ISSUES IN HOLOGRAPHIC AND LIGHT-FIELD IMAGING

As mentioned before, for the realization of light-field displays, a display panel having a pixel (or subpixel) size of less than 5 μm is needed when the viewing distance is set to 1000 mm. The commercially available displays are mostly of the 4k UHD (3840 × 2160) class. The smallest size monitor having 4k resolution is 24 in [99]. This gives a pixel size of around 138 μm. The smallest size display having 8k UHD (7680 × 4320) resolution used to display an IP image was around 27 in [100]. This display had a pixel size of around 88.5 μm, but it is no longer commercially available. The commercially available 8k UHD display has a 98-in size. Hence, the pixel size is around 282 μm, which is more than two times that of the 4k monitor. Hence, building the light-field display with the currently available monitors or TVs will be very difficult even when the viewing distance is set to 500 mm.
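The pixel sizes quoted for the 24-in 4k and 98-in 8k displays follow from the diagonal and the horizontal pixel count if a 16:9 aspect ratio is assumed; the sketch below repeats that arithmetic (the 16:9 assumption is ours, but it reproduces the ~138-μm and ~282-μm values above).

```python
# Pixel pitch from screen diagonal and horizontal resolution, assuming 16:9.
import math

def pixel_pitch_um(diagonal_in: float, h_pixels: int, aspect=(16, 9)) -> float:
    w, h = aspect
    width_mm = diagonal_in * 25.4 * w / math.hypot(w, h)
    return width_mm / h_pixels * 1000.0

if __name__ == "__main__":
    print(round(pixel_pitch_um(24, 3840)))   # ~138 um for a 24-in 4k monitor
    print(round(pixel_pitch_um(98, 7680)))   # ~282 um for a 98-in 8k display
```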
The Liquid Crystal on Silicon (LCoS) chip with the same 8k UHD resolution and a 4.8-μm pixel size has also been introduced, but this chip has been used to display a hologram [101] and it is no longer available.
So, the light-field displays introduced so far will not be actual light-field displays according to the definition in this paper, except the one with a 4.25-μm subpixel size [21]. But a 5.26-in display panel with 2250 pixels per inch is currently in development for mobile applications [102]. This panel will have approximately 11-μm pixels. If each pixel in this panel consists of subpixels, the subpixel size will be approximately 3.75 μm. Hence, it will be possible to build a light-field display with this panel. Since the viewing distance of the mobile display will be at most 600 mm, the subpixel size will be good for building a light-field display.
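The mobile-panel numbers above can be checked directly from the pixel density; the sketch below converts 2250 pixels per inch to the pixel and subpixel pitch (the three-subpixels-per-pixel assumption is ours).

```python
# Pixel and subpixel pitch from pixel density (pixels per inch).
ppi = 2250
pixel_um = 25.4 / ppi * 1000.0        # ~11.3 um per pixel
subpixel_um = pixel_um / 3.0          # ~3.8 um, assuming three subpixels per pixel
print(round(pixel_um, 1), round(subpixel_um, 2))
```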
Regarding the light-field display, it is also very important to find the relationship between the number of images simultaneously projected onto the pupil of each eye, the focusable depth range extension, and the presence of monocular depth. As mentioned before, two simultaneously projected images can extend the DOF to the SM zone [22], [28], [76], and as the number of images increases to three and four, the DOF expands further into the SM zone [28]. This is shown in Fig. 10, which shows the DOF values for a 23-year-old male with better than 1.5 eyesight when two, three, and four different view images with a pixel disparity between the images are projected to each of the subject's eyes. The displayed images are a repeatedly moving Maltese cross with a perspective within the diopter range from −2.86 to −0.61 (2.2). The DOF ranges for two, three, and four different view images to each eye are from −0.6 to −2.0 (−1.4), from −0.4 to −2.1 (−1.7), and from −0.45 to −2.5 (−2.05) diopters, respectively. The DOF ranges increase from −1.4 and −1.7 to −2.05 as the number of different view images increases, and they are much wider than the DOF for stereoscopic images, which is from −1.66 to −1.03 (0.6) diopters.

Fig. 10. DOF values when two, three, and four different view images are simultaneously projected.
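The diopter ranges reported for Fig. 10 can be re-expressed as range widths and metric distances with a couple of lines of arithmetic; the sketch below only restates the values quoted above (the negative signs mark the direction convention used in the text and are dropped for the distance conversion).

```python
# Convert the reported DOF diopter ranges into widths (diopters) and distances (meters).
ranges_d = {"2 views": (0.6, 2.0), "3 views": (0.4, 2.1),
            "4 views": (0.45, 2.5), "stereoscopic": (1.03, 1.66)}

for label, (d_min, d_max) in ranges_d.items():
    width = d_max - d_min                      # range width in diopters
    near_m, far_m = 1.0 / d_max, 1.0 / d_min   # corresponding distances in meters
    print(f"{label}: {width:.2f} D wide, about {near_m:.2f} m to {far_m:.2f} m")
# Widths come out to 1.4, 1.7, 2.05, and 0.63 D, matching the values in parentheses above.
```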
In holographic imaging, the viewing angle [103]–[105] and the image size increase [106]–[108] based on various multiplexing methods, and full color image generation [109], [110], have been the major topics for overcoming the limitations imposed by the small size, low resolution, and large pixel size of current digital display chips. The frequently employed methods to resolve the limitations are multiplexing methods. These methods combine chips with the same characteristics to make them work virtually as a chip with a bigger size and higher resolution. Three different multiplexing methods, namely time, spatial, and spatiotemporal, are currently known [1].

Time multiplexing is typical for a high-speed chip such as a DMD, while spatial [107] and spatiotemporal [104], [111] multiplexing are typical for SLMs. The multiplexing methods can produce a physically enlarged reconstructed image and a more resolved multiplexed image; however, they cannot enlarge the viewing angle, because this angle is determined by the pixel size of the chip being multiplexed. In digital display chips, the viewing angle is the same as the diffraction angle and is defined by the pixel size for a given laser wavelength; this angle defines one directional size of the image space in which the reconstructed image can be located. The other direction is defined by the crossing angle between the reference and object beams used in recording the hologram [98].
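To give a sense of the numbers involved, the sketch below evaluates one commonly quoted approximation for the diffraction-limited viewing angle of a pixelated chip, θ = 2·arcsin(λ/2p), for the pixel pitches mentioned in this paper; the 532-nm wavelength is an assumption made only for this illustration.

```python
import math

def viewing_angle_deg(pixel_pitch_um, wavelength_nm=532.0):
    """Full diffraction-limited viewing angle theta = 2*arcsin(lambda/(2*p)).
    Illustrative only; the exact angle depends on the chip and the illumination geometry."""
    ratio = (wavelength_nm * 1e-9) / (2.0 * pixel_pitch_um * 1e-6)
    return 2.0 * math.degrees(math.asin(ratio))

print(round(viewing_angle_deg(4.8), 1))  # ~6.4 deg for the 4.8-um LCoS pixel
print(round(viewing_angle_deg(1.0), 1))  # ~30.9 deg for a pixel size close to 1 um
```

The roughly fivefold gain obtained by shrinking the pixel from 4.8 μm to 1 μm illustrates why such small pixels are repeatedly called for in the following sections.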
The viewing angle defines the range of directional angles over which all image pieces in the horizontal direction of the multiplexed image can be viewed, as shown in Fig. 11(a). A holographic display with a 360° viewing angle, obtained with a rotating spherical mirror [105], is therefore possible by definition. However, the viewing angle is discrete and has nothing to do with the actual viewing of the reconstructed image; the angle that defines the actual viewing of the reconstructed image is the viewing zone angle. As mentioned before, the reconstructed image size should be smaller than the chip size to create a viewing zone, and the image size relative to the chip size determines the distance of the viewing zone from the reconstructed image: a bigger image makes a longer distance.

Fig. 11. Viewing zone forming geometries of (a) time and (b) spatial multiplexing.

In Fig. 11, the viewing zone forming geometries of both time [Fig. 11(a)] and spatial [Fig. 11(b)] multiplexing are depicted. In the case of time multiplexing, the multiplexed frames (holograms) lie on the circumference of a circle centered at the scanning mirror. The starting point of each hologram's viewing zone is located on the side opposite to the arc where the multiplexed frames are located. Fig. 11(a) shows that the reconstructed image piece from one frame is joined, sequentially in time, with the reconstructed images from the holograms in the other frames, forming a physically enlarged or more resolved multiplexed image. However, the viewing zone for each image piece does not overlap with those of the other image pieces. There is no common viewing zone from which the multiplexed image can be viewed simultaneously; each image piece can only be viewed in its own viewing zone. This is why a diffusing screen is needed to see the multiplexed image. Geometrically, if a demagnifying lens is placed in the viewing zone, the reconstructed image can be demagnified and each viewing zone can be enlarged, but there is still no common viewing zone for the contracted multiplexed image. For the case of spatial multiplexing, in principle, the multiplexed chips can have a common viewing zone, as shown in Fig. 11(b). The viewing zone is defined as the common space formed by the lines connecting the four corners of the multiplexed chips with the edges of the largest dimensions, in each direction, of the object image in the plane parallel to the chips. In this case, α, which is the angle between the line passing through the top edge of the multiplexed chips parallel to the chips' normal and the line connecting that top edge to the bottom of the reconstructed image, should not exceed the crossing angle between the reference and object beams. When α is bigger than the crossing angle, each display chip should be illuminated individually. In this multiplexing, the viewing zone can be further expanded by sacrificing the viewing zone angle; in this sense, a chip with a smaller pixel size is better suited to spatial multiplexing, as shown in Fig. 11(b). There is also a possible way of joining the viewing zones of the multiplexed chips, which is shown in Fig. 12.

Fig. 12. Possible way of joining the viewing zones of multiplexed display chips.


The relative position of the reconstructed image from each chip to be multiplexed is adjusted so that the image is joined with those from the other chips and the starting point of its viewing zone is superposed on those of the others [104]. In this way, the viewing zones are also joined, but there is still no common viewing zone for the multiplexed images; hence, a diffuser is again needed to view the multiplexed image. Regarding full-color displays, the developments so far are not very noticeable.

VII. FUTURE PROGRESS IN HOLOGRAPHIC AND LIGHT-FIELD DISPLAYS

The current developments of light-field and holographic imaging rely mostly on flat panel displays. In this way, the two types of imaging can reside in the same display used for plane images, and this display will be the future TV at home. The final goal of the two types of imaging is to create the natural viewing condition through a display. The minimum requirement for achieving this goal is to provide both continuous parallax and a greatly extended focusable image depth range. In this regard, the following questions should be answered: 1) Will it be possible to increase the DOF further and to obtain a monocular depth cue with more than two neighboring view images? 2) How many different view images are needed to create a natural viewing condition?
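The paper leaves question 2) open, but a rough lower bound can be illustrated with the super multi-view condition that at least two view images should enter each eye pupil simultaneously. All of the numbers in the sketch below (pupil diameter, viewing zone width) are assumptions chosen only to show the scale of the requirement, not values taken from this paper.

```python
# Illustrative lower bound on the number of view images, under assumed numbers only.
pupil_mm = 4.0                       # assumed eye-pupil diameter
viewing_zone_width_mm = 300.0        # assumed lateral width of the viewing zone to cover
view_pitch_mm = pupil_mm / 2.0       # two views per pupil -> view pitch of at most 2 mm
views_needed = viewing_zone_width_mm / view_pitch_mm
print(view_pitch_mm, int(views_needed))  # 2.0 mm pitch -> on the order of 150 view images
```

Even this crude estimate lies well beyond the view counts of typical multiview displays, which is consistent with the panel requirements discussed below.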
To satisfy these requirements, a display panel specialized for light-field and holographic displays should be developed in the future. Since the panel requirements for the two types of imaging are not different, the same panel can be used for both. Displays for these two types of imaging are already in development: a display chip with a 1-μm pixel size based on a giant magnetoresistive (GMR) material [86] and an LCoS display chip with 8k UHD resolution [103] have already been used to display holograms. Furthermore, a 2250-pixels/in display panel for mobile applications is in development and could be used to build a light-field display. These chips and panels are still small in size, but they will serve as stepping stones for future holographic and light-field displays. Eventually, the panels and chips should have the resolution and pixel density needed to display 3-D images with an image quality comparable to that of the plane image. In this regard, panels and chips with the size of a smartphone and a pixel size close to 1 μm, producing no diffraction beams, are desired, especially for holographic displays; they would give the displays a simpler structure and a better image quality. Diffraction beams are inevitable in current digital display chips and panels: they are annoying, and they reduce the light efficiency tremendously, so a way of eliminating them should be developed. Also, if lasers continue to be used in displays, eye safety should be investigated more thoroughly [112].

Regarding light-field displays, there is another way of building them, namely with the use of picoprojectors [113]. The dimensions of a picoprojector have been reduced to the size of a smartphone, and its pixel resolution reaches near full HD (1920 × 720), so it is easier to align many of them within a fixed space to build a 2-D version of the FLA/MID or Holovizio. Building a large-dimension light-field display with an array of picoprojectors will be much more advantageous than building it with chips or display panels because the projector array does not need a VZFO. As the pixel size decreases, the resolving power of the VZFO for such small pixels has to increase, and the current lenticular and parallax-barrier plates will have difficulty resolving pixels smaller than 20 μm [114]. Hence, to realize a light-field display with a display panel specialized for 3-D imaging, a method of improving the resolving power of the current VZFOs, or a new type of VZFO, should be developed. In building a light-field display, more different view images than in current multiview imaging will be required; for this purpose, the number of pixels should be increased by increasing the pixel density physically or virtually. However, since the pixel size becomes smaller as the pixel density increases, the pitch of the VZFO's elemental optics can become smaller than that in multiview imaging. In this case, an image cell may no longer be discriminated from its adjacent cells because of the beam diffracted by each elemental optic, so the pitch should be designed to keep the diffracted beam size smaller than the image cell size.
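The condition at the end of the preceding paragraph can be checked with a very rough single-slit estimate of the blur that an elemental optic of pitch p produces at the pixel plane a gap g away, roughly 2λg/p. The numbers used below (pitch, gap, wavelength) are assumptions for illustration; the real criterion depends on the actual VZFO geometry.

```python
def diffraction_blur_um(pitch_um, gap_mm, wavelength_nm=550.0):
    """Rough single-slit estimate of the diffraction blur (first-zero full width,
    2*lambda*gap/pitch) produced at the pixel plane by one elemental optic.
    Purely illustrative; not a design formula from this paper."""
    lam_m = wavelength_nm * 1e-9
    return 2.0 * lam_m * (gap_mm * 1e-3) / (pitch_um * 1e-6) * 1e6  # result in micrometers

# assumed numbers: 100-um elemental pitch, 1-mm gap between the optic and the pixel plane
print(round(diffraction_blur_um(100.0, 1.0), 1))  # ~11 um of blur
print(round(diffraction_blur_um(30.0, 1.0), 1))   # ~36.7 um, already wider than the pitch
```

In this crude model the blur grows as the pitch shrinks, which is one way to see why the diffracted beam size, rather than the geometrical pixel size, becomes the limiting factor for very fine elemental optics.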
VIII. CONCLUSION

IP's equivalent optical geometry of arranging multiview images as a 2-D point image array has been the basis of all contact-type multiview 3-D imaging introduced so far, i.e., the FLA/MID and Holovizio, as well as of electroholographic displays based on digital display chips. Hence, there is no doubt that the same geometry will be used in future holographic and light-field displays. The criteria separating holographic and light-field imaging from multiview imaging are the presence of continuous parallax and a large focusable image depth. These two quality parameters are essential for creating the natural viewing condition, which is the ultimate goal of displays in imaging. The first requirement for satisfying these parameters will be a display panel with a pixel size close to 1 μm. A pixel with an irregular shape will be better for increasing the light efficiency by eliminating the diffraction pattern. Since the panel requirements for light-field and holographic displays are almost the same and the two share the same goal, in the future they can coexist on a single display panel together with plane images. A display panel/chip specialized for 3-D imaging is a shortcut to the future success of 3-D images on the market.


R EFER ENCES J. Inst. Image Inf. Television Eng., vol. 67, [42] E. H. Ives, “Optical properties of a Lippmann
[1] J.-Y. Son and B. Javidi, “Three-dimensional no. 12, pp. J475–J478, 2013. Lenticulated Sheet,” J. Opt. Soc. Amer.,
imaging methods based on multiview imag- [23] L. S. Pedrotti and F. L. Pedrotti, Optics and vol. 21, pp. 171–176, Sep. 1931.
es,” J. Display Technol., vol. 1, pp. 125–140, Vision. Upper Saddle River, NJ, USA: [43] E. N. Leith and J. Upatnieks, “Reconstructed
Sep. 2005. Prentice-Hall, 1998. wavefronts and communication theory,”
[2] J.-Y. Son, B. Javidi, and K.-D. Kwack, “Meth- [24] D. M. Hoffman, A. R. Girshick, K. Akeley, J. Opt. Soc. Amer., vol. 52, no. 10,
ods for displaying three-dimensional imag- and M. S. Banks, “Vergence–accommodation pp. 1123–1130, 1962.
es,” Proc. IEEE, vol. 94, no. 3, pp. 502–523, conflicts hinder visual performance and [44] K. Langhans, C. Guill, E. Rieper, K.
Mar. 2006. cause visual fatigue,” J. Vis., vol. 8, no. 3, Oltmann, and D. Bahr, “SOLID FELIX: A
[3] [Online]. Available: http://www.monitor4u. pp. 33–33, 2008, DOI: 10.1167/8.3.33. static volume 3D-laser display,” Proc. SPIE,
co.kr/ [25] Y. Kajiki, H. Yoshikawa, and T. Honda, vol. 5006, pp. 161–174, May 2003.
[4] [Online]. Available: http://news.samsung. “Ocular accommodation by super multi-view [45] I. I. Kim, E. Korevaar, and H. Hakakha,
com/kr/3015 stereogram and 45-view stereoscopic “Three-dimensional volumetric display in
display,” in Proc. 11th Int. Display Workshop, rubidium vapor,” Proc. SPIE, vol. 2650,
[5] [Online]. Available: http://www.zdnet.co. 1996, pp. 489–492. pp. 274–284, Mar. 1996.
kr/news/news_view.asp?artice_
id=20110825102246&lo=zv40 [26] Y. Takaki and H. Nakanuma, “Improvement [46] E. Downing, L. Hesselink, J. Ralston, and R.
of multiple imaging system used for natural Macfarlane, “A three-color, solid-state,
[6] A. J. Woods, T. Docherty, and R. Koch, 3D display which generates high-density three-dimensional display,” Science, vol. 273,
“Image distortions in stereoscopic video sys- directional images,” Proc. SPIE, vol. 5243, pp. 1185–1189, Aug. 1996.
tems,” Proc. SPIE, vol. 1915, pp. 36–48, pp. 43–49, Nov. 2003.
Sep. 1993. [47] J.-Y. Son, V. V. Smirnov, L. N. Asnis, V. B.
[27] Y. Takaki and N. Nago, “Multi-projection of Volkonski, and H. S. Lee, “Real-time 3D
[7] J.-Y. Son, Y. Gruts, J.-H. Chun, Y.-J. Choi, lenticular displays to construct a 256-view display with acousto-optical deflectors,”
J.-E. Bahn, and V. I. Bobrinev, “Distortion super multi-view display,” Opt. Exp., vol. 18, Proc. SPIE, vol. 3639, pp. 137–142,
analysis in stereoscopic images,” Opt. Eng., no. 9, pp. 8824–8835, 2010. May 1999.
vol. 41, no. 3, pp. 680–685, 2002.
[28] B.-R. Lee, J.-Y. Son, S. Yano, and I. Jeong, [48] H. Yamada, C. Masuda, K. Kubo, T. Ohira,
[8] Fundamental of 3-D Imaging Techniques, “Increasing the depth of field in multiview and K. Miyaji, “A 3-D display using a laser
T. Izumi, Ed. Ohmsa, Tokyo: NHK Sci. 3D images,” Proc. SPIE, vol. 9867, p. 98670T, and a moving screen,” in Proc. Jpn. Display,
Technol. Lab., 1995. May 2016, DOI: 10.1117/12.2229346. 1986, pp. 416–419.
[9] K. Akşit, A. H. G. Niaki, E. Ulusoy, and [29] J.-S. Jang and B. Javidi, “Three-dimensional [49] S. Tamura and K. Tanaka, “Multilayer 3-D
H. Urey, “Super stereoscopy technique for synthetic aperture integral imaging,” Opt. display by multidirectional beam splitter,”
comfortable and realistic 3D displays,” Opt. Lett., vol. 27, no. 13, pp. 1144–1146, 2002. Appl. Opt., vol. 21, no. 20, pp. 3659–3663,
Lett., vol. 39, no. 24, pp. 6903–6906, 2014. 1982.
[30] A. Stern and B. Javidi, “Three-dimensional
[10] R. J. Collier, C. B. Burckhardt and L. H. Lin, image sensing, visualization, and processing [50] [Online]. Available: https://www.lgmobile.
Optical Holography, 1st ed. New York, NY, using integral imaging,” Proc. IEEE, vol. 94, co.kr/mobile-phone
USA: Academic, 1971. no. 3, pp. 591–608, Apr. 2006. [51] [Online]. Available: https://en.wikipedia.org/
[11] A. Gershun, The Light Field, Cambridge MA, [31] B. Javidi, F. Okano, and J. Y. Son, Eds., Three- wiki/List_of_3D-enabled_mobile_phones
USA: MIT Press, 1939, pp. 51–151. Dimensional Imaging, Visualization, and Display. [52] J. S. Kollin, S. A. Benton, and M. I. Jepsen,
[12] M. Levoy and P. Hanrahan, “Light field New York, NY, USA: Springer-Verlag, 2009. “Real-time display of 3-D computed
rendering,” in Proc. 23rd Annu. Conf. Comput. [32] T. Okoshi, Three-Dimensional Imaging holograms by scanning the image of an
Graph. Interact. Technol., 1996, pp. 31–42. Techniques, (in Japanese). Tokyo, Japan: acousto-optic modulator,” Proc. SPIE,
[13] T. Yendo, N. Kawakami, and S. Tachi, Asakura Book Store, 1991. vol. 1136, pp. 1136–1160, Sep. 1989.
“Seelinder: The cylindrical lightfield [33] S. R. Nesbitt, L. S. Smith, A. R. Molnar, and [53] J. Hamajaki, “3D display technologies in
display,” in Proc. ACM SIGGRAPH Emerg. A. S. Benton, “Holographic recording using a Japan, present status and 3D-TV on a CRT,”
Technol., New York, NY, USA, 2005, Art. digital micromirror device,” Proc. SPIE, vol. in Proc. Annu. Conv., Tokyo, Japan, 1991,
no. 16. 3637, pp. 12–20, Sep. 1999. pp. 587–590.
[14] A. Jones, I. Mcdowall, H. Yamada, M. Bolas, [34] K. Bauchert, S. Serati, and A. Furman, [54] H. Isono, M. Yasudsa, D. Takemori, H.
and P. Debevec, “Rendering for an interactive “Advances in liquid crystal spatial light Kanayama, C. Yasuda, and K. Chiba, “50-
360° light field display,” ACM Trans. Graph., modulators,” Proc. SPIE, vol. 4734, inch autostereoscopic full-color 3D TV
vol. 26, pp. 338–343, Aug. 2007. pp. 35–43, Oct. 2002. display system,” Proc. SPIE, vol. 1669,
[15] G. Wetzstein, D. Lanman, M. Hirsch, W. [35] Y. Takaki and N. Okada, “Hologram pp. 176–185, Jun. 1992.
Heidrich, and R. Raskar, “Compressive light generation by horizontal scanning of a high- [55] G. Lippmann, “Epreuves reversibles donnant
field displays,” IEEE Comput. Graph. Appl., speed spatial light modulator,” Appl. Opt., la sensation du relief,” J. Phys. Theor. Appl.,
vol. 32, no. 5, pp. 6–11, Sep./Oct. 2012. vol. 48, pp. 3255–3260, Oct. 2009. vol. 71, pp. 821–825, Sep. 1908.
[16] H. Yamada, H. Yabu, K. Yoshimoto, and H. [36] J.-Y. Son, I. V. Bobrinev, and K.-T. Kim, “Depth [56] H. Edward and A. J. Y. Wang, “Single lens
Takahashi, “Three-dimensional light field resolution and displayable depth of a scene in stereo with a plenoptic camera,” IEEE Trans.
display with overlaid projection,” in Proc. IEEE three-dimensional images,” J. Opt. Soc. Amer. A, Pattern Anal. Mach. Intell., vol. 14, no. 2,
10th Int. Conf. Intell. Inf. Hiding Multimedia vol. 229, pp. 1739–1745, Jun. 2005. pp. 99–106, Feb. 1992.
Signal Process., Aug. 2014, pp. 407–410.
[37] A. John Norling, “The stereoscopic art—A [57] R. Ng, M. Levoy, M. Brédif, G. Duval, M.
[17] Y. Masahiro and R. Higashida, “3D touchable reprint,” J. Soc. Motion Picture Television Eng.,
holographic light-field display,” Appl. Opt., vol. Horowitz, and P. Hanrahan, “Light field
vol. 60, pp. 268–307, Sep. 1953. photography with a hand-held plenoptic
55, no. 3, pp. A178–A183, 2016.
[38] C. Wheatstone, “On some remarkable and camera,” Comput. Sci. Tech. Rep. CSTR 2.11,
[18] T. Iwane, “Light field display and 3D image hitherto unobserved phenomena of 2005, pp. 1–11.
reconstruction,” Proc. SPIE, vol. 9867, p. binocular vision,” Phil. Trans. Roy. Soc. Lond.,
98670S, May 2016, DOI: 10.1117/12.2227081. [58] M. Martínez-Corral, A. Dorado, H. Navarro,
vol. 128, pp. 371–394, Jan. 1938. A. Llavador, G. Saavedra, and B. Javidi,
[19] [Online]. Available: www.holografika.com/ [39] A. Lit, “The magnitude of the pulfirch “From the plenoptic camera to the flat
[20] F. Bettio, E. Gobbetti, F. Marton, and G. stereo-phenomenon as a function of integral-imaging display,” Proc. SPIE,
Pintore, “Scalable rendering of massive triangle binocular difference of intensity at various vol. 9117, p. 91170H, Jun. 2014,
meshes on light field displays,” Comput. Graph., levels of illumination,” Amer. J. Psychol., DOI: 10.1117/12.2051119.
vol. 32, no. 1, pp. 55–64, 2008. vol. 62, pp. 159–181, Jan. 1949. [59] [Online]. Available: http://us.toshiba.com/
[21] Y. Takaki, K. Tanaka, and J. Nakamura, [40] H. Isono and M. Yasuda, “Flicker-free field- computers/laptops/qosmio/X770
“Super multi-view display with a lower sequential stereoscopic TV system and [60] J.-Y. Son, B.-G. Chae, W.-H. Son, J. Nam, and
resolution flat-panel display,” Opt. Exp., measurement of human depth perception,” J. B.-R. Lee, “Comparisons of viewing zone
vol. 19, no. 5, pp. 4129–4139, 2011. SMPTE, vol. 99, no. 2, pp. 138–141, 1990. characteristics of multiview and integral
[22] H. Mizushina, I. Negishi, H. Ando, and [41] F. E. Ives, “Parallax stereogram and process photography 3D imaging methods,”
S. Masaki, “Accommodative and vergence of making same,” U.S. Patent 725567, J. Display Technol., vol. 8, no. 8, pp. 464–471,
responses to super multi-view 3D images,” Apr. 14, 1903. Aug. 2012.


[61] J.-Y. Son, W.-H. Son, S.-K. Kim, K.-H. Lee, [77] M. A. Klug, C. Newawanger, Q. Huang, and   [96] F. A. Jenkins, H. E. White, Fundamental of
and B. Javidi, “Three-dimensional imaging M. E. Holzbach, “Active digital hologram Optics, Korean Student Edition. New York,
for creating real-world-like environments,” displays,” U.S. Patent 7227674, Jun. 2007. NY, USA: McGraw- Hill, 1976.
Proc. IEEE, vol. 101, no. 1, pp. 190–205, [78] L. H. Enloe, J. A. Murphy, and C. B.   [97] P. Hariharan, Optical Holography: Principle,
Jan. 2013. Rubinstein, “Hologram transmission via Techniques and Applications. New York, NY,
[62] C.-H. Lee, J.-Y. Son, S.-K. Kim, and M.-C. television,” Bell Syst. Tech. J., vol. 45, USA: Cambridge Univ. Press, 1984.
Park, “Visualization of viewing zones formed pp. 333–335, Oct. 1966.   [98] J.-Y. Son, J.-W. Kim, K.-A. Moon, J.-H. Kim,
in a contact-type multiview 3D imaging [79] S. A. Benton, “The second generation of the and O. Chernyshov, “Viewing conditions of
system,” J. Display Technol., vol. 8, no. 9, MIT holographic video system,” in Proc. TAO multiplexed holographic images,” Opt.
pp. 546–551, 2012. 1st Int. Symp. Dimensional Image Commun. Lasers Eng., vol. 71, pp. 63–73, Aug. 2015,
[63] F. Okano, H. Hoshino, J. Arai, and I. Technol., Dec. 1993, pp. S-3-1-1–S-3-1-6. DOI: 10.1016/j.optlaseng.2015.03.014.
Yuyama, “Real-time pickup method for a [80] P. S.-Hilaire, M. E. Lucente, J. D. Sutter, R.   [99] Dell P2415Q. [Online]. Available: http://
three-dimensional image based on integral Pappu, C. D. Sparrell, and S. A. Benton, accessories.ap.dell.com/
photography,” Appl. Opt., vol. 36, “Scaling up the MIT holographic video [ 100] J. Arai et al., “Integral three-dimensional
pp. 1598–1603, Mar. 1997. system,” Proc. SPIE, vol. 2333, pp. 374–380, television with video system using pixel-
[64] J.-Y. Son, V. V. Saveljev, Y.-J. Choi, J.-E. Bahn, Feb. 1995. offset method,” Opt. Exp., vol. 21, no. 3,
S.-K. Kim, and H.-H. Choi, “Parameters for [81] J. Y. Son, S. A. Shestak, S. K. Kim, pp. 3474–3485, 2013.
designing autostereoscopic imaging systems J.-H. Chun, and V. M. Epikhan, “Multichannel [101] K. Yamamoto, Y. Ichihashi, T. Senoh, R.
based on lenticular, parallax barrier, and acousto-optic Bragg cell for real-time Oi, and T. Kurita, “3D objects enlargement
integral photography plates,” Opt. Eng., electroholography,” Appl. Opt., vol. 38, no. 14, technique using an optical system and
vol. 42, no. 11, pp. 3326–3333, 2003. pp. 3101–3104, 1999. multiple SLMs for electronic holography,”
[65] B.-R. Lee, J.-J. Hwang, and J.-Y. Son, [82] S. A. Benton and V. M. Bove, Holographic Opt. Exp., vol. 20, no. 19, pp. 1137–21144,
“Characteristics of composite images in MV Imaging. Hoboken, NJ, USA: Wiley, 2008. 2012.
and IP,” Appl. Opt., vol. 51, no. 21,
[83] K. Maeno, N. Fukaya, O. Nishikawa, K. Sato, [102] Samsung Develops ‘11K’ Super-Resolution
pp. 5236–5243, Jul. 2012.
and T. Honda, “Electro-holographic display Display Along With 13 Companies...Putting
[66] H. Isono, M. Yasuda, and H. Sasaza, using 15-megapixel LCD,” Proc. SPIE, 26.5 Million USD for 5 Years. [Online].
“Autostereoscopic 3-D LCD display using vol. 1996, pp. 15–23. Available: http://english.etnews.
LCD-generated parallax barrier,” in Proc. com/20150710200002
12th Int. Display Res. Conf., 1992, [84] J.-Y. Son, O. Chernyshov, H. Lee, B.-R. Lee,
and M.-C. Park, “Resolution of electro- [103] T. Senoh, T. Mishina, K. Yamamoto, R. Oi,
pp. 303–306. and T. Kurita, “Viewing-zone-angle-
holographic image,” Proc. SPIE, vol. 9867,
[67] J. Harrold, D. J. Wilkes, and G. J. Woodgate, p. 98670G, May 2016, expanded color electronic holography
“Switchable 2D/3D display -solid phase DOI: 10.1117/12.2229345. system using ultra-high-definition liquid
liquid crystal microlens array,” in Proc. 11th crystal displays with undesirable light
Int. Display Workshops, 2004, pp. 1495–1496. [85] M.-C. Park, B.-R. Lee, J.-Y. Son, and O.
elimination,” J. Display Technol., vol. 7, no.
Chernyshov, “Properties of DMDs for
[68] Y. Y. Kao, Y. P. Huang, K. X. Yang, P. C.-P 7, pp. 382–390, Jul. 2011.
holographic displays,” J. Modern Opt., vol. 62,
Chao, C. C. Tsai, and C. N. Mo, “An auto- no. 19, pp. 1600–1607, 2015. [104] T. Kozacki, G. Finke, P. Garbat, W. Zaperty,
stereoscopic 3D display using tunable liquid and M. Kujawińska, “Wide angle
crystal lens array that mimics effects of GRIN [86] K. Machida et al., “Three-dimensional image holographic display system with
lenticular lens array,” in SID Dig., 2009, reconstruction with a wide viewing-zone- spatiotemporal multiplexing,” Opt. Exp.,
pp. 111–114. angle using a GMR-based hologram,” in vol. 20, no. 25, pp. 27473–27481, 2012.
Digit. Holography 3D Imag. Tech. Dig., 2013,
[69] Y. H. Won, J. Kim, C. Kim, D. Shin, J. Lee, paper DTh2A.5. [105] Y. Sando, D. Barada, and T. Yatagai,
and G. Koo, “3D multi-view system using “Holographic 3D display observable for
electro-wetting liquid lenticular lense,” Proc. [87] D. Pasltis and F. Mok, “Holographic memories,” multiple simultaneous viewers from all
SPIE, vol. 9867, p. 986702, May 2016, Sci. Amer., vol. 273, no. 5, pp. 70–76, 1995. horizontal directions by using a time
DOI: 10.1117/12.2225191. [88] J.-Y. Son, Y. Vashpanov, M.-S. Kim, M.-C. division method,” Opt. Lett., vol. 39, no. 19,
[70] J.-Y. Son, V. V. Saveljev, D.-S. Kim, Y.-M. Park, and J.-S. Kim, “Image light distribution pp. 5555–5557, 2014.
Kwon, and S.-H. Kim, “Three-dimensional in the multiview 3-D imaging system,” J. [106] Y. Takaki and N. Okada, “Hologram
imaging system based on a light-emitting Display Technol., vol. 68, no. 8, pp. 336–345, generation by horizontal scanning of a
diode array,” Opt. Eng., vol. 46, no. 10, Aug. 2010. high-speed spatial light modulator,” Appl.
p. 42007, Oct. 2007. [89] J.-Y. Son, V. V. Saveljev, J.-S. Kim, S.-S. Kim, Opt., vol. 48, no. 17, pp. 3255–3260, 2009.
[71] Y. Kajiki, H. Yoshikawa, and T. Honda, and B. Javidi, “Viewing zones in three- [107] J.-Y. Son, B.-R. Lee, O. O. Chernyshov,
“Ocular accommodation by super multi-view dimensional imaging systems based on K.-A. Moon, and H. Lee, “Holographic
stereogram and 45-view stereoscopic lenticular, parallax-barrier, and microlens- display based on a spatial DMD array,” Opt.
display,” in Proc. 11th Int. Display Workshops, array plates,” Appl. Opt., vol. 43, no. 26, Lett., vol. 38, no. 16, pp. 3173–3176, Aug. 2013.
1996, pp. 489–492. pp. 4985–4992, 2004. [108] J.-Y. Son, C.-H. Lee, O. O. Chernyshov,
[90] J.-Y. Son, V. V. Saveljev, K.-T. Kim, M.-C. B.-R. Lee, and S.-K. Kim, “A floating type
[72] S.-K. Kim, D.-W. Kim, Y.-M. Kwon, and
Park, and S.-K. Kim, “Comparisons of holographic display,” Opt. Exp., vol. 21,
J.-Y. Son, “Evaluation of the monocular depth no. 17, pp. 20441–20451, Aug. 2013.
cue in 3D displays,” Opt. Exp., vol. 16, no. 26, perceived images in multiview and integral
pp. 21415–21422, 2008. photography based three-dimensional [109] X. Xu et al., “Development of full-color
imaging systems,” Jpn. J. Appl. Phys., vol. 46, full-parallax digital 3D holographic display
[73] Y. Takaki and H. Nakanuma, “Improvement no. 3A, pp. 1057–1059, 2007. system and its prospects,” Proc. SPIE, vol.
of multiple imaging system used for natural 8644, p. 864409 Mar. 2013.
3D display which generates high-density [91] W.-H. Son, J. Kim, J.-Y. Son, B.-R. Lee, and
M.-C. Park, “The basic image cell in contact- [110] Y. Ito et al., “Four-wavelength color digital
directional images,” Proc. SPIE, vol. 5243, holography,” J. Display Technol., vol. 8, no.
p. 42, Nov. 2003. type multiview 3-D Imaging systems,” Opt.
Eng., vol. 52, no. 10, pp. 103107-1–103107-11, 10, pp. 570–576, Oct. 2012.
[74] K. Susami, S. Abe, Y. Kajiki, T. Endo, T. Oct. 2013. [111] D. Teng et al., “Spatiotemporal
Hatada, and T. Honda, “Ocular vergence and multiplexing for holographic display with
accommodative state to super multi-view [92] J.-Y. Son, S.-H. Kim, D.-S. Kim, B. Javidi, and
multiple planar aligned spatial-light-
stereoscopic image,” in Proc. Dimensional K.-D. Kwack, “Image-forming principle of
modulators,” Opt. Exp., vol. 22, no. 13, pp.
Image Conf. Oper. Committee Dimensional integral photography,” J. Display Technol.,
15791–15803, 2014.
Image Conf., 2000, pp. 155–158. vol. 4, no. 3, pp. 324–331, Sep. 2008.
[112] J.-Y. Son and M. Jeong, “The effects of MPE
[75] T. Balogh et al., “The holovizio system–new [93] B.-R. Lee, J.-C. Park, I.-K. Jeong, and J.-Y. on electro-holographic displays,” Opt. Exp.,
opportunity offered by 3D displays,” in Proc. Son, “Properties of a super-multiview vol. 22, no. 3, pp. 2207–2215, 2014.
TMCE, 2008. image,” Proc. SPIE, vol. 9117, p. 91170Z, Aug. [113] [Online]. Available: http://www.sony.com/
2014. electronics/projector/mp-cl1
[76] H. Mizushina, J. Nakamura, Y. Takaki, and
H. Ando, “Increase in accommodation range [94] [Online]. Available: http://www.ti.com/dlp [114] Experimentally Obtained With a Toppan’s
induced by super multi-view displays,” ITE [95] W. J. Dallas, Computer Generated Holograms. Lenticular Plate With 1 mm Pitch, Which Was
Tech. Rep. 3724, pp. 1–4, 2013. Berlin, Germany: Springer-Verlag, 1980. Made at Year 2000.


ABOUT THE AUTHORS

Jung-Young Son (Member, IEEE) received the B.Eng. degree in avionics from the Korea National Aviation University, Goyang, South Korea, in 1973 and the M.S. degree in electronics and the Ph.D. degree in engineering science from the University of Tennessee, Knoxville, TN, USA, in 1982 and 1985, respectively.
From 1980 to 1985, he was a Graduate Research Assistant, and from 1985 to 1989, a Research Scientist at the University of Tennessee Space Institute. From 1989 to 2002, he worked at the Korea Institute of Science and Technology as a Principal Research Scientist in Optics. He is currently a Professor at the Biomedical Engineering Department, Konyang University, Nonsan, Chungnam, South Korea. His primary interests are focused on 3-D image displays, electroholography, millimeter, IR and spectral images for medical applications, and laser-based optical instrumentations and measurements.
Dr. Son is a Doctor of Technical Science and an Academician of the Academy of Technological Sciences of Ukraine, a Fellow of the International Society for Optics and Photonics (SPIE) and the Optical Society of Korea, and a member of Sigma Xi, Phi Kappa Phi, and the Optical Society of America. He is also an Associate Editor of the Journal of Display Technology and Optical Engineering.

Hyoung Lee received the M.S. degree in signal processing and multimedia engineering from Daegu University, Gyeongsangbuk-do, South Korea, in 2010 and completed a Doctoral program course in electrical and electronic engineering at Yonsei University, Seoul, South Korea, in 2013.
From 2010 to 2016, he worked at the Imaging Media Research Center, Korea Institute of Science and Technology. He is currently a Researcher at the Universal Media Laboratory, Konyang University, Nonsan, Chungnam, South Korea. His interests are 3-D image displays, human factors in SMV, plenoptic images, and laser-based optical instrumentations and measurements.

Beom-Ryeol Lee received the B.S. and M.S. degrees in electronics engineering from Jeonbuk National University, Jeonju, South Korea, in 1987 and 1989, respectively, and the Ph.D. degree in electronics and information engineering from Kunsan National University, Kunsan, South Korea, in 2013.
Since 1989, he has worked as a Principal Researcher for ETRI. He is currently an Assistant Professor at the University of Science and Technology, Daejeon, South Korea. His interests include super multiview images and digital holographic content.

Kwang-Hoon Lee received the M.S. degree in physics from Soonchunhyang University, Chungcheongnam-do, South Korea, in 2002 and the Ph.D. degree in advanced technology fusion from Konkuk University, Seoul, South Korea, in 2012.
He is a Senior Research Scientist at the Spatial Optical Information Research Center, Korea Photonics Technology Institute (KOPTI), Gwangju, South Korea. His research interests include general optics, Fourier optics, light-field and digital holographic displays, and human factors in space displays based on human visual systems.
