
2011 International Conference on Computer Science and Network Technology

Large Screen Multi-Touch System Integrated With Multi-Projector
Xiaoqiong Wang
School of Information
Guizhou Institute of Finance and Economics
Guiyang, China
E-mail: 125298835@qq.com

Liwei Wang
School of Economics and Management
Guizhou Normal University
Guiyang, China
E-mail: l_w_wang@163.com
Abstract-A large screen multi-touch system integrated with multi-projector is presented in this paper. It uses multiple infrared cameras behind the screen to capture the image changes where multiple users' fingers contact the surface, filters out the visible light images projected by the projectors, collects the remaining infrared image and transfers it to a computer, then detects and tracks the various multi-finger operations via image recognition algorithms. The experimental results show that this system allows multiple users to use their fingers to accurately simulate mouse click actions, with synchronous positioning and trigger detection, and to select, move, rotate and zoom target objects on the large screen. This system has wide application prospects in real-time human-machine interaction.
Keywords: multi-touch control; large screen; integrated with multi-projector; KNN algorithm
I INTRODUCTION
With the increasing popularity of information technology, multimedia, visualization and large screen display applications, traditional human-machine interaction devices such as keyboards, mice and joysticks show serious limitations in the naturalness and friendliness of human-machine interaction. It is therefore necessary to develop interaction modes that comply with users' communication habits. Multi-touch technology, which controls the contents displayed on the screen via the fingers, enables a computer to accept multi-touch human-machine interaction operations on the screen without traditional input devices.
At present, the typical products in the multi-touch control technology industry are Diamond Touch, FTIR Touch, and Microsoft Surface.
Diamond Touch [1] is a hardware/software platform supporting multi-user concurrent input and gesture interaction, developed by Mitsubishi Electric Research Laboratories in 2004. It works on the principle of electrical induction and mounts a large number of antennas under the touch panel, each antenna carrying a specific signal. Through the conductivity of the user's own body and his chair, the signal can be transmitted to a separate receiver assigned to each user. When the user touches the panel, a minute amount of signal is transmitted among the antennas near the contact
point, the user's body and the receiver. The platform has low touch accuracy, its signal transfer mode restricts the scope of user activity and the screen display area, and it therefore has significant limitations.
FTIR (Frustrated Total Internal Reflection) Touch [2] is a multi-touch hardware platform designed at New York University in 2006. It adopts the frustrated total internal reflection technique: the uneven surface of the fingers scatters the light beam, the scattered light passes through the touch screen to a photoelectric sensor, and the sensor converts the light signals into electrical signals, from which the system obtains the corresponding touch information. The platform does not require a closed box and captures contacts with high contrast, but it requires high performance hardware (LEDs and a compliant surface layer) and cannot recognize marked objects. To date it can only be used in projection display systems.
Microsoft Surface [3] is a smart desktop supporting multi-touch and gesture input, introduced by Microsoft in 2007. It adopts image processing technology to implement multi-touch: five cameras inside the Surface platform capture the infrared light reflected when fingers touch the desktop, and the touch points are then located according to the positions of the image spots captured. These cameras can also identify objects bearing special markers placed on the table.
As the above shows, current multi-touch technology is mostly based on single-projector systems, whose platform scale and screen display area are small. The large screen multi-touch system presented in this paper projects and blends the light of multiple projectors onto one large screen, uses multiple infrared cameras behind the screen to capture image changes on the multi-finger contact surface, identifies the finger touch points, and thus supports multi-touch operation on a large screen. Using algorithms based on machine vision and image detection and tracking, it first analyzes the screen areas contacted by the fingers with an image contour transform algorithm, determines the center coordinates of the feature points with a center computation algorithm, identifies and tracks corresponding feature points in two adjacent images using the k-nearest neighbor (KNN) algorithm, then performs event detection and semantic association, and accomplishes the operations of selecting, moving, rotating and zooming the contents on the large screen via the fingers.
II SYSTEM COMPONENTS
A large screen multi-touch system integrated with
multi-projector consists of seven units; the hardware
architecture is shown in Figure 1:

Figure 1. Multi-Touch System


First is the multi-touch box structure, which houses the system components and creates an optically closed environment (box width 6.67 m, height 2.0 m, depth 1.5 m).

Second are three infrared cameras installed on the bottom of the box, fitted with infrared filters so that they block the visible light image projected onto the screen and capture only the infrared image behind it.

Third is a set of four infrared lamps installed around the infrared cameras. This auxiliary infrared light source increases the brightness and contrast of the captured pictures and enhances the precision of the coordinate analysis.

Fourth are three parallel projectors, each projecting a screen area of 2.67 m x 2.0 m with 25% overlap at the junctions. Using edge fusion (blending) technology to implement seamless splicing, the screen area projected by the three parallel projectors reaches 6.67 m x 2.0 m (three 2.67 m channels minus two overlap regions of about 0.67 m each).

Fifth are three optical mirrors installed at an angle on the bottom of the box; projecting the projector images onto the rear projection screen via the mirrors reduces the required box thickness.

Sixth is the large rear projection screen, a sheet of tempered glass affixed with a rear projection film.
Seventh is the image processing computer, with a four-core CPU, two graphics cards, 1 GB of memory, and a four-way video capture card.
The application software includes: a three-channel edge-blending and infrared image acquisition module; an RS232 port control module for the projectors; an image enhancement and filtering module; a touch detection and movement tracking module; an action semantic interpretation dictionary module; an interactive image control module; a system management module; etc.
The working principle of the system is as follows. The host computer outputs the graphic image to the projectors, which project the image via the mirrors onto the rear projection screen at the front of the box. The cameras located at the back wall of the box each capture the screen image of their corresponding zone. When an operator touches the image on the rear projection screen, the contacts corresponding to his fingers appear as bright spots in the image captured by the infrared cameras; these bright spots record, in order, the touch event process on the current screen. The host computer collects and digitizes the infrared camera signal, calculates the center positions of the bright spots (there is more than one finger touch point when several users operate), gives each one a touch ID, tracks the movement of every bright spot, and finally maps each active touch ID to the corresponding image control action such as press, move or pop-up through a pre-established action semantic analysis and interpretation dictionary.
III THE DETECTING AND TRACKING OF TOUCH POINTS
The system software was developed in VS2005 + OpenCV. OpenCV is an open source computer vision library used to process images and video streams. Software processing generally includes three steps: first, the original input images captured by the cameras are preprocessed with smoothing, filtering and quantizing functions; second, by detecting and tracking the processed feature points, their center positions are calculated and interpreted as a variety of input states; third, the input positions, status and other information are sent to the upper application, which converts the touch actions into messages such as press, move and pop-up, then projects the final results onto the display screen surface, completing the operations of moving, rotating and zooming the objects on the touch screen. The processing flow [4] is shown in Figure 2.
Figure 2. Flowchart of Multi-Touch System
A. Capture Images
When fingers touch the large screen, the contact areas and non-contact areas show different gray levels or colors in the image, as shown in Figure 3. The left image is the original image captured by a camera; the right one is the result after preprocessing, in which the gray values of the feature points are close to 255.
Figure 3. Images Collected From the Multi-Point Touch Screen
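To make this preprocessing step concrete, the following is a minimal sketch in C++ using the modern OpenCV API. The authors built on VS2005-era OpenCV, so the calls here are illustrative rather than their actual code, and the file name and threshold value are assumed placeholders. It smooths a raw infrared frame and thresholds it so that finger contact spots approach gray value 255:

#include <opencv2/opencv.hpp>

int main() {
    // Load one raw infrared frame captured behind the screen
    // ("ir_frame.png" is a placeholder file name).
    cv::Mat raw = cv::imread("ir_frame.png", cv::IMREAD_GRAYSCALE);
    if (raw.empty()) return 1;

    // Smooth to suppress sensor noise before quantizing.
    cv::Mat smooth;
    cv::GaussianBlur(raw, smooth, cv::Size(5, 5), 0);

    // Threshold so pixels brighter than the cutoff become 255 (contact
    // spots) and the rest become 0; the cutoff of 200 is an assumed
    // value that would be tuned for the real camera setup.
    cv::Mat binary;
    cv::threshold(smooth, binary, 200, 255, cv::THRESH_BINARY);

    cv::imwrite("preprocessed.png", binary);
    return 0;
}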
B. Point Detecting
The purpose of point detection is to determine the fingers' contact areas on the screen. The system sets up an infrared light source to illuminate the objects touching the screen, while the cameras behind the screen, fitted with infrared filters, collect the infrared light reflected or scattered back. When fingers touch the screen, the reflected infrared light is significantly enhanced, so the feature points can easily be detected. The detected pixels are divided into two categories by their own values: foreground (generally nonzero pixel value) and background (pixel value 0). During processing, the feature point detection function is mainly used to separate out and extract the pixels of the same gray level from the image, after which the contour map of the feature points is produced by calling a contour function, as shown in Figure 4. Each bright spot in the left image represents a possible feature point; the right image is the contour map generated from the feature points on the left.
Figure 4. Detected Feature Points and Contours Generated
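A sketch of this foreground extraction and contour generation, under the same assumptions as above (modern OpenCV API as a stand-in for the authors' implementation, placeholder file names, and an assumed minimum-area filter):

#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    // Binary image produced by the preprocessing step (placeholder name).
    cv::Mat binary = cv::imread("preprocessed.png", cv::IMREAD_GRAYSCALE);
    if (binary.empty()) return 1;

    // Group the nonzero (foreground) pixels into connected contours,
    // one per candidate finger contact region.
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(binary, contours, cv::RETR_EXTERNAL,
                     cv::CHAIN_APPROX_SIMPLE);

    // Discard tiny contours that are more likely noise than fingers;
    // the area cutoff is an assumed tuning parameter.
    std::vector<std::vector<cv::Point>> features;
    for (const auto& c : contours)
        if (cv::contourArea(c) > 20.0)
            features.push_back(c);

    // Draw the contour map of the surviving feature points (cf. Figure 4).
    cv::Mat contourMap = cv::Mat::zeros(binary.size(), CV_8UC1);
    cv::drawContours(contourMap, features, -1, cv::Scalar(255), 1);
    cv::imwrite("contours.png", contourMap);
    return 0;
}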
The detected feature points are dispersed throughout the image, so the system determines the center coordinates of each one through a central point calculation algorithm, as illustrated in Figure 5. (X_center, Y_center) represents the center coordinates of a feature point [5], calculated using Equation (1):
Figure 5. Calculation of the Center Coordinates
$$X_{center} = \frac{1}{n}\sum_{i=1}^{n} x_i, \qquad Y_{center} = \frac{1}{n}\sum_{i=1}^{n} y_i \qquad (1)$$

where (x_i, y_i), i = 1, ..., n, are the coordinates of the n pixels making up the feature point's region.
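In OpenCV, the centroid of Equation (1) can be computed from a contour's spatial moments. The helper below is a minimal illustration of this, not the authors' code, exercised on a synthetic contour:

#include <opencv2/opencv.hpp>
#include <cstdio>
#include <vector>

// Center of one detected feature point, per Equation (1): the spatial
// moments give Xcenter = m10/m00 and Ycenter = m01/m00, the centroid
// of the contour region.
cv::Point2f contourCenter(const std::vector<cv::Point>& contour) {
    cv::Moments m = cv::moments(contour);
    if (m.m00 == 0)                        // degenerate (tiny) contour
        return contour.empty() ? cv::Point2f(0.f, 0.f)
                               : cv::Point2f(contour[0]);
    return cv::Point2f(float(m.m10 / m.m00), float(m.m01 / m.m00));
}

int main() {
    // A synthetic square contour standing in for one finger spot.
    std::vector<cv::Point> spot = {{10, 10}, {20, 10}, {20, 20}, {10, 20}};
    cv::Point2f c = contourCenter(spot);
    std::printf("touch point center: (%.1f, %.1f)\n", c.x, c.y);  // (15.0, 15.0)
    return 0;
}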


C. Point Tracking
In order to identify the finger touch points and their actions, the system must track the movement and status of the touch points after detecting them. Touch point tracking mainly includes identifying the corresponding touch points, detecting the touch events given out by the contacts, and converting low-level driver events into higher-level application events.

Identifying the corresponding touch points means identifying the touch points generated by the same finger. This requires matching, among the many touch points of two consecutive images, those generated by the same finger on the touch screen, based on the known information. The system uses the k-nearest neighbor (KNN) algorithm [6] to identify the corresponding feature points and perform the corresponding cluster processing. The basic idea of the KNN algorithm is: given a new touch point, consider the K nearest (most similar) points among the original touch points, and determine the category of the new touch point according to the types of these K points. The specific algorithm [7] is as follows:

1) Take the set of all touch points from the previous frame as A, and the set of all touch points in the current frame as B.
2) For every point a in A, calculate the distance from a to every point b in B, using the Euclidean distance, and store the distances in an array E.
3) Weed out the overly long distances in E.
4) Sort E in ascending order of distance.
5) Starting from the head of E, before removing a distance, first check whether its two end points have already been successfully paired; if they have not, mark both ends as the same touch point.
6) Repeat the previous step until all distances have been removed.
This marks all the corresponding touch points in the two consecutive images and produces the corresponding touch events. The remaining unmatched points in A and B exist in only one of the frames: they are points that have just disappeared or newly appeared, and correspond to the pop-up and press events respectively. This solves the feature point matching problem and determines the corresponding operation; a sketch of the matching procedure is given below.
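The following is a compact, self-contained sketch of this greedy nearest-neighbour pairing in plain C++. The function name, the Edge structure and the distance cutoff are illustrative assumptions, not the paper's code:

#include <cmath>
#include <cstdio>
#include <vector>
#include <algorithm>

struct Pt { double x, y; };

// Pair touch points of the previous frame (A) with those of the current
// frame (B) following the steps above. Returns index pairs (i in A, j in B).
std::vector<std::pair<int, int>> matchPoints(const std::vector<Pt>& A,
                                             const std::vector<Pt>& B,
                                             double maxDist /* prune cutoff */) {
    struct Edge { double d; int i, j; };
    std::vector<Edge> E;
    // All pairwise Euclidean distances from A to B ...
    for (int i = 0; i < (int)A.size(); ++i)
        for (int j = 0; j < (int)B.size(); ++j) {
            double d = std::hypot(A[i].x - B[j].x, A[i].y - B[j].y);
            if (d <= maxDist)                 // ... weeding out long distances
                E.push_back({d, i, j});
        }
    // Sort ascending by distance.
    std::sort(E.begin(), E.end(),
              [](const Edge& a, const Edge& b) { return a.d < b.d; });
    // Greedily pair endpoints that have not been matched yet.
    std::vector<bool> usedA(A.size(), false), usedB(B.size(), false);
    std::vector<std::pair<int, int>> pairs;
    for (const Edge& e : E)
        if (!usedA[e.i] && !usedB[e.j]) {
            usedA[e.i] = usedB[e.j] = true;
            pairs.push_back({e.i, e.j});      // same finger in both frames
        }
    // Points left unmatched in A are pop-up (released) fingers; points
    // left unmatched in B are press (new) fingers; handled by the caller.
    return pairs;
}

int main() {
    std::vector<Pt> prev = {{100, 100}, {300, 200}};
    std::vector<Pt> curr = {{104, 98}, {500, 400}};
    for (auto [i, j] : matchPoints(prev, curr, 50.0))
        std::printf("A[%d] -> B[%d]\n", i, j);    // A[0] -> B[0]
    return 0;
}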
IV SEMANTIC INTERPRETATION OF TOUCH EVENTS AND IMAGE MANIPULATION CONTROL [8]
By tracking the touch points, each touch feature point on the screen gets a unique ID. As long as a finger continues to touch the screen, the system associates the same touch ID with that finger; when the finger leaves the screen surface, the system releases the touch ID. A dictionary is then needed to map each active touch ID to the corresponding image manipulation actions such as press, move and pop-up. The system creates a class called PictureTrackerManager that contains this dictionary and handles the various touch events. Whenever a touch event is triggered, it tries to find the PictureTracker instance associated with it to handle the event. The process needs to consider the following different scenarios.
1) When the finger press event occurs, there are 3 options:
a) A finger touches an empty place: no event occurs.
b) A finger touches a new image: a new PictureTracker instance is created, along with a new entry in the touch ID mapping.
c) A second (or further) finger touches an already tracked image: the new touch ID is associated with the same PictureTracker instance.
2) When the finger move event occurs, there are 2 options:
a) The finger's touch ID is not associated with a PictureTracker: no event occurs.
b) The finger's touch ID is associated with a PictureTracker: the event is forwarded to it.
3) When the finger pop-up event occurs, there are 2 options:
a) A finger touch ID is deleted but at least one associated touch ID remains: the entry is deleted from the touch ID mapping.
b) The last related touch ID is deleted: the entry is deleted from the touch ID mapping, and the image tracker, no longer used, is treated as garbage to be collected.
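These scenarios reduce to maintaining a dictionary from active touch IDs to tracker instances. The minimal sketch below borrows the paper's PictureTrackerManager name, but the class bodies are illustrative guesses rather than the authors' implementation; reference counting via shared_ptr stands in for the garbage collection mentioned above:

#include <cstdio>
#include <map>
#include <memory>

// Minimal stand-in for the per-image tracker described in the paper.
struct PictureTracker {
    void onMove(int id, double x, double y) {
        std::printf("image moved by touch %d to (%.0f, %.0f)\n", id, x, y);
    }
};

class PictureTrackerManager {
    // Dictionary mapping each active touch ID to its tracker instance.
    std::map<int, std::shared_ptr<PictureTracker>> byTouchId;

public:
    // 1) Press: empty place -> no entry; new image -> new tracker;
    //    extra finger on a tracked image -> share the existing tracker.
    void onPress(int id, std::shared_ptr<PictureTracker> hitImage) {
        if (hitImage) byTouchId[id] = hitImage;   // scenarios 1b / 1c
    }                                             // scenario 1a: do nothing

    // 2) Move: forward only if the ID is associated with a tracker.
    void onMove(int id, double x, double y) {
        auto it = byTouchId.find(id);
        if (it != byTouchId.end()) it->second->onMove(id, x, y);  // 2b
    }                                                             // 2a: ignore

    // 3) Pop-up: drop the entry; when the last shared reference is gone,
    //    the tracker is reclaimed automatically (cf. garbage collection).
    void onRelease(int id) { byTouchId.erase(id); }               // 3a / 3b
};

int main() {
    PictureTrackerManager mgr;
    auto img = std::make_shared<PictureTracker>();
    mgr.onPress(1, img);       // first finger on an image
    mgr.onPress(2, img);       // second finger, same image
    mgr.onMove(1, 120, 80);
    mgr.onRelease(1);
    mgr.onRelease(2);          // last ID released; tracker reclaimed
    return 0;
}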
V CONCLUSIONS
This article describes the working principle, the hardware architecture and the main software blocks of the large screen multi-touch system. It uses edge blending technology to seamlessly splice the images of three projectors into a very large screen, effectively configures the spatial locations of multiple cameras and the auxiliary infrared light source, and optimizes software modules such as image fusion and the touch point identification and tracking algorithms, thereby improving the real-time performance and accuracy of the operations by which fingers select, move, rotate and zoom objects on the multi-touch large screen. It can be widely used in intelligent interaction, media display, monitoring and scheduling, and many other areas of real-time human-computer interaction. Currently, the large screen multi-touch system integrated with multi-projector has been successfully applied in local museums, planning exhibition halls and other multimedia presentation and digital entertainment venues.
REFERENCES
[1] Dietz P and Leigh D, "DiamondTouch: A Multi-User Touch Technology," Proceedings of ACM UIST '01, Orlando, FL, November 2001, pp. 115-124.
[2] Jefferson Y Han, "Low-cost multi-touch sensing through frustrated total internal reflection," ACM UIST, 2005, pp. 115-118.
[3] Ringel M, Ryall K, Shen C, et al., "Release, Relocate, Reorient, Resize: Fluid Techniques for Document Sharing on Multi-User Interactive Surfaces," CHI 2004 Extended Abstracts, 2004, pp. 1441-1444.
[4] Lu Ruxi, "Design and Implementation of Large Interactive Smart Touch Screen," Modern Display, 2007, 52(12), pp. 31-35.
[5] Wang Feng, Ren Xiangshi and Liu Zhen, "A Robust Blob Recognition and Tracking Method in Vision-based Multi-touch Technique," ISPA '08 International Symposium, 2008, pp. 971-974.
[6] Li Hua and Jiao Jianmin, "Simplified PSO-KNN classification algorithm," Computer Engineering and Applications, 2008, 44(32), pp. 57-60.
[7] Xu Xianglin, "Multi-touch research on infrared scanning combined with camera technology," Electronics, 2009, pp. 8-10.
[8] Xue Yuliang, "Interactive large-screen projection display system design," Master's thesis, Zhejiang University, May 2006.
