
MULTI TOUCH SCREENS

Abstract

The way we use computers today will soon change. The technology of the future will allow
us to interact with the computer on a whole different level from what we are used to. The
tools we use to communicate with the computer, such as the mouse and the keyboard, will
soon disappear and be replaced with tools that are more comfortable and more natural for
human beings to use. That future is already here. This report describes multi-touch
technology and its applications.

The rate at which touch-screen hardware and applications are adopted is growing rapidly
and will break new ground in the years to come. This new technology requires new ways of
detecting inputs from the user – inputs made out of on-screen gestures rather than by
pressing buttons or rolling mouse wheels.

The traditional way of interacting with a computer is by using a mouse or a keyboard. We
provide the computer with inputs more or less through the use of buttons. Regardless of the
input type, the computer can more or less handle only one input at a time, which makes
input handling and sorting very easy.

However, multi-touch is as far from single-input handling as one can come. The number of
concurrent events in this interface is limited only by the data type holding the number of
finger inputs. The number of simultaneous users is pretty much unlimited in the same way,
which of course comes in handy for larger-scale display systems. That number of potentially
synchronous inputs requires new ways to detect the inputs. Since these are not the kind of
on/off inputs we are used to in the traditional sense, new ways are needed to interpret and
analyze the input types and the gesture(s) they make up.
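As a sketch of what this kind of input handling involves, each active finger can be tracked under a unique touch ID, so any number of simultaneous contacts can be held at once. The event format and names below are hypothetical, not a real driver API:

```python
# Minimal sketch of multi-touch state tracking (hypothetical event tuples,
# not a real driver API): each active finger is tracked by its touch ID.

class TouchTracker:
    def __init__(self):
        self.active = {}  # touch_id -> (x, y) of each finger still in contact

    def handle(self, event):
        kind, touch_id, x, y = event
        if kind in ("down", "move"):
            self.active[touch_id] = (x, y)
        elif kind == "up":
            self.active.pop(touch_id, None)

    def count(self):
        return len(self.active)

tracker = TouchTracker()
tracker.handle(("down", 1, 100, 200))   # first finger lands
tracker.handle(("down", 2, 300, 220))   # second finger lands concurrently
tracker.handle(("move", 1, 120, 210))   # first finger drags
tracker.handle(("up", 2, 300, 220))     # second finger lifts
print(tracker.count())  # 1 finger still in contact
```

The dictionary keyed by touch ID is what makes the number of concurrent contacts effectively unlimited, as the text notes.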

For an example of where using more than one button or device at a time is important in the
physical world, just think of having to type without being able to push the SHIFT key at the
same time as the character that you want to appear in upper case. There are a number of
cases where this can be of use in touch interfaces.

History

Multi-touch technology dates back to 1982, when Nimish Mehta at the University of
Toronto developed the first finger-pressure multi-touch display.

In 1983, Bell Labs at Murray Hill published a comprehensive discussion of touch-screen based
interfaces. In 1984, Bell Labs engineered a touch screen that could change images with more
than one hand. The group at the University of Toronto stopped working on hardware and
moved on to software and interfaces, expecting that they would have access to the Bell Labs
work.

A breakthrough occurred in 1991, when Pierre Wellner published a paper on his multi-touch
“Digital Desk”, which supported multi-finger and pinching motions.

Various companies expanded upon these discoveries at the beginning of the twenty-first
century. Mainstream exposure to multi-touch technology occurred in 2007, when Apple
unveiled the iPhone and Microsoft debuted surface computing. The iPhone in particular has
spawned a wave of interest in multi-touch computing, since it permits greatly increased user
interaction on a small scale. More robust and customizable multi-touch and gesture-based
solutions are beginning to become available, among them TrueTouch, created by Cypress
Semiconductor. The use of multi-touch technology is expected to rapidly become
commonplace. For example, touch-screen telephones are expected to increase from 200,000
shipped in 2006 to 21 million in 2012.

Introduction

Touch screen

All touch screens basically work like a mouse. Once the software driver for the touch
screen is installed, the touch screen emulates mouse functions. Touching the screen is
basically the same as clicking your mouse at the same point on the screen. When you touch
the touch screen, the mouse cursor moves to that point and makes a mouse click. You can
tap the screen twice to perform a double-click, and you can also drag your finger across the
touch screen to perform drag-and-drops. Touch screens normally emulate left mouse clicks;
through software, you can also switch the touch screen to perform right mouse clicks
instead.
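The mouse-emulation behaviour described above can be sketched as follows. The callback names and the double-tap threshold are assumptions for illustration, not any real driver's interface:

```python
# Sketch of touch-to-mouse emulation (hypothetical, no real driver API):
# a touch moves the cursor to the touched point and issues a left click,
# and two taps in quick succession become a double-click.

DOUBLE_TAP_WINDOW = 0.3  # seconds; assumed double-tap threshold

class MouseEmulator:
    def __init__(self):
        self.last_tap_time = None
        self.events = []  # emitted mouse events, in order

    def on_touch(self, x, y, t):
        self.events.append(("move", x, y))  # cursor jumps to the touch point
        if self.last_tap_time is not None and t - self.last_tap_time < DOUBLE_TAP_WINDOW:
            self.events.append(("double_click", x, y))
            self.last_tap_time = None
        else:
            self.events.append(("left_click", x, y))
            self.last_tap_time = t

emu = MouseEmulator()
emu.on_touch(50, 60, t=0.0)   # single tap -> move + left click
emu.on_touch(50, 60, t=0.2)   # second tap 0.2 s later -> double click
print([e[0] for e in emu.events])  # ['move', 'left_click', 'move', 'double_click']
```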

[Figure: a touch screen]

Multi touch screen

Multi-touch consists of a touch screen (screen, overlay, table, wall, etc.) or touchpad, as well
as software that recognizes multiple simultaneous touch points, as opposed to the single touch
screen (e.g. computer touchpad, ATM), which recognizes only one touch point. This effect is
achieved through a variety of means, including: heat, finger pressure, high capture rate
cameras, infrared light, optic capture, tuned electromagnetic induction, ultrasonic receivers,
transducer microphones, laser rangefinders, and shadow capture.

[Figure: a multi-touch screen]

COMPARISONS

• Touch-tablets vs Touch screens: In some ways these are two extremes of a
continuum. If, for example, you have paper graphics on your tablet, is that a display
(albeit more-or-less static) or not? What if the “display” on the touch tablet is a tactile
display rather than visual? There are similarities, but there are real differences
between touch-sensitive display surfaces, vs touch pads or tablets. It is a difference of
directness. If you touch exactly where the thing you are interacting with is, let’s call it
a touch screen or touch display. If your hand is touching a surface that is not overlaid
on the screen, let's call it a touch tablet or touch pad.
• Discrete vs Continuous: The nature of interaction with multi-touch input is highly
dependent on the nature of the discrete vs continuous actions supported. Many
conventional touch-screen interfaces are based on discrete items, such as pushing so-
called "light buttons". An example of a multi-touch interface using such discrete
actions would be a soft graphical QWERTY keyboard, where one finger holds the
shift key and another pushes the key for the upper-case character that one wants to
enter. An example of two fingers doing a coordinated continuous action would be
stretching the diagonally opposed corners of a rectangle. Between the two is a
continuous/discrete situation, such as emulating a mouse: one finger indicates
continuous position, and other fingers, when in contact, indicate mouse button
pushes.
• Degrees of Freedom: The richness of interaction is highly related to the
richness/numbers of degrees of freedom (DOF), and in particular, continuous degrees
of freedom, supported by the technology. The conventional GUI is largely based on
moving around a single 2D cursor, using a mouse, for example. This results in
2DOF. If I am sensing the location of two fingers, I have 4DOF, and so on. When
used appropriately, these technologies offer the potential to begin to capture the type
of richness of input that we encounter in the everyday world, and do so in a manner
that exploits the everyday skills that we have acquired living in it. This point is
tightly related to the previous one.
• Size matters: Size largely determines what muscle groups are used, how many
fingers/hands can be active on the surface, and what types of gestures are suited for
the device. The ability to sense the size of the area being touched can be as important
as the size of the touch surface.

• Orientation Matters - Horizontal vs Vertical: Large touch surfaces have
traditionally had problems because they could only sense one point of contact. So, if
you rest your hand on the surface, as well as the finger that you want to point with,
you confuse the poor thing. This tends not to occur with vertically mounted surfaces.
Hence large electronic whiteboards frequently use single touch sensing technologies
without a problem.
• There is more to touch-sensing than contact and position: Historically, most
touch sensitive devices only report that the surface has been touched, and where. This
is true for both single and multi touch devices. However, there are other aspects of
touch that have been exploited in some systems, and have the potential to enrich the
user experience:
1. Degree of touch / pressure sensitivity: A touch surface that can
independently and continuously sense the degree of contact for each touch
point has a far higher potential for rich interaction. Note that I use “degree of
contact” rather than pressure since frequently/usually, what passes for pressure
is actually a side effect – as you push harder, your finger tip spreads wider
over the point of contact, and what is actually sensed is amount/area of
contact, not pressure, per se. Either is richer than just binary touch/no touch,
but there are even subtle differences in the affordances of pressure vs degree.
2. Angle of approach: A few systems have demonstrated the ability to sense the
angle of the finger relative to the screen surface. In effect, this gives the
finger the capability to function more-or-less as a virtual joystick at the point
of contact. It also lets the finger specify a vector that can be projected into the
virtual 3D space behind the screen from the point of contact - something that
could be relevant in games or 3D applications.
3. Stylus and/or finger: Some people speak as if one must make a choice
between stylus vs finger. It certainly is the case that many stylus systems will
not work with a finger, but many touch sensors work with a stylus or
finger. But any user of the Palm Pilot knows that there is the potential to use
either. Each has its own strengths and weaknesses.
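The continuous two-finger "stretch" action described above, in which two fingers grab diagonally opposed corners of a rectangle, can be sketched as a scale factor computed from the two tracked touch points (the four continuous DOF mentioned above). Coordinate values here are made up for illustration:

```python
import math

# Sketch of a two-finger stretch/pinch gesture: the rectangle is rescaled
# by the ratio of the current finger distance to the initial finger distance.

def distance(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def pinch_scale(start_a, start_b, cur_a, cur_b):
    """Scale factor implied by two moving touch points (4 continuous DOF)."""
    return distance(cur_a, cur_b) / distance(start_a, start_b)

# Fingers start 100 px apart and spread to 150 px apart:
s = pinch_scale((100, 100), (200, 100), (75, 100), (225, 100))
print(s)  # 1.5 -> the rectangle grows by 50%
```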

BASIC IDEAS OF MULTI TOUCH DISPLAY

A basic touch screen has three main components: a touch sensor, a controller, and a software
driver. The touch screen is an input device, so it needs to be combined with a display and a
PC or other device to make a complete touch input system.

1. Touch Sensor

A touch screen sensor is a clear glass panel with a touch responsive surface. The touch
sensor/panel is placed over a display screen so that the responsive area of the panel covers the
viewable area of the video screen. There are several different touch sensor technologies on
the market today, each using a different method to detect touch input. The sensor generally
has an electrical current or signal going through it and touching the screen causes a voltage or
signal change. This voltage change is used to determine the location of the touch on the
screen.
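As an idealised sketch of the last step, a controller for a resistive sensor can map the measured voltage linearly onto a screen coordinate. Real controllers calibrate for non-linearity and noise; the drive voltage and resolution below are assumed values:

```python
# Idealised voltage-to-coordinate mapping for a resistive touch sensor:
# 0 V maps to the screen edge, the full drive voltage to the opposite edge.

def voltage_to_position(v_measured, v_ref, screen_extent):
    """Linear map from measured voltage to a pixel coordinate."""
    ratio = min(max(v_measured / v_ref, 0.0), 1.0)  # clamp to the panel
    return round(ratio * screen_extent)

# With a 3.3 V drive, a 1.65 V reading puts the touch at mid-screen:
x = voltage_to_position(1.65, 3.3, screen_extent=1024)
print(x)  # 512
```

The same mapping is done once per axis: the controller alternately drives the sheet horizontally and vertically to obtain both coordinates.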

2. Controller

The controller is a small PC card that connects between the touch sensor and the PC. It takes
information from the touch sensor and translates it into information that the PC can understand.
The controller is usually installed inside the monitor for integrated monitors or it is housed in
a plastic case for external touch add-ons/overlays. The controller determines what type of
interface/connection you will need on the PC. Integrated touch monitors will have an extra
cable connection on the back for the touch screen. Controllers are available that can connect
to a Serial/COM port (PC) or to a USB port (PC or Macintosh). Specialized controllers are
also available that work with DVD players and other devices.

3. Software

The driver is a software update for the PC system that allows the touch screen and computer
to work together. It tells the computer's operating system how to interpret the touch event
information that is sent from the controller. Most touch screen drivers today are mouse-
emulation drivers. This makes touching the screen the same as clicking your mouse at the
same location on the screen. This allows the touch screen to work with existing software and
allows new applications to be developed without the need for touch-screen-specific
programming. Some equipment, such as thin client terminals, DVD players, and specialized
computer systems, either does not use software drivers or has its own built-in touch
screen driver.

SEVERAL DIFFERENT APPROACHES

4-Wire Resistive

4-Wire Resistive touch screen technology is used in the touch add-ons for PC monitors and
notebooks. It is a reliable and affordable technology that is widely used by individuals and in
less demanding workplace applications. It is pressure sensitive so it responds to any input
device, including finger, gloved hand, or pen stylus.

5-Wire Resistive

5-Wire Resistive touch screen technology is used with CRT and LCD touch monitors. It is a
durable and accurate technology that is widely used in demanding workplace applications
such as point-of-sale systems, industrial controls, and medical systems. It is pressure sensitive
so it responds to any input device, including finger, gloved hand, or pen stylus.

Capacitive

Capacitive touch screen technology is used with CRT and LCD touch monitors. It is a durable
technology that is used in a wide range of applications including point-of-sale systems,
industrial controls, and public information kiosks. It has a higher clarity than resistive
technology, but it only responds to finger contact and will not work with a gloved hand or
pen stylus.

Pen Touch Capacitive

Pen Touch Capacitive touch screen technology is used with CRT and LCD touch monitors.
This screen combines durable technology with a tethered pen stylus. The screen can be set to
respond to finger input only, pen input only, or both. The pen stylus is a good choice for
signature capture, on-screen annotations, or for applications requiring precise input.

Surface Acoustic Wave

Surface Acoustic Wave touch screen technology is used with CRT and LCD touch monitors.
It is a very durable screen that is widely used in applications such as computer-based training
and information kiosk displays. The SAW screen is a good choice for applications where
image clarity is important, but it may not perform well in extremely dirty or dusty
environments. It responds to a finger or a soft rubber-tipped stylus.

Near Field Imaging

Near Field Imaging touch screen technology is used in custom LCD touch monitor
solutions. It is an extremely durable screen that is suited for use in industrial control systems
and other harsh environments. The NFI type screen is not affected by most surface
contaminants or scratches. It responds to a finger or a gloved hand.

Infrared

Infrared touch screen technology is used with plasma display solutions. This is the only type
of touch technology available for large displays such as 42-inch plasma screens. It is a
durable technology that offers high image clarity. It responds to any input device or stylus.

Of all the approaches above, the most widespread multi-touch technology in use today is
Frustrated Total Internal Reflection (FTIR).

Technology explanation of FTIR

FTIR describes the total internal reflection of light inside a certain material. Infrared light is
shined into the side of a pane of a somewhat transparent higher-index medium (acrylic,
glass, plastic) and is trapped inside this medium by the refractive index of the material. The
light sources in the FTIR case are a number of infrared diodes attached to the side of the
pane, while the TouchLight system uses one infrared illuminant shining onto the surface.
TIR occurs when a light ray travelling inside a higher-index medium strikes a surface
boundary with a lower-index medium. When a finger touches the surface of the pane, the
total internal reflection is frustrated, causing the light to scatter downwards, where it is
picked up by an IR camera.

On the side opposite the user is a camera with a visible-light filter that registers the
scattered light; there is also a projector, which projects the image onto the projection screen.
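The software behind such a setup must turn each IR camera frame into touch points: threshold the image and group bright pixels into "blobs", one per touching finger. The toy 2-D list below stands in for a camera frame; a real system would use a computer-vision library:

```python
# Sketch of FTIR blob detection: threshold an IR frame and flood-fill each
# connected bright region into a blob whose centroid is the touch point.

THRESHOLD = 128  # assumed brightness cut-off for "finger present"

def find_blobs(frame):
    rows, cols = len(frame), len(frame[0])
    seen = [[False] * cols for _ in range(rows)]
    blobs = []
    for r in range(rows):
        for c in range(cols):
            if frame[r][c] >= THRESHOLD and not seen[r][c]:
                # flood-fill one connected bright region
                stack, pixels = [(r, c)], []
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and frame[ny][nx] >= THRESHOLD
                                and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                # the blob's centroid is reported as the touch point
                cy = sum(p[0] for p in pixels) / len(pixels)
                cx = sum(p[1] for p in pixels) / len(pixels)
                blobs.append((cy, cx))
    return blobs

frame = [
    [0,   0,   0,   0,  0],
    [0, 200, 210,   0,  0],
    [0, 205, 220,   0,  0],
    [0,   0,   0, 180,  0],
]
print(len(find_blobs(frame)))  # 2 fingers detected
```

Tracking which blob in one frame corresponds to which blob in the next is what assigns the persistent touch IDs that multi-touch gestures rely on.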

APPLICATIONS

• Multi-touch screens replace traditional inputs such as the mouse, joystick, and keyboard.

• Electronic music instruments.

• Multi-touch sensors designed for robotics, enabling sensing of shape, orientation, etc.

• Multi-touch picture drawing.

• Sensor frames, e.g. for sensing identity.

• Multi-touch screens used as digital desks.

• Smartphones, e.g. Simon and the iPhone.

Future directions

Multi-touch interaction is still in the early stages of development.


Developers of the technology have suggested a variety of ways that multi-touch can be
used, including:

• Enhanced dining experience

• Security services

• Locate landmarks, plan day, uplink info to cellular phone

• Gaming

• Governmental use

• Concept mapping

• An enhanced multimedia experience

• Music composition, recording and mixing.

Conclusion

Many linguists believe that language evolved from manual gestures. Yet despite our aptitude
in using spoken language, we still use our hands to communicate. The role of the computer
has shifted from number-crunching machine to communication device, but the part played by
hand movements in communicating information to a computer and in computer-mediated
communication, has largely been limited to that of a single pointing finger. This report
has established the thesis that multi-touch interaction allows users to communicate
information to a computer faster and more fluently than single-point interaction techniques.
As computing devices permeate an ever-growing portion of our lives, new interaction
methodologies must rise to meet challenges that lie beyond the realm of today’s mouse-and-
keyboard paradigm. We believe that the versatility of multi-touch interaction will make it an
integral part of future interfaces.
We thereby conclude that multi-touch is definitely an area that will expand in the future.
Since the technology is already here, there is no reason to remain with traditional
single-input interfaces for long.

