
BlueEyes Technology and Its Applications - Research Post

BlueEyes is the name of a human-recognition venture initiated by IBM to allow people to interact with computers in a more natural manner. The technology aims to enable devices to recognize and use natural input, such as facial expressions. The initial developments of this project include scroll mice and other input devices that sense the user's pulse, monitor his or her facial expressions, and track the movement of his or her eyelids. I have prepared a 4-page handout on BlueEyes and its business applications from several cited sources.

BlueEyes and Its Applications

Animal survival depends on highly developed sensory abilities. Likewise, human cognition depends on highly developed abilities to perceive, integrate, and interpret visual, auditory, and touch information. Without a doubt, computers would be much more powerful if they had even a small fraction of the perceptual ability of animals or humans. Adding such perceptual abilities to computers would enable computers and humans to work together more as partners. Toward this end, the BlueEyes project aims at creating computational devices with the sort of perceptual abilities that people take for granted.

How can we make computers "see" and "feel"? BlueEyes uses sensing technology to identify a user's actions and to extract key information. This information is then analyzed to determine the user's physical, emotional, or informational state, which in turn can be used to help make the user more productive by performing expected actions or by providing expected information. For example, a BlueEyes-enabled television could become active when the user makes eye contact, at which point the user could then tell the television to "turn on".

Most of us hardly notice the surveillance cameras watching over the grocery store or the bank. But lately those lenses have been looking for far more than shoplifters.

Applications:

1. Engineers at IBM's Almaden Research Center in San Jose, CA, report that a number of large retailers have implemented surveillance systems that record and interpret customer movements, using software from Almaden's BlueEyes research project. BlueEyes is developing ways for computers to anticipate users' wants by gathering video data on eye movement and facial expression. Your gaze might rest on a Web site heading, for example, and that would prompt your computer to find similar links and to call them up in a new window. But the first practical use for the research turns out to be snooping on shoppers. BlueEyes software makes sense of what the cameras see to answer key questions for retailers, including: How many shoppers ignored a promotion? How many stopped? How long did they stay? Did their faces register boredom or delight? How many reached for the item and put it in their shopping carts? BlueEyes works by tracking pupil, eyebrow, and mouth movement. When monitoring pupils, the system uses a camera and two infrared light sources placed inside the product display. One light source is aligned with the camera's focus; the other is slightly off axis. When the eye looks into the camera-aligned light, the pupil appears bright to the sensor, and the software registers the customer's attention. In this way, the system captures the person's income and buying preferences. BlueEyes is actively being incorporated in some of the leading retail outlets. A sketch of this bright-pupil detection idea follows.
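
The bright-pupil trick described above lends itself to a simple image-processing pipeline. The following is a minimal sketch, assuming two synchronized grayscale frames, one lit by the on-axis infrared source and one by the off-axis source, and using OpenCV; the threshold value and function names are illustrative assumptions, not part of the actual BlueEyes system.

```python
import cv2
import numpy as np

def detect_pupils(on_axis: np.ndarray, off_axis: np.ndarray) -> list[tuple[float, float]]:
    """Return centroids of candidate pupils, in pixel coordinates.

    `on_axis` is the grayscale frame lit by the camera-aligned IR source
    (pupils appear bright); `off_axis` is lit by the off-axis source
    (pupils appear dark). Everything else looks similar in both frames.
    """
    # Differencing the frames suppresses the scene and leaves mostly the pupils.
    diff = cv2.subtract(on_axis, off_axis)
    _, mask = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)  # 40 is an assumed threshold
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    centroids = []
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] > 0:  # skip degenerate specks
            centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return centroids
```

Under this scheme, a customer would register attention whenever a pupil centroid is detected while the display's camera has his or her eye in view.
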
2. Another application would be in the automobile industry. By simply touching a computer input device such as a mouse, the computer system is designed to be able to determine a person's emotional state. In cars, this could help with critical decisions, for example: "I know you want to get into the fast lane, but I'm afraid I can't do that; you're too upset right now", and therefore assist in driving safely. A hedged sketch of this kind of touch-based emotion sensing appears after this list.

3. Current interfaces between computers and humans can present information vividly, but have no sense of whether that information is ever viewed or understood. In contrast, new real-time computer vision techniques for perceiving people allow us to create "Face-responsive Displays" and "Perceptive Environments", which can sense and respond to the users viewing them. Using stereo-vision techniques, we are able to detect, track, and identify users robustly and in real time. This information can make spoken-language interfaces more robust, by selecting the acoustic information from a visually localized source. Environments can become aware of how many people are present and what activity is occurring, and therefore which display or messaging modalities are most appropriate to use in the current situation. The results of our research will allow the interface between computers and human users to become more natural and intuitive.

4. We could also see its use in video games, where it could pose individual challenges to customers playing them, typically targeting commercial business. The integration of children's toys, technologies, and computers is enabling new play experiences that were not commercially feasible until recently. The Intel Play QX3 Computer Microscope, the Me2Cam with Fun Fair, and the Computer Sound Morpher are commercially available smart-toy products developed by the Intel Smart Toy Lab. One theme that is common across these PC-connected toys is that users interact with them using a combination of visual, audible, and tactile input and output modalities. The presentation will provide an overview of the interaction design of these products and pose some unique challenges faced by designers and engineers of such experiences targeted at novice computer users, namely young children.

5. The familiar and the useful come from things we recognize. The appearance of many of our favorite things communicates their use; they show the change in their value through patina. As technologists we are now poised to imagine a world where computing objects communicate with us in situ, where we are. We use our looks, feelings, and actions to give the computer the experience it needs to work with us. Keyboards and mice will not continue to dominate computer user interfaces. Keyboard input will be replaced in large measure by systems that know what we want and require less explicit communication. Sensors are gaining the fidelity and ubiquity to record presence and actions; they will notice when we enter a space, sit down, lie down, pump iron, and so on, and pervasive infrastructure will record it. This talk will cover projects from the Context Aware Computing Group at the MIT Media Lab.
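
To make application 2 concrete, here is a hedged sketch of how touch-sensor readings from an "emotion mouse" might be mapped to a coarse emotional state. The sensor channels (pulse, skin temperature, skin response) follow the general description in this handout, but the centroid values and the nearest-centroid rule are invented for illustration; IBM's actual classifier is not described here.

```python
import math

# Hypothetical per-state centroids: (pulse bpm, skin temp C, GSR microsiemens).
# A real system would learn these per user from labeled, normalized data.
STATE_CENTROIDS = {
    "calm":     (68.0, 33.5, 2.0),
    "stressed": (92.0, 31.0, 8.5),
    "excited":  (88.0, 34.0, 6.0),
}

def classify_state(pulse: float, temp: float, gsr: float) -> str:
    """Nearest-centroid guess at the user's emotional state."""
    reading = (pulse, temp, gsr)
    return min(
        STATE_CENTROIDS,
        key=lambda s: math.dist(reading, STATE_CENTROIDS[s]),
    )

# A car could gate risky maneuvers on the result, as in application 2:
if classify_state(pulse=95.0, temp=30.8, gsr=9.1) == "stressed":
    print("Lane change deferred: driver appears too agitated right now.")
```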

Questions:

1. In the future, ordinary household devices -- such as televisions, refrigerators, and ovens -- may be able to do their jobs when we look at them and speak to them. Creatively think up an application of the IBM "emotion mouse" specifically for household devices.

2. Several correlation studies between color and one's emotional state have been discussed; provide an application wherein you could utilize BlueEyes in this context.

3. The emotion mouse could be used to display the feelings and reactions of e-commerce customers. Do you think this is a good idea or not? Substantiate.

References:

1. Retrieved on Nov 10, 2005, from www.alphaworks.ibm.com/awap.nsf/alltype/86ECE8569056DDC788256AEE007A9702
2. Retrieved on Nov 10, 2005, from www.cse.ogi.edu/CHCC/Series/text2001.html
3. BlueEyes Technology, Computer Edge, Oct. 2002, pages 23-27.
4. BlueEyes Technology and Applications, Business Solutions, Nov. 2001, pages 95-99.

Blue Eyes: Scalable and Reliable System Management for Cloud Computing

With the advent of cloud computing, massive and automated system management has become more important for the successful and economical operation of computing resources. However, traditional monolithic system management solutions are designed to scale to only hundreds or thousands of systems at most. In this paper, we present Blue Eyes, a new system management solution with a multi-server scale-out architecture that can handle hundreds of thousands of systems. Blue Eyes enables highly scalable and reliable system management by running many management servers in a distributed manner to work collaboratively on management tasks. In particular, we structure the management servers into a hierarchical tree to achieve scalability, and management information is replicated to secondary servers to provide reliability and high availability. In addition, Blue Eyes is designed to extend the existing single-server implementation without significantly restructuring the code base. Experimental results with the Blue Eyes prototype demonstrate that our multi-server system can reliably handle typical management tasks for a large number of endpoints, with dynamic load-balancing across the servers, near-linear performance gain with server additions, and acceptable network overhead.

By: Sukhyun Song; Kyung Dong Ryu; Dilma Da Silva
Published in: RC24721 in 2009
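
As a rough illustration of the two ideas this abstract emphasizes (a hierarchical tree of management servers for scale, and replication of management state to secondary servers for availability), here is a minimal sketch; the class and method names are assumptions for illustration, not the paper's actual interfaces.

```python
from __future__ import annotations

from dataclasses import dataclass, field

@dataclass
class ManagementServer:
    """One node in the management-server tree."""
    name: str
    secondary: ManagementServer | None = None      # replica for failover
    children: list[ManagementServer] = field(default_factory=list)
    state: dict[str, str] = field(default_factory=dict)

    def record(self, key: str, value: str) -> None:
        # Replicate management information to the secondary server so the
        # subtree's state survives a primary failure.
        self.state[key] = value
        if self.secondary is not None:
            self.secondary.state[key] = value

    def dispatch(self, task: str) -> None:
        # Fan a management task out down the hierarchy; in the real system,
        # the leaves would talk to the managed endpoints.
        self.record("last_task", task)
        for child in self.children:
            child.dispatch(task)

# Build a small tree: a root with a standby replica and three regional servers.
root = ManagementServer("root", secondary=ManagementServer("root-standby"))
root.children = [ManagementServer(f"region-{i}") for i in range(3)]
root.dispatch("collect-inventory")
print(root.secondary.state)   # {'last_task': 'collect-inventory'}
```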

PONG was designed and built to demonstrate a set of ideas and technologies that were developed as part of the Blue Eyes Group at IBM's Almaden Research Center in San Jose. Blue Eyes began as an Adventurous Research project into the use of sensing technologies, such as video, at the human-computer interface. The group consists of researchers in computer vision and human-computer interfaces.

By: David Koons
Published in: RJ10213 in 2001

A researcher at Stanford has created an alternative to the mouse that allows a person using a computer to click links, highlight text, and scroll simply by looking at the screen and tapping a key on the keyboard. By using standard eye-tracking hardware--a specialized computer screen with a high-definition camera and infrared lights--Manu Kumar, a doctoral student who works with computer-science professor Terry Winograd, has developed a novel user interface that is easy to operate.

"Eye-tracking technology was developed for disabled users," Kumar explains, "but the work that we're doing here is trying to get it to a point where it becomes more useful for able-bodied users." He says that nondisabled users tend to have a higher standard for easy-to-use interfaces, and previously the eye-tracking technology that disabled people use hasn't appealed to them.

At the heart of Kumar's technology is software called EyePoint that works with standard eye-tracking hardware. The software uses an approach that requires that a person look at a Web link, for instance, and hold a "hot key" on the keyboard (usually found on the number pad on the right) as she is looking. The area of the screen that's being looked at becomes magnified. Then, the person pinpoints her focus within the magnified region and releases the hot key, effectively clicking through to the link.

Kumar's approach could take eye-tracking user interfaces in the right direction. Instead of designing a common type of gaze-based interface that is controlled completely by the eyes--for instance, a system in which a user gazes at a given link, then blinks in order to click through--he has involved the hand, which makes the interaction more natural. "He's got the right idea to let the eye augment the hand," says Robert Jacob, professor of computer science at Tufts University, in Medford, MA.

Rudimentary eye-tracking technology dates back to the early 1900s. Using photographic film, researchers captured reflected light from subjects' eyes and used the information to study how people read and look at pictures. But today's technology involves a high-resolution camera and a series of infrared light-emitting diodes. This hardware is embedded into the bezel of expensive monitors; the one Kumar uses cost $25,000. The camera picks up the movement of the pupil and the reflection of the infrared light off the cornea, which is used as a reference point because it doesn't move.

Even the best eye tracker isn't perfect, however. "The eye is not really very stable," says Kumar. Even when a person is fixated on a point, the pupil jitters. So he wrote an algorithm that allows the computer to smooth out the eye jitters in real time (a sketch of one plausible smoothing approach follows below). The rest of the research, says Kumar, involves studying how people look at a screen and figuring out a way to build an interface that "does not overload the visual channel". In other words, he wanted to make its use feel natural to the user. One of the important features of the interface, says Kumar, is that it works without a person needing to control a cursor. Unlike the mouse-based system in ubiquitous use today, EyePoint provides no feedback on where a person is looking. Previous studies have shown that it is distracting to a person when she is aware of her gaze, because she consciously tries to control its location. In the usability studies that Kumar conducted, he found that people's performance dropped when he implemented a blue dot that followed their eyes.
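
The article says Kumar wrote a real-time algorithm to smooth out pupil jitter but does not describe it. An exponential moving average is one plausible stand-in; the sketch below is written under that assumption and is not Kumar's actual method.

```python
import random

class GazeSmoother:
    """Exponential moving average over raw gaze samples."""

    def __init__(self, alpha: float = 0.3):
        self.alpha = alpha       # lower alpha = heavier smoothing, more lag
        self.x = self.y = None

    def update(self, raw_x: float, raw_y: float) -> tuple[float, float]:
        if self.x is None:       # first sample initializes the filter
            self.x, self.y = raw_x, raw_y
        else:
            self.x += self.alpha * (raw_x - self.x)
            self.y += self.alpha * (raw_y - self.y)
        return self.x, self.y

# Demo: a fixation on (400, 300) with simulated pupil jitter.
smoother = GazeSmoother(alpha=0.3)
for _ in range(50):
    x, y = smoother.update(400 + random.gauss(0, 5), 300 + random.gauss(0, 5))
print(f"smoothed gaze: ({x:.1f}, {y:.1f})")   # settles close to (400, 300)
```

In the EyePoint flow described above, the smoothed gaze point would determine where to magnify when the hot key is pressed and where to click when it is released.
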


This work explores a new direction in utilizing eye gaze for computer input. Gaze tracking has long been considered an alternative or potentially superior pointing method for computer input. We believe that many fundamental limitations exist with traditional gaze pointing. In particular, it is unnatural to overload a perceptual channel such as vision with a motor control task. We therefore propose an alternative approach, dubbed MAGIC (Manual And Gaze Input Cascaded) pointing. With such an approach, pointing appears to the user to be a manual task, used for fine manipulation and selection. However, a large portion of the cursor movement is eliminated by warping the cursor to the eye-gaze area, which encompasses the target. Two specific MAGIC pointing techniques, one conservative and one liberal, were designed, analyzed, and implemented with an eye tracker we developed. They were then tested in a pilot study. This early-stage exploration showed that the MAGIC pointing techniques might offer many advantages, including reduced physical effort and fatigue as compared to traditional manual pointing, greater accuracy and naturalness than traditional gaze pointing, and possibly faster speed than manual pointing. The pros and cons of the two techniques are discussed in light of both performance data and subjective reports. A sketch of the cursor-warping idea appears below.

Attentive systems pay attention to what users do so that they can attend to what users need. Such systems track user behavior, model user interests, and anticipate user desires and actions. Because the general class of attentive systems is broad, ranging from human butlers to web sites that profile users, we have focused specifically on attentive information systems, which observe user actions with information resources, model user information states, and suggest information that might be helpful to users. In particular, we describe an implemented system, the Simple User Interest Tracker (Suitor), that tracks computer users through multiple channels (gaze, web browsing, application focus) to determine their interests and to satisfy their information needs. By observing behavior and modeling users, Suitor finds and displays potentially relevant information that is both timely and non-disruptive to the users' ongoing activities.
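
As a rough illustration of the warping idea in the MAGIC abstract above, here is a minimal sketch of the liberal policy: warp the cursor near the gaze point whenever the gaze jumps far from the cursor, then let the hand finish the fine selection. The threshold value is an assumption for illustration, not a figure from the paper.

```python
import math

GAZE_WARP_THRESHOLD = 120.0   # pixels; only large gaze shifts trigger a warp (assumed value)

def maybe_warp_cursor(cursor_pos: tuple[float, float],
                      gaze_pos: tuple[float, float]) -> tuple[float, float]:
    """Return the new cursor position under the liberal MAGIC policy."""
    if math.dist(cursor_pos, gaze_pos) > GAZE_WARP_THRESHOLD:
        return gaze_pos           # coarse travel is done by the eye
    return cursor_pos             # small moves stay fully manual

# Example: the user glances at a target across the screen, then fine-tunes.
print(maybe_warp_cursor((100.0, 100.0), (900.0, 500.0)))  # -> (900.0, 500.0), warped
print(maybe_warp_cursor((895.0, 498.0), (900.0, 500.0)))  # -> (895.0, 498.0), manual
```

The paper's conservative variant reportedly withholds the warp until the hand actually starts moving the mouse, trading some speed for fewer distracting cursor jumps.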
