
EXPERIMENT

Participants
To evaluate the performance of our system, we recruited 13 participants (7 female) from the Greater Seattle area. These participants represented a diverse cross-section of potential ages and body types. Ages ranged from 20 to 56 (mean 38.3), and computed body mass indexes (BMIs) ranged from 20.5 (normal) to 31.9 (obese).

Experimental Conditions

We selected three input groupings from the multitude of possible location combinations to test. We believe that these groupings, illustrated in Figure 7, are of particular interest with respect to interface design, and at the same time, push the limits of our sensing capability. From these three groupings, we derived five different experimental conditions, described below.

Fingers (Five Locations)
One set of gestures we tested had participants tapping on the tips of each of their five fingers (Figure 6, Fingers). The fingers offer interesting affordances that make them compelling to appropriate for input. Foremost, they provide clearly discrete interaction points, which are even already well-named (e.g., ring finger). In addition to the five fingertips, there are 14 knuckles (five major, nine minor), which, taken together, could offer 19 readily identifiable input locations on the fingers alone. Second, we have exceptional finger-to-finger dexterity, as demonstrated when we count by tapping on our fingers. Finally, the fingers are linearly ordered, which is potentially useful for interfaces like number entry, magnitude control (e.g., volume), and menu selection.

At the same time, fingers are among the most uniform appendages on the body, with all but the thumb sharing a similar skeletal and muscular structure. This drastically reduces acoustic variation and makes differentiating among them difficult. Additionally, acoustic information must cross as many as five (finger and wrist) joints to reach the forearm, which further dampens signals. For this experimental condition, we thus decided to place the sensor arrays on the forearm, just below the elbow.

Despite these difficulties, pilot experiments showed measurable acoustic differences among fingers, which we theorize is primarily related to finger length and thickness, interactions with the complex structure of the wrist bones, and variations in the acoustic transmission properties of the muscles extending from the fingers to the forearm.

Whole Arm (Five Locations)
Another gesture set investigated the use of five input locations on the forearm and hand: arm, wrist, palm, thumb and middle finger (Figure 7, Whole Arm). We selected these locations for two important reasons. First, they are distinct and named parts of the body (e.g., wrist). This allowed participants to accurately tap these locations without training or markings. Additionally, these locations proved to be acoustically distinct during piloting, with the large spatial spread of input points offering further variation.

We used these locations in three different conditions. One condition placed the sensor above the elbow, while another placed it below. This was incorporated into the experiment to measure the accuracy loss across this significant articulation point (the elbow). Additionally, participants repeated the lower placement condition in an eyes-free context: participants were told to close their eyes and face forward, both for training and testing. This condition was included to gauge how well users could target on-body input locations in an eyes-free context (e.g., driving).

Forearm (Ten Locations)
In an effort to assess the upper bound of our approach's sensing resolution, our fifth and final experimental condition used ten locations on just the forearm (Figure 6, Forearm). Not only was this a very high density of input locations (unlike the whole-arm condition), but it also relied on an input surface (the forearm) with a high degree of physical uniformity (unlike, e.g., the hand). We expected that these factors would make acoustic sensing difficult. Moreover, this location was compelling due to its large and flat surface area, as well as its immediate accessibility, both visually and for finger input. Simultaneously, this makes for an ideal projection surface for dynamic interfaces. To maximize the surface area for input, we placed the sensor above the elbow, leaving the entire forearm free. Rather than naming the input locations, as was done in the previously described conditions, we employed small, colored stickers to mark input targets. This was both to reduce confusion (since locations on the forearm do not have common names) and to increase input consistency. As mentioned previously, we believe the forearm is ideal for projected interface elements; the stickers served as low-tech placeholders for projected buttons.

Design and Setup
We employed a within-subjects design, with each participant performing tasks in each of the five conditions in randomized order: five fingers with sensors below the elbow; five points on the whole arm with the sensors above the elbow; the same points with sensors below the elbow, both sighted and blind; and ten marked points on the forearm with the sensors above the elbow.

Participants were seated in a conventional office chair, in front of a desktop computer that presented stimuli. For conditions with sensors below the elbow, we placed the armband ~3cm away from the elbow, with one sensor package near the radius and the other near the ulna. For conditions with the sensors above the elbow, we placed the armband ~7cm above the elbow, such that one sensor package rested on the biceps. Right-handed participants had the armband placed on the left arm, which allowed them to use their dominant hand for finger input. For the one left-handed participant, we flipped the setup, which had no apparent effect on the operation of the system. Tightness of the armband was adjusted to be firm, but comfortable. While performing tasks, participants could place their elbow on the desk, tucked against their body, or on the chair's adjustable armrest; most chose the latter.

PROCEDURE
For each condition, the experimenter walked through the input locations to be tested and demonstrated finger taps on each. Participants practiced duplicating these motions for approximately one minute with each gesture set. This allowed participants to familiarize themselves with our naming conventions (e.g., "pinky", "wrist"), and to practice tapping their arm and hands with a finger on the opposite hand. It also allowed us to convey the appropriate tap force to participants, who often initially tapped unnecessarily hard.

To train the system, participants were instructed to comfortably tap each location ten times, with a finger of their choosing. This constituted one training round. In total, three rounds of training data were collected per input location set (30 examples per location, 150 data points total). An exception to this procedure was in the case of the ten forearm locations, where only two rounds were collected to save time (20 examples per location, 200 data points total). Total training time for each experimental condition was approximately three minutes. We used the training data to build an SVM classifier.

During the subsequent testing phase, we presented participants with simple text stimuli (e.g., "tap your wrist"), which instructed them where to tap. The order of stimuli was randomized, with each location appearing ten times in total. The system performed real-time segmentation and classification, and provided immediate feedback to the participant (e.g., "you tapped your wrist"). We provided feedback so that participants could see where the system was making errors (as they would if using a real application). If an input was not segmented (i.e., the tap was too quiet), participants could see this and would simply tap again. Overall, segmentation error rates were negligible in all conditions, and not included in further analysis.
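The train/test protocol described above can be sketched in a few lines. The sketch below uses synthetic two-dimensional "features" and a simple nearest-centroid classifier purely to illustrate the flow (30 training examples per location, ten randomized test stimuli per location); the actual system extracted acoustic features from the sensor arrays and trained an SVM, so the location names, feature model, and classifier here are all stand-in assumptions.

```python
import random

random.seed(0)
LOCATIONS = ["thumb", "index", "middle", "ring", "pinky"]  # hypothetical set

def synth_example(loc_idx):
    # Stand-in for real acoustic features: each location forms a noisy cluster.
    return (loc_idx + random.gauss(0, 0.15), 2 * loc_idx + random.gauss(0, 0.15))

def train_centroids(examples_per_loc=30):
    # "Training": average the feature vectors collected for each location.
    centroids = {}
    for i, loc in enumerate(LOCATIONS):
        examples = [synth_example(i) for _ in range(examples_per_loc)]
        centroids[loc] = tuple(sum(d) / len(examples) for d in zip(*examples))
    return centroids

def classify(centroids, feature):
    # Predict the location whose training centroid is nearest.
    dist = lambda c: sum((a - b) ** 2 for a, b in zip(c, feature))
    return min(centroids, key=lambda loc: dist(centroids[loc]))

centroids = train_centroids()
# Testing phase: each location prompted ten times, in random order.
stimuli = [i for i in range(len(LOCATIONS)) for _ in range(10)]
random.shuffle(stimuli)
hits = sum(classify(centroids, synth_example(i)) == LOCATIONS[i] for i in stimuli)
accuracy = hits / len(stimuli)
```

With well-separated synthetic clusters the accuracy is near-perfect; the interesting part is the protocol shape, which mirrors the training-rounds-then-randomized-stimuli procedure.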


RESULTS
In this section, we report on the classification accuracies for the test phases in the five different conditions. Overall, classification rates were high, with an average accuracy across conditions of 87.6%. Additionally, we present preliminary results exploring the correlation between classification accuracy and factors such as BMI, age, and sex.

Five Fingers
Despite multiple joint crossings and ~40cm of separation between the input targets and sensors, classification accuracy remained high for the five-finger condition, averaging 87.7% (SD=10.0%, chance=20%) across participants. Segmentation, as in other conditions, was essentially perfect. Inspection of the confusion matrices showed no systematic errors in the classification, with errors tending to be evenly distributed over the other digits. When classification was incorrect, the system believed the input to be an adjacent finger 60.5% of the time; only marginally above prior probability (40%). This suggests there are only limited acoustic continuities between the fingers. The only potential exception to this was in the case of the pinky, where the ring finger constituted 63.3% of the misclassifications.

Whole Arm
Participants performed three conditions with the whole-arm location configuration. The below-elbow placement performed the best, posting a 95.5% (SD=5.1%, chance=20%) average accuracy. This is not surprising, as this condition placed the sensors closer to the input targets than the other conditions. Moving the sensor above the elbow reduced accuracy to 88.3% (SD=7.8%, chance=20%), a drop of 7.2%. This is almost certainly related to the acoustic loss at the elbow joint and the additional 10cm of distance between the sensor and input targets. Figure 8 shows these results.
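The adjacent-finger analysis reported for the five-finger condition can be reproduced mechanically from a confusion matrix. The matrix below is invented for illustration (it is not our data); the helper computes the fraction of errors that land on an adjacent finger, and the 40% uniform prior follows from the end fingers having one neighbor and the middle fingers two.

```python
# Hypothetical 5-finger confusion matrix (rows = true finger,
# columns = predicted finger); counts invented for illustration.
conf = [
    [46,  1,  1,  1,  1],
    [ 1, 45,  2,  1,  1],
    [ 1,  2, 44,  2,  1],
    [ 1,  1,  2, 45,  1],
    [ 1,  1,  1,  3, 44],
]

def adjacent_error_fraction(conf):
    # Of all misclassifications, what fraction hit an adjacent finger?
    adj = err = 0
    for t, row in enumerate(conf):
        for p, n in enumerate(row):
            if t != p:
                err += n
                if abs(t - p) == 1:
                    adj += n
    return adj / err

# Prior under uniformly distributed errors: each finger has 4 possible
# wrong answers; neighbor counts per finger are 1, 2, 2, 2, 1 -> 8/20 = 0.40.
n = len(conf)
uniform_prior = sum(1 for t in range(n) for p in range(n)
                    if t != p and abs(t - p) == 1) / (n * (n - 1))
frac = adjacent_error_fraction(conf)
```

Comparing `frac` against `uniform_prior` is exactly the test used in the text: a fraction well above 0.40 would indicate systematic acoustic confusion between neighboring fingers.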
The eyes-free input condition yielded lower accuracies than other conditions, averaging 85.0% (SD=9.4%, chance=20%). This represents a 10.5% drop from its vision-assisted, but otherwise identical counterpart condition. It was apparent from watching participants complete this condition that targeting precision was reduced. In sighted conditions, participants appeared to be able to tap locations with perhaps a 2cm radius of error. Although not formally captured, this margin of error appeared to double or triple when the eyes were closed. We believe that additional training data, which better covers the increased input variability, would remove much of this deficit. We would also caution designers developing eyes-free, on-body interfaces to carefully consider the locations participants can tap accurately.

Forearm
Classification accuracy for the ten-location forearm condition stood at 81.5% (SD=10.5%, chance=10%), a surprisingly strong result for an input set we devised to push our system's sensing limit (K=0.72, considered very strong).

Following the experiment, we considered different ways to improve accuracy by collapsing the ten locations into larger input groupings. The goal of this exercise was to explore the tradeoff between classification accuracy and the number of input locations on the forearm, which represents a particularly valuable input surface for application designers. We grouped targets into sets based on what we believed to be logical spatial groupings (Figure 9, A-E and G). In addition to exploring classification accuracies for layouts that we considered to be intuitive, we also performed an exhaustive search (programmatically) over all possible groupings.
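Such an exhaustive grouping search can be sketched as follows. This minimal version enumerates every set partition of the locations and scores each by the accuracy of the collapsed confusion matrix; the 4-location matrix is invented to keep the example small (our search ran over the ten forearm locations).

```python
def partitions(items):
    # Enumerate every set partition of `items` (Bell-number many).
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for part in partitions(rest):
        for i in range(len(part)):
            yield part[:i] + [[first] + part[i]] + part[i + 1:]
        yield part + [[first]]

def collapsed_accuracy(conf, groups):
    # A prediction counts as correct if it lands in the true location's group.
    group_of = {loc: g for g, block in enumerate(groups) for loc in block}
    total = sum(map(sum, conf))
    correct = sum(n for t, row in enumerate(conf) for p, n in enumerate(row)
                  if group_of[t] == group_of[p])
    return correct / total

# Hypothetical 4-location confusion matrix, invented for illustration.
conf = [
    [8, 2, 0, 0],
    [3, 7, 0, 0],
    [0, 0, 9, 1],
    [0, 1, 1, 8],
]

# For each group count, keep the best-scoring grouping.
best = {}
for groups in partitions(list(range(len(conf)))):
    acc = collapsed_accuracy(conf, groups)
    k = len(groups)
    if k not in best or acc > best[k][0]:
        best[k] = (acc, groups)
```

For this toy matrix, merging the two mutually confusable pairs recovers nearly all of the lost accuracy at two input groups, which is precisely the accuracy-versus-location-count tradeoff the search explores.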
For most location counts, this search confirmed that our intuitive groupings were optimal; however, the search revealed one plausible, although irregular, layout with high accuracy at six input locations (Figure 9, F).

Unlike in the five-fingers condition, there appeared to be shared acoustic traits that led to a higher likelihood of confusion with adjacent targets than distant ones. This effect was more prominent laterally than longitudinally. Figure 9 illustrates this, with lateral groupings consistently outperforming similarly arranged, longitudinal groupings (B and C vs. D and E). This is unsurprising given the morphology of the arm, with a high degree of bilateral symmetry along the long axis.

BMI Effects
Early on, we suspected that our acoustic approach was susceptible to variations in body composition. This included, most notably, the prevalence of fatty tissues and the density/mass of bones. These, respectively, tend to dampen or facilitate the transmission of acoustic energy in the body. To assess how these variations affected our sensing accuracy, we calculated each participant's body mass index (BMI) from self-reported weight and height. Data and observations from the experiment suggest that high BMI is correlated with decreased accuracies. The participants with the three highest BMIs (29.2, 29.6, and 31.9, representing borderline obese to obese) produced the three lowest average accuracies. Figure 10 illustrates this significant disparity: here, participants are separated into two groups, those with BMI greater and less than the US national median, age and sex adjusted [5] (F(1,12)=8.65, p=.013).

Other factors such as age and sex, which may be correlated with BMI in specific populations, might also exhibit a correlation with classification accuracy. For example, in our participant pool, males yielded higher classification accuracies than females, but we expect that this is an artifact of BMI correlation in our sample, and probably not an effect of sex directly.

SUPPLEMENTAL EXPERIMENTS
We conducted a series of smaller, targeted experiments to explore the feasibility of our approach for other applications. In the first additional experiment, which tested performance of the system while users walked and jogged, we recruited one male (age 23) and one female (age 26) for a single-purpose experiment. For the rest of the experiments, we recruited seven new participants (3 female, mean age 26.9) from within our institution. In all cases, the sensor armband was placed just below the elbow. Similar to the previous experiment, each additional experiment consisted of a training phase, where participants provided between 10 and 20 examples for each input type, and a testing phase, in which participants were prompted to provide a particular input (ten times per input type). As before, input order was randomized; segmentation and classification were performed in real-time.

Walking and Jogging
As discussed previously, acoustically driven input techniques are often sensitive to environmental noise. In regard to bio-acoustic sensing, with sensors coupled to the body, noise created during other motions is particularly troublesome, and walking and jogging represent perhaps the most common types of whole-body motion. This experiment explored the accuracy of our system in these scenarios.

Each participant trained and tested the system while walking and jogging on a treadmill. Three input locations were used to evaluate accuracy: arm, wrist, and palm. Additionally, the rate of false positives (i.e., the system believed there was input when in fact there was not) and true positives (i.e., the system was able to correctly segment an intended input) was captured. The testing phase took roughly three minutes to complete (four trials total: two participants, two conditions). The male walked at 2.3 mph and jogged at 4.3 mph; the female at 1.9 and 3.1 mph, respectively.

In both walking trials, the system never produced a false-positive input. Meanwhile, true-positive accuracy was 100%. Classification accuracy for the inputs (e.g., a wrist tap was recognized as a wrist tap) was 100% for the male and 86.7% for the female (chance=33%).

In the jogging trials, the system had four false-positive input events (two per participant) over six minutes of continuous jogging. True-positive accuracy, as with walking, was 100%. Considering that jogging is perhaps the hardest input filtering and segmentation test, we view this result as extremely positive. Classification accuracy, however, decreased to 83.3% and 60.0% for the male and female participants respectively (chance=33%). Although the noise generated from the jogging almost certainly degraded the signal (and in turn, lowered classification accuracy), we believe the chief cause for this decrease was the quality of the training data. Participants only provided ten examples for each of the three tested input locations. Furthermore, the training examples were collected while participants were jogging. Thus, the resulting training data was not only highly variable, but also sparse, neither of which is conducive to accurate machine learning classification. We believe that more rigorous collection of training data could yield even stronger results.
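The false-positive behavior this experiment stresses comes down to segmentation. Below is a deliberately minimal amplitude-threshold segmenter, a simplified stand-in for the system's real-time segmentation: the threshold, the refractory (debounce) window, and the sample values are all invented for illustration.

```python
def segment_taps(samples, threshold=0.5, refractory=10):
    # Report a tap onset when amplitude crosses the threshold, then
    # ignore further crossings for `refractory` samples (debouncing),
    # so one physical tap is not segmented multiple times.
    taps, last = [], -10 ** 9
    for i, s in enumerate(samples):
        if abs(s) >= threshold and i - last >= refractory:
            taps.append(i)
            last = i
    return taps

# Quiet baseline with low-level "motion noise" that stays below the
# threshold (no false positives); two tap-like spikes become onsets.
signal = [0.05] * 20 + [0.9, 0.7, 0.3] + [0.08] * 30 + [0.8] + [0.04] * 20
onsets = segment_taps(signal)  # -> [20, 53]
```

Jogging raises the noise floor toward the threshold, which is exactly where a fixed-threshold scheme begins producing the occasional false positive observed above.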

Single-Handed Gestures
In the experiments discussed thus far, we considered only bimanual gestures, where the sensor-free arm, and in particular the fingers, are used to provide input. However, there are a range of gestures that can be performed with just the fingers of one hand. This was the focus of [2], although that work did not evaluate classification accuracy. We conducted three independent tests to explore one-handed gestures.

The first had participants tap their index, middle, ring and pinky fingers against their thumb (akin to a pinching gesture) ten times each. Our system was able to identify the four input types with an overall accuracy of 89.6% (SD=5.1%, chance=25%).

We ran an identical experiment using flicks instead of taps (i.e., using the thumb as a catch, then rapidly flicking the fingers forward). This yielded an impressive 96.8% (SD=3.1%, chance=25%) accuracy in the testing phase. This motivated us to run a third and independent experiment that combined taps and flicks into a single gesture set. Participants re-trained the system, and completed an independent testing round. Even with eight input classes in very close spatial proximity, the system was able to achieve a remarkable 87.3% (SD=4.8%, chance=12.5%) accuracy. This result is comparable to the aforementioned ten-location forearm experiment (which achieved 81.5% accuracy), lending credence to the possibility of having ten or more functions on the hand alone. Furthermore, proprioception of our fingers on a single hand is quite accurate, suggesting a mechanism for high-accuracy, eyes-free input.

Surface and Object Recognition
During piloting, it became apparent that our system had some ability to identify the type of material on which the user was operating. Using a similar setup to the main experiment, we asked participants to tap their index finger against 1) a finger on their other hand, 2) a paper pad approximately 80 pages thick, and 3) an LCD screen. Results show that we can identify the contacted object with about 87.1% (SD=8.3%, chance=33%) accuracy. This capability was never considered when designing the system, so superior acoustic features may exist. Even as accuracy stands now, there are several interesting applications that could take advantage of this functionality, including workstations or devices composed of different interactive surfaces, or recognition of different objects grasped in the environment.

Identification of Finger Tap Type
Users can tap surfaces with their fingers in several distinct ways. For example, one can use the tip of their finger (potentially even their fingernail) or the pad (flat, bottom) of their finger. The former tends to be quite bony, while the latter is more fleshy. It is also possible to use the knuckles (both major and minor metacarpophalangeal joints). To evaluate our approach's ability to distinguish these input types, we had participants tap on a table situated in front of them in three ways (ten times each): fingertip, finger pad, and major knuckle. A classifier trained on this data yielded an average accuracy of 89.5% (SD=4.7%, chance=33%) during the testing period.

This ability has several potential uses. Perhaps the most notable is the ability for interactive touch surfaces to distinguish different types of finger contacts (which are indistinguishable in, e.g., capacitive and vision-based systems). One example interaction could be that double-knocking on an item opens it, while a pad-tap activates an options menu.
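As a sketch of that example interaction, the dispatcher below maps classified contact types to UI actions; the 0.4-second double-knock window, the contact-type labels, and the action names are all invented for illustration, not part of our system.

```python
DOUBLE_KNOCK_WINDOW = 0.4  # seconds; invented parameter

def make_dispatcher():
    last_knock = [-10.0]  # timestamp of the last unpaired knuckle tap
    def dispatch(contact_type, t):
        if contact_type == "knuckle":
            if t - last_knock[0] <= DOUBLE_KNOCK_WINDOW:
                last_knock[0] = -10.0
                return "open item"        # second knock completes a double-knock
            last_knock[0] = t
            return None                   # first knock: wait for a possible second
        if contact_type == "pad":
            return "show options menu"    # pad-tap activates the options menu
        return None                       # e.g., "tip" left unassigned here
    return dispatch

dispatch = make_dispatcher()
events = [dispatch("knuckle", 0.0), dispatch("knuckle", 0.3), dispatch("pad", 1.0)]
# -> [None, "open item", "show options menu"]
```

The point is that once contact type is a classifier output, distinct tap styles become distinct event streams, something capacitive sensing alone cannot provide.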

Segmenting Finger Input
A pragmatic concern regarding the appropriation of fingertips for input was that other routine tasks would generate false positives. For example, typing on a keyboard strikes the fingertips in a very similar manner to the fingertip input we proposed previously. Thus, we set out to explore whether finger-to-finger input sounded sufficiently distinct such that other actions could be disregarded.

As an initial assessment, we asked participants to tap their index finger 20 times with a finger on their other hand, and 20 times on the surface of a table in front of them. This data was used to train our classifier. This training phase was followed by a testing phase, which yielded a participant-wide average accuracy of 94.3% (SD=4.5%, chance=50%).

EXAMPLE INTERFACES AND INTERACTIONS
We conceived and built several prototype interfaces that demonstrate our ability to appropriate the human body, in this case the arm, and use it as an interactive surface. These interfaces can be seen in Figure 11, as well as in the accompanying video.

While the bio-acoustic input modality is not strictly tethered to a particular output modality, we believe the sensor form factors we explored could be readily coupled with visual output provided by an integrated pico-projector. There are two nice properties of wearing such a projection device on the arm that permit us to sidestep many calibration issues. First, the arm is a relatively rigid structure: the projector, when attached appropriately, will naturally track with the arm (see video). Second, since we have fine-grained control of the arm, making minute adjustments to align the projected image with the arm is trivial (e.g., projected horizontal stripes for alignment with the wrist and elbow).

To illustrate the utility of coupling projection and finger input on the body (as researchers have proposed to do with projection and computer vision-based techniques [19]), we developed three proof-of-concept projected interfaces built on top of our system's live input classification. In the first interface, we project a series of buttons onto the forearm, on which a user can finger tap to navigate a hierarchical menu (Figure 11, left). In the second interface, we project a scrolling menu (center), which a user can navigate by tapping at the top or bottom to scroll up and down one item respectively. Tapping on the selected item activates it. In a third interface, we project a numeric keypad on a user's palm and allow them to tap on the palm to, e.g., dial a phone number (right). To emphasize the output flexibility of our approach, we also coupled our bio-acoustic input to audio output. In this case, the user taps on preset locations on their forearm and hand to navigate and interact with an audio interface.

FUTURE

The Skinput technology has thus far been successfully demonstrated for playing the game of Tetris (see figure) and for complete control of an iPod. The average success rate found is around 96%, and efforts are underway to raise this figure as close to 100% as possible. According to the official word, the Skinput technology should be launched in the market within the next 2-7 years. The combination of these two leaves plenty of room for the development of different applications or, if we're incredibly lucky, games. I personally am looking forward to a future where all those cool spy games with armbands that do amazing things are literally on my own arm. It will feel very James Bond-like!

CONCLUSION
In this paper, we have presented our approach to appropriating the human body as an input surface. We have described a novel, wearable bio-acoustic sensing array that we built into an armband in order to detect and localize finger taps on the forearm and hand. Results from our experiments have shown that our system performs very well for a series of gestures, even when the body is in motion. Additionally, we have presented initial results demonstrating other potential uses of our approach, which we hope to further explore in future work. These include single-handed gestures, taps with different parts of the finger, and differentiating between materials and objects. We conclude with descriptions of several prototype applications that demonstrate the rich design space we believe Skinput enables.
