
3D-Space Handwriting Recognition: An Interactive User Interface

Abstract: We present an input method which enables complex hands-free interaction through 3D handwriting recognition. Users can write text in the air as if they were using an imaginary blackboard. Motion sensing is done wirelessly by accelerometers and gyroscopes attached to the back of the hand. We propose a two-stage approach for the spotting and recognition of handwriting gestures. The spotting stage uses a Support Vector Machine to identify data segments which contain handwriting. The recognition stage uses Hidden Markov Models (HMMs) to generate the text representation from the motion sensor data. Individual characters are modeled by HMMs and concatenated to word models. Our system can continuously recognize arbitrary sentences, based on a freely definable vocabulary of over 8000 words. A statistical language model is used to enhance recognition performance and to restrict the search space. We report the results of a nine-user experiment on sentence recognition for person-dependent and person-independent setups on 3D-space handwriting data. A word error rate of 11% is achieved for the person-independent setup and 3% for the person-dependent setup. We evaluate the spotting algorithm in a second experiment on a realistic dataset including everyday activities and achieve a sample-based recall of 99% and a precision of 25%. We show that additional filtering in the recognition stage can detect up to 99% of the false-positive segments.
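As an illustration of the spotting stage described above, the following minimal Python sketch classifies sliding windows of 6-axis motion data with an SVM. The window length, hop size, and per-axis mean/standard-deviation features are assumptions made for the example; the abstract does not specify these details.

# Minimal sketch of an SVM-based spotting stage: classify sliding
# windows of 6-axis motion data (3 accelerometer + 3 gyroscope axes)
# as "handwriting" vs. "other". Window length, overlap, and the
# feature set are illustrative assumptions, not the paper's exact setup.
import numpy as np
from sklearn.svm import SVC

WINDOW = 64   # samples per window (assumed)
STRIDE = 32   # hop size (assumed)

def windows(signal, window=WINDOW, stride=STRIDE):
    """Yield overlapping windows from an (n_samples, 6) sensor stream."""
    for start in range(0, len(signal) - window + 1, stride):
        yield signal[start:start + window]

def features(win):
    """Simple per-axis statistics; a real system may use richer features."""
    return np.concatenate([win.mean(axis=0), win.std(axis=0)])

def train_spotter(segments, labels):
    """segments: labeled sensor windows; labels: 1 = handwriting, 0 = other."""
    X = np.array([features(w) for w in segments])
    return SVC(kernel="rbf").fit(X, labels)

def spot(model, stream):
    """Return indices of windows the SVM flags as containing handwriting."""
    return [i for i, w in enumerate(windows(stream))
            if model.predict(features(w).reshape(1, -1))[0] == 1]

Only the windows flagged here would be passed on to the HMM recognition stage, which keeps the decoder from running on non-writing motion.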

In my work, I introduce a wearable computing device for the recognition of text written in the air, as on an imaginary blackboard. For that purpose, I designed and implemented a data glove based on inertial sensors. The data glove is equipped with three orthogonal gyroscopes and three orthogonal accelerometers to measure hand motion. Data is processed and sent to a computer via Bluetooth. Based on the signals the glove delivers, I developed an HMM-based recognizer using the existing Janus Recognition Toolkit. Several experiments were performed to optimize and evaluate the system, specifically on single-digit, single-character, and word recognition. For the character recognition task, ten test persons contributed to the data collection. Writer-dependent as well as writer-independent recognition was evaluated, and the problems that arose were analyzed in detail. The writer-dependent recognition rate on single characters was 95.3% on average; the average writer-independent rate was 81.9%. Based on a small vocabulary of 100 words, first experiments in word recognition were conducted, reaching recognition rates of 96% for the writer-dependent case and 82% for the writer-independent case. Finally, a real-time demonstration system was implemented to show the functionality of the system in practice. While there has already been some research in the field of airwriting recognition, using wireless sensor-equipped pens instead of a data glove, to the best of my knowledge this is the first work on recognizing whole words written in the air.
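To illustrate how individual character HMMs can be concatenated into word models, the sketch below builds a word-level transition matrix from per-character left-to-right HMMs. The five-state-per-character topology and the way the exit state of each character is linked to the entry state of the next are illustrative assumptions; the actual recognizer is built with the Janus Recognition Toolkit, not this code.

# Hedged sketch: concatenating per-character left-to-right HMMs into a
# word model, as the abstracts describe. State count and transition
# probabilities are assumptions chosen for illustration.
import numpy as np

N_STATES = 5  # states per character model (assumed)

def left_to_right_transmat(n):
    """Left-to-right topology: each state either loops or advances."""
    A = np.zeros((n, n))
    for i in range(n - 1):
        A[i, i] = A[i, i + 1] = 0.5
    A[-1, -1] = 1.0
    return A

def concat_word_model(char_transmats):
    """Stack character HMMs into one word-level transition matrix,
    linking the exit state of each character to the next character's entry."""
    n = sum(len(A) for A in char_transmats)
    W = np.zeros((n, n))
    offset = 0
    for k, A in enumerate(char_transmats):
        m = len(A)
        W[offset:offset + m, offset:offset + m] = A
        if k < len(char_transmats) - 1:
            # split the final self-loop so the model can leave the character
            W[offset + m - 1, offset + m - 1] = 0.5
            W[offset + m - 1, offset + m] = 0.5
        offset += m
    return W

# e.g. a three-letter word such as "air" reuses the trained models
# for 'a', 'i', and 'r'
word_A = concat_word_model([left_to_right_transmat(N_STATES)] * 3)

Because word models are assembled from shared character models, the vocabulary can be redefined freely without retraining, which is what makes the large, freely definable vocabularies mentioned above practical.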

Abstract: We present a wearable input system which enables interaction through 3-dimensional handwriting recognition. Users can write text in the air as if they were using an imaginary blackboard. The handwriting gestures are captured wirelessly by motion sensors, using accelerometers and gyroscopes attached to the back of the hand. We propose a two-stage approach for the spotting and recognition of handwriting gestures. The spotting stage uses a Support Vector Machine to identify those data segments which contain handwriting. The recognition stage uses Hidden Markov Models (HMMs) to generate a text representation from the motion sensor data. Individual characters are modeled by HMMs and concatenated to word models. Our system can continuously recognize arbitrary sentences, based on a freely definable vocabulary. A statistical language model is used to enhance recognition performance and to restrict the search space. We show that continuous gesture recognition with inertial sensors is feasible for gesture vocabularies that are several orders of magnitude larger than those of known systems. In a first experiment, we evaluate the spotting algorithm on a realistic dataset including everyday activities. In a second experiment, we report the results of a nine-user study on handwritten sentence recognition. Finally, we evaluate the end-to-end system on a small but realistic dataset.
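The role of the statistical language model can be illustrated with a simple bigram model that scores candidate word sequences during decoding. The add-alpha smoothing and the scoring interface below are assumptions made for the sketch; the abstract does not describe the language model at this level of detail.

# Illustrative bigram language model: scores word sequences so the
# decoder can prefer likely sentences and prune the search space.
# Smoothing scheme and combination with HMM scores are assumptions.
import math
from collections import Counter

class BigramLM:
    def __init__(self, corpus_sentences):
        self.unigrams = Counter()
        self.bigrams = Counter()
        for sent in corpus_sentences:
            words = ["<s>"] + sent.split() + ["</s>"]
            self.unigrams.update(words)
            self.bigrams.update(zip(words, words[1:]))

    def logprob(self, prev, word, alpha=1.0):
        # add-alpha smoothing (assumed) keeps unseen bigrams finite
        v = len(self.unigrams)
        return math.log((self.bigrams[(prev, word)] + alpha) /
                        (self.unigrams[prev] + alpha * v))

    def score(self, sentence):
        """Log-probability of a candidate word sequence."""
        words = ["<s>"] + sentence.split() + ["</s>"]
        return sum(self.logprob(p, w) for p, w in zip(words, words[1:]))

During decoding, such a score would be combined with the HMM likelihoods so that competing hypotheses with similar sensor-level scores are disambiguated by how plausible they are as text.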

Abstract: Interaction with mobile devices intended for everyday use is challenging, since such systems are continuously optimized towards small outlines. Watches are a particularly critical case, as display size, processing capabilities, and weight are tightly constrained. This work presents a watch device with an integrated gesture recognition interface. We report the resource-optimized implementation of our algorithmic solution on the watch and demonstrate that the recognition approach is feasible for such constrained devices. The system is wearable during everyday activities and was evaluated with eight users, who completed questionnaires through intuitive one-hand movements. We developed a procedure to spot and classify input gestures from continuous acceleration data acquired by the watch. The recognition procedure is based on hidden Markov models (HMMs) and was fully implemented on the watch. The algorithm achieved an average recall of 79% at 93% precision in recognizing the relevant gestures. The watch implementation of continuous gesture spotting showed a delay below 3 ms for feature computation, Viterbi path processing, and final classification, at less than 4 KB memory usage.

Index Terms: Event spotting, algorithm implementation, intelligent wristwatch, recognition performance, mobile interaction, eWatch
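For context, recall and precision figures like those reported above can be computed as sketched below, by matching spotted gesture intervals against ground-truth events. The overlap-based matching criterion is an assumption; event-matching rules differ between evaluations, and the paper's exact criterion is not stated here.

# Sketch of event-level recall/precision for a gesture spotter.
# detections and ground_truth are lists of (start, end) sample intervals;
# the 50% overlap threshold is an assumed matching rule.
def event_metrics(detections, ground_truth, min_overlap=0.5):
    def overlaps(d, g):
        inter = max(0, min(d[1], g[1]) - max(d[0], g[0]))
        return inter / (g[1] - g[0]) >= min_overlap

    # ground-truth events matched by at least one detection
    hits = {i for i, g in enumerate(ground_truth)
            if any(overlaps(d, g) for d in detections)}
    # detections that match at least one ground-truth event
    true_pos = sum(any(overlaps(d, g) for g in ground_truth)
                   for d in detections)
    recall = len(hits) / len(ground_truth) if ground_truth else 0.0
    precision = true_pos / len(detections) if detections else 0.0
    return recall, precision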
