VERITAS
Virtual and Augmented Environments and Realistic User
Interactions To achieve Embedded Accessibility DesignS
247765
Status F (Final)
4 Draft version created and sent for peer review (December 2012).
Table of Contents
Version History Table ...................................................................... iii
Table of Contents ............................................................................ iv
List of Figures ................................................................................. vii
List of Tables ................................................................................... ix
List of Abbreviations ........................................................................ x
List of Abbreviations for VERITAS tools........................................... xi
Executive Summary....................................................................... 12
Document Overview ..................................................................................... 14
1 Introduction ............................................................................. 15
1.1 VerMIM and the VERITAS framework................................................ 15
2 Acceptability and Usability Indicators....................................... 17
2.1 Generic User Interfaces Indicators ..................................................... 17
2.1.1 User Performance................................................................................ 17
2.1.2 User Satisfaction.................................................................................. 18
2.2 Indicators for Multimodal User Interfaces Addressed to People with
Special Needs............................................................................................... 20
2.2.1 Speech- and Audio-based Interaction .................................................. 20
2.2.2 Tactile and Haptic Interaction .............................................................. 21
2.2.3 Eye- and Head- tracking Controlled Interaction .................................... 22
2.2.4 Hand and Gesture Recognition Interaction .......................................... 23
2.2.5 Vestibular Interaction ........................................................................... 24
2.2.6 Brain Controlled Interaction ................................................................. 24
2.2.7 Visual Interaction ................................................................................. 25
2.2.8 Special Issues concerning the Multimodal Interfaces Indicators........... 25
3 In Depth Analysis of the Usability and Acceptability Indicators 27
3.1 Recommendations for Modalities ....................................................... 28
3.2 Recommendations for Combined and Single Usability and
Acceptability Indicators ................................................................................. 29
3.2.1 Using target goals ................................................................................ 29
3.2.2 Using percentages ............................................................................... 29
3.2.3 Using z-scores ..................................................................................... 30
3.2.4 Using SUM: Single Usability Metric ...................................................... 30
7 Conclusions ............................................................................. 86
References .................................................................................... 87
8 Appendix ................................................................................. 91
8.1 TaskSheet Heuristic Evaluation: Multimodal Interfaces Manager
(VerMIM Evaluator) ....................................................................................... 91
8.1.1 A. Scenario: Normal users without any disabilities ............................. 92
8.1.2 B. Scenario: Users with severe visual impairments (blind users) ........ 93
8.1.3 C. Scenario: Users with mild visual impairments (Myopia) ................. 94
8.1.4 D. Scenario: Motion-impaired users (upper limb paralysis) ................ 95
List of Figures
Figure 1: The VerMIM connections with the rest of the VERITAS tools. As
depicted, the VerMIM communicates with several modality sub-modules (listed
in the right rectangle column). .......................................................................... 16
Figure 2: Example of a speech and audio-based device to interact hands-free
while on the go [25]. ......................................................................................... 20
Figure 3: Example of haptic interface, an exoskeleton [13]. ............................. 22
Figure 4: Example of an eye- and head-controlled interface [12]. .................... 23
Figure 5: Sign language recognition using Kinect [35]. .................................... 23
Figure 6: Wii balance board used as assistive technology [31]. ....................... 24
Figure 7: Matt Nagle, the first person to ever be implanted with a BrainGate
[27]. .................................................................................................................. 25
Figure 8 - Evaluation process of the heuristic evaluation ................................. 36
Figure 9: The expert RB running through Scenario 3, in which a mild visual
impairment is simulated. He is interacting with the computer using the Novint
Falcon haptic device and receives haptic feedback. ........................................ 42
Figure 10: Expert running through Scenario 4, in which he experiences how
a user with motor impairments (upper limb paralysis) would interact with the
SmartHome tool. He is wearing glasses to which an infrared lamp is fixed.
Behind the laptop screen, the WiiMote tracks the movement of the infrared
lamp and, consequently, the head movement. By moving his head he is able to
move the cursor on the screen. ........................................................................ 43
Figure 11 - Relation between number of evaluators and problems identified
according to Molich and Nielsen (1991) ........................................................... 43
Figure 12: Percentage of agreement to the questions in the first category of the
evaluation guidelines: VerMIM. ........................................................................ 45
Figure 13: Percentage of agreement in the VerMIM GUI Design category. ..... 47
Figure 14: Screenshot of the VerMIM Evaluation tool. ..................................... 48
Figure 15: Percentage of agreement for the modality Myopia. ......................... 51
Figure 16: Percentage of agreement for the Modality motor impairments. ....... 53
Figure 17: The main screen of the Smart Home Application interface, which was
used as the base for the user interaction scenario steps. ................................ 55
Figure 18: The VerMIM Evaluator tool that was used for the user test recordings
and the management of the simulation platform, responsible for simulating the
impairments to the subjects. ............................................................................. 56
Figure 19: The VerSim-GUI, which is communicating with the VerMIM Evaluator
and is responsible for simulating the various impairments to the test-users. .... 57
Figure 20: The VerMIM Evaluator report dialog that is displayed after each test-
simulation session. Durations, errors and velocity of the mouse pointer (per
scenario task) are depicted. The user is also able to save these statistics,
along with other metrics, to a file for further processing. .................................. 58
Figure 21: The testing procedure data flow. The integration with the VerSim tool
is necessary in order to perform the simulation of the impairment in the testing
environment. VerMIM and VerMIM Evaluator exchange several data during the
simulation session, such as current device state, task completion checks, etc. 59
Figure 22: The Smart Home Application that was used for the scenario. Here
the interface is depicted unfiltered, just as it was used in the Normal and
Motor Impairment sessions. ........................................................................... 66
Figure 23: The Smart Home application as it appears after the simulation of the
myopia impairment, that was applied in the mild vision impairment case. ..... 66
Figure 24: The severe glaucoma vision impairment case; most of the visual
field is occluded by blind spot areas. In such cases the virtual user is
considered as almost blind............................................................................. 67
Figure 25: The test-user using the head tracking device; the user was instructed
not to use his hands; thus any interaction with the application was based on
head motion (via the infrared led glasses) combined with voice commands
(captured by the microphone)........................................................................... 67
Figure 26: The total durations of each test. The session percentages (relative to
the total user test time) are also depicted. It is clear that a great amount of time
was consumed by the 4th session, i.e. the head tracking for the Motor
Impairment test................................................................................. 70
Figure 27: Average session duration (indicated by the number in seconds on
top of each bar). The red lines indicate the standard deviation of the duration
distribution. ....................................................................................................... 71
Figure 28: The duration overhead as a percentage relative to the Normal
session. The overhead of the haptic (Vision Mild session) is relatively small
compared to the rest, especially when compared to the usage of the head
tracker (Motor session). ................................................................................... 71
Figure 29: Distribution of the user errors per session. The majority (12 out of 13)
of the users performed the tests making an almost negligible number of errors.
......................................................................................................... 72
Figure 30: The average number of user errors per session; even the Motor
session, which involved the head tracker, achieves a mean of only 2.0 errors.
The standard deviation is indicated with the red line segments. ...................... 73
Figure 31: The normalized distance (as a percentage of the total pointer distance
travelled through the tests). The results indicate that the distances travelled are
comparable across the different modalities. ..................................................... 74
Figure 32: The average distance overhead (compared to the Normal case) of
the Vision-Mild and Motor sessions. As shown, the average overhead is
small. ................................................................................................................ 74
Figure 33: The pointer velocity of each user of the Normal, Vision-Mild and
Motor sessions. .............................................................................................. 75
Figure 34: The pair of glasses with the attached LED transmitter, which were used
as the tracking device. The depicted system can be considered low cost, as its
total cost is less than five Euros. ...................................................................... 80
List of Tables
Table 1: Usability and Acceptability Indicators ................................................. 27
Table 2: Most commonly used questionnaires regarding usability, usefulness,
satisfaction and ease of use. ............................................................................ 28
Table 3: Usability and acceptability indicator recommendations considering
modalities ......................................................................................................... 29
Table 4: Examples of usability and acceptability indicators for Veritas project
tasks ................................................................................................................. 31
Table 5: Categories of our evaluation guidelines, their descriptions and
subcategories ................................................................................................... 38
Table 6: The four scenarios with the exact task descriptions. These instructions
were printed out on instruction sheets for our experts. ..................................... 39
Table 7: Specifications of our chosen experts: Education, specialisation,
research focus .................................................................................................. 39
Table 8: Users specifications table. ................................................................. 61
Table 9: Multimodal experience of the users before the test. ........................... 62
Table 10: Test session types; each session is a different combination of a VUM
and set of activated modality tools. .................................................................. 63
Table 11: The scenario followed at each test-session. ..................................... 68
Table 12: The system usability questionnaire; the scale is from 1 to 5, where 5
indicates strong agreement with the statement. The number of test subjects is
reported in each cell (along with its translation to a percentage). ..................... 76
Table 13: Technology acceptance model questionnaire for the VerMIM tools.
Each statement answer is scaled from 1 (favourable opinion of the system) to 7
(unfavourable opinion of the system). .............................................................. 77
List of Abbreviations
Abbreviation Explanation
VR Virtual Reality
VUM Virtual User Model
Executive Summary
This document's purpose is to describe and justify the process that has been
followed for the testing and evaluation of the VERITAS Multimodal Interfaces
Tools, which have been developed as part of the work defined in WP2.8. Before
consulting this document, the reader is advised to read more about the
implemented tools and their integration in deliverables D2.8.1 [1] and
D2.8.2 [2].
The document's objective is threefold: a) to present a list of accessibility and
usability indicators which are commonly established as suitable for the
evaluation of multimodal interface systems; b) to select which of these
indicators can be applied to the evaluation of our Multimodal Interfaces system,
which is intended to assist interactions with elderly or impaired users and users
with special needs; and c) most importantly, to define proper scenarios and test
the VERITAS multimodal toolset for its acceptability and usability.
The inclusion of the accessibility and usability indicators and factors in this
document was necessary and critical for properly setting the basis of the
evaluation that has been followed. Matters such as which indicator is suitable
for which modality have to be presented first, to give the reader bibliographic
completeness on the field. The selection of the proper indicators, suitable for
testing the interactions with impaired users, was also a necessary step in order
to adapt the evaluation process to the special needs of VERITAS.
During the evaluation process, several problems were encountered. The most
difficult one was the definition of a proper scenario for the testing sequence.
The reason for this was that the simulated user group contains mostly impaired
users, including models with severe impairments (almost blind users, users
who cannot use their limbs). This resulted in the addition of several constraints
on the scenario: it should use as many modalities as possible, so as to assist a
great number of people with totally different impairment types (from the vision,
hearing, cognitive and motor domains), and within all these parameters the
scenario should stay as realistic as possible.
Another problem was that pre-existing applications could not be used without
alteration or integration with the VERITAS Multimodal Interfaces Manager (in
short: VerMIM). So either a new application framework had to be created as a
testing bed for the toolset, or a special low-level integration had to be made. As
will be shown in the respective section, the testing-bed method was followed
and was applied to a closed-source Smart Home controller application.
Proper user selection was the key to the evaluation process for the testing
pilots. Both experts and non-expert users were included in the tests in order to
provide a well-rounded evaluation of the provided toolkit. So, two VerMIM
evaluations have taken place: a) a heuristic evaluation by experts and b) a
typical-user study. Both processes are fully described in this manuscript and
their results are reported.
First, the heuristic evaluation took place, during which several problems were
identified by experts in the area of multimodal user interfaces. Their
recommendations have been taken into account and several fixes have been
applied.
Document Overview
The document is split into seven sections. Section 1 is the introduction to this
document; it defines its purpose and positions the VerMIM with respect to the
rest of the VERITAS simulation tools.
Section 2 presents a list of generic acceptability and usability indicators for user
interface systems. The inclusion of the generic indicators in this document is a
necessary step towards understanding how the evaluation of a system is
performed. The indicators in Section 2 are generic and independent of the
activated modalities.
Section 3 moves one step further by taking the acceptability and usability
indicators and assigning them to the different modality needs and to the
different target groups. Each indicator is analysed and matched to one or
more modalities. The questionnaire types that are used for such evaluations
are also discussed in this section.
Section 4 contains the heuristic evaluation of the VerMIM. This evaluation was
conducted by experts in the field and provides a thorough review of the
VerMIM and its integration with the VERITAS framework in real application
scenarios. This is the first evaluation that took place, and important feedback
was received.
Section 5 contains the user study that has been performed as part of the
VERITAS Multimodal Interfaces Manager (VerMIM) testing procedure. In this
section, the user study parameters, the subjects' properties and the application
scenario are presented, as well as the quantitative and qualitative results.
Section 6 includes the toolkit refinement actions that were necessary in order to
successfully perform the user study tests, mostly for the second evaluation
process. Moreover, the limitations of the VerMIM system are also discussed in
this section.
Finally, the conclusions that have been drawn after the VerMIM framework
tests are discussed in Section 7.
1 Introduction
This document is the third and final VERITAS deliverable document of
WP2.8. It is inextricably tied to deliverables D2.8.1 [1] and D2.8.2 [2] and
continues their work by describing the testing procedure that has been followed
in order to evaluate the performance and acceptance of the Multimodal
Interfaces Toolset.
The Multimodal Interfaces toolset and its basic tool, the VERITAS Multimodal
Interfaces Manager (in short: VerMIM1), has a two-fold purpose:
a) To integrate into one entity a list of multimodal tools (or modules) which
are used to help an impaired (virtual) user, or a (virtual) user with
restricted capabilities, interact with the application under development.
This entity is called VerMIM; it has been properly integrated with the rest
of the VERITAS simulation platform and provides solutions where the
typical unimodal interaction methods fail.
b) To automatically select the proper modality tools which have to be
activated in order to run specific scenarios where a virtual or real
impaired user interacts with applications. This procedure is called the
modality compensation process [2] and takes into account the Virtual
User Models (VUM) created in SP1 as well as the Multimodal Interaction
models created as part of A2.8.2.
By following an iterative approach, two main test sessions were planned and
executed to validate the developed Multimodal Interaction Tools and verify the
effectiveness of the overall VERITAS framework in testing multimodal
interfaces. One session was based on a typical user study and the other on a
heuristic evaluation conducted by experts in the multimodal interaction field.
The design of the VerMIM toolset has been refined and optimized according to
the outcomes of these tests.
1 VerMIM and Multimodal Interfaces Toolset Manager are used interchangeably in this
document unless otherwise stated.
model data are needed to find which of the user's modalities are affected,
in order to perform the modality compensation and replacement process or
to select the appropriate multimodal interfaces.
The Simulation Model files: these files describe in an abstract manner
the sequence of tasks of an application scenario. The simulation
model scenario results in one task sequence that is specific per
application area. The VerMIM uses this file to perform the analysis of the
modalities required by each task and then, using the Multimodal
Interfaces Models, produces alternative task sequences that depend on
alternative user modalities.
The Multimodal Interface Models: these UsiXML files are used to
produce alternative task sequences to the default sequence described in
the Simulation Model. Every Multimodal Interface Model contains
alternative task paths for a simple task. Moreover, it contains information
about the required modalities of each task.
As is also depicted in Figure 1, the VerMIM manages eight modules, each one
applied to a different modality domain and used in different impairment
situations.
[Figure 1 is a block diagram: the VERITAS Simulation Component (Core
Simulation Platform, Immersive Simulation Platform, VerSEd-3D/GUI,
VerSim-3D/GUI, IVerSim-3D, VerIM), the Virtual User Model (UsiXML), the
Simulation Model (UsiXML) and the Multimodal Interfaces Models Repository
all connect to the Multimodal Interfaces Manager (VerMIM), which in turn
manages the Speech Recognition, Haptics, Speech Synthesis, Sign Language
Synthesis, Symbolic, Screen Reader, Screen Magnifier and Head Tracking
Modules.]
Figure 1: The VerMIM connections with the rest of the VERITAS tools. As
depicted, the VerMIM communicates with several modality sub-modules (listed
in the right rectangle column).
2.1.1.4 Efficiency
One could use task completion time (2.1.1.2) to measure efficiency, i.e., the
amount of effort needed to complete a task. However, there is another, more
suitable way to measure efficiency. Typically, the number of actions and steps a
participant needs to complete a task is used to measure the amount of effort. In
order to measure cognitive and physical effort, it is important to identify the
actions to be measured and the start and end states of the task to be
completed; only successful tasks should be taken into account.
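As a sketch, the action-count efficiency indicator described above can be computed from task logs; the log format and values below are hypothetical, not taken from the VERITAS tools:

```python
# Hypothetical task logs: each entry records the number of actions taken
# and whether the task was completed successfully.
task_logs = [
    {"actions": 7, "success": True},
    {"actions": 12, "success": False},  # failed tasks are excluded
    {"actions": 9, "success": True},
]

# Efficiency indicator: mean number of actions over successful tasks only,
# since only successful tasks should be taken into account.
successful = [t["actions"] for t in task_logs if t["success"]]
mean_actions = sum(successful) / len(successful)
print(mean_actions)  # 8.0
```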
2.1.1.5 Learnability
New products require some amount of learning; learning happens as
experience increases, but it can be time consuming. Learnability can be
measured by looking at how much time it takes to become proficient with an
interface (e.g., at completing tasks with it). Learnability is very important for
interfaces and tasks that are supposed to be used regularly and over a long
period. For example, when something is learned it can become a habit, and
habits usually require less conscious interaction and thus less cognitive effort.
For multimodal interfaces this can be crucial; for example, if a person is
proficient in using a joystick, this person will be able to combine joystick
interaction with an additional interaction modality. In order to measure
learnability, data has to be collected multiple times. The expected frequency of
use should serve as a basis for how often data should be collected. Learnability
can be measured by comparing the performance data (e.g. efficiency, errors,
task completion time and task success) of repeated measurements.
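One simple way to compare repeated measurements, as described above, is the relative improvement in completion time between the first and last session; a minimal sketch with hypothetical timings:

```python
# Hypothetical completion times (seconds) for the same task, measured
# across four repeated sessions for one participant.
session_times = [180.0, 150.0, 120.0, 110.0]

# Learnability indicator: relative improvement between the first and the
# last session (positive values mean the participant got faster).
improvement = (session_times[0] - session_times[-1]) / session_times[0]
print(f"{improvement:.0%}")  # 39%
```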
There are many more factors that can be measured through self-reporting and
may be relevant for the usability and user experience with an interface, e.g.
level of perceived pain, trust, aesthetics, emotions, fatigue, stress, workload, etc.
The main indicator for the usability and acceptability of speech-based input is
the error rate. Error rates are highly related to the task that needs to be
completed. For example, Karat et al. found that speech-based navigation and
error correction are tasks that can be problematic while composing text
documents [15]. Speech is an interaction modality that relies highly on context
information. Speech-based interaction is fundamentally complex; however,
users are trained in using speech for communication in interpersonal dialogs.
There are some
In addition, the gesture recognition systems can be used not only by users that
employ a sign language to communicate but also to provide people with
physical disabilities a new way of interacting with computers [16]. Users must
learn the gestures to interact with the systems. These gestures must fit their
skills and be easy to learn and repeat. Thus, one of the indicators of usability is
the ease of learning and repeating the gestures, taking into account that the
user might get exhausted. Another indicator of the usability and acceptability of
the gesture interaction modality is the recognition success rate. If the system
correctly recognizes 99% of the gestures, it will gain greater acceptance by the
user than if it recognizes only 50% of the gestures correctly.
parameters in the EEG rhythms over sensorimotor cortex and that these
rhythms can be used to control a cursor on a computer screen in one or two
dimensions [34].
Figure 7: Matt Nagle, the first person to ever be implanted with a BrainGate [27].
One of the biggest challenges facing this type of interaction is eliminating the
need for surgery to implant the sensors that acquire the electrical signals
emitted by the brain (Figure 7). This makes the method a last resort, to be
considered when other interaction modalities are not possible. Apart from
requiring an invasive interface, the interface needs to be calibrated and the user
trained in order to be able to coordinate vision with the focus needed to move
the cursor with enough accuracy. The accuracy of cursor movement is the main
indicator of success in using this type of interaction modality.
layout and visual context are important indicators for usability and
acceptability. Not being able to see important aspects of the screen can
produce frustration. Many interfaces and modalities implicitly require vision
capabilities for input; i.e., vision is in many situations mandatory to locate
interactive devices and understand their meaning. For example, when speaking
into a microphone, a sighted person will identify the microphone using vision.
The same applies to using a labelled keyboard.
Data on errors can be tricky to transform into percentages; for example, if the
desired minimum is 0 errors and no predefined maximum of errors exists. In
that case it is possible to take the maximum number of errors a user ever
produced; error rates can then be transformed into percentages by dividing the
observed error rate by the maximum number of errors ever observed and
subtracting the result from 1. Depending on the task and the modality, it can be
reasonable to weight the data from the indicators non-equally; for example,
error rates could be weighted double relative to task completion time for
speech interfaces.
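The error-to-percentage transformation and the non-equal weighting described above can be sketched as follows; all values are hypothetical:

```python
# Hypothetical errors per participant for one task.
observed_errors = [0, 1, 3, 0, 5, 2]
max_errors = max(observed_errors)  # maximum number of errors ever observed

# Transform each error count into a 0-1 score: divide by the maximum
# observed and subtract the result from 1 (1.0 = no errors at all).
scores = [1 - e / max_errors for e in observed_errors]

# Non-equal weighting: the error score counts double relative to a
# (hypothetical) normalized task-completion-time score.
time_score = 0.8
combined = (2 * scores[0] + time_score) / 3
print(scores[0], round(combined, 3))
```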
3.2.3 Using z-scores
z-scores are a technique to transform scores from different scales (e.g.,
metrics from different indicators) so that they can be combined into one metric.
z-scores are based on the normal distribution and indicate how many standard
deviations a given score is away from the mean value.
The formula to transform scores from indicators into a z-score is as follows:

z = (x − μ) / σ     (1)

where x is the value to be transformed; μ is the mean; and σ is the standard
deviation.
One should keep in mind that in order to use z-scores, the mean and standard
deviation need to be known (i.e., approximated based on multiple observations).
Similarly to percentages, z-scores can be averaged to get a single combined
value; however, the obtained combined value cannot be treated as some kind
of overall usability score. z-scores should be used in iterative tests to compare
different sets of data; for example, one iteration of a design against another, or
data from one user group against data from another user group.
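A minimal sketch of the z-score transformation of equation (1), applied to two hypothetical design iterations (the sample standard deviation is assumed):

```python
from statistics import mean, stdev

# Hypothetical task completion times (seconds) from two design iterations.
iteration_1 = [45.0, 50.0, 55.0, 60.0, 65.0]
iteration_2 = [40.0, 42.0, 48.0, 50.0, 45.0]

def z_scores(data):
    """Apply z = (x - mu) / sigma to every observation."""
    m, s = mean(data), stdev(data)
    return [(x - m) / s for x in data]

# z-scores put both iterations on a common scale, so they can be compared
# or combined with z-scores of other indicators (e.g. error rates).
z1 = z_scores(iteration_1)
z2 = z_scores(iteration_2)
print(round(z1[0], 2))  # negative: the first observation is below the mean
```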
3.2.4 Using SUM: Single Usability Metric
SUM is a single usability metric that standardizes four usability metrics: a)
task success; b) task completion time; c) error rate; and d) post-task satisfaction
rating [23]. Jeff Sauro provides a SUM score calculator on his web site [24].
3.2.5 Summary
Using target goals to assess the usability of a system is perhaps the best way. It
is important to define the target goals beforehand. Goals should be task specific
and clearly defined; for example:
at least 95% of the target users will be able to complete the task successfully
the average SUS rating will be at least 70%
the average number of errors will be at most 3
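Target goals such as the examples above lend themselves to a simple automated check; a sketch with hypothetical observed results:

```python
# Hypothetical observed results from a test session.
results = {"success_rate": 0.97, "avg_sus": 72.0, "avg_errors": 2.4}

# Predefined, task-specific target goals, expressed as pass/fail checks.
targets = {
    "success_rate": lambda v: v >= 0.95,  # at least 95% of users succeed
    "avg_sus":      lambda v: v >= 70.0,  # average SUS rating at least 70
    "avg_errors":   lambda v: v <= 3.0,   # no more than 3 errors on average
}

# Evaluate every goal against the observed data.
met = {name: check(results[name]) for name, check in targets.items()}
print(all(met.values()))  # True only if every target goal is met
```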
4.1 Introduction
A heuristic evaluation is used to reveal usability problems in computer software
and focuses on the design of user interfaces (UI). This includes usability
parameters targeting the graphical design as well as the interaction design
itself; examples of such parameters are design consistency, intuitiveness, etc.
The evaluation is done in a structured way by following a number of heuristics
defined on the basis of the 10 usability heuristics of Nielsen (1994). Such
heuristics are mainly design and interaction principles that are used to capture
the usability and accessibility of a system.
To evaluate the VERITAS tools we selected five experts in Human Computer
Interaction. We designed different scenarios and adopted Nielsen's heuristics,
which are defined as follows:
1. Visibility of system status
2. Match between system and the real world
3. User control and freedom
4. Consistency and standards
5. Error prevention
6. Recognition rather than recall
7. Flexibility and efficiency of use
4.2.1 Process
The evaluation process used for the heuristic evaluation of the multimodal
toolset is depicted in Figure 8. In order to be able to conduct the particular
process, a number of materials are required. This includes the setup of the
system itself as well as the heuristics that have been defined for the particular
evaluation.
[Steps from Figure 8: "Heuristic-based evaluation" — the experts evaluate the
system based on the predefined heuristics; "Closing" — good bye.]
4.2.2 Material
In the following paragraphs, the material used for the heuristic evaluation of the
VerMIM Evaluator tool will be described.
4.2.2.5 Instructions
The experts received information about the evaluation itself and three task
sheets (see Appendix 8.1) explaining the workflow that the expert should follow
in order to get an overview of the VERITAS multimodal interaction toolset.
The first sheet contained general information about the evaluation and its
context as well as about the different disabilities simulated by the tool and first
instructions.
The other sheets (sheets 2-4) give the user different scenario descriptions,
explained step by step. The different interaction scenarios relate to different
types of handicaps, and thus to the VERITAS user models that are to be
applied using the VerMIM tool.
The scenario description for the severe visual impairment condition (almost
blind) additionally contained the exact voice commands for the speech
recognition.
The Excel sheet with the evaluation guidelines was also filled in on the
MacBook, so that the collected data were immediately available in electronic
form for further analysis.
4.2.3 Scenarios
The four scenarios created for the heuristic evaluation are the same as those
used in the user study. They consist of the different interaction steps
required to accomplish the scenarios developed for the particular evaluation
session. Table 6 gives an overview of the different steps required and links
the particular tasks to the specific commands. The commands are the same for
all scenarios; the experts need to conduct them using the appropriate
interaction devices based on the selected VERITAS user model.
Table 6: The four scenarios with the exact task descriptions. The experts
received these instructions printed on instruction sheets.
Instruction for the experts             Voice command
Change language to English              English language
Go to control room                      Go to control room
Open television control                 Control television
Turn on television                      Turn on television
Increase volume                         Increase volume
Go back to control                      Go back
Open blinds control                     Control blinds
Close blinds                            Close blinds
Go back to control                      Go back
Go back to Intro                        Go back
Open settings                           Go to settings
Activate outdoor control lights         Activate automatic entrance lights
Go to intro                             Go back
Our goal was to recruit HCI and usability experts with different backgrounds in
order to cover different usability aspects. The criteria were at least a
bachelor's degree in Computer Science or a master's degree in Psychology,
Communication Studies or another relevant field of study, as well as several
years of experience in the field of human-computer interaction. Another
requirement was employment at the ICT&S Center as a researcher and experience
with heuristic evaluations. Furthermore, we aimed for a balanced gender ratio
in the evaluation.
Figure 9: The expert RB running through Scenario 3, in which a mild visual
impairment is simulated. He is interacting with the computer using the Novint
Falcon haptic device and receives haptic feedback.
After running through the four scenarios the experts were instructed to
evaluate the system based on the heuristics collected in an Excel spreadsheet.
During the evaluation they were free to re-evaluate and verify different
functionality and steps using the VerMIM Evaluator. They were also told that,
in case of unclear situations or open questions, they were free to ask the
researcher leading the heuristic evaluation. The qualitative information from
the notes taken during the evaluation and from the completed Excel sheet formed
the basis of our analysis.
Figure 10: Expert running through Scenario 4, in which he wants to perceive how
a user with motor impairments (upper limb paralysis) would interact with the
SmartHome tool. He is wearing glasses on which an infrared lamp is fixed.
Behind the laptop screen, the WiiMote registers the movement of the infrared
lamp and thus the head movement. By moving his head, he is able to move the
cursor on the screen.
4.2.6 Analysis
An expert evaluation with five experts was conducted. As shown in Figure 11,
such an evaluation discovers approximately 75% of usability problems, which
makes it a suitable approach for identifying issues in formative evaluations.
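The roughly 75% figure follows the standard cumulative-discovery model for heuristic evaluation, in which each evaluator independently finds a fixed fraction λ of all problems. The sketch below assumes an illustrative per-evaluator detection rate of 0.24; the actual rate for this study is not reported.

```python
def discovery_rate(n_evaluators, lam=0.24):
    """Expected share of usability problems found by n independent evaluators,
    each detecting a fraction lam of all problems (cumulative-discovery model).
    lam=0.24 is an illustrative assumption, not a measured value."""
    return 1 - (1 - lam) ** n_evaluators

# Five experts at lam = 0.24 give roughly the 75% reported above:
print(round(discovery_rate(5) * 100))  # 75
```

Adding evaluators yields diminishing returns, which is why a small panel of five experts is usually considered sufficient for formative evaluations.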
After getting familiar with the system, the experts had the task of evaluating
the system based on the heuristics (Figure 12). The percentage scale reflects
the degree to which the experts agree that the system fulfils the particular
heuristic: 100% is the best value, 0% the worst. For better readability, we
coloured the percentages. The Criteria column contains the particular
evaluation guideline (heuristic) the evaluator should address during the
heuristic evaluation.
The evaluation guidelines are divided into the following categories:
VerMIM
contains general questions that relate to the first impression,
completeness of functionality, applicability, etc. of the VerMIM tool.
VerMIM GUI Design
contains questions regarding the graphical user interface design of the
VerMIM GUI. This category mainly relates to issues, like intuitiveness,
consistency, improvement potential, etc.
Modality Blind User
The category describes the combination of the selected user model and
the appropriate multimodal simulation of the handicap and the interaction
capabilities defined.
Modality Myopia
The category describes the combination of the selected user model and
the appropriate multimodal simulation of the handicap and the interaction
capabilities defined.
Modality Motion impaired
The category describes the combination of the selected user model and
the appropriate multimodal simulation of the handicap and the interaction
capabilities defined.
The results were collected according to these categories and the particular
heuristics defined.
4.2.6.1 VerMIM
This category addresses general questions concerning the first impressions
about the User Interface, the functionality and the validity and correctness of the
different user models. In Figure 12 the results of this category are shown.
Figure 12: Percentage of agreement to the questions in the first category of the
evaluation guidelines: VerMIM.
The first impression of the VerMIM tool varies. One expert thinks that it is
good and easy to understand. Another is of the opinion that it is rather
technical: the terminal window in the background gives the engineer information
about the system that he probably shouldn't have (or need); additionally, it is
unclear what the vision-impaired condition means. Nevertheless, the overall
impression is positive, as the experts rated the VerMIM tool at 83% on the
percentage scale.
GUI appropriateness for the particular task of simulating the effects of a
user's handicap:
Evaluating a single screen of a piece of software will not yield interesting
results, as the experts only ever saw the Start screen. The tool can simulate
well, but in the case of vision-impaired usage it is not clear how to interact
with the VerMIM tool itself. The experts are not sure whether the simulation is
appropriate in the case of the motor impairment; multiple interaction
possibilities for it would be desirable. Also, the head tracker did not always
work 100% as intended. The general impression of the design of the VerMIM tool
is acceptable, although some improvements could be identified. This results in
an overall rating of 64.8%, which is acceptable.
Another criticized aspect was that the speech production and recognition stand
alone and are not combined with other interaction methods that are appropriate
for almost blind users.
One expert noted that it would be difficult to make the blind users familiar with
the structure and general commands to interact with the app.
Positive aspects
Generally, the experts agreed that speech recognition and production are
appropriate interaction tools for almost blind users.
4.2.7.2 Simulation
Criticisms
The experts did not find the simulation very useful, as it simulates the
near-blindness only on the screen, while the keyboard, mouse and other
surroundings remain clearly visible. For full impairment simulation, this kind
of immersion would require a special head-mounted display in order to occlude
what the user sees. The experts agreed, however, that on-screen simulation has
potential for vision deficiencies.
Positive aspects
All of the experts agreed that the response times of the simulation are
appropriate to the task (96% agreement).
4.2.7.3 System/Modality
Criticisms
One of the main problems detected by the experts is that the acoustic feedback
tells you your location in the system but not which actions are available next.
The question is: how does the blind user know his next possibilities?
Furthermore, the wording of the speech control commands and the audio feedback
could be improved in terms of consistency.
The commands did not feel very familiar to the experts; they are too technical
and short and do not reflect natural interaction. The experts noted that if
they had not had the list of available commands, they would have been lost in
the system.
Positive aspects
The experts liked that the system responded to their actions, which provided
them with feedback about the success of what they did: when they gave a
command, the system confirmed that it had recognized and processed it.
Furthermore, the voice commands were judged as very concrete. The third
positive aspect they found is that the response time of the VerMIM is very
fast, so one can navigate through the menus quickly.
Definition of Myopia
Nearsightedness, or myopia, as it is medically termed, is a vision condition in
which close objects are seen clearly, but objects farther away appear blurred.
Nearsightedness occurs if the eyeball is too long or the cornea, the clear
front cover of the eye, has too much curvature. As a result, the light entering
the eye isn't focused correctly and distant objects look blurred (American
Optometric Association, 2013).
Problems
Sometimes the head tracker didn't work properly. Some of the experts didn't
know how to work with the head tracker without any instructions. The speech
control didn't react all the time, which is why some of them weren't able to
finish the scenario.
Positive Aspects
The idea of the scenario is a good one. The experts got the impression that it
is easy to change the speed of the mouse according to what is needed for the
task. The use of head tracking for people who have severe motion impairments
and can only move their heads was much appreciated. Such technologies enable
these people to interact with systems, even though performance, accuracy, etc.
may be lower.
4.3 Conclusion
The VerMIM tool is a first approach to providing the designer and developer
with means of simulating a handicapped user's behaviour. The heuristic
evaluation tries to identify problems with a system in order to improve it;
thus, a number of problems that could be addressed in future iterations of the
tool were identified. The results presented in this section were mainly
qualitative, in terms of applicability, usability, and design.
Although some issues were identified regarding the VerMIM user interface, as
well as the appropriate presentation of the particular VERITAS user model, the
experts' reaction to the VerMIM tool was positive. The tool was found easy to
use and does not require extensive training.
Improvement potential regarding the combination of the selected modalities and
the chosen VERITAS user model was identified mainly in terms of flexibility:
designers would like to try out different combinations of multimodal
interaction tools with different handicaps. An improvement would thus be to not
only simulate specific handicaps and workflows for the designer but also to
provide him with a tool that supports him during his design process. In
general, the experts were impressed by the potential of the tool and agreed
that they would use it in their daily work if specific design issues appeared.
Moreover, the experts were very strict in their judgement and managed to detect
many system flaws. This turned out to have a positive impact on the improvement
of the VerMIM tool, as it helped greatly in its refinement.
Figure 17: The main screen of the Smart Home Application interface, which was
used as the basis for the user interaction scenario steps.
The VerMIM Evaluator tool was used as the testing tool to validate the
performance of the Multimodal Interfaces toolset. A screenshot of its main
screen is depicted in Figure 18.
Figure 18: The VerMIM Evaluator tool that was used for the user test recordings and the
management of the simulation platform, responsible for simulating the impairments to
the subjects.
The VerMIM Evaluator tool is responsible for the following actions:
1. To load the external application (in our case the Smart Home Controller)
and to manage the test-user's interactions with it via the several
multimodal tools.
2. To observe the test-user's actions and manage the scenario that has to be
followed. It should be noted here that the scenario is described using the
Task Model structure, which has been used throughout as a scenario task
base for the rest of the VERITAS tools.
3. To enable the communication with the VERITAS Simulation tools, either
GUI or 3D, which in turn enable the simulation of the various impairments
that are necessary for the tests. In our case the VerMIM Evaluator
communicates with the VerSim-GUI tool [38][37] (Figure 19), because the
Smart Home Controller is a 2D application that runs on a desktop PC.
4. To select the Virtual User Model as which the real test-user will be
simulated.
Figure 19: The VerSim-GUI, which is communicating with the VerMIM Evaluator and is
responsible for simulating the various impairments to the test-users.
Figure 20: The VerMIM Evaluator report dialog that is displayed after each
test-simulation session. Durations, errors and velocity of the mouse pointer
(per scenario task) are depicted. The user is also able to save these
statistics, along with other metrics, to a file for further processing.
Before describing the test configurations and the scenario steps, it is wise to
depict the architecture of the integration of the VerMIM with its Evaluator tool
and with the Simulation Tool (VerSim-GUI).
Figure 21: The testing procedure data flow. The integration with the VerSim tool is
necessary in order to perform the simulation of the impairment in the testing
environment. VerMIM and VerMIM Evaluator exchange several data during the simulation
session, such as current device state, task completion checks, etc.
The data flow starts with the Test Coordinator, who is responsible for the
configuration of the test. The Coordinator selects the impairment category from
the VerMIM Evaluator interface, performs the calibration of any activated
device (if such action is needed) and then initiates the testing procedure. The
Coordinator is also responsible for guiding the test-user if she/he is lost or
anything goes wrong.
Moreover, the VerMIM Evaluator is responsible for loading the scenario to be
used in the testing procedure. As already mentioned, the scenario is stored in
a format similar to that of a task model.
Just after the Coordinator initiates the session, the VerMIM Evaluator reads the
Virtual User Model file which the Coordinator has selected and sends two data
signals:
1. The first signal is targeted at the VerMIM, where the suitable external
devices have to be initiated. Which devices will be activated and which
won't is defined by the modality compensation process, described thoroughly
in Deliverable D2.8.2 [2]. Briefly, this procedure involves a) parsing the
VUM file; b) identifying the respective ICF codes which apply to the
specific impairment; and c) matching each ICF code to a modality tool.
2. The second signal is destined for the Simulation tool, in our case the
VerSim-GUI tool. With this procedure the path of the Virtual User Model
file is passed to the VerSim-GUI, and the latter starts the simulation of
the impairments described in it. It should be noted that the VerSim-GUI
already has the simulation platform integrated into it, in order to perform
the interactive visual, hearing and motor impairment simulation.
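The three-step compensation procedure (parse the VUM file, extract the ICF codes, match each code to a modality tool) can be sketched as follows. The mapping table and all names here are hypothetical placeholders, not the actual tables of D2.8.2 [2]; only the ICF body-function codes themselves (b210 seeing, b230 hearing, b730 muscle power) are real.

```python
# Hypothetical ICF-code-to-modality mapping; illustrative only.
ICF_TO_MODALITY = {
    "b210": ["speech_synthesis", "voice_recognition"],  # seeing functions
    "b230": ["visual_feedback"],                        # hearing functions
    "b730": ["head_tracker", "voice_recognition"],      # muscle power functions
}

def compensate(vum_icf_codes):
    """Given the ICF codes parsed from a VUM file, return the modality tools
    that should be activated (duplicates removed, order preserved)."""
    tools = []
    for code in vum_icf_codes:
        for tool in ICF_TO_MODALITY.get(code, []):
            if tool not in tools:
                tools.append(tool)
    return tools

print(compensate(["b730"]))  # ['head_tracker', 'voice_recognition']
```

A motor-impairment VUM would thus activate the head tracker and the voice recogniser, matching the device set used in the Motor session below.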
The VerMIM Evaluator constantly checks the states of both the VerMIM and the
VerSim and, if any error takes place, reports the corresponding message to the
Test Coordinator.
After the initialisation of both the Multimodal Interfaces Tools and the Simulation
Environment, the Test-User may start performing the pre-defined scenario
steps, described in the loaded task-model. During the session the VerMIM
Evaluator records any wrong clicks or voice commands, as well as the time
needed by the test-user for performing each task.
At this point it should be noted that during the testing session both the
VerMIM and the Evaluator windows are hidden, so that the user interacts with
the tested application, with the virtual impairments activated, without
dividing his attention among unneeded interfaces.
After the scenario steps are finished, the VerMIM Evaluator sends signals to all
other tools to stop and display to the Coordinator the report window. Using the
report window dialog the Coordinator can save the user-recorded data, for
evaluation of the toolkit.
Finally, it has to be mentioned that the integrated system of the VerMIM,
VerMIM Evaluator and VerSim-GUI uses the external application (i.e. the Smart
Home Application) just as it is, without any alterations. That means that all
events are handled by the Evaluator tool and, when needed, are passed to the
application running below it. This procedure ensures two things:
First, the source code of the application below is not needed, as all of
the extra multimodal functionality is added by the VerMIM. As a result,
the VerMIM Evaluator can be used with a virtually unlimited range of
computer programs.
Secondly, the sophisticated task management allows the test-coordinator
to handle the interaction events with precision and a) either allow every
event to be passed below to the application (normal test behaviour) or b)
to be consumed by the VerMIM Evaluator, when the user has performed
an action that is outside of the scenario sequence (strict test behaviour).
As it will be described later, both normal and strict test behaviours were used.
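The normal/strict distinction can be sketched as a simple event filter. The function and event names below are hypothetical; only the pass-through/consume logic reflects the behaviour described above.

```python
def should_pass_to_application(event, expected_action, strict):
    """Normal test behaviour: every event is passed to the application below.
    Strict test behaviour: events outside the scenario sequence are consumed
    by the Evaluator instead of reaching the application."""
    if not strict:
        return True                   # normal: pass everything through
    return event == expected_action   # strict: consume out-of-sequence actions

# An off-scenario click reaches the application only in normal mode:
print(should_pass_to_application("open_blinds", "turn_on_tv", strict=False))  # True
print(should_pass_to_application("open_blinds", "turn_on_tv", strict=True))   # False
```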
Before performing the tests the users were asked whether they had any
experience using haptic devices or had been involved in any recording that
included a head tracker. Although most of the test-users are experienced
programmers who have at least once developed some kind of graphical user
interface, their answers showed that none of them had used a haptic device or a
head tracker for any kind of interaction. In fact, many of them asked what a
haptic device is and why it is used. Moreover, only three (3) users have
expertise in the multimodal interaction field: two users have been involved in
the development of a body tracker (using the Microsoft Kinect device) and
another in the development of a speech recognition system (Table 9). The fact
that none of the users had any experience using a haptic device or a head
tracker must be taken into account for the evaluation of the VERITAS multimodal
toolkit.
Any experience with a haptic device?            None of the users had ever used a haptic device.
Any experience with a head tracking device?     None of the users had ever used a specialized head tracker. However, 2 users had experience of using full-body tracking.
Any experience with a voice recognition system? All users have used a speech recognition system at least once. One user had even been involved in the development of such a system.
Four testing sessions were performed by each test-subject. Vision- and
motor-impairment-based VUM definitions were used. Hearing impairment VUMs were
not used, as the Smart Home Application did not have any sound feedback.
In the normal-type session the VerMIM tools were inactive, as the user
interacted with the application using only the mouse. This type served as a
performance comparison basis for the remaining three testing sessions. The
testing sessions took place in the order they are mentioned in Table 10. As
will be presented later, this order is also the order of difficulty of each
testing session.
Before the recordings the users had at least 3 minutes each to freely interact
with the application using the mouse, in order to get familiar with what each
control/button does. The users were instructed to perform the scenario steps as
fast as they could while trying to make as few wrong actions as possible. They
were also given a short period to adapt themselves to the new devices, e.g. the
haptic device and the head-tracking pair of glasses. For the latter, if the
subject wore glasses, she/he was instructed not to take them off but to wear
the tracking glasses over them.
Table 10: Test session types; each session is a different combination of a VUM and set of
activated modality tools.
Before the recording process, each subject was given a set of demographic
questions, the answers to which are depicted in Table 8 and Table 9. After the
final session, a System Usability Scale questionnaire was filled in by the
subject in order to provide qualitative metrics and feedback for the test.
Additionally, the users answered a list of six questions concerning the
technology acceptance model integrated into the VerMIM tools. The answers to
these questionnaires, along with the quantitative metrics recorded during each
session, are presented and discussed in subsection 5.3.
Figure 22: The Smart Home Application that was used for the scenario. Here the interface
is depicted unfiltered, just as it was used in the Normal and Motor Impairment
sessions.
Figure 23: The Smart Home application as it appears after the simulation of the myopia
impairment, that was applied in the mild vision impairment case.
Figure 24: The severe glaucoma vision impairment case; most of the visual field is
occluded by blind spot areas. In such cases the virtual user is considered as almost
blind.
Figure 25: The test-user using the head tracking device; the user was instructed not to
use his hands; thus any interaction with the application was based on head motion (via
the infrared led glasses) combined with voice commands (captured by the microphone).
User Action
Cases: Normal (input: mouse); Mild vision impairment (input: haptic device);
Motor impairment (input: head tracker, voice recognition for the click command)
Case: Severe vision impairment (voice recognition)
11. Go to Settings                      Go to Settings
A paper sheet with the scenario tasks was given to the test-subjects and
remained in front of them during the whole test sequence; this was meant to
remind the user of what to do next, in case she/he had forgotten it. It should
be stressed that this scenario does not aim to perform an accessibility
assessment of the Smart Home interface, but to measure how easily and
efficiently a user can interact with it while the virtual impairment symptoms
are applied.
² In the test setup a 17-inch monitor was used with a desktop resolution of
1280×1024, meaning that 38 pixels correspond to a distance of 1 cm.
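The footnote's conversion factor can be checked from the monitor geometry alone; this sketch assumes square pixels.

```python
import math

# 1280x1024 pixels across a 17-inch diagonal, assuming square pixels.
diagonal_px = math.hypot(1280, 1024)    # ~1639 px along the diagonal
pixels_per_inch = diagonal_px / 17      # ~96 ppi
pixels_per_cm = pixels_per_inch / 2.54  # 1 inch = 2.54 cm
print(round(pixels_per_cm))  # 38
```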
5.3.1.1 Duration
The results regarding the test session durations are depicted in Figure 26. All
the users were able to succeed in performing the whole scenario in less than
three minutes.
As depicted in Figure 27, the average duration increases as the simulated
impairment advances to a more severe case: from a 48% overhead in the mild
vision impairment case, to 226% for the more severe vision impairment, to 412%
for the severe motor impairment case (Figure 28).
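The overhead percentages are computed relative to the Normal-session mean. The mean durations used below are hypothetical placeholders chosen only to reproduce the reported percentages; the real per-session means appear in Figure 27.

```python
def overhead_pct(session_mean, normal_mean):
    """Duration overhead of a session relative to the Normal session, in %."""
    return (session_mean - normal_mean) / normal_mean * 100.0

normal = 30.0  # hypothetical Normal-session mean, in seconds
sessions = {"vision_mild": 44.57, "vision_severe": 98.05, "motor": 153.6}
for name, mean in sessions.items():
    # Prints the overhead matching the percentages reported in the text.
    print(name, round(overhead_pct(mean, normal), 2))
```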
The fact that the head tracker session lasted at least four times longer than
the normal session indicates that the users had difficulties using the head
tracker efficiently. However, none of them had any experience with such a
device before; moreover, the maximum speed of the mouse pointer was restricted
in order to make its use more comfortable.
Figure 27: Average session duration (indicated by the number in seconds on top of each
bar). The red lines indicate the standard deviation of the duration distribution.
(Figure 28 values: Vision Mild 48.57%, Vision Severe 226.83%, Motor 412.00%.)
Figure 28: The duration overhead as a percentage relative to the Normal
session. The overhead of the haptic device (Vision Mild session) is small
relative to the rest, especially when compared to the usage of the head tracker
(Motor session).
5.3.1.2 Errors
The error distribution per session is depicted in Figure 29. As presented in
Figure 30, the errors in the Normal case are almost nonexistent: 0.38 average
errors per user, resulting in an accuracy of 97% (13 total tasks - 0.38 errors
= 12.62 correct actions). This indicates that the graphical user interface of
the smart
home application is very well designed. In the same figure, it becomes clear
that the voice recognition (Vision-Severe case) achieves better accuracy than
the haptic controller (Vision-Mild case), probably due to the users' greater
previous experience with speech recognition systems (Table 9). Even so, the two
vision cases achieve very small error rates, with accuracies of 92% and 93% for
the mild and severe vision impairments respectively.
Concerning the head tracker, the accuracy falls to 85%. This can be explained
by two observations:
a) The overwhelming majority of the users (12 out of 13) performed the head
tracker session making fewer than 3 errors, which corresponds to an
accuracy of 91%. The 13th user can thus be considered the worst case for
this scenario.
b) The tracker device, i.e. the glasses with the infrared LED, is still in a
prototype phase, which justifies its lower accuracy compared to the
market-ready haptic device and voice recogniser.
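The accuracy figures follow directly from the 13-task scenario; a minimal sketch of the computation used in the text:

```python
def accuracy_pct(total_tasks, mean_errors):
    """Accuracy as used in the text: correct actions over total tasks, in %."""
    return (total_tasks - mean_errors) / total_tasks * 100.0

print(round(accuracy_pct(13, 0.38)))  # 97 (Normal session)
print(round(accuracy_pct(13, 2.0)))   # 85 (Motor session)
```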
Figure 29: Distribution of the user errors per session. The majority (12 out of 13) of the
users performed the tests making an almost negligible amount of errors.
Figure 30: The average number of user errors per session; even the Motor
session, which involved the head tracker, achieves a mere mean of 2.0 errors.
The standard deviation is indicated with the red line segments.
(Figure 31 plots, per user ID 1-13, the pointer distance in pixels travelled in
each session, normalized as a percentage of that user's total.)
Figure 31: The normalized distance (as a percentage of the total pointer
distance travelled through the tests). The results indicate that the distances
travelled are comparable across the different modalities.
(Figure 32 values: distance overhead of 13.22% for the Vision Mild session and
22.19% for the Motor session.)
Figure 32: The average distance overhead (compared to the Normal case) of the
Vision-Mild and Motor sessions. As shown, the average overhead is small.
b) the users were totally inexperienced in using only the motion of their
heads to navigate the mouse cursor. As a result, the maximum speed the
cursor could achieve was restricted; in fact, the VerMIM had a calibration
dialog where each user could select the maximum cursor speed, and high
speeds were not preferred because they made the cursor uncontrollable.
(Figure 33 plots, per user ID 1-13, the pointer velocity in pixels/sec for the
Normal, Vision-Mild and Motor sessions, together with each session's mean.)
Figure 33: The pointer velocity of each user of the Normal, Vision-Mild and Motor
sessions.
Table 12: The system usability questionnaire. The scale is from 1 to 5, where 5
indicates strong agreement with the statement. The number of test-subjects is
reported in each cell (along with its translation to a percentage).
Statement    1 (strongly disagree)  2  3  4  5 (strongly agree)
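The System Usability Scale referenced here is scored with Brooke's [8] standard procedure: odd-numbered (positively worded) items contribute (answer - 1), even-numbered (negatively worded) items contribute (5 - answer), and the sum is multiplied by 2.5 to yield a 0-100 score. A sketch:

```python
def sus_score(responses):
    """Score a 10-item SUS questionnaire (answers 1..5) on the 0-100 scale,
    following Brooke's standard scoring."""
    if len(responses) != 10:
        raise ValueError("SUS has exactly 10 items")
    total = 0
    for i, answer in enumerate(responses, start=1):
        # Odd items: answer - 1; even items: 5 - answer.
        total += (answer - 1) if i % 2 == 1 else (5 - answer)
    return total * 2.5

print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0 (best possible)
print(sus_score([3] * 10))                         # 50.0 (neutral)
```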
The technology acceptance model answers are included in Table 13. For most of
the answers, positive feedback was received from the users:
The VerMIM and its tools are considered a good idea by the test-users
(answers #3 and #6) and easy to use (#1).
Table 13: Technology acceptance model questionnaire for the VerMIM tools. Each
statement answer is scaled from 1 (favourable opinion of the system) to 7 (unfavourable
opinion of the system).
Statement    1 (extremely likely / like)  2  3  4  5  6  7 (extremely unlikely / dislike)
Figure 34: The pair of glasses with the attached LED transmitter, which were
used as the tracking device. The depicted system can be considered low cost, as
its total cost is less than five Euros.
The head-tracking device is used to move the mouse cursor. In cooperation with
a simple voice recognition system that can recognise simple commands such as
"left click", "right click", etc., it can be used to manipulate the mouse
cursor and its behaviour independently of any application running on the user's
desktop PC. This solution is a global approach and does not need any extra
development for applications destined for users with amputated hands or severe
motor impairments.
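The cursor manipulation described above amounts to mapping tracked head displacement to cursor displacement with a capped speed. The gain and cap values below are hypothetical; only the scale-then-clamp structure reflects the restricted pointer speed described in the text.

```python
def move_cursor(cursor, head_delta, gain=2.0, max_step=10.0):
    """Map a tracked head displacement to a new cursor position.
    gain scales head motion to pixels; each axis is clamped to max_step
    pixels per update, mirroring the restricted maximum pointer speed."""
    def clamp(v):
        return max(-max_step, min(max_step, v))
    dx = clamp(head_delta[0] * gain)
    dy = clamp(head_delta[1] * gain)
    return (cursor[0] + dx, cursor[1] + dy)

print(move_cursor((100, 100), (3, -20)))  # (106.0, 90.0): dy clamped to -10
```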
The refinement process of this tool included several improvements, which were:
Improvements in the calibration GUI, via which the user is able to
configure and test the tool before using it in the multimodal scenario.
Better navigational capabilities. At first only 4-directional mouse
navigation was offered by the system. The refinement process added
two new modes: 8-directional and free navigation.
Several optimizations were applied concerning the cooperation of the LED
transmitter and the Wii Remote receiver, in order to provide a wider field of
view for the users; as observed in the heuristic evaluation, the signal was
lost when the user turned her/his head at large angles.
Figure 39: The VerMIM calibration and test panel of the Head tracking module.
7 Conclusions
The work results described in this document have shown that the Multimodal
Interfaces Toolset is a valuable addition to the rest of the VERITAS tools,
providing the impaired user with a holistic approach to special multimodal
interaction that can successfully cope with her/his special needs. This
deliverable starts by analysing user interface indicators of user performance
and satisfaction. It then continues by matching each of these with one or more
modalities and describing how people with special needs could interact with
systems using those modalities.
Two test sessions were arranged and performed to evaluate the Multimodal
Interfaces toolset. The first test session involved a heuristic evaluation of
the system, performed by experts in the multimodal interaction field. Despite
the fact that the experts judged the whole system very strictly, the majority
of them were impressed by the potential of the tool and agreed that they would
use it in their daily work if specific design issues appeared. Moreover, it was
found that the tool is easy to use and does not require extensive training. The
issues regarding the VerMIM user interface and the devices' improper
functionality were taken into account in the refinement process and were fixed
before the second evaluation.
The second evaluation was a user study based on performance and satisfaction
indicators. In the study, thirteen users, mostly developers and researchers,
tried the VerMIM and managed to successfully control (100% rate) a smart home
application they were seeing for the first time, under circumstances affected
by mild and severe impairments. This can be considered an impressive result,
taking into account that most of the users were interacting with haptic or head
tracking devices for the first time. The general conclusion of the user study
strongly indicates that, by using the integrated product of the VerMIM with the
VERITAS Simulation platform, the developer has a great asset when designing an
application that includes people with disabilities.
Between and after the two testing sessions, several components of VerMIM were
altered and improved through a refinement process driven by the comments of
the various test users. New voices were added to the speech synthesizers, a
new grammar-based system was integrated into the voice recognition module for
higher accuracy, and a stand-alone application with panels for configuring and
testing each tool was implemented.
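Constraining recognition to a fixed command grammar is what gives a grammar-based recogniser its accuracy advantage over free dictation: the engine only has to decide among a small set of legal phrases. The following sketch illustrates the idea with a hypothetical smart-home command grammar (the actual VerMIM grammar is not reproduced here):

```python
from itertools import product

# Hypothetical command grammar, roughly:
#   <command> ::= ("turn on" | "turn off") ["the"] ("light" | "tv" | "heating")
VERBS = ("turn on", "turn off")
DEVICES = ("light", "tv", "heating")

# Expand the grammar into its full (small) set of legal phrases.
COMMANDS = {f"{verb} {article}{device}"
            for verb, device in product(VERBS, DEVICES)
            for article in ("", "the ")}

def in_grammar(utterance):
    """Accept an utterance only if it expands from the command grammar."""
    return " ".join(utterance.lower().split()) in COMMANDS
```

In a real recogniser the grammar would be expressed in a dedicated format such as SRGS or JSGF rather than expanded by hand, but the pruning effect on the search space is the same.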
Finally, a word must be said about the head-tracking system that was added to
the VerMIM modules. This low-cost tracker was built especially for the VerMIM
test requirements in order to simulate situations where the subject cannot use
his or her hands. The results showed that the majority of the users were very
satisfied with navigating the Smart Home application using only their heads.
With the above in mind, the VerMIM tool may be considered the VERITAS
multimodal-interaction solution for designers who need to put themselves in
impaired people's shoes in order to provide better tools to real impaired
users.
8 Appendix
A window showing the statistics of your interaction will now open. Please save
the log file (file name: your user ID_scene1).
Note that the SmartHome App will only react to our defined tasks.
A window showing the statistics of your interaction will now open. Please save
the log file (file name: your user ID_scene2).
Note that the SmartHome App will only react to our defined tasks.
A window showing the statistics of your interaction will now open. Please save
the log file (file name: your user ID_scene3).
Note that the SmartHome App will only react to our defined tasks.
A window showing the statistics of your interaction will now open. Please save
the log file (file name: your user ID_scene4).