
¡  Data vs. Information
§  Data  
▪  Raw  facts  
▪  Distinct  pieces  of  information,  usually  formatted  in  a  
special  way  
§  Information  
▪  A  collection  of  facts  organized  in  such  a  way  that  they  
have  additional  value  beyond  the  value  of  the  facts  
themselves  
¡  Data – thermometer readings of temperature taken every hour:
   16.0, 17.0, 16.0, 18.5, 17.0, 15.5, …
   (Transformation)
¡  Information – today's high: 18.5; today's low: 15.5
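A minimal sketch (illustration only, not from the slides) of this transformation step in Python: the raw hourly readings are the data, and computing the daily high and low turns them into information.

```python
# Raw data: hourly thermometer readings (values taken from the slide).
readings = [16.0, 17.0, 16.0, 18.5, 17.0, 15.5]

# Transformation: organize the raw facts into something with added value.
high = max(readings)  # today's high: 18.5
low = min(readings)   # today's low: 15.5
print(f"Today's high: {high}, today's low: {low}")
```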
 
Data type          Represented by
Alphanumeric data  Numbers, letters, and other characters
Image data         Graphic images or pictures
Audio data         Sound, noise, tones
Video data         Moving images or pictures
Data → Transformation → Information
§  accurate,    
§  complete,    
§  economical,    
§  flexible,    
§  reliable,    
§  relevant,    
§  simple,    
§  timely,    
§  verifiable,    
§  accessible,    
§  secure  
¡  You  want  the  information  about  you  in  a  health  
information  system  to  be:  
§  As accurate as possible (e.g. your age, sex)
§  As complete as possible
§  Relevant
§  Reliable
§  Available in a timely manner (e.g. information about your drug allergies is available before your operation!)
¡  Definition  
§  A  set  of  elements  or  components  that  interact  to  
accomplish  goals  
§  A  combination  of  components  working  together  
Example – Customer Support System (diagram): Customer Maintenance Component, Order Entry Component, Catalog Maintenance Component, Order Fulfillment Component
¡  Inputs  
¡  Processing  mechanisms  
¡  Outputs  
System elements – example: Movie
Inputs:               Actors, director, staff, sets, equipment
Processing elements:  Filming, editing, special effects, distribution
Outputs:              Finished film delivered to movie studio
Goal:                 Entertaining movie, film awards, profits
¡  System  boundary  
§  Defines  the  system  and  distinguishes  it  from  everything  
else  
¡  System  types  
§  Simple    vs.  complex  
§  Open  vs.  closed  
§  Stable  vs.  dynamic  
§  Adaptive vs. non-adaptive
§  Permanent  vs.  temporary  
¡  Efficiency
§  A measure of what is produced divided by what is consumed (e.g. the efficiency of a motor is the energy produced divided by the energy consumed)
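A tiny worked example of this formula (the values are hypothetical, not from the slides):

```python
def efficiency(produced: float, consumed: float) -> float:
    """Efficiency = output produced / input consumed."""
    return produced / consumed

# Hypothetical motor: 75 J of useful work from 100 J of energy consumed.
print(efficiency(75.0, 100.0))  # 0.75, i.e. 75% efficient
```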
¡  Effectiveness  
§  A  measure  of  the  extent  to  which  a  system  achieves  its  
goals  
¡  System  performance  standard  
§  A  specific  objective  of  the  system  
¡  System  variable  
§  A  quantity  or  item  that  can  be  controlled  by  the  
decision  maker  
§  E.g.  the  price  a  company  charges  for  a  product  
¡  System  parameter  
§  A  value  or  quantity  that  cannot  be  controlled  by  
the  decision  maker  
§  E.g.,  cost  of  a  raw  material  
¡  Model  
§  An  abstraction  or  an  approximation  that  is  used  
to  represent  reality  
¡  Types  of  models  
§  Narrative  (aka  descriptive)  
§  Physical  
§  Schematic  
§  Mathematical  
¡  Make understanding complex systems easier (simplify)
¡  Can be used in design – build models of new systems so they can be refined
¡  Make communication about systems easier (e.g. a picture can communicate a thousand words)
Statistics: a collection of procedures and processes to enable researchers in the unbiased pursuit of knowledge

Statistics is an important part of the Scientific Method

The Scientific Method cycle (figure): State a Hypothesis → Design a Study → Collect Data → Analyze the Data → Interpret the Results, Draw Conclusions → (back to State a Hypothesis)


State a Hypothesis: the OBJECTIVE or OBJECTIVES of the Study

A HYPOTHESIS OR SET OF HYPOTHESES should state exactly what you want to DO or LEARN or STUDY

SHOULD ANSWER:
What are the factors to be studied and what relationships are to be investigated? What is the experimental material? Etc.
The area of STATISTICS would not be needed if each time you measured an experimental unit you obtained the same response or value

BUT, THE RESPONSES ARE NOT THE SAME SINCE THERE IS VARIABILITY or NOISE IN THE SYSTEM

STATISTICAL METHODS EXTRACT THE SIGNAL FROM THE NOISE TO PROVIDE INFORMATION

One of the Statistician's JOBS is to make sense of DATA in the presence of VARIABILITY or noise by using DATA ANALYSIS TOOLS
DESIGN VS. ANALYSIS

The PURPOSE OF DATA COLLECTION is to GAIN INFORMATION OR KNOWLEDGE!!

Collecting data does not guarantee that information is obtained.

INFORMATION ≠ DATA
At best:
INFORMATION = DATA + ANALYSIS
If data are collected such that they contain NO information in the first place, then the analysis phase cannot find it!!!

The best way to ensure that appropriate information is contained in the collected data is to DESIGN (plan) and carefully control the DATA COLLECTION PROCESS

The measured variables must relate to the stated OBJECTIVES of the study

If you have a good design and process for data collection, it is quite often straightforward to construct an analysis that extracts all of the available information from the data

The ROLE of a STATISTICIAN is to work with the RESEARCH TEAM (or researcher) from the START of the study

The MOST IMPORTANT TIME for the statistician to become involved with a research study is in the very BEGINNING

A STATISTICIAN CAN HELP OBTAIN THE MAXIMUM AMOUNT OF INFORMATION FROM AVAILABLE RESOURCES
HOW???
HELP WITH THE DESIGN OF THE EXPERIMENT
DETERMINE SAMPLE SIZE NEEDED (see the sketch after this list)
DEVELOP PROCESS OF COLLECTING DATA
DISCUSS VARIABLES TO BE MEASURED AND HOW THEY
RELATE TO THE OBJECTIVES OF THE STUDY
PROVIDE METHODS OF ANALYZING THE DATA
HELP TRANSLATE STATISTICAL CONCLUSIONS INTO
SUBJECT MATTER CONCLUSIONS
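A minimal sketch of one way a statistician might determine sample size: a two-sample t-test power calculation using statsmodels. The effect size, significance level, and power shown here are hypothetical placeholders, not values from this course.

```python
from statsmodels.stats.power import TTestIndPower

# Hypothetical inputs: smallest difference worth detecting expressed as a
# standardized effect size (Cohen's d), significance level, and desired power.
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80)
print(f"Required sample size per group: {n_per_group:.1f}")
```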
THE CORE HELP FROM THE STATISTICIAN IS IN THE
DESIGN OF THE EXPERIMENT

Help with selecting conditions that relate to the objectives of the study

Selecting the Experimental Units

Deciding when REPLICATIONS exist

Determining the ORDER in which the experiment is to be carried out

THE DESIGN OF THE EXPERIMENT IS CRITICAL


COMPONENTS OF DESIGNED EXPERIMENTS

TREATMENT STRUCTURE:
Factors or Populations or Treatments related to the objectives of the experiment:
Brands of Product, Types of Uses of Product

DESIGN STRUCTURE OR EXPERIMENTAL UNITS:
Factors used in blocking the experimental units, as well as characteristics of the experimental units:
Washing Machine, Person Using Machine, Products evaluated in a Session by a Taste Panelist
RANDOMIZATION IS THE INSURANCE POLICY AGAINST INTRODUCING BIAS INTO THE STUDY

Selecting an appropriate Treatment Structure, necessary Design Structure, and required Randomization Process provides the Statistician with the information needed to construct an appropriate model

APPROPRIATE MODEL = BEST ANALYSIS


Key to the Design of the Experiment is the concept of REPLICATION

REPLICATION: the independent observation of a treatment

An Experimental Unit provides a Replication of the level of a Factor if the level is randomly assigned to the Experimental Unit and observed independently of the other Experimental Units

Must make sure that sub-samples are not considered to be Replications
ANALYZE THE DATA:
Use the COMPLETED DESIGNED EXPERIMENT and the data type to construct an appropriate analysis

Use Statistical Software – SAS, RS/1, JMP
A software package you know will provide valid results

The Statistician will provide the STATISTICAL interpretation of the results from the analyses – STATISTICAL ANALYSIS CONCLUSIONS

The Statistician will help the Researcher TRANSLATE the statistical analysis conclusions into subject matter conclusions

Discuss how the statistical analyses provide results that relate to the STATED OBJECTIVES of the study. The expected results should be written along with the objectives. Results that are not expected should be looked at carefully
Washing Machine Example:
4 brands or models – one machine each
3 types of laundry – Whites, Wash/wear, Denim
3 persons to operate the Machines

For each person:
Randomly assign the order of Brands
For each Brand, randomly assign the order of Types
Random Order of Brands for Person 1 (one machine per brand), with a random order of Types within each Machine:

Brand D   Brand B   Brand A   Brand C
Denim     W/W       White     White
White     Denim     W/W       Denim
W/W       White     Denim     W/W

Re-randomize for each Person
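A minimal sketch (my own illustration, not code from the course) of this two-stage randomization in Python: the order of Brands is randomized for each person, and the order of Types is re-randomized within each Brand (machine).

```python
import random

brands = ["A", "B", "C", "D"]
types = ["White", "Wash/wear", "Denim"]
persons = ["Person 1", "Person 2", "Person 3"]

for person in persons:
    # Whole-plot randomization: order in which this person runs the four machines.
    brand_order = random.sample(brands, len(brands))
    print(person)
    for brand in brand_order:
        # Sub-plot randomization: order of the three laundry types in this machine.
        type_order = random.sample(types, len(types))
        print(f"  Brand {brand}: {', '.join(type_order)}")
```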


Persons are Blocks of Machines

Machines are the Experimental Units for Brands, and the variance for comparing Brands is computed from Person*Brand

Compare BRANDS by using the variability among Machines treated alike

The Machines within a Person are Blocks for Types – three Loads per Machine

The Loads within a machine are the Experimental Units for Type and Brand*Type

Variability among Loads treated alike provides the measuring stick for comparing the levels of Type and Brand*Type

This Design involves Persons as Blocks and two sizes of Experimental Unit: Machine and Load
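One way to write down the corresponding split-plot model (my notation, offered as a sketch rather than the course's formula), with Person*Brand as the Machine-level (whole-plot) error for testing Brands and the residual as the Load-level error for testing Type and Brand*Type:

```latex
y_{ijk} = \mu + P_i + B_j + (PB)_{ij} + T_k + (BT)_{jk} + \varepsilon_{ijk}
```

where P_i is the Person (block) effect, B_j the Brand effect, (PB)_{ij} the whole-plot error (Machine within Person), T_k the Type effect, (BT)_{jk} the Brand-by-Type interaction, and ε_{ijk} the Load-level error.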
STATISTICIAN'S JOB – to figure out how the study is being run and help identify the type of design that is being used, which includes determining if more than one size of experimental unit is involved

This is accomplished BEST when the Statistician is involved at the Beginning of the Study
Involving the Statistician in the Beginning of the Study will
1.  Improve the chance of conducting a successful experiment
2.  Speed up the turnaround of the analyses, since the statistician was involved with the design
3.  Reduce the costs associated with the experiment – making sure the sample size is adequate to provide the needed detectable differences
¡  Strive  for  consistency  
¡  Enable short-cuts for frequent users
¡  Informative  feedback  
¡  Design  dialogs  to  yield  closure  
¡  Offer  simple  error  handling  
¡  Permit  easy  reversal  of  actions  
¡  Support  internal  locus  of  control  
¡  Reduce short-term memory load on user
¡  Consistency  
¡  Efficient  information  assimilation  by  user  
¡  Minimal  memory  load  on  user  
¡  Compatibility  between  data  display  and  data  entry  
¡  Flexibility  of  user  control  over  data  display  
¡  Consistency  
¡  Minimal  user  input  actions  
¡  Minimal  memory  load  on  user  
¡  Compatibility  between  data  entry  and  data  display  
¡  Flexible  user  control  
Scenario-Based Design framework (figure):
ANALYZE – analysis of stakeholders, field studies → Problem scenarios → claims about current practice
DESIGN – metaphors, information technology, HCI theory, guidelines → Activity scenarios → Information scenarios → Interaction scenarios → iterative analysis of usability claims and re-design
PROTOTYPE & EVALUATE – Usability specifications → formative evaluation → summative evaluation
¡  Formative  vs.  Summative  
¡  Analytic vs. Empirical
Iterative design cycle (figure): Reqs Analysis → Design → Develop → Evaluate → (many iterations, formative evaluation), ending with summative evaluation
¡  Analytic  Methods:  
▪  Usability  inspection,    Expert  review  
▪  Heuristic  Evaluation  
▪  Cognitive walk-through
¡  Empirical  Methods:  
▪  Usability  Testing  
▪  Field  or  lab  
▪  Observation,  problem  identification  
▪  Controlled  Experiment  
▪  Formal  controlled  scientific  experiment  
▪  Comparisons,  statistical  analysis  
¡  Ease of learning
▪  learning time, …
¡  Ease of use
▪  performance time, error rates, …
¡  User satisfaction
▪  surveys, …

Not "user friendly"


¡  Formative:    helps  guide  design  
¡  Early  in  design  process  
▪  when the architecture is finalized, then it's too late!
¡  A  few  users  
¡  Usability  problems,  incidents  
¡  Qualitative  feedback  from  users  
¡  Quantitative  usability  specification  
Scenario task                            Worst case   Planned target   Best case (expert)   Observed
Find the most expensive house for sale   1 min.       10 sec.          3 sec.               ??? sec.

¡  Set of benchmark tasks
▪  Easy to hard, specific to open-ended
▪  Coverage of different UI features
▪  E.g. "find the 5 most expensive houses for sale"
▪  Different types: learnability vs. performance
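A small sketch (hypothetical structure, not part of the lecture) of how a usability specification for a benchmark task could be recorded and checked against an observed time:

```python
from dataclasses import dataclass

@dataclass
class UsabilitySpec:
    task: str
    worst_case: float      # seconds
    planned_target: float  # seconds
    best_case: float       # seconds (expert)

    def meets_target(self, observed: float) -> bool:
        """True if the observed task time is within the planned target."""
        return observed <= self.planned_target

spec = UsabilitySpec("Find the most expensive house for sale", 60.0, 10.0, 3.0)
print(spec.meets_target(12.0))  # False: 12 s misses the 10 s planned target
```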
¡  Consent forms
▪  Not needed unless video-taping the user's face (new rule)
¡  Experimenters:
▪  Facilitator: instructs user
▪  Observers: take notes, collect data, video-tape screen
▪  Executor: runs the prototype if "faked"
¡  Users
▪  3-5 users, quality not quantity
¡  Goal:    mimic  real  life  
▪  Do  not  cheat  by  showing  them  how  to  use  the  UI!  
¡  Initial  instructions  
▪  "We are evaluating the system, not you."
¡  Repeat:  
▪  Give  user  a  task  
▪  Ask user to "think aloud"
▪  Observe,  note  mistakes  and  problems  
▪  Avoid  interfering,  hint  only  if  completely  stuck  
¡  Interview    
▪  Verbal  feedback  
▪  Questionnaire    
¡  ~1  hour  /  user  
¡  Note  taking  
▪  E.g. "&%$#@ user keeps clicking on the wrong button…"
¡  Verbal protocol: "think aloud"
▪  E.g. user thinks that button does something else…
¡  Rough quantitative measures
▪  HCI metrics: e.g. task completion time, …
¡  Interview feedback and surveys
¡  Video-tape screen & mouse
¡  Eye  tracking,  biometrics?  
¡  Initial  reaction:  
▪  "stupid user!", "that's developer X's fault!", "this sucks"
¡  Mature  reaction:  
▪  how  can  we  redesign  UI  to  solve  that  usability  problem?  
▪  the  user  is  always  right  
 

¡  Identify  usability  problems  


▪  Learning issues: e.g. can't figure out or didn't notice feature
▪  Performance  issues:    e.g.  arduous,  tiring  to  solve  tasks  
▪  Subjective  issues:    e.g.  annoying,  ugly  
¡  Problem  severity:    critical  vs.  minor  
Cost-importance table columns: Problem | Importance | Solutions | Cost | Ratio I/C
¡  Importance 1-5: (task effect, frequency)
▪  5 = critical, major impact on user, frequent occurrence
▪  3  =  user  can  complete  task,  but  with  difficulty  
▪  1  =  minor  problem,  small  speed  bump,  infrequent  
¡  Ratio  =  importance  /  cost  
▪  Sort  by  this  
▪  3  categories:    Must  fix,    next  version,    ignored  
¡  Simple  solutions  vs.  major  redesigns  
¡  Solve  problems  in  order  of:    importance/cost  
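A minimal sketch (my own data structure, with made-up example problems) of sorting usability problems by the importance/cost ratio and fixing the highest-ratio ones first:

```python
# Hypothetical usability problems with importance (1-5) and estimated fix cost.
problems = [
    {"problem": "Zoom feature not noticed", "importance": 4, "cost": 1},
    {"problem": "Ugly colour scheme",       "importance": 1, "cost": 2},
    {"problem": "Cannot undo a delete",     "importance": 5, "cost": 3},
]

for p in problems:
    p["ratio"] = p["importance"] / p["cost"]

# Highest ratio first: "must fix", then "next version", then "ignored".
for p in sorted(problems, key=lambda p: p["ratio"], reverse=True):
    print(f"{p['problem']}: ratio {p['ratio']:.2f}")
```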
¡  Example:  
▪  Problem: user didn't know he could zoom in to see more…
▪  Potential  solutions:  
▪  Better  zoom  button  icon,  tooltip  
▪  Add  a  zoom  bar  slider  (like  moosburg)  
▪  Icons  for  different  zoom  levels:    boundaries,  roads,  buildings  
▪  NOT: more "help documentation"!!!  You can do better.
¡  Iterate  
▪  Test,  refine,  test,  refine,  test,  refine,  …  
▪  Until?      Meets  usability  specification  
¡  Usability  Evaluation:  
▪  >=3  users:      Not  (tainted)  HCI  students  
▪  Simple  data  collection      (Biometrics  optional!)  
▪  Exploit  this  opportunity  to  improve  your  design  
¡  Report:  
▪  Procedure    (users,  tasks,  specs,  data  collection)  
▪  Usability  problems  identified,  specs  not  met  
▪  Design  modifications  
¡  Usability  test:  
▪  Formative:    helps  guide  design  
▪  Single  UI,  early  in  design  process  
▪  Few  users  
▪  Usability  problems,  incidents  
▪  Qualitative  feedback  from  users  
¡  Controlled  experiment:  
▪  Summative:    measure  final  result  
▪  Compare  multiple  UIs  
▪  Many  users,  strict  protocol  
▪  Independent  &  dependent  variables  
▪  Quantitative  results,  statistical  significance  
