
HUMAN-COMPUTER INTERACTION (HCI)

ARPAN DESAI
SUNY NEW PALTZ
desaia1@hawkmail.newpaltz.edu

SMIT KHATRI
SUNY NEW PALTZ
khatris1@hawkmail.newpaltz.edu

Outline

HCI
Emotion Recognition
Vision-Based Gesture Recognition
Laser Pointer
?

Problem Study
In today's scenario, mobile devices and smart systems have become very popular, so human-computer interaction is essential for communicating with them. The major problem is that, to deal with these systems, we have to establish various interaction techniques.
This talk deals with such techniques and their solutions.

Emotion Recognition through Speech Signal for HCI

The intent of this technique is to integrate speaker emotion recognition so as to diagnose seven different sentiments, i.e. anger, boredom, fear, disgust, happiness, neutral and sadness, with a generalized feature set in real time.
Two classifiers are used to recognize these emotions: a continuous hidden Markov model (HMM) and the open-source machine learning library LIBSVM.

FORMATION OF SPEAKER EMOTION RECOGNITION SYSTEM

[Block diagram] Speech input -> Feature extraction (1. MFCC, 2. SMFCC, 3. ECC, 4. TECC) -> Feature selection -> Classifier (1. HMM, 2. LIBSVM) -> Recognized emotion
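
Of the listed features, MFCC is the most widely available off the shelf. Below is a minimal sketch of the feature-extraction stage, assuming the librosa library and a pooled (mean/std) utterance-level vector; the file name and the choice of 13 coefficients are illustrative assumptions, not details from the talk.

    import librosa
    import numpy as np

    def extract_mfcc_features(wav_path, n_mfcc=13):
        # Load the speech signal (librosa resamples to 22050 Hz by default).
        signal, sr = librosa.load(wav_path)
        # Frame-level MFCCs: array of shape (n_mfcc, n_frames).
        mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)
        # Pool frames into one fixed-length utterance vector (mean and
        # standard deviation per coefficient), a simple generalized feature set.
        return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

    features = extract_mfcc_features("03a01Fa.wav")  # hypothetical EMO-DB file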

Speech Database:

It is the most important part of a speech emotion recognition system. The database is used to gather examples of the target emotions. Various databases are available, such as Berlin Emotional Speech, the Belfast database, the Expressive Speech database, and the Reading-Leeds database, but the Berlin Emotional Speech database is the most commonly used. It is used both to train and to test the classifier, and it is composed of an accumulation of sentences with different emotional content.
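
As a concrete example, Berlin Emotional Speech (EMO-DB) file names such as 03a01Fa.wav encode the emotion as the sixth character, using German initials. Below is a minimal sketch of building a labelled file list from it; the directory path is a placeholder assumption.

    import os

    # Emotion codes in EMO-DB file names (6th character, German initials).
    EMOTION_CODES = {"W": "anger", "L": "boredom", "E": "disgust",
                     "A": "fear", "F": "happiness", "T": "sadness",
                     "N": "neutral"}

    def load_labels(db_dir="emodb/wav"):  # path is a placeholder
        dataset = []
        for name in sorted(os.listdir(db_dir)):
            if name.endswith(".wav"):
                # e.g. "03a01Fa.wav" -> code "F" -> "happiness"
                dataset.append((os.path.join(db_dir, name),
                                EMOTION_CODES[name[5]]))
        return dataset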

Classifier:
Two classifiers are used in this technique:
1) Continuous HMM
2) LIBSVM toolkit
The continuous HMM classifies the sentiment based on the feature set, while the LIBSVM classifier is used to improve recognition efficiency.
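
Scikit-learn's SVC is itself a wrapper around LIBSVM, so it can stand in for the LIBSVM stage. A minimal training sketch follows; the random placeholder features stand in for the MFCC vectors above and do not reproduce any real result.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC  # backed by LIBSVM

    EMOTIONS = ["anger", "boredom", "disgust", "fear",
                "happiness", "neutral", "sadness"]

    # Placeholder data: one 26-dim feature vector per utterance
    # (EMO-DB has 535 utterances); replace with real MFCC features.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(535, 26))
    y = rng.choice(EMOTIONS, size=535)

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=0)

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    clf.fit(X_train, y_train)
    print("held-out accuracy:", clf.score(X_test, y_test))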

VISION BASED GESTURE RECOGNITION SYSTEM MODEL

[Block diagram] User -> Hand movement -> Camera (image input) -> Image capture -> Gesture recognition system -> Processing & updating object -> Display (user interface) -> User
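
A minimal sketch of the camera and image-capture front end using OpenCV, with a rough skin-tone segmentation standing in for the recognition stage; the HSV thresholds are illustrative assumptions and would need tuning per lighting and user.

    import cv2
    import numpy as np

    cap = cv2.VideoCapture(0)  # default webcam
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        # Rough skin-tone band in HSV; illustrative values only.
        mask = cv2.inRange(hsv, np.array([0, 40, 60]),
                           np.array([25, 180, 255]))
        mask = cv2.medianBlur(mask, 5)  # the model's filtering step
        cv2.imshow("hand mask", mask)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()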

CLASSIFICATION OF GESTURE BASED SYSTEM MODEL

Input Image
Obtain the input image using a webcam
Training images are taken from American Sign Language

Preprocessing
Initial step
Image segmentation
Filtering

Feature Extraction
Computes features over the large image quickly
Represents invariance properties

Classifier
Groups objects
Describes object shapes and movements
Validates gestures

Command Generation
Generates a command for every recognized gesture (see the sketch below)
Used by the human-computer interaction system
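
A minimal sketch of the command-generation stage: each validated gesture label is mapped to a system command. The gesture names and commands here are invented placeholders, not labels from the talk.

    # Hypothetical mapping from validated gesture labels to commands.
    COMMANDS = {
        "open_palm": "pause",
        "fist": "play",
        "swipe_left": "previous",
        "swipe_right": "next",
    }

    def generate_command(gesture):
        # Unrecognized or unvalidated gestures produce no command.
        return COMMANDS.get(gesture)

    print(generate_command("fist"))  # -> "play"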

LASER POINTER BASED HUMAN-COMPUTER INTERACTION SYSTEM

Problem Study
Emotion Recognition
Needs a larger emotional speech database.
The continuous HMM system alone is not efficient enough.
Vision-Based Gesture
Security
Range
Laser Pointer
Object selection may not be done smoothly.
It is sometimes difficult to stabilize the user's hand (a tracking sketch follows).
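
Both laser-pointer drawbacks come down to hand jitter. A common remedy, sketched below under assumed values for the brightness threshold and smoothing factor, is to locate the brightest spot in each camera frame and exponentially smooth its position.

    import cv2

    ALPHA = 0.3  # smoothing factor, assumed
    smoothed = None
    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # The laser dot is usually the brightest pixel in the frame.
        _, max_val, _, max_loc = cv2.minMaxLoc(gray)
        if max_val > 220:  # brightness threshold, assumed
            if smoothed is None:
                smoothed = max_loc
            else:
                # Exponential smoothing damps hand jitter.
                smoothed = (ALPHA * max_loc[0] + (1 - ALPHA) * smoothed[0],
                            ALPHA * max_loc[1] + (1 - ALPHA) * smoothed[1])
            cv2.circle(frame, (int(smoothed[0]), int(smoothed[1])),
                       8, (0, 255, 0), 2)
        cv2.imshow("laser tracking", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()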
