Part I: Concept
HMM Tutorial
The real world has structures and processes which have (or produce) observable outputs
Usually sequential (the process unfolds over time)
Cannot see the event producing the output
Problem: how to construct a model of the structure or process given only observations
HMM Background
Theory was published in mathematics journals that were not widely read by practicing engineers
There was insufficient tutorial material for readers to understand and apply the concepts
HMM Uses
Speech recognition
Text processing
Bioinformatics
Finance
HMM Overview
Machine learning method
Makes use of state machines
Based on probabilistic models
Useful in problems having sequential steps
Can only observe output from states, not the states themselves
State machine example: weather
(Diagram: a Sunny state with a self-transition probability of 0.8 and probabilities of 0.1 each of moving to the two other weather states)
What is the probability that the weather for the next 7 days will follow a given sequence?
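To make the question concrete, here is a minimal Python sketch that scores a 7-day sequence under a simple Markov chain. Only the 0.8/0.1/0.1 probabilities out of the sunny state come from the diagram; the state names and the other transition rows are assumptions made for illustration.

    # Transition probabilities: trans[current][next]. Only the "sunny" row
    # (0.8 stay, 0.1 / 0.1 leave) is taken from the diagram; the other rows
    # and the state names are illustrative assumptions.
    trans = {
        "sunny":  {"sunny": 0.8, "rainy": 0.1, "cloudy": 0.1},
        "rainy":  {"sunny": 0.2, "rainy": 0.6, "cloudy": 0.2},
        "cloudy": {"sunny": 0.3, "rainy": 0.3, "cloudy": 0.4},
    }

    def sequence_probability(sequence, today):
        """Probability of observing `sequence` over the coming days, given today's state."""
        prob, prev = 1.0, today
        for state in sequence:
            prob *= trans[prev][state]
            prev = state
        return prob

    # Probability that the next 7 days are all sunny, given that today is sunny:
    print(sequence_probability(["sunny"] * 7, "sunny"))   # 0.8 ** 7, about 0.21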
Coin toss:
Heads/tails sequence produced with 2 coins
You are in a room, separated by a wall from another person
The person behind the wall flips a coin and tells you the result
The coin selection and toss are hidden
You cannot observe the events, only the output (heads or tails) from the events
Problem is then to build a model to explain observed sequence of heads and tails
HMM Components
Probability of making a transition from one state to the next
Probability of emitting/observing a symbol at a particular state
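As a concrete illustration, the coin-toss example above can be written down as a set of HMM components using the standard notation (transition matrix A, emission matrix B, initial distribution pi). A minimal sketch; the probability values are assumptions chosen for illustration.

    import numpy as np

    states  = ["coin 1", "coin 2"]     # hidden states: which coin is being flipped
    symbols = ["H", "T"]               # observable output symbols

    pi = np.array([0.5, 0.5])          # initial state probabilities (assumed)
    A  = np.array([[0.7, 0.3],         # A[i, j] = P(next state j | current state i)
                   [0.4, 0.6]])
    B  = np.array([[0.9, 0.1],         # B[i, k] = P(emit symbol k | state i)
                   [0.2, 0.8]])        # e.g. coin 2 assumed biased towards tails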
HMM Components
Ergodic: every state of the model can be reached in a single step from every other state of the model
Bakis (left-right): as time increases, the state index increases or stays the same (transitions only move left to right)
Three problems must be solved for HMMs to be useful in real-world applications:
1) Evaluation
2) Decoding
3) Learning
Evaluation
Purpose: score how well a given model matches a given observation sequence
Assume HMMs (models) have been built for the words "home" and "work"
Given a speech signal, evaluation can determine the probability that each model represents the utterance
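Evaluation is usually computed with the forward algorithm. A minimal sketch, using the (A, B, pi) conventions from the components example above; the model_home / model_work names in the usage comment are hypothetical.

    import numpy as np

    def forward_probability(obs, A, B, pi):
        """P(observation sequence | model) via the forward algorithm.
        obs is a list of symbol indices into the columns of B."""
        alpha = pi * B[:, obs[0]]              # probability of starting and emitting obs[0]
        for o in obs[1:]:
            alpha = (alpha @ A) * B[:, o]      # step to the next state, emit the next symbol
        return alpha.sum()                     # sum over all possible final states

    # Hypothetical use: score an utterance against the "home" and "work" models
    # (each model_* being an (A, B, pi) triple) and pick the higher probability.
    # p_home = forward_probability(obs, *model_home)
    # p_work = forward_probability(obs, *model_work)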
Decoding
Given a model and a set of observations, what are the hidden states most likely to have generated the observations?
Useful to learn about internal model structure, determine state statistics, and so forth
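Decoding is typically done with the Viterbi algorithm. A minimal sketch under the same (A, B, pi) conventions as above.

    import numpy as np

    def viterbi(obs, A, B, pi):
        """Most likely hidden state sequence for the observed symbol indices."""
        T, N = len(obs), A.shape[0]
        delta = np.zeros((T, N))               # best score of any path ending in each state
        psi = np.zeros((T, N), dtype=int)      # back-pointers to the best predecessor
        delta[0] = pi * B[:, obs[0]]
        for t in range(1, T):
            scores = delta[t - 1][:, None] * A     # scores[i, j]: arrive in state j from state i
            psi[t] = scores.argmax(axis=0)
            delta[t] = scores.max(axis=0) * B[:, obs[t]]
        # Trace the best path backwards from the most probable final state.
        path = [int(delta[-1].argmax())]
        for t in range(T - 1, 0, -1):
            path.append(int(psi[t, path[-1]]))
        return list(reversed(path))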
Learning
Training is crucial: it allows optimal adaptation of model parameters to observed training data, i.e., building the best models of real-world phenomena
No known method exists for obtaining optimal parameters from data, only approximations
This can be a bottleneck in HMM usage
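The standard approximation is iterative re-estimation (Baum-Welch / EM), which only guarantees a locally optimal model, hence the bottleneck noted above. A compact numpy sketch of one re-estimation step, again using the (A, B, pi) conventions from the components example:

    import numpy as np

    def baum_welch_step(obs, A, B, pi):
        """One Baum-Welch re-estimation step; iterate until P(obs | model) stops improving."""
        T, N = len(obs), A.shape[0]
        # Forward and backward passes.
        alpha, beta = np.zeros((T, N)), np.zeros((T, N))
        alpha[0] = pi * B[:, obs[0]]
        for t in range(1, T):
            alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
        beta[-1] = 1.0
        for t in range(T - 2, -1, -1):
            beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
        likelihood = alpha[-1].sum()
        # Posterior state occupancies (gamma) and transition posteriors (xi).
        gamma = alpha * beta / likelihood
        xi = (alpha[:-1, :, None] * A[None, :, :]
              * B[:, obs[1:]].T[:, None, :] * beta[1:, None, :]) / likelihood
        # Re-estimated parameters.
        new_pi = gamma[0]
        new_A = xi.sum(axis=0) / gamma[:-1].sum(axis=0)[:, None]
        obs = np.asarray(obs)
        new_B = np.stack([gamma[obs == k].sum(axis=0) for k in range(B.shape[1])], axis=1)
        new_B /= gamma.sum(axis=0)[:, None]
        return new_A, new_B, new_pi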
Build models representing the hidden states of a process or structure using only observations
Use the models to evaluate the probability that a model represents a particular observation sequence
Use the evaluation information in an application to recognize speech, parse addresses, and many other tasks
Part II: Application
Provide a uniform view of several computer science bibliographic Web data sources
An automated web information extraction system that requires little human input
HMMs are used to parse unstructured bibliographic records into a structured format (an NLP task)
Approach
1) Provide seed database of structured records
2) Extract raw records from relevant Web pages
3) Match structured records to raw records
4) Train HMM-based parser
5) Parse unmatched raw records into structured records
6) Merge new structured records into database
AutoBib Architecture
Step 1 - Seeding
Take a small collection of BibTeX-format records and insert it into the database
A cleaning step normalizes record fields
Examples:
Step 2 - Extraction
User specifies:
Which Web pages to extract from
How to follow next-page links for multiple pages
Subtree of Interest = largest subtree of HTML tags
Record separators = frequent HTML tags
Tokenized Records
Step 3 - Matching
Match at least one author in R (the raw record) to an author in S (the structured record)
S.year must appear in R
If S.pages exists, R must contain it
S.title is approximately contained in R
Approximate string matching is done with Levenshtein edit distance
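A minimal sketch of the Levenshtein distance used for the approximate containment test; the acceptance threshold would be a tuning choice and is not specified here.

    def levenshtein(a, b):
        """Edit distance between strings a and b (insertions, deletions, substitutions)."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, start=1):
            curr = [i]
            for j, cb in enumerate(b, start=1):
                curr.append(min(prev[j] + 1,                # delete ca
                                curr[j - 1] + 1,            # insert cb
                                prev[j - 1] + (ca != cb)))  # substitute ca with cb
            prev = curr
        return prev[-1]

    # e.g. a title field with a small typo still matches within a small distance:
    # levenshtein("Hidden Markov Modles", "Hidden Markov Models") == 2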
Step 4 - Training
For each pair of R and S that match, annotate the tokens in the raw record with field names
Annotated raw records are fed into the HMM parser in order to learn:
State transition probabilities
Symbol probabilities at each state
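Because the matched records arrive already annotated with field names, both sets of probabilities can be estimated by simple counting. A sketch; the (token, label) record format and the field names shown are assumptions made for illustration.

    from collections import Counter, defaultdict

    def estimate_parser_hmm(annotated_records):
        """Estimate transition and symbol probabilities from annotated raw records.
        Each record is assumed to be a list of (token, field_label) pairs,
        e.g. [("J.", "author"), ("Geng", "author"), (",", "delimiter"), ...]."""
        trans, emit = defaultdict(Counter), defaultdict(Counter)
        for record in annotated_records:
            prev = "start"
            for token, label in record:
                trans[prev][label] += 1        # count state-to-state transitions
                emit[label][token] += 1        # count symbols emitted in each state
                prev = label
            trans[prev]["end"] += 1
        # Normalise the counts into probabilities.
        A = {s: {t: n / sum(c.values()) for t, n in c.items()} for s, c in trans.items()}
        B = {s: {w: n / sum(c.values()) for w, n in c.items()} for s, c in emit.items()}
        return A, B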
A key consideration is the HMM structure for navigating record fields (fields and delimiters)
Special states
start, end
Normal states
There are multiple delimiter and tag states, one for each normal state
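A small sketch of how such a state set might be enumerated, with start/end special states plus a delimiter and a tag state per normal state; the field names are assumptions, and the actual structure used is the one shown in the sample HMM below.

    fields = ["author", "title", "year", "pages"]       # assumed field set
    states = ["start", "end"]                           # special states
    for f in fields:
        states += [f,                                   # normal state for the field
                   f"delimiter_{f}",                    # delimiter state for the field
                   f"tag_{f}"]                          # tag state for the field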
Sample HMM
(Method 3)
Source: http://www.cs.duke.edu/~geng/autobib/web/hmm.jpg
Step 5 - Conversion
Parse unmatched raw records into structured records using the HMM parser
Matched raw records can be converted directly without parsing because they were annotated in the matching step
Step 6 - Merging
Merge new structured records into the database
The initial seed database has now grown
New records will be used for improved matching on the next run
Evaluation
Success rate = # of tokens labeled by HMM / # of tokens labeled by a person
DBLP: 98.9%
CSWD (CompuScience WWW-Database): 93.4%
Advantages
Disadvantages
Not completely automatic
May require manual markup
Size of training data may be an issue
Other methods
Wrappers
Wrapper induction
Requires manual training
Not always accommodating to changing structure
Syntax-based; no semantic labeling
E-Commerce
Rather than navigating to and searching many sites, users can consult a single site
References
Concept:
Rabiner, L. R. (1989). A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition. Proceedings of the IEEE, 77(2), 257-285.
Application:
Geng, J. and Yang, J. (2004). Automatic Extraction of Bibliographic Information on the Web. Proceedings of the 8th International Database Engineering and Applications Symposium (IDEAS04), 193-204.