
Tutorial Session
September 18, 2019

Gaussian Mixture Model
Hidden Markov Model
Artificial Neural Network
Particle Swarm Optimization


GMM

- Load the GMM.mat data (4-D), which consists of samples drawn from two
  different Gaussian distributions, and save the data in X.
- Initialize the means µk and covariance matrices Σk, where k = 1, 2,
  with random numbers using "rand".

load GMM.mat       % load the data; store the N-by-4 samples in X
mu1 = rand(4, 1)
mu2 = rand(4, 1)
Sigma1 = rand(4)   % note: rand(4) is not a valid covariance matrix;
Sigma2 = rand(4)   % e.g. Sigma1 = Sigma1 * Sigma1' makes it symmetric positive definite



- Initialize the prior probabilities π1 and π2 using "rand" and
  normalize them by their sum so that π1 + π2 = 1.
pi1 = rand
pi2 = rand
s = pi1 + pi2   % normalize with the original sum, not the updated pi1
pi1 = pi1/s
pi2 = pi2/s



- E-Step: Evaluate the responsibility factors using the current
  parameters:

  γ(z_nk) = π_k N(x_n | µ_k, Σ_k) / Σ_{j=1}^{2} π_j N(x_n | µ_j, Σ_j),  k = 1, 2
Pseudo Code:
c1 = 1/sqrt((2*pi)^4 * det(Sigma1))   % Gaussian normalization constants;
c2 = 1/sqrt((2*pi)^4 * det(Sigma2))   % they do not cancel when Sigma1 ~= Sigma2
for n = 1 : 1 : N
    d1 = X(n, :)' - mu1
    d2 = X(n, :)' - mu2
    denom = pi1 * c1 * exp(-d1' * inv(Sigma1) * d1/2) ...
          + pi2 * c2 * exp(-d2' * inv(Sigma2) * d2/2)
    gamma(n, 1) = pi1 * c1 * exp(-d1' * inv(Sigma1) * d1/2)/denom
    gamma(n, 2) = pi2 * c2 * exp(-d2' * inv(Sigma2) * d2/2)/denom
end


- M-Step: Re-estimate the parameters using the responsibilities:

  µ_k^new = (1/N_k) Σ_{n=1}^{N} γ(z_nk) x_n

  Σ_k^new = (1/N_k) Σ_{n=1}^{N} γ(z_nk) (x_n − µ_k^new)(x_n − µ_k^new)^T

  π_k^new = N_k / N,  where N_k = Σ_{n=1}^{N} γ(z_nk)

Pseudo Code:
N1 = 0; N2 = 0;
for n = 1 : 1 : N
    N1 = N1 + gamma(n, 1)
    N2 = N2 + gamma(n, 2)
end
pi1_new = N1/N
pi2_new = N2/N
mu1_new = zeros(4, 1)
mu2_new = zeros(4, 1)
for n = 1 : 1 : N
    mu1_new = mu1_new + (1/N1) * gamma(n, 1) * X(n, :)'
    mu2_new = mu2_new + (1/N2) * gamma(n, 2) * X(n, :)'
end
Sigma1_new = zeros(4)
Sigma2_new = zeros(4)
for n = 1 : 1 : N   % the new means must be final before this loop
    d1 = X(n, :)' - mu1_new
    d2 = X(n, :)' - mu2_new
    Sigma1_new = Sigma1_new + (1/N1) * gamma(n, 1) * (d1 * d1')
    Sigma2_new = Sigma2_new + (1/N2) * gamma(n, 2) * (d2 * d2')
end
Save the new values for µk, Σk and πk into the old variables.
- Evaluate the log-likelihood function:

  ln P(X | µ, Σ, π) = Σ_{n=1}^{N} ln { Σ_{k=1}^{2} π_k N(x_n | µ_k, Σ_k) }

Pseudo Code:
J = 0;   % J is the log-likelihood; c1, c2 as in the E-Step
for n = 1 : 1 : N
    d1 = X(n, :)' - mu1
    d2 = X(n, :)' - mu2
    J = J + log(pi1 * c1 * exp(-d1' * inv(Sigma1) * d1/2) ...
              + pi2 * c2 * exp(-d2' * inv(Sigma2) * d2/2))
end


- Repeat the E-Step and M-Step for a number of iterations and check for
  the convergence of the log-likelihood; use a for loop for this, as in
  the sketch below.
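A minimal sketch of this outer EM loop, assuming the E-Step, M-Step and
log-likelihood pseudocode above is placed in the loop body; maxIter and
the tolerance tol are illustrative choices, not values from the booklet.

maxIter = 100;
tol = 1e-6;
J_old = -Inf;
for iter = 1 : 1 : maxIter
    % E-Step: compute gamma(n, k) from the current parameters
    % M-Step: compute mu1_new, ..., pi2_new and overwrite the old values
    % Evaluate the log-likelihood J
    if abs(J - J_old) < tol   % converged
        break
    end
    J_old = J;
end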


HMM

Model 1: p11 = 0.8, p21 = 0.3; h11 = 0.9, h21 = 0.2
Model 2: p11 = 0.2, p21 = 0.7; h11 = 0.1, h21 = 0.8



- Create the arrays DATA1, DATA2, STATE1 and STATE2 with the given
  data.
- Create the matrices of the state-transition and emission
  probabilities p and h for both models.
- Find the log-likelihood under both models and check which model gives
  the maximum likelihood, as in the sketch below.
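A minimal sketch of the comparison, assuming two states and two
observation symbols coded as 1 and 2 (so the missing entries follow by
normalization, e.g. p12 = 1 - p11), and assuming DATA1 and STATE1 are
row vectors of these codes; adapt the indexing to the booklet's actual
layout.

p1 = [0.8 0.2; 0.3 0.7];   % Model 1 transitions, p1(i, j) = P(j | i)
h1 = [0.9 0.1; 0.2 0.8];   % Model 1 emissions,   h1(i, o) = P(o | i)
p2 = [0.2 0.8; 0.7 0.3];   % Model 2 transitions
h2 = [0.1 0.9; 0.8 0.2];   % Model 2 emissions

L1 = log(h1(STATE1(1), DATA1(1)));   % first observation
L2 = log(h2(STATE1(1), DATA1(1)));
for n = 2 : 1 : length(DATA1)
    L1 = L1 + log(p1(STATE1(n-1), STATE1(n))) + log(h1(STATE1(n), DATA1(n)));
    L2 = L2 + log(p2(STATE1(n-1), STATE1(n))) + log(h2(STATE1(n), DATA1(n)));
end
% The model with the larger log-likelihood fits the sequence better.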


ANN

1. Here we consider a 2-class classification problem.


2. Create the arrays "TDATA1", "TDATA2", "VDATA1" and "VDATA2". Save
   both "TDATA1" and "TDATA2" into the single array "TDATA"; similarly,
   save both validation arrays into the single array "VDATA".

TDATA = [TDATA1, TDATA2]

VDATA = [VDATA1, VDATA2]


3. Initialize the weight matrices W (2×3) and V (3×2) and the biases
   b (1×3) and a (1×2) as given.
4. Take the target as

   t = [1 1 1 1 1 0 0 0 0 0;
        0 0 0 0 0 1 1 1 1 1]

   A sketch of steps 3 and 4 follows.
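A minimal sketch of this initialization, with random starting weights
standing in for the booklet's given values, and an illustrative
learning rate eta (not specified here).

w = rand(2, 3); v = rand(3, 2);   % W (2x3) and V (3x2)
b = rand(1, 3); a = rand(1, 2);   % biases
t = [ones(1, 5) zeros(1, 5); zeros(1, 5) ones(1, 5)];  % targets (step 4)
eta = 0.1;                        % learning rate (illustrative)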


5. Obtain the hidden-layer outputs, the output-layer outputs and the
   error vectors. (The h(1 − h) factor in step 7 assumes a logistic
   hidden layer, hence the sigmoid below.)
   for i = 1 : 1 : 10
       h(i, :) = 1 ./ (1 + exp(-(b + TDATA(:, i)' * w)))
       o(i, :) = a + h(i, :) * v
       e(i, :) = t(:, i)' - o(i, :)
   end
6. SSE = sum(sum(e .* e))


7. Update the weights using the following:
   w11(t + 1) = w11(t) + η h1(1 − h1) i1 [e1 v11 + e2 v12]
   w12(t + 1) = w12(t) + η h2(1 − h2) i1 [e1 v21 + e2 v22]
   w13(t + 1) = w13(t) + η h3(1 − h3) i1 [e1 v31 + e2 v32]
   w21(t + 1) = w21(t) + η h1(1 − h1) i2 [e1 v11 + e2 v12]
   w22(t + 1) = w22(t) + η h2(1 − h2) i2 [e1 v21 + e2 v22]
   w23(t + 1) = w23(t) + η h3(1 − h3) i2 [e1 v31 + e2 v32]
   v11(t + 1) = v11(t) + η h1 e1
   v12(t + 1) = v12(t) + η h1 e2
   v21(t + 1) = v21(t) + η h2 e1
   v22(t + 1) = v22(t) + η h2 e2
   v31(t + 1) = v31(t) + η h3 e1
   v32(t + 1) = v32(t) + η h3 e2



   a1(t + 1) = a1(t) + η e1
   a2(t + 1) = a2(t) + η e2
   b1(t + 1) = b1(t) + η h1(1 − h1) [e1 v11 + e2 v12]
   b2(t + 1) = b2(t) + η h2(1 − h2) [e1 v21 + e2 v22]
   b3(t + 1) = b3(t) + η h3(1 − h3) [e1 v31 + e2 v32]
   A matrix-form sketch of these updates follows.
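In matrix form, all of the step-7 updates collapse to a few lines. A
minimal sketch for one training sample i, assuming the shapes from
step 3 (w is 2×3, v is 3×2, b is 1×3, a is 1×2), a logistic hidden
layer, and the illustrative learning rate eta from the sketch above:

x = TDATA(:, i);                     % current input (2x1)
h = 1 ./ (1 + exp(-(b + x' * w)));   % hidden output (1x3)
o = a + h * v;                       % network output (1x2)
e = t(:, i)' - o;                    % error (1x2)
delta = (e * v') .* h .* (1 - h);    % backpropagated hidden term (1x3)
w = w + eta * x * delta;             % same as the w updates above
b = b + eta * delta;                 % same as the b updates
v = v + eta * h' * e;                % same as the v updates
a = a + eta * e;                     % same as the a updates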
8. Repeat steps 5 to 7 for 5 epochs and check for the convergence of
   SSE.
9. Plot the number of epochs versus SSE using plot (see the sketch
   below).
10. Find the output, as in step 5, using the final updated weights.
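For step 9, assuming the SSE of each epoch was stored in a vector
SSE_hist during training, a one-line plot suffices:

plot(1:5, SSE_hist); xlabel('epoch'); ylabel('SSE')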


PSO

 
Objective: obtain (x, y)^T that minimizes the function
(x − 0.2)^2 + (y − 0.3)^2.
1. Create the arrays for the INITIAL and TENTATIVE positions of the
   particles in the swarm, as given in the booklet.
2. Compute the function values of all the particles at both the
   INITIAL and TENTATIVE positions. Pseudo Code:


for i = 1 : 1 : N
    fval_initial(i) = (INITIAL(1, i) - 0.2)^2 + (INITIAL(2, i) - 0.3)^2
    fval_tentative(i) = (TENTATIVE(1, i) - 0.2)^2 + (TENTATIVE(2, i) - 0.3)^2
end
3. Obtain the GLOBAL best TENTATIVE position of the particles, i.e.
   the one that gives the minimum function value.
4. Obtain the next positions of the particles using

   NEXT = INITIAL + αl (TENTATIVE − INITIAL) + αg (GLOBAL − INITIAL)

   where αl and αg are constants.
5. Compute the function values at the NEXT positions of the particles,
   fval_next.



6. Compare the function values at the TENTATIVE and NEXT positions of
   the particles to obtain the NEXT_TENTATIVE positions:
   for i = 1 : 1 : 10
       if fval_next(i) < fval_tentative(i)
           NEXT_TENTATIVE(:, i) = NEXT(:, i)
       else
           NEXT_TENTATIVE(:, i) = TENTATIVE(:, i)   % keep the old position
       end % end of if
   end % end of for
7. Repeat steps 2-6 for some iterations, taking the NEXT and
   NEXT_TENTATIVE positions as the INITIAL and TENTATIVE positions for
   the next iteration. After the last iteration, identify the particle
   position that gives the minimum function value. A sketch of the full
   loop follows.
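A minimal sketch of the outer loop (step 7), assuming 2-by-N arrays
INITIAL and TENTATIVE have been created as in step 1; alpha_l, alpha_g
and numIter are illustrative values, not taken from the booklet.

alpha_l = 0.5; alpha_g = 0.5; numIter = 50;
f = @(P) (P(1, :) - 0.2).^2 + (P(2, :) - 0.3).^2;  % objective, vectorized
for it = 1 : 1 : numIter
    fval_tentative = f(TENTATIVE);
    [~, gbest] = min(fval_tentative);              % step 3: global best
    GLOBAL = repmat(TENTATIVE(:, gbest), 1, size(INITIAL, 2));
    NEXT = INITIAL + alpha_l * (TENTATIVE - INITIAL) ...
                   + alpha_g * (GLOBAL - INITIAL); % step 4
    fval_next = f(NEXT);                           % step 5
    better = fval_next < fval_tentative;           % step 6
    NEXT_TENTATIVE = TENTATIVE;
    NEXT_TENTATIVE(:, better) = NEXT(:, better);
    INITIAL = NEXT;                                % step 7: roll forward
    TENTATIVE = NEXT_TENTATIVE;
end
[fmin, ibest] = min(f(TENTATIVE));                 % best position found
xy_best = TENTATIVE(:, ibest)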
