====Deliverable:====
  
For this lab, you will implement the Expectation Maximization algorithm on the Old Faithful dataset.  This involves learning the parameters of a Gaussian mixture model.  Your notebook should produce a visualization of the progress of the algorithm.  The final figure could look something like this (the plots don't have to be arranged in subplots):
  
{{:cs401r_w2016:lab5_em.png?direct&600|}}
  * 20% Correctly updates covariances
  * 20% Correctly updates mixing weights
  * 10% Final plot(s) are tidy and legible
  
----
====Description:====
  
For this lab, we will be using the Expectation Maximization (EM) method to **learn** the parameters of a Gaussian mixture model.  These parameters will reflect cluster structure in the data -- in other words, we will learn probabilistic descriptions of clusters in the data.

For this lab, you will use the Old Faithful dataset, which you can download here:

[[http://hatch.cs.byu.edu/courses/stat_ml/old_faithful.mat|Old Faithful dataset]]
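The data is stored as a MATLAB .mat file, so in Python one way to load it is with scipy.io.loadmat. The snippet below is only a sketch: the key that holds the data inside the file (guessed here as 'data') is an assumption, so print the dictionary's keys to see what the file actually contains.

<code python>
import scipy.io

# load the .mat file into a dictionary of numpy arrays
mat = scipy.io.loadmat('old_faithful.mat')
print(mat.keys())   # inspect the keys to find the data matrix

# NOTE: 'data' is an assumed key name -- substitute whatever key the file really uses
data = mat['data']
print(data.shape)   # expect one row per observation and two columns
</code>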

The equations for implementing the EM algorithm are given in MLAPP 11.4.2.2 - 11.4.2.3.

The algorithm is:

  - Compute the responsibilities $r_{ik}$ (Eq. 11.27)
  - Update the mixing weights $\pi_k$ (Eq. 11.28)
  - Update the means $\mu_k$ (Eq. 11.31)
  - Update the covariances $\Sigma_k$ (Eq. 11.32)

Now, repeat until convergence.
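One iteration of these four updates might look roughly like the sketch below. This is not a required structure, just a guide: it assumes the data is an (N, 2) numpy array named data, that mus and mws match the initialization given below (means stored as columns), and that sigs is a hypothetical name for a list of the covariance matrices.

<code python>
import numpy as np
from scipy.stats import multivariate_normal

def em_step(data, mus, sigs, mws):
    """One EM iteration for a Gaussian mixture model (a sketch, not a required interface)."""
    N = data.shape[0]
    K = len(mws)

    # E step: responsibilities r[i,k], proportional to pi_k * N(x_i | mu_k, Sigma_k)  (Eq. 11.27)
    r = np.zeros((N, K))
    for k in range(K):
        r[:, k] = mws[k] * multivariate_normal.pdf(data, mean=mus[:, k], cov=sigs[k])
    r /= r.sum(axis=1, keepdims=True)

    # M step
    Nk = r.sum(axis=0)                  # effective number of points assigned to each cluster
    new_mws = Nk / N                    # mixing weights (Eq. 11.28)
    new_mus = (data.T @ r) / Nk         # means, one column per cluster (Eq. 11.31)

    new_sigs = []
    for k in range(K):
        diff = data - new_mus[:, k]     # deviations from the updated mean, shape (N, 2)
        new_sigs.append((r[:, k, None] * diff).T @ diff / Nk[k])   # covariance (Eq. 11.32)

    return r, new_mus, new_sigs, new_mws
</code>

Calling something like this in a loop, and re-plotting the data colored by the responsibilities after each call, produces the kind of progress visualization asked for in the deliverable.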

Since the EM algorithm is deterministic, and since precise initial conditions for your algorithm are given below, the progress of your algorithm should closely match the reference image shown above.

**Note: To help our TA better grade your notebook, you should use the following initial parameters:**
  
<code python>
  
# the Gaussian means (as column vectors -- i.e., the mean for Gaussian 0 is mus[:,0])
mus = np.asarray( [[-1.17288986, -0.11642103],
                   [-0.16526981,  0.70142713]])
  
# The Gaussian mixing weights
mws = [ 0.68618439, 0.31381561 ]  # called alpha in the slides
  
</code>
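As a quick sanity check on the column-vector convention mentioned in the comment above, the following snippet (just an illustration) prints the shape of the given means and the mean of Gaussian 0:

<code python>
import numpy as np

mus = np.asarray( [[-1.17288986, -0.11642103],
                   [-0.16526981,  0.70142713]])

print(mus.shape)    # (2, 2): two dimensions by two Gaussians
print(mus[:, 0])    # the mean of Gaussian 0: [-1.17288986, -0.16526981]
</code>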
<code python>

scipy.stats.multivariate_normal.pdf

# scatters a set of points; check out the "c" keyword argument to change color, and the "s" arg to change the size
plt.scatter
plt.xlim # sets the range of values for the x axis
plt.ylim # sets the range of values for the y axis

# to check the shape of an array, use the .shape member
foo = np.random.randn( 100, 200 )
foo.shape # the tuple (100, 200)

# to transpose a vector, you can use the .T operator
foo = np.atleast_2d( [42, 43] ) # this is a row vector
foo.T # this is a column vector

import numpy as np
np.atleast_2d
np.sum

</code>
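For instance, the pdf and scatter calls above might be combined as in the sketch below. The random data here is only a stand-in to keep the snippet self-contained; replace it with the Old Faithful points:

<code python>
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import multivariate_normal

# stand-in data just to demonstrate the calls -- replace with the Old Faithful data
data = np.random.randn(200, 2)

# density of every point under a single 2-D Gaussian
p = multivariate_normal.pdf(data, mean=np.zeros(2), cov=np.eye(2))

# scatter the points, colored by that density, with a fixed marker size
plt.scatter(data[:, 0], data[:, 1], c=p, s=20)
plt.xlim(-3, 3)   # fix the axis ranges so successive plots are comparable
plt.ylim(-3, 3)
plt.show()
</code>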