cs401r_w2016:lab13 [2016/01/13 17:24] → [2021/06/30 23:42] (current)
====Deliverable:====

For this lab, you will implement the Expectation Maximization algorithm on the Old Faithful dataset. This involves learning the parameters of a Gaussian mixture model. Your notebook should produce a visualization of the progress of the algorithm. The final figure could look something like this (they don't have to be arranged in subplots):

{{:cs401r_w2016:lab5_em.png?direct&600|}}
  * 20% Correctly updates covariances
  * 20% Correctly updates mixing weights
  * 10% Final plot(s) are tidy and legible
----
====Description:====
For this lab, we will be using the Expectation Maximization (EM) method to **learn** the parameters of a Gaussian mixture model. These parameters will reflect cluster structure in the data -- in other words, we will learn probabilistic descriptions of clusters in the data.

You will use the Old Faithful dataset, which you can download here:

[[https://www.dropbox.com/s/h8b67cg8wff7bg0/old_faithful.mat?dl=0|Old Faithful dataset]]

**The first thing you should do is mean-center your data (i.e., compute the mean of the data, then subtract it from each datapoint).** (If you don't do this, you'll get zero probabilities for all of your responsibilities, given the initial conditions discussed below.)

The equations for implementing the EM algorithm are given in MLAPP 11.4.2.2 - 11.4.2.3.
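
The mean-centering step might look like this (a minimal sketch; the array values here are made-up placeholders, not the actual dataset):

<code python>
import numpy as np

# placeholder data standing in for the Old Faithful points (one row per datapoint)
X = np.array([[3.6, 79.0],
              [1.8, 54.0],
              [3.3, 74.0]])

# mean-center: compute the per-column mean, then subtract it from every datapoint
X = X - X.mean(axis=0)
</code>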
+ | |||
+ | The algorithm is: | ||
+ | |||
+ | - Compute the responsibilities $r_{ik}$ (Eq. 11.27) | ||
+ | - Update the mixing weights $\pi_k$ (Eq. 11.28) | ||
+ | - Update the covariances $\Sigma_k$ (Eq. 11.32) | ||
+ | - Update the means $\mu_k$ (Eq. 11.31) | ||
+ | |||
Now, repeat until convergence. Note that if you change the order of operations, you may get slightly different convergence behavior than the reference image.
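
A single iteration in the order listed above could be sketched roughly like this (an illustrative sketch, not the reference solution; ''em_step'' is a made-up name, and with this ordering the covariance update uses the pre-update means):

<code python>
import numpy as np
from scipy.stats import multivariate_normal

def em_step(X, mus, covs, mws):
    # E-step: responsibilities r[i,k] proportional to pi_k * N(x_i | mu_k, Sigma_k) (Eq. 11.27)
    N, K = X.shape[0], len(mws)
    r = np.zeros((N, K))
    for k in range(K):
        r[:, k] = mws[k] * multivariate_normal.pdf(X, mean=mus[:, k], cov=covs[k])
    r /= r.sum(axis=1, keepdims=True)

    # M-step, in the order given above
    rk = r.sum(axis=0)            # effective number of points per cluster
    mws = rk / N                  # mixing weights (Eq. 11.28)
    for k in range(K):
        diff = X - mus[:, k]      # deviations from the (pre-update) mean
        covs[k] = (r[:, k, None, None] *
                   np.einsum('ni,nj->nij', diff, diff)).sum(axis=0) / rk[k]  # Eq. 11.32
        mus[:, k] = (r[:, k] @ X) / rk[k]                                    # Eq. 11.31
    return mus, covs, mws
</code>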
+ | |||
+ | Since the EM algorithm is deterministic, and since precise initial conditions for your algorithm are given below, the progress of your algorithm should closely match the reference image shown above. | ||
+ | |||
+ | For your visualization, please print out at least nine plots. These should color each datapoint using $r_{ik}$, and they should plot the means and covariances of the Gaussians. See the hints section for how to plot an ellipse representing the 95% confidence interval of a Gaussian, given an arbitrary covariance matrix. | ||
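
One common approach to such an ellipse is to eigendecompose the covariance and scale a unit circle (a sketch; ''cov_ellipse'' is a made-up helper name, and the 95% radius comes from the chi-squared quantile with 2 degrees of freedom):

<code python>
import numpy as np
from scipy.stats import chi2

def cov_ellipse(mu, Sigma, conf=0.95, n=100):
    # the eigenvectors/eigenvalues of Sigma give the ellipse axes and their lengths
    vals, vecs = np.linalg.eigh(Sigma)
    scale = np.sqrt(chi2.ppf(conf, df=2))        # ~2.45 for a 95% interval in 2D
    t = np.linspace(0, 2 * np.pi, n)
    circle = np.stack([np.cos(t), np.sin(t)])    # points on the unit circle
    pts = mu[:, None] + scale * (vecs @ (np.sqrt(vals)[:, None] * circle))
    return pts  # shape (2, n); draw with plt.plot(pts[0], pts[1])
</code>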
+ | |||
+ | |||
+ | **Note: To help our TA better grade your notebook, you should use the following initial parameters:** | ||
<code python> | <code python> | ||
# the Gaussian means (as column vectors -- i.e., the mean for Gaussian 0 is mus[:,0])
mus = np.asarray( [[-1.17288986, -0.11642103],
                   [-0.16526981,  0.70142713]])
# The Gaussian mixing weights
mws = [ 0.68618439, 0.31381561 ] # called alpha in the slides
</code>
scipy.stats.multivariate_normal.pdf
# scatters a set of points; check out the "c" keyword argument to change color, and the "s" arg to change the size
plt.scatter
plt.xlim # sets the range of values for the x axis
plt.ylim # sets the range of values for the y axis

# to check the shape of an array, use the .shape member
foo = np.random.randn( 100, 200 )
foo.shape # a tuple with values (100, 200)

# to transpose a vector, you can use the .T operator
foo = np.atleast_2d( [42, 43] ) # this is a row vector
foo.T # this is a column vector

import numpy as np
np.atleast_2d
np.sum
</code>
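
Putting a few of these hints together, coloring datapoints by responsibility might look like this (the points and responsibilities below are made-up placeholders):

<code python>
import numpy as np
import matplotlib
matplotlib.use('Agg')  # render off-screen; drop this line in a notebook
import matplotlib.pyplot as plt

# placeholder datapoints and responsibilities for K=2 clusters
X = np.array([[0.0, 1.0], [1.0, 0.0], [0.5, 0.5]])
r = np.array([[0.9, 0.1], [0.2, 0.8], [0.5, 0.5]])

# with two clusters, r[:,0] alone picks each point's color along a colormap
plt.scatter(X[:, 0], X[:, 1], c=r[:, 0], s=30, cmap='coolwarm')
plt.xlim(-1, 2)
plt.ylim(-1, 2)
plt.savefig('em_scatter.png')
</code>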