====Objective:====
  
To gain experience coding a DNN architecture and learning program end-to-end, and to gain experience with Siamese networks and ResNets.
  
----
====Deliverable:====
  
For this lab, you will need to implement a simple face similarity detector.
  
  - You must implement a Siamese network that accepts two input images
  - The network must output the probability that the two images are the same class
  - Your implementation should use a ResNet architecture
  
You should turn in the following:
  
  - A TensorBoard screenshot showing that your architecture is, indeed, a Siamese architecture
  - Your code
  - A small writeup (<1/2 page) describing your test/training split, your ResNet architecture, and the final performance of your classifier.
  
You should use the [[http://www.openu.ac.il/home/hassner/data/lfwa/|Labeled Faces in the Wild-a]] dataset (also available for [[http://liftothers.org/byu/lfwa.tar.gz|download from liftothers]]).
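On a Unix-like machine, one way to fetch and unpack the dataset from the mirror above (assuming the archive is gzipped, as its name suggests) is:

<code bash>
wget http://liftothers.org/byu/lfwa.tar.gz
tar -xzf lfwa.tar.gz
</code>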
  
----
====Grading standards:====
Your notebook will be graded on the following:
  
  * 35% Correct implementation of Siamese network
  * 35% Correct implementation of ResNet
  * 20% Reasonable effort to find a good-performing topology
  * 10% Results writeup
  
----
====Description:====
  
----
====Hints:====
  
To help you get started, here's a simple script that will load all of the images and calculate labels.  It assumes that the face database has been unpacked in the current directory, and that there exists a file called ''list.txt'' that was generated with the following command:
  
<code bash>
find ./lfw2/ -name \*.jpg > list.txt
</code>
  
After running this code, the data will be in the ''data'' tensor, and the labels will be in the ''labels'' tensor:
  
<code python>

from PIL import Image
import numpy as np

# assumes list.txt is a list of filenames, formatted as
#
# ./lfw2//Aaron_Eckhart/Aaron_Eckhart_0001.jpg
# ./lfw2//Aaron_Guiel/Aaron_Guiel_0001.jpg
# ...
#

files = open( './list.txt' ).readlines()

data = np.zeros(( len(files), 250, 250 ))
labels = np.zeros(( len(files), 1 ))

# a little hash map mapping subjects to IDs
ids = {}
scnt = 0

# load in all of our images
ind = 0
for fn in files:

    subject = fn.split('/')[3]
    if subject not in ids:
        ids[ subject ] = scnt
        scnt += 1
    label = ids[ subject ]

    data[ ind, :, : ] = np.array( Image.open( fn.rstrip() ) )
    labels[ ind ] = label
    ind += 1

# data is (13233, 250, 250)
# labels is (13233, 1)

</code>
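The script above gives you one label per image, but to train a similarity detector you also need pairs of images marked as same or different.  Below is a minimal sketch of one way to build such pairs from the ''data'' and ''labels'' arrays; the helper name ''make_pairs'', the 50/50 positive/negative balance, and the number of pairs are illustrative choices, not requirements of the assignment.

<code python>
import numpy as np

def make_pairs( data, labels, n_pairs=1000 ):
    # illustrative helper (not part of the assignment): build n_pairs
    # (left, right, same) training triples from the loaded images
    labels = labels.ravel().astype( int )

    # group image indices by subject id
    by_subject = {}
    for i, l in enumerate( labels ):
        by_subject.setdefault( l, [] ).append( i )
    # subjects with at least two images can form positive pairs
    multi = [ l for l, idxs in by_subject.items() if len( idxs ) > 1 ]

    left  = np.zeros(( n_pairs, 250, 250 ))
    right = np.zeros(( n_pairs, 250, 250 ))
    same  = np.zeros(( n_pairs, 1 ))

    for p in range( n_pairs ):
        if p % 2 == 0:
            # positive pair: two different images of the same subject
            l = np.random.choice( multi )
            i, j = np.random.choice( by_subject[ l ], 2, replace=False )
            same[ p ] = 1
        else:
            # negative pair: images of two different subjects
            i, j = np.random.choice( len( labels ), 2, replace=False )
            while labels[ i ] == labels[ j ]:
                i, j = np.random.choice( len( labels ), 2, replace=False )
            same[ p ] = 0
        left[ p ]  = data[ i ]
        right[ p ] = data[ j ]

    return left, right, same
</code>

Remember that your test pairs should come from images (ideally subjects) that never appear in your training pairs, so that the performance you report in your writeup reflects a real test/training split.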
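Finally, here is a rough sketch of what the overall graph might look like: a small residual tower applied to both images with shared weights, followed by a single logistic output.  It is written against the TensorFlow 1.x ''tf.layers'' API and is only meant to illustrate the weight sharing (''reuse=True'') and the residual shortcut; the layer sizes, the number of residual blocks, and the way the two embeddings are combined are assumptions you should replace with your own architecture.

<code python>
import tensorflow as tf

def res_block( x, filters ):
    # basic residual block: two 3x3 convolutions plus an identity shortcut
    # (assumes x already has `filters` channels)
    h = tf.layers.conv2d( x, filters, 3, padding='same', activation=tf.nn.relu )
    h = tf.layers.conv2d( h, filters, 3, padding='same' )
    return tf.nn.relu( h + x )

def tower( x, reuse ):
    # the shared tower; reuse=True ties the second image to the same weights
    with tf.variable_scope( 'tower', reuse=reuse ):
        h = tf.layers.conv2d( x, 16, 3, padding='same', activation=tf.nn.relu )
        h = res_block( h, 16 )
        h = tf.layers.max_pooling2d( h, 2, 2 )
        h = res_block( h, 16 )
        h = tf.reduce_mean( h, axis=[1, 2] )   # global average pooling
        return tf.layers.dense( h, 128 )       # embedding for one image

x1 = tf.placeholder( tf.float32, [None, 250, 250, 1] )
x2 = tf.placeholder( tf.float32, [None, 250, 250, 1] )
y  = tf.placeholder( tf.float32, [None, 1] )   # 1 = same person, 0 = different

f1 = tower( x1, reuse=False )
f2 = tower( x2, reuse=True )

# compare the two embeddings and squash to a probability of "same class"
logits = tf.layers.dense( tf.abs( f1 - f2 ), 1 )
prob_same = tf.nn.sigmoid( logits )

loss = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits( labels=y, logits=logits ) )
train_step = tf.train.AdamOptimizer( 1e-4 ).minimize( loss )

# write the graph out so TensorBoard can show the two shared towers
writer = tf.summary.FileWriter( './tb', graph=tf.get_default_graph() )
</code>

Feeding batches of image pairs (with an added channel dimension) into ''x1''/''x2'' and the same/different labels into ''y'' gives you the skeleton of a training loop; the TensorBoard graph written at the end is one way to produce the screenshot required above.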