=====BYU CS 501R - Deep Learning: Theory and Practice - Lab 6=====


====Objective:====
  
To build a dense prediction model, to begin to read current papers in DNN research, to experiment with different
DNN topologies, and to experiment with different regularization techniques.
  
----
====Deliverable:====

{{ :cs501r_f2016:pos_test_000072.png?direct&200|}}
{{ :cs501r_f2016:pos_test_000072_output.png?direct&200|}}
  
For this lab, you will turn in a report that describes your efforts at
building a dense prediction model.  You should turn in a
notebook or PDF writeup that describes your (1) topology, (2) cost
function, (3) method of calculating accuracy, and (4) results with
experimenting with regularization.  You should also report on how much of the data you used.
  
Your notebook / writeup should also include an image that
shows the dense prediction produced by your network on the
''pos_test_000072.png'' image.  This is an image in the test set that
your network will not have seen before.  This image, and the ground truth labeling, are shown at the right (and are contained in the downloadable dataset below).
  
----
====Grading standards:====
  * 30% Proper design, creation and debugging of a dense prediction network
  * 30% Proper design of a loss function and test set accuracy measure
  * 20% Proper experimentation with two different regularizers
  * 20% Tidy visualization of the output of your dense predictor
  
----
====Data set:====

{{ :cs501r_f2016:pos_train_000200.png?direct&200|}}
{{ :cs501r_f2016:pos_train_000200_output.png?direct&200|}}
  
The data is given as a set of 1024x1024 PNG images.  Each input image
(in the ''inputs'' directory) is an RGB image of a section of tissue,
and there is a file with the same name (in the ''outputs'' directory) that
has a dense labeling of whether or not a section of tissue is
cancerous (white pixels mean "cancerous", while black pixels mean "not
cancerous").
The data has been pre-split for you into test and training splits.
Filenames also reflect whether or not the image has any cancer at all
(files starting with ''pos_'' have some cancerous pixels, while files
starting with ''neg_'' have no cancer anywhere).  All of the data is
hand-labeled, so the dataset is not very large.  That means that
overfitting is a real possibility.
  
[[http://liftothers.org/cancer_data.tar.gz|The data can be downloaded here.]] //Please note that this dataset is not publicly available, and should not be redistributed.//

As in the previous lab, you are welcome to sub-sample the data if your computer is not powerful enough to fit it all in RAM.  However, if you do, please clearly report how much data you used in your final report.
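
As a sketch of the layout described above (assuming the archive unpacks into ''inputs'' and ''outputs'' directories with matching file names; adjust the paths for your setup), one way to pair each input image with its labeling is:

<code python>
# Rough sketch only -- assumes inputs/ and outputs/ directories with matching
# file names, as described above; adjust to however you unpack the archive.
import os

input_dir, output_dir = 'inputs', 'outputs'

pairs = []
for fname in sorted(os.listdir(input_dir)):
    if not fname.endswith('.png'):
        continue
    has_cancer = fname.startswith('pos_')   # neg_ files contain no cancerous pixels
    pairs.append((os.path.join(input_dir, fname),    # RGB tissue image
                  os.path.join(output_dir, fname),   # dense label image
                  has_cancer))
</code>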
  
----
====Description:====
  
**Part 1a: Implement your network topology**

{{ :cs501r_f2016:screen_shot_2017-10-10_at_10.11.55_am.png?direct&200|}}
  
Like the previous lab, you must choose your topology.  I have had good
luck implementing the "Deep Convolution U-Net" from this paper: [[https://arxiv.org/pdf/1505.04597.pdf|U-Net: Convolutional Networks for Biomedical Image Segmentation]] (see figure 1, replicated at the right).  This should be fairly easy to implement given the
''conv'' helper functions that you implemented previously; you
may also need the tensorflow function ''tf.concat''.
  
//Note that the simplest network you could implement (with all the desired properties) is just a single convolution layer with two filters and no relu!  Why is that?  (Of course it wouldn't work very well!)//
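
To make the shape bookkeeping concrete, here is a minimal sketch of a U-Net-style topology.  It is not the full architecture from the paper: it assumes TensorFlow 1.x, uses ''tf.layers'' in place of your own ''conv'' helper, and the depth and filter counts are illustrative only.

<code python>
import tensorflow as tf

def tiny_unet(x):
    # x: [batch, 512, 512, 3] float32 images
    c1 = tf.layers.conv2d(x,  16, 3, padding='same', activation=tf.nn.relu)
    p1 = tf.layers.max_pooling2d(c1, 2, 2)                  # 512 -> 256
    c2 = tf.layers.conv2d(p1, 32, 3, padding='same', activation=tf.nn.relu)
    p2 = tf.layers.max_pooling2d(c2, 2, 2)                  # 256 -> 128
    c3 = tf.layers.conv2d(p2, 64, 3, padding='same', activation=tf.nn.relu)

    u2 = tf.layers.conv2d_transpose(c3, 32, 2, strides=2)   # 128 -> 256
    u2 = tf.concat([u2, c2], axis=3)                        # skip connection
    c4 = tf.layers.conv2d(u2, 32, 3, padding='same', activation=tf.nn.relu)

    u1 = tf.layers.conv2d_transpose(c4, 16, 2, strides=2)   # 256 -> 512
    u1 = tf.concat([u1, c1], axis=3)                        # skip connection
    c5 = tf.layers.conv2d(u1, 16, 3, padding='same', activation=tf.nn.relu)

    # two output channels: per-pixel scores for "not cancerous" / "cancerous"
    return tf.layers.conv2d(c5, 2, 1, padding='same')
</code>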
  
**Part 1b: Implement a cost function**
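
One possible formulation (a hedged sketch, not a required design): if your network emits two logits per pixel, as in the sketch above, and your labels are a per-pixel class index, then a per-pixel softmax cross-entropy loss and a pixel-wise accuracy could be computed as follows.  The tensor names and shapes below are assumptions for illustration.

<code python>
import tensorflow as tf

# Hypothetical tensors: logits is [batch, H, W, 2], labels is [batch, H, W] int32.
logits = tf.placeholder(tf.float32, [None, 512, 512, 2])
labels = tf.placeholder(tf.int32,   [None, 512, 512])

# Average the per-pixel cross-entropy over every pixel in the batch.
loss = tf.reduce_mean(
    tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits))

# Accuracy: fraction of pixels whose argmax class matches the label.
predictions = tf.cast(tf.argmax(logits, axis=3), tf.int32)
accuracy = tf.reduce_mean(tf.cast(tf.equal(predictions, labels), tf.float32))
</code>
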

You may want to resize the images down to 512x512.
  
I used the ''scikit-image'' package to handle all of my image IO and
resizing.  **NOTE: be careful about data types!** When you first load
an image using ''skimage.io.imread'', it returns a tensor with ''uint8''
pixels in the range of [0,255].  However, after using
''skimage.transform.resize'', the result is an image with ''float32''
entries in [0,1].

Don't forget to whiten your data.  And remember that if your data is stored as a numpy array, be careful about the data type: if you try to whiten it while it is still a ''uint8'', bad things will happen.
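
For example, loading, resizing, and whitening a single image might look like the sketch below (the file name is illustrative, and per-image whitening is just one reasonable choice):

<code python>
import numpy as np
from skimage.io import imread
from skimage.transform import resize

img = imread('inputs/pos_train_000200.png')   # uint8, values in [0, 255]
img = resize(img, (512, 512))                 # now a float image with values in [0, 1]
img = img.astype(np.float32)

# Whiten only after leaving uint8, or the subtraction will wrap around.
img = (img - img.mean()) / (img.std() + 1e-8)
</code>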
  
You are welcome (and encouraged) to use the built-in tensorflow
dropout layer.
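
For instance, a sketch assuming TensorFlow 1.x, with a boolean placeholder so that dropout is active only during training:

<code python>
import tensorflow as tf

is_training = tf.placeholder(tf.bool)                  # feed True while training, False at test time
h = tf.placeholder(tf.float32, [None, 512, 512, 16])   # stand-in for some hidden activation
h_dropped = tf.layers.dropout(h, rate=0.5, training=is_training)
</code>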
  