----
====Description:====
For a video including some tips and tricks that can help with this lab: [[https://youtu.be/Ms19kgK_D8w|https://youtu.be/Ms19kgK_D8w]]
  
For this lab, you will implement a virtual radiologist. You are given
{{ :cs501r_f2016:screen_shot_2017-10-10_at_10.11.55_am.png?direct&200|}}
  
Use the "Deep Convolution U-Net" from this paper: [[https://arxiv.org/pdf/1505.04597.pdf|U-Net: Convolutional Networks for Biomedical Image Segmentation]] (see figure 1, replicated at the right). You should use existing pytorch functions (not your own Conv2D module), such as ''nn.Conv2d''; you will also need the pytorch functions ''torch.cat'' and ''nn.ConvTranspose2d''.
  
''torch.cat'' allows you to concatenate tensors. ''nn.ConvTranspose2d'' is the opposite of ''nn.Conv2d''. It is used to bring an image from low res to higher res. [[https://towardsdatascience.com/up-sampling-with-transposed-convolution-9ae4f2df52d0|This blog]] should help you understand this function in detail.
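To make the up-sampling path concrete, here is a minimal sketch of one U-Net expansion step using ''nn.ConvTranspose2d'' and ''torch.cat''. The channel counts and spatial sizes are illustrative, not the lab's required architecture:

```python
import torch
import torch.nn as nn

# One U-Net up-sampling step (sizes are illustrative):
# upsample the low-res feature map, then concatenate the matching
# skip connection from the contracting path along the channel dim.
up = nn.ConvTranspose2d(in_channels=128, out_channels=64, kernel_size=2, stride=2)
conv = nn.Conv2d(in_channels=128, out_channels=64, kernel_size=3, padding=1)

low_res = torch.randn(1, 128, 16, 16)  # features from the layer below
skip = torch.randn(1, 64, 32, 32)      # skip connection from the down path

upsampled = up(low_res)                       # (1, 64, 32, 32): doubled resolution
merged = torch.cat([skip, upsampled], dim=1)  # (1, 128, 32, 32): channels stack
out = conv(merged)                            # (1, 64, 32, 32)
```

Note that ''torch.cat'' requires the two tensors to match in every dimension except the one being concatenated, which is why U-Net pairs each up-sampling step with a same-resolution skip connection.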
  
<code python>
import torchvision
import os
import gzip
    img = self.dataset_folder[index]
    label = self.label_folder[index]
    return img[0], label[0][0]

  def __len__(self):
  
  def forward(self, input):
    conv1_out = self.conv1(input)
    relu2_out = self.relu2(conv1_out)
</code>
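The ''__getitem__''/''__len__'' pattern above is all ''DataLoader'' needs. Here is a self-contained sketch of the same pattern with random tensors standing in for the lab's cancer images (the class and variable names are illustrative, not the lab's actual loader):

```python
import torch
from torch.utils.data import Dataset, DataLoader

# Toy stand-in for the lab's dataset: random "images" and 0/1 pixel labels.
class ToySegmentationDataset(Dataset):
    def __init__(self, images, labels):
        self.images = images  # (N, 1, H, W) float tensor
        self.labels = labels  # (N, H, W) long tensor of per-pixel classes

    def __getitem__(self, index):
        # Return one (image, label-mask) pair, as in the lab snippet above.
        return self.images[index], self.labels[index]

    def __len__(self):
        return len(self.images)

images = torch.randn(8, 1, 64, 64)
labels = torch.randint(0, 2, (8, 64, 64))
loader = DataLoader(ToySegmentationDataset(images, labels), batch_size=4)
x, y = next(iter(loader))  # x: (4, 1, 64, 64), y: (4, 64, 64)
```

The ''DataLoader'' handles batching and shuffling for you; your network then consumes ''x'' and your loss compares the per-pixel predictions against ''y''.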
  
You are welcome (and encouraged) to use the built-in batch normalization and dropout layers.
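For example, a convolutional block with built-in batch norm and dropout can be composed with ''nn.Sequential'' like this (channel counts and the dropout rate are illustrative choices, not requirements):

```python
import torch
import torch.nn as nn

# Conv block with PyTorch's built-in batch normalization and dropout.
block = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=3, padding=1),
    nn.BatchNorm2d(32),   # normalizes each of the 32 channels over the batch
    nn.ReLU(),
    nn.Dropout2d(p=0.25), # zeroes whole feature maps during training
)
out = block(torch.randn(2, 1, 64, 64))  # (2, 32, 64, 64)
```

''nn.Dropout2d'' drops entire channels rather than individual pixels, which tends to work better for convolutional feature maps; remember that both layers behave differently in ''model.train()'' vs. ''model.eval()'' mode.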

Guessing that the pixel is not cancerous every single time will give you an accuracy of ~85%. Your trained network should be able to do better than that (but you will not be graded on accuracy). This is the result I got after 1 hour of training.

{{:cs501r_f2016:training_accuracy.png?400|}}
{{:cs501r_f2016:training_loss.png?400|}}
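To see where the ~85% baseline comes from, here is a sketch of per-pixel accuracy for an always-"not cancerous" predictor; the 15% positive-pixel rate is an assumed stand-in for the dataset's actual class balance:

```python
import torch

# Random labels with ~15% "cancerous" pixels (assumed rate for illustration).
labels = (torch.rand(4, 64, 64) < 0.15).long()
baseline = torch.zeros_like(labels)  # always predict class 0, "not cancerous"

# Per-pixel accuracy of the constant baseline: about 1 - 0.15 = 0.85.
baseline_acc = (baseline == labels).float().mean().item()

# For your network's logits of shape (N, 2, H, W), accuracy would be:
# preds = logits.argmax(dim=1)
# acc = (preds == labels).float().mean().item()
```

This is why raw accuracy is a weak metric for imbalanced segmentation: compare your trained network's accuracy against this constant-prediction floor, not against 50%.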
cs501r_f2018/lab4.1537841629.txt.gz · Last modified: 2021/06/30 23:40 (external edit)