cs501r_f2018:lab4

----
====Description:====
For a video including some tips and tricks that can help with this lab: [[https://youtu.be/Ms19kgK_D8w|https://youtu.be/Ms19kgK_D8w]]
  
For this lab, you will implement a virtual radiologist. You are given
{{ :cs501r_f2016:screen_shot_2017-10-10_at_10.11.55_am.png?direct&200|}}
  
Use the "Deep Convolution U-Net" from this paper: [[https://arxiv.org/pdf/1505.04597.pdf|U-Net: Convolutional Networks for Biomedical Image Segmentation]] (see Figure 1, replicated at the right). You should use existing pytorch functions (not your own Conv2D module), such as ''nn.Conv2d''; you will also need the pytorch functions ''torch.cat'' and ''nn.ConvTranspose2d''.
  
''torch.cat'' allows you to concatenate tensors. ''nn.ConvTranspose2d'' is the opposite of ''nn.Conv2d''. It is used to bring an image from low res to higher res. [[https://towardsdatascience.com/up-sampling-with-transposed-convolution-9ae4f2df52d0|This blog]] should help you understand this function in detail.
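As a small illustration (not part of the lab starter code), here is how ''nn.ConvTranspose2d'' can double a feature map's resolution and ''torch.cat'' can then attach a skip connection along the channel dimension. The tensor shapes are made up for this sketch:

<code python>
import torch
import torch.nn as nn

# fake feature maps, just to illustrate the shapes involved
low_res = torch.randn(1, 128, 32, 32)   # deeper layer: many channels, low resolution
skip    = torch.randn(1, 64, 64, 64)    # earlier layer: fewer channels, higher resolution

# the transposed convolution doubles the spatial size (and here halves the channels)
up = nn.ConvTranspose2d(128, 64, kernel_size=2, stride=2)
up_out = up(low_res)                          # shape: (1, 64, 64, 64)

# concatenate along the channel dimension (dim=1) to form the U-Net skip connection
merged = torch.cat([up_out, skip], dim=1)     # shape: (1, 128, 64, 64)
print(up_out.shape, merged.shape)
</code>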
====Hints:====
  
The intention of this lab is to learn how to make deep neural nets and implement loss functions. Therefore, we'll help you with the implementation of the Dataset. This code will download the dataset for you so that you are ready to use it and can focus on network implementation, losses, and accuracies.
  
<code python>
import torch
import torchvision
import os
import gzip
import tarfile
import gc
from torch.utils.data import Dataset
from IPython.core.ultratb import AutoFormattedTB
__ITB__ = AutoFormattedTB(mode='Verbose', color_scheme='LightBg', tb_offset=1)

class CancerDataset(Dataset):
  def __init__(self, root, download=True, size=512, train=True):
    ...  # (dataset download and folder setup code elided)

  def __getitem__(self, index):
    img = self.dataset_folder[index]
    label = self.label_folder[index]
    return img[0], label[0][0]

  def __len__(self):
    return len(self.dataset_folder)
</code>
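Once the class above is defined, it behaves like any other pytorch dataset. A minimal usage sketch, where the root path and batch size are placeholders of my choosing rather than lab requirements:

<code python>
from torch.utils.data import DataLoader

# '/tmp/cancer_data' and batch_size=4 are placeholders; pick whatever suits your setup
train_dataset = CancerDataset('/tmp/cancer_data', download=True, train=True)
val_dataset   = CancerDataset('/tmp/cancer_data', download=True, train=False)

# small batches keep GPU memory manageable for 512x512 images
train_loader = DataLoader(train_dataset, batch_size=4, shuffle=True, pin_memory=True)
val_loader   = DataLoader(val_dataset, batch_size=4, shuffle=False, pin_memory=True)
</code>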
  
You are welcome to resize your input images, although don't make them
down to 512x512.
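If you do decide to resize, one option (an illustration, not a lab requirement) is ''torch.nn.functional.interpolate'', using nearest-neighbor interpolation for the label maps so the integer class indices stay intact:

<code python>
import torch
import torch.nn.functional as F

x = torch.randn(4, 3, 512, 512)          # a batch of input images
y = torch.randint(0, 2, (4, 512, 512))   # matching per-pixel class labels

# bilinear for images; nearest neighbor for label maps so classes stay 0/1
x_small = F.interpolate(x, size=(256, 256), mode='bilinear', align_corners=False)
y_small = F.interpolate(y.unsqueeze(1).float(), size=(256, 256), mode='nearest')
y_small = y_small.squeeze(1).long()
print(x_small.shape, y_small.shape)      # (4, 3, 256, 256) and (4, 256, 256)
</code>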
  
You will need to add some lines of code for memory management:
<code python>
def scope():
  try:
    # your code for calling dataset and dataloader

    gc.collect()
    print(torch.cuda.memory_allocated(0) / 1e9)

    # for epochs:
    #   call your model, loss, and accuracy

  except:
    __ITB__()

scope()
</code>
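As a rough sketch of how the commented-out parts of ''scope()'' might be filled in: the network class ''UNet'', the optimizer, and the hyperparameters below are assumptions for illustration, not starter code, and it assumes ''train_dataset'' and the imports from the earlier block are in scope. Because the labels are per-pixel class indices, ''nn.CrossEntropyLoss'' over two classes is one natural choice:

<code python>
def scope():
  try:
    train_loader = DataLoader(train_dataset, batch_size=4, shuffle=True, pin_memory=True)

    model = UNet().cuda()          # hypothetical: your U-Net whose final layer emits 2 channels
    objective = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

    gc.collect()
    print(torch.cuda.memory_allocated(0) / 1e9)

    for epoch in range(10):
      for x, y_truth in train_loader:
        x, y_truth = x.cuda(), y_truth.cuda().long()

        optimizer.zero_grad()
        y_hat = model(x)                      # (batch, 2, 512, 512) class scores
        loss = objective(y_hat, y_truth)      # per-pixel cross entropy
        loss.backward()
        optimizer.step()

  except:
    __ITB__()

scope()
</code>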
Since you will be using the output of one network layer in two places (convolution and max pooling), you can't use ''nn.Sequential''. Instead, you will write up the network with ordinary variable assignments, as in the example shown below:
<code python>
class CancerDetection(nn.Module):
  def __init__(self):
    super(CancerDetection, self).__init__()

    self.conv1 = nn.Conv2d(3, 64, kernel_size=3, stride=1, padding=1)
    self.relu2 = nn.ReLU()
    self.conv3 = nn.Conv2d(64, 128, kernel_size=3, stride=1, padding=1)
    self.relu4 = nn.ReLU()

  def forward(self, input):
    conv1_out = self.conv1(input)
    relu2_out = self.relu2(conv1_out)
    conv3_out = self.conv3(relu2_out)
    relu4_out = self.relu4(conv3_out)
    return relu4_out
</code>
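The block above only shows the assignment style; it does not yet reuse any activation. Here is a hedged sketch of what "using one output in two places" looks like in a forward pass, where the layer names and channel counts are illustrative choices of mine, not the required architecture:

<code python>
import torch
import torch.nn as nn

class TinyUNetBlock(nn.Module):
  def __init__(self):
    super(TinyUNetBlock, self).__init__()
    self.conv_down = nn.Conv2d(3, 64, kernel_size=3, stride=1, padding=1)
    self.pool = nn.MaxPool2d(2)
    self.conv_bottom = nn.Conv2d(64, 64, kernel_size=3, stride=1, padding=1)
    self.up = nn.ConvTranspose2d(64, 64, kernel_size=2, stride=2)
    self.conv_up = nn.Conv2d(128, 2, kernel_size=3, stride=1, padding=1)

  def forward(self, x):
    skip = torch.relu(self.conv_down(x))    # used twice: pooled AND saved for the skip
    down = self.pool(skip)
    bottom = torch.relu(self.conv_bottom(down))
    up = self.up(bottom)                    # back up to the skip's resolution
    merged = torch.cat([up, skip], dim=1)   # channel-wise concatenation
    return self.conv_up(merged)             # 2 output channels: cancerous / not cancerous
</code>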
You are welcome (and encouraged) to use the built-in batch normalization and dropout layers.
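For example, a single convolution block with both added might look like the following; where to place them and what dropout probability to use are choices left to you, so this is just one possible arrangement:

<code python>
import torch
import torch.nn as nn

conv = nn.Conv2d(64, 128, kernel_size=3, stride=1, padding=1)
bn = nn.BatchNorm2d(128)      # normalizes each channel over the batch
relu = nn.ReLU()
drop = nn.Dropout2d(p=0.25)   # zeros out whole feature maps at random during training

x = torch.randn(2, 64, 32, 32)
out = drop(relu(bn(conv(x))))
print(out.shape)              # torch.Size([2, 128, 32, 32])
</code>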
Guessing that the pixel is not cancerous every single time will give you an accuracy of ~85%. Your trained network should be able to do better than that (but you will not be graded on accuracy). These are the results I got after 1 hour of training:
{{:cs501r_f2016:training_accuracy.png?400|}}
{{:cs501r_f2016:training_loss.png?400|}}
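A minimal sketch of one way to compute the per-pixel accuracy discussed above, assuming the network outputs two channels of class scores per pixel and the labels are integer class indices (the tensors here are random, just to show the shapes):

<code python>
import torch

def pixel_accuracy(y_hat, y_truth):
    # y_hat: (batch, 2, H, W) class scores; y_truth: (batch, H, W) integer labels
    preds = y_hat.argmax(dim=1)                    # pick the higher-scoring class per pixel
    return (preds == y_truth).float().mean().item()

scores = torch.randn(4, 2, 512, 512)
labels = torch.randint(0, 2, (4, 512, 512))
print(pixel_accuracy(scores, labels))
</code>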