cs501r_f2016:lab13: comparing revisions 2017/11/10 21:04 and 2017/11/11 17:10, both by wingated.
====Description:====
  
For this lab, you should implement the style transfer algorithm referenced above.  We are providing the following, [[https://www.dropbox.com/sh/tt0ctms12aumgui/AACRKSSof6kw-wi8vs1v8ls3a?dl=0|available from a dropbox folder]]:

  - lab10_scaffold.py - Lab 10 scaffolding code
  - vgg16.py - The VGG16 model
  - content.png - An example content image
  - style.png - An example style image

You will also need the VGG16 pre-trained weights:

  - [[http://liftothers.org/byu/vgg16_weights.npz|VGG16 weights]]
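The weights file is a NumPy ''.npz'' archive, so you can inspect its parameter names and shapes before wiring them into the model.  Here is a minimal sketch using a tiny stand-in archive (the key names, e.g. ''conv1_1_W'', are an assumption; check the names in your downloaded copy):

```python
import io
import numpy as np

# Stand-in for vgg16_weights.npz: a tiny in-memory archive with the same
# structure.  The real file maps parameter names to weight tensors.
buf = io.BytesIO()
np.savez(buf,
         conv1_1_W=np.zeros((3, 3, 3, 64), dtype=np.float32),  # conv kernel
         conv1_1_b=np.zeros(64, dtype=np.float32))             # conv bias
buf.seek(0)

weights = np.load(buf)
for name in sorted(weights.files):
    print(name, weights[name].shape)
```

With the real file you would pass the filename to ''np.load'' instead of an in-memory buffer.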
  
In the scaffolding code, you will find some examples of how to use the provided VGG model.  (This model is a slightly modified version of [[https://www.cs.toronto.edu/~frossard/post/vgg16/|code available here]]).
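If the algorithm referenced above is the usual Gatys-style formulation, the style loss compares Gram matrices of VGG feature maps between the style image and the generated image.  A minimal NumPy sketch of that computation (the feature-map shape here is illustrative, not taken from the scaffold):

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of one layer's activations.

    features: array of shape (H, W, C)
    Returns a (C, C) matrix of channel-wise correlations.
    """
    h, w, c = features.shape
    f = features.reshape(h * w, c)  # flatten the spatial dimensions
    return f.T @ f                  # (C, C)

# Tiny synthetic feature map, just to show the shapes involved
feats = np.random.randn(4, 4, 8)
g = gram_matrix(feats)
print(g.shape)  # (8, 8)
```

The style loss is then a (possibly weighted) sum of squared differences between the Gram matrices of the two images at several layers.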
  
I found that it was important to clip pixel values to be in [0,255].  To do that, every 100 iterations I extracted the image, clipped it, and then assigned it back in.
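Here is a sketch of that extract/clip/assign loop in plain NumPy (in the TensorFlow scaffold you would read the image variable out of the session, clip it, and assign it back; the iteration count and image shape below are illustrative):

```python
import numpy as np

# Pretend this is the image being optimized; its values drift out of range.
img = np.random.randn(64, 64, 3) * 300.0

for it in range(1, 501):
    # ... one optimization step on img would go here ...
    if it % 100 == 0:
        # Pull pixels back into displayable range every 100 iterations
        img = np.clip(img, 0.0, 255.0)

print(img.min() >= 0.0 and img.max() <= 255.0)  # True
```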
...although now that I think about it, perhaps I should have been operating on whitened images from the beginning!  You should probably try that.
----
====Bonus:====

There's no official extra credit for this lab, but have some fun with it!  Try different content and different styles.  See if you can get nicer, higher-resolution images out of it.
Also, take a look at the vgg16.py code.  What happens if you swap out max pooling for average pooling?
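In the TensorFlow model that swap is typically a one-line change (''tf.nn.max_pool'' to ''tf.nn.avg_pool'').  The numerical difference between the two is easy to see in a small NumPy sketch:

```python
import numpy as np

def pool2x2(x, op):
    """2x2, stride-2 pooling over a (H, W) map with the given reduction."""
    h, w = x.shape
    blocks = x.reshape(h // 2, 2, w // 2, 2)  # group into 2x2 windows
    return op(blocks, axis=(1, 3))            # reduce each window

x = np.array([[1., 2., 5., 6.],
              [3., 4., 7., 8.],
              [0., 0., 1., 1.],
              [0., 4., 1., 1.]])

print(pool2x2(x, np.max))   # keeps only the peak in each window
print(pool2x2(x, np.mean))  # smooths each window instead
```

Max pooling propagates only the strongest activation per window, while average pooling blends the whole window, which tends to give smoother gradients with respect to the input image.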
What difference does whitening the input images make?
Show me the awesome results you can generate!
  
cs501r_f2016/lab13.txt · Last modified: 2021/06/30 23:42 (external edit)