====Description:====
For this lab, you should implement the style transfer algorithm referenced above. We are providing the following, [[https://www.dropbox.com/sh/tt0ctms12aumgui/AACRKSSof6kw-wi8vs1v8ls3a?dl=0|available from a dropbox folder]]:

  - lab10_scaffold.py - Lab 10 scaffolding code
  - vgg16.py - The VGG16 model
  - content.png - An example content image
  - style.png - An example style image

You will also need the VGG16 pre-trained weights:
  - [[http://liftothers.org/byu/vgg16_weights.npz|VGG16 weights]]
In the scaffolding code, you will find some examples of how to use the provided VGG model. (This model is a slightly modified version of [[https://www.cs.toronto.edu/~frossard/post/vgg16/|code available here]].)
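As a reminder of the core computation you'll build on top of those VGG activations: the style representation in the style transfer paper is the Gram matrix of a layer's feature maps, and the style loss compares Gram matrices between the style image and the generated image. Here is a minimal NumPy sketch of just that piece (the `(h, w, c)` activation shape and the random stand-in features are assumptions for illustration, not the provided vgg16.py API):

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a (height, width, channels) activation map.

    Captures channel-to-channel correlations of the features, which is
    the style representation used in the style transfer algorithm.
    """
    h, w, c = features.shape
    f = features.reshape(h * w, c)   # flatten spatial dimensions
    return f.T @ f / (h * w)         # (c, c), normalized by map size

def style_loss(gram_style, gram_generated):
    """Mean squared difference between two Gram matrices."""
    return np.mean((gram_style - gram_generated) ** 2)

# Random activations standing in for one VGG16 layer's features
rng = np.random.default_rng(0)
feats = rng.standard_normal((7, 7, 16))
g = gram_matrix(feats)               # symmetric (16, 16) matrix
```

In your implementation you would sum `style_loss` over several VGG layers (weighted), and add the content loss, which compares raw activations directly rather than Gram matrices.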
I found that it was important to clip pixel values to be in [0,255]. To do that, every 100 iterations I extracted the image, clipped it, and then assigned it back in.
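The extract-clip-assign step can be sketched like this; the NumPy array here is a hypothetical stand-in for the optimization variable (in the lab you would read it out of and assign it back into your TensorFlow session):

```python
import numpy as np

# Stand-in for the image being optimized; optimization can push values
# well outside the valid pixel range.
rng = np.random.default_rng(0)
image = rng.standard_normal((224, 224, 3)) * 300.0

# Every 100 iterations: extract the image, clip it, assign it back.
image = np.clip(image, 0.0, 255.0)
```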
**...although now that I think about it, perhaps I should have been operating on whitened images from the beginning! You should probably try that.**
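If you do try that, one simple form of whitening is to shift the image to zero mean and scale it to unit variance, remembering the statistics so you can map the result back to pixel space for display. The helper names below are hypothetical, just one way the suggestion could look:

```python
import numpy as np

def whiten(image):
    """Zero-mean, unit-variance version of an image (per-image whitening)."""
    mean, std = image.mean(), image.std() + 1e-8  # epsilon avoids divide-by-zero
    return (image - mean) / std, mean, std

def unwhiten(z, mean, std):
    """Invert the transform to recover pixel-space values for display."""
    return z * std + mean

rng = np.random.default_rng(1)
img = rng.uniform(0.0, 255.0, size=(64, 64, 3))
z, mean, std = whiten(img)
```

Optimizing `z` instead of raw pixels keeps the variable in a well-scaled range, which tends to play more nicely with gradient descent than values spanning [0,255].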