cs501r_f2017:lab5v2 [2017/09/28 16:45] wingated → [2021/06/30 23:42] (current)
====Deliverable:====
There are two parts to this lab:
  - You need to implement several helper functions
  - You need to create and train your own DNN
You should turn in an IPython notebook that shows a TensorBoard screenshot of your classifier's computation graph, as well as a visualization of classification accuracy (on a held-out test set) going up over time.
<code python>
def fc( x, out_size=50, is_output=False, name="fc" ):
    '''
    x is an input tensor
    '''
</code>
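The excerpt above only shows the helper's signature. As a rough sketch of what such a fully connected helper computes — the weight shapes, the small-Gaussian initializer, and the "ReLU unless it's the output layer" convention are assumptions for illustration, written in plain NumPy rather than TensorFlow:

```python
import numpy as np

def fc(x, out_size=50, is_output=False, name="fc"):
    # x: (batch, in_size). In the TensorFlow version, W and b would be
    # created once (e.g. with tf.get_variable inside a scope named `name`)
    # rather than re-sampled on every call as they are in this sketch.
    in_size = x.shape[1]
    rng = np.random.default_rng(0)
    W = rng.normal(0.0, 0.1, size=(in_size, out_size))
    b = np.zeros(out_size)
    h = x @ W + b
    # Hidden layers get a nonlinearity; the output layer returns raw
    # logits so a softmax cross-entropy loss can be applied to them.
    return h if is_output else np.maximum(h, 0.0)
```

The `is_output` flag matters because the loss function expects unnormalized logits; applying a ReLU (or softmax) to the final layer yourself would double-transform them.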
  - This graph should have at least two convolution layers and two fully connected layers.
  - You may pick the number of filters in each convolution layer, and the size of the fully connected layers. Typically, there are about 64 filters in a convolution layer, and about 256 neurons in the first fully connected layer and 64 in the second.
  - You should use the cross entropy loss function. I implemented this using ''tf.nn.sparse_softmax_cross_entropy_with_logits''. Check the documentation for details.
  - Train the network using an optimizer of your choice
  - You might as well use the Adam optimizer
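To make the loss concrete: ''tf.nn.sparse_softmax_cross_entropy_with_logits'' takes unnormalized logits plus integer class labels, applies a softmax, and returns the negative log-probability of each example's true class. A minimal NumPy sketch of that computation (the function name and the max-subtraction stability trick are illustrative, not TensorFlow's internals):

```python
import numpy as np

def sparse_softmax_cross_entropy(logits, labels):
    # logits: (batch, num_classes); labels: (batch,) integer class ids.
    # Subtract each row's max before exponentiating for numerical stability.
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    # Loss per example = negative log-probability of the true class.
    return -log_probs[np.arange(len(labels)), labels]
```

Note that it consumes raw logits: your final fully connected layer should not apply a softmax itself, or the softmax gets applied twice.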