# BYU CS classes

cs501r_f2017:lab5v2

### Objective:

To become more proficient in Tensorflow, to become more proficient in the construction of computation graphs and image classification, and to create and debug your first full DNN.

### Deliverable:

There are two parts to this lab:

1. You need to implement several helper functions
2. You need to create and train your own DNN

You should turn in an IPython notebook that shows a TensorBoard screenshot of your classifier's computation graph, as well as a visualization of classification accuracy (on a held-out test set) going up over time.

An example plot is shown to the right.

NOTE: You are welcome to look at code on the internet to help with this lab, including the tensorflow tutorials and code developed in class, but all code that you turn in must be your own work!

• 10% Proper creation of test/train split
• 30% Proper implementation of conv and fc helper functions
• 40% Design and implementation of classifier
• 20% Tidy and legible plots of computation graph and classification accuracy over time

### Description:

You now have all of the tools you need to become a real deep learning ninja – you understand the basics of vectorized code, computation graphs, automatic differentiation, convolutions, and optimization. We're now going to put all of those pieces together!

There are two parts to this lab.

Part 1: Create helper functions

To simplify your life now and in the future, your first task is to create two helper functions that will be used to create convolution and fully connected layers. The API for these two functions, along with a description of what they're supposed to do, is provided below.

The challenge of this part of the lab is ensuring that the sizes of everything match, and that variables are properly initialized. To do that, you will need to examine the size of the input tensor x. I used code like x.get_shape().as_list() to get a list of dimensions.
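To help with that shape bookkeeping: with padding="SAME", TensorFlow computes each spatial output dimension as ceil(input_size / stride). Here is a small pure-Python sketch for checking your expected sizes (conv_output_shape is a hypothetical checking helper of our own, not part of the lab's API):

```python
import math

def conv_output_shape(in_shape, filter_size=3, stride=2, num_filters=64):
    """Predict the output shape of a conv layer that uses padding="SAME".

    in_shape is [batch, height, width, channels], mirroring what
    x.get_shape().as_list() returns. With SAME padding, TensorFlow
    computes each spatial dimension as ceil(input / stride), regardless
    of filter_size.
    """
    batch, h, w, _ = in_shape
    return [batch, math.ceil(h / stride), math.ceil(w / stride), num_filters]

# A 32x32x3 CIFAR-10 image through a stride-2 conv halves the spatial size:
print(conv_output_shape([1, 32, 32, 3]))  # [1, 16, 16, 64]
```

Running your expected layer sizes through a helper like this before building the graph makes shape mismatch errors much easier to localize.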

You may also find the tf.nn.bias_add and tf.get_variable functions helpful.

If you use the tf.get_variable method of creating and initializing variables, you can specify an optional initializer for the variable. I used tf.contrib.layers.variance_scaling_initializer() with good results.

Make sure that when you call tf.nn.conv2d, you use the padding="SAME" option!

```python
def conv( x, filter_size=3, stride=2, num_filters=64, is_output=False, name="conv" ):
    '''
    x is an input tensor
    Declare a name scope using the "name" parameter
    Within that scope:
        Create a W filter variable with the proper size
        Create a B bias variable with the proper size
        Convolve x with W by calling the tf.nn.conv2d function
        If is_output is False,
            Call the tf.nn.relu function
    Return the final op
    '''
    pass

def fc( x, out_size=50, is_output=False, name="fc" ):
    '''
    x is an input tensor
    Declare a name scope using the "name" parameter
    Within that scope:
        Create a W filter variable with the proper size
        Create a B bias variable with the proper size
        Multiply x by W and add b
        If is_output is False,
            Call the tf.nn.relu function
    Return the final op
    '''
    pass
```
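For reference, here is one possible shape-correct sketch of the conv helper — not the official solution, and your own implementation may differ. It is written against the TensorFlow 1.x API via tf.compat.v1 so it still runs on TensorFlow 2, and uses tf.variance_scaling_initializer in place of the tf.contrib version mentioned above (tf.contrib no longer exists in newer releases):

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()  # graph-mode, matching the 2017-era API

def conv(x, filter_size=3, stride=2, num_filters=64, is_output=False, name="conv"):
    with tf.variable_scope(name):
        # Infer the input channel count from x, as suggested above.
        in_channels = x.get_shape().as_list()[-1]
        # W maps in_channels -> num_filters with a filter_size x filter_size kernel.
        W = tf.get_variable("W", [filter_size, filter_size, in_channels, num_filters],
                            initializer=tf.variance_scaling_initializer())
        b = tf.get_variable("b", [num_filters], initializer=tf.zeros_initializer())
        out = tf.nn.conv2d(x, W, strides=[1, stride, stride, 1], padding="SAME")
        out = tf.nn.bias_add(out, b)
        if not is_output:
            out = tf.nn.relu(out)
        return out

# With padding="SAME" and stride 2, a [1,32,32,3] input becomes [1,16,16,64]:
with tf.Graph().as_default():
    x = tf.placeholder(tf.float32, [1, 32, 32, 3])
    h = conv(x, name="h0")
    print(h.get_shape().as_list())  # [1, 16, 16, 64]
```

The fc helper follows the same pattern, except W is a 2-D [in_size, out_size] matrix and the multiply is tf.matmul rather than tf.nn.conv2d.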

Given these helper functions, we can define a DNN as follows:

```python
input_data = tf.placeholder( tf.float32, [1,32,32,3] )
h0 = conv( input_data, name="h0" )
h1 = conv( h0, name="h1" )
h2 = conv( h1, name="h2" )
...
```

etc.

Part 2: Implement a DNN on the CIFAR-10 dataset

This is it! Now it's time to implement your first DNN. Using the helper functions defined previously, you now need to implement an image classifier, end-to-end.

The steps for completion of this lab are:

1. Load as much of the data as you can into RAM. Create an 80/20 training/test split.
2. Use tensorflow to create your DNN.
   1. Using the conv and fc layers you defined previously, create a computation graph to classify images.
   2. This graph should have at least two convolution layers and two fully connected layers.
   3. You may pick the number of filters in each convolution layer, and the size of the fully connected layers. Typically, there are about 64 filters in a convolution layer, and about 256 neurons in the first fully connected layer and 64 in the second.
   4. You should use the cross entropy loss function. I implemented this using tf.nn.sparse_softmax_cross_entropy_with_logits. Check the documentation for details.
3. Train the network using an optimizer of your choice.
   1. You might as well use the Adam optimizer.
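The 80/20 split in step 1 can be sketched with NumPy. The function name, the seed argument, and the toy data below are our own conventions, not prescribed by the lab:

```python
import numpy as np

def train_test_split(images, labels, train_frac=0.8, seed=0):
    """Shuffle the dataset once, then slice off the first 80% for training.

    images: [N, 32, 32, 3] array; labels: [N] array. Fixing the seed keeps
    the split reproducible between runs.
    """
    n = images.shape[0]
    idx = np.random.RandomState(seed).permutation(n)
    cut = int(n * train_frac)
    train_idx, test_idx = idx[:cut], idx[cut:]
    return images[train_idx], labels[train_idx], images[test_idx], labels[test_idx]

# Toy data standing in for CIFAR-10:
imgs = np.zeros((100, 32, 32, 3), dtype=np.float32)
lbls = np.arange(100)
tr_x, tr_y, te_x, te_y = train_test_split(imgs, lbls)
print(tr_x.shape[0], te_x.shape[0])  # 80 20
```

Shuffling before slicing matters: CIFAR-10 files are not guaranteed to be in random order, and a split of unshuffled data can leave some classes underrepresented in the test set.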

For now, you are welcome to assume a batch size of 1, but if you're feeling adventurous, see if you can code your computation graph to support arbitrarily large batches.

Your input placeholders will need to be four-dimensional, i.e., [1,32,32,3]. This is required by the convolution function. (The "1" is the batch size.)
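If you do want to support arbitrary batch sizes, TensorFlow placeholders accept None as the batch dimension, meaning "any size is allowed at feed time." Shown here via tf.compat.v1 so it runs on TensorFlow 2; the 2017-era code would call tf.placeholder directly:

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

# Fixed batch of 1, as in the lab text:
x_single = tf.placeholder(tf.float32, [1, 32, 32, 3])

# Arbitrary batch size: None leaves the first dimension unconstrained,
# so you can feed 1 image or 128 without rebuilding the graph.
x_batch = tf.placeholder(tf.float32, [None, 32, 32, 3])
print(x_batch.get_shape().as_list())  # [None, 32, 32, 3]
```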

It is a little bit tricky to transition from the convolution layers to the fully connected layers. The most common way to accomplish this is to use tf.reshape to "flatten" the tensor.
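The flattening step can be sanity-checked in NumPy, whose reshape has the same -1 size-inference semantics as tf.reshape. The 8x8x64 shape below is a hypothetical last-conv-layer output, not something the lab prescribes:

```python
import numpy as np

# Stand-in for the last conv activation: batch of 1, 8x8 spatial, 64 filters.
conv_out = np.zeros((1, 8, 8, 64), dtype=np.float32)

# Flatten everything except the batch dimension. In TensorFlow the
# equivalent is tf.reshape(conv_out, [1, -1]); the -1 asks for that
# dimension's size (here 8*8*64 = 4096) to be inferred automatically.
flat = conv_out.reshape(1, -1)
print(flat.shape)  # (1, 4096)
```

The resulting [1, 4096] tensor is then a valid input to your first fc layer.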

You are welcome (and encouraged!) to see what happens as you add more and more layers!

### Hints:

The Tensorflow documentation is quite helpful. A few things that you might need:

• Use tf.nn.relu to create a relu layer.
• Variable initialization matters. If your classifier seems stuck at 10% or 11% accuracy, make sure you're not initializing to all zeros!
• If you're having trouble debugging your DNN, first make sure that you can overfit on one image – you should be able to achieve a loss of very close to 0.
• You will need to specify a learning rate. This is likely topology dependent. Try values like 0.001, 0.01, or 0.1.