
Objective:

To explore deeper networks, to leverage convolutions, and to visualize training with Tensorboard.


Deliverable:

For this lab, you will need to perform two steps:

  1. You need to implement the Deep MNIST for Experts tutorial.
  2. You need to modify the tutorial code to deliver visualizations via Tensorboard.

Specifically, you should turn in an IPython notebook that shows two images:

  1. A Tensorboard image showing your classification accuracy over time
  2. A Tensorboard image showing your (expanded) computation graph


According to the tutorial, if you run for 20,000 iterations, the final accuracy of your classifier will be around 99.5%. To make your life simpler, you only need to run for 1500 iterations.


Grading standards:

Your notebook will be graded on the following:

  • 40% Correct multilayer convolutional network defined and working
  • 30% Tidy and legible display of Tensorboard accuracy
  • 30% Tidy and legible display of Tensorboard computation graph

Description:

You now understand the basics of multi-layer neural networks. Here, we'll expand your toolkit by adding convolutions, a bit of dropout, and a new optimization method. Most of these will be explained in future lectures, so for now we will just use them without (fully) understanding them.

Part 1: implement deep convolutional networks

For this lab, you must implement the Deep MNIST for Experts tutorial. This is mostly cutting-and-pasting code; since you already have Tensorflow up and running, this should be fairly straightforward.

A few things to note:

  1. You are now adding multiple layers. Be careful with your variable names! (A sketch of the layer-definition pattern appears after this list.)
  2. You'll use the Adam optimizer, not vanilla SGD. We'll learn more about this later.
  3. The dropout layer is optional, but you should probably leave it in just to make cutting-and-pasting easier.
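
If you'd like a reference for that pattern, here is a minimal sketch of the tutorial-style helper functions and the first convolutional layer. The variable names (W_conv1, h_pool1, etc.) follow the tutorial's conventions; treat this as something to check your own code against, not a complete solution:

import tensorflow as tf

# Small positive initializations help avoid dead ReLU units.
def weight_variable(shape):
    return tf.Variable(tf.truncated_normal(shape, stddev=0.1))

def bias_variable(shape):
    return tf.Variable(tf.constant(0.1, shape=shape))

def conv2d(x, W):
    # Stride 1, zero-padded so the output is the same size as the input.
    return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')

def max_pool_2x2(x):
    # 2x2 max pooling halves each spatial dimension.
    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1],
                          strides=[1, 2, 2, 1], padding='SAME')

# First convolutional layer: 32 filters of size 5x5 over a 1-channel image.
x = tf.placeholder(tf.float32, shape=[None, 784])
x_image = tf.reshape(x, [-1, 28, 28, 1])
W_conv1 = weight_variable([5, 5, 1, 32])
b_conv1 = bias_variable([32])
h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)
h_pool1 = max_pool_2x2(h_conv1)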

Note: you only need to train for 1500 steps. My final accuracy was 96.5%. If you want to train for the full 20k steps, you are of course welcome to do so!

Part 2: add in Tensorboard visualizations

There are two parts to this: first, you need to scope all of the nodes in your computation graph. In class, I showed a visualization that drew pretty boxes around all of the different parts of your computation graph. That's what I want from you!
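
Those boxes come from name scopes: wrap each logical chunk of your graph in a tf.name_scope, and Tensorboard will collapse it into a single expandable box. Here is a minimal sketch; the scope names are just illustrative, and y_conv and y_ are assumed to be the tutorial's output and label tensors:

with tf.name_scope('conv1'):
    W_conv1 = weight_variable([5, 5, 1, 32])
    b_conv1 = bias_variable([32])
    h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)
    h_pool1 = max_pool_2x2(h_conv1)

with tf.name_scope('accuracy'):
    correct_prediction = tf.equal(tf.argmax(y_conv, 1), tf.argmax(y_, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))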

Second, you'll need to produce little graphs that show accuracy over time.

Adventurous souls can dive right into the Tensorflow visualization tutorial. Here are some condensed notes:

Tensorboard logs events to a summary log. You'll need to tell Tensorboard where to stash those events and when to write them out; both are done with a SummaryWriter. You need to create a SummaryWriter object:

summary_writer = tf.train.SummaryWriter( "./tf_logs", graph=sess.graph )

as well as scalar summaries of relevant variables; maybe something like this:

acc_summary = tf.scalar_summary( 'accuracy', accuracy )

These summaries are considered ops, just like any node in the computation graph, and they are triggered by sess.run. Tensorflow helpfully allows you to merge all of the summary ops into a single operation:

merged_summary_op = tf.merge_all_summaries()

Then, you'll need to trigger the merged_summary_op operation. This will generate a summary string, which you should pass to your summary writer.
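
In your training loop, that looks something like the following sketch. It assumes the tutorial's names (train_step, x, y_, keep_prob, and the mnist dataset object from input_data.read_data_sets); adapt it to whatever your code actually calls them:

for step in range(1500):
    batch = mnist.train.next_batch(50)
    # Run one training step and evaluate all merged summaries at once.
    _, summary_str = sess.run([train_step, merged_summary_op],
                              feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})
    # Tag each summary with the step number so Tensorboard can plot it over time.
    summary_writer.add_summary(summary_str, step)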

Once you have run your code and collected the necessary statistics, you should be able to start up the Tensorboard visualizer. It runs as a webserver; to start Tensorboard, you should be able to run something like the following from the directory where you ran your TF code:

cd tf_logs
tensorboard --logdir .

At which point you'll see something like the following output:

Starting TensorBoard 28 on port 6006
(You can navigate to http://192.168.250.107:6006)

Point your browser to the spot indicated, and voila!

