Objective:

To creatively apply knowledge gained through the course of the semester to a substantial learning problem of your own choosing.


Deliverable:

For your final project, you should execute a substantial project of your own choosing. You will turn in a single writeup (in PDF format only, please!). Your writeup can be structured in whatever way makes sense for your project, but see below for some possible outlines; for example, your writeup could closely follow the grading standards.

Your project will be graded more on effort than on results. As I have stated in class, I would rather have you swing for the fences and miss than take on a simple, safe project. It is therefore very important that your final writeup clearly convey the scope of your efforts.

I am expecting serious effort on this project, and a writeup that reflects it. There are no fixed page limits, but roughly 8-10 pages is probably about right.


Grading standards:

Your writeup will be graded on the following elements:

  • 5% Clearly motivated problem and background. Do a literature search. What other attempts have been made to solve this problem?
  • 10% Exploratory data analysis. What patterns exist in the data before applying DNNs? Is there any pre-processing you need to do? (A brief EDA sketch appears after this list.)
  • 35% Description of technical approach
    • 5% How will you know if you succeed? Are there quantitative metrics for success (such as a classification error rate), or will success be judged qualitatively (such as the image quality of GAN-generated images)?
    • 10% How did you prepare and analyze your data? How did you establish baselines and test/train splits? (A split-and-baseline sketch appears after this list.)
    • 15% Describe how DNNs fit into your solution method. Discuss whether this is a supervised, unsupervised, or RL problem.
    • 5% Is there anything unique about your problem, or about the way you applied DNNs?
  • 45% Analysis of results
    • 25% Present your final results, including comparison to baselines, in whatever format is most appropriate to your problem
    • 20% Describe the process of getting to your final result. What did you tweak? Did you iterate on your topology? How did you debug your model? Include anything relevant to support your discussion, such as TensorBoard screenshots, graphs of cost decreasing over time, charts comparing different topologies, etc.
  • 5% Tidy and legible final writeup
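
As one illustration of the exploratory-data-analysis item above, summary statistics and a quick histogram are often enough to reveal skew, class imbalance, or the need for normalization. This is only a minimal sketch: the file name ("data.csv") and column names ("label", "feature_0") are hypothetical placeholders for whatever dataset you choose.

  # Minimal EDA sketch; "data.csv", "label", and "feature_0" are placeholders.
  import pandas as pd
  import matplotlib.pyplot as plt

  df = pd.read_csv("data.csv")

  print(df.shape)                     # number of examples and columns
  print(df.describe())                # per-column count, mean, and range
  print(df["label"].value_counts())   # is the class distribution balanced?

  # A quick histogram can reveal skew, outliers, or the need to normalize.
  df["feature_0"].hist(bins=50)
  plt.xlabel("feature_0")
  plt.ylabel("count")
  plt.savefig("feature_0_hist.png")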
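
For the baselines and test/train split item, a useful pattern is to fix a random seed so your split is reproducible, and to compare your network against a trivial predictor. This is a minimal sketch using synthetic stand-in data, not a prescription for any particular library or dataset:

  # Reproducible train/test split plus a trivial "most frequent class" baseline.
  # X and y are synthetic stand-ins for your project's features and labels.
  import numpy as np
  from sklearn.model_selection import train_test_split
  from sklearn.dummy import DummyClassifier
  from sklearn.metrics import accuracy_score

  X = np.random.rand(1000, 20)        # stand-in features
  y = np.random.randint(0, 2, 1000)   # stand-in binary labels

  # Fixing random_state makes the split identical every time you report results.
  X_train, X_test, y_train, y_test = train_test_split(
      X, y, test_size=0.2, random_state=42)

  baseline = DummyClassifier(strategy="most_frequent").fit(X_train, y_train)
  print("baseline accuracy:", accuracy_score(y_test, baseline.predict(X_test)))

Any network you train should beat this kind of floor; if it does not, that is worth discussing in your writeup.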

Possible project ideas:

Many different kinds of final projects are possible. A few examples include:

  • Learning how to render a scene based on examples of position and lighting
  • Learning which way is “up” in a photo (useful for drone odometry)
  • Training an HTTP server to predict which web pages a user will likely visit next
  • Training an earthquake predictor
  • Using GANs to turn rendered faces into something more realistic (avoiding the “uncanny valley”)
  • Transforming Minecraft into a more realistic looking game with DNN post-processing
  • Using style transfer on a network trained for facial recognition (to identify and accentuate facial characteristics)
  • Using RGB+Depth datasets to improve geometric plausibility of GANs

The project can involve any application area, but the core challenge must be tackled using some sort of deep learning.

The best projects involve a new, substantive idea and a novel dataset. It may also be acceptable to use vanilla DNN techniques on a novel dataset, as long as you demonstrate significant effort in the “science” of the project – evaluating results, exploring topologies, thinking hard about how to train, and evaluating carefully on a test/train split. Simply implementing a state-of-the-art method from the literature may also be acceptable, but clear such projects with me first.


Notes:

You are welcome to use any publicly available code on the internet to help you.

Here are some possible questions that you might consider answering as part of your report:

  1. A discussion of the dataset
    1. Where did it come from? Who published it?
    2. Who cares about this data?
  2. A discussion of the problem to be solved
    1. Is this a classification problem? A regression problem?
    2. Is it supervised? Unsupervised?
    3. What sort of background knowledge do you have that you could bring to bear on this problem?
    4. What other approaches have been tried? How did they fare?
  3. A discussion of your exploration of the dataset.
    1. Before you start coding, you should look at the data. What does it include? What patterns do you see?
    2. Any visualizations about the data you deem relevant
  4. A clear, technical description of your approach. This section should include:
    1. Background on the approach
    2. Description of the model you use
    3. Description of the inference / training algorithm you use
    4. Description of how you partitioned your data into a test/training split
  5. An analysis of how your approach worked on the dataset
    1. What was your final error (e.g., RMSE or classification accuracy) on your private test/train split?
    2. Did you overfit? How do you know? (See the loss-curve sketch after this list.)
    3. Was your first algorithm the one you ultimately used for your submission? Why did you (or why didn't you) iterate on your design?
    4. Did you solve (or make any progress on) the problem you set out to solve?
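
One common way to answer the overfitting question above (and to produce the cost-over-time graphs mentioned in the grading standards) is to record training and validation loss every epoch and plot the two curves together. The sketch below uses made-up loss values purely to show the bookkeeping; in your project the two lists would be filled in by your actual training loop.

  # Plot training vs. validation loss to diagnose overfitting.
  # The loss values here are fabricated stand-ins for a real training loop.
  import matplotlib.pyplot as plt

  num_epochs = 30
  train_losses = [1.0 / (e + 1) for e in range(num_epochs)]           # keeps falling
  val_losses = [1.0 / (e + 1) + 0.02 * e for e in range(num_epochs)]  # starts rising

  plt.plot(train_losses, label="train loss")
  plt.plot(val_losses, label="validation loss")
  plt.xlabel("epoch")
  plt.ylabel("loss")
  plt.legend()
  plt.savefig("loss_curves.png")

  # If training loss keeps dropping while validation loss flattens or rises,
  # the model is overfitting; consider more data, regularization, dropout,
  # or early stopping.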

Possible sources of interesting datasets:

  • CrowdFlower
  • KDD Cup
  • UCI Machine Learning Repository
  • Kaggle (current and past competitions)
  • Data.gov
  • AWS public datasets
  • World Bank
  • BYU CS478 datasets
  • data.utah.gov
  • Google Research
  • BYU DSC competition
