cs501r_f2016:fp

Objective:

To creatively apply knowledge gained through the course of the semester to a substantial learning problem of your own choosing.


Deliverable:

There are two deliverables for the final:

  • An Excel spreadsheet (or CSV file) showing the total amount of time you spent on your final project, broken down by day
  • A PDF writeup of your project (one to two pages)

Grading standards:

Your final project counts as 20% of your overall grade.

Grading is divided into two parts: 80% of your final project grade is based on the number of hours you spent, and 20% is based on your writeup.

For the number of hours, I will take your total number of hours, divide by 35, and multiply by 100 (capped at 100%). That is your percentage. (So 35 hours == 100%, 17.5 hours == 50%, etc.)
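As a sanity check on the rule above, the hour-based percentage can be expressed in a few lines. This is just a sketch; the function name is mine, not part of the assignment:

```python
def hours_grade(total_hours):
    """Convert logged hours to the hour-based grade percentage.

    35 hours maps to 100%; anything beyond 35 hours is capped at 100%.
    """
    return min(total_hours / 35 * 100, 100.0)

# 35 hours -> 100.0, 17.5 hours -> 50.0, 50 hours is still capped at 100.0
```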

I will evaluate your writeup primarily based on the quality of your writing, although I reserve the right to assign some points based on the quality of your project.

Note that no late submissions are possible for this project, because it is done in lieu of the final exam.


Description:

For your final project, you should execute a substantial project of your own choosing. You will turn in a single writeup (in PDF format only, please!). Your writeup can be structured in whatever way makes sense for your project, but see below for some possible outlines.

Your project will be graded more on effort than results. As I have stated in class, I would rather have you swing for the fences and miss than take on a simple, safe project. It is therefore very important that your final time log clearly convey the scope of your efforts.

I am expecting serious effort on this project, and your writeup, even if it's short, should reflect that.


Requirements for the time log:

For the time log, you must document the time you spent (on a daily basis) along with a simple description of your activities during that time. If you do not document your time, it will not count. In other words, it is not acceptable to claim that you spent 35 hours on your project without a time log to back it up. I will not accept any excuses about this requirement.

So, for example, a time log might look like the following:

  • 8/11 - 1 hour - read alphago paper
  • 8/12 - 2 hours - downloaded and cleaned data
  • 8/21 - 4 hours - found alphago code
  • 8/24 - 1 hour - implemented game logic
  • 9/17 - 2 hours - worked on self-play engine
  • 9/18 - 1 hour - worked on self-play engine
  • 10/1 - 2 hours - started training
  • … etc.
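If you keep the log as a CSV file, one row per day with date, hours, and description columns, totaling it up is straightforward. This is a sketch assuming that three-column layout, which is my suggestion rather than a required format:

```python
import csv
import io

def total_hours(csv_text):
    """Sum the 'hours' column of a time-log CSV that has a header row."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return sum(float(row["hours"]) for row in reader)

log = """date,hours,description
8/11,1,read alphago paper
8/12,2,downloaded and cleaned data
8/21,4,found alphago code
"""
print(total_hours(log))  # 7.0
```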

Additional requirements:

  • You may not count any more than 5 hours of research and reading
  • You may not count any more than 15 hours of “prep work”. This could include dataset preparation, collection and cleaning; or wrestling with getting a simulator / model working for a deep RL project; etc.
  • At least 20 hours must involve designing, testing, and iterating deep learning-based models, analyzing results, experimenting, etc.
  • You don't get extra credit for more than 35 hours. Sorry. :)
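One way to keep yourself honest about the caps above is to tag each log entry with a category and check the totals as you go. A sketch, with category names that are my own labels for the three buckets in the requirements:

```python
# Caps from the requirements: at most 5 countable hours of research/reading,
# at most 15 of prep work, and at least 20 hours of modeling/experimentation.
CAPS = {"research": 5, "prep": 15}
MIN_MODELING = 20
MAX_TOTAL = 35  # no extra credit beyond 35 hours

def countable_hours(entries):
    """entries: list of (category, hours) tuples.

    Returns total countable hours after applying the per-category caps.
    Raises ValueError if modeling time falls short of the minimum.
    """
    totals = {}
    for category, hours in entries:
        totals[category] = totals.get(category, 0) + hours
    if totals.get("modeling", 0) < MIN_MODELING:
        raise ValueError("need at least 20 hours of modeling/experimentation")
    counted = sum(min(h, CAPS.get(cat, float("inf")))
                  for cat, h in totals.items())
    return min(counted, MAX_TOTAL)

entries = [("research", 8), ("prep", 10), ("modeling", 22)]
print(countable_hours(entries))  # research capped at 5: 5 + 10 + 22 -> 35
```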

Requirements for the writeup:

Your writeup simply needs to inform me about what you did for your project. You should describe:

  • The problem you set out to solve
  • The exploratory data analysis you did
  • Your technical approach
  • Your results

It should be about 1-2 pages.


Possible project ideas:

Many different kinds of final projects are possible. A few examples include:

  • Learning how to render a scene based on examples of position and lighting
  • Learning which way is “up” in a photo (useful for drone odometry)
  • Training an HTTP server to predict which web pages a user will likely visit next
  • Training an earthquake predictor
  • Using GANs to turn rendered faces into something more realistic (avoiding the “uncanny valley”)
  • Transforming Minecraft into a more realistic looking game with DNN post-processing
  • Using style transfer on a network trained for facial recognition (to identify and accentuate facial characteristics)
  • Using RGB+Depth datasets to improve geometric plausibility of GANs

The project can involve any application area, but the core challenge must be tackled using some sort of deep learning.

The best projects involve a new, substantive idea and a novel dataset. It may also be acceptable to use vanilla DNN techniques on a novel dataset, as long as you demonstrate significant effort in the “science” of the project – evaluating results, exploring topologies, thinking hard about how to train, and carefully evaluating your test/training split. Simply implementing a state-of-the-art method from the literature may also be acceptable, but clear such projects with me first.


Notes:

You are welcome to use any publicly available code on the internet to help you.

Here are some possible questions that you might consider answering as part of your report:

  1. A discussion of the dataset
    1. Where did it come from? Who published it?
    2. Who cares about this data?
  2. A discussion of the problem to be solved
    1. Is this a classification problem? A regression problem?
    2. Is it supervised? Unsupervised?
    3. What sort of background knowledge do you have that you could bring to bear on this problem?
    4. What other approaches have been tried? How did they fare?
  3. A discussion of your exploration of the dataset.
    1. Before you start coding, you should look at the data. What does it include? What patterns do you see?
    2. Any visualizations about the data you deem relevant
  4. A clear, technical description of your approach.
    1. Background on the approach
    2. Description of the model you use
    3. Description of the inference / training algorithm you use
    4. Description of how you partitioned your data into a test/training split
    5. How many parameters does your model have? What optimizer did you use?
    6. What topology did you choose, and why?
    7. Did you use any pre-trained weights? Where did they come from?
  5. An analysis of how your approach worked on the dataset
    1. What was your final RMSE on your private test split?
    2. Did you overfit? How do you know?
    3. Was your first algorithm the one you ultimately used for your submission? Why did you (or didn't you) iterate your design?
    4. Did you solve (or make any progress on) the problem you set out to solve?

Possible sources of interesting datasets:

  • CrowdFlower
  • KDD Cup
  • UCI repository
  • Kaggle (current and past)
  • Data.gov
  • AWS
  • World Bank
  • BYU CS478 datasets
  • data.utah.gov
  • Google research
  • BYU DSC competition

cs501r_f2016/fp.txt · Last modified: 2021/06/30 23:42 (external edit)