
Objective: to creatively apply knowledge gained over the course of the semester to a substantial data analysis problem of your own choosing.


For your final project, you will find a dataset and apply your data analysis skills to a new problem based on the data. You will turn in a PDF report discussing your efforts; do not include code in your report.

Grading standards:

Your entry will be graded on the following elements:

  • 75% Project writeup
    • 35% Exploratory data analysis
    • 35% Description of technical approach
    • 30% Analysis of performance of method
  • 25% Project presentation
    • 33% Clearly motivated problem
    • 33% Clear description of technical approach
    • 33% Clear presentation of results


The final project is designed to give you a chance to explore a data science project end-to-end, with minimal restrictions.

For this project, you must:

  • Select a dataset to analyze (perhaps one from Kaggle?)
  • Define a question or task to be performed
    • What is your goal in analyzing this dataset? Is it a prediction problem? Or are you searching for patterns?
    • If appropriate, define a cost function to be optimized
  • Choose an analysis strategy
  • If appropriate, define a model
  • If appropriate, choose an inference algorithm to answer your question, given a model
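As one hypothetical way to frame the steps above, suppose you cast your dataset as a supervised regression problem with RMSE as the cost function to optimize. The sketch below (all names are illustrative, not required by the assignment) defines that cost function and a trivial mean-prediction baseline; beating such a baseline is a useful sanity check for any real model you build.

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean squared error: a common cost function for regression."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def mean_baseline(y_train):
    """Trivial baseline 'model': always predict the training mean."""
    return float(np.mean(y_train))

y_train = [3.0, 4.0, 5.0]
y_test = [4.0, 6.0]
pred = mean_baseline(y_train)
print(rmse(y_test, [pred, pred]))
```

If your chosen task is classification or pattern discovery instead, the analogous step is choosing an error rate, likelihood, or other objective that matches your question.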

You are welcome to use any publicly available code on the internet to help you. For example, you may wish to use the Stan language to help you construct an HMC sampler. Other possibilities include PyMC, the Venture probabilistic programming language, BayesDB, etc.
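Tools like Stan and PyMC implement HMC/NUTS for you; to illustrate the kind of posterior inference they automate, here is a toy random-walk Metropolis sampler in plain NumPy for the mean of normally distributed data. This is a teaching sketch under assumed model choices (known unit noise, broad normal prior), not something you would hand-roll for a real project.

```python
import numpy as np

def log_posterior(mu, data, prior_sd=10.0):
    # Normal likelihood (known sd = 1) plus a broad Normal(0, prior_sd) prior on mu.
    return -0.5 * np.sum((data - mu) ** 2) - 0.5 * (mu / prior_sd) ** 2

def metropolis(data, n_samples=5000, step=0.5, seed=0):
    """Random-walk Metropolis: a simple MCMC cousin of the HMC in Stan/PyMC."""
    rng = np.random.default_rng(seed)
    mu = 0.0
    samples = []
    for _ in range(n_samples):
        proposal = mu + rng.normal(0.0, step)
        # Accept with probability min(1, posterior ratio), computed in log space.
        if np.log(rng.uniform()) < log_posterior(proposal, data) - log_posterior(mu, data):
            mu = proposal
        samples.append(mu)
    return np.array(samples)

data = np.random.default_rng(1).normal(3.0, 1.0, size=100)
post = metropolis(data)
print(post[1000:].mean())  # posterior mean should land near the data mean
```

In Stan or PyMC you would write down the same model declaratively and let the library run a far more efficient sampler.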

Your writeup should be a serious report on the dataset you chose, the problem you set out to solve, the technical approach you took (and your rationale for it), the results of any exploratory data analysis, and the results of your final model / inference / optimization algorithm.

Your writeup, like your recommender engine report, must include five main sections:

  1. A discussion of the dataset
    1. Where did it come from? Who published it?
    2. Who cares about this data?
  2. A discussion of the problem to be solved
    1. Is this a classification problem? A regression problem?
    2. Is it supervised? Unsupervised?
    3. What sort of background knowledge do you have that you could bring to bear on this problem?
    4. What other approaches have been tried? How did they fare?
  3. A discussion of your exploration of the dataset
    1. Before you start coding, you should look at the data. What does it include? What patterns do you see?
    2. Any visualizations of the data you deem relevant
  4. A clear, technical description of your approach. This section should include:
    1. Background on the approach
    2. Description of the model you use
    3. Description of the inference / training algorithm you use
    4. Description of how you partitioned your data into a test/training split
  5. An analysis of how your approach worked on the dataset
    1. What was your final error (e.g., RMSE) on your held-out test split?
    2. Did you overfit? How do you know?
    3. Was your first algorithm the one you ultimately used for your submission? Why did you (or didn't you) iterate your design?
    4. Did you solve (or make any progress on) the problem you set out to solve?
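The test/training split and overfitting questions above can be sketched concretely. The example below (synthetic data and simple least-squares fit, purely illustrative) holds out a test set, fits only on the training portion, and compares the two errors; a training error much lower than the test error is the classic sign of overfitting.

```python
import numpy as np

def train_test_split(X, y, test_frac=0.2, seed=0):
    """Randomly partition the rows of (X, y) into train and held-out test sets."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    n_test = int(len(X) * test_frac)
    test, train = idx[:n_test], idx[n_test:]
    return X[train], X[test], y[train], y[test]

def rmse(y_true, y_pred):
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

# Synthetic regression data: y = 2x + noise.
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 1))
y = 2.0 * X[:, 0] + rng.normal(scale=0.5, size=200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y)

# Fit a least-squares line on the training set only.
slope, intercept = np.polyfit(X_tr[:, 0], y_tr, 1)
train_err = rmse(y_tr, slope * X_tr[:, 0] + intercept)
test_err = rmse(y_te, slope * X_te[:, 0] + intercept)
print(train_err, test_err)  # similar values here suggest little overfitting
```

Reporting both numbers in your writeup lets the reader judge your answer to the overfitting question for themselves.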

Possible sources of interesting datasets:

  • KDD cup
  • UCI repository
  • Kaggle (current and past)
  • World Bank
  • BYU CS478 datasets
  • Google research
  • BYU DSC competition

cs401r_w2016/fp.txt · Last modified: 2018/04/23 18:59 by sadler