====Objective:====

Work with sequential data in PyTorch by building a Char-RNN for text generation.

----
====Deliverable:====

For this lab, you will submit an IPython notebook via Learning Suite.

There are many resources on character-level recurrent neural networks. This [[http://karpathy.github.io/2015/05/21/rnn-effectiveness/|blog post]] is a good introduction to what they can do and how they work.

This lab has four parts:

**Part 1:** Build an RNN with built-in methods and train it on ''textfile.txt''

**Part 2:** Build your own LSTM Cell (see the sketch at the end of this section)

**Part 3:** Build your own GRU Cell

**Part 4:** Generate awesome text with a dataset of your choice

This is example output from a model trained on //The Lord of the Rings// for only 20 minutes:

"Who now further here the learnest and
south, looking slow you beastion, and that is all plainly day."

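Parts 2 and 3 ask you to implement the recurrence yourself rather than calling ''nn.LSTM'' or ''nn.GRU''. As a rough sketch of the general shape (the class name, variable names, and interface below are placeholders, not a required design), a hand-rolled LSTM cell computes its four gates from the current input and the previous hidden state, then updates the cell and hidden states:

<code python>
import torch
import torch.nn as nn

class MyLSTMCell(nn.Module):
    """One step of an LSTM recurrence (sketch only, not the required interface)."""
    def __init__(self, input_size, hidden_size):
        super().__init__()
        # One linear layer produces all four gate pre-activations at once.
        self.gates = nn.Linear(input_size + hidden_size, 4 * hidden_size)

    def forward(self, x, state):
        h, c = state                              # each: (batch, hidden_size)
        z = self.gates(torch.cat([x, h], dim=1))  # (batch, 4 * hidden_size)
        i, f, g, o = z.chunk(4, dim=1)            # input, forget, candidate, output
        i, f, o = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o)
        g = torch.tanh(g)
        c_next = f * c + i * g                    # update the cell state
        h_next = o * torch.tanh(c_next)           # new hidden state
        return h_next, c_next
</code>

A GRU cell (Part 3) follows the same pattern with reset and update gates and no separate cell state; in both cases you loop the cell over the time dimension yourself.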
----
====Grading standards:====

Your notebook will be graded on the following:

  * 100% Build something amazing
  * 20% Modified code to include a test/train split
  * 20% Modified code to include a visualization of train/test losses
  * 10% Tidy and legible figures, including labeled axes where appropriate

----
====Description:====

At this point in the semester, we have worked primarily with fixed-size, non-sequential inputs such as images. In this lab you will work with sequential data: a character-level RNN reads text one character at a time, carries a hidden state forward, and learns to predict the next character. Repeatedly sampling from that next-character distribution is what generates new text.

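To make that concrete, here is a minimal, hypothetical sketch of how raw text becomes next-character training pairs (the variable names and window length are illustrative, not part of the assignment):

<code python>
import torch

text = open('textfile.txt').read()          # the training corpus from Part 1
chars = sorted(set(text))                   # character vocabulary
char_to_idx = {ch: i for i, ch in enumerate(chars)}

# Encode the whole corpus as integer indices.
encoded = torch.tensor([char_to_idx[ch] for ch in text], dtype=torch.long)

# For next-character prediction, the target is the input shifted by one.
seq_len = 100                               # illustrative window length
inputs  = encoded[:seq_len]                 # characters 0 .. 99
targets = encoded[1:seq_len + 1]            # characters 1 .. 100
</code>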
----
====Part 1 detailed outline:====

**Step 1.** Get a Colab notebook up and running with GPUs enabled.

**Step 2.** Install PyTorch, torchvision, and tqdm.

<code python>
!pip3 install torch
!pip3 install torchvision
!pip3 install tqdm
</code>

**Step 3.** Import PyTorch and the other modules you will need.

<code python>
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.utils.data import Dataset, DataLoader
import numpy as np
import matplotlib.pyplot as plt
from torchvision import transforms, utils, datasets
from tqdm import tqdm

assert torch.cuda.is_available()  # You need to request a GPU from Runtime > Change Runtime Type
</code>

**Step 4.** Construct the following (see the sketch after this list):

  * a model class that inherits from ''nn.Module''
    * Check out [[https://pytorch.org/docs/stable/nn.html#torch.nn.Module]]
    * Your model can contain any submodules you wish -- nn.Linear is a good, easy starting point
  * a dataset class that inherits from ''Dataset'' and produces samples from [[https://pytorch.org/docs/stable/torchvision/datasets.html#fashion-mnist]]
    * You may be tempted to use this dataset directly (as it already inherits from Dataset), but we want you to learn how a dataset is constructed. Your class should be pretty simple and just output items from FashionMNIST.
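As a rough starting point, not a required design (the class names, layer size, and download path below are placeholders), the two classes might look like this:

<code python>
import torch.nn as nn
from torch.utils.data import Dataset
from torchvision import datasets, transforms

class LinearNetwork(nn.Module):
    """Simple model: flattens a 28x28 image and applies one linear layer."""
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(28 * 28, 10)

    def forward(self, x):
        return self.net(x.view(x.size(0), -1))

class FashionMNISTDataset(Dataset):
    """Thin wrapper that exposes FashionMNIST through the Dataset interface."""
    def __init__(self, root='/tmp/fashionmnist', train=True):
        self.data = datasets.FashionMNIST(root, train=train, download=True,
                                          transform=transforms.ToTensor())

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        return self.data[idx]   # (image tensor, label)
</code>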

**Step 5.** Create instances of the following objects (a sketch follows this list):

  * the SGD optimizer -- check out [[https://pytorch.org/docs/stable/optim.html#torch.optim.SGD]]
  * your model
  * the DataLoader class, using your dataset
  * the MSE loss function [[https://pytorch.org/docs/stable/nn.html#torch.nn.MSELoss]]
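A hedged sketch of this setup, reusing the hypothetical classes from the previous step (the learning rate and batch size are arbitrary choices, not requirements):

<code python>
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader

model = LinearNetwork().cuda()
train_dataset = FashionMNISTDataset(train=True)
train_loader = DataLoader(train_dataset, batch_size=64, shuffle=True)
objective = nn.MSELoss()
optimizer = optim.SGD(model.parameters(), lr=1e-3)
</code>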

**Step 6.** Loop over your training dataloader. Inside this loop you should:

  * zero out your gradients
  * compute the loss between your model's output and the true value
  * call backward() on the loss to compute gradients
  * take a step on the optimizer
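One hedged way to put the loop together, again reusing the hypothetical names from the sketches above; note that MSE compares continuous values, so the integer labels are one-hot encoded before computing the loss:

<code python>
import torch.nn.functional as F
from tqdm import tqdm

model.train()
for x, y in tqdm(train_loader):
    x, y = x.cuda(), y.cuda()

    optimizer.zero_grad()                            # zero out gradients
    y_hat = model(x)                                 # forward pass
    loss = objective(y_hat, F.one_hot(y, 10).float())  # loss vs. the true value
    loss.backward()                                  # compute gradients
    optimizer.step()                                 # take a step on the optimizer
</code>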
  