
Differences

This shows you the differences between two versions of the page.


cs501r_f2018:lab6 [2018/10/08 22:36]
sadler [Description:]
cs501r_f2018:lab6 [2018/10/09 19:26]
sadler [Description:]
Line 105: Line 105:
  
 <code python>
 +import torch
 +from torch.autograd import Variable
 # Turn string into list of longs
 def char_tensor(string):
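The hunk above shows only the signature of char_tensor. A minimal sketch of such a helper, assuming a global all_characters vocabulary string and a PyTorch version where a plain long tensor suffices (both assumptions, not the lab's required code), could look like this:

<code python>
import string
import torch

all_characters = string.printable  # assumed vocabulary; the lab may define its own

def char_tensor(s):
    # map each character to its index in the vocabulary, as a tensor of longs
    tensor = torch.zeros(len(s)).long()
    for c in range(len(s)):
        tensor[c] = all_characters.index(s[c])
    return tensor
</code>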
Line 124: Line 126:
  
 <code python>
 +import time
 n_epochs = 2000
 print_every = 100
Line 140: Line 143:
  
 for epoch in range(1, n_epochs + 1):
-    loss = train(*random_training_set())
+    loss_ = train(*random_training_set())
-    loss_avg += loss
+    loss_avg += loss_
  
     if epoch % print_every == 0:
-        print('[%s (%d %d%%) %.4f]' % (time_since(start), epoch, epoch / n_epochs * 100, loss))
+        print('[%s (%d %d%%) %.4f]' % (time.time() - start, epoch, epoch / n_epochs * 100, loss_))
         print(evaluate('Wh', 100), '\n')
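The older revision formatted elapsed time with a time_since helper that is not shown in this hunk; the newer revision sidesteps it by printing raw seconds from time.time() - start. If a formatted timestamp is preferred, a small helper along these lines (a sketch, not the lab's code) would also work:

<code python>
import math
import time

def time_since(start):
    # format elapsed wall-clock time as "Xm Ys"
    s = time.time() - start
    m = math.floor(s / 60)
    s -= m * 60
    return '%dm %ds' % (m, s)
</code>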
  
Line 179: Line 182:
         # decode output

-    def forward(self, input, hidden):
+    def forward(self, input_char, hidden):
         # by reviewing the documentation, construct a forward function that properly uses the output
         # of the GRU
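As a reference point for that documentation-reading step, one plausible shape for the forward pass is sketched below, assuming the module owns an nn.Embedding encoder, an nn.GRU, and a linear decoder; all layer names and the constructor layout here are assumptions, not the lab's required structure:

<code python>
import torch.nn as nn

class CharRNN(nn.Module):
    def __init__(self, vocab_size, hidden_size):
        super().__init__()
        self.encoder = nn.Embedding(vocab_size, hidden_size)
        self.gru = nn.GRU(hidden_size, hidden_size)
        self.decoder = nn.Linear(hidden_size, vocab_size)

    def forward(self, input_char, hidden):
        # embed one character and shape it to (seq_len=1, batch=1, hidden_size)
        emb = self.encoder(input_char.view(1, -1))
        output, hidden = self.gru(emb.view(1, 1, -1), hidden)
        # decode the GRU output back to vocabulary-sized logits
        output = self.decoder(output.view(1, -1))
        return output, hidden
</code>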
Line 200: Line 203:
       # your code here
     ## /
 +    loss = 0
     for c in range(chunk_len):
         output, hidden = # run the forward pass of your rnn with proper input
-        loss += criterion(output, target[c].view(1))
+        loss += criterion(output, target[c].unsqueeze(0))
  
     ## calculate backwards loss and step the optimizer (globally)
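For orientation only, one way the elided pieces of this training step could come together is sketched below; rnn, optimizer, criterion, chunk_len, and init_hidden are assumed names and not necessarily what the lab expects:

<code python>
def train(inp, target):
    # hypothetical completion: fresh hidden state and cleared gradients each chunk
    hidden = rnn.init_hidden()
    optimizer.zero_grad()
    loss = 0
    for c in range(chunk_len):
        # one character in, one prediction out
        output, hidden = rnn(inp[c], hidden)
        loss += criterion(output, target[c].unsqueeze(0))

    # backpropagate the summed loss and take one optimizer step
    loss.backward()
    optimizer.step()
    return loss.item() / chunk_len
</code>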