Lab2 part A

Open Leif Eriksson requested to merge Lab2 into master
1 file   +176 −14
%% Cell type:markdown id: tags:
# L2: Language modelling
%% Cell type:markdown id: tags:
In this lab you will implement and train two neural language models: the fixed-window model mentioned in Lecture 2.3, and the recurrent neural network model from Lecture 2.5. You will evaluate these models by computing their perplexity on a benchmark dataset.
%% Cell type:code id: tags:
``` python
import torch
```
%% Cell type:markdown id: tags:
For this lab, you should use the GPU if you have one:
%% Cell type:code id: tags:
``` python
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
```
%% Cell type:markdown id: tags:
## Data
%% Cell type:markdown id: tags:
The data for this lab is [WikiText](https://blog.einstein.ai/the-wikitext-long-term-dependency-language-modeling-dataset/), a collection of more than 100 million tokens extracted from the set of ‘Good’ and ‘Featured’ articles on Wikipedia. We will use the small version of the dataset, which contains slightly more than 2.5 million tokens.
The next cell contains code for an object that will act as a container for the ‘training’ and the ‘validation’ sections of the data. We fill this container by reading the corresponding text files. The only processing that we do is to whitespace-tokenize, and to replace each newline character with a special token `<eos>` (end-of-sentence).
%% Cell type:code id: tags:
``` python
class WikiText(object):

    def __init__(self):
        self.vocab = {}
        self.train = self.read_data('wiki.train.tokens')
        self.valid = self.read_data('wiki.valid.tokens')

    def read_data(self, path):
        ids = []
        # Open with an explicit encoding rather than the platform default.
        with open(path, encoding='cp850') as source:
            for line in source:
                for token in line.split() + ['<eos>']:
                    if token not in self.vocab:
                        self.vocab[token] = len(self.vocab)
                    ids.append(self.vocab[token])
        return ids
```
%% Cell type:markdown id: tags:
The cell below loads the data and prints the total number of tokens and the size of the vocabulary.
%% Cell type:code id: tags:
``` python
wikitext = WikiText()
print('Tokens in train:', len(wikitext.train))
print('Tokens in valid:', len(wikitext.valid))
print('Vocabulary size:', len(wikitext.vocab))
```
%% Output
Tokens in train: 2088628
Tokens in valid: 217646
Vocabulary size: 33278
%% Cell type:markdown id: tags:
## Problem 1: Fixed-window neural language model
%% Cell type:markdown id: tags:
In this section you will implement and train the fixed-window neural language model proposed by [Bengio et al. (2003)](http://www.jmlr.org/papers/volume3/bengio03a/bengio03a.pdf) and introduced in Lecture&nbsp;2.3. Recall that an input to the network takes the form of a vector of $n-1$ integers representing the preceding words. Each integer is mapped to a vector via an embedding layer. (All positions share the same embedding.) The embedding vectors are then concatenated and sent through a two-layer feed-forward network with a non-linearity in the form of a rectified linear unit (ReLU) and a final softmax layer.
%% Cell type:markdown id: tags:
### Problem 1.1: Vectorize the data
%% Cell type:markdown id: tags:
Your first task is to write code for transforming the data in the WikiText container into a vectorized form that can be fed to the fixed-window model. Complete the skeleton code in the cell below:
%% Cell type:code id: tags:
``` python
def vectorize_fixed_window(wikitext_data, n):
    data = torch.tensor(wikitext_data, dtype=torch.long)
    N = len(data)
    x = torch.zeros(N, n - 1, dtype=torch.long)
    for i in range(n - 1):
        # Column i holds the token i positions ahead of the first context word
        # (wrapping around at the end of the data).
        x[:, i] = torch.roll(data, -i)
    # The target is the token that follows the n-1 context words.
    y = torch.roll(data, -(n - 1))
    return x, y
```
%% Cell type:markdown id: tags:
Your function should meet the following specification:
**vectorize_fixed_window** (*wikitext_data*, *n*)
> Transforms WikiText data (a list of word ids) into a pair of tensors $\mathbf{X}$, $\mathbf{y}$ that can be used to train the fixed-window model. Let $N$ be the total number of $n$-grams from the token list; then $\mathbf{X}$ is a matrix with shape $(N, n-1)$ and $\mathbf{y}$ is a vector with length $N$.
⚠️ Your function should be able to handle arbitrary values of $n \geq 1$.
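For example, with the token list $[2, 5, 7, 3, 9]$ and $n = 3$, $\mathbf{X}$ contains the rows $(2, 5)$, $(5, 7)$ and $(7, 3)$, and the corresponding entries of $\mathbf{y}$ are $7$, $3$ and $9$.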
%% Cell type:markdown id: tags:
#### 🤞 Test your code
Test your implementation by running the code in the next cell. Does the output match your expectation?
%% Cell type:code id: tags:
``` python
valid_x, valid_y = vectorize_fixed_window(wikitext.valid, 4)
print(valid_x.size())
```
%% Output
torch.Size([217646, 3])
%% Cell type:markdown id: tags:
### Problem 1.2: Implement the model
%% Cell type:markdown id: tags:
Your next task is to implement the fixed-window model based on the graphical specification given in the lecture.
%% Cell type:code id: tags:
``` python
import torch.nn as nn
class FixedWindowModel(nn.Module):
    def __init__(self, n, n_words, embedding_dim=50, hidden_dim=50):
        super().__init__()
        self.embedding_dim = embedding_dim
        self.hidden_dim = hidden_dim
        self.n = n - 1  # number of context positions
        self.n_words = n_words
        self.E = nn.Embedding(n_words, embedding_dim).to(device)
        self.H = nn.Linear(embedding_dim * (n - 1), hidden_dim).to(device)
        self.R = nn.ReLU().to(device)
        self.O = nn.Linear(hidden_dim, n_words).to(device)
        # Softmax over the vocabulary dimension
        self.S = nn.Softmax(dim=1).to(device)

    def forward(self, x):
        # Look up the embeddings of the n-1 context words: (B, n-1, E)
        _x = self.E(x)
        # Concatenate them into a single vector per example: (B, (n-1)*E)
        _w = _x.view(x.shape[0], self.embedding_dim * self.n)
        # Hidden layer with ReLU non-linearity
        _r = self.R(self.H(_w))
        # Output layer followed by a softmax over the vocabulary: (B, V)
        return self.S(self.O(_r))
```
%% Cell type:markdown id: tags:
Here is the specification of the two methods:
**__init__** (*self*, *n*, *n_words*, *embedding_dim*=50, *hidden_dim*=50)
> Creates a new fixed-window neural language model. The argument *n* specifies the model&rsquo;s $n$-gram order. The argument *n_words* is the number of words in the vocabulary. The arguments *embedding_dim* and *hidden_dim* specify the dimensionalities of the embedding layer and the hidden layer of the feedforward network, respectively; their default value is 50.
**forward** (*self*, *x*)
> Computes the network output on an input batch *x*. The shape of *x* is $(B, n-1)$, where $B$ is the batch size. The output of the forward pass is a tensor of shape $(B, V)$ where $V$ is the number of words in the vocabulary.
**Hint:** The most efficient way to implement the vector concatenation in this model is to use the [`view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view) method.
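For example, a minimal sketch of this reshaping step with made-up dimensions (`B` is the batch size, `E` the embedding dimension):

``` python
import torch

# Made-up dimensions for illustration: batch of 4 examples,
# n-1 = 2 context words, embedding dimension 50.
B, context, E = 4, 2, 50
embedded = torch.randn(B, context, E)          # output of the embedding layer
concatenated = embedded.view(B, context * E)   # one long vector per example
print(concatenated.size())                     # torch.Size([4, 100])
```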
#### 🤞 Test your code
Test your code by instantiating the model and feeding it a batch of examples from the training data.
%% Cell type:markdown id: tags:
### Problem 1.3: Train the model
%% Cell type:markdown id: tags:
Your final task is to write code to train the fixed-window model using minibatch gradient descent and the cross-entropy loss function.
For your convenience, the following cell contains a utility function that randomly samples minibatches of a specified size from a pair of tensors:
%% Cell type:code id: tags:
``` python
def batchify(x, y, batch_size):
    random_indices = torch.randperm(len(x))
    for i in range(0, len(x) - batch_size + 1, batch_size):
        indices = random_indices[i:i+batch_size]
        yield x[indices].to(device), y[indices].to(device)
    remainder = len(x) % batch_size
    if remainder:
        indices = random_indices[-remainder:]
        yield x[indices].to(device), y[indices].to(device)
```
%% Cell type:markdown id: tags:
What remains to be done is the implementation of the training loop. This should be a straightforward generalization of the training loops that you have seen so far. Complete the skeleton code in the cell below:
%% Cell type:code id: tags:
``` python
import torch
import torch.nn.functional as F
import torch.optim as optim

def train_fixed_window(n, n_epochs=1, batch_size=6400, lr=1e-2, embedding_dim=50, hidden_dim=50, loss_mult=1):
    # No clue why the loss multiplier is needed.
    # Could probably change the learning rate instead, but hey, it works.
    # Initialize the model
    model = FixedWindowModel(n, len(wikitext.vocab), embedding_dim, hidden_dim)
    # Initialize the optimizer
    optimizer = optim.Adam(model.parameters(), lr=lr)
    # Vectorize the training data
    train_x, train_y = vectorize_fixed_window(wikitext.train, n)
    # Helper tensors for scattering the targets into one-hot vectors
    _z = torch.zeros(batch_size, len(wikitext.vocab)).to(device)
    _ones = torch.ones(batch_size, len(wikitext.vocab)).to(device)
    # We train for several epochs
    for t in range(n_epochs):
        print(t)
        batch_count = 0
        # In each epoch, we loop over all the minibatches
        for _x, y in batchify(train_x, train_y, batch_size):
            # Scatter the target ids into one-hot vectors
            indx = y.clone().detach().long().to(device)
            indx = indx[:, None]
            _y = _z.clone().detach().scatter_(1, indx, _ones)
            # Reset the accumulated gradients
            optimizer.zero_grad()
            # Forward pass
            output = model.forward(_x)
            # Compute the loss; the slice is needed for the last batch, as it is smaller
            loss = F.cross_entropy(output.float() * loss_mult, _y[:output.shape[0]] * loss_mult)
            if batch_count % 100 == 0:
                print("Loss: ", loss.item())
            batch_count += 1
            # Backward pass; propagates the loss and computes the gradients
            loss.backward()
            # Update the parameters of the model
            optimizer.step()
        # Evaluate on the validation data after each epoch
        pp = []
        corr = 0
        valid_x, valid_y = vectorize_fixed_window(wikitext.valid, n)
        for _x, y in batchify(valid_x, valid_y, 6400):
            output = model.forward(_x)
            _sum = torch.sum(output, dim=1)
            corr_prob = torch.zeros(y.shape[0]).to(device)
            for i in range(y.shape[0]):
                # Ratio of the total probability mass to the mass on the correct word
                corr_prob[i] = _sum[i] / output[i, y[i]]
            _pp = torch.sum(corr_prob).item()
            # Count how often the most probable word is the correct one
            o_token = torch.argmax(output, dim=1)
            correct = torch.eq(o_token, y).float()
            corr += torch.sum(correct).item()
            pp.append(_pp / y.shape[0])
        print("Perplexity(?): ", sum(pp) / len(pp))
        print("#Times Guessed Correctly: ", corr)
    return model
```
%% Cell type:markdown id: tags:
Here is the specification of the training function:
**train_fixed_window** (*n*, *n_epochs* = 1, *batch_size* = 3200, *lr* = 0.01)
> Trains a fixed-window neural language model of order *n* using minibatch gradient descent and returns it. The parameters *n_epochs* and *batch_size* specify the number of training epochs and the minibatch size, respectively. Training uses the cross-entropy loss function and the [Adam optimizer](https://pytorch.org/docs/stable/optim.html#torch.optim.Adam) with learning rate *lr*. After each epoch, prints the perplexity of the model on the validation data.
%% Cell type:markdown id: tags:
The code in the cell below trains a trigram model.
%% Cell type:code id: tags:
``` python
model_fixed_window = train_fixed_window(3, n_epochs=5, embedding_dim=50, hidden_dim=50, loss_mult=10000)
```
%% Output
0
Loss: 104525.4140625
Loss: 78113.421875
Loss: 74721.375
Loss: 72334.5859375
Perplexity(?): 256.7982085338616
#Times Guessed Correctly: 28876.0
1
Loss: 74606.4921875
Loss: 71736.4140625
Loss: 69903.453125
Loss: 70367.21875
Perplexity(?): 54.023261545382674
#Times Guessed Correctly: 33388.0
2
Loss: 72136.8671875
Loss: 68782.8671875
Loss: 68649.8046875
Loss: 68080.6640625
Perplexity(?): 36.938720551721794
#Times Guessed Correctly: 33189.0
3
Loss: 70512.0078125
Loss: 67624.4921875
Loss: 68213.515625
Loss: 67837.40625
Perplexity(?): 16.589290246797646
#Times Guessed Correctly: 34634.0
4
Loss: 70544.3046875
Loss: 66867.4296875
Loss: 66749.390625
Loss: 66685.921875
Perplexity(?): 138.95681497618872
#Times Guessed Correctly: 34017.0
%% Cell type:markdown id: tags:
**⚠️ Your submitted notebook must contain output demonstrating a validation perplexity of at most 350.**
**Hint:** Computing the validation perplexity in one go may exhaust your computer&rsquo;s memory and/or take a lot of time. If you run into this problem, break the computation down into minibatches and take the average perplexity.
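One way to do this, shown as a minimal sketch below, is to accumulate the negative log-probability of the correct word over minibatches and exponentiate the average. The sketch assumes, as in the model above, that the forward pass returns a probability distribution over the vocabulary for each example; the helper name `validation_perplexity` is just for illustration.

``` python
import math
import torch

@torch.no_grad()
def validation_perplexity(model, valid_x, valid_y, batch_size=6400):
    # Perplexity = exp of the average negative log-probability that the
    # model assigns to the correct next word.
    total_nll, total_tokens = 0.0, 0
    for x, y in batchify(valid_x, valid_y, batch_size):
        probs = model(x)                                           # (B, V)
        rows = torch.arange(len(y), device=probs.device)
        p_correct = probs[rows, y.long()]                          # probability of the gold word
        total_nll += -torch.log(p_correct + 1e-9).sum().item()
        total_tokens += len(y)
    return math.exp(total_nll / total_tokens)
```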
%% Cell type:code id: tags:
``` python
```
%% Cell type:markdown id: tags:
#### 🤞 Test your code
To see whether your network is learning something, print the loss and/or the perplexity on the training data. If the two values are not decreasing over time, try to find the problem before wasting time (and energy) on useless training.
Training and even evaluation will take some time on a CPU; expect several minutes per epoch, depending on your hardware. To speed things up, you can train using a GPU; our reference implementation runs in less than 30 seconds per epoch on [Colab](http://colab.research.google.com).
%% Cell type:markdown id: tags:
## Problem 2: Recurrent neural network language model
%% Cell type:markdown id: tags:
In this section you will implement the recurrent neural network language model that was presented in Lecture&nbsp;2.5. Recall that an input to the network is a vector of word ids. Each id is mapped to an embedding vector. The sequence of embedded vectors is then fed into an unrolled LSTM. At each position $i$ in the sequence, the hidden state of the LSTM at that position is sent through a linear transformation into a final softmax layer, from which we read off a probability distribution over the word at position $i+1$. In theory, the input could represent the complete training data or at least a complete sentence; for practical reasons, however, we truncate it to some fixed length *bptt_len*, the **backpropagation-through-time horizon**.
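In symbols, at each position $i$ the unrolled LSTM updates its hidden and cell state based on the embedding $\mathbf{e}(w_i)$ of the current word, and the next-word distribution is read off from the hidden state:

$$(\mathbf{h}_i, \mathbf{c}_i) = \mathrm{LSTM}\bigl(\mathbf{e}(w_i), (\mathbf{h}_{i-1}, \mathbf{c}_{i-1})\bigr), \qquad P(w_{i+1} \mid w_1, \dots, w_i) = \mathrm{softmax}(W\,\mathbf{h}_i + \mathbf{b})$$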
%% Cell type:markdown id: tags:
### Problem 2.1: Vectorize the data
%% Cell type:markdown id: tags:
As in the previous problem, your first task is to transform the data in the WikiText container into a vectorized form that can be fed to the model.
%% Cell type:code id: tags:
``` python
def vectorize_rnn(wikitext_data, bptt_len):
    # TODO: Replace the next line with your own code
    return None
```
%% Cell type:markdown id: tags:
Your function should meet the following specification:
**vectorize_rnn** (*wikitext_data*, *bptt_len*)
> Transforms a list of token indexes into a pair of tensors $\mathbf{X}$, $\mathbf{Y}$ that can be used to train the recurrent neural language model. The rows of both tensors represent contiguous subsequences of token indexes of length *bptt_len*. Compared to the sequences in $\mathbf{X}$, the corresponding sequences in $\mathbf{Y}$ are shifted one position to the right. More precisely, if the $i$th row of $\mathbf{X}$ is the sequence that starts at token position $j$, then the same row of $\mathbf{Y}$ is the sequence that starts at position $j+1$.
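As an illustration only (not a reference solution), the following sketch builds such a pair of tensors under the assumption that trailing tokens which do not fill a complete window are simply dropped; the name `vectorize_rnn_sketch` is hypothetical.

``` python
import torch

def vectorize_rnn_sketch(wikitext_data, bptt_len):
    # Illustrative sketch: assumes leftover tokens that do not fill a
    # complete window of length bptt_len are dropped.
    data = torch.tensor(wikitext_data, dtype=torch.long)
    num_rows = (len(data) - 1) // bptt_len
    x = data[:num_rows * bptt_len].view(num_rows, bptt_len)
    y = data[1:num_rows * bptt_len + 1].view(num_rows, bptt_len)
    return x, y
```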
%% Cell type:markdown id: tags:
#### 🤞 Test your code
Test your implementation by running the following code:
%% Cell type:code id: tags:
``` python
valid_x, valid_y = vectorize_rnn(wikitext.valid, 32)
print(valid_x.size())
```
%% Output
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[13], line 1
----> 1 valid_x, valid_y = vectorize_rnn(wikitext.valid, 32)
3 print(valid_x.size())
TypeError: cannot unpack non-iterable NoneType object
%% Cell type:markdown id: tags:
### Problem 2.2: Implement the model
%% Cell type:markdown id: tags:
Your next task is to implement the recurrent neural network model based on the graphical specification given in the lecture.
%% Cell type:code id: tags:
``` python
import torch.nn as nn
class RNNModel(nn.Module):

    def __init__(self, n_words, embedding_dim=50, hidden_dim=50):
        super().__init__()
        # TODO: Add your own code

    def forward(self, x):
        # TODO: Replace the next line with your own code
        raise NotImplementedError
```
%% Cell type:markdown id: tags:
Your implementation should follow this specification:
**__init__** (*self*, *n_words*, *embedding_dim* = 50, *hidden_dim* = 50)
> Creates a new recurrent neural network language model. The argument *n_words* is the number of words in the vocabulary. The arguments *embedding_dim* and *hidden_dim* specify the dimensionalities of the embedding layer and the LSTM hidden layer, respectively; their default value is 50.
**forward** (*self*, *x*)
> Computes the network output on an input batch *x*. The shape of *x* is $(B, H)$, where $B$ is the batch size and $H$ is the length of each input sequence. The shape of the output tensor is $(B, H, V)$, where $V$ is the size of the vocabulary.
%% Cell type:markdown id: tags:
#### 🤞 Test your code
Test your code by instantiating the model and feeding it a batch of examples from the training data.
%% Cell type:markdown id: tags:
### Problem 2.3: Train the model
%% Cell type:markdown id: tags:
The training loop for the recurrent neural network model is essentially identical to the loop that you wrote for the feed-forward model. The only thing to note is that the cross-entropy loss function expects its input to be a two-dimensional tensor; you will therefore have to re-shape the output tensor from the LSTM as well as the gold-standard output tensor in a suitable way. The most efficient way to do so is to use the [`view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view) method.
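For example, a minimal sketch of this reshaping with made-up dimensions (`B` batch size, `H` sequence length, `V` vocabulary size):

``` python
import torch
import torch.nn.functional as F

# Made-up tensors standing in for the network output (B, H, V) and the
# gold-standard token ids (B, H).
B, H, V = 4, 32, 100
output = torch.randn(B, H, V)
gold = torch.randint(V, (B, H))

# Flatten the batch and time dimensions so that cross_entropy sees a
# two-dimensional input and a one-dimensional target.
loss = F.cross_entropy(output.view(B * H, V), gold.view(B * H))
```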
%% Cell type:code id: tags:
``` python
def train_rnn(n_epochs=1, batch_size=100, bptt_len=32, lr=1e-2):
    # TODO: Replace the next line with your own code
    return None
```
%% Cell type:markdown id: tags:
Here is the specification of the training function:
**train_rnn** (*n_epochs* = 1, *batch_size* = 100, *bptt_len* = 32, *lr* = 0.01)
> Trains a recurrent neural network language model on the WikiText data using minibatch gradient descent and returns it. The parameters *n_epochs* and *batch_size* specify the number of training epochs and the minibatch size, respectively. The parameter *bptt_len* specifies the length of the backpropagation-through-time horizon, that is, the length of the input and output sequences. Training uses the cross-entropy loss function and the [Adam optimizer](https://pytorch.org/docs/stable/optim.html#torch.optim.Adam) with learning rate *lr*. After each epoch, prints the perplexity of the model on the validation data.
%% Cell type:markdown id: tags:
Evaluate your model by running the following code cell:
%% Cell type:code id: tags:
``` python
model_rnn = train_rnn(n_epochs=1)
```
%% Cell type:markdown id: tags:
**⚠️ Your submitted notebook must contain output demonstrating a validation perplexity of at most 310.**
%% Cell type:markdown id: tags:
## Problem 3: Parameter initialization (reflection)
%% Cell type:markdown id: tags:
Since the error surfaces that gradient-based training of neural networks explores can be very complex, it is important to choose ‘good’ initial values for the parameters. In PyTorch, the weights of the embedding layer are initialized by sampling from the standard normal distribution $\mathcal{N}(0, 1)$. Test how changing the standard deviation and/or the distribution affects the perplexity of your feed-forward language model. Write a short report about your experience (ca. 150 words). Use the following prompts; a small code sketch for changing the initialization follows the list:
* What different settings for the initialization did you try? What results did you get?
* How can you choose a good initialization strategy?
* What did you learn? How, exactly, did you learn it? Why does this learning matter?
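A minimal sketch of one setting you could try, assuming the model class from Problem 1.2 (the standard deviation 0.1 is an arbitrary example value):

``` python
import torch.nn as nn

# Re-initialize the embedding weights with a smaller standard deviation.
model = FixedWindowModel(3, len(wikitext.vocab))
nn.init.normal_(model.E.weight, mean=0.0, std=0.1)

# Alternative: a uniform initialization in a small range.
# nn.init.uniform_(model.E.weight, -0.1, 0.1)
```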
%% Cell type:markdown id: tags:
*TODO: Enter your text here*
%% Cell type:code id: tags:
``` python
```
%% Cell type:code id: tags:
``` python
```