"In this lab you will implement and train two neural language models: the fixed-window model mentioned in Lecture 1.3, and the recurrent neural network model from Lecture 1.5. You will evaluate these models by computing their perplexity on a benchmark dataset."
"In this lab you will implement and train two neural language models: the fixed-window model mentioned in Lecture 2.3, and the recurrent neural network model from Lecture 2.5. You will evaluate these models by computing their perplexity on a benchmark dataset."
]
},
{
...
...
%% Cell type:markdown id: tags:
# L2: Language modelling
%% Cell type:markdown id: tags:
In this lab you will implement and train two neural language models: the fixed-window model mentioned in Lecture 2.3, and the recurrent neural network model from Lecture 2.5. You will evaluate these models by computing their perplexity on a benchmark dataset.
%% Cell type:code id: tags:
``` python
import torch
```
%% Cell type:markdown id: tags:
For this lab, you should use the GPU if you have one:
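The sketch below is one common way to do this; it assumes that the variable name `device` is the one used by the rest of the notebook (the `batchify` helper further down refers to it).
%% Cell type:code id: tags:
``` python
# Run on the GPU if one is available, otherwise fall back to the CPU
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
```
%% Cell type:markdown id: tags: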
The data for this lab is [WikiText](https://blog.einstein.ai/the-wikitext-long-term-dependency-language-modeling-dataset/), a collection of more than 100 million tokens extracted from the set of ‘Good’ and ‘Featured’ articles on Wikipedia. We will use the small version of the dataset, which contains slightly more than 2.5 million tokens.
The next cell contains code for an object that will act as a container for the ‘training’ and the ‘validation’ section of the data. We fill this container by reading the corresponding text files. The only processing that we do is to whitespace-tokenize, and to replace each newline character with a special token `<eos>` (end-of-sentence).
%% Cell type:code id: tags:
``` python
class WikiText(object):

    def __init__(self):
        self.vocab = {}
        self.train = self.read_data('wiki.train.tokens')
        self.valid = self.read_data('wiki.valid.tokens')

    def read_data(self, path):
        ids = []
        with open(path) as source:
            for line in source:
                for token in line.split() + ['<eos>']:
                    if token not in self.vocab:
                        self.vocab[token] = len(self.vocab)
                    ids.append(self.vocab[token])
        return ids
```
%% Cell type:markdown id: tags:
The cell below loads the data and prints the total number of tokens and the size of the vocabulary.
%% Cell type:code id: tags:
``` python
wikitext = WikiText()
print('Tokens in train:', len(wikitext.train))
print('Tokens in valid:', len(wikitext.valid))
print('Vocabulary size:', len(wikitext.vocab))
```
%% Cell type:markdown id: tags:
## Problem 1: Fixed-window neural language model
%% Cell type:markdown id: tags:
In this section you will implement and train the fixed-window neural language model proposed by [Bengio et al. (2003)](http://www.jmlr.org/papers/volume3/bengio03a/bengio03a.pdf) and introduced in Lecture 2.3. Recall that an input to the network takes the form of a vector of $n-1$ integers representing the preceding words. Each integer is mapped to a vector via an embedding layer. (All positions share the same embedding.) The embedding vectors are then concatenated and sent through a two-layer feed-forward network with a non-linearity in the form of a rectified linear unit (ReLU) and a final softmax layer.
%% Cell type:markdown id: tags:
### Problem 1.1: Vectorize the data
%% Cell type:markdown id: tags:
Your first task is to write code for transforming the data in the WikiText container into a vectorized form that can be fed to the fixed-window model. Complete the skeleton code in the cell below:
%% Cell type:code id: tags:
``` python
def vectorize_fixed_window(wikitext_data, n):
    # TODO: Replace the following line with your own code
    return None, None
```
%% Cell type:markdown id: tags:
Your function should meet the following specification:
**vectorize_fixed_window** (*wikitext_data*, *n*)
> Transforms WikiText data (a list of word ids) into a pair of tensors $\mathbf{X}$, $\mathbf{y}$ that can be used to train the fixed-window model. Let $N$ be the total number of $n$-grams from the token list; then $\mathbf{X}$ is a matrix with shape $(N, n-1)$ and $\mathbf{y}$ is a vector with length $N$.
⚠️ Your function should be able to handle arbitrary values of $n \geq 1$.
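One possible approach is sketched below, purely as an illustration; the name `vectorize_fixed_window_sketch` is a placeholder, and your own solution may look quite different.

``` python
import torch

# Sketch of one possible vectorization (placeholder name, for illustration only)
def vectorize_fixed_window_sketch(wikitext_data, n):
    data = torch.tensor(wikitext_data)
    # Slide a window of length n over the token list; shape (N, n)
    windows = data.unfold(0, n, 1)
    # The first n-1 columns form the context, the last column is the target word
    return windows[:, :-1], windows[:, -1]
```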
%% Cell type:markdown id: tags:
#### 🤞 Test your code
Test your implementation by running the code in the next cell. Does the output match your expectation?
%% Cell type:markdown id: tags:
### Problem 1.2: Implement the model
%% Cell type:markdown id: tags:
Your next task is to implement the fixed-window model itself, based on the graphical specification given in the lecture. Your model should meet the following specification:
> Creates a new fixed-window neural language model. The argument *n* specifies the model’s $n$-gram order. The argument *n_words* is the number of words that need to be embedded. The optional arguments *embedding_dim* and *hidden_dim* specify the dimensionalities of the embedding layer and the hidden layer, respectively; their default value is 50.
**forward** (*self*, *x*)
> Computes the network output on an input batch *x*. The shape of *x* is $(B, n-1)$, where $B$ is the batch size. The output of the forward pass is a tensor of shape $(B, V)$ where $V$ is the number of words in the vocabulary.
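A minimal sketch of this architecture is shown below, purely for orientation; the class name is a placeholder, and the softmax is left to the loss function (PyTorch’s cross-entropy loss applies it internally), so the forward pass returns raw logits of shape $(B, V)$.

``` python
import torch
import torch.nn as nn

# Illustrative sketch; the class name is a placeholder, not a required interface
class FixedWindowModelSketch(nn.Module):

    def __init__(self, n, n_words, embedding_dim=50, hidden_dim=50):
        super().__init__()
        self.embedding = nn.Embedding(n_words, embedding_dim)
        self.hidden = nn.Linear((n - 1) * embedding_dim, hidden_dim)
        self.output = nn.Linear(hidden_dim, n_words)

    def forward(self, x):
        # x: (B, n-1) -> embedded: (B, n-1, embedding_dim)
        embedded = self.embedding(x)
        # Concatenate the n-1 embedding vectors: (B, (n-1) * embedding_dim)
        concatenated = embedded.flatten(start_dim=1)
        hidden = torch.relu(self.hidden(concatenated))
        # Raw logits of shape (B, V); the softmax is folded into the loss
        return self.output(hidden)
```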
#### 🤞 Test your code
Test your code by instantiating the model and feeding it a batch of examples from the training data.
%% Cell type:markdown id: tags:
### Problem 1.3: Train the model
%% Cell type:markdown id: tags:
Your final task is to write code to train the fixed-window model using minibatch gradient descent and the cross-entropy loss function.
For your convenience, the following cell contains a utility function that randomly samples minibatches of a specified size from a pair of tensors:
%% Cell type:code id: tags:
``` python
def batchify(x, y, batch_size):
    random_indices = torch.randperm(len(x))
    for i in range(0, len(x) - batch_size + 1, batch_size):
        indices = random_indices[i:i + batch_size]
        yield x[indices].to(device), y[indices].to(device)
    remainder = len(x) % batch_size
    if remainder:
        indices = random_indices[-remainder:]
        yield x[indices].to(device), y[indices].to(device)
```
%% Cell type:markdown id: tags:
What remains to be done is the implementation of the training loop. This should be a straightforward generalization of the training loops that you have seen so far. Complete the skeleton code in the cell below:
> Trains a fixed-window neural language model of order *n* using minibatch gradient descent and returns it. The parameters *n_epochs* and *batch_size* specify the number of training epochs and the minibatch size, respectively. Training uses the cross-entropy loss function and the [Adam optimizer](https://pytorch.org/docs/stable/optim.html#torch.optim.Adam) with learning rate *lr*. After each epoch, prints the perplexity of the model on the validation data.
**⚠️ Your submitted notebook must contain output demonstrating a validation perplexity of at most 350.**
**Hint:** Computing the validation perplexity in one go may exhaust your computer’s memory and/or take a lot of time. If you run into this problem, break the computation down into minibatches, average the cross-entropy loss over all of them, and exponentiate the result to get the perplexity.
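A rough sketch of such a training loop is given below, for illustration only. The function name, the default hyperparameters, and the model class (`FixedWindowModelSketch` from the sketch above) are assumptions rather than part of the required interface, and the code presumes that `vectorize_fixed_window` and the `batchify` helper are available.

``` python
import math
import torch
import torch.nn as nn
import torch.optim as optim

# Illustrative sketch; names and hyperparameter defaults are assumptions
def train_fixed_window_sketch(n=3, n_epochs=1, batch_size=1024, lr=1e-2):
    model = FixedWindowModelSketch(n, len(wikitext.vocab)).to(device)
    optimizer = optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    train_x, train_y = vectorize_fixed_window(wikitext.train, n)
    valid_x, valid_y = vectorize_fixed_window(wikitext.valid, n)
    for epoch in range(n_epochs):
        model.train()
        for bx, by in batchify(train_x, train_y, batch_size):
            optimizer.zero_grad()
            loss = loss_fn(model(bx), by)
            loss.backward()
            optimizer.step()
        # Validation perplexity: average the loss over batches, then exponentiate
        model.eval()
        with torch.no_grad():
            total_loss, total_items = 0.0, 0
            for bx, by in batchify(valid_x, valid_y, batch_size):
                total_loss += loss_fn(model(bx), by).item() * len(by)
                total_items += len(by)
        print(f'epoch {epoch + 1}: valid perplexity {math.exp(total_loss / total_items):.1f}')
    return model
```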
%% Cell type:markdown id: tags:
#### 🤞 Test your code
To see whether your network is learning something, print the loss and/or the perplexity on the training data. If the two values are not decreasing over time, try to find the problem before wasting time (and energy) on useless training.
Training and even evaluation will take some time – on a CPU, you should expect several minutes per epoch, depending on hardware. To speed things up, you can train using a GPU; our reference implementation runs in less than 30 seconds per epoch on [Colab](http://colab.research.google.com).
%% Cell type:markdown id: tags:
## Problem 2: Recurrent neural network language model
%% Cell type:markdown id: tags:
In this section you will implement the recurrent neural network language model that was presented in Lecture 2.5. Recall that an input to the network is a vector of word ids. Each integer is mapped to an embedding vector. The sequence of embedded vectors is then fed into an unrolled LSTM. At each position $i$ in the sequence, the hidden state of the LSTM at that position is sent through a linear transformation into a final softmax layer, from which we read off a probability distribution over the word at position $i+1$. In theory, the input vector could represent the complete training data or at least a complete sentence; for practical reasons, however, we will truncate the input to some fixed length *bptt_len*, the **backpropagation-through-time horizon**.
%% Cell type:markdown id: tags:
### Problem 2.1: Vectorize the data
%% Cell type:markdown id: tags:
As in the previous problem, your first task is to transform the data in the WikiText container into a vectorized form that can be fed to the model.
%% Cell type:code id: tags:
``` python
def vectorize_rnn(wikitext_data, bptt_len):
    # TODO: Replace the next line with your own code
    return None, None
```
%% Cell type:markdown id: tags:
Your function should meet the following specification:
**vectorize_rnn** (*wikitext_data*, *bptt_len*)
> Transforms a list of token indexes into a pair of tensors $\mathbf{X}$, $\mathbf{Y}$ that can be used to train the recurrent neural language model. The rows of both tensors represent contiguous subsequences of token indexes of length *bptt_len*. Compared to the sequences in $\mathbf{X}$, the corresponding sequences in $\mathbf{Y}$ are shifted one position to the right. More precisely, if the $i$th row of $\mathbf{X}$ is the sequence that starts at token position $j$, then the same row of $\mathbf{Y}$ is the sequence that starts at position $j+1$.
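The sketch below shows one possible way to build such tensors from non-overlapping windows; it is an illustration only (the name is a placeholder), and other layouts that satisfy the specification are equally valid.

``` python
import torch

# Sketch of one possible vectorization (placeholder name, for illustration only)
def vectorize_rnn_sketch(wikitext_data, bptt_len):
    data = torch.tensor(wikitext_data)
    # Number of full windows; the last token needs a successor, hence len - 1
    n_rows = (len(data) - 1) // bptt_len
    x = data[:n_rows * bptt_len].view(n_rows, bptt_len)
    y = data[1:n_rows * bptt_len + 1].view(n_rows, bptt_len)
    return x, y
```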
%% Cell type:markdown id: tags:
#### 🤞 Test your code
Test your implementation by running the following code:
%% Cell type:code id: tags:
``` python
valid_x, valid_y = vectorize_rnn(wikitext.valid, 32)
print(valid_x.size())
```
%% Cell type:markdown id: tags:
### Problem 2.2: Implement the model
%% Cell type:markdown id: tags:
Your next task is to implement the recurrent neural network model based on the graphical specification given in the lecture.
> Creates a new recurrent neural network language model. The argument *n_words* is the number of words that need to be embedded. The optional arguments *embedding_dim* and *hidden_dim* specify the dimensionalities of the embedding layer and the LSTM hidden layer, respectively; their default value is 50.
**forward** (*self*, *x*)
> Computes the network output on an input batch *x*. The shape of *x* is $(B, H)$, where $B$ is the batch size and $H$ is the length of each input sequence. The shape of the output tensor is $(B, H, V)$, where $V$ is the size of the vocabulary (the number of words).
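A minimal sketch of this architecture is shown below, again purely for orientation; the class name is a placeholder, and the forward pass returns raw logits of shape $(B, H, V)$ (the softmax is left to the loss function).

``` python
import torch
import torch.nn as nn

# Illustrative sketch; the class name is a placeholder, not a required interface
class RNNModelSketch(nn.Module):

    def __init__(self, n_words, embedding_dim=50, hidden_dim=50):
        super().__init__()
        self.embedding = nn.Embedding(n_words, embedding_dim)
        self.lstm = nn.LSTM(embedding_dim, hidden_dim, batch_first=True)
        self.output = nn.Linear(hidden_dim, n_words)

    def forward(self, x):
        # x: (B, H) -> embedded: (B, H, embedding_dim)
        embedded = self.embedding(x)
        # Hidden states at every position: (B, H, hidden_dim)
        hidden, _ = self.lstm(embedded)
        # Raw logits at every position: (B, H, V)
        return self.output(hidden)
```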
%% Cell type:markdown id: tags:
#### 🤞 Test your code
Test your code by instantiating the model and feeding it a batch of examples from the training data.
%% Cell type:markdown id: tags:
### Problem 2.3: Train the model
%% Cell type:markdown id: tags:
The training loop for the recurrent neural network model is essentially identical to the loop that you wrote for the feed-forward model. The only thing to note is that the cross-entropy loss function expects its input to be a two-dimensional tensor; you will therefore have to re-shape the output tensor from the LSTM as well as the gold-standard output tensor in a suitable way. The most efficient way to do so is to use the [`view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view) method.
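The following self-contained snippet illustrates just this reshaping step, using dummy tensors with assumed shapes (`B`, `H`, and `V` are made-up values for the batch size, sequence length, and vocabulary size):

``` python
import torch
import torch.nn as nn

# Dummy shapes for illustration: batch size B, sequence length H, vocabulary size V
B, H, V = 4, 32, 1000
output = torch.randn(B, H, V)           # stand-in for the model output, shape (B, H, V)
gold = torch.randint(0, V, (B, H))      # stand-in for the gold-standard tokens, shape (B, H)
loss_fn = nn.CrossEntropyLoss()
# Flatten the batch and time dimensions so the loss sees (B*H, V) logits and (B*H,) targets
loss = loss_fn(output.view(-1, V), gold.view(-1))
print(loss.item())
```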
> Trains a recurrent neural network language model on the WikiText data using minibatch gradient descent and returns it. The parameters *n_epochs* and *batch_size* specify the number of training epochs and the minibatch size, respectively. The parameter *bptt_len* specifies the length of the backpropagation-through-time horizon, that is, the length of the input and output sequences. Training uses the cross-entropy loss function and the [Adam optimizer](https://pytorch.org/docs/stable/optim.html#torch.optim.Adam) with learning rate *lr*. After each epoch, prints the perplexity of the model on the validation data.
%% Cell type:markdown id: tags:
Evaluate your model by running the following code cell:
%% Cell type:code id: tags:
``` python
model_rnn = train_rnn(n_epochs=1)
```
%% Cell type:markdown id: tags:
**⚠️ Your submitted notebook must contain output demonstrating a validation perplexity of at most 310.**
%% Cell type:markdown id: tags:
## Problem 3: Parameter initialization (reflection)
%% Cell type:markdown id: tags:
Since the error surfaces that gradient search explores when training neural networks can be very complex, it is important to choose ‘good’ initial values for the parameters. In PyTorch, the weights of the embedding layer are initialized by sampling from the standard normal distribution $\mathcal{N}(0, 1)$. Test how changing the standard deviation and/or the distribution affects the perplexity of your feed-forward language model. Write a short report about your experience (ca. 150 words). Use the following prompts:
* What different settings for the initialization did you try? What results did you get?
* How can you choose a good initialization strategy?
* What did you learn? How, exactly, did you learn it? Why does this learning matter?
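As a starting point for these experiments, the snippet below shows one way to override the default initialization of an embedding layer; the variable names are placeholders, and the standard deviation of 0.1 is just an arbitrary example value.

``` python
import torch
import torch.nn as nn

# Placeholder embedding layer standing in for the one inside your model
embedding = nn.Embedding(10000, 50)
# By default, the weights are drawn from N(0, 1); re-initialize them, e.g.
# from a normal distribution with a smaller standard deviation ...
nn.init.normal_(embedding.weight, mean=0.0, std=0.1)
# ... or from a uniform distribution
nn.init.uniform_(embedding.weight, -0.1, 0.1)
```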