"The data for this lab is [WikiText](https://blog.einstein.ai/the-wikitext-long-term-dependency-language-modeling-dataset/), a collection of more than 100 million tokens extracted from the “Good” and “Featured” articles on Wikipedia. We will use the small version of the dataset, which contains slightly more than 2.5 million tokens.\n",
"\n",
"The next cell contains code for an object that will act as a container for the “training” and the “validation” section of the data. We fill this container by reading the corresponding text files. The only processing we do is to whitespace-tokenise, and to enclose each non-empty line within `<bos>` (beginning-of-sentence) and `<eos>` (end-of-sentence) tokens."
"The next cell contains code for an object that will act as a container for the “training” and the “validation” section of the data. We fill this container by reading the corresponding text files. The only processing we do is to whitespace-tokenise and to replace each newline with an end-of-sentence token."
]
},
{
...
...
@@ -74,7 +74,7 @@
" for line in source:\n",
" line = line.rstrip()\n",
" if line:\n",
" for token in ['<bos>'] + line.split() + ['<eos>']:\n",
" for token in line.split() + ['<eos>']:\n",
" if token not in self.vocab:\n",
" self.vocab[token] = len(self.vocab)\n",
" ids.append(self.vocab[token])\n",
...
...
@@ -119,7 +119,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Problem 1.1: Vectorise the data (1 point)"
"### Problem 1.1: Vectorise the data (2 point)"
]
},
{
...
...
@@ -238,7 +238,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Problem 1.3: Train the model (3 points)"
"### Problem 1.3: Train the model (4 points)"
]
},
{
...
...
@@ -274,7 +274,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"The code in the cell below trains a bigram model."
"The code in the cell below trains a trigram model."
]
},
{
...
...
@@ -283,7 +283,7 @@
"metadata": {},
"outputs": [],
"source": [
"model_fixed_window = train_fixed_window(2)"
"model_fixed_window = train_fixed_window(3)"
]
},
{
...
...
@@ -292,7 +292,7 @@
"source": [
"#### Performance goal\n",
"\n",
"**Your submitted notebook must contain output demonstrating a validation perplexity of at most 350.** If you do not reach this perplexity after the first epoch, try training for a second epoch.\n",
"**Your submitted notebook must contain output demonstrating a validation perplexity of at most 360.** If you do not reach this perplexity after the first epoch, try training for a second epoch.\n",
"\n",
"⚠️ Computing the validation perplexity in one go (for the full validation set) will probably exhaust your computer’s memory and/or take a lot of time. If you run into this problem, do the computation at the minibatch level and aggregate the results."
]
...
...
@@ -326,7 +326,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Problem 2.1: Vectorise the data (1 point)"
"### Problem 2.1: Vectorise the data (2 points)"
]
},
{
...
...
@@ -448,7 +448,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Problem 2.3: Train the model (3 points)"
"### Problem 2.3: Train the model (4 points)"
]
},
{
...
...
@@ -502,14 +502,14 @@
"source": [
"#### Performance goal\n",
"\n",
"**Your submitted notebook must contain output demonstrating a validation perplexity of at most 280.** If you do not reach this perplexity after the first epoch, try training for a second epoch."
"**Your submitted notebook must contain output demonstrating a validation perplexity of at most 300.** If you do not reach this perplexity after the first epoch, try training for a second epoch."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Problem 3: Parameter initialisation (3 points)"
"## Problem 3: Parameter initialisation (6 points)"
]
},
{
...
...
@@ -558,7 +558,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
"version": "3.10.10"
}
},
"nbformat": 4,
...
...
%% Cell type:markdown id: tags:
# L1: Language modelling
%% Cell type:markdown id: tags:
In this lab you will implement and train two neural language models: the fixed-window model and the recurrent neural network model. You will evaluate these models by computing their perplexity on a benchmark dataset.
%% Cell type:code id: tags:
``` python
import torch
```
%% Cell type:markdown id: tags:
For this lab, you should use the GPU if you have one:
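One common way to select the device (shown here only as a sketch; the variable name `device` is an assumption, and the notebook's own device cell may differ):

``` python
# Use the GPU if one is available, otherwise fall back to the CPU.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print(device)
```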
The data for this lab is [WikiText](https://blog.einstein.ai/the-wikitext-long-term-dependency-language-modeling-dataset/), a collection of more than 100 million tokens extracted from the “Good” and “Featured” articles on Wikipedia. We will use the small version of the dataset, which contains slightly more than 2.5 million tokens.
The next cell contains code for an object that will act as a container for the “training” and the “validation” section of the data. We fill this container by reading the corresponding text files. The only processing we do is to whitespace-tokenise and to replace each newline with an end-of-sentence token.
%% Cell type:code id: tags:
``` python
class WikiText(object):

    def __init__(self):
        self.vocab = {}
        self.train = self.read_data('wiki.train.tokens')
        self.valid = self.read_data('wiki.valid.tokens')

    def read_data(self, path):
        ids = []
        with open(path, encoding='utf-8') as source:
            for line in source:
                line = line.rstrip()
                if line:
                    for token in line.split() + ['<eos>']:
                        if token not in self.vocab:
                            self.vocab[token] = len(self.vocab)
                        ids.append(self.vocab[token])
        return ids
```
%% Cell type:markdown id: tags:
The cell below loads the data and prints the total number of tokens and the size of the vocabulary.
%% Cell type:code id: tags:
``` python
wikitext = WikiText()
print('Tokens in train:', len(wikitext.train))
print('Tokens in valid:', len(wikitext.valid))
print('Vocabulary size:', len(wikitext.vocab))
```
%% Cell type:markdown id: tags:
## Problem 1: Fixed-window model
%% Cell type:markdown id: tags:
In this section, you will implement and train the fixed-window neural language model proposed by [Bengio et al. (2003)](http://www.jmlr.org/papers/volume3/bengio03a/bengio03a.pdf) and presented in the lectures. Recall that an input to the network takes the form of a vector of $n-1$ integers representing the preceding words. Each integer is mapped to a vector via an embedding layer. (All positions share the same embedding.) The embedding vectors are then concatenated and sent through a two-layer feed-forward network with a non-linearity in the form of a rectified linear unit (ReLU) and a final softmax layer.
%% Cell type:markdown id: tags:
### Problem 1.1: Vectorise the data (2 points)
%% Cell type:markdown id: tags:
Your first task is to write code for transforming the data in the WikiText container into a vectorised form that can be fed to the fixed-window model. Concretely, you will implement a [collate function](https://pytorch.org/docs/stable/data.html#dataloader-collate-fn) in the form of a callable vectoriser object. Complete the skeleton code in the cell below:
%% Cell type:code id: tags:
``` python
class FixedWindowVectorizer(object):

    def __init__(self, n):
        # n-gram order
        self.n = n

    def __call__(self, data):
        # TODO: Replace the following line with your own code
        return None, None
```
%% Cell type:markdown id: tags:
Your code should implement the following specification:
**__init__** (*self*, *n*)
> Creates a new vectoriser with n-gram order $n$. Your code should be able to handle arbitrary n-gram orders $n \geq 1$.
**__call__** (*self*, *data*)
> Transforms WikiText *data* (a list of word ids) into a pair of tensors $\mathbf{X}$, $\mathbf{y}$ that can be used to train the fixed-window model. Let $N$ be the total number of $n$-grams in the token list; then $\mathbf{X}$ is a matrix with shape $(N, n-1)$ and $\mathbf{y}$ is a vector of length $N$.
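For reference, the sketch below shows one possible way to complete the skeleton; it is only an illustration, and your own solution may look different (for example, it may build the tensors more or less efficiently).

``` python
# Sketch of one possible completion of the skeleton above.
class FixedWindowVectorizer(object):

    def __init__(self, n):
        # n-gram order
        self.n = n

    def __call__(self, data):
        data = torch.tensor(data, dtype=torch.long)
        # Number of n-grams in the token list
        N = len(data) - self.n + 1
        # Column j of X holds token j of each n-gram; the final token of each
        # n-gram is the prediction target and goes into y.
        columns = [data[j:j + N] for j in range(self.n - 1)]
        if columns:
            X = torch.stack(columns, dim=1)            # shape (N, n-1)
        else:
            X = torch.empty((N, 0), dtype=torch.long)  # n = 1: empty context
        y = data[self.n - 1:]                          # shape (N,)
        return X, y
```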
%% Cell type:markdown id: tags:
#### 🤞 Test your code
Test your implementation by running the code in the next cell. Does the output match your expectation?
> Creates a new fixed-window neural language model. The argument *n* specifies the model’s $n$-gram order. The argument *n_words* is the number of words in the vocabulary. The arguments *embedding_dim* and *hidden_dim* specify the dimensionalities of the embedding layer and the hidden layer of the feedforward network, respectively; their default value is 50.
**forward** (*self*, *x*)
> Computes the network output on an input batch *x*. The shape of *x* is $(B, n-1)$, where $B$ is the batch size. The output of the forward pass is a tensor of shape $(B, V)$ where $V$ is the number of words in the vocabulary.
**Hint:** The most efficient way to implement the vector concatenation in this model is to use the [`view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view) method.
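To make the specification concrete, here is a minimal sketch of a model that satisfies it. The class name `FixedWindowModel` is an assumption (the notebook's skeleton for this problem is not shown above), and the sketch returns unnormalised scores, leaving the softmax to the cross-entropy loss.

``` python
import torch.nn as nn

class FixedWindowModel(nn.Module):

    def __init__(self, n, n_words, embedding_dim=50, hidden_dim=50):
        super().__init__()
        self.embedding = nn.Embedding(n_words, embedding_dim)
        self.hidden = nn.Linear((n - 1) * embedding_dim, hidden_dim)
        self.output = nn.Linear(hidden_dim, n_words)

    def forward(self, x):
        # x: (B, n-1) -> embedded: (B, n-1, embedding_dim)
        embedded = self.embedding(x)
        # Concatenate the n-1 embedding vectors: (B, (n-1) * embedding_dim)
        concatenated = embedded.view(x.size(0), -1)
        hidden = torch.relu(self.hidden(concatenated))
        # Unnormalised scores of shape (B, V)
        return self.output(hidden)
```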
#### 🤞 Test your code
Test your code by instantiating the model and feeding it a batch of examples from the training data.
%% Cell type:markdown id: tags:
### Problem 1.3: Train the model (4 points)
%% Cell type:markdown id: tags:
Your final task is to write code to train the fixed-window model using minibatch gradient descent and the cross-entropy loss function. This should be a straightforward generalisation of the training loops that you have seen so far. Complete the skeleton code in the cell below:
> Trains a fixed-window neural language model of order *n* using minibatch gradient descent and returns it. The parameters *n_epochs* and *batch_size* specify the number of training epochs and the minibatch size, respectively. Training uses the cross-entropy loss function and the [Adam optimizer](https://pytorch.org/docs/stable/optim.html#torch.optim.Adam) with learning rate *lr*. After each epoch, prints the perplexity of the model on the validation data.
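The sketch below illustrates one way to structure such a training loop. It relies on the `FixedWindowModel` and `device` sketches above, and the default values for *n_epochs*, *batch_size* and *lr* are placeholders rather than the values used in the lab. The validation perplexity is aggregated over minibatches, as suggested under the performance goal below.

``` python
import math
import torch.nn.functional as F

def train_fixed_window(n, n_epochs=1, batch_size=1000, lr=1e-2):
    vectorizer = FixedWindowVectorizer(n)
    train_x, train_y = vectorizer(wikitext.train)
    valid_x, valid_y = vectorizer(wikitext.valid)
    model = FixedWindowModel(n, len(wikitext.vocab)).to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for epoch in range(n_epochs):
        # Training pass over minibatches of (context, target) pairs
        model.train()
        for i in range(0, len(train_x), batch_size):
            bx = train_x[i:i + batch_size].to(device)
            by = train_y[i:i + batch_size].to(device)
            optimizer.zero_grad()
            loss = F.cross_entropy(model(bx), by)
            loss.backward()
            optimizer.step()
        # Validation perplexity, aggregated over minibatches to keep memory bounded
        model.eval()
        total_loss, total_tokens = 0.0, 0
        with torch.no_grad():
            for i in range(0, len(valid_x), batch_size):
                bx = valid_x[i:i + batch_size].to(device)
                by = valid_y[i:i + batch_size].to(device)
                total_loss += F.cross_entropy(model(bx), by, reduction='sum').item()
                total_tokens += len(by)
        print(f'epoch {epoch + 1}: validation perplexity {math.exp(total_loss / total_tokens):.1f}')
    return model
```

Shuffling the training examples between epochs (for example with `torch.randperm`) is a common refinement of this basic loop.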
%% Cell type:markdown id: tags:
The code in the cell below trains a trigram model.
%% Cell type:code id: tags:
``` python
model_fixed_window = train_fixed_window(3)
```
%% Cell type:markdown id: tags:
#### Performance goal
**Your submitted notebook must contain output demonstrating a validation perplexity of at most 360.** If you do not reach this perplexity after the first epoch, try training for a second epoch.
⚠️ Computing the validation perplexity in one go (for the full validation set) will probably exhaust your computer’s memory and/or take a lot of time. If you run into this problem, do the computation at the minibatch level and aggregate the results.
%% Cell type:markdown id: tags:
#### 🤞 Test your code
To see whether your network is learning something, print or plot the loss and/or the perplexity on the training data. If the two values do not decrease during training, try to find the problem before wasting time (and electricity) on useless computation.
Training and even evaluation will take some time – on a CPU, you should expect several minutes per epoch, depending on hardware. Our reference implementation uses a GPU and runs in less than 30 seconds per epoch on [Colab](http://colab.research.google.com).
%% Cell type:markdown id: tags:
## Problem 2: Recurrent neural network model
%% Cell type:markdown id: tags:
In this section, you will implement the recurrent neural network language model. Recall that an input to this model is a vector of word ids. Each integer is mapped to an embedding vector. The sequence of embedded vectors is then fed into an unrolled LSTM. At each position $i$ in the sequence, the hidden state of the LSTM at that position is sent through a linear transformation into a final softmax layer representing the probability distribution over the words at position $i+1$. In theory, the input vector could represent the complete training data; for practical reasons, however, we will truncate the input to some fixed length *bptt_len*. This length is called the **backpropagation-through-time horizon**.
%% Cell type:markdown id: tags:
### Problem 2.1: Vectorise the data (2 points)
%% Cell type:markdown id: tags:
As in the previous problem, your first task is to transform the data in the WikiText container into a vectorised form that can be fed to the model.
%% Cell type:code id: tags:
``` python
class RNNVectorizer(object):

    def __init__(self, bptt_len):
        # backpropagation-through-time horizon
        self.bptt_len = bptt_len

    def __call__(self, data):
        # TODO: Replace the following line with your own code
        return None, None
```
%% Cell type:markdown id: tags:
Your vectoriser should meet the following specification:
**__init__** (*self*, *bptt_len*)
> Creates a new vectoriser. The parameter *bptt_len* specifies the backpropagation-through-time horizon.
**__call__** (*self*, *data*)
> Transforms a list of token indexes *data* into a pair of tensors $\mathbf{X}$, $\mathbf{Y}$ that can be used to train the recurrent neural language model. The rows of both tensors represent contiguous subsequences of token indexes of length *bptt_len*. Compared to the sequences in $\mathbf{X}$, the corresponding sequences in $\mathbf{Y}$ are shifted one position to the right. More precisely, if the $i$th row of $\mathbf{X}$ is the sequence that starts at token position $j$, then the same row of $\mathbf{Y}$ is the sequence that starts at position $j+1$.
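As a reference point, the sketch below shows one possible completion of the skeleton. It cuts the data into non-overlapping windows of length *bptt_len*; this choice is an assumption on our part, and other readings of the specification are possible.

``` python
class RNNVectorizer(object):

    def __init__(self, bptt_len):
        # backpropagation-through-time horizon
        self.bptt_len = bptt_len

    def __call__(self, data):
        data = torch.tensor(data, dtype=torch.long)
        # Drop the tail so that the data divides evenly into rows of length bptt_len.
        n_rows = (len(data) - 1) // self.bptt_len
        x = data[:n_rows * self.bptt_len]
        y = data[1:n_rows * self.bptt_len + 1]   # shifted one position to the right
        return x.view(n_rows, self.bptt_len), y.view(n_rows, self.bptt_len)
```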
%% Cell type:markdown id: tags:
#### 🤞 Test your code
Test your implementation by running the following code:
%% Cell type:code id: tags:
``` python
valid_x, valid_y = RNNVectorizer(32)(wikitext.valid)
print(valid_x.size(), valid_y.size())
```
%% Cell type:markdown id: tags:
### Problem 2.2: Implement the model (2 points)
%% Cell type:markdown id: tags:
Your next task is to implement the recurrent neural network model based on the graphical specification.
> Creates a new recurrent neural network language model based on an LSTM. The argument *n_words* is the number of words in the vocabulary. The arguments *embedding_dim* and *hidden_dim* specify the dimensionalities of the embedding layer and the LSTM hidden layer, respectively; their default value is 50.
**forward** (*self*, *x*)
> Computes the network output on an input batch *x*. The shape of *x* is $(B, H)$, where $B$ is the batch size and $H$ is the length of each input sequence. The shape of the output tensor is $(B, H, V)$, where $V$ is the size of the vocabulary.
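To make the specification concrete, here is a minimal sketch of a model that satisfies it. The class name `RNNModel` is an assumption (the notebook's skeleton for this problem is not shown above), and as before the sketch returns unnormalised scores, leaving the softmax to the cross-entropy loss.

``` python
import torch.nn as nn

class RNNModel(nn.Module):

    def __init__(self, n_words, embedding_dim=50, hidden_dim=50):
        super().__init__()
        self.embedding = nn.Embedding(n_words, embedding_dim)
        self.lstm = nn.LSTM(embedding_dim, hidden_dim, batch_first=True)
        self.output = nn.Linear(hidden_dim, n_words)

    def forward(self, x):
        # x: (B, H) -> embedded: (B, H, embedding_dim)
        embedded = self.embedding(x)
        # Hidden state at every position: (B, H, hidden_dim)
        hidden, _ = self.lstm(embedded)
        # Unnormalised scores over the vocabulary at every position: (B, H, V)
        return self.output(hidden)
```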
%% Cell type:markdown id: tags:
#### 🤞 Test your code
Test your code by instantiating the model and feeding it a batch of examples from the training data.
%% Cell type:markdown id: tags:
### Problem 2.3: Train the model (4 points)
%% Cell type:markdown id: tags:
The training loop for the recurrent neural network model is essentially identical to the loop that you wrote for the feed-forward model. The only thing to note is that the cross-entropy loss function expects its input to be a two-dimensional tensor; you will therefore have to re-shape the output tensor from the LSTM as well as the gold-standard output tensor in a suitable way. The most efficient way to do so is to use the [`view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view) method.
> Trains a recurrent neural network language model on the WikiText data using minibatch gradient descent and returns it. The parameters *n_epochs* and *batch_size* specify the number of training epochs and the minibatch size, respectively. The parameter *bptt_len* specifies the length of the backpropagation-through-time horizon, that is, the length of the input and output sequences. Training uses the cross-entropy loss function and the [Adam optimizer](https://pytorch.org/docs/stable/optim.html#torch.optim.Adam) with learning rate *lr*. After each epoch, prints the perplexity of the model on the validation data.
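The sketch below shows one way to put this together. It relies on the `RNNModel`, `RNNVectorizer` and `device` sketches above, the default hyperparameter values are placeholders rather than the values used in the lab, and the reshaping for the loss is done with `view()` as suggested.

``` python
import math
import torch.nn.functional as F

def train_rnn(n_epochs=1, batch_size=100, bptt_len=32, lr=1e-2):
    train_x, train_y = RNNVectorizer(bptt_len)(wikitext.train)
    valid_x, valid_y = RNNVectorizer(bptt_len)(wikitext.valid)
    n_words = len(wikitext.vocab)
    model = RNNModel(n_words).to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for epoch in range(n_epochs):
        model.train()
        for i in range(0, len(train_x), batch_size):
            bx = train_x[i:i + batch_size].to(device)
            by = train_y[i:i + batch_size].to(device)
            optimizer.zero_grad()
            # Reshape (B, H, V) -> (B*H, V) and (B, H) -> (B*H) for the loss
            loss = F.cross_entropy(model(bx).view(-1, n_words), by.view(-1))
            loss.backward()
            optimizer.step()
        # Validation perplexity, aggregated over minibatches
        model.eval()
        total_loss, total_tokens = 0.0, 0
        with torch.no_grad():
            for i in range(0, len(valid_x), batch_size):
                bx = valid_x[i:i + batch_size].to(device)
                by = valid_y[i:i + batch_size].to(device)
                scores = model(bx).view(-1, n_words)
                total_loss += F.cross_entropy(scores, by.view(-1), reduction='sum').item()
                total_tokens += by.numel()
        print(f'epoch {epoch + 1}: validation perplexity {math.exp(total_loss / total_tokens):.1f}')
    return model
```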
%% Cell type:markdown id: tags:
Evaluate your model by running the following code cell:
%% Cell type:code id: tags:
``` python
model_rnn = train_rnn()
```
%% Cell type:markdown id: tags:
#### Performance goal
**Your submitted notebook must contain output demonstrating a validation perplexity of at most 300.** If you do not reach this perplexity after the first epoch, try training for a second epoch.
%% Cell type:markdown id: tags:
## Problem 3: Parameter initialisation (6 points)
%% Cell type:markdown id: tags:
The error surfaces explored when training neural networks can be very complex. Because of this, it is important to choose “good” initial values for the parameters. In PyTorch, the weights of the embedding layer are initialised by sampling from the standard normal distribution $\mathcal{N}(0, 1)$. Test how changing the initialisation affects the perplexity of your feed-forward language model. Find research articles that propose different initialisation strategies.
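As a starting point for the experiments, one might re-initialise the embedding weights before training, for example as in the sketch below. The model class and the attribute name `embedding` are taken from the sketches above and are assumptions; adapt them to your own implementation.

``` python
import torch.nn as nn

model = FixedWindowModel(3, len(wikitext.vocab))

# Alternative 1: a normal distribution with a smaller standard deviation
nn.init.normal_(model.embedding.weight, mean=0.0, std=0.1)

# Alternative 2: Xavier (Glorot) uniform initialisation
nn.init.xavier_uniform_(model.embedding.weight)
```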
Write a short (150 words) report about your experiments and literature search. Use the following prompts:
* What different initialisation did you try? What results did you get?
* How do your results compare to what was suggested by the research articles?
* What did you learn? How, exactly, did you learn it? Why does this learning matter?
You are allowed to consult sources for this problem if you appropriately cite them. If in doubt, please read the [Academic Integrity Policy](https://www.ida.liu.se/~TDDE09/logistics/policies.html#academic-integrity-policy).