diff --git a/labs/l1-basic/nlp-l1-basic.ipynb b/labs/l1-basic/nlp-l1-basic.ipynb
index 551e8fc5677de65839750732eddecf3b6531ecd6..ef27061790a4ba6a285f0b10988da57e80e398bf 100644
--- a/labs/l1-basic/nlp-l1-basic.ipynb
+++ b/labs/l1-basic/nlp-l1-basic.ipynb
@@ -52,7 +52,7 @@
    "source": [
     "The data for this lab is [WikiText](https://blog.einstein.ai/the-wikitext-long-term-dependency-language-modeling-dataset/), a collection of more than 100 million tokens extracted from the “Good” and “Featured” articles on Wikipedia. We will use the small version of the dataset, which contains slightly more than 2.5 million tokens.\n",
     "\n",
-    "The next cell contains code for an object that will act as a container for the “training” and the “validation” section of the data. We fill this container by reading the corresponding text files. The only processing we do is to whitespace-tokenise, and to enclose each non-empty line within `<bos>` (beginning-of-sentence) and `<eos>` (end-of-sentence) tokens."
+    "The next cell contains code for an object that will act as a container for the “training” and the “validation” section of the data. We fill this container by reading the corresponding text files. The only processing we do is to whitespace-tokenise and to replace each newline with an end-of-sentence token."
    ]
   },
   {
@@ -74,7 +74,7 @@
     "            for line in source:\n",
     "                line = line.rstrip()\n",
     "                if line:\n",
-    "                    for token in ['<bos>'] + line.split() + ['<eos>']:\n",
+    "                    for token in line.split() + ['<eos>']:\n",
     "                        if token not in self.vocab:\n",
     "                            self.vocab[token] = len(self.vocab)\n",
     "                        ids.append(self.vocab[token])\n",
@@ -119,7 +119,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "### Problem 1.1: Vectorise the data (1&nbsp;point)"
+    "### Problem 1.1: Vectorise the data (2&nbsp;point)"
    ]
   },
   {
@@ -238,7 +238,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "### Problem 1.3: Train the model (3&nbsp;points)"
+    "### Problem 1.3: Train the model (4&nbsp;points)"
    ]
   },
   {
@@ -274,7 +274,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "The code in the cell below trains a bigram model."
+    "The code in the cell below trains a trigram model."
    ]
   },
   {
@@ -283,7 +283,7 @@
    "metadata": {},
    "outputs": [],
    "source": [
-    "model_fixed_window = train_fixed_window(2)"
+    "model_fixed_window = train_fixed_window(3)"
    ]
   },
   {
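`train_fixed_window` is defined elsewhere in the notebook. As a hedged sketch of what vectorising for a fixed-window model of order n involves (n = 3 for the trigram above), assuming the data is a flat list of token ids; `vectorise_fixed_window` is an illustrative name, not the notebook's API:

```python
import torch

def vectorise_fixed_window(token_ids, n):
    # Pair each window of n-1 consecutive token ids (the context) with
    # the id of the token that follows it (the prediction target).
    xs, ys = [], []
    for i in range(len(token_ids) - n + 1):
        xs.append(token_ids[i:i + n - 1])
        ys.append(token_ids[i + n - 1])
    return torch.tensor(xs), torch.tensor(ys)
```

For a trigram model (n = 3), each row of `xs` holds two token ids and the corresponding entry of `ys` holds the third.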
@@ -292,7 +292,7 @@
    "source": [
     "#### Performance goal\n",
     "\n",
-    "**Your submitted notebook must contain output demonstrating a validation perplexity of at most 350.** If you do not reach this perplexity after the first epoch, try training for a second epoch.\n",
+    "**Your submitted notebook must contain output demonstrating a validation perplexity of at most 360.** If you do not reach this perplexity after the first epoch, try training for a second epoch.\n",
     "\n",
     "⚠️ Computing the validation perplexity in one go (for the full validation set) will probably exhaust your computer’s memory and/or take a lot of time. If you run into this problem, do the computation at the minibatch level and aggregate the results."
    ]
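A hedged sketch of the minibatch-level aggregation suggested above, assuming a model that maps a batch of inputs to next-token logits and an iterable of `(inputs, targets)` pairs (all names illustrative): sum the unreduced cross-entropy over all minibatches, divide by the total token count to get the average negative log-likelihood per token, and exponentiate.

```python
import math

import torch
import torch.nn.functional as F

@torch.no_grad()
def validation_perplexity(model, batches):
    # Accumulate the summed cross-entropy (negative log-likelihood) and
    # the number of target tokens across minibatches, then exponentiate
    # the per-token average to obtain the perplexity.
    total_nll, total_tokens = 0.0, 0
    for inputs, targets in batches:
        logits = model(inputs)
        total_nll += F.cross_entropy(logits, targets, reduction='sum').item()
        total_tokens += targets.numel()
    return math.exp(total_nll / total_tokens)
```

Averaging per-minibatch perplexities directly would overweight short batches; summing the log-likelihoods first and exponentiating once yields the exact full-set value.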
@@ -326,7 +326,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "### Problem 2.1: Vectorise the data (1&nbsp;point)"
+    "### Problem 2.1: Vectorise the data (2&nbsp;points)"
    ]
   },
   {
@@ -448,7 +448,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "### Problem 2.3: Train the model (3&nbsp;points)"
+    "### Problem 2.3: Train the model (4&nbsp;points)"
    ]
   },
   {
@@ -502,14 +502,14 @@
    "source": [
     "#### Performance goal\n",
     "\n",
-    "**Your submitted notebook must contain output demonstrating a validation perplexity of at most 280.** If you do not reach this perplexity after the first epoch, try training for a second epoch."
+    "**Your submitted notebook must contain output demonstrating a validation perplexity of at most 300.** If you do not reach this perplexity after the first epoch, try training for a second epoch."
    ]
   },
   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## Problem 3: Parameter initialisation (3&nbsp;points)"
+    "## Problem 3: Parameter initialisation (6&nbsp;points)"
    ]
   },
   {
@@ -558,7 +558,7 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.10.4"
+   "version": "3.10.10"
   }
  },
  "nbformat": 4,