From 4b5085429f202f498ae2095a62c6a334a9f1e66a Mon Sep 17 00:00:00 2001
From: Marco Kuhlmann <marco.kuhlmann@liu.se>
Date: Wed, 10 Jan 2024 15:28:03 +0100
Subject: [PATCH] Rephrase Problem 3 in nlp-l1-basic

---
 labs/l1-basic/nlp-l1-basic.ipynb | 15 ++++++++++-----
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/labs/l1-basic/nlp-l1-basic.ipynb b/labs/l1-basic/nlp-l1-basic.ipynb
index 9ac5fb6..551e8fc 100644
--- a/labs/l1-basic/nlp-l1-basic.ipynb
+++ b/labs/l1-basic/nlp-l1-basic.ipynb
@@ -509,18 +509,22 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## Problem 3: Parameter initialisation (reflection; 3 points)"
+    "## Problem 3: Parameter initialisation (3 points)"
    ]
   },
   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "The error surfaces that gradient search explores when training neural networks can be very complex. Because of this, it is important to choose “good” initial values for the parameters. In PyTorch, the weights of the embedding layer are initialised by sampling from the standard normal distribution $\mathcal{N}(0, 1)$. Test how changing the standard deviation and/or the distribution affects the perplexity of your feed-forward language model. Find research articles that propose different types of initialisations. Write a short (150 words) report about your experiments and literature study. Use the following prompts:\n",
+    "The error surfaces explored when training neural networks can be very complex. Because of this, it is important to choose “good” initial values for the parameters. In PyTorch, the weights of the embedding layer are initialised by sampling from the standard normal distribution $\mathcal{N}(0, 1)$. Test how changing the initialisation affects the perplexity of your feed-forward language model. Find research articles that propose different initialisation strategies.\n",
     "\n",
-    "* What different settings for the initialisation did you try? What results did you get?\n",
+    "Write a short (150 words) report about your experiments and literature search. Use the following prompts:\n",
+    "\n",
+    "* What different initialisations did you try? What results did you get?\n",
     "* How do your results compare to what was suggested by the research articles?\n",
-    "* What did you learn? How, exactly, did you learn it? Why does this learning matter?"
+    "* What did you learn? How, exactly, did you learn it? Why does this learning matter?\n",
+    "\n",
+    "You are allowed to consult sources for this problem if you appropriately cite them. If in doubt, please read the [Academic Integrity Policy](https://www.ida.liu.se/~TDDE09/logistics/policies.html#academic-integrity-policy)."
    ]
   },
   {
@@ -553,7 +557,8 @@
    "mimetype": "text/x-python",
    "name": "python",
    "nbconvert_exporter": "python",
-   "pygments_lexer": "ipython3"
+   "pygments_lexer": "ipython3",
+   "version": "3.10.4"
   }
  },
  "nbformat": 4,
-- 
GitLab
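
The reworded problem asks students to vary the initialisation of the embedding weights, which PyTorch draws from $\mathcal{N}(0, 1)$ by default. A minimal sketch of how such variants can be set up with `torch.nn.init` is shown below; the vocabulary size and embedding dimension are illustrative placeholders, not values from the lab, and the lab's feed-forward model itself is not shown.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# By default, nn.Embedding weights are sampled from N(0, 1).
emb = nn.Embedding(num_embeddings=10000, embedding_dim=64)

# Variant 1: normal initialisation with a smaller standard deviation.
nn.init.normal_(emb.weight, mean=0.0, std=0.1)

# Variant 2: a uniform distribution instead of a normal one.
nn.init.uniform_(emb.weight, a=-0.1, b=0.1)

# Variant 3: Xavier/Glorot initialisation, one of the strategies
# proposed in the research literature.
nn.init.xavier_uniform_(emb.weight)
```

Each `nn.init.*_` call rewrites `emb.weight` in place, so in an actual experiment one would pick a single variant, retrain the model, and compare perplexities.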