{
"cells": [
{
"cell_type": "markdown",
"id": "e86f90cc",
"metadata": {},
"source": [
"# L5: Parameter-efficient fine-tuning"
]
},
{
"cell_type": "markdown",
"id": "cbe683ab",
"metadata": {},
"source": [
"Fine-tuning all parameters of pre-trained language models can be resource-intensive. Because of this, current research in natural language processing is looking into developing methods for adapting models to downstream tasks without full fine-tuning. These methods only tune a small number of model parameters while yielding performance comparable to that of a fully fine-tuned model.\n",
"\n",
"In this lab, you will implement LoRA, one of the most well-known methods for parameter-efficient fine-tuning. LoRA stands for “Low-Rank Adaptation of Large Language Models” and was originally described in a research article by [Hu et al. (2021)](https://arxiv.org/abs/2106.09685).\n",
"\n",
"Along the way, you will earn experience with [Hugging Face Transformers](https://huggingface.co/docs/transformers/en/index), a state-of-the-art library for training and deploying language models, as well as with several related libraries. In particular, you will learn a best-practice workflow for downloading a Transformer model and fine-tuning it on the downstream task of binary sentiment classification.\n",
"\n",
"*Tasks you can choose for the oral exam are marked with the graduation cap 🎓 emoji.*"
]
},
{
"cell_type": "markdown",
"id": "90eaa7f8",
"metadata": {},
"source": [
"## Dataset"
]
},
{
"cell_type": "markdown",
"id": "b426bb69",
"metadata": {},
"source": [
"The data for this lab comes from the [Large Movie Review Dataset](https://ai.stanford.edu/~amaas/data/sentiment/). The full dataset consists of 50,000 highly polar movie reviews collected from the Internet Movie Database (IMDB). Here, we use a random sample consisting of 2,000 reviews for training and 500 reviews for evaluation."
]
},
{
"cell_type": "markdown",
"id": "1234a390",
"metadata": {},
"source": [
"To load the dataset, we use the [Hugging Face Datasets](https://huggingface.co/docs/datasets/en/index) library."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "dcf185b8",
"metadata": {},
"outputs": [],
"source": [
"from datasets import load_dataset\n",
"\n",
"imdb_dataset = load_dataset(\n",
" \"csv\", data_files={\"train\": \"train.csv\", \"eval\": \"eval.csv\"}\n",
")\n",
"\n",
"imdb_dataset"
]
},
{
"cell_type": "markdown",
"id": "d63c62a4",
"metadata": {},
"source": [
"As we can see, each sample in the dataset is a record with three fields: an internal index (`index`, an integer), the text of the review (`review`, a string), and the sentiment label (`label`, an integer – 1 for “positive” and 0 for “negative” sentiment).\n",
"\n",
"Here is an example record:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7b5547ce",
"metadata": {},
"outputs": [],
"source": [
"imdb_dataset[\"train\"][645]"
]
},
{
"cell_type": "markdown",
"id": "e77c6865",
"metadata": {},
"source": [
"## Tokeniser"
]
},
{
"cell_type": "markdown",
"id": "eb9646fe",
"metadata": {},
"source": [
"As our pre-trained language model, we will use [DistilBERT](https://huggingface.co/docs/transformers/en/model_doc/distilbert), a compact encoder model with 40% less parameters than BERT base. DistilBERT is not actually a *large* language model by modern standards and thus does not benefit as much from parameter-efficient fine-tuning as other models. However, it has the benefit of being light and fast, and can be run even on consumer hardware.\n",
"\n",
"To feed the movie reviews to DistilBERT, we need to tokenise them and encode the resulting tokens as integers in the model vocabulary. We start by loading the DistilBERT tokeniser using the [Auto classes](https://huggingface.co/docs/transformers/en/model_doc/auto):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d76f2c9e",
"metadata": {},
"outputs": [],
"source": [
"from transformers import AutoTokenizer\n",
"\n",
"tokenizer = AutoTokenizer.from_pretrained(\"distilbert-base-uncased\")"
]
},
{
"cell_type": "markdown",
"id": "ed29329d",
"metadata": {},
"source": [
"We then create a tokenised version of the dataset:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "de583eff",
"metadata": {},
"outputs": [],
"source": [
"def tokenize_function(batch):\n",
" return tokenizer(batch[\"review\"], padding=True, truncation=True)\n",
"\n",
"\n",
"tokenized_imdb_dataset = imdb_dataset.map(tokenize_function, batched=True)\n",
"\n",
"tokenized_imdb_dataset"
]
},
{
"cell_type": "markdown",
"id": "66cde945",
"metadata": {},
"source": [
"As we can see, tokenising adds two additional fields to each review: `input_ids` is the list of token ids corresponding to the review, and `attention_mask` is the list of indices specifying which tokens the encoder should attend to."
]
},
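{
"cell_type": "markdown",
"id": "3fa1b2c0",
"metadata": {},
"source": [
"To make this concrete, the following cell (illustrative only, not part of the lab tasks) prints the first few token ids of the sample record from above and converts them back into tokens:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4e7d9a21",
"metadata": {},
"outputs": [],
"source": [
"sample = tokenized_imdb_dataset[\"train\"][645]\n",
"\n",
"print(sample[\"input_ids\"][:10])\n",
"print(tokenizer.convert_ids_to_tokens(sample[\"input_ids\"][:10]))"
]
},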
{
"cell_type": "markdown",
"id": "e1efb97b",
"metadata": {},
"source": [
"To avoid trouble when fine-tuning the model later, the next cell disables tokeniser parallelism."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "70d0190e",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"os.environ[\"TOKENIZERS_PARALLELISM\"] = \"false\""
]
},
{
"cell_type": "markdown",
"id": "fceee00c",
"metadata": {},
"source": [
"## Trainer"
]
},
{
"cell_type": "markdown",
"id": "8ddcc655",
"metadata": {},
"source": [
"In this section, we will set up our workflow for training and evaluating DistilBERT models. The central component in this workflow is the [Trainer](https://huggingface.co/docs/transformers/main_classes/trainer), which provides extensive configuration options. Here, we leave most of these options at their default value. Two changes we *do* make are to enable evaluation of the trained model after each epoch, and to log the training and evaluation loss after every 5 training steps (the default is 500)."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e6d2854d",
"metadata": {},
"outputs": [],
"source": [
"from transformers import TrainingArguments\n",
"\n",
"training_args = TrainingArguments(\n",
" output_dir=\"tmp_trainer\",\n",
" eval_strategy=\"epoch\",\n",
" logging_steps=5,\n",
")"
]
},
{
"cell_type": "markdown",
"id": "2f22cfb9",
"metadata": {},
"source": [
"In addition to the loss, we also track classification accuracy. For this we import the [Hugging Face Evaluate](https://huggingface.co/docs/evaluate/en/index) library and define a small helper function `compute_metrics()` that the trainer will call after each epoch."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4bafbadf",
"metadata": {},
"outputs": [],
"source": [
"import evaluate\n",
"\n",
"accuracy = evaluate.load(\"accuracy\")\n",
"\n",
"\n",
"def compute_metrics(eval_pred):\n",
" logits, labels = eval_pred\n",
" predictions = logits.argmax(axis=-1)\n",
" return accuracy.compute(predictions=predictions, references=labels)"
]
},
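{
"cell_type": "markdown",
"id": "5c2f8b13",
"metadata": {},
"source": [
"As a quick sanity check (a minimal sketch, not required by the lab), we can call `compute_metrics()` on a toy batch of logits and labels. The argmax of the first row is 1 and that of the second row is 0, so both predictions match their labels:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6aa0e4d7",
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"toy_logits = np.array([[0.1, 0.9], [0.8, 0.2]])\n",
"toy_labels = np.array([1, 0])\n",
"\n",
"compute_metrics((toy_logits, toy_labels))  # expected: {'accuracy': 1.0}"
]
},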
{
"cell_type": "markdown",
"id": "f4a488a2",
"metadata": {},
"source": [
"In the next cell we define a convenience function `make_trainer()` that creates a readily-configured trainer for a specified model (*model*). We will use this trainer both to train the model on the training section of the tokenised review dataset, and to evaluate it on the evaluation section."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "eb219812",
"metadata": {},
"outputs": [],
"source": [
"from transformers import Trainer\n",
"\n",
"\n",
"def make_trainer(model):\n",
" trainer = Trainer(\n",
" model=model,\n",
" args=training_args,\n",
" train_dataset=tokenized_imdb_dataset[\"train\"],\n",
" eval_dataset=tokenized_imdb_dataset[\"eval\"],\n",
" compute_metrics=compute_metrics,\n",
" )\n",
" return trainer"
]
},
{
"cell_type": "markdown",
"id": "851792f4",
"metadata": {},
"source": [
"## Full fine-tuning"
]
},
{
"cell_type": "markdown",
"id": "892ac059",
"metadata": {},
"source": [
"In the rest of this notebook, we will work our way to the implementation of LoRA, and compare LoRA to traditional fine-tuning methods. Our first point of reference is a fully fine-tuned DistilBERT model."
]
},
{
"cell_type": "markdown",
"id": "6206cf1c",
"metadata": {},
"source": [
"We start by loading the pre-trained model:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d1433800",
"metadata": {},
"outputs": [],
"source": [
"from transformers import AutoModelForSequenceClassification\n",
"\n",
"pretrained_model = AutoModelForSequenceClassification.from_pretrained(\n",
" \"distilbert-base-uncased\", num_labels=2\n",
")\n",
"\n",
"pretrained_model"
]
},
{
"cell_type": "markdown",
"id": "9427dceb",
"metadata": {},
"source": [
"The architecture of DistilBERT is that of a standard Transformer encoder with an embedding layer (`embeddings`) followed by a stack of six Transformer blocks (`transformer`) and a feedforward network with two linear layers (`pre_classifier` and `classifier`) and a final dropout layer (`dropout`)."
]
},
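{
"cell_type": "markdown",
"id": "7b3c91e5",
"metadata": {},
"source": [
"To see this structure in code (illustrative only), the following cell lists the top-level submodules of the model:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8d45f2a6",
"metadata": {},
"outputs": [],
"source": [
"for name, child in pretrained_model.named_children():\n",
"    print(name, type(child).__name__)"
]
},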
{
"cell_type": "markdown",
"id": "a633d0ef",
"metadata": {},
"source": [
"### 🎈 Task 1: Counting the number of trainable parameters\n",
"\n",
"One relevant measure in the context of parameter-efficient fine-tuning is the number of parameters that need to be changed when training a model. Your first task in this lab is to write a function `num_trainable_parameters()` that calculates this number for a given model."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "69dc6856",
"metadata": {},
"outputs": [],
"source": [
"def num_trainable_parameters(model):\n",
" # TODO: Replace the next line with your own code\n",
" return 0"
]
},
{
"cell_type": "markdown",
"id": "0d221df2",
"metadata": {},
"source": [
"The function should implement the following specification:\n",
"\n",
"> **num_trainable_parameters** (*model*)\n",
">\n",
"> Returns the number of float-valued trainable parameters in the specified *model* as an integer.\n",
"\n",
"#### 👍 Hint\n",
"\n",
"The term *parameter* can refer to either complete tensors or the individual elements of these tensors. For example, a linear layer created by `nn.Linear(3, 5)` has 2 tensor-valued parameters (a weight matrix and a bias vector) and 20 float-valued parameters (the elements of these tensors). To get the tensor-valued parameters of a model, you can use the [`parameters()`](https://pytorch.org/docs/stable/generated/torch.nn.Module.html#torch.nn.Module.parameters) method. A parameter is *trainable* if it requires gradient."
]
},
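{
"cell_type": "markdown",
"id": "9e16c3b7",
"metadata": {},
"source": [
"The next cell illustrates the hint on a toy layer (it does not solve the task): it prints the shape, the number of float-valued elements, and the trainability of each tensor-valued parameter of `nn.Linear(3, 5)`."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a27d4c88",
"metadata": {},
"outputs": [],
"source": [
"import torch.nn as nn\n",
"\n",
"toy_layer = nn.Linear(3, 5)\n",
"for p in toy_layer.parameters():\n",
"    # weight: torch.Size([5, 3]), 15 elements; bias: torch.Size([5]), 5 elements\n",
"    print(p.shape, p.numel(), p.requires_grad)"
]
},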
{
"cell_type": "markdown",
"id": "190616d0",
"metadata": {},
"source": [
"#### 🤞 Test your code\n",
"\n",
"To test your code, apply your function to the pre-trained model. The correct number of float-valued trainable parameters for this model is 66,955,010."
]
},
{
"cell_type": "markdown",
"id": "6f18202c",
"metadata": {},
"source": [
"### Fine-tuning\n",
"\n",
"When we load the pre-trained model, the Hugging Face Transformers library warns us that the weights of the feedforward network have not yet been trained. To do so, in the next cell, we pass the pre-trained model to a trainer and initiate the fine-tuning process.\n",
"\n",
"**⚠️ Please note that fine-tuning the model will take some time! ⚠️**\n",
"\n",
"You can work on the other problems in this lab while you are waiting."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d2de10f5",
"metadata": {},
"outputs": [],
"source": [
"finetuned_trainer = make_trainer(pretrained_model)\n",
"\n",
"finetuned_trainer.train()"
]
},
{
"cell_type": "markdown",
"id": "9b02d273",
"metadata": {},
"source": [
"Because full fine-tuning is so resource-intensive, we save the fine-tuned model to disk:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2e394803",
"metadata": {},
"outputs": [],
"source": [
"finetuned_trainer.save_model(\"finetuned\")"
]
},
{
"cell_type": "markdown",
"id": "c624fc86",
"metadata": {},
"source": [
"Later in this notebook, whenever you need the fully fine-tuned version of the model, you can load it as follows:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f1124496",
"metadata": {},
"outputs": [],
"source": [
"finetuned_model = AutoModelForSequenceClassification.from_pretrained(\"finetuned\")"
]
},
{
"cell_type": "markdown",
"id": "b64681f8",
"metadata": {},
"source": [
"### Convenience functions\n",
"\n",
"Because we will repeat the steps we just took to fine-tune the pre-trained model several times in this notebook, we define two convenience functions:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "960d010b",
"metadata": {},
"outputs": [],
"source": [
"def train(model):\n",
" print(\"Number of trainable parameters:\", num_trainable_parameters(model))\n",
" trainer = make_trainer(model)\n",
" trainer.train()\n",
" return model"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "96208182",
"metadata": {},
"outputs": [],
"source": [
"def evaluate(model):\n",
" trainer = make_trainer(model)\n",
" return trainer.evaluate()"
]
},
{
"cell_type": "markdown",
"id": "1cfa5b36",
"metadata": {},
"source": [
"## Tuning the final layers only"
]
},
{
"cell_type": "markdown",
"id": "0227d8c8",
"metadata": {},
"source": [
"If full fine-tuning marks one end of the complexity spectrum, the other end is marked by only tuning the final layers of the transformer – the *head* of the model. In the case of DistilBERT, the head consists of the `pre_classifier` and `classifier` layers."
]
},
{
"cell_type": "markdown",
"id": "4b309f40",
"metadata": {},
"source": [
"### 🎈 Task 2: Head-tuning\n",
"\n",
"Implement the head-tuning strategy by coding the following function:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a05aae5b",
"metadata": {},
"outputs": [],
"source": [
"def make_headtuned_model():\n",
" # TODO: Replace the next line with your own code\n",
" raise NotImplementedError"
]
},
{
"cell_type": "markdown",
"id": "707dd4d5",
"metadata": {},
"source": [
"Here is the specification of this function:\n",
"\n",
"> **make_headtuned_model** ()\n",
">\n",
"> Returns a model that is identical to the pre-trained model, except that the head layers have been trained on the sentiment data. (The other parameters of the pre-trained model are left untouched.)\n",
"\n",
"#### 👍 Hint\n",
"\n",
"You freeze a parameter by setting its `requires_grad`-attribute to `False`."
]
},
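{
"cell_type": "markdown",
"id": "b38e5d99",
"metadata": {},
"source": [
"For illustration (a minimal sketch, not a solution), this is how the hint looks when freezing a toy layer entirely:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c49f6e00",
"metadata": {},
"outputs": [],
"source": [
"import torch.nn as nn\n",
"\n",
"frozen_layer = nn.Linear(3, 5)\n",
"for p in frozen_layer.parameters():\n",
"    p.requires_grad = False  # excluded from gradient updates from now on"
]
},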
{
"cell_type": "markdown",
"id": "60679685",
"metadata": {},
"source": [
"Once you have an implementation of the head-tuning strategy, evaluate it on the evaluation data. How much accuracy do we lose when only training the final layers of the pre-trained model, compared to full fine-tuning?"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c66d73a6",
"metadata": {},
"outputs": [],
"source": [
"headtuned_model = make_headtuned_model()"
]
},
{
"cell_type": "markdown",
"id": "334ee3b2",
"metadata": {},
"source": [
"#### 🤞 Test your code\n",
"\n",
"If you configured your model correctly, `num_trainable_parameters()` should show 592,130 trainable parameters."
]
},
{
"cell_type": "markdown",
"id": "2bced4a1",
"metadata": {},
"source": [
"For future reference, we also save the head-tuned model:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "59677530",
"metadata": {},
"outputs": [],
"source": [
"make_trainer(headtuned_model).save_model(\"headtuned\")"
]
},
{
"cell_type": "markdown",
"id": "ade8ef3f",
"metadata": {},
"source": [
"## Layer surgery"
]
},
{
"cell_type": "markdown",
"id": "94efdbce",
"metadata": {},
"source": [
"LoRA works by “wrapping” frozen layers from the pre-trained Transformer model inside adapter modules. Conventionally, this wrapping is only applied to the linear layers that transform the queries and values in the self-attention mechanism. To implement the wrapping, we need functions to extract and replace layers in a model. Your task in this section is to code these functions."
]
},
{
"cell_type": "markdown",
"id": "a4660183",
"metadata": {},
"source": [
"### 🎓 Task 3: Extracting layers\n",
"\n",
"Code a function that extracts the query and value linear layers from a DistilBERT model:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "62f17aa1",
"metadata": {},
"outputs": [],
"source": [
"def extract(model):\n",
" # TODO: Replace the next line with your own code\n",
" return {}"
]
},
{
"cell_type": "markdown",
"id": "52ed77d7",
"metadata": {},
"source": [
"Implement this function to match the following specification:\n",
"\n",
"> **extract** (*model*)\n",
">\n",
"> Takes a DistilBERT model (*model*) and extracts the query and value linear layers from each block of the Transformer. Returns a dictionary mapping the DistilBERT module names of these layers to the layers themselves (instances of `nn.Linear`).\n",
"\n",
"#### 👍 Hint\n",
"\n",
"As we saw earlier, the DistilBERT model consists of a hierarchy of nested submodules. Each of these can be addressed by a fully-qualified string name. Use [`get_submodule()`](https://pytorch.org/docs/stable/generated/torch.nn.Module.html#torch.nn.Module.get_submodule) to retrieve a layer by name. You can hard-wire the names of the layers you want to extract."
]
},
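{
"cell_type": "markdown",
"id": "d5a07f11",
"metadata": {},
"source": [
"As an example of the hint, the next cell retrieves the query linear layer of the first Transformer block. The fully-qualified name used here follows DistilBERT's module naming; cross-check it against the printout of `pretrained_model` above."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e6b18022",
"metadata": {},
"outputs": [],
"source": [
"pretrained_model.get_submodule(\"distilbert.transformer.layer.0.attention.q_lin\")"
]
},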
{
"cell_type": "markdown",
"id": "eee363a3",
"metadata": {},
"source": [
"#### 🤞 Test your code\n",
"\n",
"To test your code, check the number of trainable float-valued parameters in the extracted layers. This number should be 7,087,104."
]
},
{
"cell_type": "markdown",
"id": "0abad1cc",
"metadata": {},
"source": [
"### 🎓 Task 4: Replacing layers\n",
"\n",
"Next, code the inverse of the `extract()` function to replace selected layers of a module using a dictionary of named layers."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0c7e5124",
"metadata": {},
"outputs": [],
"source": [
"def replace(model, named_layers):\n",
" # TODO: Replace the next line with your own code\n",
" return model"
]
},
{
"cell_type": "markdown",
"id": "a56cfd34",
"metadata": {},
"source": [
"Implement this function to match the following specification:\n",
"\n",
"> **replace** (*model*, *named_layers*)\n",
">\n",
"> Takes a DistilBERT model (*model*) and a dictionary in the format returned by `extract()` (*named_layers*) and injects the extracted layers into the model. More specifically, suppose that *named_layers* contains a key–value pair `(name, layer)`. Then the function replaces the submodule of *model* addressed by the fully-qualified string name `name` by the layer `layer`. Returns the modified model.\n",
"\n",
"#### 👍 Hint\n",
"\n",
"Use [`getattr()`](https://docs.python.org/3/library/functions.html#getattr) and [`setattr()`](https://docs.python.org/3/library/functions.html#setattr) to return or set the value of a named submodule."
]
},
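{
"cell_type": "markdown",
"id": "f7c29133",
"metadata": {},
"source": [
"The following toy example (illustrative only) shows the mechanism: on an `nn.Sequential`, `setattr()` with the child's name swaps out a submodule, and `getattr()` retrieves it:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "08d3a244",
"metadata": {},
"outputs": [],
"source": [
"import torch.nn as nn\n",
"\n",
"toy_model = nn.Sequential(nn.Linear(4, 4), nn.ReLU())\n",
"setattr(toy_model, \"0\", nn.Linear(4, 2))  # replace the first layer\n",
"print(getattr(toy_model, \"0\"))"
]
},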
{
"cell_type": "markdown",
"id": "b9003d41",
"metadata": {},
"source": [
"#### 🤞 Test your code\n",
"\n",
"To test your implementation, write code that (1) extracts the query and value linear layers from the fine-tuned model; (2) replaces these layers with clones with random weights; and (3) replaces these layers again with the original versions. Evaluating the modified model after step (2) should yield a near-random accuracy. Evaluating it again after step (3) should yield the original accuracy.\n",
"\n",
"The following function should be helpful. It clones a linear layer, copying the weights and the bias from the original."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "851b2937",
"metadata": {},
"outputs": [],
"source": [
"import torch.nn as nn\n",
"\n",
"\n",
"def clone_linear(original):\n",
" out_features, in_features = original.weight.shape\n",
" copy = nn.Linear(in_features, out_features)\n",
" copy.load_state_dict(original.state_dict())\n",
" return copy"
]
},
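{
"cell_type": "markdown",
"id": "19e4b355",
"metadata": {},
"source": [
"For step (2), note that `nn.Linear` draws fresh random weights on construction, so a randomly initialised clone is obtained by simply skipping the state-dict copy. The helper below (a hypothetical name, mirroring `clone_linear()`) does exactly that:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2af5c466",
"metadata": {},
"outputs": [],
"source": [
"def random_clone_linear(original):\n",
"    # nn.Linear initialises its parameters randomly, so we do not copy the state dict\n",
"    out_features, in_features = original.weight.shape\n",
"    return nn.Linear(in_features, out_features)"
]
},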
{
"cell_type": "markdown",
"id": "cb91ef5c",
"metadata": {},
"source": [
"## Low-rank approximation"
]
},
{
"cell_type": "markdown",
"id": "e27ab7f3",
"metadata": {},
"source": [
"The basic idea behind LoRA is to conceptualise fine-tuned weights as a sum $W_0 + \\Delta W$ of the weights from the pre-trained model, $W_0$, and a low-rank update matrix $\\Delta W$. The goal of fine-tuning, then, is to learn the update matrix; this happens in the adapter layers.\n",
"\n",
"Before we get to the implementation of the LoRA adapter layers, we first check to what extent the assumption that fine-tuning can be described by low-rank matrices holds true for DistilBERT. To do so, we will “cheat” and replace the query and value linear layers of the head-tuned model with low-rank approximations. The technical key to this is the truncated singular value decomposition (SVD)."
]
},
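{
"cell_type": "markdown",
"id": "3b06d577",
"metadata": {},
"source": [
"A quick back-of-the-envelope calculation shows why low-rank updates pay off: a full $768 \\times 768$ update matrix has 589,824 entries, whereas a rank-$8$ factorisation needs only $2 \\cdot 768 \\cdot 8 = 12{,}288$ – roughly 2% of the full count:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4c17e688",
"metadata": {},
"outputs": [],
"source": [
"d, r = 768, 8\n",
"full_params = d * d\n",
"low_rank_params = 2 * d * r\n",
"print(full_params, low_rank_params, low_rank_params / full_params)"
]
},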
{
"cell_type": "markdown",
"id": "a321995d",
"metadata": {},
"source": [
"### 🎓 Task 5: Low-rank matrix approximation\n",
"\n",
"Your first task in this section is to implement the low-rank matrix approximation."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b8b01b6c",
"metadata": {},
"outputs": [],
"source": [
"def approximate(matrix, rank):\n",
" # TODO: Replace the next line with your own code\n",
" return matrix"
]
},
{
"cell_type": "markdown",
"id": "bbc9fe9d",
"metadata": {},
"source": [
"Implement this function to match the following specification:\n",
"\n",
"> **approximate** (*matrix*, *rank*)\n",
">\n",
"> Takes a 2D-tensor (*matrix*) and an integer rank $r$ (*rank*), computes the truncated SVD with rank $r$ on the tensor, and returns the corresponding low-rank approximation matrix."
]
},
{
"cell_type": "markdown",
"id": "348eee05",
"metadata": {},
"source": [
"#### 👍 Hint\n",
"\n",
"If you need a refresher on the low-rank matrix approximation, read the corresponding section from the Wikipedia article on the [Singular value decomposition](https://en.wikipedia.org/wiki/Singular_value_decomposition#Low-rank_matrix_approximation). The truncated SVD is an extension of the full SVD; the latter can be computed using [`torch.linalg.svd()`](https://pytorch.org/docs/stable/generated/torch.linalg.svd.html)."
]
},
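{
"cell_type": "markdown",
"id": "5d28f799",
"metadata": {},
"source": [
"To illustrate the hint (not a solution), the next cell computes the reduced SVD of a small random matrix and prints the shapes of its factors. Truncating to rank $r$ amounts to keeping the first $r$ columns of `U`, the first $r$ entries of `S`, and the first $r$ rows of `Vh`."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6e39a8aa",
"metadata": {},
"outputs": [],
"source": [
"import torch\n",
"\n",
"M = torch.rand(6, 4)\n",
"U, S, Vh = torch.linalg.svd(M, full_matrices=False)\n",
"print(U.shape, S.shape, Vh.shape)  # torch.Size([6, 4]) torch.Size([4]) torch.Size([4, 4])"
]
},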
{
"cell_type": "markdown",
"id": "d2a1f3a5",
"metadata": {},
"source": [
"#### 🤞 Test your code\n",
"\n",
"To test your code, run the following cell. It creates a matrix `original` with rank $r \\leq 8$ and after that the rank-$8$ approximation matrix `approximation`. You should find that the distance between the two matrices is very low."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6ce06708",
"metadata": {},
"outputs": [],
"source": [
"original = torch.rand(768, 8) @ torch.rand(8, 384)\n",
"approximation = approximate(original, 8)\n",
"torch.dist(original, approximation)"
]
},
{
"cell_type": "markdown",
"id": "a082b398",
"metadata": {},
"source": [
"### 🎓 Task 6: Approximated fine-tuned model (version 1)\n",
"\n",
"In the next step, your task is to construct a version of the head-tuned model in which every query and value linear layer is replaced by a low-rank approximation of the corresponding layer from the fully fine-tuned model."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0b5f59ca",
"metadata": {},
"outputs": [],
"source": [
"def make_approximated_model_1(rank):\n",
" # TODO: Replace the next line with your own code\n",
" raise NotImplementedError"
]
},
{
"cell_type": "markdown",
"id": "45e4b80a",
"metadata": {},
"source": [
"Here is the specification of this function:\n",
"\n",
"> **make_approximated_model_1** (*rank*)\n",
">\n",
"> Takes an integer rank $r$ (*rank*) and returns a version of the head-tuned model in which every query and value linear layer is replaced by its $r$-approximated corresponding layer from the fully fine-tuned model."
]
},
{
"cell_type": "markdown",
"id": "5e8dcd1f",
"metadata": {},
"source": [
"Run the next cell to evaluate your model for different rank values. Start with the full rank and then halve the rank in each step. What is the lowest rank that still gives you a higher accuracy than the head-tuned model?"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "acd5aa87",
"metadata": {},
"outputs": [],
"source": [
"approximated_model_1 = make_approximated_model_1(768)\n",
"\n",
"evaluate(approximated_model_1)"
]
},
{
"cell_type": "markdown",
"id": "6fbea18c",
"metadata": {},
"source": [
"### 🎓 Task 7: Approximated fine-tuned model (version 2)\n",
"\n",
"In the approximated model from the previous section, the truncated SVD is applied to the full weight matrix of the fine-tuned model: $W_0 + \\Delta W$. In LoRA, the low-rank approximation only applies to the *update matrix* $\\Delta W$, i.e., the difference between the fully fine-tuned weights and the pre-trained weights."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7a1d7e5b",
"metadata": {},
"outputs": [],
"source": [
"def make_approximated_model_2(rank):\n",
" # TODO: Replace the next line with your own code\n",
" raise NotImplementedError"
]
},
{
"cell_type": "markdown",
"id": "1951195c",
"metadata": {},
"source": [
"Implement the function to match the following specification:\n",
"\n",
"> **make_approximated_model_2** (*rank*)\n",
">\n",
"> Takes an integer rank $r$ (*rank*) and returns a version of the head-tuned model in which the weight matrix of every query and value linear layer is replaced by the sum $W_0 + \\Delta W$, where $W_0$ is the weight matrix of the pre-trained model and $\\Delta W$ is the rank-$r$ approximation of the update matrix, i.e., the difference between the fully fine-tuned weights and the pre-trained weights."
]
},
{
"cell_type": "markdown",
"id": "475125cf",
"metadata": {},
"source": [
"Run the next cell to evaluate your model for different rank values. Start with the rank from the approximated model from the previous section and then halve the rank in each step. What is the lowest rank that still gives you a higher accuracy than the head-tuned model?"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2c614186",
"metadata": {},
"outputs": [],
"source": [
"approximated_model_2 = make_approximated_model_2(768)\n",
"\n",
"evaluate(approximated_model_2)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "18430496",
"metadata": {},
"outputs": [],
"source": [
"approximated_model_2 = make_approximated_model_2(3)\n",
"\n",
"evaluate(approximated_model_2)"
]
},
{
"cell_type": "markdown",
"id": "9136defe",
"metadata": {},
"source": [
"## Low-Rank Adaptation (LoRA)"
]
},
{
"cell_type": "markdown",
"id": "4d4fd2c2",
"metadata": {},
"source": [
"In this section, you will implement the LoRA adapters and fine-tune the adapted model."
]
},
{
"cell_type": "markdown",
"id": "e56a4235",
"metadata": {},
"source": [
"### 🎓 Task 8: Implement the adapter\n",
"\n",
"A LoRA adapter implements the forward function\n",
"\n",
"$$\n",
"y = x W_0 + x \\Delta W = x W_0 + x A B\n",
"$$\n",
"\n",
"where $W_0$ is a linear transformation from the pre-trained model and $\\Delta W$ is a learned update matrix, deconstructed into the product $AB$ of two rank-$r$ matrices $A$ and $B$. LoRA scales the update matrix $\\Delta W$ by a factor of $\\alpha / r$, where $\\alpha$ is a hyperparameter. (To keep the formula tidy, we ignore the fact that the linear transformation in the pre-trained model may additionally include a bias.)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f91ec802",
"metadata": {},
"outputs": [],
"source": [
"import torch.nn as nn\n",
"\n",
"\n",
"class LoRA(nn.Module):\n",
" def __init__(self, pretrained, rank=12, alpha=24):\n",
" super().__init__()\n",
" # TODO: Add your code here\n",
"\n",
" def forward(self, x):\n",
" # TODO: Replace the next line with your own code\n",
" raise NotImplementedError"
]
},
{
"cell_type": "markdown",
"id": "dbd075dc",
"metadata": {},
"source": [
"Your code must comply with the following specification:\n",
"\n",
"**__init__** (*self*, *pretrained*, *rank* = 12, *alpha* = 24)\n",
"\n",
"> Initialises the LoRA adapter. This sets up the matrices $A$ and $B$ from the equation above. The matrix $A$ is initialised with random weights from a standard normal distribution; the matrix $B$ is initialised with zeros. The argument *pretrained* is the linear layer from the pre-trained model that should be adapted. The arguments *rank* and *alpha* are the rank $r$ and the hyperparameter $\\alpha$ in the equation above.\n",
"\n",
"**forward** (*self*, *x*)\n",
"\n",
"> Sends an input *x* through the adapter, implementing the equation above."
]
},
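{
"cell_type": "markdown",
"id": "7f4ab9bb",
"metadata": {},
"source": [
"#### 🤞 Test your code\n",
"\n",
"A useful sanity check (a sketch that assumes your `LoRA` class is implemented): because $B$ is initialised with zeros, a freshly created adapter should compute exactly the same outputs as the wrapped pre-trained layer."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "805bcacc",
"metadata": {},
"outputs": [],
"source": [
"import torch\n",
"\n",
"lin = nn.Linear(768, 768)\n",
"adapter = LoRA(lin, rank=12, alpha=24)\n",
"x = torch.rand(2, 768)\n",
"print(torch.allclose(adapter(x), lin(x)))  # expected: True"
]
},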
{
"cell_type": "markdown",
"id": "6b1d6bac",
"metadata": {},
"source": [
"### 🎓 Task 9: Inject the adapter into the pre-trained model\n",
"\n",
"The final step is to construct an adapted model by injecting the LoRA adapters into the pre-trained model."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "16d4f05c",
"metadata": {},
"outputs": [],
"source": [
"def make_lora_model(rank):\n",
" # TODO: Replace the next line with your own code\n",
" raise NotImplementedError"
]
},
{
"cell_type": "markdown",
"id": "0377612a",
"metadata": {},
"source": [
"Implement the function to match the following specification:\n",
"\n",
"> **make_lora_model** (*rank*)\n",
">\n",
"> Returns a model that is identical to the pre-trained model, except that the query and value linear layers have been wrapped in LoRA adapters, and the LoRA adapters and the head layers of the pre-trained model have been trained on the sentiment data. (The other parameters of the pre-trained model are left untouched.) The rank of the adapters is specified by the argument *rank*. The *alpha* value of the adapters is set to twice the rank (a common rule of thumb)."
]
},
{
"cell_type": "markdown",
"id": "6150e7d7",
"metadata": {},
"source": [
"Run the next cell to evaluate your model for $r = 6$ and $\\alpha = 12$. How many trainable parameters does the adapted model have? What accuracy do you get? How do these value relate to the number of trainable parameters and accuracy of the fully fine-tuned model, in terms of percentages?"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2538e3dc",
"metadata": {},
"outputs": [],
"source": [
"lora_model = make_lora_model(6)\n",