From 74ec80a9409bf59a3dd4de09f0fa231b9415e288 Mon Sep 17 00:00:00 2001
From: Hugo Bjork <hugobjork@me.com>
Date: Wed, 5 Apr 2023 12:04:37 +0200
Subject: [PATCH] Update README

---
 README.md | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/README.md b/README.md
index f094be4..4a9f12e 100644
--- a/README.md
+++ b/README.md
@@ -12,11 +12,11 @@ The dataset used consists of 50k labeled IMDb movie reviews. Due to hardware cons
 
 <center>
 
-|           | Train | Test  |
-| :-------: | :---: | :---: |
-| Positive  | 5,189 | 4,766 |
-| Negative  | 1,707 | 1,613 |
-| **Total** | 9,955 | 3,319 |
+|           | Train | Valid | Test  |
+| :-------: | :---: | :---: | :---: |
+| Positive  | 4,849 | 1,033 | 1,013 |
+| Negative  | 4,442 |  958  |  979  |
+| **Total** | 9,291 | 1,991 | 1,992 |
 
 </center>
 
@@ -36,7 +36,7 @@ our model, precision, recall and f1-score will serve as a complement to the a
 
 As baseline for this project, a regular BERT model has been implemented and fine tuned on the task of classifying the sentiment of IMDb reviews.
 
-Training our baseline model for 1 epoch using a batch size of 32 yielded the following results:
+Training our baseline model for 1 epoch using a batch size of 32 yielded the following average results:
 
 <center>
 
@@ -50,7 +50,7 @@ Training our baseline model for 1 epoch using a batch size of 32 yielded the fol
 
 ### Method 1
 
-Method 1 implements a multi layer perceptron to combine the fine-tuned BERT model from our baseline with VAD-scores from VADER. Training the MLP implementation yielded results as follows:
+Method 1 implements a multi-layer perceptron to combine the fine-tuned BERT model from our baseline with VAD-scores from VADER. Training the MLP implementation yielded the following average results:
 
 <center>
 
@@ -62,7 +62,7 @@ Method 1 implements a multi layer perceptron to combine the fine-tuned BERT mode
 
 ### Method 2
 
-Method 2 assigns weights to the individual results from the fine-tuned BERT and VADER and combines the models with different weight-combinations. The best combination of weights yielded the following results:
+Method 2 assigns weights to the individual predictions from the fine-tuned BERT model and VADER, and combines the models under different weight combinations. The best combination of weights yielded the following average results:
 
 <center>
 
-- 
GitLab
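
The weighted combination described in Method 2 can be sketched as follows. This is a minimal illustration, not the patch author's implementation: it assumes BERT yields a positive-class probability in [0, 1] and that VADER's compound score (which lies in [-1, 1]) is rescaled to the same range before averaging; the function name and the example weight are hypothetical.

```python
def combine_scores(bert_pos_prob: float, vader_compound: float,
                   w_bert: float = 0.7) -> str:
    """Weighted ensemble of a BERT probability and a VADER compound score.

    bert_pos_prob: probability of the positive class from the fine-tuned
        BERT model (assumed to be in [0, 1]).
    vader_compound: VADER compound score in [-1, 1].
    w_bert: weight given to BERT; VADER receives (1 - w_bert).
        The 0.7 default is an arbitrary placeholder, not the best
        combination reported in the README.
    """
    # Rescale VADER's compound score from [-1, 1] to [0, 1]
    # so both models score on the same scale (an assumption).
    vader_pos = (vader_compound + 1.0) / 2.0

    # Weighted average of the two scores.
    score = w_bert * bert_pos_prob + (1.0 - w_bert) * vader_pos

    # Threshold at 0.5 for the final sentiment label.
    return "positive" if score >= 0.5 else "negative"
```

In practice, the "different weight combinations" mentioned above would be searched over a grid (e.g. `w_bert` from 0.0 to 1.0 in steps of 0.1), keeping the value that maximizes validation accuracy.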