diff --git a/README.md b/README.md
index c76ba5c812f4438b6bacb4f58581f224ea4690f0..75e444077a4f64a958b062bc0bb7a3d45b5e324d 100644
--- a/README.md
+++ b/README.md
@@ -15,38 +15,38 @@ sudo apt install -y ./*.deb
 Please note that enroot is already installed on Berzelius. You can skip this installation step if you plan to use it on Berzelius.
 
 ## Set up Nvidia credentials
+This step is necessary for importing container images from Nvidia NGC.
 
-Complete step [4.1](https://docs.nvidia.com/ngc/ngc-overview/index.html#account-signup) and [4.3](https://docs.nvidia.com/ngc/ngc-overview/index.html#generating-api-key). Save the API key.  
+- Complete steps [4.1](https://docs.nvidia.com/ngc/ngc-overview/index.html#account-signup) and [4.3](https://docs.nvidia.com/ngc/ngc-overview/index.html#generating-api-key), then save the API key.
 
-Add the API key to the config file at ```~/.config/enroot/.credentials  ```  
+- Add the API key to the config file at ```~/.config/enroot/.credentials```
 ```
 machine nvcr.io login $oauthtoken password your_api_key
 machine authn.nvidia.com login $oauthtoken password your_api_key
 ```
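As a sketch, the credentials file can be created from the shell; ```your_api_key``` is a placeholder for the key you saved, and tightening the file permissions is a sensible precaution since the file holds a secret:

```shell
# Create the enroot credentials file ("your_api_key" is a placeholder)
mkdir -p ~/.config/enroot
cat > ~/.config/enroot/.credentials <<'EOF'
machine nvcr.io login $oauthtoken password your_api_key
machine authn.nvidia.com login $oauthtoken password your_api_key
EOF
# Keep the API key readable only by you
chmod 600 ~/.config/enroot/.credentials
```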
 
-Set the config path by adding the line to ```~/.bashrc```
+- Set the config path by adding the following line to ```~/.bashrc``` (replace ```xuagu37``` with your own username)
 ```
 export ENROOT_CONFIG_PATH=/home/xuagu37/.config/enroot
 ```
 
-To make the path valid
+- Reload your shell configuration so the path takes effect
 ```
 source ~/.bashrc
 ```
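To confirm the variable is set in the current shell, you can echo it; the path below is the example value from above, and on your account it will use your own username:

```shell
# Assumed example value; on your account the path will differ
export ENROOT_CONFIG_PATH=/home/xuagu37/.config/enroot
echo "$ENROOT_CONFIG_PATH"
```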
-This step is necessary for importing container images from Nvidia NGC.
 
 ## Import container images
 
 You can import a container image either from Nvidia NGC or from the official PyTorch/TensorFlow repositories on Docker Hub.
 
-From Nvidia NGC 
+- From Nvidia NGC 
 ```
 enroot import 'docker://nvcr.io#nvidia/pytorch:22.09-py3'
 enroot import 'docker://nvcr.io#nvidia/tensorflow:22.11-tf2-py3'
 ```
 For other versions, please see the release notes for [PyTorch](https://docs.nvidia.com/deeplearning/frameworks/pytorch-release-notes/index.html) and [TensorFlow](https://docs.nvidia.com/deeplearning/frameworks/tensorflow-release-notes/index.html).
 
-From Pytorch/Tensorflow official Docker Hub repositories
+- From the official PyTorch/TensorFlow Docker Hub repositories
 ```
 enroot import 'docker://pytorch/pytorch:1.12.1-cuda11.3-cudnn8-devel'
 enroot import 'docker://tensorflow/tensorflow:2.11.0-gpu'
@@ -62,19 +62,19 @@ enroot create --name nvidia_pytorch_22.09 nvidia+pytorch+22.09-py3.sqsh
 
 ## Start a container
 
-As the root user
+- As the root user
 ```
 enroot start --root --rw --mount /proj/nsc_testing/xuan:/proj/nsc_testing/xuan nvidia_pytorch_22.09  
 ```
 
-As a non-root user
+- As a non-root user
 ```
 enroot start --rw --mount /proj/nsc_testing/xuan:/proj/nsc_testing/xuan nvidia_pytorch_22.09  
 ```
 
 The ```--mount``` flag mounts a local directory into your container: the path before the colon is the host directory, and the path after it is the mount point inside the container.
 
-You can also start a container and run your command at the same time.
+- You can also start a container and run a command in it at the same time.
 ```
 enroot start --rw --mount /proj/nsc_testing/xuan:/proj/nsc_testing/xuan nvidia_pytorch_22.09 sh -c 'python path_to_your_script.py' 
 ```