diff --git a/README.md b/README.md
index cf98fefd43afcd1dc877af78df0cd8519a0b88c5..253cfae03c62489a2bc911c2db2c45bd1d0521b4 100644
--- a/README.md
+++ b/README.md
@@ -77,7 +77,7 @@ TF32 (TensorFloat32) mode is for accelerating FP32 convolutions and matrix multi
 AMP (Automatic Mixed Precision) offers significant computational speedup by performing most operations in half-precision (FP16), while keeping a minimal set of critical parts of the network in single-precision (FP32) to retain as much information as possible.
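+
+As a rough illustration (not the benchmark's actual code), TF32 and AMP can be enabled in PyTorch along these lines; `model`, `batch`, `target`, `optimizer`, and `loss_fn` are placeholders:
+
+```python
+import torch
+
+# TF32: accelerate FP32 matmuls and convolutions on Ampere (and newer) GPUs.
+torch.backends.cuda.matmul.allow_tf32 = True
+torch.backends.cudnn.allow_tf32 = True
+
+# AMP: run most ops in FP16, keep master weights and critical ops in FP32.
+scaler = torch.cuda.amp.GradScaler()
+
+def train_step(model, batch, target, optimizer, loss_fn):
+    optimizer.zero_grad(set_to_none=True)
+    with torch.cuda.amp.autocast():        # FP16 where it is numerically safe
+        loss = loss_fn(model(batch), target)
+    scaler.scale(loss).backward()          # loss scaling avoids FP16 underflow
+    scaler.step(optimizer)
+    scaler.update()
+```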
 
 We run 100 iterations for each set of parameters.
-- Observation 1: when batch_size is small (1, 2, 4, 8), throughput_amp ≈ throughput_tf32;  
+**Observation 1**: when batch_size is small (1, 2, 4, 8), throughput_amp ≈ throughput_tf32;  
 when batch_size is large (16, 32, 64, 128), throughput_amp > throughput_tf32.  
 
 <img src="https://github.com/xuagu37/Benchmark_nnU-Net_for_PyTorch/blob/main/figures/benchmark_throughput_batch_size.png" width="400">
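+
+A minimal sketch of how a throughput number of this kind can be measured, assuming throughput is defined as samples processed per second over the timed iterations (`step_fn` is a placeholder for one training step, not the benchmark's own code):
+
+```python
+import time
+import torch
+
+def measure_throughput(step_fn, batch_size, iterations=100, warmup=10):
+    """Samples per second over `iterations` timed training steps (sketch only)."""
+    for _ in range(warmup):            # warm-up steps are excluded from timing
+        step_fn()
+    torch.cuda.synchronize()           # make sure queued GPU work has finished
+    start = time.perf_counter()
+    for _ in range(iterations):
+        step_fn()
+    torch.cuda.synchronize()
+    elapsed = time.perf_counter() - start
+    return batch_size * iterations / elapsed
+```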
@@ -90,11 +90,11 @@ when batch_size is large (16, 32, 64, 128), throughput_amp > throughput_tf32.
 - The expected throughput for dim = 2, nodes = 1, gpus = 8, batch_size = 128 would be 4700 ± 500 (TF32).
 - The expected throughput for dim = 2, nodes = 2, gpus = 16, batch_size = 128 would be 9250 ± 150 (TF32).
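+
+Multi-node, multi-GPU throughput of this kind comes from data-parallel training with one process per GPU. A minimal PyTorch DistributedDataParallel setup, assuming the processes are launched with torchrun rather than the benchmark's own scripts, looks roughly like this:
+
+```python
+import os
+import torch
+import torch.distributed as dist
+from torch.nn.parallel import DistributedDataParallel as DDP
+
+def setup_ddp(model):
+    """Wrap a model for multi-GPU / multi-node data parallelism (sketch only)."""
+    dist.init_process_group(backend="nccl")      # NCCL backend for GPU communication
+    local_rank = int(os.environ["LOCAL_RANK"])   # set by torchrun for each process
+    torch.cuda.set_device(local_rank)
+    return DDP(model.cuda(), device_ids=[local_rank])
+```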
 
-- Observation 3: Ideally, the improvement of throughput would be linear when batch_size increases. In practice, throughtput stays below the ideal curve when batch_size > 16.
+**Observation 3**: Ideally, throughput would scale linearly as batch_size increases. In practice, throughput stays below the ideal curve when batch_size > 16 (scaling efficiency is quantified in the sketch at the end of this section).
 
 <img src="https://github.com/xuagu37/Benchmark_nnU-Net_for_PyTorch/blob/main/figures/benchmark_throughput_batch_size_ideal.png" width="400">
 
-- Observation 4: Ideally, the improvement of throughput would be linear when the number of GPUs increases. In practice, throughtput stays below the ideal curve when the number of gpus increases.
+**Observation 4**: Ideally, throughput would scale linearly as the number of GPUs increases. In practice, throughput stays below the ideal curve as the number of GPUs increases.
 
 <img src="https://github.com/xuagu37/Benchmark_nnU-Net_for_PyTorch/blob/main/figures/benchmark_throughput_gpus_ideal.png" width="400">
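+
+One way to quantify how far the measured curve falls below the ideal one is scaling efficiency: the measured speedup divided by the ideal (linear) speedup. Using the expected TF32 numbers above, going from 8 GPUs (1 node) to 16 GPUs (2 nodes) at batch_size = 128 gives 9250 / 4700 ≈ 1.97×, i.e. about 98% of the ideal 2× speedup. A small helper for this comparison (a sketch, not part of the benchmark scripts):
+
+```python
+def scaling_efficiency(throughput_base, throughput_scaled, scale_factor):
+    """Measured speedup relative to ideal linear scaling (1.0 = perfectly linear)."""
+    return throughput_scaled / (throughput_base * scale_factor)
+
+# Expected TF32 throughputs from above: dim = 2, batch_size = 128, 8 -> 16 GPUs.
+print(scaling_efficiency(4700, 9250, 2))   # ~0.98
+```
+
+The same comparison applies to the batch_size curves in Observation 3, with the ideal curve extrapolated linearly from the smallest batch size.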