From 31b01ee0e43868cdac34f2bd54317347bff33ef0 Mon Sep 17 00:00:00 2001
From: Xuan Gu <xuagu37@gmail.com>
Date: Thu, 20 Oct 2022 13:13:28 +0200
Subject: [PATCH] Update README.md

---
 README.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/README.md b/README.md
index 01ecbc5..025581c 100644
--- a/README.md
+++ b/README.md
@@ -84,8 +84,8 @@ when batch_size is large (16, 32, 64, 128), throughput_amp > throughput_tf32.
 
 - Observation 2: The coefficient of variation of throughput for the 100 iterations is smallest when batch_size = 128.  
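+
+As a reminder, the coefficient of variation (CV) is the standard deviation divided by the mean of the per-iteration throughput. A minimal sketch, using placeholder values rather than measured results:
+
+```python
+import numpy as np
+
+# Per-iteration throughput values (placeholders, not measured data).
+throughputs = np.array([4650.0, 4720.0, 4580.0, 4810.0])
+cv = throughputs.std() / throughputs.mean()
+print(f"coefficient of variation: {cv:.3f}")
+```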
 
-Benchmarking with dim = 2, nodes = 1,2, gpus = 8, batch_size = 128 can be used for node health check.  
-For example, the expected throughput for dim = 2, nodes = 1, gpus = 8, batch_size = 128 would be ? ± ? (TF32) and ? ± ? (AMP).
+**Benchmarking with dim = 2, nodes = 1, 2, gpus = 8, batch_size = 128 can be used for node health checks.  
+For example, the expected throughput for dim = 2, nodes = 1, gpus = 8, batch_size = 128 would be 4700 ± 500 (TF32).**
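+
+A minimal sketch of such a check, assuming the measured throughput has already been parsed from the benchmark output (function and variable names are illustrative, not part of this repo):
+
+```python
+def check_node_health(measured: float, expected: float = 4700.0,
+                      tolerance: float = 500.0) -> bool:
+    """Flag a node whose throughput falls outside the expected range."""
+    healthy = abs(measured - expected) <= tolerance
+    print(f"throughput {measured:.0f} -> {'OK' if healthy else 'SUSPECT'}")
+    return healthy
+
+check_node_health(4650.0)  # within 4700 ± 500 -> OK
+```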
 
 <img src="https://github.com/xuagu37/Benchmark_nnU-Net_for_PyTorch/blob/main/figures/benchmark_throughput_cv.png" width="400">
 
@@ -95,7 +95,7 @@ For example, the expected throughput for dim = 2, nodes = 1, gpus = 8, batch_siz
 
-- Observation 4: Ideally, the improvement of throughput would be linear when the number of GPUs increases. In practice, throughtput stays below the ideal curve when gpus increases.
+- Observation 4: Ideally, the improvement of throughput would be linear when the number of GPUs increases. In practice, throughput stays below the ideal curve as the number of GPUs increases; a scaling-efficiency sketch follows the figure below.
 
-<img src="https://github.com/xuagu37/Benchmark_nnU-Net_for_PyTorch/blob/main/figures/benchmark_gpus_ideal.png" width="400">
+<img src="https://github.com/xuagu37/Benchmark_nnU-Net_for_PyTorch/blob/main/figures/benchmark_throughput_gpus_ideal.png" width="400">
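+
+A minimal sketch of comparing measured throughput against the ideal linear-scaling curve (all numbers are placeholders, not measured results):
+
+```python
+# Ideal scaling: throughput(n) = n * throughput(1).
+measured = {1: 600.0, 2: 1150.0, 4: 2200.0, 8: 4100.0}  # placeholder values
+single_gpu = measured[1]
+for gpus, thr in measured.items():
+    ideal = gpus * single_gpu
+    print(f"{gpus} GPUs: measured {thr:.0f}, ideal {ideal:.0f}, "
+          f"efficiency {thr / ideal:.2f}")
+```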
 
 
 #### Notes
-- 
GitLab