From a89f0ee22bb5038b0872fde2a47bbc3e422e2cfc Mon Sep 17 00:00:00 2001
From: guney <gueney.tekin@psi.ch>
Date: Mon, 22 Apr 2024 17:42:40 +0200
Subject: [PATCH] improve documentation

---
 README.md | 30 +++++++++++++++++++-----------
 1 file changed, 19 insertions(+), 11 deletions(-)

diff --git a/README.md b/README.md
index 78c42cf..db4f4e5 100644
--- a/README.md
+++ b/README.md
@@ -7,6 +7,14 @@ Original Paper:
 
 
 
+This repository has three main branches:
+
+|   Name   | Explanation                                                                                                                                                                                                                    |
+|:--------:|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| **main** | The original code from the paper with minor bug fixes & quality-of-life improvements.                                                                                                                                          |
+|  2d-tf2  | This branch is a 2d implementation of the same network design, but in TensorFlow 2. It is still based on the code from the original paper and uses parts of the original logic. This contains the main results of the project. |
+|  3d-tf2  | This branch contains the same network as 2d-tf2, but with the functions replaced by their 3d equivalents. **Untested and still needs work.**                                                                                   |
+
 ## Requirements
 
 - gmerlin access or an nvidia gpu with cuda & drivers installed.
@@ -15,28 +23,28 @@ Original Paper:
 
 ## Containers
 This project uses [apptainer](https://apptainer.org/docs/user/main/introduction.html) as the main container engine. Basic commands are:
+```shell
+apptainer build target.sif definition.def  # build definition.def into target.sif
+apptainer exec target.sif {Your Command}   # run {Your Command} inside the container
+apptainer run target.sif                   # run the container's preconfigured command
+```
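+On a machine with an NVIDIA GPU, a typical invocation might look like the following. The data path is a placeholder; `--nv` and `--bind` are standard apptainer flags:
+
+```shell
+# --nv exposes the host NVIDIA driver inside the container;
+# --bind mounts a host directory at /data in the container
+apptainer exec --nv --bind /path/to/data:/data nvidia_image_gpu.sif \
+    python RLN_code/RLN_single.py
+```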
 
-    apptainer build target.sif definiton.def 
-
-See Apptainer documentation for more details. There are three containers currently provided:
-- basic_image: This is the last available official build of tensorflow 1.14 without gpu support. 
-- basic_image_gpu: This is the last available official build of tensorflow 1.14 *with* gpu support.
-- **nvidia_image_gpu** (use this one): This is the latest build of the nvidia tensorflow 1.x project. Provides modern gpu support for tensorflow 1.15
 
 See documentation for more details. There are three containers currently provided:
+
 - [basic_image](basic_image.def): This is the last available official build of tensorflow 1.14 without gpu support. 
 - [basic_image_gpu](basic_image_gpu.def): This is the last available official build of tensorflow 1.14 *with* gpu support.
 - [nvidia_image_gpu](nvidia_image_gpu.def): This is the latest build of the nvidia tensorflow 1.x project. Provides modern gpu support for tensorflow 1.15
 
 ### On GPU Acceleration
-The container basic_image_gpu is compiled for cuda 10. This is a very old version and does not support modern GPUs. This triggers an on-the-fly recompilation cascade that can take hours depending on how new the GPU is (newer means longer).
+The container basic_image_gpu is compiled for CUDA 10. This version is too old to support modern GPUs directly, which triggers an on-the-fly recompilation cascade that can take hours (the newer the GPU, the more kernels need recompiling).
 
 The nvidia container solves this problem. This container is provided by Nvidia and contains TF 1 that was compiled for a newer version of CUDA. It has the downside of being ca. 8 GB **without** the data. Still, this is the only way to get the code running on modern gpus without triggering hours long recompilation every time 
 
 
 
 ## Dataset & Model
-There is test dataset and model available from the authors in the respective folders. There are also matlab scripts provided by the authors that generate synthetic data. See the folder for more details.
+A test dataset and pretrained weights from the authors are available in the respective folders. There are also matlab scripts provided by the authors that generate synthetic data. See the folder for more details.
 
 ## Status
 We have tested that the main script **[RLN_single.py](RLN_code/RLN_single.py)** is working as intended for the modes:
@@ -50,8 +58,8 @@ The matlab scripts in **[Phantom_generate](Phantom_generate)** work and can be u
 
 
 ## Usage 
-To use the RLN method on your data you need to train and adapt the model to the noise characteristics of your microscope. This can be done by the following steps:
-1. Determine the PSF function of the microscope. 
+To use the RLN network on your data, you need to train the model and adapt it to the noise characteristics of your microscope. This can be done with the following steps:
+1. Determine the PSF of the microscope. 
 2. Use the Matlab scripts in Phantom_generate to generate phantoms that were blured using the PSF of your microscope.
 3. Train the network using the synthetic data generated by the Matlab scripts.
 4. Use the network on real world images.
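+The steps above can be sketched as a shell session. The exact MATLAB invocation and the arguments that `RLN_single.py` expects are assumptions here; adjust them to your setup.
+
+```shell
+# 2. Generate PSF-blurred phantoms (run from the Phantom_generate directory;
+#    assumes MATLAB's -batch mode is available)
+matlab -batch "Phantom_generate"
+
+# 3./4. Train on the synthetic data, then run inference on real images,
+#       inside the GPU container (mode-selection arguments omitted here)
+apptainer exec --nv nvidia_image_gpu.sif python RLN_code/RLN_single.py
+```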
@@ -99,7 +107,7 @@ DATA
 └── input_no_noise
     └── 1.tif
 ```
-The number of images depends on a variable in [Phantom_generate.m](Phantom_generate/Phantom_generate.m). We suggest 100 images. This will generate 100 separate .tif files for each output category. For the training we used the output in "input_noise". These have an added gaussian noise on top of the PSF. In our experience this helps stop the network from overfitting.
+The number of images depends on a variable in [Phantom_generate.m](Phantom_generate/Phantom_generate.m). We suggest 200 images; this generates 200 separate .tif files for each output category. For training we used the output in "input_noise", which has Gaussian noise added on top of the PSF blur. In our experience this helps keep the network from overfitting.
 #### Running the scripts
 After creating the files as in Section "Data Structure", you should copy DATA/ground_truth/* to {path to your data}/train/ground_truth and DATA/input_noise/ to {path to your data}/train/input/. Then you can use:
 ```shell
-- 
GitLab