Reworked Implementation of RLN

This is the 2D version of the original implementation.

Original Paper:

Incorporating the image formation process into deep learning improves network performance.

Requirements

  • gmerlin access
  • apptainer

Tested Environment:

Containers

This project uses apptainer as the main container engine. Basic commands are:

apptainer build target.sif definition.def 

apptainer run target.sif

See the Apptainer documentation for more details. There are currently three containers provided:

  • basic_image: the last available official build of TensorFlow 1.14 without GPU support.
  • basic_image_gpu: the last available official build of TensorFlow 1.14 with GPU support.
  • nvidia_image_gpu: the latest build of the NVIDIA TensorFlow 1.x project; provides modern GPU support for TensorFlow 1.15.

On GPU Acceleration

The container basic_image_gpu requires CUDA 10.x to run. This is a very old version and does not support modern AI acceleration: A-series cards do not support it at all, and other cards are severely limited under CUDA 10. We therefore suggest using the nvidia container. Its downside is a size of ca. 8 GB without the data; still, the speedup on the gwendolen A100s far outweighs the time lost to data transfers and container building. If you are going to use basic_image_gpu, you need to load the cuda/10.0.130 module from Pmodules. The nvidia container does not require any extra modules.
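For example (a sketch; the .sif names are assumed from the container names above, and --nv is Apptainer's flag for NVIDIA GPU passthrough):

module load cuda/10.0.130                 # only needed for basic_image_gpu
apptainer run --nv basic_image_gpu.sif

apptainer run --nv nvidia_image_gpu.sif   # no extra modules required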

Dataset & Model

A test dataset and a model from the authors are available in the respective folders. The authors also provide MATLAB scripts that generate synthetic data; see the folder for more details.

Usage

Training

Before training an RLN model, we recommend creating the following folder structure (a script that creates this layout is sketched after the tree):

Main folder (rename as you like):
  -- train
        --input
        --(input2) for dual-input mode
        --ground truth
        --model_rl
        --output_rl
        --labels.txt (containing the input raw name)
  -- test
        --input
        --(input2) for dual-input mode
        --(ground truth) for validation
        --output_rl
        --labels.txt

The RLN code is in the (RLN code) folder. Configurations are provided for single-input RLN and dual-input RLN.

makedata3D_train_single.py and makedata3D_train_dual.py handle data loading and preprocessing for single-input and dual-input training, respectively; makedata3D_test_single.py and makedata3D_test_dual.py do the same for testing.
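As implied by the folder layout, the loaders pair files by name via labels.txt. A hypothetical sketch of that pairing (not the authors' code; paths and the pairing logic are assumptions based on the layout above):

import os

data_dir = '/home/liyue/newdata1/'  # the main folder, as configured below
with open(os.path.join(data_dir, 'train', 'labels.txt')) as f:
    names = [line.strip() for line in f if line.strip()]
# input and ground truth images share the same file name
pairs = [(os.path.join(data_dir, 'train', 'input', n),
          os.path.join(data_dir, 'train', 'ground truth', n))
         for n in names]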

After preparing the dataset and the folders above, you can set the main parameters for training in RLN-single.py or RLN-dual.py:

mode: TR: train; VL: validation, with known ground truth; TS: test, no ground truth; TSS: test with stitching (only in RLN-single)

Relative folders:

data_dir = '/home/liyue/newdata1/'  # the main folder including the train and test folders
model_path = '/home/liyue/newdata1/train/model_rl/new_single_used/'  # the folder where the trained model is saved
train_output = '/home/liyue/newdata1/train/output_rl/'  # the folder where training output is saved
test_output = '/home/liyue/newdata1/test/output_rl/'  # the folder for validation or test output

train_iter_num = iter_per_epoch * epochs
test_iter_num = testing_data_numbers
train_batch_size = as you want
test_batch_size = 1

crop_data_size = 365  # the maximum data size for cropping and stitching
normal_pmin = 0.01    # the minimum percentile for normalization
normal_pmax = 99.9    # the maximum percentile for normalization
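A sketch of what this percentile normalization typically computes (a common pattern in image restoration; the authors' exact implementation may differ):

import numpy as np

def percentile_normalize(img, pmin=0.01, pmax=99.9):
    # map the [pmin, pmax] percentile range of the image to [0, 1]
    lo, hi = np.percentile(img, (pmin, pmax))
    return (img - lo) / (hi - lo + 1e-20)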

You can set the learning rate in:

self.learning_rate = tf.train.exponential_decay(0.02, self.global_step, 1000, 0.9, staircase=False)
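With staircase=False the decay is continuous: learning_rate = 0.02 * 0.9^(global_step / 1000), i.e. the rate shrinks by a factor of 0.9 every 1000 steps.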

After setting these parameters, you can run python RLN-single.py or python RLN-dual.py.

During training, put the training dataset into the input folder and the ground truth folder; each input/ground truth (input2) image pair shares the same file name. labels.txt lists the names of all image pairs. The model_rl folder is used to save the trained model. The output_rl folder is used to save output during training; this is not necessary and only serves to monitor the training procedure.
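For example, a labels.txt for three image pairs might look like this (file names are hypothetical; use whatever your raw files are called):

cell_0001.tif
cell_0002.tif
cell_0003.tif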

For 200 epochs with 100 iterations per epoch, training the model takes 2-3 hours.

Applying the Model

During testing, put the test dataset into the input folder and list the test file names in labels.txt. The output_rl folder is used to save the test results. If you have ground truth, you can run validation to measure the difference between the output and the ground truth.
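The README does not name a validation metric; a minimal sketch using MSE and PSNR (common choices for image restoration, not necessarily what the authors use):

import numpy as np

def mse(output, ground_truth):
    # mean squared error between two images of equal shape
    return np.mean((output.astype(np.float64) - ground_truth.astype(np.float64)) ** 2)

def psnr(output, ground_truth, data_range=1.0):
    # peak signal-to-noise ratio in dB
    m = mse(output, ground_truth)
    return float('inf') if m == 0 else 10.0 * np.log10(data_range ** 2 / m)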

You need to set the following parameters:

mode
model_path
test_output
test_iter_num = testing_data_numbers
normal_pmin = 0.01  # the minimum percentile for normalization
normal_pmax = 99.9  # the maximum percentile for normalization

After setting these parameters, you can run python RLN-single.py or python RLN-dual.py.