Reworked Implementation of RLN
Original Paper:
Incorporating the image formation process into deep learning improves network performance.
Requirements
- apptainer
Containers
This project uses Apptainer as its container engine. The basic build command is:
apptainer build target.sif definition.def
See the Apptainer documentation for more details. Three containers are currently provided:
- basic_image: The last available official build of TensorFlow 1.14 without GPU support.
- basic_image_gpu: The last available official build of TensorFlow 1.14 with GPU support.
- nvidia_image_gpu: The latest build from NVIDIA's TensorFlow 1.x project. It provides modern GPU support for TensorFlow 1.15.
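As an illustration of the build step, and assuming the definition files in this repository are named after the containers (check the repository for the actual .def file names), the NVIDIA-based GPU container could be built like this:

```bash
# Build the NVIDIA-based GPU container from its definition file.
# The .def file name is an assumption; adjust it to the files actually shipped with this repository.
apptainer build nvidia_image_gpu.sif nvidia_image_gpu.def
```

Depending on your Apptainer installation, the build may require --fakeroot or elevated privileges.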
On GPU Acceleration
The container basic_image_gpu is compiled for CUDA 10. This CUDA version is very old and does not support modern GPUs: on such hardware it triggers an on-the-fly recompilation cascade that can take hours, depending on how new the GPU is (newer means longer).
The nvidia_image_gpu container solves this problem. It is provided by NVIDIA and contains TensorFlow 1 compiled against a newer version of CUDA. Its downside is a size of roughly 8 GB, excluding the data. Still, this is the only way to get the code running on modern GPUs without triggering an hours-long recompilation every time.
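To quickly check whether a GPU is actually visible inside a container, a one-liner like the following can be used (a minimal sketch; the container name follows the naming above, and the check uses the TensorFlow 1.x tf.test.is_gpu_available() helper):

```bash
# Run a TensorFlow GPU check inside the container; --nv exposes the host GPU driver.
apptainer exec --nv nvidia_image_gpu.sif \
    python -c "import tensorflow as tf; print(tf.test.is_gpu_available())"
```

If this prints True, TensorFlow can use the GPU and training will not silently fall back to the CPU.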
Dataset & Model
A test dataset and a pretrained model from the authors are available in the respective folders. MATLAB scripts provided by the authors for generating synthetic data are also included; see the corresponding folder for more details.
Status
We have tested that the main script RLN_single.py works as intended in the following modes:
- TR: Training
- TS: Inference
Other modes and other scripts may contain unknown bugs.
The MATLAB scripts in Phantom_generate work and can be used to generate images blurred with custom PSFs. We have only tested this with the provided PSF.tif.
Usage
The main command to use is:
apptainer run --nv --bind {path to your data}:/data {container name}.sif --mode {TR for training, TS for inference}
See --help or RLN_single.py for details on the remaining arguments.
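For example, with a dataset under a hypothetical path /home/user/rln_data (any path following the structure described below works) and the NVIDIA container, training and inference would be started roughly like this:

```bash
# Training (TR): reads train/input and train/ground_truth, writes checkpoints to train/model_rl
apptainer run --nv --bind /home/user/rln_data:/data nvidia_image_gpu.sif --mode TR

# Inference (TS): reads test/input and the latest checkpoint from train/model_rl,
# writes results to test/output_rl
apptainer run --nv --bind /home/user/rln_data:/data nvidia_image_gpu.sif --mode TS
```

Additional arguments may be required depending on your data; again, see --help.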
Data Structure
The following folder structure must be created before using the container or the script:
{path to your data}
├── logs
├── test
│   ├── ground_truth
│   ├── input
│   └── output_rl
└── train
    ├── ground_truth
    ├── input
    ├── model_rl
    └── output_rl
| Folder | Usage |
|---|---|
| logs | Used to save tf.summary objects. Can be viewed with TensorBoard. |
| test | Validation & Inference |
| test/ground_truth | Used in Validation. Should contain ground truths corresponding to the inputs. |
| test/input | Used in Validation & Inference. Should contain the input files. |
| test/output_rl | Output location for Inference & Validation. |
| train | Training |
| train/ground_truth | Ground truths for Training. |
| train/input | Blurred inputs for Training. |
| train/model_rl | Output for training checkpoints. Also used in Inference & Validation to load the latest weights. |
| train/output_rl | Output for images generated during training. |
The ground truth and blurred versions of the images should have the same names but be in different folders.
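As a convenience, the whole layout can be created in one step, for example with a short shell snippet (the data path is again only illustrative):

```bash
# Create the folder layout expected by RLN_single.py; adjust DATA to your dataset location.
DATA=/home/user/rln_data
mkdir -p "$DATA"/logs \
         "$DATA"/test/{ground_truth,input,output_rl} \
         "$DATA"/train/{ground_truth,input,model_rl,output_rl}
```

The logs folder can then be inspected with tensorboard --logdir "$DATA"/logs, either on the host (if TensorBoard is installed there) or from inside one of the containers.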