# Reworked Implementation of RLN
This is the 3d version of the tf2 implementation.
## Missing:
- a way to read images
- the code has never been tested
Original Paper:
This repository has three main branches:
| Name | Explanation |
|:-------------:|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| main | The original code from the paper with minor bug fixes & quality of life improvements. |
| 2d-tf2 | This branch is a 2d implementation of the same network design, but in TensorFlow 2. It is still based on the code from the original paper and uses parts of the original logic. This contains the main results of the project. |
| **3d-tf2** | This branch contains the same network as 2d-tf2, but with the functions replaced by their 3d equivalents. **Untested and still needs work.** |
## Requirements
- gmerlin access or an nvidia gpu with cuda & drivers installed.
- apptainer
## Containers
This project uses [apptainer](https://apptainer.org/docs/user/main/introduction.html) as the main container engine. Basic commands are:
```shell
apptainer build target.sif definition.def # build definition.def into target.sif
apptainer exec target.sif {Your Command}  # run {Your Command} in the container
apptainer run target.sif #run the preconfigured command of the container.
```
See the documentation for more details. There is currently one container provided for the 2d branch:
- tf2_image_gpu.def: This is an apptainer container version of the tf2 docker image. The container itself does not contain any code. The code should be mounted using the --bind option and run using the "exec" command.
## Usage
To use the RLN on your data you need to train and adapt the model to the noise characteristics of your camera or microscope. This can be done by the following steps:
1. Determine the PSF of the microscope or camera.
2. Use the Python script in [Phantom_generate](Phantom_generate) to generate phantoms that were blurred using the PSF of your microscope.
3. Train the network using the synthetic data generated by the Python script.
4. Use the network on real world images.
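The core idea of steps 2 and 3 can be sketched in a few lines. The snippet below is an illustrative sketch only, not the project's actual phantom generator: it assumes a simple point-source phantom and a Gaussian stand-in for the measured PSF.

```python
import numpy as np

def gaussian_psf(shape, sigma):
    """Centered, normalized Gaussian kernel standing in for the measured PSF."""
    y = np.arange(shape[0]) - shape[0] // 2
    x = np.arange(shape[1]) - shape[1] // 2
    yy, xx = np.meshgrid(y, x, indexing="ij")
    psf = np.exp(-(yy**2 + xx**2) / (2 * sigma**2))
    return psf / psf.sum()

def blur(phantom, psf):
    """Circular convolution of phantom with psf via the FFT."""
    otf = np.fft.fft2(np.fft.ifftshift(psf))
    return np.real(np.fft.ifft2(np.fft.fft2(phantom) * otf))

# toy phantom: a handful of bright point sources on a dark background
rng = np.random.default_rng(0)
ground_truth = np.zeros((64, 64))
ys, xs = rng.integers(0, 64, 10), rng.integers(0, 64, 10)
ground_truth[ys, xs] = 1.0
blurred_input = blur(ground_truth, gaussian_psf((64, 64), sigma=2.0))
# (blurred_input, ground_truth) is one synthetic training pair
```

The network then learns the mapping from `blurred_input` back to `ground_truth`, so that it can later be applied to real camera images degraded by the same PSF.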
### Synthetic Data
There is a python [script](Phantom_generate/synthetic_data.py) to generate synthetic training data and blur it with the PSF of the camera. See the script for more details. There are also helper scripts available to determine the PSF of the camera from images of pinhole light sources. See [this](helper scripts/PSF/VC MIPI/README.md) for more details.
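The basic idea behind estimating a PSF from a pinhole image is to crop around the brightest spot and normalize it. A minimal sketch of that idea follows; the actual helper scripts are more involved, and the crop size and background handling here are assumptions:

```python
import numpy as np

def estimate_psf(pinhole_image, half=8):
    """Estimate a PSF by cropping around the brightest pixel of a pinhole
    image, subtracting a crude background estimate, and normalizing to unit sum."""
    img = pinhole_image.astype(np.float64)
    img = np.clip(img - np.median(img), 0, None)  # median as background estimate
    cy, cx = np.unravel_index(np.argmax(img), img.shape)
    crop = img[cy - half:cy + half + 1, cx - half:cx + half + 1]
    return crop / crop.sum()

# toy pinhole frame: dark background with one blurred spot
yy, xx = np.meshgrid(np.arange(100), np.arange(100), indexing="ij")
frame = np.full((100, 100), 10.0)
frame += 200 * np.exp(-((yy - 50) ** 2 + (xx - 50) ** 2) / (2 * 3.0 ** 2))
psf = estimate_psf(frame)
```

The resulting `psf` array can then be fed to the phantom-blurring step above in place of an analytic kernel.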
### Training
#### Data Structure
Before training, we need to create our folder structure. This can be on merlin in the /data directory, or locally if you are using a local machine.
```
{path to your data}
├── logs
├── test
│ ├── ground_truth
│ ├── input
│ └── output_rl
└── train
├── ground_truth
├── input
├── model_rl
└── output_rl
```
| Folder | Usage |
|--------------------|:------------------------------------------------------------------------------------------------|
| **logs** | Used to save tf.summary objects. Can be used with tensorboard. |
| **test** | Validation & Inference |
| test/ground_truth | Used in Validation. Should contain ground truths corresponding to inputs. |
| test/input | Used in Validation & Inference. Should contain input files. |
| test/output_rl | Output location for Inference & Validation. |
| **train** | Training |
| train/ground_truth | Ground Truths for Training |
| train/input | Blurred Inputs for Training |
| train/model_rl | Output for training checkpoints. Also used in Inference & Validation to get the latest weights. |
| train/output_rl | Output for generated images during training. |
The ground truth and blurred versions of the images should have the same names but be in different folders.
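Creating this layout by hand is error-prone; a short helper script can set it up. This is a convenience sketch, not part of the repository, and the example root folder name is made up:

```python
from pathlib import Path

# the subfolders expected by the RLN scripts, as listed in the table above
SUBDIRS = [
    "logs",
    "test/ground_truth", "test/input", "test/output_rl",
    "train/ground_truth", "train/input", "train/model_rl", "train/output_rl",
]

def create_layout(root):
    """Create the RLN folder structure under `root`."""
    root = Path(root)
    for sub in SUBDIRS:
        (root / sub).mkdir(parents=True, exist_ok=True)
    return root

create_layout("my_rln_data")  # substitute your own {path to your data}
```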
#### Running the scripts
After creating the folders as in Section "Data Structure", you should copy the output of the synthetic data script into their respective folders.
You should also copy the python scripts in [RLN_code](RLN_code) to {path to your data}.
Then you can use:
```shell
apptainer exec --nv --bind {path to your data}:/data {container name}.sif python /data/RLN_x_x.py
```
to run the script RLN_x_x.py in the container.
The **--bind** option makes your {path to your data} accessible in the container as /data. Because you copied the scripts into {path to your data}, they are available inside the container as well, and they are programmed to use this directory as their data directory.
The **--nv** option tells apptainer to make the nvidia driver and cuda binaries available to the container.
The command after [...].sif is run _inside_ the container. In this case we use
```shell
python /data/RLN_x_x.py
```
to run the desired python script. We can also pass arguments to the python script in the usual way.
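For illustration, a script invoked this way might pick up its arguments roughly as follows. This is a hypothetical sketch: the `--mode` flag mirrors the `--mode TR` invocation shown in the monitoring section, but the mode codes and the exact argument handling of the real RLN scripts are assumptions.

```python
import argparse

def parse_args(argv=None):
    """Hypothetical argument parsing for an RLN script run inside the container."""
    parser = argparse.ArgumentParser(
        description="RLN entry point (illustrative sketch)")
    parser.add_argument("--mode", choices=["TR", "TS", "INF"], default="TR",
                        help="TR = train, TS = test, INF = inference (assumed codes)")
    parser.add_argument("--data-dir", default="/data",
                        help="data root; /data is where --bind mounts your folder")
    return parser.parse_args(argv)

args = parse_args(["--mode", "TR"])
```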
The scripts:
- [RLN_single_Train.py](RLN_code/RLN_single_Train.py)
- [RLN_single_Test.py](RLN_code/RLN_single_Test.py)
- [RLN_single_Inference.py](RLN_code/RLN_single_inference.py)
are currently available. They depend on [RLN_single_Model.py](RLN_code/RLN_single_Model.py) and [TF2ImageHelpers.py](RLN_code/TF2ImageHelpers.py) being available in the same directory to function.
#### Monitoring the Progress
Using tensorboard it is possible to monitor the training progress in real-time from a web browser.
You need to have TensorFlow installed in order to use TensorBoard:
1. Run the training on merlin.
```shell
apptainer run --nv --bind {path to your data}:/data nvidia_image_gpu.sif --mode TR
```
2. Mount {path to your data}/logs on merlin to a local folder using sshfs:
```shell
sshfs user@merlin-l-001.psi.ch:{path to your data}/logs mnt-merlin
```
3. Run tensorboard in a shell that has access to tensorflow (you may need to source the virtual environment):
```shell
tensorboard --logdir mnt-merlin
```
4. Now you can open the TensorBoard page (by default at http://localhost:6006) in a browser on your computer.