
Commit 97719718, authored by tekin_g: improve documentation (cherry picked from commit a89f0ee2; parent 42ac85aa)
## Requirements
- gmerlin access or a local nvidia gpu with cuda & drivers installed.
- apptainer (see [Containerized Usage](#containerized-usage)) **or** local TensorFlow installation (see [Local Usage](#local-usage))
## Usage Overview
To use the RLN on your data, you need to train the model and adapt it to the noise characteristics of your camera or microscope. This is done in the following steps:
1. Determine the PSF of the microscope or camera.
2. Use the Python script in [Phantom_generate](Phantom_generate) to generate phantoms that are blurred using the PSF of your microscope.
3. Train the network using the synthetic data generated by the Python script.
4. Use the network on real world images.
# Local Usage
To use the scripts with a local Python installation, you need TensorFlow 2; follow this [guide](https://www.tensorflow.org/install/pip) for details.
You may want to install TensorFlow 2 in a virtual environment.
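For example, a minimal setup in a virtual environment could look like this (the environment name `rln-env` is arbitrary):
```shell
python3 -m venv rln-env          # create an isolated environment
source rln-env/bin/activate      # activate it
pip install --upgrade pip
pip install tensorflow           # TensorFlow 2; see the linked guide for GPU-specific steps
```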
## Usage
### Synthetic Data
There is a Python [script](Phantom_generate/synthetic_data.py) to generate synthetic training data and blur it with the PSF of the camera; see the script itself for more details. Helper scripts are also available to determine the camera's PSF from images of pinhole light sources; see [this](helper scripts/PSF/VC MIPI/README.md) README for more details.
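For example (a sketch; whether the script needs arguments, such as the PSF file or output paths, is not documented here, so check the script header first):
```shell
python Phantom_generate/synthetic_data.py   # generates blurred phantom / ground-truth pairs
```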
### Training
#### Data Structure
Before training, you need to create the folder structure below. It can be located in the /data directory on merlin, or locally if you are using a local machine:
```
{path to your data}
├── logs
├── test
│ ├── ground_truth
│ ├── input
│ └── output_rl
└── train
├── ground_truth
├── input
├── model_rl
└── output_rl
```
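The whole layout can be created in one step, for example (a sketch; replace {path to your data} with the actual path):
```shell
mkdir -p "{path to your data}"/logs
mkdir -p "{path to your data}"/test/{ground_truth,input,output_rl}
mkdir -p "{path to your data}"/train/{ground_truth,input,model_rl,output_rl}
```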
| Directory | Usage |
|--------------------|:------------------------------------------------------------------------------------------------|
| **logs** | Used to save tf.summary objects. Can be used with tensorboard. |
| **test** | Validation & Inference |
| test/ground_truth | Used in Validation. Should contain ground truths corresponding to inputs. |
| test/input | Used in Validation & Inference. Should contain input files. |
| test/output_rl | Output location for Inference & Validation. |
| **train** | Training |
| train/ground_truth | Ground Truths for Training |
| train/input | Blurred Inputs for Training |
| train/model_rl | Output for training checkpoints. Used also in Inference & Validation to get the latest weights. |
| train/output_rl | Output for generated images during training. |
The ground truth and blurred versions of the images should have the same names but be in different folders.
You may need to change the `base_dir` variable in the scripts.
It is normally set to "/data", since {path to your data} is mounted as "/data" inside the container.
Change
```python
base_dir = "/data"
```
to
```python
base_dir = "{path to your data}"
```
replacing "{path to your data}" with the actual path.
#### Running the scripts
After creating the folders as in the "Data Structure" section, copy the outputs of the synthetic data script into their respective folders.
You should also copy the Python scripts in [RLN_code](RLN_code) to {path to your data}.
Then you can use:
```shell
python RLN_code/RLN_x_x.py
```
The following scripts are currently available:
- [RLN_single_Train.py](RLN_code/RLN_single_Train.py)
- [RLN_single_Test.py](RLN_code/RLN_single_Test.py)
- [RLN_single_Inference.py](RLN_code/RLN_single_inference.py)
They depend on [RLN_single_Model.py](RLN_code/RLN_single_Model.py) and [TF2ImageHelpers.py](RLN_code/TF2ImageHelpers.py) being present in the same directory to function.
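A typical session might then look like this (a sketch; check the exact file names in RLN_code, and see the Data Structure table for where each script reads and writes):
```shell
cd "{path to your data}"                     # directory containing train/, test/, logs/ and the copied RLN_code/
python RLN_code/RLN_single_Train.py          # training: checkpoints are written to train/model_rl
python RLN_code/RLN_single_Test.py           # validation: uses test/input and test/ground_truth
python RLN_code/RLN_single_Inference.py      # inference: results are written to test/output_rl
```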
#### Monitoring the Training Progress
Using TensorBoard, it is possible to monitor the training progress in real time from a web browser.
You need to have TensorFlow installed in order to use TensorBoard:
```shell
tensorboard --logdir {path to your data}/logs
```
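TensorBoard serves its dashboard on port 6006 by default. If training runs on a remote machine such as gmerlin, one option is to forward that port over SSH (user and host names below are placeholders) and open http://localhost:6006 in your local browser:
```shell
ssh -L 6006:localhost:6006 {user}@{remote host}   # forward the TensorBoard port to your machine
```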
# Containerized Usage
This project uses [apptainer](https://apptainer.org/docs/user/main/introduction.html) as the main container engine. Basic commands are:
```shell
apptainer build target.sif definition.def  # build definition.def into target.sif
```
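Once an image is built, a typical invocation binds your data directory to /data (the path the scripts expect) and enables GPU support; a sketch, assuming the image name from the build command above:
```shell
apptainer run --nv --bind "{path to your data}":/data target.sif   # --nv enables NVIDIA GPU access
```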