## Requirements
- gmerlin access or a local NVIDIA GPU with CUDA & drivers installed.
- apptainer (see [Containerized Usage](#containerized-usage)) **or** a local TensorFlow installation (see [Local Usage](#local-usage))
## Usage Overview
To use the RLN on your data, you need to train and adapt the model to the noise characteristics of your camera or microscope. This can be done in the following steps (a command-line sketch follows the list):
1. Determine the PSF of the microscope or camera.
2. Use the Python script in [Phantom_generate](Phantom_generate) to generate phantoms blurred with the PSF of your microscope.
3. Train the network using the synthetic data generated by the Python script.
4. Use the trained network on real-world images.
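
A minimal sketch of this workflow on the command line, assuming the suggested folder structure and a placeholder run name `my_run` (check each script for its actual arguments):

```shell
python Phantom_generate/synthetic_data.py                  # steps 1-2: phantoms blurred with your PSF
python RLN_code/RLN_single_Train.py                        # step 3: train on the synthetic data
python RLN_code/RLN_single_Inference.py --run_name my_run  # step 4: deblur real images
```
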
# Local Usage
To use the scripts with a local installation of Python, you need to install TensorFlow 2; follow this [guide](https://www.tensorflow.org/install/pip) for more details.
You may want to install TF2 in a virtual environment.
## Usage
### Synthetic Data
There is a Python [script](Phantom_generate/synthetic_data.py) to generate synthetic training data and blur it using the PSF of the camera; see the script itself for more details. There are also helper scripts available to determine the PSF of the camera from images of pinhole light sources. See [this README](<helper scripts/PSF/VC MIPI/README.md>) for more details.
### Training
#### Data Structure
Before training, we need to create our folder structure. This can be in the /data directory on merlin, or any local directory if you are using a local machine:
```
{path to your data}
├── logs
├── test
│ ├── ground_truth
│ ├── input
│ └── output_rl
└── train
├── ground_truth
├── input
├── model_rl
└── output_rl
```
| Directory | Usage |
|--------------------|:------------------------------------------------------------------------------------------------|
| **logs** | Used to save tf.summary objects. Can be used with tensorboard. |
| **test** | Validation & Inference |
| test/ground_truth | Used in Validation. Should contain ground truths corresponding to inputs. |
| test/input | Used in Validation & Inference. Should contain input files. |
| test/output_rl | Output location for Inference & Validation. |
| **train** | Training |
| train/ground_truth | Ground Truths for Training |
| train/input | Blurred Inputs for Training |
| train/model_rl | Output for training checkpoints. Also used in Inference & Validation to load the latest weights. |
| train/output_rl | Output for generated images during training. |
The ground truth and blurred versions of the images should have the same names but be in different folders.
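
To create this structure in one step (a sketch, assuming a bash-compatible shell; replace `{path to your data}` with the actual path first):

```shell
mkdir -p "{path to your data}"/logs \
         "{path to your data}"/test/{ground_truth,input,output_rl} \
         "{path to your data}"/train/{ground_truth,input,model_rl,output_rl}
```
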
You may need to change the base_dir variable in the scripts.
This is normally set to "/data", since we mount {path to your data} as "/data" in the container.
Change
```python
base_dir = "/data"
```
to
```python
base_dir = "{path to your data}"
```
replacing "{path to your data}" with the actual path.
#### Running the scripts
After creating the folders as in Section "Data Structure", you should copy the outputs of the synthetic data script to their respective folders.
You should also copy the python scripts in [RLN_code](RLN_code) to {path to your data}.
Then you can use:
```shell
python RLN_code/RLN_x_x.py
```
The scripts:
- [RLN_single_Train.py](RLN_code/RLN_single_Train.py)
- [RLN_single_Test.py](RLN_code/RLN_single_Test.py)
- [RLN_single_Inference.py](RLN_code/RLN_single_inference.py)
are currently available. They depend on [RLN_single_Model.py](RLN_code/RLN_single_Model.py) and [TF2ImageHelpers.py](RLN_code/TF2ImageHelpers.py) being available in the same directory to function.
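
The inference script supports two modes (see its argument parser in the source below): a `--run_name` that resolves all paths against the suggested data structure, or fully explicit directories. A sketch of the explicit mode, with `my_run` as a placeholder:

```shell
python RLN_code/RLN_single_Inference.py \
    --input_dir "{path to your data}/test/input/" \
    --checkpoint_dir "{path to your data}/train/model_rl/my_run/" \
    --output_dir "{path to your data}/test/output_rl/my_run/"
```
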
#### Monitoring the Training Progress
Using TensorBoard, it is possible to monitor the training progress in real time from a web browser.
You need to have TensorFlow installed in order to use TensorBoard:
```shell
tensorboard --logdir {path to your data}/logs
```
# Containerized Usage
This project uses [apptainer](https://apptainer.org/docs/user/main/introduction.html) as the main container engine. Basic commands are:
```shell
apptainer build target.sif definition.def # build definition.def into target.sif
```
......
import os
import time
import numpy as np
import tensorflow as tf
import TF2ImageHelpers as images
from RLN_single_Model import RLN_model


def get_model(path):
    """
    Initialize an RLN model and restore the latest weights from the checkpoint directory.
    :param path: Checkpoint directory.
    :return: instance of the model, step number from training.
    """
    print("Loading model...")
    model = RLN_model(name="test")
    i = tf.Variable(0, trainable=False, dtype=tf.int64)
    ckpt = tf.train.Checkpoint(step=i, model=model)
    manager = tf.train.CheckpointManager(ckpt, path, max_to_keep=20)
    ckpt.restore(manager.latest_checkpoint)
    print("restore successful")
    return model, i


def inference(model, input):
    """
    Inference using the RLN model. Normalizes the input to the statistics that the model expects
    and runs the model on it.
    :param model: Should be an instance of RLN model.
    :param input: should be a 3-dimensional (height, width, channel) or
        4-dimensional (batch, height, width, channel) TensorFlow tensor.
    :return: 4-dimensional (batch, height, width, channel) tensor containing the predictions.
    """
    rank = len(input.shape)
    if rank < 3 or rank > 4:
        raise Exception("Input image shape wrong. Shape: {}".format(input.shape))
    if rank == 3:
        input = tf.expand_dims(input, axis=0)
    # Normalize each sample to zero mean and unit variance, run the model,
    # then map the prediction back to the original intensity range.
    std = tf.math.reduce_std(input, axis=(1, 2, 3), keepdims=True)
    mean = tf.math.reduce_mean(input, axis=(1, 2, 3), keepdims=True)
    input = (input - mean) / std
    x_i = model(input, training=False)
    output = x_i[0] * std + mean
    return output


if __name__ == "__main__":
    import logging
    import argparse

    logging.basicConfig(filemode="a", encoding='utf-8', level=logging.DEBUG)

    # Defaults; every location can be overridden on the command line.
    base_dir = "/data"
    input_dir = ""
    run_name = ""
    train_model_path = ""
    test_output = ""

    arg_parser = argparse.ArgumentParser(prog="RLN Inference",
                                         description="Used to run Inference on input pictures. Should support .bmp, "
                                                     ".png, .npz file formats.")
    arg_parser.add_argument("--base_dir", default=base_dir,
                            help="Base Directory for input using the suggested file structure.")
    arg_parser.add_argument("--run_name", default=None, help="Run name as used in the default file structure.")
    arg_parser.add_argument("--checkpoint_dir", default=None, help="Override checkpoint directory.")
    arg_parser.add_argument("--input_dir", default=None, help="Override input directory.")
    arg_parser.add_argument("--output_dir", default=None, help="Override output directory.")
    args = arg_parser.parse_args()
    base_dir = args.base_dir

    # Exactly one mode must be used: --run_name, or all three directory overrides.
    if args.run_name is None and (args.input_dir is None or args.output_dir is None or args.checkpoint_dir is None):
        raise Exception("You must specify either --run_name, or --input_dir, --output_dir, and --checkpoint_dir.")
    if args.run_name is not None and (args.input_dir is not None or args.output_dir is not None
                                      or args.checkpoint_dir is not None):
        raise Exception("You must specify either --run_name, or --input_dir, --output_dir, and --checkpoint_dir.")

    if args.run_name is not None:
        # Derive all locations from the suggested folder structure.
        run_name = "/{}/".format(args.run_name)
        input_dir = base_dir + "/test/input/"
        train_model_path = base_dir + '/train/model_rl' + run_name
        test_output = base_dir + '/test/output_rl' + run_name
    else:
        # Use the explicit directory overrides.
        input_dir = args.input_dir
        test_output = args.output_dir
        train_model_path = args.checkpoint_dir

    print(f"Input: {input_dir} \n Output: {test_output} \n Checkpoint: {train_model_path}")
    if not os.path.exists(train_model_path) or not os.path.exists(
            test_output) or not os.path.exists(input_dir):
        raise Exception("missing locations")

    # Input files are expected to be named <number>.<extension>.
    a = os.listdir(input_dir)
    a = sorted(a, key=lambda x: int(x.split(".")[0]))
    print("Input images: {}".format(a))

    # Restore the model once before the loop; `i` continues from the restored training step.
    model, i = get_model(train_model_path)

    for path_i in a:
        logging.debug(path_i)
        x = images.read_file(input_dir + path_i)
        x = tf.expand_dims(x, 0)
        st = time.time()
        x_i = inference(model, x)
        et = time.time()
        dt = np.array(et - st)
        np.savez_compressed(test_output + path_i + ".npz", x=x, f_x=x_i, dt=dt)
        et2 = time.time()
        i.assign_add(1)
        logging.info("Run: {} calc time: {} save time: {} memory: {}".format(i.numpy(), dt, et2 - et,
                                                                             tf.config.experimental.get_memory_info(
                                                                                 'GPU:0')))