
Commit a3a1b850 authored by florez_j

Update readme.md and set_up_env.sh

# DIMA: Data Integration and Metadata Annotation
## Description
**DIMA** (Data Integration and Metadata Annotation) is a Python package developed to support the findable, accessible, interoperable, and reusable (FAIR) data transformation of multi-instrument data at the **Laboratory of Atmospheric Chemistry** as part of the project **IVDAV**: *Instant and Versatile Data Visualization During the Current Dark Period of the Life Cycle of FAIR Research*, funded by the [ETH-Domain ORD Program Measure 1](https://ethrat.ch/en/measure-1-calls-for-field-specific-actions/).
The **FAIR** data transformation involves cycles of data harmonization and metadata review. DIMA facilitates these processes by enabling the integration and annotation of multi-instrument data in HDF5 format. This data may originate from diverse experimental campaigns, including **beamtimes**, **kinetic flowtube studies**, **smog chamber experiments**, and **field campaigns**.
## Key features
DIMA provides reusable operations for data integration, manipulation, and extraction using HDF5 files. These serve as the foundation for the following higher-level operations:
1. **Data integration pipeline**: Searches for, retrieves, and integrates multi-instrument data sources in HDF5 format using a human-readable campaign descriptor YAML file that points to the data sources on a network drive.
2. **Metadata revision pipeline**: Enables updates, deletions, and additions of metadata in an HDF5 file. It operates on the target HDF5 file and a YAML file specifying the required changes. A suitable YAML specification can be generated by serializing the current metadata of the target HDF5 file. This supports alignment with conventions and the development of campaign-centric vocabularies.
3. **Visualization pipeline**: Generates a treemap visualization of an HDF5 file, highlighting its structure and key metadata elements.
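The metadata revision pipeline described above amounts to applying a change specification (additions, updates, deletions) to a serialized metadata record. The following is a minimal stdlib-only sketch of that idea; the function name, field names, and change-spec layout are illustrative, not DIMA's actual API:

```python
# Illustrative sketch of the metadata revision step: apply a change
# specification (add / update / delete) to a metadata dictionary.
# Names and structure are invented for this example, not DIMA's API.

def apply_metadata_changes(metadata, changes):
    """Return a new metadata dict with the requested revisions applied."""
    revised = dict(metadata)                     # leave the original intact
    revised.update(changes.get("add", {}))       # new attributes
    revised.update(changes.get("update", {}))    # corrected values
    for key in changes.get("delete", []):        # obsolete attributes
        revised.pop(key, None)
    return revised

current = {"campaign_name": "smog_chamber_2023", "actris_level": 0}
spec = {
    "add": {"institution": "PSI"},
    "update": {"actris_level": 1},
    "delete": ["campaign_name"],
}
revised = apply_metadata_changes(current, spec)
print(revised)
```

In DIMA, the change specification would come from an edited YAML serialization of the HDF5 file's current metadata rather than an in-memory dict.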
Navigate to your `GitLab` folder, clone the repository, and navigate to the `dima` directory.
Open **Git Bash** terminal.
**Option 1**: Install a suitable conda environment `multiphase_chemistry_env` inside the repository `dima` as follows:
```bash
cd path/to/GitLab/dima
```

Open **Anaconda Prompt** or a terminal with access to conda.

```bash
conda env create --file environment.yml
```
<details>
<summary> <h3> Working with Jupyter Notebooks </h3> </summary>
We now make the previously installed Python environment `multiphase_chemistry_env` selectable as a kernel in Jupyter's interface.
1. Open an Anaconda Prompt, check if the environment exists, and activate it:
```
conda env list
conda activate multiphase_chemistry_env
```
2. Register the environment in Jupyter:
```
python -m ipykernel install --user --name multiphase_chemistry_env --display-name "Python (multiphase_chemistry_env)"
```
3. Start a Jupyter Notebook by running the command:
```
jupyter notebook
```
and select the `multiphase_chemistry_env` environment from the kernel options.
</details>
## Repository Structure and Software Architecture
**Directories**
<img src="docs/software_arquitecture_diagram.svg" alt="Software architecture diagram">
</p>
## File standardization module (`instruments/`)
### Extend DIMA’s file reading capabilities for new instruments
We now explain how to extend DIMA's file-reading capabilities by adding support for a new instrument. The process involves adding instrument-specific files and registering the new instrument's file reader.
1. Create Instrument Files
You need to add two files for the new instrument:
- A **YAML file** that contains the instrument-specific description terms.
- **Location**: `instruments/dictionaries/`
- A **Python file** that reads the instrument's data files (e.g., JSON files).
- **Location**: `instruments/readers/`
**Example:**
- **YAML file**: `ACSM_TOFWARE_flags.yaml`
- **Python file**: `flag_reader.py` (reads `flag.json` files from the new instrument).
2. Register the New Instrument Reader
To enable DIMA to recognize the new instrument's file reader, update the **filereader registry**:
1. Open the file: `instruments/readers/filereader_registry.py`.
2. Add an entry to register the new instrument's reader.
**Example:**
```python
# Import the new reader
from instruments.readers.flag_reader import read_jsonflag_as_dict
# Register the new instrument in the registry
file_extensions.append('.json')
file_readers.update({'ACSM_TOFWARE_flags_json' : lambda x: read_jsonflag_as_dict(x)})
```
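The registry above maps a file extension to a reader function so DIMA can dispatch on file type. A stdlib-only sketch of how such dispatch works (the reader here is a stand-in for the real one in `instruments/readers/flag_reader.py`, and `select_file_reader` is a hypothetical helper, not DIMA's actual function):

```python
# Illustrative sketch of extension-based reader dispatch, mirroring the
# registry pattern shown above. Function names are stand-ins.
import os

def read_jsonflag_as_dict(path):
    # Stand-in for the reader defined in instruments/readers/flag_reader.py
    return {"source": path, "kind": "flags"}

# Registry: file extension -> reader function
file_readers = {".json": read_jsonflag_as_dict}

def select_file_reader(path):
    """Pick a registered reader based on the file extension."""
    ext = os.path.splitext(path)[1]
    if ext not in file_readers:
        raise ValueError(f"No registered reader for '{ext}' files")
    return file_readers[ext]

reader = select_file_reader("ACSM_TOFWARE_flags.json")
result = reader("ACSM_TOFWARE_flags.json")
print(result["kind"])
```

Registering a new instrument then reduces to adding one entry to the registry, which is why no other code needs refactoring.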
***
### Working with Visual Studio Code (VS Code) on the `multiphase_chemistry_env`
1. Open the project in VS Code, click on the Python interpreter in the status bar and choose the `multiphase_chemistry_env` environment.
## How-to tutorials
<details>
<summary> <h3> Data integration workflow </h3> </summary>
</details>

<details>
<summary> <h3> Metadata review workflow </h3> </summary>

- review through branches
- updating files with metadata in Openbis

</details>
## Metadata
| **Attribute** | **CF Equivalent** | **Definition** |
|---------------|-------------------|----------------|
| campaign_name | - | Denotes a range of possible campaigns, including laboratory and field experiments, beamtime, smog chamber studies, etc., related to atmospheric chemistry research. |
| actris_level | - | Indicates the processing level of the data within the ACTRIS (Aerosol, Clouds and Trace Gases Research Infrastructure) framework. |
| dataset_startdate | - | Denotes the start datetime of the dataset collection. |
| dataset_enddate | - | Denotes the end datetime of the dataset collection. |
| processing_file | - | Denotes the name of the file used to process an initial version (e.g., original version) of the dataset into a processed dataset. |
| processing_date | - | The date when the data processing was completed. |
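As a concrete illustration, the attributes in the table above can be collected into a single metadata record before being written as root-level attributes of the campaign HDF5 file. All values below are invented examples, and the completeness check is a sketch rather than DIMA's validation logic:

```python
# Example metadata record built from the attributes in the table above.
# Values are invented; in DIMA these would end up as HDF5 root attributes.
from datetime import datetime

dataset_metadata = {
    "campaign_name": "kinetic_flowtube_2024",   # hypothetical campaign
    "actris_level": 1,
    "dataset_startdate": datetime(2024, 3, 1).isoformat(),
    "dataset_enddate": datetime(2024, 3, 15).isoformat(),
    "processing_file": "process_flowtube_data.py",
    "processing_date": datetime(2024, 4, 2).isoformat(),
}

# Simple completeness check before writing the file
required = {"dataset_startdate", "dataset_enddate", "processing_file"}
missing = required - dataset_metadata.keys()
assert not missing, f"Missing required attributes: {missing}"
```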
## Adaptability to Experimental Campaign Needs
The `instruments/` module is designed to be highly adaptable, accommodating new instrument types or file reading capabilities with minimal code refactoring. The module is complemented by instrument-specific dictionaries of terms in YAML format, which facilitate automated annotation of observed variables. For example, a term dictionary entry looks like:

```yaml
relative_humidity:
  definition: 'Relative humidity represents the amount of water vapor present in the air relative to the maximum amount of water vapor the air can hold at a given temperature.'
```
**`set_up_env.sh`**

```bash
#!/bin/bash
# Define the name of the environment
ENV_NAME="multiphase_chemistry_env"

# Check if mamba is available and use it instead of conda for faster installation
if command -v mamba &> /dev/null; then
    # ...
```
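The `mamba` check above is truncated in this view. A hedged sketch of how such a dispatch typically continues — this is not the actual remainder of `set_up_env.sh`:

```bash
#!/bin/bash
# Sketch of a typical mamba/conda dispatch; NOT the actual remainder
# of set_up_env.sh, which is truncated above.
ENV_NAME="multiphase_chemistry_env"

if command -v mamba &> /dev/null; then
    CONDA_CMD="mamba"   # mamba resolves dependencies faster
else
    CONDA_CMD="conda"
fi

echo "Would create environment '$ENV_NAME' using $CONDA_CMD"
# "$CONDA_CMD" env create --file environment.yml
```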