# magic_tools
Functions and classes to support MAGIC (**M**achine-learning **A**ssisted **G**enomic and **I**maging **C**onvergence)
Main features:
- view images, segmentations and labels
- tools for semantic and instance annotation
- training and testing of XGBoost machine-learning models
- image processing server for automated microscopy
## Installation
Clone the repository
`git clone https://git.embl.de/cosenza/magic_tools/`
Create a virtual environment (e.g. via conda or venv); magic_tools was tested with Python 3.11.5 on Ubuntu 22.04.4 LTS.
Activate the environment and install the requirements with pip
`python3 -m pip install -r /path/to/repository/magic_tools/requirements.txt`
Then install magic_tools
`pip install /path/to/repository/magic_tools/`
If you want to use magic_tools in Jupyter notebooks, do not forget to install the kernelspec for the environment
`python3 -m ipykernel install --user --name strandtools_env`
Also, to run the image processing server you need to install uvicorn:
`sudo apt install uvicorn`
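As a quick sanity check (assuming the environment created above is active), the package should import without errors:
```python
# Sanity check: this should run cleanly once magic_tools and its
# requirements are installed in the active environment.
import magic_tools
print("magic_tools imported successfully")
```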
## Getting started
To get familiar with magic_tools, you can work through a series of Jupyter notebook tutorials covering the main features of the package.
Tutorials can be found in the **notebooks** folder:
- [**semantic_segmentation_example.ipynb**](https://git.embl.de/cosenza/magic_tools/-/blob/main/notebooks/semantic_segmentation_example.ipynb)
This tutorial shows how to extract pixel-level features and create a pixel classifier for semantic segmentation.
- [**object_classification_example.ipynb**](https://git.embl.de/cosenza/magic_tools/-/blob/main/notebooks/object_classification_example.ipynb)
This tutorial shows how to extract object-level features and create a classifier for object classification (a generic sketch of this workflow is shown after this list).
- [**inference_server_example.ipynb**](https://git.embl.de/cosenza/magic_tools/-/blob/main/notebooks/inference_server_example.ipynb)
This tutorial shows how to interact with the image processing server, sending images and receiving inferences.
- [**annotation_tools.ipynb**](https://git.embl.de/cosenza/magic_tools/-/blob/main/notebooks/annotation_tools.ipynb)
This tutorial teaches you how to use the tools for pixel- and object-level annotation.
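As a rough illustration of the kind of workflow the object classification tutorial covers, here is a minimal sketch using generic libraries (scikit-image and xgboost) rather than the magic_tools API; the label image, features and class labels below are placeholders.
```python
# A generic object-classification sketch with scikit-image + XGBoost.
# NOT the magic_tools API: see object_classification_example.ipynb for that.
import numpy as np
from skimage import measure
from xgboost import XGBClassifier

# Toy label image with two objects (in practice this comes from segmentation).
label_image = np.zeros((64, 64), dtype=int)
label_image[5:20, 5:20] = 1        # object 1
label_image[30:60, 30:45] = 2      # object 2

# Object-level feature table: one row per labelled object.
props = measure.regionprops(label_image)
X = np.array([[p.area, p.eccentricity, p.perimeter] for p in props])

# Placeholder class labels (real ones come from the annotation tools).
y = np.array([0, 1])

# Train and apply an XGBoost classifier on the object features.
clf = XGBClassifier(n_estimators=50, max_depth=3)
clf.fit(X, y)
print(clf.predict(X))
```
In the actual package, feature extraction and model handling are wrapped by magic_tools; this sketch only conveys the overall shape of the task.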
The **scripts** folder contains some useful tools:
- [**set_server_configuration.py**](https://git.embl.de/cosenza/magic_tools/-/blob/main/scripts/set_server_configuration.py)
This script allows you to quickly set a configuration for the image processing server:
`python set_server_configuration.py -c /path/to/config/file -s server_url -p port`
- [**test_server_configurations.py**](https://git.embl.de/cosenza/magic_tools/-/blob/main/scripts/test_server_configurations.py)
This script tests the available configurations to make sure they work as expected:
`python test_server_configurations.py -c /path/to/config/folder -s server_url -p port -i /path/to/test/image`
- [**collect_micronuclei_features.py**](https://git.embl.de/cosenza/magic_tools/-/blob/main/scripts/collect_micronuclei_features.py)
This script runs batch image processing through the image processing server to generate label images and feature tables of micronuclei objects. It is useful when generating a new training set.
## Launching the server
The server can be launched with uvicorn.
First open a terminal and navigate to the repository folder
`cd /path/to/repository/magic_tools/`
Then start the server
`uvicorn magic_tools.server.main:app`
You can also define the host and port, for example if you run the server on a cluster node and want to expose it to the intranet:
`uvicorn magic_tools.server.main:app --host 0.0.0.0 --port 10123`
You will see a confirmation that the server is running at the specified URL:
```
INFO: Started server process [163403]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:10123 (Press CTRL+C to quit)
```
To learn how to interact with the server and how to configure it for an image analysis task, check out the notebook tutorial **inference_server_example.ipynb** in the notebooks folder.
The server also has a dashboard, where you can monitor statistics about incoming jobs, execution time and prediction outputs.
It can be accessed via the **dashboard** endpoint: http://0.0.0.0:10123/dashboard
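For a quick programmatic check that a running server is reachable, you can request the dashboard endpoint from Python; this is a minimal sketch using the requests library, with the host and port assumed to match the launch example above.
```python
# Minimal reachability check against a running magic_tools server.
# Adjust the host and port to match how you launched uvicorn.
import requests

response = requests.get("http://localhost:10123/dashboard", timeout=5)
print(response.status_code)  # 200 indicates the dashboard is being served
```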
## Contributors
- Marco Raffaele Cosenza (maintainer and developer)
## Contact
Please contact Marco Raffaele Cosenza (marco.cosenza(at)embl.de) if you have any questions.