# Image analysis automation
## Automated object filtering
<img src='https://g.gravizo.com/svg?
digraph G {
shift [fontcolor=white,color=white];
label_image -> table [label=" measure_shape"];
table -> filtered_objects;
filtered_objects -> label_image [label=" remove"];
}
'/>
### Activity: Automatically remove objects from label image
- Open image: `xy_8bit_labels__four_objects.tif`
- Devise code to automatically remove objects from the label image (see the sketch below), e.g.
- Remove all cells larger than N pixels in area
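One possible Python sketch for this activity, assuming scikit-image is available; the area threshold `N` is a placeholder you would tune to your objects:
```python
from skimage.io import imread
from skimage.measure import regionprops

labels = imread("xy_8bit_labels__four_objects.tif")

N = 100  # hypothetical area threshold in pixels; tune to your data
filtered = labels.copy()
for region in regionprops(labels):
    if region.area > N:
        filtered[labels == region.label] = 0  # set large objects to background
```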
### Formative assessment
---
title: Basic image analysis workflow
layout: page
---
## Typical image analysis workflow
![image](/uploads/b4bdce17515908f40d858b35d5e9256e/image.png)
## Recap
Take a few sheets of blank (A4) paper.
Work in groups of two or three.
* Draw a typical image analysis workflow: from intensity image to object shape table.
* Write down a few (e.g., two) noteworthy facts about:
* Pixel data types
* Label images
* Intensity measurements
* Object shape measurements
* Write down answers to the questions below (some questions have multiple answers):
* How can you split touching objects?
* What can you use a distance map for?
* What can you do to segment spots in presence of uneven background signal?
* What can you do to remove small objects from a binary image?
# Distance transform
<img src='https://g.gravizo.com/svg?
digraph G {
shift [fontcolor=white,color=white];
"binary image" -> "distance transform" -> "distance map";
"distance map" -> "values are distances";
}
'/>
## Activity: Explore distance transform
- Open image: xy_8bit_binary__two_objects.tif
- Learn (see the sketch below):
- It matters what is foreground and what is background.
- The image data type limits the possible distance values.
- There is a difference between calibrated vs. pixel-based distance transforms.
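A minimal sketch of the three learning points, assuming SciPy's Euclidean distance transform; the pixel size of 0.5 units is a placeholder:
```python
from scipy.ndimage import distance_transform_edt
from skimage.io import imread

binary = imread("xy_8bit_binary__two_objects.tif") > 0

# It matters what is foreground: the transform measures, for each
# foreground pixel, the distance to the nearest background pixel.
dist = distance_transform_edt(binary)
dist_inverted = distance_transform_edt(~binary)  # very different result

# Storing the result in the 8-bit input type would clip distances above 255;
# distance_transform_edt returns float64, which avoids this.

# Calibrated vs. pixel-based: `sampling` applies the pixel size.
dist_calibrated = distance_transform_edt(binary, sampling=(0.5, 0.5))
```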
## Activity: Use distance map for automated distance measurements
- Open reference object image: xy_8bit_binary__single_object.tif
- Compute distance map
- Open label image: xy_8bit_labels__two_spots.tif
- Measure "intensity" of label image objects in distance map
- intensity is distance
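A sketch of this measurement, assuming the "intensity" is read out per labelled object with `regionprops`:
```python
from scipy.ndimage import distance_transform_edt
from skimage.io import imread
from skimage.measure import regionprops

reference = imread("xy_8bit_binary__single_object.tif") > 0
distance_map = distance_transform_edt(~reference)  # distance to the reference object

spots = imread("xy_8bit_labels__two_spots.tif")
for region in regionprops(spots, intensity_image=distance_map):
    # the measured "intensity" is the distance to the reference object
    print(region.label, region.mean_intensity)
```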
## Activity: Use distance map for automated region selection
- Open reference object image: xy_8bit_binary__single_object.tif
- Compute distance map
- Threshold the distance map to select regions (see the sketch below)
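A sketch of the region selection; the 10-pixel distance threshold is a placeholder:
```python
from scipy.ndimage import distance_transform_edt
from skimage.io import imread

reference = imread("xy_8bit_binary__single_object.tif") > 0
distance_map = distance_transform_edt(~reference)

# Select all pixels closer than 10 pixel units to the reference object:
selected_region = distance_map < 10
```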
### Formative Assessment
TODO
### Learn more
TODO
# Image feature enhancement
<img src='https://g.gravizo.com/svg?
digraph G {
shift [fontcolor=white,color=white];
image -> filter -> "enhanced image";
node [shape=box, color=grey, fontcolor=grey];
"enhanced image" -> "feature" [label=" aka", style=dashed, color=grey, fontcolor=grey, fontsize=10];
"feature enhancement" [shape=box, color=grey, fontcolor=grey, margin=0.05];
filter -> "feature enhancement" [label=" aka", style=dashed, color=grey, fontcolor=grey, fontsize=10];
}
'/>
## Examples
- Difference of Gaussian filter enhances spots
- ...
## Learn next
- filter_difference_of_gaussian.md
---
title: Image math
layout: page
---
## Convolution filters
<img src='https://g.gravizo.com/svg?
digraph G {
shift [fontcolor=white,color=white];
"intensity image" -> "convolution" -> "filtered image";
"small image" -> size;
"small image" -> "pixel values";
"kernel" -> "small image" [label=" is"];
"kernel" -> "convolution";
}
'/>
### Activity: Explore convolution filters
* Open image: xy_8bit__nuclei_noisy_different_intensity.tif
* Try out different convolution filters, e.g.
* https://en.wikipedia.org/wiki/Kernel_(image_processing)
* Mean filter
* Gaussian blur
* Edge detection
* Appreciate that the results are (slightly) wrong when computed within the 8-bit range of the input image (see the sketch below).
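One way to explore this in Python, assuming SciPy; the kernels follow the Wikipedia page linked above:
```python
import numpy as np
from scipy.ndimage import convolve
from skimage.io import imread

image = imread("xy_8bit__nuclei_noisy_different_intensity.tif")

mean_kernel = np.full((3, 3), 1 / 9)            # mean filter
edge_kernel = np.array([[-1, -1, -1],
                        [-1,  8, -1],
                        [-1, -1, -1]])          # edge detection

# Computing in float avoids the rounding and clipping
# that occur within the 8-bit range of the input:
mean_filtered = convolve(image.astype(float), mean_kernel)
edges = convolve(image.astype(float), edge_kernel)
```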
### Activity: Use mean filter to facilitate image segmentation
* Open image: xy_8bit__nuclei_noisy_different_intensity.tif
* Appreciate that you cannot readily threshold the image
* Apply a mean filter
* Threshold the filtered image (see the sketch below)
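A sketch of this workflow; the filter size and the Otsu threshold are assumptions, any sensible threshold works:
```python
from scipy.ndimage import uniform_filter
from skimage.io import imread
from skimage.filters import threshold_otsu

image = imread("xy_8bit__nuclei_noisy_different_intensity.tif").astype(float)

smoothed = uniform_filter(image, size=5)       # 5x5 mean filter reduces noise
binary = smoothed > threshold_otsu(smoothed)   # thresholding now separates nuclei
```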
### Formative assessment
* Draw the kernel of a 3x3 mean filter.
* Draw three different kernels that enhance edges.
### Learn more
* https://en.wikipedia.org/wiki/Kernel_(image_processing)
---
title: Difference of Gaussian
layout: page
---
## Difference of Gaussian (DoG)
<img src='https://g.gravizo.com/svg?
digraph G {
shift [fontcolor=white,color=white];
image -> "small blur";
image -> "large blur";
"small blur" -> "noise filtered";
"large blur" -> "local background";
"small blur" -> "small blur - large blur" -> "DoG";
"large blur" -> "small blur - large blur" -> "DoG";
"DoG" -> "Laplacian of Gaussian (LoG)" [label=" is related"];
}
'/>
### Activity: Enhance spots in noisy image with uneven background
- Open image: xy_8bit__two_spots_noisy_uneven_background.tif
- Appreciate that you cannot readily threshold the spots
- Compute the DoG (see the sketch below):
- Copy the image and blur it with a Gaussian of small sigma -> Gs
- Copy the image and blur it with a Gaussian of bigger sigma -> Gb
- For the canonical DoG: `sigma_b = sqrt(2) * sigma_s`
- Create `DoG = Gs - Gb`
- Appreciate that it is now possible to threshold the spots in the DoG image
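A sketch of the DoG computation, assuming SciPy's Gaussian filter; `sigma_s = 2` is a placeholder to tune to the spot size:
```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.io import imread

image = imread("xy_8bit__two_spots_noisy_uneven_background.tif").astype(float)

sigma_s = 2.0                    # small sigma: filters noise
sigma_b = np.sqrt(2) * sigma_s   # canonical relation for the bigger sigma

Gs = gaussian_filter(image, sigma_s)
Gb = gaussian_filter(image, sigma_b)  # approximates the local background
dog = Gs - Gb                         # threshold this image to find the spots
```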
### Learn more
- https://imagescience.org/meijering/software/featurej/
- https://en.wikipedia.org/wiki/Difference_of_Gaussians
- https://github.com/CellProfiler/CellProfiler/blob/master/cellprofiler/modules/enhanceorsuppressfeatures.py#L4
# Neighborhood filters
<img src='https://g.gravizo.com/svg?
digraph G {
shift [fontcolor=white,color=white];
"neighbourhood filter" -> "central neighbourhood pixel" [label=" replaces"];
"neighbourhood filter" -> "size" [label=" has"];
"neighbourhood filter" -> "shape" [label=" has"];
"neighbourhood filter" -> "convolution filters";
"neighbourhood filter" -> "rank filters";
}
'/>
Two 3x3 neighbourhoods: C and B are the central pixels, NC and NB their respective neighbourhood pixels.

| | | | | | | | |
|---|---|---|---|---|---|---|---|
| NC | NC | NC | | | | | |
| NC | C, NC | NC | | | | | |
| NC | NC | NC | | | | | |
| | | | | NB | NB | NB | |
| | | | | NB | B, NB| NB | |
| | | | | NB | NB | NB | |
| | | | | | | | |
# Rank filters
## Basic rank filters
<img src='https://g.gravizo.com/svg?
digraph G {
shift [fontcolor=white,color=white];
"rank filters" -> "neighbourhood filters" [label=" are"];
"rank filters" -> minimum;
minimum -> erosion [label=" aka"];
"rank filters" -> maximum;
maximum -> dilation [label=" aka"];
"rank filters" -> median;
"rank filters" -> "size" [label=" have"];
}
'/>
### Activity: Explore rank filters on binary images
- Open image: xy_8bit_binary__two_spots_different_size.tif
- Explore how structures grow and shrink, using erosion and dilation (see the sketch below)
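A sketch using scikit-image's binary morphology; the disk radius is an assumption:
```python
from skimage.io import imread
from skimage.morphology import binary_erosion, binary_dilation, disk

binary = imread("xy_8bit_binary__two_spots_different_size.tif") > 0

footprint = disk(3)  # neighbourhood shape and size
eroded = binary_erosion(binary, footprint)    # structures shrink; small ones vanish
dilated = binary_dilation(binary, footprint)  # structures grow
```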
### Activity: Explore rank filters on grayscale images
- Open image: xy_8bit__two_noisy_squares_different_size.tif
- Explore how a median filter
- removes noise
- removes small structures
- preserves edges
- Compare the median filter to a mean filter of the same radius (see the sketch below)
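A sketch of the comparison; the footprint and size values are assumptions chosen to give comparable radii:
```python
from scipy.ndimage import uniform_filter
from skimage.io import imread
from skimage.filters import median
from skimage.morphology import disk

image = imread("xy_8bit__two_noisy_squares_different_size.tif")

median_filtered = median(image, disk(3))       # removes noise, preserves edges
mean_filtered = uniform_filter(image, size=7)  # comparable radius, but blurs edges
```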
### Formative assessment
True or false? Discuss with your neighbour!
1. Median filter is just another name for mean filter.
2. Small structures can completely disappear from an image when applying a median filter.
Fill in the blanks, using these words: shrinks, increases, decreases, enlarges.
1. An erosion _____ objects in a binary image.
2. An erosion in a binary image _____ the number of foreground pixels.
3. A dilation in a grayscale image _____ the average intensity in the image.
4. A dilation _____ objects in a binary image.
## Morphological opening and closing
<img src='https://g.gravizo.com/svg?
digraph G {
shift [fontcolor=white,color=white];
"opening" -> "rank filter sequence" [label=" is"];
"closing" -> "rank filter sequence" [label=" is"];
"opening" -> "removes small structures";
"closing" -> "fills small gaps";
}
'/>
```
opening( image, r ) = dilation( erosion( image, r ), r )
```
```
closing( image, r ) = erosion( dilation( image, r ), r )
```
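The formulas above translate directly into code; a minimal sketch for binary images, using scikit-image:
```python
from skimage.morphology import binary_erosion, binary_dilation, disk

def opening(image, r):
    # dilation(erosion(image, r), r): removes structures smaller than r
    return binary_dilation(binary_erosion(image, disk(r)), disk(r))

def closing(image, r):
    # erosion(dilation(image, r), r): fills gaps smaller than r
    return binary_erosion(binary_dilation(image, disk(r)), disk(r))
```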
### Activity: Explore opening and closing on binary images
- Open image: xy_8bit_binary__for_open_and_close.tif
- Explore effects of morphological closing and opening:
- closing can fill holes
- closing can connect gaps
- opening can remove thin structures
### Formative assessment
True or false? Discuss with your neighbour!
1. Morphological openings on binary images can decrease the number of foreground pixels.
2. Morphological closings on binary images never decrease the number of foreground pixels.
3. Performing a morphological closing twice in a row does not make sense, because the second closing does not further change the image.
## Top hat filter for local background subtraction
<img src='https://g.gravizo.com/svg?
digraph G {
shift [fontcolor=white,color=white];
"tophat" -> "rank filter sequence";
"tophat" -> "local background subtraction";
}
'/>
```
tophat( image ) = image - opening( image, r ) = image - dilation( erosion( image, r), r )
```
### Activity: Explore tophat filter
- Open image: xy_8bit__spots_local_background.tif
- Use a tophat filter to remove local background
### Activity: Implement a tophat filter
- Devise code implementing a tophat filter, using minimum and maximum filters (one possible sketch below)
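One possible implementation, using the fact that grayscale erosion and dilation are minimum and maximum filters; the filter size is a placeholder that must exceed the spot diameter:
```python
from scipy.ndimage import minimum_filter, maximum_filter
from skimage.io import imread

def tophat(image, size):
    # opening = dilation(erosion(image)): here via min and max filters
    background = maximum_filter(minimum_filter(image, size), size)
    return image - background

image = imread("xy_8bit__spots_local_background.tif").astype(float)
corrected = tophat(image, size=15)  # size must exceed the spot diameter
```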
### Activity: Explore tophat filter on biological data
- Open image: xy_16bit__autophagosomes.tif
- Appreciate that you cannot readily segment the spots.
- Use a tophat filter to remove local background.
- Threshold the spots in the tophat filtered image.
### Activity: Explore tophat filter on noisy data
- Open image: xy_8bit__spots_local_background_with_noise.tif
- Use a tophat filter to remove local background
- Appreciate that noise poses a challenge to the tophat filter
## Median filter for local background subtraction
<img src='https://g.gravizo.com/svg?
digraph G {
shift [fontcolor=white,color=white];
"median" -> "local background" [label=" approximates"];
"median" -> "radius" -> "> object width";
"radius" -> "< spatial background frequency";
}
'/>
```
median_based_background_correction = image - median( image, r)
```
### Activity: Implement median-based background subtraction
- Write code to implement a median-based background subtraction (one possible sketch below)
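One possible sketch, following the formula above; the radius is a placeholder that must exceed the object width:
```python
from scipy.ndimage import median_filter
from skimage.io import imread

def median_background_subtraction(image, r):
    # the radius r must be larger than the object width (see concept map above)
    return image - median_filter(image, size=2 * r + 1)

image = imread("xy_8bit__spots_local_background_with_noise.tif").astype(float)
corrected = median_background_subtraction(image, r=10)
```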
### Activity: Explore median filter for local background subtraction
- Open images:
- xy_8bit__spots_local_background.tif
- xy_8bit__spots_local_background_with_noise.tif
- Use the median-based background subtraction to remove the local background
- Compare the result to the tophat filter, in particular on the noisy image
### Formative assessment
Answer the questions below. Discuss with your neighbour!
1. What could one do to close small gaps in a binary image?
2. What could one do to remove small objects from an image?
3. What could you use for local background subtraction in a very noisy image?
## Learn more
- https://imagej.net/MorphoLibJ#Grayscale_morphological_filters
# Image data integrity
#### Prerequisites
- A computer with an image analysis software (e.g. [Fiji](https://fiji.sc)) already installed.
- Basic knowledge of how to use the above software, e.g.
- open and save images
- change image display settings
- subtract a value from every pixel in an image
- Please download the training [material](https://git.embl.de/grp-bio-it/image-analysis-training-resources/-/archive/master/image-analysis-training-resources-master.zip)
- Please make sure you can access this [document](https://git.embl.de/grp-bio-it/image-analysis-training-resources/blob/master/workshops/image-ethics-and-data-integrity.md#image-ethics-and-data-integrity).
#### Duration
1.5 hours
#### Learn more about image data integrity
- http://www.imagedataintegrity.com/about.html
- http://jcb.rupress.org/content/166/1/11.full
- Douglas W. Cromey
- Digital Images Are Data: And Should be Treated as Such
- ...and follow-up publications...
## Image data integrity
<img src='https://g.gravizo.com/svg?
digraph G {
shift [fontcolor=white,color=white];
image_data_integrity -> image_content [label=" preserving"];
image_content -> pixel_values;
image_content -> pixel_coordinates;
pixel_coordinates -> array_indices;
pixel_coordinates -> physical_coordinates;
}
'/>
## Image data saving
<img src='https://g.gravizo.com/svg?
digraph G {
shift [fontcolor=white,color=white];
saving_images -> image_content [label=" can change"];
}
'/>
### Motivation
Sometimes it can be necessary to save your images in a different format.
It takes some training to know how to do this properly.
What could be good reasons to resave your data in a different format (multiple answers)?
1. I want to share my scientific findings on Twitter, thus I need to convert an image to a Twitter-compatible format.
2. I want to import images into PowerPoint; only some formats will work.
3. I need to save disk space, thus I need to find a format that makes the images smaller.
4. I want to use a special software that only accepts certain image data formats.
5. The journal I want to publish in only accepts certain image formats.
6. I want to have everything in Tiff format, because this is the standard.
7. My boss says that (s)he cannot open .lif (Leica) or .czi (Zeiss) images, thus I should save them in a different format.
### Activity: Save an image
- Open image: `xy_calibrated_16bit__cells_eres_noisy.tif`
- Note down the value and coordinate of the pixel at [218, 332]
- Save the image in **jpg** format
- Reopen the image
- Compare the value and coordinate of the pixel at [218, 332] to your notes, did it change?
Repeat the above workflow, but
- adjust the image display before saving
- save as **png**
- open `xy_float__nuclei_probability.tif` and save as **png**
### Formative assessment
What can I do to preserve image integrity during image saving (multiple answers)?
1. I always save in Tiff format, this is safe.
2. I always check pixel values and coordinates before and after saving.
3. I ask my colleagues in the lab and do what they recommend.
4. I keep a copy of the raw data.
## Image display adjustment
### Motivation
Images are a collection of numbers. To visualise those numbers one needs to decide how to map them onto a color and a brightness. There is no default way of doing this. Thus one has to be educated and thoughtful about this topic. In fact, it is one of the great responsibilities of a microscopist to adjust the image display settings properly.
### Image display concept map
<img src='https://g.gravizo.com/svg?
digraph G {
shift [fontcolor=white,color=white];
image_content -> numbers [label=" contains"];
numbers -> image_display [label=" lookup table (LUT)"];
}
'/>
<img src='https://g.gravizo.com/svg?
digraph G {
shift [fontcolor=white,color=white];
lookup_table_settings -> scientific_message [label=" affects"];
lookup_table_settings -> no_default;
}
'/>
### Activity: Quantitative image display
- Open image: `xy_calibrated_16bit__nuclear_protein_control.tif`
- This image shows a nuclear protein in control cells.
- Open image: `xy_calibrated_16bit__nuclear_protein_treated.tif`
- The cells in this image have been subjected to a drug.
- Inspect the images:
- Did the drug affect the amount of the nuclear protein?
- Adjust the lookup-tables (LUTs) of both images to be the same
- Add a LUT calibration to both images
### Formative Assessment
What helps to scientifically convey image intensity information (multiple answers)?
1. Adjust the LUT to the image's full bit-depth.
2. Add a LUT calibration bar.
3. Use the same LUT for images acquired with the same settings.
4. Never change the LUT of images! Always keep as in raw data.
## High dynamic range image display
<img src='https://g.gravizo.com/svg?
digraph G {
shift [fontcolor=white,color=white];
biological_images -> high_dynamic_range [label=" "];
paper_reflectance -> low_dynamic_range [label=" "];
computer_monitors -> low_dynamic_range [label=" "];
}
'/>
### Motivation
Pixel values in images of biological samples can cover large ranges.
For example, a GFP-tagged protein could occur in the same cell at different locations either 1 or 10000 times. This means that the dynamic range can be 10^4 or more. Due to limitations of image display and image perception, such large dynamic ranges are difficult to display.
### Activity: High dynamic range image display
- Open image: `xy_16bit__nuclei_high_dynamic_range.tif`
- Try to adjust the grayscale LUT such that everything can be seen...
- Try finding other LUTs that help showing all data
- Add LUT calibration to image
### Formative Assessment
What can you do to show images with a high dynamic range (multiple answers)?
1. Adjust the LUT such that only the scientifically relevant information can be seen.
2. Adjust the LUT such that only the scientifically relevant information can be seen
* and state that the LUT has been adjusted in the figure legend
* and show the same image with other LUT settings in the supplemental material.
3. Try to find a LUT that shows all data.
4. Never use multi color LUTs, they are confusing.
5. Already on the microscope change the settings such that only relevant structures are visible, e.g. lower the gain such that dark irrelevant objects have zero pixel values.
6. Adjust LUT settings such that background noise is not visible, because this is distracting.
7. Add a LUT calibration to the image, such that readers can see that not all information might be visible.
## Image math
<img src='https://g.gravizo.com/svg?
digraph G {
shift [fontcolor=white,color=white];
image_math -> pixel_values [label=" changes"];
image_math -> pixel_data_type [label=" does not change"];
image_math -> wrong_pixel_values [label = " can yield"]
}
'/>
### Motivation
It is sometimes necessary to change the numeric content of images. It is important to understand how to do this properly in order to avoid uncontrolled artifacts.
What are good reasons to change the pixel values in an image?
1. For intensity measurements, the image background (e.g. camera based offset) should be subtracted from all pixels.
2. For threshold based image segmentation (object detection), it helps to first filter noise in the image.
3. For intensity measurements, it helps to filter noise in the image.
4. The image appears too dark; multiplying all pixels by a constant number is a means to make it brighter.
5. For uneven illumination (e.g. occurring in wide-field microscopy with large camera chips), one should do a flat-field correction, which makes the intensity values even across the image.
6. Our microscope was broken. We took images on a replacement microscope. The pixel values were consistently higher than on our usual microscope. We multiplied the pixels on all images from the replacement microscope by a constant factor to make them comparable to our usual data.
### Activity: Perform pixel based background subtraction
- Open image: `xy_8bit__nuclei_noisy_different_intensity.tif`
- Appreciate the significant background intensity
- Measure pixel value at `[ 28, 35 ]` and `[ 28, 39 ]`
- Measure background intensity in below region:
- upper left corner at `[ 20, 35 ]`
- width = 10
- height = 10
- Subtract the measured background intensity from each pixel
- Measure pixel values again at above coordinates ( `[ 28, 35 ]` and `[ 28, 39 ]` )
- Discuss how the pixel values changed during background subtraction
Repeat the above activity (see the sketch below), but:
- After opening the image, convert its pixel data type to floating point
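A sketch of the pitfall this activity demonstrates, assuming numpy's `[row, column]` indexing (swap the coordinates if your tool indexes `[x, y]`):
```python
import numpy as np
from skimage.io import imread

image = imread("xy_8bit__nuclei_noisy_different_intensity.tif")

# Mean background in the 10x10 region with upper left corner [20, 35]:
background = image[35:45, 20:30].mean()

# Subtracting within the 8-bit type wraps around where pixel < background:
wrong = image - np.uint8(round(background))

# Converting to float first preserves negative and fractional values:
correct = image.astype(float) - background
```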
### Formative Assessment
Considering image math operations, which of the statements below are correct
(multiple answers)?
1. Never change the pixel data type, because it violates image integrity.
2. Changing the pixel data type does not change pixel values.
3. It is scientifically unethical to perform mathematical operations on images, because it changes the pixel values.
4. When performing mathematical operations on images, it should be documented (e.g. by a script or code).
## Display of 3D images
Biological images are often 3D. However, paper and monitors can only show 2D images. It is thus important to understand how to show 3D images in 2D without compromising the scientific message.
<img src='https://g.gravizo.com/svg?
digraph G {
shift [fontcolor=white,color=white];
_3D_data -> visualisation [label=" multiple options"];
visualisation -> scientific_message [label=" affects"];
}
'/>
<img src='https://g.gravizo.com/svg?
digraph G {
shift [fontcolor=white,color=white];
_3D_visualisation -> sum_projection;
_3D_visualisation -> max_projection;
_3D_visualisation -> slice_animation;
_3D_visualisation -> slice_gallery;
_3D_visualisation -> ...;
}
'/>
### Activity: Explore 3D visualisations
- Open image: `xyzt_calibrated_16bit__golgi_bfa.zip`
- Explore and discuss different options for presenting this data (projections are sketched below):
- slice gallery
- sum projection
- max projection
- slice animation
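A sketch of two of the projections, assuming the stack has been unpacked from the `.zip` to a TIFF and has axis order `(t, z, y, x)`:
```python
import numpy as np
from skimage.io import imread

stack = imread("xyzt_calibrated_16bit__golgi_bfa.tif")  # hypothetical unpacked file

max_projection = stack.max(axis=1)                    # brightest pixel along z
sum_projection = stack.sum(axis=1, dtype=np.float64)  # preserves total intensity
```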
### Formative Assessment
Which statements about visualisation and quantification of 3D images are correct (multiple answers)?
1. Always use maximum intensity projection, it is by far the most commonly used.
2. Any visualisation can make sense; you just have to justify it scientifically.
3. Intensity quantifications should ideally be done in 3D, not in projections.
4. It is impossible to quantify intensities in projections.
# Semantic image segmentation using machine learning
<img src='https://g.gravizo.com/svg?
digraph G {
shift [fontcolor=white,color=white];
"intensity image" -> threshold;
threshold -> "binary image";
"binary image" -> "background value";
"binary image" -> "foreground value";
"intensity image" -> "machine learning";
"annotations" -> "machine learning";
"machine learning" -> "pixel class image";
"pixel class image" -> "class00 value";
"pixel class image" -> "class01 value";
"pixel class image" -> "class.. value";
"pixel class image" -> "class C value";
}
'/>
&nbsp;
&nbsp;
&nbsp;
## Decision tree based image segmentation
<img src='https://g.gravizo.com/svg?
digraph G {
shift [fontcolor=white,color=white];
"Intensity image" -> "filter00 image" -> "Decision tree(s)";
"Intensity image" -> "filter01 image" -> "Decision tree(s)";
"Intensity image" -> "filter02 image" -> "Decision tree(s)";
"Intensity image" -> "filter.. image" -> "Decision tree(s)";
"Intensity image" -> "filter F image" -> "Decision tree(s)";
"Annotations" -> "Decision trees(s)"
"Decision tree(s)" -> "class00 (probability) image";
"Decision tree(s)" -> "class01 (probability) image";
"Decision tree(s)" -> "class.. (probability) image";
"Decision tree(s)" -> "class C (probability) image";
}
'/>
## Activity: Semantic image segmentation
- Open image: xy_8bit__em_fly_eye.tif
- Segment three classes: background, eye, other
- Choose image filters
- Draw a few labels in the blurry image background => class00
- Draw a few labels on the eye => class01
- Draw a few labels on other parts of the animal => class02
- While( not happy):
- Train the classifier
- Inspect the predictions
- Add more labels where the predictions are wrong
TODO: use multiple files to demo that a classifier can be applied on other images.
## Formative assessment
True or false? Discuss with your neighbour!
- When using machine learning for pixel classification, in contrast to simple thresholding, one always has more than 2 classes.
- If one wants to learn 4 different classes, one has to add at least 4 annotations to the training image.
- One cannot classify an image on which one did not put any training annotations.
---
title: Intensity measurements
layout: page
---
## Intensity measurements
### Activity: Measure intensities in image regions
* Open image: xy_float__h2b_bg_corr.tif
* Measure for both nuclei:
* Maximum intensity
* Average intensity
* Median intensity
* Sum intensity
* Discuss the interpretation!
* Discuss where to measure! (one possible measurement sketch follows below)
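A sketch of the measurements, assuming a label image of the two nuclei is available; here it is crudely derived by thresholding the background-corrected image at zero, which is only a placeholder for a proper segmentation:
```python
import numpy as np
from skimage.io import imread
from skimage.measure import label, regionprops

image = imread("xy_float__h2b_bg_corr.tif")
labels = label(image > 0)  # placeholder segmentation of the two nuclei

for region in regionprops(labels, intensity_image=image):
    values = image[labels == region.label]
    print(region.label,
          values.max(),       # maximum intensity
          values.mean(),      # average intensity
          np.median(values),  # median intensity
          values.sum())       # sum intensity
```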
### Activity: Intensity measurements without pixel based background correction
#### Motivation
There are several good reasons not to subtract the background from each pixel in an image:
* It is a bit tricky to do it right, because one has to convert to float to accommodate floating point and negative values.
* If one has really big image data (TB) one would need (at least) another TB storage for the background corrected version of the image.