# Image analysis automation
## Automated object filtering
<img src='https://g.gravizo.com/svg?
digraph G {
shift [fontcolor=white,color=white];
label_image -> table [label=" measure_shape"];
table -> filtered_objects;
filtered_objects -> label_image [label=" remove"];
}
'/>
### Activity: Automatically remove objects from label image
- Open image: `xy_8bit_labels__four_objects.tif`
- Devise code to automatically remove objects from the label image (see the sketch below), e.g.
- Remove all objects larger than N pixels in area
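One possible solution, sketched in Python (assuming `numpy` and `scikit-image`; the area threshold `N` is an arbitrary example value):
```
import numpy as np
from skimage.io import imread
from skimage.measure import regionprops

N = 100  # example area threshold in pixels

label_image = imread('xy_8bit_labels__four_objects.tif')

# collect the labels of all objects larger than N pixels
too_large = [region.label for region in regionprops(label_image)
             if region.area > N]

# set those objects to 0 (background), keeping all others
filtered = label_image.copy()
filtered[np.isin(filtered, too_large)] = 0
```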
### Formative assessment
---
title: Basic image analysis workflow
layout: page
---
## Typical image analysis workflow
![image](/uploads/b4bdce17515908f40d858b35d5e9256e/image.png)
## Recap
Take a few sheets of blank (A4) paper.
Work in groups of two or three.
* Draw a typical image analysis workflow: From intensity image to objects shape table.
* Write down a few (e.g., two) noteworthy facts about:
* Pixel data types
* Label images
* Intensity measurements
* Object shape measurements
* Write down answers to the questions below (there can be multiple answers for some questions):
* How can you split touching objects?
* What can you use a distance map for?
* What can you do to segment spots in presence of uneven background signal?
* What can you do to remove small objects from a binary image?
# Distance transform
<img src='https://g.gravizo.com/svg?
digraph G {
shift [fontcolor=white,color=white];
"binary image" -> "distance transform" -> "distance map";
"distance map" -> "values are distances";
}
'/>
## Activity: Explore distance transform
- Open image: xy_8bit_binary__two_objects.tif
- Learn:
- It matters what is foreground and what is background.
- The image data type limits the possible distance values.
- There is a difference between calibrated vs. pixel-based distance transforms.
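A minimal sketch in Python (assuming `scipy`; note that `distance_transform_edt` computes distances for the non-zero, i.e. foreground, pixels, and that the `sampling` argument switches from pixel-based to calibrated distances; the pixel size is an example value):
```
import numpy as np
from scipy import ndimage as ndi
from skimage.io import imread

binary = imread('xy_8bit_binary__two_objects.tif') > 0

# distance of each foreground pixel to the nearest background pixel
dist_pixels = ndi.distance_transform_edt(binary)

# invert the image to measure distances from the background to the objects
dist_background = ndi.distance_transform_edt(~binary)

# calibrated distances, e.g. assuming a pixel size of 0.5 x 0.5 micrometer
dist_calibrated = ndi.distance_transform_edt(binary, sampling=(0.5, 0.5))
```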
## Activity: Use distance map for automated distance measurements
- Open reference object image: xy_8bit_binary__single_object.tif
- Compute distance map
- Open label image: xy_8bit_labels__two_spots.tif
- Measure "intensity" of label image objects in distance map
- intensity is distance
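A sketch of this measurement in Python (assuming `scipy` and `numpy`): the distance map of the reference object serves as the "intensity" image for the labelled spots.
```
import numpy as np
from scipy import ndimage as ndi
from skimage.io import imread

reference = imread('xy_8bit_binary__single_object.tif') > 0
labels = imread('xy_8bit_labels__two_spots.tif')

# distance map: distance of every pixel to the reference object
distance_map = ndi.distance_transform_edt(~reference)

# "intensity" of each label in the distance map = distance to the reference
for lbl in np.unique(labels[labels > 0]):
    distances = distance_map[labels == lbl]
    print(lbl, distances.mean())
```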
## Activity: Use distance map for automated region selection
- Open reference object image: xy_8bit_binary__single_object.tif
- Compute distance map
- Threshold distance map to select regions
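A minimal sketch of the region selection in Python (assuming `scipy`; the distance cut-off is an example value):
```
from scipy import ndimage as ndi
from skimage.io import imread

reference = imread('xy_8bit_binary__single_object.tif') > 0
distance_map = ndi.distance_transform_edt(~reference)

# select all pixels closer than 10 pixels to the reference object
near_region = distance_map < 10
```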
### Formative Assessment
TODO
### Learn more
TODO
# Image feature enhancement
<img src='https://g.gravizo.com/svg?
digraph G {
shift [fontcolor=white,color=white];
image -> filter -> "enhanced image";
node [shape=box, color=grey, fontcolor=grey];
"enhanced image" -> "feature" [label=" aka", style=dashed, color=grey, fontcolor=grey, fontsize=10];
"feature enhancement" [shape=box, color=grey, fontcolor=grey, margin=0.05];
filter -> "feature enhancement" [label=" aka", style=dashed, color=grey, fontcolor=grey, fontsize=10];
}
'/>
## Examples
- Difference of Gaussian filter enhances spots
- ...
## Learn next
- filter_difference_of_gaussian.md
---
title: Image math
layout: page
---
## Convolution filters
<img src='https://g.gravizo.com/svg?
digraph G {
shift [fontcolor=white,color=white];
"intensity image" -> "convolution" -> "filtered image";
"small image" -> size;
"small image" -> "pixel values";
"kernel" -> "small image" [label=" is"];
"kernel" -> "convolution";
}
'/>
### Activity: Explore convolution filters
* Open image: xy_8bit__nuclei_noisy_different_intensity.tif
* Explore the results of different convolution filters, e.g.
* https://en.wikipedia.org/wiki/Kernel_(image_processing)
* Mean filter
* Gaussian blur
* Edge detection
* Appreciate that the results can be (slightly) wrong, because they are rounded and clipped to the 8-bit range of the input image.
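A minimal convolution sketch in Python (assuming `scipy`; converting to float before filtering avoids the rounding and clipping issues mentioned above):
```
import numpy as np
from scipy import ndimage as ndi
from skimage.io import imread

image = imread('xy_8bit__nuclei_noisy_different_intensity.tif').astype(float)

# 3x3 mean filter kernel: all weights equal, summing to 1
mean_kernel = np.ones((3, 3)) / 9.0
mean_filtered = ndi.convolve(image, mean_kernel)

# Gaussian blur (the kernel is generated internally from sigma)
gaussian_filtered = ndi.gaussian_filter(image, sigma=2)

# simple edge detection kernel (Laplacian-like)
edge_kernel = np.array([[ 0, -1,  0],
                        [-1,  4, -1],
                        [ 0, -1,  0]])
edges = ndi.convolve(image, edge_kernel)
```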
### Activity: Use mean filter to facilitate image segmentation
* Open image: xy_8bit__nuclei_noisy_different_intensity.tif
* Appreciate that you cannot readily threshold the image
* Apply a mean filter
* Threshold the filtered image
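One way to run this workflow in Python (assuming `scipy` and `scikit-image`; the filter size and the automatic Otsu threshold are example choices):
```
from scipy import ndimage as ndi
from skimage.io import imread
from skimage.filters import threshold_otsu

image = imread('xy_8bit__nuclei_noisy_different_intensity.tif').astype(float)

# mean filter with a 5x5 neighbourhood reduces the noise
smoothed = ndi.uniform_filter(image, size=5)

# thresholding the smoothed image now separates nuclei from background
binary = smoothed > threshold_otsu(smoothed)
```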
### Formative assessment
* Draw the kernel of a 3x3 mean filter.
* Draw three different kernels that enhance edges.
### Learn more
* https://en.wikipedia.org/wiki/Kernel_(image_processing)
---
title: Difference of Gaussian
layout: page
---
## Difference of Gaussian (DoG)
<img src='https://g.gravizo.com/svg?
digraph G {
shift [fontcolor=white,color=white];
image -> "small blur";
image -> "large blur";
"small blur" -> "noise filtered";
"large blur" -> "local background";
"small blur" -> "small blur - large blur" -> "DoG";
"large blur" -> "small blur - large blur" -> "DoG";
"DoG" -> "Laplacian of Gaussian (LoG)" [label=" is related"];
}
'/>
### Activity: Enhance spots in noisy image with uneven background
- Open image: xy_8bit__two_spots_noisy_uneven_background.tif
- Appreciate that you cannot readily threshold the spots
- Compute DoG:
- Copy image and blur with a Gaussian of small sigma -> Gs
- Copy image and blur with a Gaussian of bigger sigma -> Gb
- For the classical DoG, the sigmas are related by `sigma_b = sqrt(2) * sigma_s`
- Create `DoG = Gs - Gb`
- Appreciate that now it is possible to threshold the spots in the DoG image
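A sketch of the DoG computation in Python (assuming `scipy`; the sigma values are example choices following the `sqrt(2)` relation above, and the final threshold is an arbitrary example):
```
import numpy as np
from scipy import ndimage as ndi
from skimage.io import imread

image = imread('xy_8bit__two_spots_noisy_uneven_background.tif').astype(float)

sigma_small = 2.0
sigma_big = np.sqrt(2) * sigma_small

Gs = ndi.gaussian_filter(image, sigma_small)  # noise-filtered image
Gb = ndi.gaussian_filter(image, sigma_big)    # local background estimate

dog = Gs - Gb  # spots now sit on a flat, zero-centred background

# the spots can now be thresholded, e.g. at a fixed positive value
spots = dog > 5
```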
### Learn more
- https://imagescience.org/meijering/software/featurej/
- https://en.wikipedia.org/wiki/Difference_of_Gaussians
- https://github.com/CellProfiler/CellProfiler/blob/master/cellprofiler/modules/enhanceorsuppressfeatures.py#L4
# Neighborhood filters
<img src='https://g.gravizo.com/svg?
digraph G {
shift [fontcolor=white,color=white];
"neighbourhood filter" -> "central neighbourhood pixel" [label=" replaces"];
"neighbourhood filter" -> "size" [label=" has"];
"neighbourhood filter" -> "shape" [label=" has"];
"neighbourhood filter" -> "convolution filters";
"neighbourhood filter" -> "rank filters";
}
'/>
C and B are two pixels of interest; NC and NB mark their respective 3x3 neighbourhoods:

| | | | | | | | |
|---|---|---|---|---|---|---|---|
| NC | NC | NC | | | | | |
| NC | C, NC | NC | | | | | |
| NC | NC | NC | | | | | |
| | | | | NB | NB | NB | |
| | | | | NB | B, NB | NB | |
| | | | | NB | NB | NB | |
| | | | | | | | |
# Rank filters
## Basic rank filters
<img src='https://g.gravizo.com/svg?
digraph G {
shift [fontcolor=white,color=white];
"rank filters" -> "neighbourhood filters" [label=" are"];
"rank filters" -> minimum;
minimum -> erosion [label=" aka"];
"rank filters" -> maximum;
maximum -> dilation [label=" aka"];
"rank filters" -> median;
"rank filters" -> "size" [label=" have"];
}
'/>
### Activity: Explore rank filters on binary images
- Open image: xy_8bit_binary__two_spots_different_size.tif
- Explore how structures grow and shrink, using erosion and dilation
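A minimal sketch in Python (assuming `scipy`; the 3x3 structuring element is an example choice):
```
import numpy as np
from scipy import ndimage as ndi
from skimage.io import imread

binary = imread('xy_8bit_binary__two_spots_different_size.tif') > 0
structure = np.ones((3, 3), dtype=bool)

eroded = ndi.binary_erosion(binary, structure=structure)    # objects shrink
dilated = ndi.binary_dilation(binary, structure=structure)  # objects grow
```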
### Activity: Explore rank filters on grayscale images
- Open image: xy_8bit__two_noisy_squares_different_size.tif
- Explore how a median filter
- removes noise
- removes small structures
- preserves edges
- Compare median filter to mean filter of same radius
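A sketch of the comparison in Python (assuming `scipy`; `size=5` gives the same neighbourhood for both filters):
```
from scipy import ndimage as ndi
from skimage.io import imread

image = imread('xy_8bit__two_noisy_squares_different_size.tif').astype(float)

median_filtered = ndi.median_filter(image, size=5)  # removes noise, preserves edges
mean_filtered = ndi.uniform_filter(image, size=5)   # removes noise, blurs edges
```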
### Formative assessment
True or false? Discuss with your neighbour!
1. Median filter is just another name for mean filter.
2. Small structures can completely disappear from an image when applying a median filter.
Fill in the blanks, using those words: shrinks, increases, decreases, enlarges.
1. An erosion _____ objects in a binary image.
2. An erosion in a binary image _____ the number of foreground pixels.
3. A dilation in a grayscale image _____ the average intensity in the image.
4. A dilation _____ objects in a binary image.
## Morphological opening and closing
<img src='https://g.gravizo.com/svg?
digraph G {
shift [fontcolor=white,color=white];
"opening" -> "rank filter sequence" [label=" is"];
"closing" -> "rank filter sequence" [label=" is"];
"opening" -> "removes small structures";
"closing" -> "fills small gaps";
}
'/>
```
opening( image, r ) = dilation( erosion( image, r ), r )
```
```
closing( image, r ) = erosion( dilation( image, r ), r )
```
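The same definitions as a Python sketch (assuming `scipy`; for binary images `scipy.ndimage` also offers `binary_opening` and `binary_closing` directly):
```
import numpy as np
from scipy import ndimage as ndi

def opening(image, r):
    # dilation of the erosion: removes structures smaller than the footprint
    footprint = np.ones((2 * r + 1, 2 * r + 1), dtype=bool)
    return ndi.binary_dilation(ndi.binary_erosion(image, footprint), footprint)

def closing(image, r):
    # erosion of the dilation: fills gaps smaller than the footprint
    footprint = np.ones((2 * r + 1, 2 * r + 1), dtype=bool)
    return ndi.binary_erosion(ndi.binary_dilation(image, footprint), footprint)
```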
### Activity: Explore opening and closing on binary images
- Open image: xy_8bit_binary__for_open_and_close.tif
- Explore effects of morphological closing and opening:
- closing can fill holes
- closing can connect gaps
- opening can remove thin structures
### Formative assessment
True or false? Discuss with your neighbour!
1. Morphological openings on binary images can decrease the number of foreground pixels.
2. Morphological closings on binary images never decreases the number of foreground pixels.
3. Performing a morphological closing twice in a row does not make sense, because the second closing does not further change the image.
## Top hat filter for local background subtraction
<img src='https://g.gravizo.com/svg?
digraph G {
shift [fontcolor=white,color=white];
"tophat" -> "rank filter sequence";
"tophat" -> "local background subtraction";
}
'/>
```
tophat( image, r ) = image - opening( image, r ) = image - dilation( erosion( image, r ), r )
```
### Activity: Explore tophat filter
- Open image: xy_8bit__spots_local_background.tif
- Use a tophat filter to remove local background
### Activity: Implement a tophat filter
- Devise code implementing a tophat filter, using minimum and maximum filters (see the sketch below)
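One possible implementation in Python (assuming `scipy`; a minimum filter acts as an erosion and a maximum filter as a dilation, so applying them in sequence is a grayscale opening):
```
from scipy import ndimage as ndi

def tophat(image, r):
    image = image.astype(float)  # avoid underflow when subtracting from unsigned types
    size = 2 * r + 1
    # grayscale opening = maximum filter applied after a minimum filter
    background = ndi.maximum_filter(ndi.minimum_filter(image, size), size)
    return image - background  # local background subtracted
```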
### Activity: Explore tophat filter on biological data
- Open image: xy_16bit__autophagosomes.tif
- Appreciate that you cannot readily segment the spots.
- Use a tophat filter to remove local background.
- Threshold the spots in the tophat filtered image.
### Activity: Explore tophat filter on noisy data
- Open image: xy_8bit__spots_local_background_with_noise.tif
- Use a tophat filter to remove the local background
- Appreciate that noise poses a challenge to the tophat filter
## Median filter for local background subtraction
<img src='https://g.gravizo.com/svg?
digraph G {
shift [fontcolor=white,color=white];
"median" -> "local background" [label=" approximates"];
"median" -> "radius" -> "> object width";
"radius" -> "< spatial background frequency";
}
'/>
```
median_based_background_correction = image - median( image, r)
```
### Activity: Implement median-based background subtraction
- Write code to implement a median-based background subtraction (see the sketch below)
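One possible implementation in Python (assuming `scipy`; remember that the radius must be larger than the object width):
```
from scipy import ndimage as ndi

def median_based_background_correction(image, r):
    image = image.astype(float)  # avoid underflow when subtracting from unsigned types
    # a large-radius median approximates the local background
    background = ndi.median_filter(image, size=2 * r + 1)
    return image - background
```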
### Activity: Explore median filter for local background subtraction
- Open images:
- xy_8bit__spots_local_background.tif
- xy_8bit__spots_local_background_with_noise.tif
- Apply the median-based background subtraction to remove the local background
- Compare the result to the tophat filter, especially on the noisy image
### Formative assessment
Answer the questions below. Discuss with your neighbour!
1. What could one do to close small gaps in a binary image?
2. What could one do to remove small objects from an image?
3. What could you use for local background subtraction in a very noisy image?
## Learn more
- https://imagej.net/MorphoLibJ#Grayscale_morphological_filters
# Semantic image segmentation using machine learning
<img src='https://g.gravizo.com/svg?
digraph G {
shift [fontcolor=white,color=white];
"intensity image" -> threshold;
threshold -> "binary image";
"binary image" -> "background value";
"binary image" -> "foreground value";
"intensity image" -> "machine learning";
"annotations" -> "machine learning";
"machine learning" -> "pixel class image";
"pixel class image" -> "class00 value";
"pixel class image" -> "class01 value";
"pixel class image" -> "class.. value";
"pixel class image" -> "class C value";
}
'/>
&nbsp;
&nbsp;
&nbsp;
## Decision tree based image segmentation
<img src='https://g.gravizo.com/svg?
digraph G {
shift [fontcolor=white,color=white];
"Intensity image" -> "filter00 image" -> "Decision tree(s)";
"Intensity image" -> "filter01 image" -> "Decision tree(s)";
"Intensity image" -> "filter02 image" -> "Decision tree(s)";
"Intensity image" -> "filter.. image" -> "Decision tree(s)";
"Intensity image" -> "filter F image" -> "Decision tree(s)";
"Annotations" -> "Decision trees(s)"
"Decision tree(s)" -> "class00 (probability) image";
"Decision tree(s)" -> "class01 (probability) image";
"Decision tree(s)" -> "class.. (probability) image";
"Decision tree(s)" -> "class C (probability) image";
}
'/>
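A conceptual Python sketch of this idea (assuming `scipy` and `scikit-learn`; in practice one would typically use an interactive tool, but the sketch shows how filter images become per-pixel features for a tree ensemble; the filter choices and parameters are example values):
```
import numpy as np
from scipy import ndimage as ndi
from sklearn.ensemble import RandomForestClassifier

def pixel_features(image):
    # each pixel is described by the responses of a few filters
    image = image.astype(float)
    features = [image,
                ndi.gaussian_filter(image, 1),
                ndi.gaussian_filter(image, 4),
                ndi.gaussian_gradient_magnitude(image, 2)]
    return np.stack([f.ravel() for f in features], axis=1)

def train_and_predict(image, annotations):
    # annotations: integer image, 0 = unlabelled, 1..C = class labels
    features = pixel_features(image)
    labelled = annotations.ravel() > 0
    classifier = RandomForestClassifier(n_estimators=50)
    classifier.fit(features[labelled], annotations.ravel()[labelled])
    # predict a class for every pixel of the image
    return classifier.predict(features).reshape(image.shape)
```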
## Activity: Semantic image segmentation
- Open image: xy_8bit__em_fly_eye.tif
- Segment three classes: background, eye, other
- Choose image filters
- Draw a few labels in the blurry image background => class00
- Draw a few labels on the eye => class01
- Draw a few labels on other parts of the animal => class02
- While( not happy):
- Train the classifier
- Inspect the predictions
- Add more labels where the predictions are wrong
TODO: use multiple files to demo that a classifier can be applied on other images.
## Formative assessment
True or false? Discuss with your neighbour!
- In contrast to simple thresholding, machine-learning-based pixel classification always yields more than 2 classes.
- If one wants to learn 4 different classes, one has to add at least 4 annotations to the training image.
- One cannot classify an image on which one did not put any training annotations.
---
title: Intensity measurements
layout: page
---
## Intensity measurements
### Activity: Measure intensities in image regions
* Open image: xy_float__h2b_bg_corr.tif
* Measure for both nuclei:
* Maximum intensity
* Average intensity
* Median intensity
* Sum intensity
* Discuss the interpretation!
* Discuss where to measure!
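A sketch of per-object intensity measurements in Python (assuming `numpy` and `scikit-image`; the segmentation step is a hypothetical automatic threshold, just to obtain the two nucleus regions):
```
import numpy as np
from skimage.io import imread
from skimage.filters import threshold_otsu
from skimage.measure import label

image = imread('xy_float__h2b_bg_corr.tif')

# hypothetical segmentation: automatic threshold and connected-component labelling
nuclei = label(image > threshold_otsu(image))

for lbl in np.unique(nuclei[nuclei > 0]):
    values = image[nuclei == lbl]
    print(lbl,
          values.max(),        # maximum intensity
          values.mean(),       # average intensity
          np.median(values),   # median intensity
          values.sum())        # sum intensity
```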
### Activity: Intensity measurements without pixel based background correction
#### Motivation
There are several good reasons not to subtract the background from each pixel in an image:
* It is a bit tricky to do it right, because one has to convert the image to float to accommodate fractional and negative values.
* If one has really big image data (terabytes), one would need (at least) another terabyte of storage for the background-corrected version of the image.
#### Workflow
* Open image: xy_calibrated_8bit__two_nuclei_high_background.tif
* Measure for both nuclei and a background region:
* Maximum intensity
* Average intensity
* Median intensity
* Sum intensity
* Discuss how to correct the intensities for the background
* Appreciate that you also need the region areas for this task
* Measure the region areas
* Watch out: the image is calibrated!
* Use the area for the correction.
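The correction itself is simple arithmetic; a minimal Python sketch with hypothetical measurement values (in practice these come from the region measurements above):
```
import numpy as np

# hypothetical pixel values measured in a background region and in one nucleus
background_values = np.array([100.0, 102.0, 98.0, 101.0])
nucleus_values = np.array([150.0, 160.0, 155.0, 158.0, 152.0])

mean_background = background_values.mean()

area_nucleus = nucleus_values.size   # in pixels; mind the calibration if areas are in physical units
sum_nucleus = nucleus_values.sum()
mean_nucleus = nucleus_values.mean()

# background-corrected intensities, without modifying any pixel of the image
corrected_mean = mean_nucleus - mean_background
corrected_sum = sum_nucleus - mean_background * area_nucleus
```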
### Formative assessment: Intensity measurements
Fill in the blanks, using these words: integrated, mean, number_of_pixels, decrease, increase, sum
1. Average intensity is just another word for _____ intensity.
2. The _____ intensity is equal to the mean intensity times the _____ in the measured region.
3. In an 8-bit image, increasing the size of the measurement region can only _____ the sum intensity.
4. In a float image, increasing the size of the measurement region can _____ the sum intensity.
---
title: Object shape measurements
layout: page
---
## Shape measurements
<img src='https://g.gravizo.com/svg?
digraph G {
shift [fontcolor=white,color=white];
"label image" -> shape_analysis -> table;
table -> object_rows;
table -> feature_columns;
table -> visualisation;
}
'/>
### Activity: Measure object shape parameters
* Open image: xy_8bit_labels__four_objects.tif
* Perform shape measurements and discuss their meanings.
* Explore results visualisation
* Color objects by their measurement values.
* Add a calibration to the image and check which shape measurements are affected.
* Draw a test image to understand the shape measurements even better.
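A sketch of shape measurements in Python (assuming `scikit-image`; circularity is derived here from area and perimeter):
```
import numpy as np
from skimage.io import imread
from skimage.measure import regionprops

label_image = imread('xy_8bit_labels__four_objects.tif')

for region in regionprops(label_image):
    area = region.area            # in pixels
    perimeter = region.perimeter  # estimated along the object boundary
    circularity = 4 * np.pi * area / perimeter ** 2  # 1.0 for a perfect circle
    print(region.label, area, perimeter, circularity)
```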
### Activity: Explore sampling limits
* Draw a square of 2x2 pixels (on paper, a whiteboard, ...); at this size it is indistinguishable from a circle
* Measure area, perimeter and circularity
* Discuss the results
* Discuss the coastline paradox (e.g., the length of Britain's coastline)
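As a worked example (counting the perimeter along the pixel edges; real software may estimate the perimeter differently):
```
import numpy as np

area = 4.0       # 2x2 pixels
perimeter = 8.0  # four sides of length 2, counted along the pixel edges

circularity = 4 * np.pi * area / perimeter ** 2
print(circularity)  # ~0.785, well below 1, although at this size the shape is as round as it can be
```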
### Formative assessment
True or false? Discuss with your neighbour!
* Circularity is independent of image calibration.
* Area is independent of image calibration.
* Perimeter can strongly depend on spatial sampling.
* Volume can strongly depend on spatial sampling.
* Drawing test images to check how certain shape parameters behave is a good idea.
### Learn more
* Surface and perimeter measurements in particular are affected by sampling and resolution; see for example:
* https://en.wikipedia.org/wiki/Coastline_paradox
* Results visualisation:
* https://imagej.net/MorphoLibJ#Grayscale_morphological_filters: **Label visualization in 3D viewer**
## Learn next
- object_shape_measurement_workflow.md
- intensity_measurements.md
---
title: Object splitting
layout: page
---
# Object splitting
## Requirements
- binarisation.md
- distance_transform.md
## "Intensity based" watershed
<img src='https://g.gravizo.com/svg?
digraph G {
shift [fontcolor=white,color=white];
"intensity image" -> "watershed" -> "label image";
"label image" -> "pond regions";
}
'/>
### Activity: Explore intensity based watershed
- Open image: xy_8bit__touching_objects.tif
- Invert image for watershed
- Apply watershed
### Activity: Use intensity based watershed for object segmentation
- Open intensity image: xy_8bit__touching_objects.tif
- Threshold intensity image => binary image (aka "mask")
- Invert intensity image for watershed
- Apply watershed, using the mask
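A sketch of this workflow in Python (assuming `scikit-image`; when no markers are given, the watershed starts from the local minima of the inverted image, and the binary mask restricts the result to the objects):
```
from skimage.io import imread
from skimage.filters import threshold_otsu
from skimage.segmentation import watershed

image = imread('xy_8bit__touching_objects.tif')

mask = image > threshold_otsu(image)  # binary image ("mask")
inverted = image.max() - image        # objects become valleys

# flood the valleys; the mask restricts the result to the objects
labels = watershed(inverted, mask=mask)
```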
## "Shape based" watershed
<img src='https://g.gravizo.com/svg?
digraph G {
shift [fontcolor=white,color=white];
"binary image" -> "distance map" -> "watershed" -> "label image";
"label image" -> "thickness ponds";
}
'/>
### Activity: Explore shape based watershed
- Open image: xy_8bit__touching_objects_same_intensity.tif
- Threshold -> Binary image
- Copy binary image (we'll need it as mask later...)
- Binary image -> Distance map
- Distance map -> Watershed
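A sketch of the shape-based watershed in Python (assuming `scipy` and `scikit-image`; the local maxima of the distance map serve as markers and the copied binary image as the mask; the footprint size is an example value):
```
import numpy as np
from scipy import ndimage as ndi
from skimage.io import imread
from skimage.filters import threshold_otsu
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

image = imread('xy_8bit__touching_objects_same_intensity.tif')
binary = image > threshold_otsu(image)

# distance map: high values in the centre of thick structures
distance = ndi.distance_transform_edt(binary)

# one marker per local maximum of the distance map
coords = peak_local_max(distance, labels=binary, footprint=np.ones((7, 7)))
markers = np.zeros(distance.shape, dtype=int)
markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)

# watershed on the inverted distance map, restricted to the binary mask
labels = watershed(-distance, markers, mask=binary)
```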
### Learn more
TODO
### Formative Assessment
TODO
True or false?
* The lowest pixel index of a 2D image always is `[1,1]`.
* When looking at a 2D image, the lowest pixel indices are always in the lower left corner.
## Learn next
- display.md