# Course preamble
<img src='https://g.gravizo.com/svg?
digraph G {
shift [fontcolor=white,color=white];
"learn" -> "concepts";
"concepts" -> "software independent" [label=" are"];
}
'/>
The focus of this course is **not** to learn a specific image analysis software package.
In fact, one could probably teach most concepts without a computer.
# Distance transform
<img src='https://g.gravizo.com/svg?
digraph G {
shift [fontcolor=white,color=white];
"binary image" -> "distance transform" -> "distance map";
"distance map" -> "values are distances";
}
'/>
## Activity: Explore distance transform
- Open image: xy_8bit_binary__two_objects.tif
- Learn:
- It matters what is foreground and what is background.
- The image data type limits the possible distance values.
- There is a difference between calibrated vs. pixel-based distance transforms.
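One possible way to explore this in Python (scipy and scikit-image are just one tool choice; the course is software independent). The pixel size used for calibration is an illustrative assumption.
```
# Minimal sketch; assumes scipy / scikit-image and the course image in the working directory.
import numpy as np
from scipy import ndimage as ndi
from skimage.io import imread

binary = imread('xy_8bit_binary__two_objects.tif') > 0  # foreground = True

# Each foreground pixel gets its distance to the nearest background pixel
distance_foreground = ndi.distance_transform_edt(binary)

# Swapping foreground and background gives a very different distance map
distance_background = ndi.distance_transform_edt(~binary)

# Calibrated distances (a pixel size of 0.1 units is only an illustrative assumption)
distance_calibrated = ndi.distance_transform_edt(binary, sampling=0.1)

# Casting to 8-bit clips distances above 255: the data type limits the possible values
distance_8bit = np.clip(distance_foreground, 0, 255).astype('uint8')
```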
## Activity: Use distance map for automated distance measurements
- Open reference object image: xy_8bit_binary__single_object.tif
- Compute distance map
- Open label image: xy_8bit_labels__two_spots.tif
- Measure "intensity" of label image objects in distance map
- intensity is distance
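A minimal sketch in Python (scipy and scikit-image assumed; any other tool works as well): the reference object is inverted before the distance transform, so every pixel outside the object carries its distance to the object, and the labelled spots are then "measured" in that distance map.
```
# Minimal sketch; assumes scipy / scikit-image and the course images.
import numpy as np
from scipy import ndimage as ndi
from skimage.io import imread

reference = imread('xy_8bit_binary__single_object.tif') > 0
labels = imread('xy_8bit_labels__two_spots.tif')

# Distance of every pixel to the reference object
distance_map = ndi.distance_transform_edt(~reference)

# "Intensity" of each label in the distance map = distance to the reference object
label_ids = [l for l in np.unique(labels) if l != 0]
mean_distances = ndi.mean(distance_map, labels=labels, index=label_ids)
for label_id, distance in zip(label_ids, mean_distances):
    print(f'label {label_id}: mean distance {distance:.1f} pixels')
```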
## Activity: Use distance map for automated region selection
- Open reference object image: xy_8bit_binary__single_object.tif
- Compute distance map
- Threshold distance map to select regions
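Sketched in Python (scipy and scikit-image assumed); the 10 pixel distance threshold is only an illustrative value.
```
# Minimal sketch; assumes scipy / scikit-image and the course image.
from scipy import ndimage as ndi
from skimage.io import imread

reference = imread('xy_8bit_binary__single_object.tif') > 0
distance_map = ndi.distance_transform_edt(~reference)

# Select the region within 10 pixels of the reference object (excluding the object itself)
selected_region = (distance_map > 0) & (distance_map <= 10)
```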
## Formative Assessment
TODO
## Learn more
TODO
# Image analysis automation
## Automated object filtering
<img src='https://g.gravizo.com/svg?
digraph G {
shift [fontcolor=white,color=white];
label_image -> table [label=" measure_shape"];
table -> filtered_objects;
filtered_objects -> label_image [label=" remove"];
}
'/>
### Activity: Automatically remove objects from label image
- Open image: `xy_8bit_labels__four_objects.tif`
- Devise code to automatically remove objects from the label image (a sketch follows below), e.g.
- Remove all cells larger than N pixels in area
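One possible implementation in Python (scikit-image assumed); N = 100 pixels is only an illustrative value.
```
# Minimal sketch; assumes scikit-image and the course label image.
import numpy as np
from skimage.io import imread
from skimage.measure import regionprops

label_image = imread('xy_8bit_labels__four_objects.tif')

N = 100  # illustrative area threshold in pixels
too_large = [region.label for region in regionprops(label_image) if region.area > N]

filtered = label_image.copy()
filtered[np.isin(filtered, too_large)] = 0  # remove the objects (set to background)
```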
### Formative assessment
# Image feature enhancement
<img src='https://g.gravizo.com/svg?
digraph G {
shift [fontcolor=white,color=white];
image -> filter -> "enhanced image";
node [shape=box, color=grey, fontcolor=grey];
"enhanced image" -> "feature" [label=" aka", style=dashed, color=grey, fontcolor=grey, fontsize=10];
"feature enhancement" [shape=box, color=grey, fontcolor=grey, margin=0.05];
filter -> "feature enhancement" [label=" aka", style=dashed, color=grey, fontcolor=grey, fontsize=10];
}
'/>
## Difference of Gaussian (DoG) for spot enhancement
<img src='https://g.gravizo.com/svg?
digraph G {
shift [fontcolor=white,color=white];
image -> "small blur";
image -> "large blur";
"small blur" -> "noise filtered";
"large blur" -> "local background";
"small blur" -> "small blur - large blur" -> "DoG";
"large blur" -> "small blur - large blur" -> "DoG";
"DoG" -> "Laplacian of Gaussian (LoG)" [label=" is related"];
}
'/>
### Activity: Enhance spots in noisy image with uneven background
- Open image: xy_8bit__two_spots_noisy_uneven_background.tif
- Appreciate that you cannot readily threshold the spots
- Compute DoG:
- Copy image and blur with a Gaussian of small sigma -> Gs
- Copy image and blur with a Gaussian of bigger sigma -> Gb
- For the classical DoG: `sigma_b = sqrt(2) * sigma_s`
- Create `DoG = Gs - Gb`
- Appreciate that now it is possible to threshold the spots in the DoG image
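A minimal sketch of the recipe above in Python (scikit-image assumed); the small sigma and the Otsu threshold are illustrative choices.
```
# Minimal sketch; assumes scikit-image and the course image.
import numpy as np
from skimage.io import imread
from skimage.filters import gaussian, threshold_otsu

image = imread('xy_8bit__two_spots_noisy_uneven_background.tif').astype(float)

sigma_s = 2.0                    # illustrative value
sigma_b = np.sqrt(2) * sigma_s   # classical DoG ratio

Gs = gaussian(image, sigma=sigma_s, preserve_range=True)  # noise filtered
Gb = gaussian(image, sigma=sigma_b, preserve_range=True)  # local background
dog = Gs - Gb

# Thresholding the spots now works on the DoG image
spots = dog > threshold_otsu(dog)
```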
### Learn more
- https://imagescience.org/meijering/software/featurej/
- https://en.wikipedia.org/wiki/Difference_of_Gaussians
- https://github.com/CellProfiler/CellProfiler/blob/master/cellprofiler/modules/enhanceorsuppressfeatures.py#L4
### Formative Assessment
TODO
# Semantic image segmentation using machine learning
<img src='https://g.gravizo.com/svg?
digraph G {
shift [fontcolor=white,color=white];
"intensity image" -> threshold;
threshold -> "binary image";
"binary image" -> "background value";
"binary image" -> "foreground value";
"intensity image" -> "machine learning";
"annotations" -> "machine learning";
"machine learning" -> "pixel class image";
"pixel class image" -> "class00 value";
"pixel class image" -> "class01 value";
"pixel class image" -> "class.. value";
"pixel class image" -> "class C value";
}
'/>
&nbsp;
&nbsp;
&nbsp;
## Decision tree based image segmentation
<img src='https://g.gravizo.com/svg?
digraph G {
shift [fontcolor=white,color=white];
"Intensity image" -> "filter00 image" -> "Decision tree(s)";
"Intensity image" -> "filter01 image" -> "Decision tree(s)";
"Intensity image" -> "filter02 image" -> "Decision tree(s)";
"Intensity image" -> "filter.. image" -> "Decision tree(s)";
"Intensity image" -> "filter F image" -> "Decision tree(s)";
"Annotations" -> "Decision trees(s)"
"Decision tree(s)" -> "class00 (probability) image";
"Decision tree(s)" -> "class01 (probability) image";
"Decision tree(s)" -> "class.. (probability) image";
"Decision tree(s)" -> "class C (probability) image";
}
'/>
## Activity: Semantic image segmentation
- Open image: xy_8bit__em_fly_eye.tif
- Segment three classes: background, eye, other
- Choose image filters
- Draw a few labels in the blurry image background => class00
- Draw a few labels on the eye => class01
- Draw a few labels on other parts of the animal => class02
- While( not happy):
- Train the classifier
- Inspect the predictions
- Add more labels where the predictions are wrong
TODO: use multiple files to demonstrate that a classifier can be applied to other images.
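A minimal sketch of decision-tree based pixel classification in Python (scikit-image and scikit-learn assumed). The annotation image is hypothetical: it stands in for the scribbles drawn interactively in a pixel classification tool, with 0 = unlabeled and 1..3 for the three classes.
```
# Minimal sketch; assumes scikit-image, scikit-learn, the course image and a
# hypothetical annotation image (0 = unlabeled, 1 = background, 2 = eye, 3 = other).
import numpy as np
from skimage.io import imread
from skimage.filters import gaussian
from sklearn.ensemble import RandomForestClassifier

image = imread('xy_8bit__em_fly_eye.tif').astype(float)
annotations = imread('annotations.tif')  # hypothetical scribble image

# Filter images serve as per-pixel features (raw intensity plus blurred versions)
features = np.stack(
    [image] + [gaussian(image, sigma=s, preserve_range=True) for s in (1, 2, 4, 8)],
    axis=-1)

# Train the decision trees only on the annotated pixels
X = features[annotations > 0]
y = annotations[annotations > 0]
classifier = RandomForestClassifier(n_estimators=50).fit(X, y)

# Predict a class for every pixel of the image
pixel_classes = classifier.predict(
    features.reshape(-1, features.shape[-1])).reshape(image.shape)
```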
## Formative assessment
True or false? Discuss with your neighbour!
- In contrast to simple thresholding, when using machine learning for pixel classification one always has more than 2 classes.
- If one wants to learn 4 different classes, one has to add at least 4 annotations to the training image.
- One cannot classify an image on which one did not put any training annotations.
# Object splitting
## "Intensity based" watershed
<img src='https://g.gravizo.com/svg?
digraph G {
shift [fontcolor=white,color=white];
"intensity image" -> "watershed" -> "label image";
"label image" -> "pond regions";
}
'/>
### Activity: Explore intensity based watershed
- Open image: xy_8bit__touching_objects.tif
- Invert image for watershed
- Apply watershed
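One way to explore this in Python (scikit-image assumed). Without seeds, the watershed floods from all local minima of the inverted image, so each "pond" becomes one labelled region.
```
# Minimal sketch; assumes scikit-image and the course image.
from skimage.io import imread
from skimage.segmentation import watershed

image = imread('xy_8bit__touching_objects.tif')
inverted = image.max() - image  # invert so that bright objects become "ponds"

label_image = watershed(inverted)  # floods from the local minima
```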
### Activity: Use intensity based watershed for object segmentation
- Open intensity image: xy_8bit__touching_objects.tif
- Threshold intensity image => binary image (aka "mask")
- Invert intensity image for watershed
- Apply watershed, using the mask
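The same in Python, now restricted to the thresholded mask (scikit-image assumed; the threshold value is illustrative).
```
# Minimal sketch; assumes scikit-image and the course image.
from skimage.io import imread
from skimage.segmentation import watershed

image = imread('xy_8bit__touching_objects.tif')

mask = image > 50               # binary image ("mask"); illustrative threshold
inverted = image.max() - image  # invert so that objects become catchment basins

label_image = watershed(inverted, mask=mask)  # one label per pond inside the mask
```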
## "Shape based" watershed
<img src='https://g.gravizo.com/svg?
digraph G {
shift [fontcolor=white,color=white];
"binary image" -> "distance map" -> "watershed" -> "label image";
"label image" -> "thickness ponds";
}
'/>
### Activity: Explore shape based watershed
- Open image: xy_8bit__touching_objects_same_intensity.tif
- Threshold -> Binary image
- Copy binary image (we'll need it as mask later...)
- Binary image -> Distance map
- Distance map -> Watershed
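A minimal sketch in Python (scipy and scikit-image assumed); the threshold and peak detection settings are illustrative. The local maxima of the distance map serve as seeds, and the binary image kept aside is reused as the mask.
```
# Minimal sketch; assumes scipy / scikit-image and the course image.
import numpy as np
from scipy import ndimage as ndi
from skimage.io import imread
from skimage.segmentation import watershed
from skimage.feature import peak_local_max

image = imread('xy_8bit__touching_objects_same_intensity.tif')
binary = image > 50  # illustrative threshold; keep the binary image as mask

distance_map = ndi.distance_transform_edt(binary)

# One seed per local maximum of the distance map (illustrative min_distance)
coords = peak_local_max(distance_map, labels=binary, min_distance=5)
markers = np.zeros(distance_map.shape, dtype=int)
markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)

label_image = watershed(-distance_map, markers=markers, mask=binary)
```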
### Learn more
TODO
### Formative Assessment
TODO
# Neighborhood filters
<img src='https://g.gravizo.com/svg?
digraph G {
shift [fontcolor=white,color=white];
"neighbourhood filter" -> "central neighbourhood pixel" [label=" replaces"];
"neighbourhood filter" -> "size" [label=" has"];
"neighbourhood filter" -> "shape" [label=" has"];
"neighbourhood filter" -> "convolution filters";
"neighbourhood filter" -> "rank filters";
}
'/>
The table below sketches two 3x3 neighbourhoods in a pixel grid: C and B are central pixels, NC and NB are their respective neighbourhood pixels.

| | | | | | | | |
|---|---|---|---|---|---|---|---|
| NC | NC | NC | | | | | |
| NC | C, NC | NC | | | | | |
| NC | NC | NC | | | | | |
| | | | | NB | NB | NB | |
| | | | | NB | B, NB| NB | |
| | | | | NB | NB | NB | |
| | | | | | | | |
# Rank filters
## Basic rank filters
<img src='https://g.gravizo.com/svg?
digraph G {
shift [fontcolor=white,color=white];
"rank filters" -> "neighbourhood filters" [label=" are"];
"rank filters" -> minimum;
minimum -> erosion [label=" aka"];
"rank filters" -> maximum;
maximum -> dilation [label=" aka"];
"rank filters" -> median;
"rank filters" -> "size" [label=" have"];
}
'/>
### Activity: Explore rank filters on binary images
- Open image: xy_8bit_binary__two_spots_different_size.tif
- Explore how structures grow and shrink, using erosion and dilation
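Sketched in Python (scikit-image assumed); the disk radius is illustrative.
```
# Minimal sketch; assumes scikit-image and the course image.
from skimage.io import imread
from skimage.morphology import binary_erosion, binary_dilation, disk

binary = imread('xy_8bit_binary__two_spots_different_size.tif') > 0

eroded = binary_erosion(binary, disk(2))    # structures shrink; small ones may vanish
dilated = binary_dilation(binary, disk(2))  # structures grow
```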
### Activity: Explore rank filters on grayscale images
- Open image: xy_8bit__two_noisy_squares_different_size.tif
- Explore how a median filter
- removes noise
- removes small structures
- preserves edges
- Compare median filter to mean filter of same radius
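One way to compare the two filters in Python (scipy assumed); the filter size of 5 pixels is illustrative.
```
# Minimal sketch; assumes scipy / scikit-image and the course image.
from scipy import ndimage as ndi
from skimage.io import imread

image = imread('xy_8bit__two_noisy_squares_different_size.tif')

median_filtered = ndi.median_filter(image, size=5)  # removes noise, preserves edges
mean_filtered = ndi.uniform_filter(image, size=5)   # removes noise, but blurs edges
```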
### Formative assessment
True or false? Discuss with your neighbour!
1. Median filter is just another name for mean filter.
2. Small structures can completely disappear from an image when applying a median filter.
Fill in the blanks, using these words: shrinks, increases, decreases, enlarges.
1. An erosion _____ objects in a binary image.
2. An erosion in a binary image _____ the number of foreground pixels.
3. A dilation in a grayscale image _____ the average intensity in the image.
4. A dilation _____ objects in a binary image.
## Morphological opening and closing
<img src='https://g.gravizo.com/svg?
digraph G {
shift [fontcolor=white,color=white];
"opening" -> "rank filter sequence" [label=" is"];
"closing" -> "rank filter sequence" [label=" is"];
"opening" -> "removes small structures";
"closing" -> "fills small gaps";
}
'/>
```
opening( image, r ) = dilation( erosion( image, r ), r )
```
```
closing( image, r ) = erosion( dilation( image, r ), r )
```
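A minimal sketch of these two compositions in Python (scipy assumed); the minimum and maximum filters play the roles of erosion and dilation, and `size` stands in for the radius `r` with a square neighbourhood.
```
# Minimal sketch; assumes scipy. Opening and closing composed from
# minimum (erosion) and maximum (dilation) filters, as in the formulas above.
from scipy import ndimage as ndi

def opening(image, size):
    return ndi.maximum_filter(ndi.minimum_filter(image, size), size)

def closing(image, size):
    return ndi.minimum_filter(ndi.maximum_filter(image, size), size)
```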
### Activity: Explore opening and closing on binary images
- Open image: xy_8bit_binary__for_open_and_close.tif
- Explore effects of morphological closing and opening:
- closing can fill holes
- closing can connect gaps
- opening can remove thin structures
### Formative assessment
True or false? Discuss with your neighbour!
1. Morphological openings on binary images can decrease the number of foreground pixels.
2. Morphological closings on binary images never decrease the number of foreground pixels.
3. Performing a morphological closing twice in a row does not make sense, because the second closing does not further change the image.
## Top hat filter for local background subtraction
<img src='https://g.gravizo.com/svg?
digraph G {
shift [fontcolor=white,color=white];
"tophat" -> "rank filter sequence";
"tophat" -> "local background subtraction";
}
'/>
```
tophat( image ) = image - opening( image, r ) = image - dilation( erosion( image, r), r )
```
### Activity: Explore tophat filter
- Open image: xy_8bit__spots_local_background.tif
- Use a tophat filter to remove local background
### Activity: Implement a tophat filter
- Devise code implementing a tophat filter, using minimum and maximum filters (a sketch follows below)
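One possible implementation in Python (scipy and scikit-image assumed); the filter size is illustrative and must be larger than the spot diameter so that the opening estimates the local background.
```
# Minimal sketch; assumes scipy / scikit-image and the course image.
from scipy import ndimage as ndi
from skimage.io import imread

def tophat(image, size):
    # opening = dilation( erosion( image ) ); the opening estimates the local background
    background = ndi.maximum_filter(ndi.minimum_filter(image, size), size)
    return image - background

image = imread('xy_8bit__spots_local_background.tif').astype(float)
corrected = tophat(image, size=15)  # illustrative size, larger than the spot diameter
```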
### Activity: Explore tophat filter on biological data
- Open image: xy_16bit__autophagosomes.tif
- Appreciate that you cannot readily segment the spots.
- Use a tophat filter to remove local background.
- Threshold the spots in the tophat filtered image.
### Activity: Explore tophat filter on noisy data
- Open image: xy_8bit__spots_local_background_with_noise.tif
- Use a tophat filter to remove local background
- Appreciate that noise poses a challenge to the tophat filter
## Median filter for local background subtraction
<img src='https://g.gravizo.com/svg?
digraph G {
shift [fontcolor=white,color=white];
"median" -> "local background" [label=" approximates"];
"median" -> "radius" -> "> object width";
"radius" -> "< spatial background frequency";
}
'/>
```
median_based_background_correction = image - median( image, r)
```
### Activity: Implement median based background subtraction
- Write code to implement a median based background subtraction (a sketch follows below)
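A minimal sketch in Python (scipy and scikit-image assumed); the filter size is illustrative and should be larger than the objects but smaller than the scale of the background variations.
```
# Minimal sketch; assumes scipy / scikit-image and the course image.
from scipy import ndimage as ndi
from skimage.io import imread

image = imread('xy_8bit__spots_local_background_with_noise.tif').astype(float)

background = ndi.median_filter(image, size=15)  # illustrative size
corrected = image - background
```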
### Activity: Explore median filter for local background subtraction
- Open images:
- xy_8bit__spots_local_background.tif
- xy_8bit__spots_local_background_with_noise.tif
- Use the median based background subtraction to remove the local background
- Compare the results with those of the tophat filter, especially on the noisy image
### Formative assessment
Answer the questions below. Discuss with your neighbour!
1. What could one do to close small gaps in a binary image?
2. What could one do to remove small objects in an image?
3. What could you use for local background subtraction in a very noisy image?
## Learn more
- https://imagej.net/MorphoLibJ#Grayscale_morphological_filters
## Recap
Take a few sheets of empty (A4) paper.
Work in groups of two or three.
* Draw a typical image analysis workflow: from intensity image to object shape table.
* Write down a few (e.g., two) noteworthy facts about:
* Pixel data types
* Label images
* Intensity measurements
* Object shape measurements
* Write down answers to the questions below (there can be multiple answers for some questions):
* How can you split touching objects?
* What can you use a distance map for?
* What can you do to segment spots in the presence of an uneven background signal?
* What can you do to remove small objects from a binary image?
# Teaching tips
## White-boards
- Try to use the white-board frequently, because:
- It makes teaching more interactive.
- It slows down teaching, because you have to draw.
- Make sure you have good pens with high contrast.
- Have two white-boards:
- One for tidy concept maps.
- One for messy notes.
## Stand
- Try not to sit, because:
- Teaching will be more dynamic.
- People can see and hear you better.
- Construct something to raise your computer, so that you can stand in front of it.