# Bioimage analysis fundamentals
## Pixel values and indices
<img src='https://g.gravizo.com/svg?
digraph G {
shift [fontcolor=white,color=white];
image -> pixel [label=" has many"];
pixel -> value;
pixel -> indices;
pixel -> voxel [label=" 3D"];
}
'/>
### Activity: Explore pixel values and indices
* Open image: xy_8bit__nuclei_noisy_different_intensity.tif
* Explore different ways to inspect pixel values and indices
* Check where the lowest pixel indices are in the displayed image:
* Most commonly: Upper left corner, which is different from conventional coordinate systems.
TODO: add animated-histogram.gif
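The same exploration can also be scripted; a minimal sketch in Python with NumPy and scikit-image (assuming the training image is in the current working directory; note that NumPy indices start at `[0, 0]` and the first index is the row):
```python
import numpy as np
from skimage.io import imread

image = imread("xy_8bit__nuclei_noisy_different_intensity.tif")

print(image.shape)     # (height, width): number of rows and columns
print(image.dtype)     # pixel data type, e.g. uint8
print(image[0, 0])     # value at the lowest indices: the upper-left pixel
print(image[10, 20])   # value at row 10, column 20 (indices start at 0 here)
```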
## Image calibration
<img src='https://g.gravizo.com/svg?
digraph G {
shift [fontcolor=white,color=white];
pixel -> indices;
pixel -> coordinates;
indices -> calibration;
calibration -> coordinates;
calibration -> anisotropic [label=" can be"];
image -> calibration [label=" can have"];
}
'/>
### Activity: Explore image calibration
* Open image: xy_8bit__nuclei_noisy_different_intensity.tif
* Add image calibration
* Explore whether and how this affects image display and measurements (e.g. distance between two points)
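For intuition, a minimal sketch of how a calibration (pixel size) turns pixel indices into physical coordinates and thereby changes distance measurements (the pixel size and the two points below are made-up example values):
```python
import numpy as np

pixel_size_xy = np.array([0.13, 0.13])   # micrometer per pixel in x and y (example value)

index_a = np.array([28, 35])             # pixel indices of two points
index_b = np.array([40, 50])

distance_pixels = np.linalg.norm(index_b - index_a)                    # in pixels
distance_micron = np.linalg.norm((index_b - index_a) * pixel_size_xy)  # in micrometer

print(distance_pixels, distance_micron)
```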
### Activity: Explore anisotropic 3D image data
* Open image: xy_8bit_calibrated_anisotropic__mri_stack.tif
* Appreciate that the pixels are anisotropic
### Formative assessment
True or false?
* Changing the image calibration changes the pixel values.
* Pixel coordinates depend on image calibration.
* The lowest pixel index of a 2D image is always `[1,1]`.
* When looking at a 2D image, the lowest pixel indices are always in the lower left corner.
&nbsp;
&nbsp;
&nbsp;
<div style="page-break-after: always;"></div>
## Image display
<img src='https://g.gravizo.com/svg?
digraph G {
shift [fontcolor=white,color=white];
LUT -> color;
LUT -> brightness;
min -> LUT;
max -> LUT;
value -> LUT;
}
'/>
```
brightness = ( value - min ) / ( max - min )
0 <= brightness <= 1
contrast = max - min
```
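As a sketch (not how any particular software implements it), the mapping above could be written like this, where `display_min` and `display_max` are the LUT settings:
```python
import numpy as np

def to_display_brightness(values, display_min, display_max):
    # Map pixel values linearly onto [0, 1]; values outside [min, max] saturate
    brightness = (values.astype(float) - display_min) / (display_max - display_min)
    return np.clip(brightness, 0.0, 1.0)
```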
TODO: add animated image contrast demo from twitter
### Activity
* Open image: xy_8bit__nuclei_noisy_different_intensity.tif
* Explore different LUTs and LUT settings
* Appreciate that LUT settings do not affect image content.
### Formative Assessment
Fill in the blanks, using these words: decrease, larger than, increase, smaller than
1. Pixels with values _____ `max` will appear saturated.
2. Decreasing `max` while keeping `min` constant will _____ the contrast.
3. Decreasing both `max` and `min` will _____ the overall brightness.
4. Pixels with values _____ the `min` will appear black, when using a grayscale LUT.
&nbsp;
&nbsp;
&nbsp;
## Image math and pixel data types
<img src='https://g.gravizo.com/svg?
digraph G {
shift [fontcolor=white,color=white];
"data type" -> "pixel values" [label=" restricts"];
"image math" -> "pixel values" [label=" changes"];
"N-bit unsigned integer" -> "0, 1, ..., 2^N-1";
"data type" -> float -> "..., -1031.0, ..., 10.5, ...";
"data type" -> "...";
"data type" -> "N-bit unsigned integer";
}
'/>
### Activity: Pixel based background subtraction
* Open image: xy_8bit__nuclei_noisy_different_intensity.tif
* Appreciate the significant background intensity
* Measure pixel values at `[ 28, 35 ]` and `[ 28, 39 ]`
* Measure the image background intensity in this region:
* upper left corner at `[ 20, 35 ]`
* width = 10
* height = 10
* Subtract the measured background intensity from each pixel.
* Measure the pixel values again.
* Observe that the results are incorrect.
Repeat the above activity, but:
* After opening the image, convert its data type to floating point.
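The pitfall in this activity can be reproduced with a few lines of NumPy; a minimal sketch (the pixel and background values below are made-up examples, not the ones you will measure):
```python
import numpy as np

pixels_8bit = np.array([50, 130], dtype=np.uint8)

wrong = pixels_8bit - np.uint8(100)    # unsigned integers cannot go below 0:
print(wrong)                           # NumPy wraps around -> [206  30]
                                       # (other software may clip to 0 instead)

correct = pixels_8bit.astype(np.float32) - 100.0   # convert to float first
print(correct)                         # [-50.  30.]
```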
### Activity: Explore the limitations of `float` data type
* Create an empty image
* Set all pixel values to 1000000000.0
* Add 1.0 to all pixel values
* Be shocked...
...it turns out that from 16777216 (2^24) onwards, a 32-bit float can no longer represent every integer exactly.
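This can be checked directly with 32-bit floats; a minimal NumPy sketch:
```python
import numpy as np

x = np.float32(16777216.0)                 # 2**24
print(x + np.float32(1.0))                 # 16777216.0 - the +1 is lost
print(np.float32(1e9) + np.float32(1.0))   # 1e+09 - also unchanged
```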
### Formative Assessment
True or false?
* Subtracting 100 from 50 in an 8-bit image will result in -50.
* Adding 1 to 255 in an 8-bit image will result in 256.
* Subtracting 10.1 from 10.0 in a float image will result in -0.1.
* Adding 1.0 to 255.0 in a float image will result in 256.0
* Adding 1000.0 to 1000000000.0 in a float image will result in 1000001000.0
### Learn more
* [Limitations of float](https://randomascii.wordpress.com/2012/02/13/dont-store-that-in-a-float/)
&nbsp;
&nbsp;
&nbsp;
## Pixel data type conversions
<img src='https://g.gravizo.com/svg?
digraph G {
shift [fontcolor=white,color=white];
"data type conversion" -> "values" [label=" can change"];
"data type conversion" -> "value range" [label=" changes"];
}
'/>
### Activity: 16-bit to 8-bit conversion
* Open image: xy_16bit__two_values.tif
* Convert to 8-bit
* Understand the mathematics underlying the conversion from 16-bit to 8-bit.
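A minimal sketch of the typical conversion (ImageJ, for example, scales the current display range linearly onto 0..255 when scaling is enabled; `display_min` and `display_max` are that range):
```python
import numpy as np

def convert_16bit_to_8bit(image_16bit, display_min, display_max):
    # Scale [display_min, display_max] linearly onto [0, 255], clip and round
    scaled = (image_16bit.astype(float) - display_min) / (display_max - display_min)
    scaled = np.clip(scaled, 0.0, 1.0)
    return np.round(scaled * 255).astype(np.uint8)
```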
### Activity: 16-bit to float conversion
* Open image: xy_16bit__two_values.tif
* Convert to float
### Formative Assessment
True or false? Discuss with your neighbor!
1. Changing pixel data type never changes pixel values.
2. Converting from 16-bit unsigned integer to float never changes the pixel values.
3. Changing from float to 16-bit unsigned integer never changes the pixel values.
4. There is only one correct way to convert from 16-bit to 8-bit.
&nbsp;
&nbsp;
&nbsp;
## Thresholding
In order to find objects in an image, the first step is often to determine whether a pixel is part of an object (foreground) or of the image background. In fluorescence microscopy this can often be achieved by thresholding.
<img src='https://g.gravizo.com/svg?
digraph G {
shift [fontcolor=white,color=white];
"intensity image" -> threshold;
threshold -> "binary image";
"binary image" -> "mask" [label=" aka"];
"binary image" -> "background value";
"binary image" -> "foreground value";
"background value" -> "0";
"foreground value" -> "1";
"foreground value" -> "255";
"pixel value" -> ">= threshold" -> foreground;
"pixel value" -> "< threshold" -> background;
}
'/>
### Activity: Threshold an image
* Open image: xy_8bit__two_cells.tif
* Convert the image to a binary image by means of thresholding.
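A minimal sketch of thresholding in Python (the threshold value below is only an example; choose one that fits the image):
```python
import numpy as np
from skimage.io import imread

image = imread("xy_8bit__two_cells.tif")

threshold = 49                       # an example value; adjust for your image
binary = image >= threshold          # boolean mask: True = foreground

binary_255 = binary.astype(np.uint8) * 255   # 0/255 representation, as used by e.g. ImageJ
```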
### Formative assessment
True or false? Discuss with your neighbour!
* For each image there is only one correct threshold value.
* The result of thresholding is a binary image.
* A binary image can have three values: `-1, 0, +1`
* Values below the threshold are always set to `1`.
&nbsp;
&nbsp;
&nbsp;
## Connected components analysis
<img src='https://g.gravizo.com/svg?
digraph G {
shift [fontcolor=white,color=white];
"intensity image" -> "connected component analysis" -> "label image";
connectivity -> "connected component analysis";
}
'/>
### Activity: 2D connected components analysis
* Open image: xy_8bit_binary__nuclei.tif
* Perform connected components analysis
* Explore multi-color LUTs for object labelling
* Explore removing and joining labels
### Activity: 3D connected components analysis
Repeat the above activity, but use a 3D image:
* Open image: xyz_8bit_binary__spots.tif
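Both activities can be scripted with scikit-image; a minimal sketch for the 2D case, illustrating the role of connectivity (the same call also works on a 3D binary image):
```python
from skimage.io import imread
from skimage.measure import label

binary = imread("xy_8bit_binary__nuclei.tif") > 0

# connectivity=1: 4-connected in 2D (6-connected in 3D)
# connectivity=2: 8-connected in 2D (includes diagonal neighbours)
labels_4 = label(binary, connectivity=1)
labels_8 = label(binary, connectivity=2)

print(labels_4.max(), labels_8.max())   # number of objects found with each connectivity
```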
### Formative assessment
Fill in the blanks, using these words: less, more, 6, 255, 4, more.
1. In 3D, pixels have _____ neighbors than in 2D.
2. 8-connected connectivity results in _____ objects than 4-connected connectivity.
3. In 3D, pixels have ____ non-diagonal neighbors.
4. In 2D, pixels have ____ non-diagonal neighbors.
5. An 8-bit label image can have at most _____ objects.
6. The maximum value in a label image is equal to or _____ than the number of objects.
&nbsp;
&nbsp;
&nbsp;
## Shape measurements
<img src='https://g.gravizo.com/svg?
digraph G {
shift [fontcolor=white,color=white];
"label image" -> shape_analysis -> table;
table -> object_rows;
table -> feature_columns;
table -> visualisation;
}
'/>
### Activity: Measure object shape parameters
* Open image: xy_8bit_labels__four_objects.tif
* Perform shape measurements and discuss their meanings.
* Explore results visualisation
* Color objects by their measurement values.
* Add a calibration to the image and check which shape measurements are affected.
* Draw a test image to understand the shape measurements even better.
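A minimal sketch of scripted shape measurements with scikit-image; note that `regionprops` reports pixel units, so any calibration would have to be applied to the results afterwards, and the circularity formula below is one common definition:
```python
import math
from skimage.io import imread
from skimage.measure import regionprops

label_image = imread("xy_8bit_labels__four_objects.tif")

for region in regionprops(label_image):
    # circularity = 4*pi*area / perimeter^2  (1.0 for a perfect circle)
    circularity = 4 * math.pi * region.area / region.perimeter ** 2
    print(region.label, region.area, region.perimeter, circularity)
```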
### Activity: Explore sampling limits
* Draw a square (=circle) of 2x2 pixels (paper, whiteboard, ...)
* Measure area, perimeter and circularity
* Discuss the results
* Discuss the coastline paradox (how long is the coastline of Britain?)
### Formative assessment
True or false? Discuss with your neighbour!
* Circularity is independent of image calibration.
* Area is independent of image calibration.
* Perimeter can strongly depend on spatial sampling.
* Volume can strongly depend on spatial sampling.
* Drawing test images to check how certain shape parameters behave is a good idea.
### Learn more
* Surface and perimeter measurements in particular are affected by sampling and resolution; see for example:
* https://en.wikipedia.org/wiki/Coastline_paradox
* Results visualisation:
* https://imagej.net/MorphoLibJ#Grayscale_morphological_filters: **Label visualization in 3D viewer**
&nbsp;
&nbsp;
&nbsp;
## Object shape measurement workflow
<img src='https://g.gravizo.com/svg?
digraph G {
shift [fontcolor=white,color=white];
"intensity image" -> "binary image" [label=" threshold"];
"binary image" -> "label image" [label=" connected components"];
"label image" -> table [label=" measure_shape"];
}
'/>
### Activity: Segment objects and measure shapes
* Open image: xy_8bit__two_cells.tif
* Segment the cells and measure their shapes.
* Devise code to automate the workflow (see the sketch below).
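A minimal sketch of such an automated workflow in Python (Otsu's method is used here only as an example of an automated threshold):
```python
import math
from skimage.io import imread
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

# Workflow: threshold -> connected components -> shape measurements
image = imread("xy_8bit__two_cells.tif")

threshold = threshold_otsu(image)      # automatic threshold (one of many options)
binary = image >= threshold
label_image = label(binary, connectivity=2)

for region in regionprops(label_image):
    circularity = 4 * math.pi * region.area / region.perimeter ** 2
    print(region.label, region.area, region.perimeter, circularity)
```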
### Formative assessment
Fill in the blanks below, using these words: equal_to, larger_than, smaller_than, binary, connected_component_analysis, thresholding
1. A label image is the result of _____ .
2. The number of pixels in a binary image is typically _____ the number of connected components.
3. The number of distinct values in a label image (minus one, for the background) is _____ the number of objects.
4. Converting an intensity image to a _____ image can be achieved by _____ .
5. The number of connected components can be _____ the maximal label.
&nbsp;
&nbsp;
&nbsp;
## Intensity measurements
### Activity: Measure intensities in image regions
* Open image: xy_float__h2b_bg_corr.tif
* Measure for both nuclei:
* Maximum intensity
* Average intensity
* Median intensity
* Sum intensity
* Discuss the interpretation!
* Discuss where to measure!
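A minimal sketch of scripted intensity measurements (the threshold used for the segmentation below is an arbitrary example; in practice reuse the segmentation you obtained earlier):
```python
import numpy as np
from skimage.io import imread
from skimage.measure import label

intensity = imread("xy_float__h2b_bg_corr.tif")

# Assumption: in this background-corrected image the nuclei can be roughly
# segmented by a simple threshold (the value 10 is only an example).
label_image = label(intensity > 10)

for object_label in range(1, label_image.max() + 1):
    pixels = intensity[label_image == object_label]
    print(object_label,
          pixels.max(),        # maximum intensity
          pixels.mean(),       # average (mean) intensity
          np.median(pixels),   # median intensity
          pixels.sum())        # sum (integrated) intensity
```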
### Activity: Intensity measurements without pixel based background correction
#### Motivation
There are several good reasons not to subtract the background from each pixel in an image:
* It is a bit tricky to do it right, because one has to convert to float to accommodate fractional and negative values.
* For really big image data (TBs), one would need (at least) as much storage again for the background-corrected version of the image.
#### Workflow
* Open image: xy_calibrated_8bit__two_nuclei_high_background.tif
* Measure for both nuclei and a background region:
* Maximum intensity
* Average intensity
* Median intensity
* Sum intensity
* Discuss how to correct the intensities for the background
* Appreciate that you also need the region areas for this task
* Measure the region areas
* Watch out: the image is calibrated!
* Use the area for the correction.
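The arithmetic of this region-based background correction is simple; a sketch with made-up placeholder numbers (beware that calibrated areas and pixel counts are not the same thing):
```python
# Sketch of the arithmetic only; the measured numbers below are placeholders.
mean_background = 11.0          # mean intensity measured in a background region
nucleus_sum = 150000.0          # sum intensity measured in a nucleus region
nucleus_area_pixels = 900       # region size in pixels (not in calibrated units!)

corrected_sum = nucleus_sum - mean_background * nucleus_area_pixels
corrected_mean = corrected_sum / nucleus_area_pixels
print(corrected_sum, corrected_mean)
```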
### Formative assessment: Intensity measurements
Fill in the blanks, using these words: integrated, mean, number_of_pixels, decrease, increase, sum
1. Average intensity is just another word for _____ intensity.
2. The _____ intensity is equal to the mean intensity times the _____ in the measured region.
3. In an 8-bit image, increasing the size of the measurement region can only _____ the sum intensity.
4. In a float image, increasing the size of the measurement region can _____ the sum intensity.
&nbsp;
&nbsp;
&nbsp;
## Convolution filters
<img src='https://g.gravizo.com/svg?
digraph G {
shift [fontcolor=white,color=white];
"intensity image" -> "convolution" -> "filtered image";
"small image" -> size;
"small image" -> "pixel values";
"kernel" -> "small image" [label=" is"];
"kernel" -> "convolution";
}
'/>
### Activity: Explore convolution filters
* Open image: xy_8bit__nuclei_noisy_different_intensity.tif
* Explore the results of different convolution filters, e.g.
* https://en.wikipedia.org/wiki/Kernel_(image_processing)
* Mean filter
* Gaussian blur
* Edge detection
* Appreciate that the results are (slightly) wrong when they are rounded and clipped back into the 8-bit range of the input image.
### Activity: Use mean filter to facilitate image segmentation
* Open image: xy_8bit__nuclei_noisy_different_intensity.tif
* Appreciate that you cannot readily threshold the image
* Apply a mean filter
* Threshold the filtered image
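A minimal sketch of this activity in Python, spelling out that the kernel of a mean filter is just a small image of equal weights (the kernel size and threshold are example values):
```python
import numpy as np
from skimage.io import imread
from scipy.ndimage import convolve

image = imread("xy_8bit__nuclei_noisy_different_intensity.tif").astype(float)

# A convolution kernel is just a small image; here a 5x5 mean filter
kernel = np.ones((5, 5)) / 25.0
smoothed = convolve(image, kernel)

threshold = 25                      # an example value; adjust for your image
binary = smoothed >= threshold
```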
### Formative assessment
* Draw the kernel of a 3x3 mean filter.
* Draw three different kernels that enhance edges.
### Learn more
* https://en.wikipedia.org/wiki/Kernel_(image_processing)
&nbsp;
&nbsp;
&nbsp;
## Typical image analysis workflow
![image](/uploads/b4bdce17515908f40d858b35d5e9256e/image.png)
&nbsp;
&nbsp;
&nbsp;
# Course preamble
<img src='https://g.gravizo.com/svg?
digraph G {
shift [fontcolor=white,color=white];
"learn" -> "concepts";
"concepts" -> "software independent" [label=" are"];
}
'/>
The focus of this course is **not** to learn a specific image analysis software.
In fact, one could probably teach most concepts without a computer.
# Distance transform
<img src='https://g.gravizo.com/svg?
digraph G {
shift [fontcolor=white,color=white];
"binary image" -> "distance transform" -> "distance map";
"distance map" -> "values are distances";
}
'/>
## Activity: Explore distance transform
- Open image: xy_8bit_binary__two_objects.tif
- Learn:
- It matters what is foreground and what is background.
- The image data type limits the possible distance values.
- There is a difference between calibrated vs. pixel-based distance transforms.
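A minimal sketch with SciPy, illustrating the first and last points (which side counts as foreground, and a calibrated variant; the pixel size values are made-up examples):
```python
from skimage.io import imread
from scipy.ndimage import distance_transform_edt

binary = imread("xy_8bit_binary__two_objects.tif")

# distance_transform_edt computes, for every non-zero pixel, the distance to the
# nearest zero pixel - so it matters which value is foreground and which is background.
distance_to_background = distance_transform_edt(binary > 0)
distance_to_objects = distance_transform_edt(binary == 0)

# A calibrated distance map: `sampling` is the pixel size per axis (example values)
distance_calibrated = distance_transform_edt(binary == 0, sampling=(0.1, 0.1))
```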
## Activity: Use distance map for automated distance measurements
- Open reference object image: xy_8bit_binary__single_object.tif
- Compute distance map
- Open label image: xy_8bit_labels__two_spots.tif
- Measure "intensity" of label image objects in distance map
- intensity is distance
## Activity: Use distance map for automated region selection
- Open reference object image: xy_8bit_binary__single_object.tif
- Compute distance map
- Threshold distance map to select regions
### Formative Assessment
TODO
### Learn more
TODO
# Image analysis automation
## Automated object filtering
<img src='https://g.gravizo.com/svg?
digraph G {
shift [fontcolor=white,color=white];
label_image -> table [label=" measure_shape"];
table -> filtered_objects;
filtered_objects -> label_image [label=" remove"];
}
'/>
### Activity: Automatically remove objects from label image
- Open image: `xy_8bit_labels__four_objects.tif`
- Devise code to automatically remove objects from the label image (see the sketch below), e.g.
- Remove all cells larger than N pixels in area
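A minimal sketch of such an object filter with scikit-image (`max_area` stands for the "N pixels" above and is an arbitrary example value):
```python
from skimage.io import imread
from skimage.measure import regionprops

label_image = imread("xy_8bit_labels__four_objects.tif")

max_area = 500    # "N": an example value; choose whatever cutoff makes sense

filtered = label_image.copy()
for region in regionprops(label_image):
    if region.area > max_area:
        filtered[label_image == region.label] = 0   # set the object to background
```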
### Formative assessment
# Image data integrity
#### Prerequisites
- A computer with image analysis software (e.g. [Fiji](https://fiji.sc)) already installed.
- Basic knowledge of how to use the above software, e.g.
- open and save images
- change image display settings
- subtract a value from every pixel in an image
- Please download the training [material](https://git.embl.de/grp-bio-it/image-analysis-training-resources/-/archive/master/image-analysis-training-resources-master.zip)
- Please make sure you can access this [document](https://git.embl.de/grp-bio-it/image-analysis-training-resources/blob/master/workshops/image-ethics-and-data-integrity.md#image-ethics-and-data-integrity).
#### Duration
1.5 hours
#### Learn more about image data integrity
- http://www.imagedataintegrity.com/about.html
- http://jcb.rupress.org/content/166/1/11.full
- Douglas W. Cromey
- Digital Images Are Data: And Should be Treated as Such
- ...and follow up publications...
## Image data integrity
<img src='https://g.gravizo.com/svg?
digraph G {
shift [fontcolor=white,color=white];
image_data_integrity -> image_content [label=" preserving"];
image_content -> pixel_values;
image_content -> pixel_coordinates;
pixel_coordinates -> array_indices;
pixel_coordinates -> physical_coordinates;
}
'/>
## Image data saving
<img src='https://g.gravizo.com/svg?
digraph G {
shift [fontcolor=white,color=white];
saving_images -> image_content [label=" can change"];
}
'/>
### Motivation
Sometimes it can be necessary to save your images in a different format.
It takes some training to know how to do this properly.
What could be good reasons to resave your data in a different format (multiple answers)?
1. I want to share my scientific findings on Twitter, thus I need to convert an image to a Twitter-compatible format.
2. I want to import images in PowerPoint, only some formats will work.
3. I need to save disk space, thus I need to find a format that makes the images smaller.
4. I want to use a special software that only accepts certain image data formats.
5. The journal I want to publish in only accepts certain image formats.
6. I want to have everything in TIFF format, because this is the standard.
7. My boss says that (s)he cannot open .lif (Leica) or .czi (Zeiss) images, thus I should save them in a different format.
### Activity: Save an image
- Open image: `xy_calibrated_16bit__cells_eres_noisy.tif`
- Note down the value and coordinate of the pixel at [218, 332]
- Save the image in **jpg** format
- Reopen the image
- Compare the value and coordinate of the pixel at [218, 332] to your notes: did they change?
Repeat the above workflow, but
- adjust the image display before saving
- save as **png**
- open `xy_float__nuclei_probability.tif` and save as **png**
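For illustration, a minimal Python sketch of why saving as JPEG is problematic: the data must be reduced to 8-bit, the compression is lossy, and metadata such as the calibration is not stored (the conversion shown is only one possible choice):
```python
import numpy as np
from skimage.io import imread, imsave

image = imread("xy_calibrated_16bit__cells_eres_noisy.tif")

# JPEG cannot store 16-bit data, so a conversion to 8-bit is unavoidable,
# and the compression itself is lossy on top of that.
image_8bit = (image / image.max() * 255).astype(np.uint8)   # one possible conversion
imsave("resaved.jpg", image_8bit)

reopened = imread("resaved.jpg")
print(np.array_equal(image_8bit, reopened))    # typically False: pixel values changed
```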
### Formative assessment
What can I do to preserve image integrity during image saving (multiple answers)?
1. I always save in TIFF format; this is safe.
2. I always check pixel values and coordinates before and after saving.
3. I ask my colleagues in the lab and do what they recommend.
4. I keep a copy of the raw data.
## Image display adjustment
### Motivation
Images are a collection of numbers. To visualise those numbers, one needs to decide how to map them onto a color and a brightness. There is no default way of doing this. Thus, one has to be educated and thoughtful about this topic. In fact, it is one of the great responsibilities of a microscopist to adjust the image display settings properly.
### Image display concept map
<img src='https://g.gravizo.com/svg?
digraph G {
shift [fontcolor=white,color=white];
image_content -> numbers [label=" contains"];
numbers -> image_display [label=" lookup table (LUT)"];
}
'/>
<img src='https://g.gravizo.com/svg?
digraph G {
shift [fontcolor=white,color=white];
lookup_table_settings -> scientific_message [label=" affects"];
lookup_table_settings -> no_default;
}
'/>
### Activity: Quantitative image display
- Open image: `xy_calibrated_16bit__nuclear_protein_control.tif`
- This image shows a nuclear protein in control cells.
- Open image: `xy_calibrated_16bit__nuclear_protein_treated.tif`
- The cells in this image have been subjected to a drug.
- Inspect the images:
- Did the drug affect the amount of the nuclear protein?
- Adjust the lookup-tables (LUTs) of both images to be the same
- Add a LUT calibration to both images
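A minimal matplotlib sketch of the idea: display both images with the same LUT range and add a calibration bar (the common range chosen below is just one option):
```python
import matplotlib.pyplot as plt
from skimage.io import imread

control = imread("xy_calibrated_16bit__nuclear_protein_control.tif")
treated = imread("xy_calibrated_16bit__nuclear_protein_treated.tif")

# Use one common display range (LUT min/max) for both images,
# otherwise their brightness cannot be compared visually.
vmin, vmax = 0, max(control.max(), treated.max())

fig, axes = plt.subplots(1, 2)
axes[0].imshow(control, cmap="gray", vmin=vmin, vmax=vmax)
im = axes[1].imshow(treated, cmap="gray", vmin=vmin, vmax=vmax)
fig.colorbar(im, ax=axes, shrink=0.7)   # the colorbar acts as a LUT calibration
plt.show()
```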
### Formative Assessment
What helps to scientifically convey image intensity information (multiple answers)?
1. Adjust the LUT to the image's full bit-depth.
2. Add a LUT calibration bar.
3. Use the same LUT for images acquired with the same settings.
4. Never change the LUT of images! Always keep as in raw data.
## High dynamic range image display
<img src='https://g.gravizo.com/svg?
digraph G {
shift [fontcolor=white,color=white];
biological_images -> high_dynamic_range [label=" "];
paper_reflectance -> low_dynamic_range [label=" "];
computer_monitors -> low_dynamic_range [label=" "];
}
'/>
### Motivation
The pixel values in images of biological samples can cover a large range.
For example, a GFP-tagged protein could occur in the same cell at different locations either 1 or 10000 times. This means that the dynamic range can be 10^4 or more. Due to limitations of image display and image perception, such large dynamic ranges are difficult to display.
### Activity: High dynamic range image display
- Open image: `xy_16bit__nuclei_high_dynamic_range.tif`
- Try to adjust the grayscale LUT such that everything can be seen...
- Try finding other LUTs that help showing all data
- Add LUT calibration to image
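One option (among several) is a logarithmic LUT; a minimal matplotlib sketch:
```python
import matplotlib.pyplot as plt
from matplotlib.colors import LogNorm
from skimage.io import imread

image = imread("xy_16bit__nuclei_high_dynamic_range.tif")

# A logarithmic mapping compresses the dynamic range so that dim and bright
# structures can be seen at the same time (one option among many).
plt.imshow(image, cmap="gray", norm=LogNorm(vmin=max(1, image.min()), vmax=image.max()))
plt.colorbar(label="pixel value (log scale)")
plt.show()
```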
### Formative Assessment
What can you do to show images with a high dynamic range (multiple answers)?
1. Adjust the LUT such that only the scientifically relevant information can be seen.
2. Adjust the LUT such that only the scientifically relevant information can be seen
* and state that the LUT has been adjusted in the figure legend
* and show the same image with other LUT settings in the supplemental material.
3. Try to find a LUT that shows all data.
4. Never use multi color LUTs, they are confusing.
5. Already at the microscope, change the settings such that only relevant structures are visible, e.g.