
Segmentation

Segmentation is the foundation of the entire pipeline. This step extracts your objects of interest from your images. Each building block performs a single segmentation task: you choose which images and which channels to segment, which method to use, and whether to apply preprocessing to the images. Segmentation can be done either with classical image analysis methods (thresholding, edge-based segmentation, etc.) or with methods trained on your specific images (conv_paint, deep learning). The pipeline supports all of these, but note that learning-based methods require ground truth annotations for training. We provide a few models that work well for our use case, but your mileage may vary.

If you need to annotate images, we warmly recommend the microsam napari plugin.

This repository also includes scripts to retrain all of these methods with ease (no need to write code or handle boilerplate!).

Segmentation methods

  • deep_learning: uses deep convolutional neural networks; requires training

  • conv_paint (to be implemented soon): a method similar to ilastik, where you sparsely annotate your images with scribbles; requires training

  • threshold: Otsu thresholding; doesn’t require training

  • double_threshold: modified Otsu thresholding, good at segmenting bright objects on a non-uniformly dark background; doesn’t require training

  • edge_based: uses edge detection to derive an optimal threshold. Gives better results than simple thresholding but is more computationally intensive; doesn’t require training
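To illustrate the training-free methods, here is a minimal NumPy sketch of Otsu thresholding and of the edge-based idea. This only conveys the general principle; the pipeline’s actual implementations may differ in the details:

```python
import numpy as np

def otsu_threshold(image, nbins=256):
    """Otsu's method: pick the threshold that maximizes the
    between-class variance of the intensity histogram."""
    hist, bin_edges = np.histogram(image, bins=nbins)
    centers = (bin_edges[:-1] + bin_edges[1:]) / 2
    weight1 = np.cumsum(hist)               # pixel count below each bin
    weight2 = np.cumsum(hist[::-1])[::-1]   # pixel count above each bin
    mean1 = np.cumsum(hist * centers) / np.maximum(weight1, 1)
    mean2 = (np.cumsum((hist * centers)[::-1]) / np.maximum(weight2[::-1], 1))[::-1]
    between_var = weight1[:-1] * weight2[1:] * (mean1[:-1] - mean2[1:]) ** 2
    return centers[:-1][np.argmax(between_var)]

def edge_based_threshold(image):
    """Edge-based idea: derive the threshold from pixel intensities
    along strong edges instead of from the full histogram."""
    gy, gx = np.gradient(image.astype(float))
    magnitude = np.hypot(gx, gy)
    strong = magnitude > magnitude.mean()   # keep only the strongest edges
    return image[strong].mean()             # typical intensity at object borders

# Synthetic example: a bright square on a dark background
img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0
mask = img > otsu_threshold(img)
```

Both functions return a scalar threshold; comparing the image against it yields the binary mask.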

Options

  • segmentation_column: name of the column containing the image file paths you want the segmentation to run on (e.g. "raw")

  • segmentation_method: selects the segmentation method (e.g. "deep_learning")

  • segmentation_channels: selects the channel(s) of the images used to perform segmentation, e.g. [0] will only use the first channel, while [0,1] will use the first and second channels at the same time (e.g. for a special deep learning model). The channel numbers must be enclosed in brackets, and remember, Python starts counting at 0

  • segmentation_name_suffix: optional suffix added to the resulting directory name and column, useful if you want to segment multiple things using the same channel (e.g. "worm", to make the output analysis/ch1_seg_worm). Defaults to null


  • model_path: path to the saved deep learning / conv_paint model you want to use

  • batch_size: the number of images processed at once when using deep learning; adjust it based on your GPU’s VRAM and image size

  • predict_on_tiles: if True, cuts the images into tiles before feeding them to the neural network. Useful if your images are too big to fit in your GPU’s VRAM. Defaults to False

  • tiler_config: how to cut up the images if predict_on_tiles is true (e.g. {'tile_size': [1024, 1024], 'tile_step': [256, 256]}). Defaults to null

  • enforce_n_channels: some models require a fixed number of input channels (e.g. architectures expecting RGB images); this option duplicates your image’s channels to satisfy that requirement. Defaults to null

  • activation_layer: which activation function to apply after the model (either "sigmoid" or "softmax"). Defaults to null (many models integrate it into their architecture)


  • gaussian_filter_sigma: used in "edge_based" segmentation; controls the amount of blur applied to the image before edge detection. Defaults to 1
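The predict_on_tiles / tiler_config pair can be pictured as a sliding window over the image. Here is a minimal sketch of that idea (a hypothetical helper; the pipeline’s real tiler may handle image borders and the stitching of predictions differently):

```python
import numpy as np

def make_tiles(image, tile_size=(1024, 1024), tile_step=(256, 256)):
    """Yield (row, col, tile) windows covering the image, stepping
    by tile_step. Border handling is ignored for brevity."""
    h, w = image.shape[-2:]
    th, tw = tile_size
    sh, sw = tile_step
    for r in range(0, h - th + 1, sh):
        for c in range(0, w - tw + 1, sw):
            yield r, c, image[..., r:r + th, c:c + tw]

# A 2048x2048 image with the defaults yields overlapping 1024x1024 tiles
tiles = list(make_tiles(np.zeros((2048, 2048))))
```

A tile_step smaller than tile_size produces overlapping tiles, which avoids seams when the per-tile predictions are merged back into a full-size mask.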

Example

Here is the configuration you could use if you wanted to segment the 1st and 2nd channels of your images (where channel 1 is the body and channel 2 is the pharynx):

```yaml
building_blocks: ["segmentation", "segmentation"]
segmentation_column: ["raw"]
segmentation_method: ["deep_learning"]
segmentation_channels: [[1], [0]]
model_path: ["pharynx_model.ckpt", "body_model.ckpt"]
batch_size: [32]
```

This would create 2 new subdirectories in analysis/ and 2 new columns in the experiment’s filemap:

  • analysis/ch2_seg

  • analysis/ch1_seg
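If both blocks instead segmented the same channel, the segmentation_name_suffix option described above could disambiguate the outputs. For instance (the "worm_model.ckpt" file name and suffix values here are purely illustrative):

```yaml
building_blocks: ["segmentation", "segmentation"]
segmentation_column: ["raw"]
segmentation_method: ["deep_learning"]
segmentation_channels: [[1], [1]]
segmentation_name_suffix: ["pharynx", "worm"]
model_path: ["pharynx_model.ckpt", "worm_model.ckpt"]
batch_size: [32]
```

Following the naming scheme above, the outputs would land in analysis/ch2_seg_pharynx and analysis/ch2_seg_worm instead of colliding in a single ch2_seg directory.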