
Cell segmentation in images without structural fluorescent labels

Published online by Cambridge University Press:  17 July 2023

Daniel Zyss*
Affiliation:
Centre for Computational Biology (CBIO), Mines Paris, PSL University, Paris, France Institut Curie, PSL University, Paris, France INSERM, U900, Paris, France Cairn Biosciences, Inc., San Francisco, CA, USA
Susana A. Ribeiro
Affiliation:
Cairn Biosciences, Inc., San Francisco, CA, USA
Mary J. C. Ludlam
Affiliation:
Cairn Biosciences, Inc., San Francisco, CA, USA
Thomas Walter
Affiliation:
Centre for Computational Biology (CBIO), Mines Paris, PSL University, Paris, France Institut Curie, PSL University, Paris, France INSERM, U900, Paris, France
Amin Fehri
Affiliation:
Cairn Biosciences, Inc., San Francisco, CA, USA
*
Corresponding author: Daniel Zyss; Email: dzyss@cairnbio.com

Abstract

High-content screening (HCS) provides an excellent tool to understand the mechanism of action of drugs on disease-relevant model systems. Careful selection of fluorescent labels (FLs) is crucial for successful HCS assay development. HCS assays typically comprise (a) FLs containing biological information of interest, and (b) additional structural FLs enabling instance segmentation for downstream analysis. However, the limited number of available fluorescence microscopy imaging channels restricts the degree to which these FLs can be experimentally multiplexed. In this article, we present a segmentation workflow that overcomes the dependency on structural FLs for image segmentation, typically freeing up two fluorescence microscopy channels for biologically relevant FLs. It consists of extracting structural information encoded within readouts that are primarily biological, by fine-tuning pre-trained state-of-the-art generalist cell segmentation models for different combinations of individual FLs, and aggregating the respective segmentation results together. Using annotated datasets that we provide, we confirm that our methodology offers improvements in performance and robustness across several segmentation aggregation strategies and image acquisition methods, over different cell lines and various FLs. It thus enables the biological information content of HCS assays to be maximized without compromising the robustness and accuracy of computational single-cell profiling.

Information

Type
Research Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike licence (http://creativecommons.org/licenses/by-nc-sa/4.0), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the same Creative Commons licence is used to distribute the re-used or adapted article and the original article is properly cited. The written permission of Cambridge University Press must be obtained prior to any commercial use.
Copyright
© The Author(s), 2023. Published by Cambridge University Press

Table 1. Description of cell line components used in the assays—parental cell line, cell line name, number of annotated images, size of pixels, fluorescent reporter protein (FRP), channel number, localization, and structural characterization.


Figure 1. Image samples from the different assays showing individual fluorescence channels as well as a color version with manual segmentation annotations overlaid. The images are cropped for ease of visualization.


Figure 2. Training and inference workflow for the segmentation of cell organelles without the use of structural FL using the channel-wise approach (top) and multi-channel approach (bottom). (I) Training: (a) Training set of multi-modal fluorescent images (three channels represented as red, blue, and green), (b) Training set annotations of the organelle segmentations, (c) Out-of-the-box pre-trained Cellpose model (Vanilla Cellpose), and (d) Finetuned model trained for each of the individual channels (channel-wise) or trained with a subset of the channels (multi-channel). (II) Inference: (a) Multi-modal fluorescent image (three channels), (b) Models selected from the model zoo corresponding to the image’s cell line and FL channel combination, (c) Spatial flows and probability maps output by the finetuned models for each of the channels, (d) Channel-wise averaging of the maps, and (e) Integration into the segmentation labels.
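The channel-wise averaging step (II-d) of the workflow above can be sketched as follows. This is an illustrative numpy sketch, not the authors' implementation: it assumes each per-channel model emits a Cellpose-style XY spatial flow field and a cell probability map, and the function name `average_channel_outputs` is hypothetical.

```python
import numpy as np

def average_channel_outputs(flows, probs):
    """Aggregate per-channel model outputs by channel-wise averaging.

    flows: list of (2, H, W) arrays -- XY spatial flow fields, one per channel.
    probs: list of (H, W) arrays -- cell probability maps, one per channel.

    Returns the averaged flow field and probability map; in the workflow of
    Figure 2, these averaged maps are then integrated (step II-e) into
    instance segmentation labels.
    """
    mean_flow = np.mean(np.stack(flows, axis=0), axis=0)
    mean_prob = np.mean(np.stack(probs, axis=0), axis=0)
    return mean_flow, mean_prob
```

Averaging the intermediate maps, rather than the final label images, lets channels that only weakly encode structure still contribute a consistent signal before the labels are committed.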


Table 2. Augmentations applied to the training set during the finetuning of Cellpose models.


Table 3. Description of the segmentation fusion methods considered to generate aggregated segmentations from channel-wise segmentations.


Table 4. Comparison of the performance of the different channel fusion methods on the test set images, assessed with F1-score as a segmentation metric.
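As a point of reference for the metric used in Table 4, an instance-level F1-score can be computed by matching predicted objects to ground-truth objects at an IoU threshold. This is a minimal illustrative sketch (greedy one-to-one matching at IoU ≥ 0.5); the paper's exact matching procedure may differ, and `instance_f1` is a hypothetical name.

```python
import numpy as np

def instance_f1(gt, pred, iou_thresh=0.5):
    """F1-score for instance segmentation label images.

    gt, pred: integer label images (0 = background). A predicted instance
    is a true positive if it can be greedily matched to an unused
    ground-truth instance with IoU >= iou_thresh.
    """
    gt_ids = [i for i in np.unique(gt) if i != 0]
    pred_ids = [i for i in np.unique(pred) if i != 0]
    matched_gt, tp = set(), 0
    for p in pred_ids:
        pmask = pred == p
        best_iou, best_g = 0.0, None
        for g in gt_ids:
            if g in matched_gt:
                continue
            gmask = gt == g
            inter = np.logical_and(pmask, gmask).sum()
            if inter == 0:
                continue
            iou = inter / np.logical_or(pmask, gmask).sum()
            if iou > best_iou:
                best_iou, best_g = iou, g
        if best_iou >= iou_thresh:
            tp += 1
            matched_gt.add(best_g)
    fp = len(pred_ids) - tp   # predictions with no match
    fn = len(gt_ids) - tp     # ground-truth objects left unmatched
    return 2 * tp / (2 * tp + fp + fn) if (tp + fp + fn) else 1.0
```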


Figure 3. Fivefold cross-validated F1-scores for cytoplasm segmentation on all five cell lines. These tables show the evaluation using Vanilla Cellpose (V), the channel-wise (CW) strategy, and the multi-channel (MC) strategy as columns, on the powerset of channels as rows, aggregated together using the Flow Averaging (FA) method.
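The "powerset of channels" rows in the tables above enumerate every non-empty combination of available FL channels. A short illustrative sketch of that enumeration (channel names and the function name `channel_powerset` are placeholders):

```python
from itertools import combinations

def channel_powerset(channels):
    """All non-empty subsets of fluorescence channels, e.g. to evaluate
    every channel combination as in the cross-validation tables."""
    subsets = []
    for r in range(1, len(channels) + 1):
        subsets.extend(combinations(channels, r))
    return subsets
```

For three channels this yields 2^3 - 1 = 7 combinations, which is why the per-cell-line tables have seven channel-combination rows when three FLs are available.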


Figure 4. Fivefold cross-validated F1-scores for nuclei segmentation on all five cell lines. These tables show the evaluation using Vanilla Cellpose (V), the channel-wise (CW) strategy, and the multi-channel (MC) strategy as columns, on the powerset of channels as rows, aggregated together using the Flow Averaging (FA) method.


Figure 5. Segmentation examples using the proposed method on the test set images for (a–e) CL1 to CL5. We compare ground truth, Vanilla Cellpose results for the best evaluation channel combination, and the channel-wise (CW) and multi-channel (MC) fine-tuning strategies. The respective training channel combination and evaluation channel combination are detailed in the figures. The images are cropped for ease of visualization.


Figure 6. Evaluation of the F1-scores on the CL1 cell line imaged with the $ {A}_2 $ acquisition method (widefield). The tables are organized the same way as in Figures 3 and 4.

Supplementary material

Zyss et al. supplementary material (PDF, 61.7 KB)