Investigator
Shandong University
Single‐detector multiplex imaging flow cytometry for cancer cell classification with deep learning
Abstract: Imaging flow cytometry, which combines the advantages of flow cytometry and microscopy, has emerged as a powerful tool for cell analysis in various biomedical fields such as cancer detection. In this study, we develop multiplex imaging flow cytometry (mIFC) by employing a spatial wavelength division multiplexing technique. Our mIFC can simultaneously obtain brightfield and multi‐color fluorescence images of individual cells in flow, which are excited by a metal halide lamp and measured by a single detector. Statistical analysis of multiplex imaging experiments with a resolution test lens, a magnification test lens, and fluorescent microspheres validates the operation of the mIFC, with good imaging-channel consistency and micron‐scale differentiation capability. A deep learning method is designed for multiplex image processing that consists of three deep learning networks (U‐Net, very deep super-resolution, and visual geometry group 19). It is demonstrated that the cluster of differentiation 24 (CD24) imaging channel is more sensitive than the brightfield, nucleus, or cancer antigen 125 (CA125) imaging channel in classifying the three types of ovarian cell lines (IOSE80 normal cells, and A2780 and OVCAR3 cancer cells). An average accuracy of 97.1% is achieved for the classification of these three cell types by deep learning analysis when all four imaging channels are considered. Our single‐detector mIFC is promising for the development of future imaging flow cytometers and for automatic single‐cell analysis with deep learning in various biomedical fields.
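The core idea of spatial wavelength-division multiplexing is that the four imaging channels land on disjoint regions of one detector frame and are separated afterward in software. The sketch below is a minimal, hypothetical illustration of that demultiplexing step, assuming the channels occupy equal horizontal bands; the channel names follow the abstract, but the geometry and sizes are illustrative, not the paper's actual optical layout.

```python
import numpy as np

# Illustrative channel order for the four imaging channels named in the
# abstract; the band geometry below is an assumption for this sketch.
CHANNELS = ["brightfield", "nucleus", "CD24", "CA125"]

def demultiplex(frame: np.ndarray, n_channels: int = 4) -> dict:
    """Split one single-detector frame into per-channel sub-images,
    assuming each channel occupies an equal horizontal band."""
    height = frame.shape[0] // n_channels
    return {
        name: frame[i * height:(i + 1) * height, :]
        for i, name in enumerate(CHANNELS[:n_channels])
    }

# Usage: a 256x64 composite frame yields four 64x64 channel sub-images.
frame = np.random.rand(256, 64)
subs = demultiplex(frame)
```

Each sub-image can then be passed to the downstream per-channel processing (denoising, super-resolution, classification) independently.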
Light scattering imaging modal expansion cytometry for label-free single-cell analysis with deep learning
Single-cell imaging plays a key role in various fields, including drug development, disease diagnosis, and personalized medicine. To obtain multi-modal information from a single-cell image, especially for label-free cells, this study develops modal expansion cytometry for label-free single-cell analysis. The study utilizes a deep learning-based architecture to expand single-mode light scattering images into multi-modality images, including bright-field (non-fluorescent) and fluorescence images, for label-free single-cell analysis. By combining adversarial loss, L1 distance loss, and VGG perceptual loss, a new network optimization method is proposed. The effectiveness of this method is verified by experiments on simulated images, standard spheres of different sizes, and multiple cell types (such as cervical cancer and leukemia cells). Additionally, the capability of this method in single-cell analysis is assessed through multi-modal cell classification experiments, such as cervical cancer subtyping, using both cervical cancer cells and leukemia cells. The expanded bright-field and fluorescence images derived from the light scattering images align closely with those obtained through conventional microscopy, showing a contour ratio near 1 for both the whole cell and its nucleus. Using machine learning, the subtyping of cervical cancer cells achieved 92.85% accuracy with the modal expansion images, an improvement of nearly 20% over single-mode light scattering images. This study demonstrates that light scattering imaging modal expansion cytometry with deep learning can expand a single-mode light scattering image into artificial multimodal images of label-free single cells, which not only provides visualization of the cells but also aids cell classification, showing great potential in single-cell analysis tasks such as cancer cell diagnosis.
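The optimization objective described above combines three terms: an adversarial loss from a discriminator, an L1 pixel distance, and a VGG perceptual loss computed on feature maps. The sketch below is a minimal NumPy illustration of that weighted combination, assuming a non-saturating adversarial term; the `feat` callable stands in for a pretrained VGG layer, and the weights `lambda_l1` and `lambda_vgg` are illustrative placeholders, not the paper's values.

```python
import numpy as np

def l1_loss(pred, target):
    # Pixel-wise L1 distance between generated and reference images.
    return float(np.mean(np.abs(pred - target)))

def perceptual_loss(pred, target, feat):
    # `feat` maps an image to a feature map (a VGG layer in practice;
    # here it is a stand-in supplied by the caller).
    return float(np.mean((feat(pred) - feat(target)) ** 2))

def generator_loss(pred, target, d_score_fake, feat,
                   lambda_l1=100.0, lambda_vgg=10.0):
    # Non-saturating adversarial term: push the discriminator's score
    # on generated images toward 1.
    adv = -float(np.log(d_score_fake + 1e-8))
    return adv + lambda_l1 * l1_loss(pred, target) \
               + lambda_vgg * perceptual_loss(pred, target, feat)

# Usage with a toy "feature extractor" (identity stands in for VGG).
pred = np.zeros((8, 8))
target = np.ones((8, 8))
loss = generator_loss(pred, target, d_score_fake=0.5, feat=lambda x: x)
```

In practice the L1 term keeps the expanded image pixel-accurate, while the perceptual term preserves higher-level structure such as cell and nucleus contours.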
Differentiating single cervical cells by mitochondrial fluorescence imaging and deep learning‐based label‐free light scattering with multi‐modal static cytometry
Abstract: Cervical cancer is a high‐risk disease that threatens women's health globally. In this study, we developed multi‐modal static cytometry that uses different features to classify typical human cervical epithelial cells (H8) and cervical cancer cells (HeLa). With light‐sheet static cytometry, we obtain brightfield (BF) images, fluorescence (FL) images, and two‐dimensional (2D) light scattering (LS) patterns of single cervical cells. Three feature extraction methods are used to extract multi‐modal features based on the different data characteristics. Analysis and classification of morphological and textural features demonstrate the potential of intracellular mitochondria in cervical cancer cell classification. A deep learning method is used to automatically extract deep features from the label‐free LS patterns, and an accuracy of 76.16% is obtained for the classification of the above two kinds of cervical cells, higher than that of the other two single modes (BF and FL). Our multi‐modal static cytometry uses a variety of feature extraction and analysis methods to establish mitochondria as promising internal biomarkers for cervical cancer diagnosis, and to show the promise of label‐free, automatic classification of early cervical cancer with deep learning‐based 2D light scattering.
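The morphological and textural analysis above reduces each single-cell image to a small feature vector before classification. The sketch below illustrates that kind of hand-crafted feature extraction with a generic stand-in feature set (area, mean and standard deviation of intensity, and an intensity-histogram entropy as a simple texture measure); these are not the paper's exact descriptors, and the segmentation mask is assumed to be given.

```python
import numpy as np

def cell_features(image: np.ndarray, mask: np.ndarray) -> dict:
    """Extract basic morphological and textural features from a
    single-cell image, given a boolean segmentation mask."""
    pixels = image[mask]                      # intensities inside the cell
    hist, _ = np.histogram(pixels, bins=16, range=(0.0, 1.0))
    p = hist / max(hist.sum(), 1)
    # Shannon entropy of the intensity histogram: a simple texture cue.
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    return {
        "area": int(mask.sum()),              # morphological: cell area
        "mean_intensity": float(pixels.mean()),
        "std_intensity": float(pixels.std()),
        "entropy": float(entropy),
    }

# Usage on a toy 32x32 image with a 16x16 square mask.
rng = np.random.default_rng(0)
img = rng.random((32, 32))
mask = np.zeros((32, 32), dtype=bool)
mask[8:24, 8:24] = True
feats = cell_features(img, mask)
```

Feature vectors of this kind can then be fed to a conventional classifier, while the deep learning branch extracts its own features directly from the 2D light scattering patterns.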