
Automation of the Micronucleus Assay Using Imaging Flow Cytometry and Artificial Intelligence

Published: January 27, 2023 doi: 10.3791/64549

Summary

The micronucleus (MN) assay is a well-established test for quantifying DNA damage. However, scoring the assay using conventional techniques such as manual microscopy or feature-based image analysis is laborious and challenging. This paper describes the methodology to develop an artificial intelligence model to score the MN assay using imaging flow cytometry data.

Abstract

The micronucleus (MN) assay is used worldwide by regulatory bodies to evaluate chemicals for genetic toxicity. The assay can be performed in two ways: by scoring MN in once-divided, cytokinesis-blocked binucleated cells or fully divided mononucleated cells. Historically, light microscopy has been the gold standard method to score the assay, but it is laborious and subjective. Flow cytometry has been used in recent years to score the assay, but is limited by the inability to visually confirm key aspects of cellular imagery. Imaging flow cytometry (IFC) combines high-throughput image capture and automated image analysis, and has been successfully applied to rapidly acquire imagery of and score all key events in the MN assay. Recently, it has been demonstrated that artificial intelligence (AI) methods based on convolutional neural networks can be used to score MN assay data acquired by IFC. This paper describes all steps to use AI software to create a deep learning model to score all key events and to apply this model to automatically score additional data. Results from the AI deep learning model compare well to manual microscopy, thereby enabling fully automated scoring of the MN assay by combining IFC and AI.

Introduction

The micronucleus (MN) assay is fundamental in genetic toxicology to evaluate DNA damage in the development of cosmetics, pharmaceuticals, and chemicals for human use1,2,3,4. Micronuclei are formed from whole chromosomes or chromosome fragments that do not incorporate into the nucleus following division and condense into small, circular bodies separate from the nucleus. Thus, MN can be used as an endpoint to quantify DNA damage in genotoxicity testing1.

The preferred method for quantifying MN is within once-divided binucleated cells (BNCs) by blocking division using Cytochalasin-B (Cyt-B). In this version of the assay, cytotoxicity is also assessed by scoring mononucleated (MONO) and polynucleated (POLY) cells. The assay can also be performed by scoring MN in unblocked MONO cells, which is faster and easier to score, with cytotoxicity being assessed using pre- and post-exposure cell counts to assess proliferation5,6.

Physical scoring of the assay has historically been performed through manual microscopy, since this permits visual confirmation of all key events. However, manual microscopy is challenging and subjective1. Thus, automated techniques have been developed, including microscope slide scanning and flow cytometry, each with their own advantages and limitations. While slide-scanning methods allow key events to be visualized, slides must be created at optimal cell density, which can be difficult to achieve. Additionally, this technique often lacks cytoplasmic visualization, which can compromise the scoring of MONO and POLY cells7,8. While flow cytometry offers high-throughput data capture, the cells must be lysed, thus not permitting the use of the Cyt-B form of the assay. Additionally, as a non-imaging technique, conventional flow cytometry does not provide visual validation of key events9,10.

Therefore, imaging flow cytometry (IFC) has been investigated to perform the MN assay. The ImageStreamX Mk II combines the speed and statistical robustness of conventional flow cytometry with the high-resolution imaging capabilities of microscopy in a single system11. It has been shown that by using IFC, high-resolution imagery of all key events can be captured and automatically scored using feature-based12,13 or artificial intelligence (AI) techniques14,15. By performing the MN assay with IFC, far more cells can be scored automatically, and in less time, than by microscopy.

This work deviates from a previously described image analysis workflow16 and discusses all steps required to develop and train a Random Forest (RF) and/or convolutional neural network (CNN) model using the Amnis AI software (henceforth referred to as "AI software"). All necessary steps are described, including populating ground truth data using AI-assisted tagging tools, interpretation of model training results, and application of the model to classify additional data, permitting calculation of genotoxicity and cytotoxicity15.


Protocol

1. Data acquisition using imaging flow cytometry

NOTE: Refer to Rodrigues16 with the following modifications, noting that the acquisition regions using IFC may need to be modified for optimal image capture:

  1. For the non-Cyt-B method, perform a cell count on each culture using a commercially available cell counter following the manufacturer's instructions (see Table of Materials), immediately before exposure and immediately after the recovery period.
  2. If running samples on a single camera imaging flow cytometer, place the Brightfield (BF) in Channel 4. Replace M01 with M04, and M07 with M01.
    NOTE: "M" refers to the camera channel on the IFC.
  3. Use the 40x magnification during acquisition.
  4. On the BF area versus BF aspect ratio plot during acquisition, use the following region coordinates (an illustrative code sketch of this gating logic follows step 1.7):
    X-coordinates: 100 and 900; Y-coordinates: 0.7 and 1 (Cyt-B method)
    X-coordinates: 100 and 600; Y-coordinates: 0.7 and 1 (non-Cyt-B method)
  5. On the Hoechst intensity plot, use the following region coordinates:
    X-coordinates: 55 and 75; Y-coordinates: 9.5 and 15 (Cyt-B method)
    X-coordinates: 55 and 75; Y-coordinates: 13 and 21 (non-Cyt-B method)
  6. To remove images of apoptotic and necrotic objects from the data, launch the IDEAS 6.3 software package (henceforth referred to as the "image analysis software"; see Table of Materials).
    ​NOTE: The AI software has been designed to work with .daf files that have been processed using the latest version of the image analysis software. Ensure that the image analysis software is up to date.
  7. Save this work as a template (.ast) file.
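The acquisition regions in steps 1.4 and 1.5 amount to simple rectangular range filters. Purely as an illustrative sketch (the column names below are hypothetical stand-ins, not the acquisition software's export format), the same logic could be expressed in Python as:

```python
# Illustrative sketch of the rectangular gating regions in steps 1.4-1.5
# (Cyt-B method). Assumes per-event features have been exported to a CSV;
# all column names here are hypothetical, not an actual export format.
import pandas as pd

events = pd.read_csv("events.csv")  # hypothetical per-event feature export

# Step 1.4: BF area versus BF aspect ratio region (Cyt-B method)
bf_gate = (events["bf_area"].between(100, 900)
           & events["bf_aspect_ratio"].between(0.7, 1.0))

# Step 1.5: Hoechst intensity plot region (Cyt-B method)
dna_gate = (events["hoechst_plot_x"].between(55, 75)
            & events["hoechst_plot_y"].between(9.5, 15))

selected = events[bf_gate & dna_gate]
print(f"{len(selected)} of {len(events)} events fall inside both regions")
```

For the non-Cyt-B method, the same filters apply with the coordinates given in steps 1.4-1.5 (BF area 100-600; Hoechst Y 13-21).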

2. Creating .daf files for all .rif files

  1. The AI software only permits importing .daf files. Create .daf files for all .rif files in the experiment through batch processing.
  2. Under the Tools menu, click on Batch Data Files and then click on Add Batch.
  3. In the new window, select Add Files and select the .rif files to be added to the batch. Under the Select a Template or Data Analysis File (.ast, .daf) option, select the .ast file that was created previously.
  4. Assign a batch name if needed and click on OK to create .daf files for all loaded .rif files.
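After batch processing, it can be worth confirming that every .rif file produced a matching .daf before moving to the AI software. This is an optional convenience check, not a feature of the software; a minimal Python sketch (the folder name is hypothetical):

```python
# Optional convenience check: confirm each .rif in an experiment folder has a
# matching .daf after batch processing. Not part of the image analysis software.
from pathlib import Path

folder = Path("experiment_data")  # hypothetical folder of acquired files
missing = [rif.name for rif in folder.glob("*.rif")
           if not rif.with_suffix(".daf").exists()]
print("All .daf files present." if not missing
      else f"Missing .daf files for: {missing}")
```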

3. Creating an experiment in the AI software

  1. Refer to the flow chart in Figure 1 that describes the process of creating a deep learning model using the AI software.
  2. Launch the AI software and ensure the most recent version is installed by clicking on About in the bottom left corner of the window. If the most recent version is not installed, contact support@luminexcorp.com to obtain it.
  3. The default screen in the software is the New Experiment screen. Use the Folder icon to choose where to save the experiment, and type a name for the experiment (e.g., "MN model").
  4. Under Experiment Type, click the radio button beside Train to start a training experiment to begin building the CNN model. Click on Next.
    1. Optional: if a model has been previously trained, it can be selected from the Select Template Model screen as a template for the new model. If no template model exists, skip this step by clicking Next.
  5. The next screen is the Define New Model screen. Under Model, the name that was given to the model in step 3.3 will be automatically populated.
    1. Under Description, type a description for the model (optional) and leave the maximum image size at 150 pixels.
    2. Under Channels, click on Add BF to add a brightfield channel to the list. Under Name, double-click on Brightfield and rename this channel to BF. Click on Add FL to add a fluorescent channel to the list. Under Name, double-click on Fluorescent and rename this channel to DNA.
    3. Under Class Names, click on Add. In the pop-up window, type Mononucleated and click on OK. This adds the mononucleated class to the list of class names. Repeat this process to ensure the following six classes are defined in the list:
      Mononucleated
      Mononucleated with MN
      Binucleated
      Binucleated with MN
      Polynucleated
      Irregular morphology
      ​Click on Next.

      NOTE: These six ground truth model classes will represent the key events to be scored, as well as images with morphology that differs from the accepted scoring criteria5.
      1. Optional: If desired, the analysis template from step 1.7 can be included to use features from the image analysis software. To include these features in the AI model, browse for the .ast file, and then choose the desired feature subsets from the channel-specific dropdowns.
    4. Under Select Files, click on Add Files and browse for the desired files to be added to the AI software to build the ground truth data. Click on Next.
      NOTE: It is important to add multiple data files (e.g., positive and negative control data) that contain a sufficient number of all key events.
  6. Next, on the Select Base Populations screen, locate the Non-Apoptotic population from the population hierarchy. Right-click on the Non-Apoptotic population and select Select All Matching Populations. Click on Next.
    NOTE: It is important to exclude any populations that should not be classified (e.g., beads, debris, doublets).
  7. This screen is the Select Truth Populations screen.
    1. If tagged truth populations of the key events have not been created in the image analysis software, then click on Next.
    2. If tagged truth populations have been created in the image analysis software, assign them to the appropriate model class.
    3. To assign a tagged truth population of MONO cells with MN, click on the Mononucleated with MN class under Model Classes on the left. Then click on the appropriate tagged truth population on the right that contains these events.
    4. If the tagged truth populations have been created in more than one data file, right-click on one of the truth populations and select Select All Matching Populations to add tagged populations from multiple files to the appropriate class.
    5. Once all appropriate truth populations have been assigned, click on Next.
  8. On the Select Channels screen, choose the appropriate channels for the experiment. Here, set BF to channel 1 and Hoechst to channel 7. Right-click on a channel and select Apply to All. Click on Next.
  9. Finally, on the Confirmation screen, click on Create Experiment.
  10. The AI software loads images from the data files and creates the model classes defined in step 3.5.3 with the ground truth imagery that was assigned in step 3.7. Click on Finish.
  11. Once the experiment is created, five options are presented:
    Experiment: provides details of the experiment, including data files loaded, channels chosen, and defined ground truth model classes.
    Tagging: launches the tagging tool through which users can populate ground truth data.
    Training: trains a model based on the ground truth data.
    Classify: uses trained models to classify data.
    Results: provides results from both a training experiment and a classify experiment.

4. Populating the ground truth data using AI-assisted tagging tools

  1. Click on Tagging to launch the tagging tool interface.
    1. Click on the zoom tools (magnifying glass icons) to crop the images for easier viewing.
    2. Click on the slider bar to adjust the image size to change how many images are shown in the gallery.
    3. Click on the Display Setting option, and choose Min-Max, which provides the best contrast image for identifying all key events.
    4. Click on Setup Gallery Display to change the color of the DNA image to yellow or white, which will improve the visualization of small objects (e.g., MN).
  2. Click on Cluster to run the algorithm to group objects with similar morphology together. Once clustering is complete, the individual clusters with the number of objects per cluster are shown in a list under Unknown Populations. Select the individual clusters to view the objects within the cluster and assign these objects to their appropriate model classes.
  3. After a minimum of 25 objects have been assigned to each model class, the Predict algorithm becomes available. Click on Predict.
    NOTE: Objects that do not fit well into any population remain classified as Unknown. As more objects are added to the truth populations, the prediction accuracy improves.
  4. Continue to populate the ground truth model classes with appropriate imagery until a sufficient number of objects in each class is reached.
  5. Once a minimum of 100 objects have been assigned to each model class, click on the Training tab at the top of the screen. Click on the Train button to create a model using the Random Forest and CNN algorithms. 
    NOTE: The AI software creates models using both the Random Forest and CNN algorithms; the checkboxes permit creation of a model using only the Random Forest or only the CNN algorithm.
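The internal architecture of the models trained by the AI software is not exposed to the user. Purely for orientation, the sketch below shows what a small CNN of the same general shape could look like in PyTorch (2 input channels for the BF and DNA images, inputs up to 150 x 150 pixels, 6 output classes); this is an assumption-laden illustration, not the Amnis AI software's actual network:

```python
# Illustrative PyTorch sketch of a small CNN with the same input/output shape
# the protocol describes (2 channels: BF + DNA; 6 model classes). This is NOT
# the Amnis AI software's actual architecture, which is not public.
import torch
import torch.nn as nn

CLASSES = ["Mononucleated", "Mononucleated with MN", "Binucleated",
           "Binucleated with MN", "Polynucleated", "Irregular morphology"]

class MNClassifier(nn.Module):
    def __init__(self, n_classes: int = len(CLASSES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # tolerant of varying input image sizes
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

model = MNClassifier()
dummy = torch.randn(8, 2, 150, 150)  # batch of 8 two-channel cell images
print(model(dummy).shape)            # torch.Size([8, 6]): one score per class
```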

5. Assessing model accuracy

  1. Once model training is complete, click on View Results.
  2. Use the results screen to assess model accuracy. Use the pulldown menu to switch between Random Forest and CNN. 
    NOTE: The truth populations can be updated, and the model can be re-trained or used as-is to classify additional data.
    1. To update the truth populations, click on Tagging at the top and follow section 4.
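The metrics shown on the results screen are standard machine learning quantities and can be reproduced for any set of truth and predicted labels. A short sketch using scikit-learn with toy placeholder labels (not data from this study):

```python
# Sketch of the accuracy metrics reported on the results screen (per-class
# precision, recall, F1, and a confusion matrix), computed with scikit-learn.
# The label lists below are toy placeholders, not data from the study.
from sklearn.metrics import classification_report, confusion_matrix

truth     = ["Mononucleated", "Binucleated", "Binucleated with MN",
             "Mononucleated", "Binucleated", "Mononucleated with MN"]
predicted = ["Mononucleated", "Binucleated", "Binucleated",
             "Mononucleated", "Binucleated", "Mononucleated with MN"]

# Per-class precision, recall, and F1 score (cf. Figure 4B)
print(classification_report(truth, predicted, zero_division=0))

# Rows = truth classes, columns = predicted classes (cf. Figure 4C)
print(confusion_matrix(truth, predicted))
```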

6. Classifying data using the model

  1. Launch the AI software. The default screen is the New Experiment screen. Use the Folder icon to choose where to save the experiment and type a name for the experiment.
  2. Under Experiment Type, click the radio button beside Classify to start a classification experiment. Click on Next.
  3. Click on the model to be used for classification, then click on Next.
  4. On the Select Files screen, click on Add Files and browse for the files to be classified by the CNN model. Click on Next.
  5. Next, on the Select Base Populations screen, click on the checkbox next to the Non-Apoptotic population in one of the loaded files. Right-click on the Non-Apoptotic population and click on Select All Matching Populations to select this population from all loaded files. Click on Next.
  6. Optional: if the data to be classified contains truth populations, they can be assigned to the appropriate model classes on the Select Truth Populations screen. Otherwise, click Next to skip this step.
  7. On the Select Channels screen, choose Channel 1 for brightfield and Channel 7 for the DNA stain. Right-click on a channel and click on Apply to All. Then click on Next.
  8. Finally, on the Confirmation screen, click on Create Experiment. The AI software loads the selected model and all images from the chosen data files. Click on Finish.
  9. Click on Classify to launch the classification screen. Click on the Classify button. This begins the process of using the RF and/or CNN models to classify additional data and identify all objects that belong in the specified model classes.
    NOTE: The checkboxes can be used to select the RF model and/or the CNN model.
  10. Once the classification is complete, click on View Results.
  11. Click the Update DAFs button to bring up the Update DAFs with Classification Results window. Click on OK to update the .daf files.

7. Generating a report of the classification results

  1. On the Results screen, click on Generate Report. Select the checkbox beside Create Report for Each Input DAF if an individual report for each input .daf file is required. Click on OK.
  2. Once completed, open the folder where the report files have been saved. Within the folder, there is an experiment .pdf report and a Resources folder.
  3. Open the .pdf to view the report. The report contains model and experiment information, the list of input .daf files, the class counts and class percentages in tabular and histogram format, and a confusion matrix summarizing the median prediction probability across all input .daf files.
  4. Open the Resources folder and then the CNN folder. Within this folder are .png files of the class count and percentage bar graphs, as well as the confusion matrix. Additionally, there are .csv files containing the class counts and percentages for each input file.

8. Determining MN frequency and cytotoxicity

  1. Calculating MN frequency
    1. Non-Cyt-B method: To determine MN frequency, open the class_count.csv file from step 7.4. For each input file, divide the counts in the "Mononucleated with MN" population by the counts in the "Mononucleated" population and multiply by 100:
      % MN = (count of "Mononucleated with MN" ÷ count of "Mononucleated") × 100
    2. Cyt-B method: To determine MN frequency, open the class_count.csv file from step 7.4. For each input file, divide the counts in the "Binucleated with MN" population by the counts in the "Binucleated" population and multiply by 100:
      % MN = (count of "Binucleated with MN" ÷ count of "Binucleated") × 100
  2. Calculating cytotoxicity
    1. Non-Cyt-B method:
      1. Using the initial cell counts and the post-treatment cell counts, first calculate the population doubling (PD) for each sample2:
        PD = [log(post-treatment cell count ÷ initial cell count)] ÷ log 2
      2. Next, calculate the relative population doubling2:
        RPD = (PD of treated cultures ÷ PD of control cultures) × 100
      3. Finally, calculate the cytotoxicity2 for each sample:
        % Cytotoxicity = 100 − RPD
    2. Cyt-B method:
      1. To calculate Cytokinesis-Block Proliferation Index (CBPI)2, use the counts in the mononucleated, binucleated, and polynucleated classes for each sample from the class_count.csv file:
        CBPI = (No. mononucleated cells + 2 × No. binucleated cells + 3 × No. polynucleated cells) ÷ total number of cells
      2. To calculate cytotoxicity2, use the CBPI from the control cultures (C) and exposed cultures (T):
        % Cytotoxicity = 100 − 100 × [(CBPI_T − 1) ÷ (CBPI_C − 1)]
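The calculations in this section can be scripted directly against the class_count.csv output from step 7.4. A worked sketch in Python (the column names are assumed to match the model class names; the actual CSV layout produced by the AI software may differ):

```python
# Worked example of the section 8 calculations. Column names are assumed to
# match the model class names; the real class_count.csv layout may differ.
import math
import pandas as pd

counts = pd.read_csv("class_count.csv")

# Equations 1-2: % MN in the scored population
counts["%MN (non-Cyt-B)"] = 100 * counts["Mononucleated with MN"] / counts["Mononucleated"]
counts["%MN (Cyt-B)"] = 100 * counts["Binucleated with MN"] / counts["Binucleated"]

# Equations 3-5 (non-Cyt-B): population doubling, relative population
# doubling, and % cytotoxicity from the step 1.1 cell counts
def cytotoxicity_non_cytb(n0_treated, n1_treated, n0_control, n1_control):
    pd_treated = math.log2(n1_treated / n0_treated)   # Equation 3
    pd_control = math.log2(n1_control / n0_control)
    rpd = 100 * pd_treated / pd_control               # Equation 4
    return 100 - rpd                                  # Equation 5

# Equations 6-7 (Cyt-B): CBPI and % cytotoxicity
def cbpi(mono, bn, poly):
    return (mono + 2 * bn + 3 * poly) / (mono + bn + poly)      # Equation 6

def cytotoxicity_cytb(cbpi_treated, cbpi_control):
    return 100 - 100 * (cbpi_treated - 1) / (cbpi_control - 1)  # Equation 7
```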


Representative Results

Figure 1 shows the workflow for using the AI software to create a model for the MN assay. The user loads the desired .daf files into the AI software, then assigns objects to the ground truth model classes using the AI-assisted cluster (Figure 2) and predict (Figure 3) tagging algorithms. Once all ground truth model classes have been populated with sufficient objects, the model can be trained using the RF or CNN algorithms. Following training, the performance of the model can be assessed using tools including class distribution histograms, accuracy statistics, and an interactive confusion matrix (Figure 4). From the results screen in the AI software, the user can either return to the training portion of the workflow to enhance the ground truth data or, if sufficient accuracy has been achieved, the user can use the model to classify additional data.

Using both the cluster and predict algorithms, 190 segments containing a total of 285,000 objects were examined, and objects were assigned to the appropriate ground truth classes until all classes were populated with between 1,500 and 10,000 images. In total, 31,500 objects (only 10.5% of the initial objects loaded) were used in the training of this model. Precision (the fraction of events predicted to belong to a class that truly belong to it), recall (the fraction of true events in a class that are correctly identified), and F1 score (the harmonic mean of precision and recall) are available in the deep learning software package to quantify model accuracy. Here, these statistics ranged from 86.0% to 99.4%, indicating high model accuracy (Figure 4).

Using Cyt-B, background MN frequencies for all control samples were between 0.43% and 1.69%, comparing well to the literature17. Statistically significant increases in MN frequency, ranging from 2.09% to 9.50% for Mitomycin C (MMC) and from 2.99% to 7.98% for Etoposide, were observed when compared to solvent controls and compared well to manual microscopy scoring. When examining the negative control Mannitol, no significant increases in MN frequency were observed. Additionally, increasing cytotoxicity with the dose was observed for both Etoposide and MMC, with both microscopy and AI showing similar trends across the dose range. For Mannitol, no observable increase in cytotoxicity was seen (Figure 5).

When not using Cyt-B, background MN frequencies for all control samples were between 0.38% and 1.0%, consistent with results published in the literature17. Statistically significant increases in MN frequency, ranging from 2.55% to 7.89% for MMC and from 2.37% to 5.13% for Etoposide, were observed when compared to solvent controls and compared well to manual microscopy scoring. When examining the negative control Mannitol, no significant increases in MN frequency were observed. Further, increasing cytotoxicity with the dose was observed for both Etoposide and MMC, with both microscopy and AI showing similar trends across the dose range. For Mannitol, no observable increase in cytotoxicity was seen (Figure 5).

When scoring by microscopy, from each culture, 1,000 binucleated cells were scored to assess MN frequency and another 500 mononucleated, binucleated, or polynucleated cells were scored to determine cytotoxicity in the Cyt-B version of the assay. In the non-Cyt-B version of the assay, 1,000 mononucleated cells were scored to assess MN frequency. By IFC, an average of 7,733 binucleated cells, 6,493 mononucleated cells, and 2,649 polynucleated cells were scored per culture to determine cytotoxicity. MN frequency was determined from within the binucleated cell population for the Cyt-B version of the assay. For the non-Cyt-B version of the assay, an average of 27,866 mononucleated cells were assessed for the presence of MN (Figure 5).

Figure 1: AI software workflow. The user begins by selecting the .daf files to be loaded into the AI software. Once the data has been loaded, the user begins to assign objects to the ground truth model classes through the user interface. To aid in ground truth population, the cluster and predict algorithms can be used to identify imagery with similar morphology. Once sufficient objects have been added to each model class, the model can be trained. Following training, the user can assess the performance of the model using the tools provided, including an interactive confusion matrix. Finally, the user can either return to the training portion of the workflow to enhance the ground truth data or, if sufficient accuracy has been achieved, the user can step out of the training/tagging workflow loop and use the model to classify additional data.

Figure 2: Cluster algorithm. The cluster algorithm can be run at any time on a segment of 1,500 objects randomly selected from the input data. This algorithm groups similar objects within a segment together according to the morphology of both unclassified objects and objects that have been assigned to the ground truth model classes. Example imagery shows binucleated, mononucleated, and multinucleated cells, and cells with irregular morphology. Clusters containing mononucleated cells fall on one side of the object map, while clusters with multinucleated cells are on the opposite side of the object map. Binucleated cell clusters fall somewhere between mono- and multinucleated cell clusters. Finally, clusters with irregular morphology fall in a different area of the object map altogether. The user interface permits adding entire clusters, or select objects within clusters, to the ground truth model classes.

Figure 3: Predict algorithm. The predict algorithm requires a minimum of 25 objects in each ground truth model class and attempts to predict the most appropriate model class to assign unclassified objects within a segment. The predict algorithm is more robust in comparison to the cluster algorithm with respect to the identification of subtle morphologies in images (i.e., mononucleated cells with MN [yellow] versus mononucleated cells without MN [red]). Objects with these similarities are placed in close proximity on the object map; however, the user is easily able to inspect the images in each predicted class and assign objects to the appropriate model class. Objects that the algorithm is unable to predict a class for will remain as 'unknown'. The predict algorithm permits users to rapidly populate the ground truth model classes, particularly in the case of events that are considered rare and challenging to find within the input data, such as micronucleated cells.

Figure 4: Confusion matrix with model results. The results screen of the AI software presents the user with three different tools to assess model accuracy. (A) The class distribution histograms permit the user to click on the bins of the histogram to assess the relationship between objects in the truth populations and objects that were predicted to belong to that model class. In general, the closer the percentage values between the truth and predicted populations are to one another for a given model class, the more accurate the model. (B) The accuracy statistics table allows the user to assess three common machine learning metrics to assess model accuracy: precision, recall, and F1. In general, the closer these metrics are to 100%, the more accurate the model is at identifying events in the model classes. Finally, (C) the interactive confusion matrix provides an indication of where the model is misclassifying events. The on-axis entries (green) indicate objects from the ground truth data that were classified correctly during training. Off-axis entries (shaded orange) indicate objects from the ground truth data that were incorrectly classified. Various examples of misclassified objects are shown, including (i) a mononucleated cell classified as a mononucleated cell with MN, (ii) a binucleated cell classified as a binucleated cell with MN, (iii) a mononucleated cell classified as a cell having irregular morphology, (iv) a binucleated cell with MN classified as a cell having irregular morphology, and (v) a binucleated cell with a MN classified as a binucleated cell.

Figure 5: Genotoxicity and cytotoxicity results. Genotoxicity measured by the percentage of MN by microscopy (clear bars) and AI (dotted bars) following a 3 h exposure and 24 h recovery for Mannitol, Etoposide, and MMC using both the (A-C) Cyt-B and (D-F) non-Cyt-B methods. Statistically significant increases in MN frequency compared to controls are indicated by asterisks (*p < 0.001, Fisher's Exact Test). Error bars represent the standard deviation of the mean from three replicate cultures at each dose point except for MMC by microscopy, where only duplicate cultures were scored. This figure has been modified from Rodrigues et al.15.


Discussion

The work presented here describes the use of deep learning algorithms to automate the scoring of the MN assay. Several recent publications have shown that intuitive, interactive tools allow the creation of deep learning models to analyze image data without the need for in-depth computational knowledge18,19. The protocol described in this work using a user interface-driven software package has been designed to work well with very large data files and permit the creation of deep learning models with ease. All necessary steps to create and train RF and CNN models in the AI software package are discussed, permitting highly accurate identification and quantification of all key events in both the Cyt-B and non-Cyt-B versions of the assay. Finally, the steps to use these deep learning models to classify additional data and evaluate chemical cytotoxicity and MN frequency are described.

The AI software used in this work has been created with a convenient user interface and constructed to work easily with large datasets generated from IFC systems. Training, evaluation, and enhancement of deep learning models follow a straightforward iterative approach (Figure 1), and application of the trained models to classify additional data can be accomplished in just a few steps. The software contains distinctive cluster (Figure 2) and predict (Figure 3) algorithms that permit rapid assignment of objects into appropriate ground truth model classes. The protocol in this paper demonstrates how a CNN model, constructed and trained using AI software, is able to robustly identify all key events in the MN assay; it yields results that compare well to traditional microscopy, thus removing the requirement for image analysis and computer coding experience. Furthermore, the interactive model results (Figure 4) permit the investigation of specific events that the model is misclassifying. The iterative process permits assigning these misclassified events to the appropriate model classes so that the model can be trained again to enhance accuracy.

The results presented here (Figure 5) show the evaluation of Mannitol, Etoposide, and MMC using microscopy and a CNN model created in the AI software. Using both versions of the MN assay, evaluated with a single AI model, increases in cytotoxicity are consistent with increasing doses for both MMC and Etoposide, while exposure to Mannitol yields no increase in cytotoxicity, as expected. For genotoxicity evaluation, significant (Fisher's Exact Test, one-sided) increases in MN frequency were demonstrated using MMC and Etoposide but not using Mannitol. Results for both microscopy and the AI model compared well across the dose ranges for each chemical tested.
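As a concrete illustration of the significance test named above, a one-sided Fisher's exact test can be computed with SciPy from the per-culture counts of cells with and without MN; the numbers below are hypothetical placeholders, not data from the study:

```python
# One-sided Fisher's exact test on MN counts (treated vs. control).
# The counts below are hypothetical placeholders, not data from the study.
from scipy.stats import fisher_exact

#                [cells with MN, cells without MN]
treated_counts = [250, 7500]   # e.g., binucleated cells in a treated culture
control_counts = [60, 7700]    # e.g., binucleated cells in a solvent control

_, p_value = fisher_exact([treated_counts, control_counts], alternative="greater")
print(f"one-sided p = {p_value:.3g}")
```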

In several previous publications, it has been shown that an IFC-based MN assay can be performed with straightforward sample preparation steps along with an image-based analysis using masks (regions of interest that highlight pixels in an image) and features calculated using these masks to automatically score all key events12,13,16. This IFC-based assay takes advantage of the strengths of IFC, including high-throughput image capture, simplified sample processing with simple DNA dyes, and automated differentiation of cellular imagery with morphology that aligns with published MN assay scoring criteria. However, this workflow also included disadvantages, such as the complexity of feature-based analysis techniques that are often rigid and necessitate advanced knowledge of image analysis software packages12. The use of deep learning to analyze MN data acquired by IFC demonstrates that CNNs can be used to break away from the restrictions and difficulties of feature-based analyses, yielding results that are highly accurate and compare well to microscopy scoring14,15. While this AI-based approach is promising, further studies with an expanded selection of well-described chemicals should be performed to further test and validate the robustness of the technique. This work further demonstrates the advantages of IFC over more traditional methods, such as microscopy and conventional flow cytometry, to enhance the performance of assays with challenging morphologies and stringent scoring requirements.


Disclosures

The authors are employed by Luminex Corporation, a DiaSorin Company, the manufacturer of the ImageStream imaging flow cytometer and the Amnis AI software used in this work.

Acknowledgments

None.

Materials

Name Company Catalog Number Comments
15 mL centrifuge tube Falcon 352096
Cleanser - Coulter Clenz  Beckman Coulter 8546931 Fill container with 200 mL of Cleanser.  https://www.beckmancoulter.com/wsrportal/page/itemDetails?itemNumber=8546931#2/10//0/25/1/0/asc/2/8546931///0/1//0/
Colchicine MilliporeSigma 64-86-8
Corning bottle-top vacuum filter  MilliporeSigma CLS430769 0.22 µm filter, 500 mL bottle
Cytochalasin B MilliporeSigma 14930-96-2 5 mg bottle
Debubbler - 70% Isopropanol MilliporeSigma 1.3704 Fill container with 200 mL of Debubbler.  http://www.emdmillipore.com/US/en/product/2-Propanol-70%25-%28V%2FV%29-0.1-%C2%B5m-filtred,MDA_CHEM-137040?ReferrerURL=https%3A%2F%2Fwww.google.com%2F
Dimethyl Sulfoxide (DMSO) MilliporeSigma 67-68-5
Dulbecco's Phosphate Buffered Saline 1X EMD Millipore BSS-1006-B PBS Ca++MG++ Free 
Fetal Bovine Serum HyClone SH30071.03
Formaldehyde, 10%, methanol free, Ultra Pure Polysciences, Inc. 04018 This is what is used for the 4% and 1% Formalin. CAUTION: Formalin/Formaldehyde toxic by inhalation and if swallowed.  Irritating to the eyes, respiratory systems and skin.  May cause sensitization by inhalation or skin contact. Risk of serious damage to eyes.  Potential cancer hazard.  http://www.polysciences.com/default/catalog-products/life-sciences/histology-microscopy/fixatives/formaldehydes/formaldehyde-10-methanol-free-pure/
Guava Muse Cell Analyzer Luminex 0500-3115 A standard configuration Guava Muse Cell Analyzer was used.
Hoechst 33342 Thermo Fisher H3570 10 mg/mL solution
Mannitol MilliporeSigma 69-65-8
MEM Non-Essential Amino Acids 100X HyClone SH30238.01
MIFC - ImageStreamX Mark II Luminex, a DiaSorin company 100220 A 2-camera ImageStreamX Mark II equipped with the 405 nm, 488 nm, and 642 nm lasers was used.
MIFC analysis software - IDEAS Luminex, a DiaSorin company 100220 "Image analysis software"
The companion software to the MIFC (ImageStreamX MKII)
MIFC software - INSPIRE Luminex, a DiaSorin company 100220 "Image acquisition software"
This is the software that runs the MIFC (ImageStreamX MKII)
Amnis AI software Luminex, a DiaSorin company 100221 "AI software"
This is the software that permits the creation of artificial intelligence models to analyze data
Mitomycin C MilliporeSigma 50-07-7
NEAA Mixture 100x Lonza BioWhittaker 13-114E
Penicillin/Streptomycin/Glutamine solution 100X Gibco 15070063
Potassium Chloride (KCl) MilliporeSigma P9541
Rinse - Ultrapure water or deionized water NA NA Use any ultrapure water or deionized water.  Fill container with 900 mL of Rinse.
RNase MilliporeSigma 9001-99-4
RPMI-1640 Medium 1x HyClone SH30027.01
Sheath - PBS MilliporeSigma BSS-1006-B This is the same as Dulbecco's Phosphate Buffered Saline 1x  Ca++MG++ free.  Fill container with 900 mL of Sheath.
Sterile water HyClone SH30529.01
Sterilizer - 0.4%–0.7% Hypochlorite VWR JT9416-1 This is essentially 10% Clorox bleach that can be made by diluting Clorox bleach with water.  Fill container with 200 mL of Sterilizer.
T25 flask Falcon 353109
T75 flask Falcon 353136
TK6 cells MilliporeSigma 95111735


References

  1. Fenech, M., et al. HUMN project initiative and review of validation, quality control and prospects for further development of automated micronucleus assays using image cytometry systems. International Journal of Hygiene and Environmental Health. 216 (5), 541-552 (2013).
  2. OECD. Test No. 487: In Vitro Mammalian Cell Micronucleus Test. OECD Guidelines for the Testing of Chemicals, Section 4. OECD Publishing, Paris. (2016).
  3. Fenech, M. The in vitro micronucleus technique. Mutation Research/Fundamental and Molecular Mechanisms of Mutagenesis. 455 (1), 81-95 (2000).
  4. Bonassi, S., et al. An increased micronucleus frequency in peripheral blood lymphocytes predicts the risk of cancer in humans. Carcinogenesis. 28 (3), 625-631 (2007).
  5. Fenech, M. Cytokinesis-block micronucleus cytome assay. Nature Protocols. 2 (5), 1084-1104 (2007).
  6. Fenech, M. Commentary on the SFTG international collaborative study on the in vitro micronucleus test: To Cyt-B or not to Cyt-B. Mutation Research/Fundamental and Molecular Mechanisms of Mutagenesis. 607 (1), 9-12 (2006).
  7. Seager, A. L., et al. Recommendations, evaluation and validation of a semi-automated, fluorescent-based scoring protocol for micronucleus testing in human cells. Mutagenesis. 29 (3), 155-164 (2014).
  8. Rossnerova, A., Spatova, M., Schunck, C., Sram, R. J. Automated scoring of lymphocyte micronuclei by the MetaSystems Metafer image cytometry system and its application in studies of human mutagen sensitivity and biodosimetry of genotoxin exposure. Mutagenesis. 26 (1), 169-175 (2011).
  9. Bryce, S. M., Bemis, J. C., Avlasevich, S. L., Dertinger, S. D. In vitro micronucleus assay scored by flow cytometry provides a comprehensive evaluation of cytogenetic damage and cytotoxicity. Mutation Research/Genetic Toxicology and Environmental Mutagenesis. 630 (1), 78-91 (2007).
  10. Avlasevich, S. L., Bryce, S. M., Cairns, S. E., Dertinger, S. D. In vitro micronucleus scoring by flow cytometry: Differential staining of micronuclei versus apoptotic and necrotic chromatin enhances assay reliability. Environmental and Molecular Mutagenesis. 47 (1), 56-66 (2006).
  11. Basiji, D. A. Principles of Amnis imaging flow cytometry. Methods in Molecular Biology. 1389, 13-21 (2016).
  12. Rodrigues, M. A. Automation of the in vitro micronucleus assay using the Imagestream® imaging flow cytometer. Cytometry Part A. 93 (7), 706-726 (2018).
  13. Verma, J. R., et al. Investigating FlowSight® imaging flow cytometry as a platform to assess chemically induced micronuclei using human lymphoblastoid cells in vitro. Mutagenesis. 33 (4), 283-289 (2018).
  14. Wills, J. W., et al. Inter-laboratory automation of the in vitro micronucleus assay using imaging flow cytometry and deep learning. Archives of Toxicology. 95 (9), 3101-3115 (2021).
  15. Rodrigues, M. A., et al. The in vitro micronucleus assay using imaging flow cytometry and deep learning. npj Systems Biology and Applications. 7 (1), 20 (2021).
  16. Rodrigues, M. A. An automated method to perform the in vitro micronucleus assay using multispectral imaging flow cytometry. Journal of Visualized Experiments. (147), e59324 (2019).
  17. Lovell, D. P., et al. Analysis of negative historical control group data from the in vitro micronucleus assay using TK6 cells. Mutation Research/Genetic Toxicology and Environmental Mutagenesis. 825, 40-50 (2018).
  18. Berg, S., et al. ilastik: interactive machine learning for (bio)image analysis. Nature Methods. 16 (12), 1226-1232 (2019).
  19. Hennig, H., et al. An open-source solution for advanced imaging flow cytometry data analysis using machine learning. Methods. 112, 201-210 (2017).

Tags

Micronucleus Assay, Imaging Flow Cytometry, Artificial Intelligence, Automation, Low Throughput, Score Variability, Visual Confirmation, Large Scale Screening, Toxicity Testing, AI Model, Convolutional Neural Network, CNN Model, Image Tagging Algorithms, Mononucleated, Binucleated, Polynucleated, Irregular Morphology

Cite this Article

Rodrigues, M. A., Gracia García Mendoza, M., Kong, R., Sutton, A., Pugsley, H. R., Li, Y., Hall, B. E., Fogg, D., Ohl, L., Venkatachalam, V. Automation of the Micronucleus Assay Using Imaging Flow Cytometry and Artificial Intelligence. J. Vis. Exp. (191), e64549, doi:10.3791/64549 (2023).
