SLAS2026 Scientific Podium Presentations


Recorded On: 02/09/2026

Recordings from the Scientific Podium Presentations at SLAS2026, including presentations from nine educational tracks and two keynote presentations:

  • Keynote speakers:
    • Serena Silver, PhD, CSO, Accent Therapeutics
    • Paul Kenny, PhD, Director, Friedman Brain Institute; Chair, Nash Family Department of Neuroscience; Icahn School of Medicine at Mount Sinai


Special thanks to our Opening Keynote Sponsor Thermo Fisher Scientific



  • Scientific program presentations:
    • Assay Development and Screening
    • Automation Technologies
    • New Modalities
    • Omics and Spatial Omics
    • Micro- and Nanotechnologies 
    • Cellular Technologies
    • Data Science and AI 
    • Screening Applications and Biomarker Diagnostics



SLAS2026 Keynotes
Opening Keynote: Addressing Novel Oncology Targets for Tumors with Genomic Instability
Open to view video.  |   Closed captions available
Serena Silver, PhD, CSO, Accent Therapeutics. Accent Therapeutics' platform and integrated capabilities fuel a robust pipeline of transformative small-molecule oncology therapeutics with expansive potential for individuals living with difficult-to-treat cancers. KIF18A is a plus-end-directed kinesin known to play a role in mitosis by facilitating chromosome alignment and spindle microtubule dynamics. KIF18A loss selectively inhibits proliferation in cancer cells with TP53 mutations and chromosomal instability (CIN); those with ongoing segregation defects, such as aneuploid or whole-genome-doubled cells, are especially vulnerable to mitotic disruption. These findings establish KIF18A as a high-potential precision oncology target, opening new avenues for therapeutic intervention. Drug discovery efforts yielded a series of KIF18A inhibitors that were optimized for potency, a high degree of kinesin selectivity, favorable in vitro and in vivo ADME properties, and robust efficacy. Oncology indications of high interest were identified via a large, multiplexed PRISM screen, and responses in ovarian and triple-negative breast cancer (TNBC) models were recapitulated in vitro and in vivo. Among CIN-associated biomarkers, whole-genome duplication (WGD) was a strong predictor of KIF18A sensitivity. To assess clinical feasibility, an H&E-based WGD detection model was developed using Imagene AI's OI Suite and TCGA samples. The model achieved >0.7 AUC on a held-out test set within minutes, demonstrating proof of concept for AI-based WGD detection in clinical samples. The KIF18A inhibitor ATX-295 is currently being evaluated in a first-in-human, Phase 1/2 open-label study assessing safety, pharmacokinetics, pharmacodynamics, and preliminary antitumor activity in patients with advanced solid tumors and ovarian cancer (NCT06799065).
Closing Keynote: Translating Orexin-1 Antagonism into Novel Therapies for Substance Use Disorders
Open to view video.  |   Closed captions available
Paul Kenny, PhD, Department Chair, Icahn School of Medicine at Mount Sinai. Substance use disorders (SUDs) remain a major global health challenge, with few effective medications and high relapse rates. The hypothalamic neuropeptide orexin (aka hypocretin) regulates arousal and motivation, and is thought to regulate the motivational properties of drugs of abuse through activation of orexin-1 (OX1) receptors. Chronic exposure to addictive drugs enhances orexin signaling, which may drive their use and the negative withdrawal symptoms that initiate and maintain SUDs. Here, I report the development of ET-204 (formerly AZD4041), a first-in-class, potent, and selective small-molecule OX1 receptor antagonist (low-nanomolar IC₅₀ at OX1; >1000 nM at OX2 receptors). ET-204 was discovered and advanced through a partnership with AstraZeneca funded by the National Institute on Drug Abuse (NIDA) and the NIH Blueprint Neurotherapeutics Network (BPN). Preclinical studies demonstrated robust target engagement and dose-dependent suppression of drug self-administration and withdrawal-associated seeking in rodents, with efficacy observed at >65% receptor occupancy. In Phase 1 single- and multiple-ascending-dose studies in healthy volunteers, ET-204 was safe, well tolerated, and exhibited favorable pharmacokinetics supporting once-daily dosing. PK/PD modeling predicted >90% human OX1 receptor occupancy at steady state, consistent with therapeutic efficacy in preclinical models. ET-204 exemplifies how integrating molecular pharmacology, behavioral neuroscience, and quantitative translational modeling can accelerate innovation in therapeutic development for drug addiction. This program further highlights how cross-sector collaboration can transform mechanistic neurobiology into urgently needed treatments for substance use disorders.
Assay Development and Screening
Session: Advanced Imaging and High Content Assays
Session Chair: Kaylene Simpson
Advances in high-content imaging continue to redefine how we interrogate cellular phenotypes, target biology, and compound activity. This session will showcase emerging strategies that integrate advanced imaging screens and computational approaches to drive chemical and biological discovery at scale.
Innovation Award Winner: Development and application of AI-powered label-free imaging for assays and screening
Open to view video.  |   Closed captions available
Abstract: Imaging is a powerful method to assay normal and pathological processes in cells, tissues, and in vivo, as well as the effects of genetic, optogenetic, and pharmacological perturbations. Among its advantages, imaging offers the possibility of observing dynamic and relatively intact, physiologically relevant, and even living biological specimens, including human cell systems that can help elucidate the biology of disease. Historically, scientists have used fluorescent dyes or proteins to label specific cell structures, states, or functions, and analyzed images of the fluorescent labels by hand to infer the biology that these labels are designed to report. However, several groups including ours have been able to predict patterns and intensities of fluorescent labels from images of fixed, unlabeled samples, using artificial intelligence tools and especially deep learning methods, in fully automated ways and without the need for segmentation. Recently, we extended this surprising capability to the prediction of fluorescent labels from images of living cells. In this presentation, we will describe some of this work and discuss a number of technical considerations, including the generation of robust training data sets, methods to train deep learning algorithms to predict fluorescent labels from unlabeled cells, practical ways to avoid overfitting and maximize the generalizability of the models developed, and applications of explainable AI (XAI) tools to reveal key image features discovered by the models. We will also discuss how these approaches and algorithms might be integrated into workflows to expand the depth of phenotypic screens with little or no additional cost, or combined with time-lapse imaging to effectively “look back in time” and discover the earliest phenotypic changes that predict a future fate.
Time permitting, we will also discuss new AI tools that can help investigators use data from phenotypic screens of unannotated pharmacological compounds to predict the mechanism of action and molecular targets of their screens’ hits.
A Screening Workflow for Live Cell Painting: Profiling Drugs at Scale to Generate New Insights
Open to view video.  |   Closed captions available
Abstract: Live cell painting is an emerging high-content morphological profiling method, enabling new phenotypic measurements by studying cells in their native environment and by capturing kinetic data. However, its adoption for high-throughput screens has been slow, mainly because of the requirement for new workflows adapted to live cells and the lack of methods for probing cells without perturbing them. Here, we report the design and deployment of a new HTS workflow, with the goal of providing a reference for drug discovery groups to implement in their own HTS pipelines. This workflow allows new profiling data to be captured at scale and in live cells, with tools that preserve cell physiology. Our workflow integrates modular automation with optimized scheduling for timing-sensitive steps, parallel assay handling, precise environmental control, and a non-toxic live cell painting method that is unique in preserving cell physiology. To evaluate performance at scale, we profiled a library of pharmacologically active compounds across multiple cellular models using live cell and fixed cell painting. This study served both as a large-scale benchmarking exercise and as a reference dataset for eventual annotation of new drug candidates in discovery pipelines. The workflow demonstrated high reproducibility, maintained consistent cell health, and delivered robust imaging performance for downstream profiling. Morphological signatures obtained from live cell painting revealed bioactive responses across compound classes, with temporal resolution providing an added dimension for profiling. Together, these results establish automated live cell painting as a scalable approach for integration into HTS pipelines, providing a powerful bridge between discovery-scale screening and deeper mechanistic biology.
Session: From Virtual Screening to Validation
Session Chair: John Moffat, PhD
Advances in machine learning and AI are transforming our ability to predict cellular responses to genetic and chemical perturbations, unlocking new opportunities in target discovery, safety assessment, and therapeutic design. This session will highlight cutting-edge methods for in silico screening, including AI-based modeling of cellular phenotypes, and showcase how these predictions are being paired with experimental validation to accelerate biological insight.
High-throughput patient-derived 3D tumor spheroid assay for CAR-T therapy pre-clinical evaluation
Open to view video.  |   Closed captions available
PhenoCompass: A Multimodal Deep Learning Approach for Phenotypically Navigated Virtual Screening
Open to view video.  |   Closed captions available
Accurate prediction of target-pathway interactions for small molecules remains a central challenge in drug discovery. Recent advances in high-content phenotypic screening have ushered in a new chapter in activity prediction through comprehensive mapping of compound-induced cellular phenotypes via scalable and unbiased high-dimensional representations. Still, properly linking image-based phenotypes to chemical structure remains nontrivial, especially when considering batch effects and the need to generalize to unseen chemical space. Here we present PhenoCompass, a multimodal model that aligns perturbation-induced representations of Cell Painting phenotypes with their corresponding compound structures to perform large-scale virtual screening. Leveraging the JUMP Cell Painting dataset, PhenoCompass learns batch-invariant cell-morphology representations via a self-supervised vision transformer and utilizes graph neural network embeddings with additional hand-curated molecular features to encode chemical structure. These modalities are aligned into a shared co-embedding to facilitate accurate cross-modal retrieval and structure-to-phenotype prediction. We show that PhenoCompass outperforms structure-only, image-only, and existing multimodal models at ranking phenotypic similarity to anchor sets for experimental compounds with novel chemical structures. External to JUMP, PhenoCompass also shows robust prediction of potent binders for historical, in-house HTS assays of anchor-related targets. Additionally, we performed a prospective, large-scale virtual screen using the 3.8 billion-compound Enamine REAL library. Of the ~400 Enamine compounds best predicted by PhenoCompass to be active, we confirmed ~7 by functional assay to be pathway-specific, 3 to be on-target, and 1 to be target-specific by a direct binding assay.
Altogether, PhenoCompass offers a scalable, generalizable, and multimodal approach for virtual screening of pathway-specific modulators.
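The cross-modal retrieval step the abstract describes, ranking library compounds by how close their structure embeddings sit to an anchor phenotype in the shared co-embedding, reduces to a cosine-similarity ranking. The sketch below is illustrative only; the function name and toy embeddings are hypothetical and not part of PhenoCompass:

```python
import numpy as np

def rank_by_phenotype(anchor_pheno, compound_embs):
    """Rank compounds by cosine similarity of their structure embeddings
    to an anchor phenotype embedding in a shared co-embedding space."""
    a = anchor_pheno / np.linalg.norm(anchor_pheno)
    c = compound_embs / np.linalg.norm(compound_embs, axis=1, keepdims=True)
    sims = c @ a                      # cosine similarity per compound
    return np.argsort(-sims)          # indices, most similar first

# Toy example: three compounds embedded in a 4-d shared space
anchor = np.array([1.0, 0.0, 0.0, 0.0])
library = np.array([
    [0.9, 0.1, 0.0, 0.0],  # nearly parallel to the anchor phenotype
    [0.0, 1.0, 0.0, 0.0],  # orthogonal: dissimilar phenotype
    [0.5, 0.5, 0.0, 0.0],  # intermediate
])
order = rank_by_phenotype(anchor, library)  # [0, 2, 1]
```

In a real screen the anchor would be the embedding of a reference ("anchor set") phenotype and the library rows would be structure embeddings for billions of virtual compounds, scored in batches.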
Predicting cellular responses to perturbation across diverse contexts with State
Open to view video.  |   Closed captions available
Cellular responses to perturbations are a cornerstone for understanding biological mechanisms and selecting drug targets. While machine learning models offer tremendous potential for predicting perturbation effects, they currently struggle to generalize to unobserved cellular contexts. Here, we introduce State, a transformer model that predicts perturbation effects while accounting for cellular heterogeneity within and across experiments. State predicts perturbation effects across sets of cells and is trained using gene expression data from over 100 million perturbed cells. State improved discrimination of effects on large datasets by more than 30% and identified differentially expressed genes across genetic, signaling and chemical perturbations with significantly improved accuracy. Using its cell embedding trained on observational data from 167 million human cells, State identified strong perturbations in novel cellular contexts where no perturbations were observed during training. We further introduce Cell-Eval, a comprehensive evaluation framework that highlights State's ability to detect cell type-specific perturbation responses, such as cell viability. Overall, the performance and flexibility of State sets the stage for scaling the development of virtual cell models.
Chemomics of DEL: Building Protein Structure–Function Maps and Machine Learning Models from Untapped Screening Data
Open to view video.  |   Closed captions available
Abstract: DNA-Encoded Library (DEL) screens yield chemistry data at the -omics level, yet the vast majority is unused. By integrating advanced DEL analytics with computational and medicinal chemistry approaches, we generated detailed structure–activity relationship (SAR) insights and predictive AI/ML models across multiple target classes and modes of action. The results demonstrate that comprehensive use of DEL data accelerates the translation from hit identification to lead optimization, compressing years of traditional discovery into a single experiment while revealing novel, previously unexplored chemical space. We conclude that DEL datasets, when fully leveraged, can serve as high-resolution target structure–function maps and robust foundations for predictive modeling. This talk will demonstrate how moving beyond top hits to leverage the full dataset accelerates preclinical programs, uncovers novel chemical space, and redefines what is possible in small-molecule discovery. Future efforts will extend these methods to broader target classes and integrate DEL-derived insights with structural biology to further accelerate preclinical research.
Session: CRISPR Screens with High-Dimensional Readouts
Session Chair: John Doench
Viability-based CRISPR screens are a mainstay in functional genomics, offering a scalable and straightforward approach to identify genes essential for cell growth or survival. However, the simplicity of the readout limits the types of biological questions that can be addressed. Many phenotypes of interest, such as changes in cell morphology, differentiation state, or transcriptional programs, are often invisible to this approach. As a result, there has been a strong push to expand the phenotypic resolution of CRISPR screens through the integration of more informative readouts. This session will highlight cutting-edge strategies that couple CRISPR perturbations to high-content phenotyping, with a focus on imaging-based assays and single-cell RNA sequencing. Together, these methods are unlocking new dimensions of cellular biology, enabling researchers to connect gene function to complex cellular behaviors with unprecedented depth.
Advances in CRISPR Perturbational Screens Across Modalities and Readouts
Open to view video.  |   Closed captions available
Abstract: This talk will highlight recent advances in CRISPR-based screening, starting with updated libraries for gene knockout, interference, and activation. These next-generation libraries incorporate improved genome annotations, select more active reagents, and are optimized for better expression and delivery, resulting in more robust perturbations and cleaner, more interpretable data. The presentation will also explore the application of base and prime editing to model the functional consequences of specific genetic variants, such as mutations that confer resistance to small-molecule therapies. These technologies make it possible to introduce precise edits at scale, offering a powerful platform for variant-to-function studies in disease-relevant contexts. Finally, the talk will cover key experimental considerations for CRISPR-based single-cell transcriptomic screens, such as Perturb-seq. Topics will include construct and barcode design, transcript capture efficiency, and practical strategies for scaling and analyzing these complex datasets. Together, these developments represent a more precise and versatile CRISPR toolkit for functional genomics and therapeutic discovery.
An atlas of 100 million single cells of genome-wide perturb-seq and its impact on AI, Precision Medicine, and Target Discovery
Open to view video.  |   Closed captions available
We performed genome-wide CRISPRi and CRISPRa perturb-seq in dozens of cell types, creating an atlas of over 100 million single cells using the Illumina PIP-seq single cell technology. Combined with human genetics cohorts of 1.2 million people with whole genome sequencing and medical record data, we trained deep learning models with applications to interpretation of human genetic variation, mapping novel disease pathways, and identifying candidate drug targets.
High-throughput optical pooled screening of live cell dynamics with multi-camera array microscopy
Open to view video.  |   Closed captions available
Abstract: Optical pooled screening (OPS) has emerged as a powerful method for high-throughput functional genomics. OPS leverages high-content microscopy to read out image-based phenotypes of cells with pooled CRISPR genetic perturbations, followed by in situ sequencing of DNA barcodes to associate individual cells with their perturbation identity. OPS has been deployed in areas including fundamental cell biology, immunology, and cancer biology, and the largest OPS studies to date have profiled tens of millions of cells, enabling genome-wide image-based screening. However, imaging at this scale is not trivial, with multi-cycle phenotype readout and in situ sequencing requiring several weeks of continuous imaging time (and significant instrument usage costs in shared facilities). Likewise, challenges of high-frame-rate imaging at large scales with conventional widefield microscopes have meant that OPS is largely limited to static phenotypes in fixed cells. Higher-throughput imaging would (1) accelerate existing genome-scale screens, (2) enable even larger, more complex experiments, like those exploring multi-gene interactions or clonally resolved screens, and (3) make it possible to capture rapid cellular behaviors like immune cell dynamics. Multi-camera array microscopy (MCAM) offers a novel imaging platform for extremely high-throughput imaging. In an MCAM system, dozens of miniaturized microscopes image many fields of view simultaneously, enabling dramatic increases in overall imaging throughput compared to conventional single-field-of-view microscopes. Here we propose leveraging a 48-camera array microscope for OPS with 1.2 µm full-pitch resolution (equivalent to a standard 10x objective), enabling a nominal 48x increase in imaging rate. We have developed a module for multichannel fluorescence without mechanical filter switching to enable rapid readout of OPS barcodes.
This module includes a custom microlens array homogenizer for multi-channel illumination and a custom large-format multi-band emission filter. Additionally, we leverage a recently developed high-density LED array to enable high-speed multidirectional illumination for phase contrast microscopy of unstained living cells, providing a platform to assay dynamic cellular phenotypes. Our ongoing and future work is focused on validating MCAM-based barcode readout efficacy at scale as well as developing computational approaches to quantify and co-register barcode identities and live-cell phenotypes.
Massively parallel genome-wide optical pooled screen of TMED cargo receptor abundance and subcellular localization
Open to view video.  |   Closed captions available
The toxic accumulation of misfolded or mislocalized mutant proteins precipitates the development of several forms of tubular kidney disease (e.g., ADTKD-MUC1). Previous work from our lab has identified the involvement of the transmembrane emp24 domain (TMED) family of protein cargo receptors in this disease process. However, the exact roles and functions of TMEDs in regulating protein cargo trafficking within the secretory pathway remain poorly understood. Although the expression and subcellular localization of the TMEDs are crucial for their function, it is unknown what regulates these aspects of TMED biology. We therefore performed a genome-wide optical pooled screen of TMED abundance and localization within the secretory pathway in tubular kidney epithelial cells. We found that TMED expression was ablated by the corresponding gTMED and unaffected in both non-targeting and intergenic controls. Moreover, we identified several novel genetic regulators of TMED abundance and secretory localization. Future research will mechanistically interrogate the role of these regulators in the control of TMED function.
Session: Biochemical and Biophysical Proximity Assays
Session Chair: Amine Sadok, PhD
Induced proximity-based therapeutics rely on small molecules that bring a protein of interest into the vicinity of an effector, promoting the formation of a ternary complex to achieve optimal target engagement and functional efficacy. This session will discuss the latest progress in biochemical and biophysical screening strategies to accelerate the discovery and optimization of molecular glue degraders.
Yesterday’s Undruggable Are Today’s Degradable: The Promise of Molecular Glue Degraders
Open to view video.  |   Closed captions available
I will introduce Amgen's induced proximity platform and highlight the discovery of a VHL-based molecular glue degrader.
Systematic molecular glue discovery with a high-throughput protein remodeling platform
Open to view video.  |   Closed captions available
Realising the promise of new medicines that operate through a targeted molecular glue-induced degradation mechanism requires systematic tools that can uncover the relevant principles of neomorphic protein-protein interactions. Whilst some monovalent glue degraders have been found through serendipity, the rules for small molecule attributes and the pairs or complexes of proteins that are amenable to drug-induced proximity control remain poorly articulated. Here we introduce a new approach to address this by using programmed libraries of intramolecularly edited proteins to expand protein surface landscapes and trigger new druggable interactions. We show that effector proteins, such as the E3 ligase Cereblon, can be engineered to provoke neomorphic activity by inducing the degradation of new client proteins, and that these de novo interactions provide a blueprint from which new small molecule degraders can be built. As a demonstration of the approach, we use the platform to identify new non-IMiD molecular glue degraders of the oncology target GSPT1, and we will share new data showcasing the next generation of E3 ligases for glue development.
Homogeneous proximity assay profiling to evaluate cross-platform reproducibility in identifying high-throughput screening hits
Open to view video.  |   Closed captions available
Abstract: Homogeneous proximity assays are frequently employed in high-throughput screening (HTS) programs for the discovery and optimization of modulators of protein-protein interactions (PPIs) as therapeutics and chemical probes. To evaluate cross-platform reproducibility of proximity assays, we are conducting a systematic study that will directly compare screening results across multiple assay technologies. This study will help scientists make well-informed decisions about the utility of these assays in early drug discovery and highlight factors that influence the reproducibility of HTS results. In this study, we aim to profile three assay platforms (AlphaLISA, HTRF, and NanoBiT) that will assess the PPI between the SARS-CoV-2 receptor binding domain (RBD) and its receptor, human angiotensin-converting enzyme 2 (ACE2). For all assays, the readout for potential inhibition of the RBD:ACE2 PPI by a compound is a loss of signal. Five assays across these three platforms were either developed in-house or miniaturized from a commercially available kit in 384-well plates. Based on satisfactory assay metrics, four of these assays were advanced to assay optimization in 1536-well plates using the Sigma-Aldrich Library of Pharmacologically Active Compounds (LOPAC). Preliminary results from the LOPAC screen show that only one hit is common to all four assays. We also found that a previously identified hit from the literature (corilagin), which inhibits the RBD:ACE2 PPI, is reproducible using our newly developed AlphaLISA assays but only partially inhibits the HTRF assay signal and does not inhibit the NanoBiT assay signal, suggesting that HTS hits may vary across platforms.
We are currently advancing three of these assays, one assay per platform, to the NCATS Pharmacologically Active Chemical Toolbox (NPACT) screen, where hits from primary screens and follow-up quantitative HTS will be compared across the selected platform technologies. We will also be conducting counterscreens to help eliminate hits whose apparent bioactivities are due to assay interferences. This study can be extended to additional proximity/binding assay platforms such as NanoBRET, quenched FRET, and fluorescence polarization. Results from this study will inform on reproducibility across widely used early drug discovery assays, highlight key factors influencing assay performance, and guide best practices for HTS campaigns.
Screening Efforts to Support Amgen’s Small Molecule Effector Library for Molecular Glue Discovery
Open to view video.  |   Closed captions available
Session: Emerging Novel High Throughput Assays
Session Chair: Caitlin Mills, PhD
High throughput assays can take many forms. The aim of this session is to highlight novel assays that can be deployed in diverse biological contexts and have potential for widespread adoption beyond the labs where they were established.
An Optimal Kinase Library for high throughput screening and target deconvolution
Open to view video.  |   Closed captions available
Abstract: Kinase inhibitors are widely used in the clinic and remain the focus of many drug development efforts. However, knowledge of which targets are bound by which inhibitors, and at what concentrations, is sparse. We assembled a curated Optimal Kinase Library (OKL) comprising 192 kinase inhibitors with the goal of covering the full kinome (536 kinases) while minimizing target and chemical redundancy. Kinome-wide affinity data were collected for each inhibitor on Eurofins' SCANmax platform at four concentrations spanning four orders of magnitude. Apparent Kds were determined for each inhibitor-kinase pair, resulting in a uniformly and systematically characterized small molecule library that can be deployed for screening in diverse biological systems and used to deconvolve context-relevant kinase targets. We illustrate the utility of the library in three distinct screens. In the first, vulnerabilities were identified in a model of platinum-resistant ovarian cancer. In the second, three related kinases were found to be neuroprotective in a model of chemotherapy-induced peripheral neuropathy. Lastly, in the third screen, kinase inhibitors that protect against cytoplasmic dsRNA-induced toxicity in a model of Alzheimer's disease with TDP-43 pathology were identified. In each case, OKL screens resulted in testable hypotheses that were pursued in follow-up studies. The OKL is available for screening.
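Determining an apparent Kd from binding data at four concentrations, as described above, can be illustrated with a least-squares fit to a simple 1:1 binding model, f = [L]/([L] + Kd). This is a generic sketch under that assumption, not the SCANmax analysis pipeline; the grid-search fit and values are illustrative:

```python
import numpy as np

def fit_kd(conc_nM, frac_bound, kd_grid=None):
    """Estimate apparent Kd by least squares over a log-spaced Kd grid,
    assuming simple 1:1 binding: fraction bound = [L] / ([L] + Kd)."""
    conc = np.asarray(conc_nM, float)
    obs = np.asarray(frac_bound, float)
    if kd_grid is None:
        kd_grid = np.logspace(-2, 5, 2000)   # 0.01 nM .. 100 uM
    sse = [np.sum((conc / (conc + kd) - obs) ** 2) for kd in kd_grid]
    return float(kd_grid[int(np.argmin(sse))])

# Four concentrations spanning four orders of magnitude; simulated data
# from a hypothetical inhibitor-kinase pair with true Kd = 100 nM
conc = [1.0, 10.0, 100.0, 1000.0]
obs = [c / (c + 100.0) for c in conc]
kd = fit_kd(conc, obs)   # recovers ~100 nM
```

Real data would add noise and incomplete binding plateaus, which is why a concentration range spanning several orders of magnitude around the expected Kd is needed for a well-constrained estimate.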
BRETTSA: An ultra-sensitive, broadly applicable BRET method to measure target engagement through protein denaturation in live cells
Open to view video.  |   Closed captions available
Abstract: Identifying and characterizing ligand-target interactions in a physiologically relevant context remains a critical challenge in the early phases of drug discovery. Thermal shift assays enable assessments of target engagement in cells with limited prior knowledge of the target or its SAR. However, since these methods rely on protein aggregation, they often suffer from limited sensitivity and scalability in HTS settings. Here, we present a novel bioluminescence resonance energy transfer (BRET)-based assay for detecting ligand-protein interactions in intact cells using protein denaturation (BRETTSA). Cell-permeable, denaturation-sensitive dyes are used to detect the denatured state of a NanoLuc-tagged target protein after thermal challenge. Ligand interactions are detected as a change in the thermal stability profile of the target protein, which manifests as a dose-dependent loss of the BRET signal in live cells. The BRETTSA method is broadly applicable and has been applied to measure ligand binding to more than 100 targets across 20 protein families and 6 intracellular locations (including transcription factors and integral membrane proteins). The method has a broad dynamic range and can be used to detect ligand interactions over at least 5 orders of magnitude, surpassing that of aggregation-based thermal shift methods. In addition to small molecule-protein interactions, the method can also be utilized to detect a range of modalities, including molecular glues, bifunctional degraders, and cooperative ternary complexes. We demonstrate that the BRETTSA method can be scaled for high throughput applications such as hit finding and target identification. Our results demonstrate that BRETTSA is a uniquely robust, sensitive, and scalable method to assess ligand-binding interactions in a cellular context.
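A thermal-shift readout like the one described is typically reduced to an apparent melting temperature (Tm) per condition, with ligand binding reported as the Tm shift between bound and unbound protein. The sketch below uses an idealized two-state melt curve and half-maximum interpolation; these are generic assumptions for illustration, not the BRETTSA analysis:

```python
import numpy as np

def tm_from_melt(temps, signal):
    """Apparent Tm: temperature at half-maximal (normalized) signal on a
    descending melt curve, estimated by linear interpolation."""
    s = np.asarray(signal, float)
    s = (s - s.min()) / (s.max() - s.min())
    t = np.asarray(temps, float)
    # np.interp needs ascending x, so reverse the descending curve
    return float(np.interp(0.5, s[::-1], t[::-1]))

def ideal_melt(t, tm, width=1.5):
    """Idealized two-state melt curve (folded fraction vs. temperature)."""
    return 1.0 / (1.0 + np.exp((t - tm) / width))

# Hypothetical apo vs. ligand-bound curves over a 37-70 C thermal challenge
temps = np.arange(37.0, 71.0, 1.0)
tm_apo = tm_from_melt(temps, ideal_melt(temps, tm=50.0))
tm_bound = tm_from_melt(temps, ideal_melt(temps, tm=55.0))
delta_tm = tm_bound - tm_apo   # ~5 C stabilization on ligand binding
```

Repeating the Tm estimate across a ligand dilution series gives the dose-dependent stability profile the abstract refers to.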
A Kinetic Framework for Rational Scheduling of Cancer Drug Combinations
Open to view video.  |   Closed captions available
Abstract: Background: Drug combinations remain an attractive strategy to improve cancer therapy; however, their efficacy may be limited by suboptimal dose scheduling. While drug concentrations in vitro are typically constant over time, drug concentrations in vivo are transient. Thus, even when combinations appear effective in vitro, suboptimal dose scheduling in vivo may reduce efficacy by causing drugs to act sequentially and additively on cancer cells rather than synergistically. Ultimately, this reflects limited technologies to rationally guide dosing schedules for drug combinations. Purpose: The purpose of this study was to develop and test a rational approach to scheduling the dosing of drug combinations by aligning the peak times of apoptotic signaling, with the goal of enhancing combination efficacy. Methods and Results: Effective therapy requires surpassing an apoptotic signaling threshold within cancer cells. We hypothesized that each drug has a characteristic time after administration at which apoptotic signaling peaks. Temporally aligning these apoptotic signaling peaks within drug combinations may cause more cells to exceed the apoptotic threshold and would therefore maximize combination efficacy. To test this, we used Dynamic BH3 Profiling (DBP), a high-content imaging method that quantitatively measures how a short drug exposure (1-48 hours) shifts cells toward the apoptotic threshold. DBP provides a functional and sublethal measurement of apoptotic signaling applicable to diverse drugs. In liposarcoma and breast cancer cell lines, we measured the kinetics of apoptotic signaling induced by single drugs and distinguished rapid versus slow and transient versus durable responses. Finally, we demonstrated that rationally offsetting the administration of drug combinations to temporally align apoptotic signaling peaks significantly enhanced cell death in vitro.
Conclusions and Next Steps: These findings suggest that rational scheduling of drug combinations, guided by apoptotic signaling kinetics, can improve how therapies are designed and administered. By aligning dosing schedules to maximize cell death, this strategy offers a rational framework to enhance the efficacy of existing treatments, accelerate the development of novel combinations, and improve clinical outcomes for patients with otherwise refractory cancers. Future work will test this hypothesis in pre-clinical models of breast cancer and liposarcoma to validate its translational potential.
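The scheduling idea in this abstract reduces to simple arithmetic once each drug's characteristic time-to-peak is measured: dose the slowest-peaking drug first, and delay the others by the difference in peak times so all peaks coincide. A minimal sketch of that offset calculation, assuming hypothetical drug names and peak times (not values from the study):

```python
def administration_offsets(peak_times):
    """Given each drug's characteristic time-to-peak of apoptotic
    signaling (hours after administration), return how long after the
    first dose each drug should be given so that all peaks coincide.

    The slowest-peaking drug is dosed first (offset 0); faster-acting
    drugs are delayed by the difference in peak times.
    """
    latest = max(peak_times.values())
    return {drug: latest - t for drug, t in peak_times.items()}

# Hypothetical peak times as might be measured by a kinetic assay such as DBP
peaks = {"drug_A": 24.0, "drug_B": 6.0}
offsets = administration_offsets(peaks)
# drug_A is given at t = 0 h; drug_B is delayed 18 h so both peaks align at t = 24 h
```

In practice the measured kinetics also distinguish transient from durable responses, so a real scheduler would align windows rather than single time points; this sketch only illustrates the peak-alignment principle.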
Advancing Therapeutic Discovery for Traumatic Brain Injury with a High-Throughput Neurosphere Platform
Open to view video.  |   Closed captions available
Abstract: Traumatic brain injury (TBI) is a leading cause of long-term disability and death, with a significantly increased incidence in the military population. There have been significant advances in knowledge of the complex pathophysiology of TBI; however, more effective therapeutics are required for both acute and chronic disease. A key challenge for drug discovery is the lack of physiologically relevant, scalable models and assay instrumentation that can simulate TBI and capture both the effects of injury and therapeutic interventions. We have developed a high-throughput screening platform that integrates human induced pluripotent stem cell (iPSC)-derived neurospheres with a microfluidic-based system that can induce TBI events and perform neurosphere assays. Using optimized protocols, we generate highly uniform neuronal organoids in large quantities suitable for screening. To model injury, we employ a microfluidic system capable of delivering precisely controlled pressure pulses to hundreds of organoids in parallel, simulating both acute TBI events and prolonged intracranial pressure. We demonstrate that exposure to TBI-like pressure injuries significantly disrupts neuronal activity as measured by calcium ion oscillations in prefrontal cortex–like brain organoids. In parallel, we assess cell viability following injury over short-term (2-day) and long-term (30-day) periods to further characterize neurosphere responses. Together, this integrated approach provides a powerful new platform to evaluate candidate compounds across multiple phases of TBI, from immediate neuroprotection to long-term recovery. By bridging the gap between experimental models and clinical relevance, this technology has the potential to accelerate therapeutic discovery and improve outcomes for both civilian and military populations affected by brain injury.
Session: Screening Complex in Vitro Models
Session Chair: Talya Dayton, PhD There have been many recent advances in the in vitro modelling of complex, multi-cellular systems, leading to the need for more advanced high-throughput and high-content assays. This session will highlight innovative screening approaches for complex, multicellular in vitro models used in biomedical research for discovery and drug development.
Integrating gene editing, -omics, and imaging-based analyses of human airway organoids to uncover the gene-environment interactions that drive Neuroendocrine Cancer initiation
Open to view video.  |   Closed captions available
Abstract: Neuroendocrine (NE) lung cancers account for 20–25% of all lung cancers and include aggressive lung cancer subtypes such as small cell lung cancer (SCLC) and large cell NE carcinoma (LCNEC). These tumors are thought to originate from pulmonary neuroendocrine cells (PNECs)—rare airway epithelial cells with neuronal and endocrine properties. In healthy lungs, PNECs act as environmental sensors and contribute to immune regulation and tissue repair, but they expand abnormally in chronic lung diseases such as asthma, chronic obstructive pulmonary disease, and cystic fibrosis. The etiology of aggressive NE lung cancers involves environmental exposures like smoking or air pollution. Together, these observations suggest that gene-environment interactions underlie malignant transformation of PNECs. However, patients with these cancers often present with metastatic disease, preventing the use of human samples to study the early stages of the disease. To address this challenge, we developed a human fetal airway organoid platform (NEr-fAOs) enriched for PNECs. While enriched for PNECs, NEr-fAOs retain diverse airway cell types, preserving epithelial complexity. To define the earliest changes that accompany NE cancer initiation, we combine CRISPR-based genetic engineering, quantitative imaging pipelines, and multi-omics profiling for the analysis of mutant NEr-fAOs exposed to cancer-associated environmental perturbations. Our analysis toolbox includes a custom image analysis pipeline for segmenting and analyzing fluorescently labeled cells within organoids, and long-term volumetric imaging with high-resolution light-sheet microscopy. Together, these approaches enable us to capture the earliest detectable changes in cell identity, morphology, and spatial organization associated with cancer initiation.
This integrative organoid-based approach establishes a human model of NE lung cancer initiation and provides a versatile experimental platform for mechanistic discovery, biomarker development, and drug screening. By uniting gene editing, -omics, and imaging technologies within a human organoid context, this work lays the foundation for precision strategies in early detection, drug discovery, and ultimately, prevention of these cancers.
Scalable production of HSC-derived microglia for discovery biology
Open to view video.  |   Closed captions available
Abstract: Microglia play a significant homeostatic role in the central nervous system and in the response to acute injury, infection, and neurodegenerative conditions. Single-cell gene expression studies in both normal and diseased brains have found that microglia exist in a diverse continuum of cell states. However, the difficulty of purifying primary microglia and the high cost of single-cell expression profiling make it difficult to systematically explore the biology of this cell type. Here, we report a method for generating functional microglia from hematopoietic stem cells, and their use in high-content morphological screening. Using a definitive screening design of experiments approach, we identified culture conditions that allow for the scalable production of microglia-like cells that recapitulate the phenotypic diversity of primary microglia. We further demonstrate the use of these cells in a morphological screen for modulators of the type I interferon response. This work provides a new platform for modeling neuro-immune interactions of the central nervous system that is amenable to both chemical and genetic perturbation studies.
Automation Technologies
Session: AI-Enabled Chemistry: Automating Iterative Discovery and Synthesis
Session Chair: David Calabrese, PhD This session will feature presentations on advances being made in the development and application of automation technologies for facilitating or advancing productivity in chemical synthesis, the screening of chemical reactions, and automating the DMTA cycle.
Acceleration of chemistry workflows via integration of reaction planners and sample managers to assist high-throughput reaction screening
Open to view video.  |   Closed captions available
Abstract: High-throughput experimentation (HTE) is a cornerstone of modern chemical and biological research, enabling rapid exploration of reaction conditions and formulation parameters. However, a major bottleneck in scaling and automating HTE workflows lies in the complex data transfer requirements between disparate reaction planners, sample managers, and instruments, especially when systems are sourced from different vendors. In particular, the transfer of reaction setup data from an electronic lab notebook (ELN) to synthesis instruments, and subsequently to analytical platforms, is often fragmented and error-prone. This gap leads to metadata loss, inaccurate experiment tracking, and challenges in interpreting analytical results due to missing or inconsistent context about the reaction components. We present a modular data integration platform designed to enable seamless communication across multiple platforms during the entire HTE workflow. Our system facilitates the transfer of structured reaction data between ELNs, synthesis instruments, and analytical instrumentation, ensuring accurate reagent and condition metadata is preserved throughout. This enables more precise analytical method execution and improves the quality of data returned to the ELN for downstream visualization and analysis. Through case studies, we demonstrate how our approach reduces manual intervention, minimizes transcription errors, and provides a centralized data repository for more informed, computer-guided decision-making. Our work highlights a scalable path forward for laboratories looking to unify their digital ecosystem and unlock the full potential of HTE through robust data interoperability.
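The hand-off the abstract describes can be illustrated with a toy structured record: reagent and condition metadata travel with the reaction from the ELN to the synthesis and analytical instruments as one vendor-neutral payload, so nothing is lost in transcription. The field names and schema below are hypothetical illustrations, not the platform's actual data model:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class ReactionRecord:
    """Hypothetical vendor-neutral record passed from an ELN to
    synthesis and analytical instruments so metadata is preserved."""
    eln_id: str       # ELN experiment identifier
    reagents: list    # [{"name": ..., "role": ..., "equiv": ...}]
    conditions: dict  # temperature, time, solvent, ...
    well: str         # plate position for HTE arrays

record = ReactionRecord(
    eln_id="ELN-0001",
    reagents=[{"name": "aryl bromide", "role": "substrate", "equiv": 1.0},
              {"name": "Pd catalyst", "role": "catalyst", "equiv": 0.05}],
    conditions={"temperature_C": 80, "time_h": 16, "solvent": "dioxane"},
    well="A1",
)

payload = json.dumps(asdict(record))              # serialized for the instrument
restored = ReactionRecord(**json.loads(payload))  # parsed by the downstream system
assert restored == record                         # metadata survives the hand-off
```

The round-trip assertion is the essential property: whatever format the real platform uses, the analytical result can be joined back to the exact reagents and conditions that produced it.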
Consolidating Early Stage Drug Discovery in an Automated Ambient Mass Spectrometry Platform
Open to view video.  |   Closed captions available
Abstract: The early drug discovery workflow relies heavily on high-throughput experimentation, both in terms of organic synthesis and the analysis of complex biosamples. The identification of new biological targets through large-scale biospecimen studies, the generation of large sets of drug candidates and their rapid bioactivity screening, and the in vitro and cell-based confirmation of hits followed by lead optimization all rely on high-throughput strategies which are typically spread across diverse technologies in specialized facilities. The efficiency of this workflow could benefit from the consolidation of these activities in a single closed-loop platform. Mass spectrometry (MS) is an attractive technique to achieve such consolidation due to the inherent speed of mass analysis; however, this advantage is rarely fully utilized due to the widespread use of sample purification approaches (e.g., chromatography) prior to MS. Here we describe an automated system that consolidates the early drug discovery pipeline by leveraging the advantages of desorption electrospray ionization (DESI), an ambient ionization technique that allows for the rapid and direct analysis of complex samples, in both a qualitative and quantitative manner, without any need for workup. This system results from the combination of custom and commercial software, robotics, and analytical instrumentation, and it is capable of achieving throughputs better than 1 Hz using high-density arrays (up to 6,144 samples per array) and 50-nL samples (< 5 ng analyte).
More significantly, the inherent reaction acceleration phenomenon that occurs in microdroplets, such as those generated intrinsically through the DESI process, allows reaction times to be reduced to just milliseconds, effectively providing an on-the-fly synthetic method that can be coupled with in operando MS analysis or nano/microgram scale product collection for bioactivity assessment. The general workflow of this platform involves automated sample preparation or manipulation using a fluid handling workstation, generation of microarrays on PTFE-coated slides using a pin-tool, automated transfer and analysis of spotted slides using ultrahigh-throughput DESI-MS, and real-time processing of the spectral data. This methodology has been extensively demonstrated for the screening of organic reactions for identification of optimal synthesis conditions and the selective late-stage functionalization of complex molecules, as well as for label-free quantitative biological assays using purified targets (e.g. enzymes, receptors), cell cultures, microorganisms, or tissue biopsies, all with no sample cleanup. Examples of all these capabilities will be provided and framed within the overall context of drug discovery showcasing a new-generation system built within the ASPIRE initiative of the US National Center for Advancing Translational Sciences.
Optimization of Robotic Liquid Handling as a Capacitated Vehicle Routing Problem
Open to view video.  |   Closed captions available
Abstract: We present an optimization strategy to reduce the execution time of liquid handling operations in the context of an automated chemical laboratory. By formulating the task as a capacitated vehicle routing problem (CVRP), we leverage heuristic solvers traditionally used in logistics and transportation planning to optimize task execution times. As exemplified using an 8-channel pipette with individually controllable tips, our approach demonstrates robust optimization performance across different labware formats (e.g., well-plates, vial holders), achieving up to a 37% reduction in execution time for randomly generated tasks compared to the baseline sorting method. We further apply the method to a real-world high-throughput materials discovery campaign and observe that 3 minutes of optimization time led to a reduction of 61 minutes in execution time compared to the best-performing sorting-based strategy. Our results highlight the potential for substantial improvements in throughput and efficiency in automated laboratories without any hardware modifications. This optimization strategy offers a practical and scalable solution to accelerate combinatorial experimentation in areas such as drug combination screening, reaction condition optimization, materials development, and formulation engineering.
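The CVRP mapping can be sketched in miniature: destination wells play the role of customers, the multichannel pipette is the vehicle, its tip count is the capacity, and each return to the source reservoir is a depot visit. The toy greedy nearest-neighbor heuristic below only illustrates the formulation; the abstract's results rely on proper CVRP heuristic solvers, not this loop:

```python
from math import hypot

def plan_trips(wells, capacity):
    """Group destination wells into pipetting trips, each limited by the
    number of available tips (capacity), using a greedy nearest-neighbor
    heuristic. A stand-in for the CVRP solvers described in the abstract."""
    remaining = set(wells)
    trips = []
    while remaining:
        current = min(remaining)  # deterministic starting well for each trip
        trip = [current]
        remaining.remove(current)
        # Fill the trip with the closest unvisited wells (Euclidean distance)
        while len(trip) < capacity and remaining:
            nxt = min(remaining,
                      key=lambda w: hypot(w[0] - trip[-1][0],
                                          w[1] - trip[-1][1]))
            trip.append(nxt)
            remaining.remove(nxt)
        trips.append(trip)
    return trips

# 32 target positions on a plate, addressed by an 8-channel pipette
wells = [(r, c) for r in range(8) for c in range(4)]
trips = plan_trips(wells, capacity=8)
assert len(trips) == 4 and all(len(t) == 8 for t in trips)
```

Real solvers additionally model per-tip volume limits, tip-spacing constraints of the 8-channel head, and travel to the depot, which is where the reported 37% gains over naive sorting come from.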
Development of a Self-Driving / Autonomous Liquid-Liquid Extraction Platform for Chemical Reaction Work-up
Open to view video.  |   Closed captions available
Abstract: Automating process development workflows and unit operations can dramatically accelerate the discovery and development of new chemistries. There is unmet demand for standardized, general, and transferable solutions to the various unit operations involved in the chemical transformations used to produce biologically active molecules. Liquid-liquid extraction (LLE) is an essential and ubiquitous post-reaction unit operation used in chemical synthesis as a means of purification. However, the data needed to optimize and fully understand LLE parameters, and how these relate to key outputs (purity, yield, sustainability, etc.), are extremely time- and labor-intensive to collect. In this work, we describe our collaborative development of a modular, self-driving LLE platform designed to rapidly optimize purification conditions. We will highlight the automation and machine vision development and demonstrate first-pass platform performance using real-world LLE examples.
Session: Integrated and Automated Approaches to Accelerate Lead Generation Across Therapeutic Modalities
Session Chair: Jesse Mulcahy This track focuses on the engineering, robotics, and digital infrastructure required to accelerate high quality lead generation across small molecules, biologics, nucleic acid therapeutics, and cell based modalities. Presenters will demonstrate how modern laboratory environments are being architected through modular robotic systems, automated workcells, standardized labware, and interoperable scheduling and control software. Sessions will highlight engineering strategies for unifying sample management, assay execution, and real time instrument level data capture into reliable and scalable automation frameworks. A major emphasis is placed on the digital transformation that enables these systems to function cohesively. Speakers will detail how structured data models, traceability standards, barcoding systems, and cloud compatible data pipelines create AI and ML ready datasets that support predictive analytics and closed loop discovery. Attendees will learn how engineering driven design choices and robotics centric architectures reduce assay development cycle times, increase throughput, improve data reproducibility, and enable consistent system performance across diverse discovery workflows. This track provides a technical roadmap for building automation ecosystems that are maintainable, interoperable, robotics enabled, and foundational to next generation digitally connected discovery environments.
C.O.L.A.B. — Can Orchestrated Labs Actually Balance walk‑up and lights‑out robotics?
Open to view video.  |   Closed captions available
Orchestrated labs must balance scientific flexibility with scalable automation. The path is an architecture that is scientist‑centric, data‑centric, remotely operable, and progressively agentic—so manual, semi‑automated, and robotic work can interleave without losing intent or provenance. Space constraints make mixed‑mode execution the norm; samples should move like data through well‑defined interfaces, whether handled at the bench, by robots, AMRs, or automated storage. We propose a plug‑in architecture: protocols are modular, versioned, and scheduler‑addressable with human and robot variants; FAIR data models capture steps, materials, parameters, locations, and outcomes; capability‑based APIs let new instruments and movement systems declare functions and health; a mode‑aware scheduler coordinates devices, queues, and logistics using telemetry and constraints. Remote operations add unified observability and control (pause, reroute, recover) with auditability. Agentic monitoring layers policies on telemetry to detect anomalies, invoke bounded recovery, and branch protocols based on outcomes—always explainable and governed. People are integral. Users run and recover workflows via guided on/off ramps. Super‑users tune methods and batching, manage queues, and triage alerts. SMEs steward assay intent, demonstration, progression, and agent policies. Technology experts engineer orchestration services, adapters, digital twins, and secure production operations. This model simplifies onboarding new systems, reduces brittle scripts, respects regulated chains through versioning and improves utilization without more floor space. The result: standardized where it counts, flexible where it matters, FAIR by default, remotely operable, and ready for safe autonomy—so errors are resolved faster, decisions adapt to results, and new technology is orchestrated on day one.
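The capability-based registration idea in this abstract can be sketched as a tiny registry: each device declares the functions it offers, the labware it accepts, and its self-reported health, and the scheduler queries by need rather than by device type. All class and field names below are hypothetical illustrations, not the proposed system's actual API:

```python
from dataclasses import dataclass

@dataclass
class Capability:
    """One declared function of a device (hypothetical schema)."""
    name: str      # e.g. "move_plate", "incubate"
    labware: list  # labware formats this capability supports
    healthy: bool  # current health, as self-reported telemetry

class Device:
    """A device registers its capabilities so a mode-aware scheduler
    can route work without hard-coding device-specific scripts."""
    def __init__(self, device_id, capabilities):
        self.device_id = device_id
        self.capabilities = {c.name: c for c in capabilities}

def find_device(devices, capability, labware):
    """Return the id of the first healthy device offering the
    requested capability for the given labware, else None."""
    for d in devices:
        c = d.capabilities.get(capability)
        if c and c.healthy and labware in c.labware:
            return d.device_id
    return None

fleet = [Device("arm-1", [Capability("move_plate", ["96-well"], True)]),
         Device("amr-2", [Capability("move_plate", ["96-well", "vial-rack"], True)])]
assert find_device(fleet, "move_plate", "vial-rack") == "amr-2"
```

Because devices declare rather than hard-code their role, onboarding a new instrument means registering one more capability set, which is the "reduces brittle scripts" benefit the abstract claims.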
End-to-End Automation of the DMTA Cycle: From Automated Reagent Preparation to Degrader Profiling
Open to view video.  |   Closed captions available
Abstract: To bring more drugs to patients faster, the design-make-test-analyze (DMTA) cycle is a central element of NCE research. Within Merck HC KGaA, a cross-functional digital and automation platform aims to fully automate the entire process. This automation strategy encompasses both digital automation and wet-lab automation. In this study, we present the successful complete automation of cellular and biochemical Tier 1 assays for degrader optimization. We have developed technical solutions that allow us to store cell cryostocks and reagents long-term on our automation systems. Starting from these stored aliquots, the system automatically pipettes ready-to-use reagents, which are then automatically delivered to the bulk dispensers. This enables us to profile the next compound generation immediately upon substance availability.
Session: From Bench to Backend: In-House Automation Tools and Custom Software Solutions
Session Chair: Charles Warren Dive into practical examples of how labs are building their own automation tools, LIMS integrations, and custom scheduling software to overcome bottlenecks and align automation with real-world constraints. Hear from teams creating tailored platforms that bridge gaps between commercial solutions and experimental workflows—advancing data integrity, flexibility, and user ownership.
Generating Billions of Measurements to Integrate Drug Discovery and Machine Learning
Open to view video.  |   Closed captions available
Abstract: Drug discovery and machine learning consume large quantities of data to predict experimental outcomes and facilitate new therapeutic development. Classical drug discovery relies on repetition and reproducibility of experiments involving hundreds of molecules across dozens of assays. Machine learning requires orders of magnitude more data and is more tolerant of mislabelled data. The challenge lies in generating datasets of sufficient size, depth, diversity, and reproducibility to enable both classical drug discovery and machine learning. Leash has built an internal data generation engine and iterative testing framework using DNA-encoded libraries capable of screening millions of molecules against human disease targets and used that to screen hundreds of human protein targets. This talk will provide an overview of our approach and results.
Real-Time Analytics for Accelerated Electrochemistry: The ElectroChemputer Platform
Open to view video.  |   Closed captions available
Abstract: Electrochemical synthesis is a sustainable, atom-economical alternative to traditional methods, but its potential for high-throughput discovery and optimization has been limited by a lack of integrated automation and real-time control. We introduce the ElectroChemputer, a programmable, modular platform designed for fully automated electrochemical workflows. This system uniquely combines real-time nuclear magnetic resonance (NMR) spectroscopy and electroanalytical monitoring, offering unprecedented structural and quantitative insights into reaction progression and enabling a new paradigm we call "electrochemical fly-by-wire." Over 170 hours of continuous operation, the ElectroChemputer executed more than 1500 coordinated unit operations and acquired over 600 cyclic voltammograms. The platform demonstrated its robust capabilities by enabling real-time stopped-flow NMR analysis and data processing for complex processes. The ElectroChemputer allows for parallel, multi-step reaction protocols to run autonomously, demonstrating its broad applicability across various reaction classes, electrode materials, and configurations. This modular robot provides a powerful tool for accelerating discovery and optimization: a significant step towards democratizing programmable electrochemical synthesis.
Automating Mid-Scale Antibody Production: Bridging the Gap in Discovery Workflows with Novel Liquid Handling and Filtration Technologies
Open to view video.  |   Closed captions available
Early-stage antibody discovery programs are commonly constrained by a tradeoff between throughput and protein yield. Plate-based production formats with 1–5 mL culture volumes often fail to generate sufficient material for downstream purification and characterization, particularly for challenging biologics. Mid-scale formats—such as 6-well plates and conical tube bioreactors—provide increased yields with culture volumes exceeding 30 mL; however, the lack of integrated, hands-off automation solutions has limited the scalability of these workflows, especially when processing hundreds of candidates weekly. Major bottlenecks include reliable liquid handling of volumes >5 mL and efficient harvesting of supernatant for subsequent purification—capabilities not met by standard automation platforms. To address these gaps, we partnered with automation/liquid handling and labware vendors to conceptualize and develop novel automation technologies tailored to mid-scale antibody production. Central to this effort is the adaptation of a step-motor-driven liquid handling system with swappable pipetting heads, featuring plunger-based pipetting in formats ranging from 96-well to 6-well. On the labware side, we worked with a labware company to engineer a 30 mL pipette tip configuration utilizing four discrete 5 mL channels, enabling sterile, contamination-free transfer of 30 mL cultures from 6-well formats in just two automated movements. This innovation supports rapid and precise handling of higher-volume cultures within an automated workflow. Supernatant harvest for antibody purification typically relies on centrifugal filtration, yet the height and format of 6-deepwell plates often exclude them from conventional automated centrifuges.
To overcome this limitation, we helped design a novel syringe-press-based mechanical filtration consumable compatible with the step-motor-driven liquid handler, using a customized head to slip-load the 6-well syringes—drawing inspiration from recent mag-rod labware adaptations. We further developed a custom liquid handling system featuring a dropped deck plate, providing increased z-clearance for large-format labware and enabling seamless integration of the new 30 mL pipette tips and filtration modules. Proof-of-concept evaluations demonstrate strong performance, with the 30 mL tips achieving 95% volume precision and our 6-well filtration device recovering 90% of sample volumes. The flexibility and reliability of these new tools provide an automation-compatible solution for mid-scale antibody production, enabling higher-throughput discovery workflows with genuinely hands-off operation. These advances lay the foundation for scaling antibody production to even larger cultures that were previously considered out of reach for automation. Ongoing development is focused on further integration into fully autonomous workstations and expanding compatibility across different bioproduction platforms. The presentation will highlight engineering insights, performance data, and future directions for these enabling technologies, opening new possibilities for biologics discovery and screening.
Session: Automated Cell Technologies: Engineering Scalable Platforms for Cell Therapy and Bioprocessing
Session Chair: Wali Malik, MA Explore how automation is transforming the development and manufacture of cell therapies, from cell isolation and expansion to in-process analytics and final formulation. This session will highlight platform strategies, closed-loop systems, and modular workcells that support high-throughput, GMP-aligned cell therapy workflows—enabling scalability, traceability, and therapeutic consistency.
From Bench to Bot: A Roadmap for Scaling Cell Therapy Lab Automation
Open to view video.  |   Closed captions available
Advancing Disease Modeling: Scalable, Integrated Platforms for Stem Cell Research
Open to view video.  |   Closed captions available
Abstract: Cell-based modeling of complex diseases requires large-scale, genetically diverse patient cohorts and the ability to generate disease-relevant cell types at scale. Induced pluripotent stem cells (iPSCs) provide a noninvasive and renewable source of patient-derived cells, yet the parallel manipulation of hundreds of lines remains a significant bottleneck. Manual workflows introduce batch effects and experimental variability that can obscure the subtle biological signals associated with complex, low-penetrance genetic variants. To overcome these limitations, we have developed the NYSCF Global Stem Cell Array®, a fully automated cell-culture platform integrating robotics, liquid handling workstations, automated imaging, and peripheral devices under the coordination of custom software and artificial intelligence. Automation clusters are configured to enable end-to-end culture, differentiation, genetic manipulation, and screening of hundreds of cell lines in parallel with reproducibility and standardization unmatched by manual approaches. Proprietary software tools direct flexible liquid handling methods that dynamically adapt to cell type, cell density, and labware format, supporting pooling, splitting, stamping, and re-arraying capabilities. Liquid-handling robots serve as principal drivers of complex culture workflows, submitting requests to a centralized queue that allows peripheral devices to asynchronously and independently execute protocols. Method management software maintains comprehensive catalogs of production and development methods using version-control principles and interfaces with a calendar-based reservation system for coordinating workflows. Peripheral devices and services can be taken in or out of service, ensuring accurate execution at runtime.
Submethod libraries provide real-time updates to a tracking system, offering live dashboard views of method progression, event forecasting for user interventions, and workflow status tracking with user notifications. Interactive worklist generators with flexible GUIs reference our sample tracking databases and incorporate business logic into multi-parametric, data-rich worklists that define liquid-handling protocols. By capturing comprehensive metadata including donor information, cell line characteristics, cell growth kinetics, and integration of nightly imaging across all stages of experimentation, we can serve data through an integrated custom Laboratory Information Management System. This infrastructure provides precise contextualization of experimental data, including protocol parameters, plate maps, and well tracking, thereby reducing error and facilitating design-of-experiment, optimization, and validation of experimental protocols. The scale offered by the system has also allowed us to model gene x environment interactions, such as in Post-Traumatic Stress Disorder patient-derived NGN2 neurons following glucocorticoid treatment. Rich morphological embeddings can be extracted from patient-derived cells using high-content imaging assays, allowing unbiased detection of disease-associated signatures, such as in Parkinson’s Disease. Through this integration of automation, data fidelity, and advanced analytics, the platform enables the discovery of clinically relevant insights into complex disease biology.
An Intelligent Robotic Agent for Research-Scale Cell Culture Automation
Open to view video.  |   Closed captions available
Purpose of the study: Manual cell culture is a labor-intensive and error-prone bottleneck in research, and existing automation often fails research labs due to inflexibility and high cost. We aim to develop an intelligent, end-to-end platform that is compact, modular, and affordable, capable of handling both multi-well plates and common T-flasks. The system integrates natural-language control for intuitive protocol setup and uses AI-powered image analysis for automated confluency assessment and passaging decisions. This platform automates the full cell culture workflow and interoperates with other robotic systems to support downstream processes. Experimental Procedures: The system architecture integrates three core subsystems: a high-capacity automated incubator, a modular liquid handling platform, and an on-board liquid management system, all supervised by a central AI agent. The incubator is designed as a flexible, centralized cell culture core for multi-user environments, accommodating diverse labware like flasks and plates to bridge large-scale expansion and high-throughput assays. The gantry-based liquid handler enables efficient parallel processing of multiple vessels. It features modular, swappable workstations for serviceability and an access port for interoperability with external robotic arms, facilitating sample hand-off for downstream analysis. An integrated liquid management system provides on-demand, temperature-conditioned sterile reagents to the platform. The entire platform is orchestrated by an AI agent that translates natural language requests from scientists into executable robotic workflows. A digital twin allows for pre-execution simulation and protocol optimization, while the AI leverages real-time image analysis to enable automated, data-driven decisions during culture, such as initiating passaging once cells reach pre-defined biological thresholds.
Summary of Data: Two functional liquid handler prototypes were developed and validated at Carnegie Mellon University. An initial version successfully integrated with a Thermo Fisher Spinnaker™ Mover for automated T-75 flask handling, demonstrating interoperability. The second prototype showcases high-throughput parallel processing, autonomously managing four T-175 flasks simultaneously. The platform has executed over 20 unattended passage and expansion protocols across four diverse cell lines (HEK293T, C2C12, MRC5, iPSC-MSC). The onboard image-analysis model, trained on C2C12, HEK293T, and iPSC-MSC data, matches expert labels within ±5 percentage points for confluency and ±5% for cell counts (n=5), enabling automated, data-driven decisions. Furthermore, AI-powered natural language protocol generation has been achieved; scientists can now verbally describe an experiment, generating an executable robotic protocol in under one minute. Conclusion: This work validates an intelligent robotic platform that translates natural language commands into reproducible, parallel cell culture workflows, demonstrating a powerful and accessible end-to-end solution to eliminate critical bottlenecks in life science research. Next Steps: The automated incubator and liquid management modules are currently in the prototyping stage, with full system integration targeted for completion and exhibition at SLAS 2026. Upcoming experiments will focus on validating large-scale cell expansion, harvesting, and seamless interoperability with other robotic systems.
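The confluency-gated passaging decision described above can be sketched in a few lines of Python; the threshold, data fields, and readings below are illustrative assumptions, not the platform's actual interface.

```python
# Hypothetical sketch of a confluency-gated passaging decision.
# Threshold and field names are illustrative, not the platform's API.
from dataclasses import dataclass

@dataclass
class VesselReading:
    vessel_id: str
    confluency_pct: float  # estimated by the image-analysis model
    cell_count: int

def should_passage(reading: VesselReading, threshold_pct: float = 80.0) -> bool:
    """Trigger passaging once cells reach a pre-defined biological threshold."""
    return reading.confluency_pct >= threshold_pct

readings = [
    VesselReading("T175-1", 84.2, 18_500_000),
    VesselReading("T175-2", 61.7, 11_200_000),
]
to_passage = [r.vessel_id for r in readings if should_passage(r)]
```

In a real system the threshold would be cell-line specific and the reading would come from the on-board imaging model rather than hard-coded values.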
Session: Streamlining Lab Operations: Innovations in Automated Transport for Labware
Session Chair: Nicolas Houvenaghel Embark on a journey exploring innovations in automated transport for scientific labware utilizing mobile robotics and beyond. Navigate how cutting-edge technologies optimize efficiency, minimize manual intervention, and elevate productivity in laboratory settings, revolutionizing the way we conduct research and experiments.
Modalities of Intelligent Transport: Building the Physical Infrastructure for Autonomous Discovery
Open to view video.  |   Closed captions available
This talk will review current intelligent transport modalities (Autonomous Mobile Robots, MagneMotion track systems, planar motors, etc.) and their application in the lab, analyzing the advantages, disadvantages, current limitations, and technical maturity of each. It will then look to the future and highlight key areas of research and development that will unlock capabilities critical for these systems to serve as the foundation of the next generation of autonomous laboratories.
MilliporeSigma introduces its new Autonomous DMTA Laboratory for Artificial Intelligence Drug Discovery.
Open to view video.  |   Closed captions available
The Design-Make-Test-Analyze (DMTA) cycle is a fundamental framework in product development, particularly in the pharmaceutical industry. The integration of automation can be used to enhance the efficiency and effectiveness of the DMTA cycle. By leveraging advanced technologies such as machine learning, artificial intelligence, and robotic process automation, Merck KGaA has streamlined each phase of the cycle. Automation facilitates rapid prototyping and testing, reduces human error, and accelerates data analysis, leading to faster decision-making and improved product quality. Furthermore, automated systems can provide real-time feedback, allowing for iterative design improvements and fostering a culture of continuous innovation. This approach not only reduces time-to-market but also optimizes resource allocation, ultimately driving competitive advantage in an increasingly dynamic marketplace. The findings suggest that embracing automation within the DMTA cycle can significantly enhance productivity, innovation, and responsiveness to market demands.
Integrating AI, ML and Custom Robotics into Pharmaceutical R&D Automated Workflows for Small Molecule Drug Development
Open to view video.  |   Closed captions available
Abstract: Pharmaceutical small molecule drug development involves finding new synthetic routes and optimizing existing routes to deliver desired drug molecules on the large scales required in manufacturing. Such route scouting and optimization requires significant screening to be performed in R&D labs to explore large parameter spaces (e.g., testing combinations of solvents and reagents at different concentrations and/or temperatures), searching for conditions meeting the required criteria for yield, cost, robustness, etc. These parameter spaces can be extremely large (>10,000 possible combinations of variables), requiring sophisticated tools to find an optimal condition as quickly as possible lest the screening slow down the drug’s development timeline. At Takeda, we have implemented a combination of AI/ML tools, off-the-shelf automation and custom robotics to achieve two goals: a) perform as many experiments as possible, and b) use existing data to narrow down the number of experiments to be performed. Overall, this allows us to rapidly screen large parameter spaces with limited experiments, providing results as soon as possible to chemists and chemical engineers who are waiting on screening data to inform their next experiments. We herein present two workflows, both of which hinge upon the use of AI and ML tools to perform experiments, analyze data, and determine which experiments should be run next. Earlier in the pipeline, high throughput experimentation (HTE) is needed to optimize or scout new routes. Takeda’s HTE workflow starts with the use of Microsoft Copilot to determine the first conditions to be screened, then uses a combination of Atinary (a Bayesian Optimization platform) and Katalyst (a web-based platform that integrates with our existing tools) to plan and execute the experiment on Unchained Labs robots.
A mobile robot delivers the samples to our UPLC for analysis, then the data is analyzed in Katalyst, sent to Atinary to plan the next experiment, and the cycle continues for the desired number of generations or until a suitable hit is found. Later in the pipeline, when a route has been optimized but requires robustness testing, Design of Experiments (DoE) is used to determine allowable thresholds for process variables. Takeda’s late-stage DoE workflow leverages an LLM-based solution developed by b-12 Labs to translate existing procedures into experimental protocols for a Chemspeed robotic platform designed for scaled-up, medium-throughput experiments. Ultimately, both workflows will be presented in their current and planned future states, illustrating the rapid changes we have been implementing in Takeda’s small molecule R&D environment.
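As a rough illustration of the closed-loop plan–execute–analyze cycle described above, the skeleton below substitutes plain random search for the Bayesian optimizer and a mock yield function for robot execution and UPLC analysis; every name, condition, and number is hypothetical.

```python
# Closed-loop screening skeleton: plan -> execute/analyze -> learn -> repeat.
# Random search stands in for a Bayesian-optimization service; the yield
# function is a deterministic mock, not real chemistry.
import random

SOLVENTS = ["MeCN", "DMF", "EtOH"]
TEMPS_C = [25, 40, 60, 80]

def run_experiment(solvent: str, temp_c: int) -> float:
    """Placeholder for robotic execution + UPLC analysis; returns a mock yield."""
    base = {"MeCN": 0.55, "DMF": 0.70, "EtOH": 0.40}[solvent]
    return round(base + 0.002 * temp_c, 3)

def closed_loop(generations: int, seed: int = 0):
    """Iterate plan/execute/learn for a fixed number of generations."""
    rng = random.Random(seed)
    best, best_yield = None, -1.0
    for _ in range(generations):
        cond = (rng.choice(SOLVENTS), rng.choice(TEMPS_C))  # "plan" step
        y = run_experiment(*cond)                           # "execute + analyze"
        if y > best_yield:                                  # "learn" step
            best, best_yield = cond, y
    return best, best_yield
```

A real loop would replace the random proposal with a surrogate-model acquisition step and stop early once a condition meets the yield/cost/robustness criteria.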
Session: Next-Gen Intelligence: AI-Driven Operations and Smart Lab Automation
Session Chair: Joel Karpiak Discover how artificial intelligence is reshaping lab operations—from predictive maintenance and smart scheduling to AI-guided experimentation and adaptive automation. This session showcases how integrating machine learning with laboratory systems accelerates discovery, improves reproducibility, and drives more autonomous lab environments.
A Modular, Scalable Framework for Bringing AI-Driven Autonomous Chemistry into the Laboratory
Open to view video.  |   Closed captions available
Abstract: Recent advances in artificial intelligence are reshaping how we conceptualize autonomous laboratory systems, making the long-standing vision of fully closed-loop experimentation increasingly achievable. While automation has seen significant progress in biology, chemistry has lagged due to the complexity of reactions, lack of standardization, and the diversity of desired outcomes. Large language models (LLMs) offer a new way forward: their ability to interpret natural language, perform structured reasoning, and make adaptive decisions makes them well-suited for managing the intricacies of chemical experimentation and drug discovery. As a proof-of-concept, we integrated a custom Biotage Initiator Plus reactor platform with SciBORG, an LLM-driven framework designed for AI-driven experimentation. This integration demonstrates not only that a reaction platform can be operated via natural language instructions but also that reaction parameters can be optimized autonomously. By combining memory of system state with robust planning capabilities, SciBORG can reliably execute protocols, enforce correct sequencing, and generate reproducible outputs—all while reducing the expertise barrier required to run complex automation, making it more accessible to chemists. Reaction optimization is a particularly strong target for automation. It requires reproducible protocols, high-throughput experimentation, and the systematic exploration of large parameter spaces—conditions ideally suited to robotic execution and AI-driven decision-making. In our studies, SciBORG performed iterative experimentation and outcome evaluation across defined solvent, base, duration, and temperature conditions. From this data, the system autonomously identified improved reaction outcomes, highlighting its potential for closed-loop optimization.
Beyond this single platform, SciBORG is designed with modularity at its core, allowing rapid integration of additional instruments and drivers. The proof-of-concept with the Initiator Plus illustrates how this approach can scale: with the addition of automated workup and data analysis tools, the system could run continuously in a self-evaluating loop until optimal parameters are reached. This level of high-throughput, AI-driven experimentation has direct implications for drug discovery, where accelerating optimization can dramatically shorten development timelines and expand the search for novel therapeutics.
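The modularity claim above — adding instruments and drivers without touching core logic — is commonly realized with a driver registry; the sketch below is an assumption about how such a layer could look, not the actual SciBORG API.

```python
# Illustrative driver registry: instruments plug in via registration rather
# than edits to core orchestration logic. Names are hypothetical.
from typing import Callable, Dict

class DriverRegistry:
    def __init__(self) -> None:
        self._drivers: Dict[str, Callable[[dict], str]] = {}

    def register(self, name: str):
        """Decorator that adds a driver function under an instrument name."""
        def wrap(fn: Callable[[dict], str]):
            self._drivers[name] = fn
            return fn
        return wrap

    def execute(self, name: str, params: dict) -> str:
        """Dispatch a parameter dict to the named instrument driver."""
        if name not in self._drivers:
            raise KeyError(f"no driver registered for {name!r}")
        return self._drivers[name](params)

registry = DriverRegistry()

@registry.register("initiator_plus")
def heat_step(params: dict) -> str:
    # Stand-in for a real reactor command; returns a human-readable echo.
    return f"hold {params['temp_c']} C for {params['minutes']} min"
```

New instruments (automated workup, analytics) would register additional drivers without any change to the dispatch code, which is the property that lets such a framework scale.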
Building Robust Automated Workflows at Scale for Therapeutic Antibody Discovery: In-house LIMS-Automation with Bidirectional Communication and Software Engineering practices
Open to view video.  |   Closed captions available
Abstract: BigHat Biosciences (~100 employees; San Mateo, CA) develops therapeutic antibodies via a platform combining AI/ML design with high-throughput wet-lab execution. Our automation group built the infrastructure at the heart of our platform, Milliner, to enable fast AI-guided design cycles. In this talk we focus on the automation architecture itself, specifically how Reccy, our in-house “LIMS++” (LIMS and workflow manager), orchestrates a fleet of robotic workcells to turn designs into standardized, traceable operations, and how this integration underpins speed, data integrity, and reproducibility across therapeutic programs with partners such as Merck, Johnson & Johnson, Eli Lilly, and AbbVie. We also describe the multi-layer infrastructure used to track our lab automation fleet in real time, as well as the software engineering practices that enable flexible, robust scaling of automation coverage across wet-lab workflows. Milliner runs iterative, ML-guided Design-Build-Test-Learn (DBTL) cycles in which Reccy issues instructions to a fleet of robotic workcells. Reccy handles design registration, ordering, work planning, and machine-readable commands to instruments and robots. Our purpose-built API exposes instructions for workcells and returns execution feedback to Reccy. Reccy ingests these data and metadata to provide complete lineage, real-time visibility into processing status and antibody performance, and a comprehensive execution record. The automation layer is engineered for scale. Beyond Reccy’s bidirectional communication with the fleet, our infrastructure enables automated fleet management with continuous method deployment, health and usage tracking, automated testing and more. Our bidirectional LIMS-automation infrastructure drives efficiency, data integrity, and robustness at scale.
We track integration signals such as Reccy-automation communications per run, data volumes exchanged, and automation coverage measured as the share of methods connected to Reccy. Operational impact is quantified by manual touchpoints removed and reduced operator hands-on time. These metrics link infrastructure to therapeutic outcomes: end-to-end lineage tracing each construct through Milliner down to pipetting actions and metadata (barcodes, traces, videos…), higher data quality reproducible by partners, and flexible coverage across programs. Serving as the control plane for Milliner’s DBTL cycles, Reccy standardizes execution and passively captures metadata across workcells. Supported by version control, observability, and fleet management, this stack delivers reliable ML training data while compressing design-to-data timelines from weeks to days. We will expand the number and depth of Reccy-automation integrations throughout our workflows, to increase coverage and improve metadata capture. We will continue to develop software tools grounded in standard software engineering practices to maximize infrastructure efficiency, robustness, and scalability. Our goal is to translate these platform infrastructure improvements into measurable increases in the rate at which therapeutic antibody candidates advance toward the clinic.
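A minimal sketch of the kind of bidirectional command/feedback exchange described for Reccy and the workcell fleet might look as follows; all field names, methods, and barcodes are hypothetical, not BigHat's actual message schema.

```python
# Hypothetical LIMS<->workcell message pair. The feedback echoes the command's
# inputs so every result stays linked to its lineage (e.g., plate barcodes).
import json
from datetime import datetime, timezone

def make_command(workcell: str, method: str, plate_barcode: str) -> str:
    """Serialize a machine-readable instruction for a workcell."""
    return json.dumps({
        "workcell": workcell,
        "method": method,
        "inputs": {"plate_barcode": plate_barcode},
        "issued_at": datetime.now(timezone.utc).isoformat(),
    })

def make_feedback(command: str, status: str, outputs: dict) -> dict:
    """Build execution feedback that carries lineage back to the LIMS."""
    cmd = json.loads(command)
    return {
        "workcell": cmd["workcell"],
        "method": cmd["method"],
        "lineage": cmd["inputs"],  # echo inputs for end-to-end traceability
        "status": status,
        "outputs": outputs,
    }
```

The design point is that feedback is structured and self-describing, so the LIMS can ingest it passively instead of relying on operators to transcribe results.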
Cellular Technologies
Session: Complex In Vitro Models (CIVM) in Early Discovery
Session Chair: Celine Eidenschenk Attendees will learn from industry and academic leaders about novel cellular technologies and their application in the early discovery phase of pharmaceutical development. Discussions will include application of medium- to high-throughput complex in vitro models (CIVM) for efficacy, safety, and ADME.
Modeling the Tumor Microenvironment with 3D Bioprinting to Advance PDAC Drug Discovery
Open to view video.  |   Closed captions available
Abstract: The tumor microenvironment (TME) is a complex milieu composed of non-cancerous cells, extracellular matrix (ECM), and soluble factors that interact with tumor cells. These interactions play crucial roles in tumorigenesis and therapeutic response. Conventional in vitro models, including organoids mono-cultured in Matrigel, fail to fully capture the structural and biophysical complexity of the TME and lack cancer-associated fibroblasts (CAFs). Pancreatic ductal adenocarcinoma (PDAC), characterized by a dense desmoplastic stroma, exemplifies the limitations of current modeling approaches. To address this, we developed a custom ECM that mimics the PDAC TME. Integrated into Inventia Life Science’s Rastrum 3D bioprinting platform, this matrix incorporates PDAC-relevant proteins and physical characteristics, enabling scalable and reproducible tissue modeling. The platform supports robust growth and self-organization of CAFs and PDAC organoids into structures resembling native tumor tissue. Tumor cell–TME interactions and CAF heterogeneity are maintained, preserving inflammatory and myofibroblastic CAF subtypes. Finally, we evaluate phenotypic responses to therapeutic assets and standard-of-care drugs using this model compared to traditional in vitro assays. This platform can be translated into other high-unmet-need indications to support a wide range of drug discovery programs.
Overcoming Stroma-Driven Resistance: Humanized Models for Precision Cancer Therapy
Open to view video.  |   Closed captions available
Abstract: Despite the recognized therapeutic importance of the tumor microenvironment (TME), clinical decision-making, biomarker discovery, and drug development remain focused primarily on the epithelial tumor compartment. This narrow perspective likely contributes to limited patient responses to standard therapies and the frequent failure of new drugs in clinical trials. To achieve more precise and personalized cancer care, it is essential to identify stromal cues that influence therapeutic outcomes. However, existing preclinical models rarely capture intratumoral heterogeneity, particularly the complex interplay between cancer cells, cancer-associated fibroblasts (CAFs), and immune cells. We aimed to dissect stromal drivers of resistance and develop complementary platforms to test interventions that restore treatment response. We analyzed independent cohorts of breast cancer patients and established fully humanized tumor-in-a-dish models (HuTME) incorporating cancer, immune, and CAF components to characterize stromal features linked to resistance to targeted therapy. In parallel, we developed the Micro Immune Response On-chip (MIRO) platform, a microfluidic system that reconstructs the cancer–stroma interface and enables mechanistic, spatially resolved analysis of immune–stromal interactions under therapeutic challenge. Patient tumor analyses identified a CAF subset associated with immune exclusion and reduced trastuzumab efficacy. Targeted stimulation of the IL2 pathway in HuTME models recapitulating these resistance features restored immune surveillance and reinstated therapeutic responses via antibody-dependent cellular cytotoxicity (ADCC). Using MIRO, we further demonstrated that stromal barriers not only suppress immune activity but also physically impede immune cell movement, whereas IL2-driven modulation enhanced immune cell speed, spreading, and infiltration, effectively overcoming stromal suppression.
Together, patient tumor data and preclinical models indicate that the efficacy of antibody-based targeted therapies depends strongly on preexisting stromal attributes, with direct implications for therapeutic decision-making and for refining molecular and cellular classifications of cancer. These models provide controllable, reproducible systems to evaluate cancer–stroma interactions, establish CAFs as key determinants of therapeutic resistance, and highlight IL2 pathway stimulation as a strategy to restore sensitivity. Ongoing efforts integrate patient-derived HuTME models with automation and live-cell imaging for high-throughput drug screening, while leveraging MIRO to dissect spatial mechanisms of resistance at high resolution. By combining patient tumor analyses with patient-specific models, our approach captures interpatient heterogeneity, improves the prediction of therapy responses, and opens new avenues for patients with limited treatment options. Ultimately, these platforms aim to personalize therapeutic strategies and overcome current challenges in managing stroma-rich cancers of diverse origins.
Expanding the repertoire of accessible cell phenotypes to access untapped reservoirs of novel MoA
Open to view video.  |   Closed captions available
Abstract: Deciphering cell types, their specialized functions within tissues, and the dynamics of cellular interactions are fundamental to all cell biology fields. Single-cell RNA-sequencing and flow cytometry have been transformative in identifying transcriptionally defined cell types, but next steps aiming to map functions to these cell types remain challenging. Current technologies are well-suited for evaluating cell-intrinsic phenomena, such as proliferation or surface protein expression, but remain limited in directly evaluating cell-extrinsic functions by single cells, such as cytotoxicity, phagocytosis, or neuron firing. As a result, the opportunity to discover therapeutics that modulate these phenomena remains constrained. To address these limitations, we developed a novel cell biology technology (Cellanome’s R3200) to tackle three technology challenges – 1) disruption of cell-cell interactions inherent to single-cell analysis, 2) decoupling of molecular pathways from cellular function when adherent cells are suspended in droplets, and 3) loss of temporal dynamic insights not retrievable from static snapshots. The R3200 enables longitudinal analysis of individual cells or groups of cells, followed by transcriptomic profiling of the same cells. Leveraging Cellanome’s computer vision infrastructure, individual cells are dynamically detected and enclosed within permeable hydrogel compartments called CellCage™ enclosures (CCEs), in which cells are cultured, imaged, and lysed for downstream RNA-sequencing. Tens of thousands of CCEs can be analyzed in parallel, enabling cell behavior and interactions to be directly studied at scale through time-lapse imaging rather than indirectly with surrogate markers. Using this approach, we established new models of cell-cell interactions by enclosing individual T-cells with proximal tumor “passenger” cells, generating tens of thousands of miniaturized cell-killing co-cultures.
Longitudinal imaging revealed functional heterogeneity, enabling us to stratify individual T-cells by their speed of tumor cell killing. We further established CCE compatibility with adherent cells by profiling microglia, validating that enclosed cells were functioning at the time of RNA processing. Thus, we were able to capture RNA from individual, unperturbed microglia and identify gene signatures associated with high versus low phagocytic activity. Last, we established a data analysis pipeline integrating longitudinal imaging data with terminal transcriptomic analysis from individual CCEs, linking temporal dynamic properties to underlying molecular regulators. Collectively, these results demonstrate that our novel CellCage™ technology extends the study of cell biology beyond static, intrinsic features, enabling direct access to cell-extrinsic functions mapped to transcriptional profiles. By coupling these new models of complex cellular phenomena with CRISPR and small molecule perturbations on the R3200 platform, Cellanome presents substantial opportunity to accelerate discovery of first-in-class therapeutics.
Automation and Standardization in Organ-on-a-Chip Systems: From academic excellence to commercial solutions
Open to view video.  |   Closed captions available
Abstract: Organ-on-Chip technologies (aka microphysiological systems) are rapidly changing the methodology and workflow in preclinical stages of drug development in the pharmaceutical industry. To increase both the quality of data points obtained with such systems and user acceptance, it is necessary to progress from insular solutions to standardized, modular and validated components. This will also help in the commercialization of the technology. The project “UNLOOC”, funded by the European Union and comprising 51 partners from 10 countries, is demonstrating this approach in 5 use cases for different organ models. We report here on the use of standardized microfluidic modules for the cultivation of either individual cells or organ-like cell populations. These modules follow the concept of interoperability with existing lab equipment, namely with respect to external dimensions (microscopy slide or double slide format) and fluidic port spacing (usually 4.5 mm, similar to a 384-well plate). Presented examples for this approach are a microfluidic device for a skin model on chip, which allows the continuous measurement of transepithelial resistance as a quantitative measure of cell barrier formation, and microfluidic modules with identical footprint for fluidic routing and controlled cell extraction for cell transfer to other microfluidic modules such as sensor or cell imaging modules. Other examples of such devices address cancer-on-chip, blood-brain-barrier and lung-on-chip models. First results show high viability and metabolic activity of such organ models in combination with an improved usability of the microfluidic components. The results of this work also generate input for the standardization efforts currently under way in the ongoing revision of ISO 22916 “Microfluidic devices: Interoperability requirements for dimensions, connections and initial device classification”.
Session: Complex In Vitro Models (CIVM) in Lead Optimization
Session Chair: Amanda Ouchida, PhD During lead optimization of pharmaceutical development, increasingly complex models may be employed for mechanistic investigations, issue resolution, or to probe nonclinical-to-clinical translation. Attendees will hear from presenters utilizing complex models such as microphysiological systems (e.g., tissue-on-chip) to address key questions arising in late discovery.
Pancreatic Cancer Organoids-on-a-Chip for CAR T cell Therapy Evaluation and Screening
Open to view video.  |   Closed captions available
Abstract: Purpose of the study: Pancreatic ductal adenocarcinoma (PDAC) has a 5-year survival rate of around 10% and responds poorly to current therapies due to its immune-cold tumor microenvironment (TME). Although chimeric antigen receptor (CAR) T cell therapy shows promise in blood cancer, its efficacy in PDAC is hindered by stromal barriers, immunosuppression, and antigen heterogeneity. This underscores the need for advanced in vitro models with integrated analytics to assess CAR T cell responses and uncover resistance mechanisms. Organoids capture patient-specific tumor features, while organ-on-a-chip systems recreate key TME components. Coupled with AI-powered analysis, such a platform enables mechanistic insights and therapeutic response evaluation in a personalized context. Description of experimental procedures: We developed a patient-derived pancreatic cancer organoids-on-a-chip model engineered with a vascularized and immunocompetent PDAC TME including autologous patient-derived organoids (PDOs), cancer-associated fibroblasts (CAFs), tumor-associated macrophages (TAMs) and regulatory T cells (Tregs), as well as a vascular network for the transport of CAR T cells. In addition, deep learning-based models were integrated with the chip to assess CAR T cell therapeutic responses through analysis of their infiltration and spatiotemporal dynamics. Summary of data: Immunofluorescence (IF), scanning electron microscopy (SEM), and histology confirmed that key components of the TME were preserved on-chip, collectively forming an immune-cold niche. scRNA-seq analysis further revealed that both stromal and immune niches promoted PDAC progression. Upon infusion of mesothelin (Meso)-CAR T cells into the chip, we analyzed CAR T cell infiltration and dynamic behavior, recapitulating heterogeneous responses across patients and demonstrating that the desmoplastic stroma served as a key barrier to CAR T cell infiltration.
This observation was further supported by a deep learning model trained on clinical pathological images. Subsequent administration of fibroblast activation protein (FAP)-CAR T cells followed by Meso-CAR T cells led to enhanced infiltration and improved anti-tumor responses. Additionally, IL-18-secreting Meso-CAR T cells were tested on chip, showing improved therapeutic efficacy associated with increased CAR T cell activation. Moreover, a deep learning model was developed to extract early-stage morphological and trajectory features of CAR T cells on chip and to examine their associations with therapeutic responses. Notably, the model exhibited reasonable accuracy in estimating responses and identified distinct early-stage characteristics such as perimeter, circularity, and speed associated with different response groups. Conclusion statement: This patient-derived, AI-assisted pancreatic cancer organoid-on-a-chip platform effectively recapitulates patient-specific TME and enables CAR T cell therapy evaluation and screening. It holds strong potential for uncovering mechanisms of therapy resistance and advancing personalized medicine. Next steps and future experiments: Clinical trial-derived patient samples will be integrated into the chip model, and associated clinical data will be incorporated into deep learning-based models. Together, this will establish a clinical-trial-on-a-chip pipeline for therapy evaluation and response estimation to inform clinical decision-making.
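The early-stage features named above (perimeter, circularity, speed) are standard image-analysis quantities; a minimal sketch of how they are conventionally computed from a segmented outline and a centroid track, not the authors' pipeline:

```python
# Standard morphology/trajectory features for tracked cells.
# circularity: 4*pi*A / P^2 equals 1.0 for a perfect circle.
import math

def circularity(area: float, perimeter: float) -> float:
    """Shape compactness in (0, 1]; lower values mean more irregular outlines."""
    return 4 * math.pi * area / perimeter ** 2

def mean_speed(track: list, dt_min: float) -> float:
    """Mean centroid displacement per minute along a time-lapse track."""
    steps = [math.dist(a, b) for a, b in zip(track, track[1:])]
    return sum(steps) / (dt_min * len(steps))
```

Feature vectors like these, computed per CAR T cell over early frames, are the kind of input a downstream classifier could use to separate response groups.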
Circavent: High-Throughput Brain Organoid Platform for Scalable Neuropsychiatric Drug Discovery
Open to view video.  |   Closed captions available
Abstract: Increasing translational demands in neuroscience and neuropsychiatric drug discovery require highly reproducible, scalable, and data-rich platforms for disease modeling and compound screening. To meet this need, we present Circavent, an automation-enabled brain organoid workflow for multi-omic data acquisition and drug testing. Our platform leverages automated liquid handling and high-throughput imaging to maintain and monitor region-specific organoids derived from human iPSC lines. This robust automation enables consistency and throughput across hundreds of samples, supporting scalable, standardized experimentation and quality control for complex cell-based models. Discovery efforts combine comprehensive multi-omic integration in brain organoids with analytical pipelines spanning transcriptomics, posttranslational protein modifications, and metabolomics to identify robust cellular and molecular disease features. AI-driven drug prediction, informed by large-scale aggregated datasets, enables prioritized selection of candidate therapeutics without prior disease-specific bias. Predicted compounds are then evaluated in patient-derived organoids, where molecular, proteomic, and functional data are used to assess therapeutic efficacy. Applying this platform to bipolar disorder, we identified novel targets and are validating compounds capable of reversing disease-relevant cellular and molecular phenotypes in both organoid and animal models. Our workflow represents an end-to-end pipeline, spanning unbiased drug prediction, model development, quality-controlled data acquisition, and preclinical evaluation. This platform is readily adaptable to a wide range of neurological disorders, providing an efficient and reliable foundation for drug target identification and validation.
Session: Complex In Vitro Models (CIVM) to Support Regulatory Filing
Session Chair: Graham Marsh, PhD Encouragement to utilize CIVMs for regulatory purposes has been evident since updates to ICH guidelines and has garnered greater attention following new FDA modernization legislation. This session covers both technology development of CIVMs to support regulatory filings as well as needs/experiences of those directly involved in these processes.
Harnessing Bioconvergence: Validated MPS data to help transform drug discovery & development while reducing reliance on animal testing
Open to view video.  |   Closed captions available
Abstract: Ensuring the safety and efficacy of new drugs is critical to delivering impactful therapies for patients. One key reason for current inefficiencies is a reliance on animal models. Importantly, the European Commission and FDA are actively creating plans to enable the elimination of animal testing. At Merck we are focusing on a pragmatic method to guide our efforts in this regard. Therefore, we have sorted our animal tests into “three baskets." Basket 1 includes tests for which alternatives have been identified; Basket 2 consists of tests that cannot yet be replaced today but already have hypotheses for potential alternatives; Basket 3 contains tests with no current hypotheses for replacement, signaling areas needing further investigation (Kleinschmidt-Doerr et al., submitted to ALTEX). As part of an internal ONE Merck project (Bioconvergence) and under the “Basket 2” umbrella, we are developing a more accessible organ-on-a-chip system that leverages years of human-based organoid development, combined with unique semiconductor / microelectronics fabrication. The combined result is a custom silicon semiconductor “organ-chip”, integrated with organoid biology, that is potentially scalable from a single organ model configuration (intestine) to a connected multi-organ model configuration (first iteration: liver and intestine). By utilizing modern semiconductor technology, we can incorporate many sensors directly into the “GUT chip” (optical, electrical, and potentially photonic) including barrier integrity and cell viability (TEER), metabolic parameters (O2, ROS, glucose, lactate), and key proteins (organ specific), in a fashion compatible with rich multiomic endpoint measurements. This design not only enhances physiological relevance but also generates large volumes of time-resolved data that can feed into AI models, making it a powerful tool for predictive toxicology.
The human small intestine plays a critical role in drug metabolism, drug absorption, and immune response, making it a key target for pharmacological and toxicological research. However, traditional models often fail to capture its complexity and patient-specific physiology. Cellular characterization of human intestinal-like organoids from induced pluripotent stem cells (iPSC) confirmed the presence of key intestinal cell types, including enterocytes, goblet cells (MUC2), Paneth cells (LYZ), enteroendocrine cells (CHGA), epithelial cells (E-cadherin), and brush border structures (Villin). When seeded onto the “Gut chip”, enhanced and prolonged functionality was observed. Thus, this iPSC-derived “Gut chip” offers a promising tool for toxicology screening, while enabling integration into more advanced MPS, including organ-on-chip. The elimination of animal testing in pharmaceutical development is both a moral imperative and a scientific necessity. One major route to achieve this is the development and validation of novel human cell-based in vitro systems utilizing sophisticated semiconductor-based chips. Ultimately, the “chip” platform being developed will allow precise, relevant measurements in 2D cultures and/or 3D organoids representing diverse human tissues, connected into systems to model tissue/organ crosstalk in drug exposure/toxicity assessments.
Industrializing Organ-on-a-Chip Technology: High-Throughput, Automation-Ready Toxicology on the AVA™ Emulation System
Open to view video.  |   Closed captions available
Abstract: Background: Despite strong scientific promise, Organ-on-a-Chip (OoC) technologies are not yet standard in pharma workflows due to limited throughput, high manual burden, and an absence of large reference datasets. Building on the established predictivity of the Emulate Liver-Chip S1 for drug-induced liver injury (DILI) on the Zoë Culture Module [1], we evaluated the AVA™ Emulation System for equivalent performance. AVA is a self-contained, automation-ready OoC workstation that runs 96 independent Organ-Chip Emulations per run and addresses operational bottlenecks that hinder scale-up and routine adoption. Methods: Using standardized SOPs across three independent laboratories, we evaluated the performance of the human quad-culture Liver-Chip to detect DILI using a blinded set of small molecules known to be clinically DILI-positive or -negative. Chips were perfused under controlled flow on AVA with preset environmental regulation. We assessed (i) assay performance vs. first-generation (Chip-R1) Liver-Chip datasets by dosing drugs at concentrations up to 300x the human plasma Cmax for efficacy, (ii) inter- and intra-run reproducibility across the 96 Emulations, and (iii) operational aspects such as real-time microscopy. Results: Operating as an integrated tissue culture incubator, microscope, and flow engine, AVA fully supported all 96 Emulations throughout the seven-day experimental phase. AVA-run Liver-Chips were able to discriminate between DILI-positive and DILI-negative drugs while maintaining biological consistency across the 96 Emulations and the three independent test sites. The study also demonstrated reduced manual touchpoints compared to operation of the first-generation platform (Zoë Culture Module), while also generating multi-modal datasets ready for upload into AI/ML engines.
Conclusion: By combining predictive biology with automation, the AVA Emulation System removes key barriers to industrial adoption of OoC. By reducing hands-on user time, increasing run capacity, and simplifying multi-modal data generation, AVA is well-positioned to make Organ-Chip assays routine, scalable components of preclinical safety and efficacy decision-making. Next steps and future experiments: We are (i) expanding Liver-Chip to include assessment of metabolic activity for ADME-relevant applications, (ii) extending biological applications to include additional Organ-Chip models, (iii) integrating high-content imaging and transcriptomic endpoints for richer AI/ML feature extraction, and (iv) collaborating with regulatory-science initiatives toward qualification of AVA-enabled assays for decision support. 1) Ewart et al., 2022 “Performance assessment and economic analysis of a human Liver-Chip for predictive toxicology” Communications Medicine, 2, 154.
Development and Miniaturization of a Primary Human Hepatic Spheroid Model for Drug Optimization and Prediction of Drug-Induced Liver Injury (DILI)
Open to view video.  |   Closed captions available
Abstract: The lack of translation between preclinical and clinical drug discovery stems from the use of insufficiently predictive preclinical models of disease. While cell-based screening historically relies on immortalized and transformed cell lines, these often fall short in recapitulating the complex environment and disease states found in patient cohorts. Models with improved disease relevance and context, such as patient-derived primary cells, should therefore be used more systematically throughout the drug discovery process to improve prioritization of starting points and their optimization into safe and efficacious candidate drugs. To address some of the barriers associated with broader deployment of patient-centric cell models, such as cost-effective access to primary cell and tissue material, we established ‘The Nanoscale Drug Testing Collaboration’, an international, multidisciplinary consortium involving biotechnology companies, the pharmaceutical industry, and academia. Drug-induced liver injury (DILI) is the most common cause of acute liver failure, late-stage clinical drug attrition, and withdrawal of existing drugs from the market. To enable scalable and cost-effective long-term drug toxicity studies, we established a 500-cell spheroid model of primary human hepatocytes that remain metabolically active and express key hepatic markers while cultured in 1536-well plates for up to 21 days. We demonstrated the robustness of the model by conducting toxicity and bioactivation studies across different hepatic donors, and validated consistency with larger spheroid models. Finally, we explored short- and long-term toxicity of 170 compounds with varying DILI severity across two pharmaceutical companies and an academic institution, utilizing viability assessments and Cell Painting for comprehensive phenotypic evaluation.
Data demonstrate that the miniaturized spheroid model is fit-for-purpose while delivering high test capacity at reduced cost. Ongoing activities focus on identifying novel biomarkers of DILI using global proteomic profiling of single 500-cell spheroids, further model miniaturization through technology developments by partners in the collaboration, and AI classification of DILI mechanisms and phenotypes. The integration of high-content image analysis with omics data will deepen our understanding of DILI-driving mechanisms and significantly advance therapeutic discovery. Taken together, these collaborative efforts are geared towards creating more physiologically relevant in vitro models for pharmacology and safety assessment, enabling earlier and more accurate prediction of DILI and supporting drug prioritization. By embracing innovative technologies for model generation, scalable high-capacity profiling, and sophisticated image analysis platforms, we aim to drive cost-effective and impactful advances in functional drug testing and personalized medicine.
Data Science and AI
Session: Predictive ML Models in Drug Discovery and Development
Session Chair: Peter McLean, PhD This session focuses on leveraging complex biological data, exploring advances in model design, validating models against experimental results for scientific relevance, and ultimately applying predictive ML models in drug discovery & development pipelines to drive actionable insights. We encourage sharing of novel approaches or early-stage research advancing predictive capabilities in this domain.
Industrializing Drug Discovery with ML: From Single-Task Models to Integrated Research Platforms
Open to view video.  |   Closed captions available
Abstract: Developing new medicines remains a slow, high-failure endeavor despite continuous increases in the availability of data, access to high-performance computing, and technological breakthroughs in the field of machine learning. This is because drug discovery is a fundamentally unkind learning problem, characterized by vast chemical and biological search spaces, critically few labels in the form of approved medicines, and costly feedback cycles that support continued learning. This presentation outlines a pragmatic, two-pronged strategy to navigate this reality. First, we will look at how specialized, high-performance ML models are having an impact right now for discrete, data-rich drug discovery tasks. We will discuss examples of state-of-the-art models in high-content phenomics, molecular property prediction, and patient connectivity, and identify how these point solutions contribute to the larger drug discovery process. Second, we will introduce our framework for moving beyond today's point solutions toward a more integrated discovery engine. This approach centers on creating a unified, data-centric view of our entire discovery portfolio by integrating our diverse biological, chemical, and patient-centric data layers, metrics, and ML models. This unified view serves as a foundation from which we can systematically define, test, and refine our computational methods and discovery strategies in a data-driven manner. This strategy provides a tangible path forward in this exceptionally challenging field: leveraging ML models for maximal impact today in domains well suited for them, while building the integrated architecture required to transition drug discovery from a series of bespoke projects into a cohesive, continuously learning discovery engine.
Pairwise Molecular Learning Accelerates Drug Discovery and Nanoformulation Design
Open to view video.  |   Closed captions available
Abstract: Machine learning holds tremendous promise to accelerate and optimize drug discovery and development, but many of the most pressing questions are constrained by limited data, hampering the accurate training of predictive algorithms. To extend the reach of machine learning into data-sparse problems, our laboratory is developing advanced approaches such as pairwise molecular learning. Pairwise molecular learning is a novel paradigm that reframes absolute property prediction as relative comparisons. By converting molecular learning tasks into pairwise relationships, this strategy effectively expands usable training data while focusing models on the molecular features most critical in governing those properties. We have demonstrated that this approach enhances the performance of message-passing deep learning algorithms on small datasets, enables the incorporation of inexact bounded data points to broaden training inputs, and further strengthens active learning strategies to rapidly identify potent molecules in iterative design-make-test cycles. Beyond advancing these algorithms, our laboratory is integrating them with robotic laboratory automation to accelerate, optimize, and de-risk drug development and nanoparticle design. For example, we have established a platform that applies pairwise molecular learning to design novel prodrugs of anticancer agents and antibiotics with reduced side effects compared to both the parent drugs and existing prodrugs. Likewise, our drug–excipient nanoparticles improve solubility, bioavailability, and duration of action, enhancing efficacy in cancer and anesthetic applications. Taken together, machine learning-driven laboratory automation has the potential to transform therapeutic development by accelerating discovery, reducing risk, and ensuring that medicines are safer and more effective.
By tailoring and integrating bespoke algorithms, workflows, and materials, we seek to unleash synergies that amplify innovation and deliver meaningful benefits to society.
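The core move of pairwise molecular learning, recasting absolute property prediction as relative comparisons, can be illustrated with a minimal sketch. The function name and toy data below are hypothetical illustrations, not the presenters' actual implementation.

```python
from itertools import combinations

def make_pairwise_dataset(molecules, properties):
    """Expand N absolute labels into up to N*(N-1)/2 relative comparisons.

    Each training example becomes (mol_i, mol_j, label), with label +1 if
    mol_i has the higher property value and -1 otherwise. Ties are skipped
    because they carry no preference signal.
    """
    pairs = []
    for (i, pi), (j, pj) in combinations(enumerate(properties), 2):
        if pi == pj:
            continue  # tie: no relative ordering to learn from
        pairs.append((molecules[i], molecules[j], 1 if pi > pj else -1))
    return pairs

# Toy illustration: 4 molecules with measured potencies.
mols = ["A", "B", "C", "D"]
potency = [0.2, 0.9, 0.5, 0.9]
dataset = make_pairwise_dataset(mols, potency)
# 4 molecules yield up to 6 pairs; the B-D tie is dropped, leaving 5.
```

The quadratic expansion is what makes the paradigm attractive for small datasets: a few dozen measured molecules yield hundreds of comparison examples, and bounded measurements (e.g. "IC50 > 50 µM") can still contribute pairs against compounds with known values.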
Mathematical Modeling-Driven Integration of Single-Cell Transcriptomics and High-Throughput Screening Data to Predict Adaptive Drug Resistance
Open to view video.  |   Closed captions available
Abstract: Purpose: Drug resistance remains a major obstacle in oncology, with adaptive mechanisms emerging even after initially effective treatments. While high-throughput screening provides short-term efficacy data, it cannot capture long-term adaptive trajectories. Single-cell transcriptomics offers insights into cellular heterogeneity and drug-tolerant states, but predictive integration with screening data is lacking. We aimed to construct a mathematical framework that integrates single-cell transcriptomic profiles with high-throughput drug response data to predict adaptive resistance trajectories and optimize dosing schedules. Methods: We integrated multiple datasets from the Gene Expression Omnibus (GEO) and public screening resources. First, melanoma single-cell RNA sequencing data (GSE72056; >4,600 cells from 19 patient tumors) were used to parameterize baseline heterogeneity and resistance-linked gene expression, including GPX4, SLC7A11, and ATP-binding cassette transporters. Second, drug-exposed cell line single-cell profiles (GSE149383; 33 samples from lung and melanoma lines treated with erlotinib or vemurafenib) were analyzed to capture transcriptional shifts under drug pressure. These inputs informed a nonlinear system of ordinary differential equations describing transitions between sensitive, tolerant, and resistant states, coupled with a Markov decision process to model stochastic trajectories. Validation employed the Broad Institute’s PRISM Repurposing dataset (23Q2; ~4,500 compounds across 578 cell lines). Dimensionality reduction (principal component analysis and uniform manifold approximation and projection) defined cellular state clusters; model parameters were estimated using Runge–Kutta solvers with stochastic perturbations; predictions were compared to observed dose–response curves. Graph neural networks and SHAP values were applied to interpret regulatory drivers.
Results: The model predicted resistance onset a median of 29.8% earlier than empirical screening curves (interquartile range 25.1–34.5%), with a cross-validation area under the curve of 0.813 (95% confidence interval: 0.774–0.852) across 10-fold splits. Adaptive trajectories clustered into three dominant patterns: (i) epigenetic reprogramming via histone modifiers in 39% of simulations, (ii) efflux pump activation in 34%, and (iii) quiescence induction in 27%. Application of optimized adaptive dosing reduced simulated resistance onset by 22.4% (95% confidence interval: 19.6–25.3%; p < 0.001) compared to fixed schedules, with mean inhibitory concentration shifts decreasing from 3.0-fold to 2.4-fold over 14 passages. Validation against PRISM data confirmed concordance across 168 compound–cell line pairs, with a prediction error for viability of 11.2% ± 3.1%. SHAP analysis identified GPX4, EZH2, and ABCB1 as top resistance determinants, contributing 17.9%, 13.6%, and 10.8% of model variance, respectively. The model identified resistance inflection points between passages 6–8, with a variance reduction of 14.5% (p = 0.002), confirming reproducible adaptive trajectories across independent simulation replicates. Conclusions: This mathematical framework integrates single-cell data with large-scale screening viability profiles to anticipate adaptive resistance and optimize therapy. By coupling nonlinear modeling with stochastic control, the study provides a robust computational tool with direct translational value for drug discovery and resistance management.
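The kind of state-transition system the abstract describes, sensitive cells shifting through a tolerant state toward resistance under drug pressure, integrated with a Runge–Kutta solver, can be sketched minimally as below. The equations and rate constants here are invented for illustration and do not correspond to the fitted model or parameters.

```python
def rk4_step(f, y, t, dt):
    """One classical fourth-order Runge-Kutta step for dy/dt = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + dt / 2, [yi + dt / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + dt / 2, [yi + dt / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + dt, [yi + dt * ki for yi, ki in zip(y, k3)])
    return [yi + dt / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def model(t, y, k_st=0.05, k_tr=0.02, k_kill=0.30):
    """Sensitive (S) -> tolerant (T) -> resistant (R) under drug pressure.

    k_st: drug-induced transition of sensitive cells to the tolerant state
    k_tr: consolidation of tolerance into stable resistance
    k_kill: drug-induced death of sensitive cells
    (All rates are illustrative placeholders.)
    """
    S, T, R = y
    dS = -(k_st + k_kill) * S
    dT = k_st * S - k_tr * T
    dR = k_tr * T
    return [dS, dT, dR]

y, t, dt = [1.0, 0.0, 0.0], 0.0, 0.1
for _ in range(1000):  # simulate 100 time units of continuous exposure
    y = rk4_step(model, y, t, dt)
    t += dt
resistant_fraction = y[2] / sum(y)  # resistant share of surviving cells
```

Under continuous fixed dosing this toy system drives the surviving population toward the resistant state, the qualitative behavior that motivates the adaptive-dosing schedules evaluated in the study.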
Regularized Single-Cell Imaging Enables Generalizable AI Models for Stain-Free Cell Viability Screening
Open to view video.  |   Closed captions available
Abstract: Cell viability assays are fundamental tools in biomedical research and drug discovery, enabling the evaluation of compound activity on target cells. Recent advances in artificial intelligence (AI) have shown the possibility of label-free viability prediction directly from microscopy images at single-cell resolution. However, these methods suffer from poor generalizability: they must be retrained on each new cell type and compound in order to achieve their predictive accuracy. A truly generalizable model that performs reliably on previously unseen cell types and compounds has not been demonstrated. To address these challenges, we developed a generalizable AI-based cell viability assay by isolating single cells in nanoliter wells (nanowells) integrated into standard glass-bottom microwell plates. This strategy, which we term “regularized imaging”, simplifies cell segmentation and normalizes single-cell images to enable AI models to learn morphological features specific to live and dead cells. We trained our model on a single cell line (MDA-MB-231) exposed to four cytotoxic conditions (ethanol, andrographolide, daunorubicin, and serum starvation). To evaluate the generalizability of our model on previously unseen chemical compounds, we tested it on MDA-MB-231 cells treated with eight additional anticancer agents with diverse death mechanisms: bortezomib, carfilzomib, cisplatin, staurosporine, doxorubicin, mitoxantrone, epirubicin, and aprepitant. In all these cases, the IC50 concentrations reported by our AI models were within 10% of the values obtained using standard live/dead staining assays, highlighting the strong generalizability and robustness of our AI-based assay in drug efficacy assessment without the need for fluorescent staining.
To investigate whether our AI model has learned morphologies that generalize to other cells, we tested it on four previously unseen cell lines: Jurkat and THP-1 (suspension leukemia cells) and PC3 and UM-UC-13 (adherent prostate and bladder cancer cells). Our model achieved high accuracy, with average precision, recall, and F1 scores of 0.92 ± 0.06, 0.99 ± 0.06, and 0.95 ± 0.04, respectively, confirming generalizability across both adherent and suspension cell morphologies. A further advantage of our assay is its ability to monitor kinetic drug responses without interfering with cellular activities. Our AI-reported results revealed consistently lower IC50 values for bortezomib (1 µM, 48 nM, 18 nM) compared to cisplatin (>50 µM, >50 µM, 25 µM), confirming that bortezomib acts more rapidly on MDA-MB-231 cells. Moreover, we validated compatibility with multiple imaging platforms. In summary, we presented an AI-powered assay that determines cell viability directly from microscopy images and generalizes across compounds, cell types, and imaging platforms. This approach opens new opportunities to identify morphological signatures of diverse cell states and phenotypes, streamlining the development of next-generation cell-based drug assays.
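IC50 values like those compared above are typically read off a dose-response curve; a common simple approach is to interpolate the 50%-viability crossing on a log-dose axis. The helper and toy data below are a hedged sketch of that generic step, not the assay's actual analysis pipeline.

```python
import math

def ic50_from_curve(doses, viability):
    """Interpolate the dose giving 50% viability on a log10-dose axis.

    Assumes viability decreases with increasing dose and crosses 0.5
    somewhere inside the tested concentration range.
    """
    for i in range(len(doses) - 1):
        v0, v1 = viability[i], viability[i + 1]
        if v0 >= 0.5 >= v1 and v0 > v1:
            frac = (v0 - 0.5) / (v0 - v1)  # position of the crossing
            log_d = math.log10(doses[i]) + frac * (
                math.log10(doses[i + 1]) - math.log10(doses[i]))
            return 10 ** log_d
    return None  # 50% viability never reached in the tested range

# Toy dose-response data (concentrations in µM, viability as fractions).
doses = [0.001, 0.01, 0.1, 1.0, 10.0]
viability = [0.95, 0.90, 0.70, 0.30, 0.10]
ic50 = ic50_from_curve(doses, viability)  # ~0.32 µM
```

In practice a four-parameter Hill fit is preferred over piecewise interpolation, but the crossing-point idea is the same: the reported IC50 is the concentration at which the fitted viability curve passes 50%.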
Session: Foundation Models and Agents for Target Identification
Session Chair: Tommaso Biancalani Target identification and assessment are foundational steps in drug discovery. Advances in artificial intelligence, particularly the rise of foundation models trained on biological data or text, offer powerful tools to extract scientific insights from vast multi-modal datasets. This session will explore recent developments in foundation models and the emerging role of AI agents, which orchestrate multiple models to enable reasoning and decision-making for target discovery.
Machine learning for target identification and hit finding
Open to view video.
Abstract: This talk will explore how we are leveraging AI to advance target identification and assessment, as well as to discover compounds capable of modulating those targets. At the heart of our approach are foundation models, cutting-edge AI innovations that enable automated reasoning across complex datasets and extract meaningful insights from diverse data types. I will provide an overview of the foundation models we have developed, highlighting applications ranging from regulatory element design to gene signature search, as well as our work in virtual screening. Finally, I will introduce an agent-based system designed to integrate and orchestrate these models, illustrating how this holistic framework enhances decision-making and accelerates scientific discovery.
AI-Driven Instrument Integration: Automating Setup and Data Analysis for Life Science Labs
Open to view video.  |   Closed captions available
Abstract: This study demonstrates how artificial intelligence (AI) can improve instrument integration in modern laboratories by addressing persistent bottlenecks in instrument setup and data analysis. Despite advances in laboratory automation, scientists frequently spend hours configuring instruments, manipulating Excel files, and reshaping data to comply with reporting standards and enable downstream analysis. These manual processes slow experimental progress and reduce reproducibility. To address these challenges, we developed Cypher, an AI software platform that enables researchers to create AI-driven workflows capable of configuring instruments, managing data pipelines, and converting raw outputs into structured, analyzable formats without programming expertise. Cypher guides scientists through dynamic, form-based interfaces that simplify instrument setup while also processing diverse outputs (e.g., plate reader Excel sheets, flow cytometry files) into visualizations, statistical summaries, and reproducible reports. The platform employs large language models (LLMs) to parse experimental or data analysis intent expressed in natural language, then generates and executes Python code within a built-in execution environment to produce instrument-ready files or data analysis results. By abstracting integration complexity, the platform allows scientists to implement digital workflows that are adaptable and consistent. We present three case studies of Cypher deployed in active research environments. For DropGenie’s CRISPR cell engineering platform, workflows that traditionally relied on Excel macros to coordinate electroporation setup and Beckman Coulter Echo transfer lists were replaced with intelligent forms.
Here, experimental intent expressed in natural language was parsed by a large language model (LLM), which then generated instrument-ready files by producing and executing Python code within Cypher’s built-in code execution environment. This approach reduced spreadsheet errors and accelerated experiment design. For the Formulatrix Tempest liquid handler, Cypher generated liquid transfer files that replaced spreadsheet-driven workflows and repetitive instrument software steps. The system accounted for instrument constraints, reduced setup time from hours to minutes, and improved user confidence in the process. In another case, a biotech startup used Cypher to parse and analyze plate reader outputs, shortening weeks of manual Excel-based data wrangling to near real-time visualization and interpretation. Across pilot sites, researchers reported more than a 50% reduction in experimental setup and analysis time alongside improvements in reproducibility and data quality. These studies suggest that AI-driven instrument integration offers a practical and scalable approach to reducing manual effort in laboratory workflows. By lowering the burden of repetitive setup and analysis tasks, the approach supports more reproducible data generation and more efficient experimental design. Future work will extend to sequencers, chromatography systems, and advanced imaging platforms, with enterprise-ready deployments under development to ensure compliance and scalability.
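The kind of reshaping described here, turning a raw plate-reader matrix into analyzable records, can be sketched in a few lines. The function below is a hypothetical illustration of that generic transformation, not Cypher's code or API.

```python
import string

def plate_to_tidy(matrix):
    """Flatten an 8x12 plate-reader matrix into (well, value) records.

    Rows map to letters A-H and columns to zero-padded numbers 01-12,
    the usual 96-well naming convention.
    """
    records = []
    for r, row in enumerate(matrix):
        for c, value in enumerate(row):
            well = f"{string.ascii_uppercase[r]}{c + 1:02d}"
            records.append({"well": well, "value": value})
    return records

# Toy 8x12 matrix standing in for a raw plate-reader export.
plate = [[0.1 * (r * 12 + c) for c in range(12)] for r in range(8)]
tidy = plate_to_tidy(plate)
# 96 records such as {"well": "A01", "value": 0.0}
```

Once in this long ("tidy") form, the readings join cleanly with sample maps and feed directly into plotting and statistics, which is the manual Excel step such platforms aim to eliminate.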
PhenoSpace: AI-Driven High-Content Imaging to Accelerate Hit ID and DMTA for Intractable Targets
Open to view video.  |   Closed captions available
Abstract: Classical phenotypic screens often depend on distal readouts (reporters, viability) that result in slow target deconvolution and prolonged DMTA timelines, especially for intractable targets. High-content imaging (HCI) can provide more proximal cellular readouts, but assay development remains slow and subjective, relying on hand-picked endpoints and manual thresholds that miss complex or novel phenotypes. We introduce PhenoSpace, an AstraZeneca AI/ML platform that automates single-cell phenotyping to deliver robust, scalable HCI for drug discovery. Using a combination of contrastive deep learning and phenotypic classification, PhenoSpace eliminates manual feature selection and workflow tuning, enabling unbiased detection of subtle and emergent phenotypes across diverse models (including organoids and complex cell systems). The platform integrates object detection, phenotypic classification, and closed-loop assay optimisation to reduce development cycles, increase reproducibility, and expand biological signal capture. By providing target-proximal, high-dimensional readouts, PhenoSpace has the potential to deliver target-proximal hit identification, sharpen DMTA decision-making, and shorten cycle times, directly addressing bottlenecks in classical cascades. This AI-first approach is particularly impactful for intractable targets where the mechanism is unknown or evolving, enabling faster hypothesis generation, earlier target deconvolution, and more efficient progression from phenotypic signal to actionable chemistry.
Session: AI-Driven Lab Experiments
Session Chair: Yao Fehlis This session explores how AI is transforming experimental discovery by powering self-driving lab systems. From guiding experiment design to automating data analysis, AI models are creating dynamic feedback loops between prediction and execution. Emerging tools, including agentic AI, are introducing new layers of autonomy—enabling labs to adapt, iterate, and accelerate innovation with minimal human intervention.
Agentic AI for the DMTA Cycle: From Orchestration to Multi-Agent Workflows
Open to view video.  |   Closed captions available
Abstract: Accelerating the Design-Make-Test-Analyze (DMTA) cycle is central to modern drug discovery. We highlight three complementary advances that enable faster and more reliable lab operations. Artificial (arXiv:2504.00986) provides whole-lab orchestration and scheduling, integrating instruments, robotics, and LIMS/ELN into unified workflows. Cycle Time Reduction Agents (CTRA) (arXiv:2505.21534) use LangGraph-based agents to analyze run logs and telemetry, uncovering bottlenecks and recommending targeted optimizations. Tippy (arXiv:2507.09023; arXiv:2507.17852) introduces a multi-agent framework tailored to DMTA, with Supervisor, Molecule, Lab, Analysis, and Report agents operating under safety guardrails. Together, these systems show how agentic AI and orchestration can accelerate DMTA by ensuring reliable execution, providing continuous cycle-time feedback, and distributing specialized decision-making across agents. Attendees will gain a practical blueprint for implementing AI-driven automation that reduces idle time, increases throughput, and shortens DMTA cycles.
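A supervisor agent routing DMTA stages to specialist agents, in the spirit of the multi-agent framework described above, might be skeletonized as follows. The class, role names, and handlers here are loose illustrations, not Tippy's actual architecture or API.

```python
class Agent:
    """A named specialist wrapping a task handler."""

    def __init__(self, name, handler):
        self.name, self.handler = name, handler

    def run(self, task):
        return self.handler(task)

def supervisor(task, agents):
    """Route each DMTA stage to its specialist and collect the outputs."""
    return {stage: agents[stage].run(task)
            for stage in ("design", "make", "test", "analyze")}

# Hypothetical role assignments; real frameworks add guardrails,
# retries, and inter-agent messaging around this dispatch loop.
agents = {
    "design": Agent("Molecule", lambda t: f"proposed candidates for {t}"),
    "make": Agent("Lab", lambda t: f"scheduled synthesis for {t}"),
    "test": Agent("Lab", lambda t: f"queued assays for {t}"),
    "analyze": Agent("Analysis", lambda t: f"summarized results for {t}"),
}
result = supervisor("target-X", agents)
```

The dispatch loop is the simplest possible supervisor; the value of production agentic systems lies in what wraps it, such as safety guardrails, telemetry for cycle-time analysis, and feedback from the analyze stage back into design.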
Artificial Intelligence (AI) Solutions for Computational Chemistry and Organic Chemistry Tasks
Open to view video.  |   Closed captions available
Abstract: In this talk, we will provide an overview of the latest developments in machine learning and AI methods and their application to the problem of drug discovery and molecular design at Isayev’s Lab at CMU. We identify several areas where existing methods have the potential to accelerate computational chemistry research and disrupt more traditional approaches. We developed a novel ML-guided molecular discovery platform combining synergistic innovations in automated flow synthesis, reinforcement learning-guided agents, and generative AI.
µSpaceM1: A First-in-Class Microliter-Scale Molecular Space Derived from Large-Scale Reaction Data
Open to view video.  |   Closed captions available
This talk introduces µSpaceM1, a molecular space comprising over 1 billion compounds designed for plate-based synthesis at the microliter scale. The cornerstone of this technology is our proprietary chemical reactivity dataset—the largest of its kind—generated from over 300,000 experimental reactions measured using LC/MS-UV. This data powers reaction outcome prediction models that achieve accuracy surpassing human experts, enabling us to rigorously define and curate the available chemical space. This data-driven approach allows for the rapid synthesis of novel molecules at costs starting as low as $15 per compound. By intentionally designing the dataset to minimize human bias and exploring reactivity in an open-ended fashion, we ensure exceptional structural diversity. We will discuss the underlying technology and present a case study applying µSpaceM1 to hit discovery via direct-to-biology campaigns.
Closing the Loop: Automated Organoid Phenotypic Screening and AI-Driven DMTA Cycles for Next-Generation Precision Oncology Drug Discovery
Open to view video.  |   Closed captions available
Abstract: Purpose of the Study: Overcoming resistance and transient responses to KRAS inhibitors (KRASi) in pancreatic ductal adenocarcinoma (PDAC) requires innovative strategies to accelerate drug discovery and identify synergistic combinations. We developed an end-to-end, integrated automation and AI-driven phenotypic screening platform at the University of Antwerp’s DrugVision.AI screening facility, powered by two university spin-offs: Orbits Oncology, which specializes in automated image analysis, and Sightera Biosciences, an AI-enabled drug discovery company. This platform uniquely combines robotics, patient-derived organoid (PDO) models, and advanced analytics to generate rich phenotypic datasets that drive a closed-loop design–make–test–analyze (DMTA) cycle. Experimental Procedures: KRAS-mutant PDAC PDOs were screened using a high-throughput robotic platform, evaluating KRASi both alone and in combination with targeted agents. Automated longitudinal brightfield and fluorescent imaging was performed, and Orbits Oncology’s AI-powered pipeline extracted phenotypic fingerprints directly from label-free brightfield images. These fingerprints quantified cytostatic versus cytotoxic effects, response heterogeneity, drug synergy, and onset/recovery kinetics. The resulting standardized datasets were integrated into Sightera Biosciences’ AI-driven drug design engine, which prioritized and designed novel compounds that were synthesized and re-screened within the same platform, completing an automated feedback loop. Summary of Data: Phenotypic analysis revealed distinct response dynamics, identifying combination partners capable of shifting KRASi-treated PDAC organoids from cytostasis to irreversible cytotoxicity. Automated image acquisition and data analysis pipelines delivered reproducible, high-volume data across hundreds of drug conditions, significantly reducing manual effort and accelerating turnaround time.
This iterative framework enabled direct translation of PDO-based phenotypic data into predictive models for drug design, improving the prioritization of synergistic therapies and novel compounds. Conclusion Statement: We present a university spin-off-driven, fully integrated automation platform that redefines drug discovery workflows. By coupling robotics, PDO screening, and AI-driven image analysis, we introduce a “data-generation-first” paradigm in DMTA cycles, leveraging rich phenotypic fingerprints to drive compound design and optimization. This scalable, label-free, and cost-effective approach demonstrates the translational power of academic innovation, bridging preclinical discovery with actionable therapeutic development. Next Steps and Future Experiments: Already, our pipeline has delivered novel small molecules that synergize with KRAS inhibitors and standard-of-care therapies in PDAC, demonstrating in vivo efficacy and favorable ADMET profiles. These results validate the platform’s power to generate clinically relevant leads and accelerate drug development. We are expanding screening to a pan-cancer PDO panel with multi-omics integration to refine predictive models and biomarker discovery for our first small molecule assets. Going forward, the platform’s scalability will enable applications across diverse cancer types and therapeutic modalities, showcasing how academic ecosystems, through spin-offs like Orbits Oncology and Sightera Biosciences, can create transformative, AI-enabled drug discovery pipelines. Related publications and webinars: www.drugvision.ai/blog, www.drugvision.ai, www.sightera-biosciences.com, www.orbits-oncology.com
Micro- and Nanotechnologies
Session: Fundamentals in micro- and nanofluidic systems driving innovation in lab automation
Session Chair: Sumita Pennathur, PhD
This session highlights the foundational discoveries that are catalyzing transformative technologies in diagnostics, sample prep, assay development, and high-throughput biological workflows. We will spotlight cutting-edge advances in our understanding of how materials interact with biological systems at the micro- and nanoscale, how flow can be precisely manipulated at these dimensions, and how novel materials can be engineered into systems that uniquely control fluid dynamics. Together, these insights are driving a new era of lab automation—where biology can be digitized, multiplexed, and miniaturized with unprecedented efficiency.
Electrochemistry for Biology
Open to view video.  |   Closed captions available
Abstract: Despite major success in commercial glucose sensing devices, electrochemical methods are rarely used in bioanalytical and lab automation tools. Yet electrochemistry offers powerful, scalable, and inexpensive ways to transduce biological events into electrical signals, making it exceptionally well suited to automation in both diagnostics and drug discovery applications. In this talk, I will begin with some fundamental principles of electrochemistry and then describe our recent work leveraging those principles to develop new methods for probing protein dynamics, structure, and assembly. Specifically, I will introduce electrochemical aptamer-based (EAB) sensors, which couple molecular recognition with impedance measurements to provide sensitive detection of various metabolites in vivo, in real time. Then, I will present our recent work showing how electrical potentials can be used not just to measure, but also to manipulate protein structure and assembly. Using Tau protein, a key player in neurodegenerative disease, we show how voltage can drive conformational changes and aggregation. Taken together, this talk highlights how electrochemistry can open entirely new avenues for studying and controlling complex biological processes, with broad implications for diagnostics, therapeutics, and biotechnology.
Fundamental Microfluidic and Nanofluidic Electrokinetics for Lab Automation Applications
Open to view video.  |   Closed captions available
Abstract: Understanding and harnessing electrokinetic (EK) flow in micro- and nanofluidic systems opens entirely new modalities for the analysis, concentration, and detection of laboratory samples. In this talk, we will first highlight the key physical concepts underlying EK flow at the micro- and nanoscales, including the crucial role of the electric double layer and the unique transport phenomena that emerge under nanoscale confinement. We will then provide examples of how our group has exploited these principles to achieve powerful modes of analyte manipulation—including preconcentration, separation, and detection of proteins and nucleic acids. For example, we have shown that nanofluidic channels with finite double layers enable direct measurement and control of ionic conductivity, and that nonlinear conductivity gradients in such systems can be leveraged to create new modalities for analyte concentration and sensing. These insights have led to the development of robust platforms to study biomolecular transport and kinetics, as well as a new generation of nanofluidic devices capable of translating conductivity changes into quantitative information about analytes. The result is a set of cheap, disposable, real-time sensors that can be integrated into compact chips. Finally, beyond fundamental studies, we have demonstrated applications ranging from lab-on-a-chip diagnostics to wearable and implantable multi-analyte biosensors, bridging fundamental electrokinetics with practical biomedical tools. Together, these advances illustrate how a deep understanding of micro- and nanofluidic EK flow can spark innovation in lab automation and enable entirely new approaches to molecular analysis.
Accelerating Microfluidics: 3D Features Beyond Lithography
Open to view video.  |   Closed captions available
Abstract: Microfluidics requires precise microscale features, but conventional UV lithography restricts device design to straight walls, fixed depths, and limited geometries. The purpose of this study was to demonstrate how two-photon polymerization (2PP) can overcome these limitations by enabling true 3D features with lithographic accuracy, and how this approach can accelerate the transition from prototyping to scalable manufacturing. We applied 2PP to fabricate microfluidic masters with sub-micron resolution, precise depth control, and rounded or angled geometries that are not possible with lithography. These masters were used for replication and injection molding to evaluate scalability into final thermoplastic materials. In parallel, we explored direct in-chip printing of microstructures, such as channels, mixers, and valves, to highlight cases where injection molding is not applicable. The fabricated masters achieved dimensional tolerances of ±5 µm with surface roughness below 100 nm. Injection-molded replicas faithfully reproduced sub-micron features, confirming the viability of rapid scaling. Direct in-chip printing successfully integrated functional 3D features within molded devices, demonstrating added flexibility in device design. Both approaches showed fast iteration cycles, with tooling and replication achievable within weeks instead of months. 2PP microfabrication provides lithography-level precision while enabling true 3D freedom, including depth control and complex geometries. When combined with injection molding, it offers a pathway from rapid prototyping to high-volume production, while direct in-chip printing enables applications where conventional replication cannot be applied. Ongoing work focuses on expanding material options for final devices, validating functional performance in biological assays, and integrating this workflow into automated pipelines.
Future experiments will address long-term durability of molded devices and explore hybrid fabrication combining 2PP and conventional processes to further accelerate the design–make–test–analyze (DMTA) cycle.
A Novel Solution for Environment-Independent High Sensitivity Fluid Characterization
Open to view video.  |   Closed captions available
Abstract: Fluid flow as a function of viscosity is well understood for Newtonian fluids. The relationship between viscosity and biological sample characteristics (hyperviscosity in blood and plasma, cell density, etc.) is also well studied. However, the relationship between fluid flow and biological sample characteristics is complicated by the non-Newtonian and heterogeneous nature of biological fluids. Despite this, fluid flow models represent an opportunity for streamlining biological fluid characterization, particularly for high-sensitivity viscosity measurement. One key application of high-sensitivity viscosity characterization is cell density counting. Cell density is typically determined through fluorescence assays which, while accurate, are difficult to incorporate into high-throughput workflows. As biotechnology has advanced, high-throughput liquid handlers have become accessible to research laboratories, enabling fully self-sustaining automated laboratory conditions (e.g., automated cell culture). These automated systems have increased efficiency and reproducibility, but they also require novel tools and methods for rapid, non-invasive sample characterization at a microscale. A method that can seamlessly integrate with these platforms to provide quick and reliable viscosity data could significantly enhance their capabilities. The work presented here introduces such a method. It utilizes a custom-designed two-sided pipette tip with a micro-constriction on each side to determine the viscosity of an unknown fluid relative to a known standard. The core of this technique is the analysis of real-time pressure data during aspiration. As each fluid traverses its constriction, it creates a distinct pressure drop. The time difference between the two pressure drops functions as a measure of the relative viscosity.
Because viscosity correlates with cell density, constructing a regression curve that relates the time difference (against a media standard) to known cell densities of a particular cell type allows this method to serve as an automation-friendly, highly sensitive cell counter. The method has been applied both to Newtonian fluids of varying viscosities (with water as the standard) and to a variety of cell lines, including HepG2, GB2, CaOV, and AML-12 (with media as the standard), demonstrating its versatility. This work establishes the relationship between the aspiration time difference and both fluid viscosity and cell count, supporting the viability of this approach for high-sensitivity biological applications. Further work will involve integration with high-throughput systems and expanded application testing.
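To make the calibration step concrete, here is a minimal sketch (not the authors' code; all numbers, cell densities, and function names are hypothetical) of regressing the aspiration time difference against known cell densities and then estimating an unknown sample:

```python
import numpy as np

def fit_cell_density_curve(time_diffs_ms, densities):
    """Least-squares line mapping aspiration time difference
    (sample vs. media standard, in ms) to known cell densities."""
    slope, intercept = np.polyfit(time_diffs_ms, densities, 1)
    return slope, intercept

# Hypothetical calibration data for one cell line
time_diffs = np.array([0.0, 1.2, 2.5, 3.7])        # ms
densities = np.array([0.0, 0.5e6, 1.0e6, 1.5e6])   # cells/mL

slope, intercept = fit_cell_density_curve(time_diffs, densities)

# Estimate the density of an unknown sample showing a 1.8 ms difference
estimate = slope * 1.8 + intercept
```

A linear fit is assumed here purely for illustration; as the abstract describes, the actual relationship would be established empirically per cell type.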
Session: Personalized Medicine in the Wearable Age: Monitoring, Analysis, and Applications
Session Chair: Elliot Botvinick, PhD
From Oxidases to Dehydrogenases: Broadening the Scope of Optical Multi-Analyte Biosensors
Open to view video.
Abstract: Continuous molecular monitoring remains a persistent challenge in both clinical diagnostics and industrial bioprocessing. Our lab developed an optical biosensing platform that translates enzymatic oxygen consumption into real-time luminescence signals using a phosphorescent porphyrin dye (PtTPTBP). While oxidase-based sensors have enabled continuous tracking of oxygen, glucose, and lactate in vivo, their utility is limited by a narrow enzymatic catalog. To address this, we expanded our platform to integrate NAD-dependent dehydrogenases, enzymes that generate NADH as a redox product, which allows indirect coupling to oxygen consumption via NADH oxidase (NOX). We first engineered a continuous NADH sensor by immobilizing NOX within a polyethyleneimine (PEI)–poly(ethylene glycol) diglycidyl ether (PEGDGE) hydrogel. The system showed linear, reversible detection across biologically and industrially relevant NADH ranges (0–6 mM), with tunability via PEI molecular weight. Phase-based lifetime measurements were captured using modulated LED excitation and photodiode emission detection. Building on this design, we developed a dual-enzyme sensor for β-hydroxybutyrate (βHB), a clinically relevant ketone associated with diabetic ketoacidosis. The sensor couples βHB dehydrogenase (βHBDH) with NOX, using NAD⁺ as a diffusible cofactor. To prevent cofactor leaching, NAD⁺ was electrostatically entrapped within the PEI matrix and stabilized by a permselective poly(4-vinylpyridine) (P4VP) and PEGDGE topcoat. The βHB sensor demonstrated linear, reversible responses across physiologic to pathologic concentrations (0–4 mM). Together, these results establish a modular framework for incorporating NAD⁺-dependent dehydrogenases into optical biosensors. By pairing redox chemistry with oxygen-resolved phosphorescence, our platform enables continuous, multi-analyte monitoring with broad enzymatic compatibility.
This approach opens new possibilities for real-time metabolic sensing across diverse applications, from personalized health monitoring to precision biomanufacturing.
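The phase-based lifetime readout described above follows the standard frequency-domain relation tan(φ) = ωτ between the measured phase shift of the modulated emission and the luminescence lifetime. A minimal sketch (the modulation frequency below is illustrative, not the instrument's actual setting):

```python
import math

def lifetime_from_phase(phase_deg, mod_freq_hz):
    """Luminescence lifetime from the phase shift of modulated emission,
    via the frequency-domain relation tan(phi) = omega * tau."""
    omega = 2.0 * math.pi * mod_freq_hz
    return math.tan(math.radians(phase_deg)) / omega

# A 45 degree phase shift at 5 kHz modulation gives tau = 1/omega (~31.8 us)
tau = lifetime_from_phase(45.0, 5000.0)
```

Because oxygen quenches the porphyrin's phosphorescence, changes in this lifetime report on the local oxygen concentration, which is what couples the enzymatic reaction to the optical signal.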
Clinical validation of a wearable ultrasound sensor of blood pressure
Open to view video.  |   Closed captions available
Abstract: Options for the continuous and non-invasive monitoring of blood pressure are limited. Cuff-based sphygmomanometers are widely available, yet provide only discrete measurements. The clinical gold-standard approach for the continuous monitoring of blood pressure requires an arterial line, which is too invasive for routine use. Wearable ultrasound for the continuous and non-invasive monitoring of blood pressure can elevate the quality of patient care, yet the isolated sonographic windows in the most advanced prototypes can lead to inaccurate or error-prone measurements, and the safety and performance of these devices have not been thoroughly evaluated. Here we describe validation studies, conducted during daily activities at home, in the outpatient clinic, in the cardiac catheterization laboratory and in the intensive care unit, of the safety and performance of a wearable ultrasound sensor for blood pressure monitoring. The sensor has closely connected sonographic windows and a backing layer that improves the sensor’s accuracy and reliability to meet the highest requirements of clinical standards. The validation results support the clinical use of the sensor.
Rapid and Precise Fabrication of Electrochemical Microfluidic Devices via CNC Milling of Shrink Polystyrene
Open to view video.  |   Closed captions available
Abstract: Polystyrene (PS) is a widely used thermoplastic in microfluidic manufacturing, offering benefits such as low permeability, high optical clarity, and cost-effectiveness. Previous research has leveraged the inherent shrinkage properties of biaxially pre-stressed thermoplastic sheets to create thinner and deeper microfluidic channels upon heating. However, limited research has explored the fabrication of shrink PS devices using computer numerical control (CNC) milling systems, a step closer to industrial manufacturing. This work demonstrates the ease of using both industrial-grade and benchtop hobby-grade CNC milling systems to fabricate shrink microfluidic devices with high precision and resolution compared to conventional polystyrene microfluidic chips. By combining this technique with a shrink PS sheet bearing a gold electrode pattern, a complete microfluidic chip with electrochemical functionality can be rapidly produced at low cost. Our results demonstrate that this method can fabricate miniature-scale microfluidics with high precision. Electrochemical characterization using 5 mM ferro/ferricyanide in phosphate-buffered saline showed that the microfluidic chip could measure electrochemical signals with as little as 60 nL of sample. This study presents a novel approach to fabricating high-precision, shrinkable polystyrene microfluidic devices using CNC milling systems. The method enables the creation of miniaturized channels with predictable shrinkage ratios and excellent reproducibility. Integrating shrinkable gold electrodes allows the rapid production of functional electrochemical microfluidic devices, making the approach highly suitable for point-of-care diagnostic applications.
Session: Next Generation Droplet Microfluidic Technologies
Session Chair: Alison Hirukawa, PhD
This session spotlights the latest innovations in droplet-based platforms driving advances in cell programming, genome engineering, and phenotypic screening. Talks will cover technologies that harness droplets for high-throughput cell manipulation, synthetic biology, and distributed biomanufacturing, with applications spanning mammalian, microbial, and cell-free systems. Emphasis will be placed on translational potential and the intersection of micro/nanoscale systems with scalable bioengineering solutions.
Enabling Novel Protein Synthesis Through High Resolution Digital Microfluidics
Open to view video.  |   Closed captions available
Abstract: Active proteins can now be obtained in less than 48 hours with an automated benchtop system that leverages cell-free protein synthesis to rapidly screen expression and purification conditions. At the heart of this instrument is a microfluidics cartridge with a high-resolution thin film transistor (TFT) array that enables the simultaneous screening of 192 combinations of constructs and expression conditions. This backplane is the result of collaborative research between biochemistry experts at Nuclera and microelectronics experts at E Ink. Details of this cartridge, its capabilities, and the development collaboration will be described, along with a peek into possible future cartridge directions.
High-throughput screening of antibody function with droplets compatible with commercial fluorescence-activated cell sorters
Open to view video.  |   Closed captions available
Abstract: Antibody discovery is limited by a multi-step process: first screening millions of antibodies for binding to targets, but then only about 20 for biological function. This leads to less potent drugs, higher risks, costs, and timelines, and a 90% failure rate of treatments for patients. We are building a proprietary platform that can screen 1000x more drugs for what really matters: biological function. Unlike other high-throughput platforms, we can quantify millions of drugs for multiple aspects of drug function, for almost any disease, including cancer, autoimmune disorders, and GPCR targets. We combine this high-throughput screening platform with an agentic AI platform for antibody design, enabling build-test-train cycles for millions of antibodies. Our Xcell Drops are proprietary double emulsion drops created with a minimal oil shell thickness, allowing more cells to be put into a droplet of a given volume. Unlike other drops and particles compatible with fluorescence-activated cell sorting (FACS), Xcell Drops have no antibody leakage between drops, owing to an oil shell that is impermeable to large molecules while being highly permeable to oxygen and CO2. We put two different cell types into each Xcell Drop: an antibody-producing cell and a target cell that can include a reporter of function in a key disease pathway. With our proprietary methods we have demonstrated some of the highest loading of two cell types in droplets and an ability to screen our droplets with commercial FACS at rates up to 2,000 events/sec with 50-70% sort yields. We then performed a functional screen of several antibodies, rank-ordering the top antibodies in the same order as a traditional low-throughput screen. Our Xcell Drop technology platform is compatible with antibody screens from libraries of B cells, AI predictions, and multispecific antibodies.
Our quantitative screening technology unlocks the power of AI in drug discovery by screening every predicted molecule for binding and function, both assessing each molecule and training AI with very large quantitative data sets. Xcell Drop compatibility with FACS allows us to scale our operations cheaply and rapidly and enables partnerships with pharmaceutical companies or contract researchers across the globe. Future screens can move beyond antibodies to other biologics, including peptides, siRNA, nanobodies, and cell therapies. In summary, our quantitative screens will cut discovery time in half and derisk drug development, bringing only the best therapies forward for patients.
Microfluidics to Propel Defense: From DNA Transfer To Reconnaissance
Open to view video.  |   Closed captions available
Abstract: This talk presents four complementary and unique advances demonstrating how microfluidics can dramatically accelerate discovery, screening, and device innovation. We will explore microfluidics applications in synthetic biology, high-throughput screening, and rapid prototyping, culminating in a compelling case study in reconnaissance. Supported by rich visuals and data, this presentation will illustrate the potential of these advances to deliver transformative capabilities and redefine the landscape of microfluidic applications. Microfluidics is increasingly central to laboratory automation, enabling miniaturization, parallelization, and precision at scales that outpace traditional workflows. At the U.S. Army DEVCOM Army Research Laboratory (ARL), we are advancing microfluidic platforms to address unique national defense needs while simultaneously creating broadly useful technologies for the life sciences (Wippold, Archives BMEN & Biotech 2025). First, to overcome barriers in genetic engineering of non-model microorganisms, we developed DNA ENTRAP, a droplet microfluidic system that encapsulates donor and recipient bacteria in picoliter bioreactors. By enforcing spatial proximity and integrating automated assay steps, ENTRAP enhances DNA conjugation efficiency while reducing donor-to-recipient ratios by several orders of magnitude compared to benchtop methods. This marks the first fully optimized demonstration of conjugation in droplets, with implications for high-throughput microbial domestication and biomanufacturing (Wippold, NEW Biotechnology 2024). Second, for materials screening, we introduced the kappa(κ)Chip, a microfluidic device capable of performing 24 simultaneous adhesion assays under graded shear stresses from a single input.
Paired with automated image analysis software (kappaCellCV), kappa(κ)Chip enables rapid rank-ordering of bioinspired adhesive proteins such as fungal hydrophobins on diverse polymer substrates. This integration of modular design, laser-based fabrication, and machine-learning–assisted analysis represents a step change in high-throughput adhesive discovery, with potential applications in coatings, repair, and biomaterials (Wippold, Lab Chip 2025). Third, to accelerate device development cycles, we created PRIMDEx (Prototyping Rapid Innovation of Microfluidic Devices for Experimentation), a workflow that combines stereolithographic 3D printing with benchtop injection molding. PRIMDEx produces biocompatible, high-fidelity microfluidic devices in under 24 hours, with per-unit costs in the cents range, enabling rapid design-test iterations that match the timelines of biological experiments. This approach bridges the gap between prototyping and scalable production (Kruk, Wippold, SLAS 2025). Fourth, the team spearheaded the development of a remote, microfluidics-based biological threat detection system, transitioning it upstream within the Army R&D pipeline. This work focused on consumable components, leveraging sophisticated multiphysics modeling, precise device fabrication, and pilot-scale production to enable a 17-target assay with results delivered in under 15 minutes to support biological reconnaissance missions. These platforms demonstrate how microfluidics can enable both automation and agility in life sciences research: from streamlining microbial gene transfer, to parallelizing biomaterials screening, to democratizing device manufacturing. By adapting these advances from defense-driven challenges, cross-sector innovation can accelerate laboratory automation for a wide range of life science applications.
New Modalities
Session: Advances in Proximity-Inducing Modalities
Session Chair: Dane Mohl, PhD
Molecular glues, bifunctional small molecules, and innovative covalent chemistry are bringing previously challenging drug targets into focus. This session will highlight recent advancements in these new modalities, emphasizing the specific challenges associated with their application and the unique biochemical and cell-based tools required to identify more potent and effective molecules.
Optimizing the tool kit for induced proximity drug discovery
Open to view video.  |   Closed captions available
Abstract: The concept that a molecule can achieve a pharmacological effect by inducing proximity between two proteins began at the turn of the century with the discovery of the molecular mechanisms of the natural hormone auxin and the natural antibiotic rapamycin. The field exploded in the 2010s with the development of PROTACs, which can induce degradation by linking targets to ligandable ligases such as VHL and CRBN, and the discovery that the drug thalidomide works as a molecular glue that binds to CRBN and induces degradation of key transcription factors. The promise of molecular glue drugs is that, in addition to bringing novel pharmacology, they have the potential to drug targets that are not ligandable by traditional binary methods. Induced proximity drug discovery is now an integral part of drug discovery at many biotech companies, including Genentech. We have found that successful drug discovery in this space requires novel approaches and strategies. In this talk I am going to discuss the suite of cellular, biochemical, and biophysical assays that we have applied and the lessons we have learned in the pursuit of discovering CRBN molecular glues for novel targets. First, I will discuss the application of HiBiT technology to directly screen molecular glue libraries for depletion of selected targets. While this led to our first successful molecular glue lead, challenges with off-target degradation and low sensitivity contributed to a low success rate for our targets of interest. I will share data suggesting that directly screening for induced protein interactions between a target of interest and CRBN promises to be both more sensitive and less prone to artifact than degradation screening. While PPI screens are more powerful, they are also more resource intensive, making appropriate target selection the key bottleneck in molecular glue discovery.
I will discuss our effort to get around this by shifting away from target-centric drug discovery to target-agnostic, unbiased screening approaches such as cell painting and proteomic technologies. I will share preliminary data from our own early attempts at unbiased screening, which led to the discovery of an exciting new lead. In conclusion, I will explain why our experiences have led us to believe that unbiased screening approaches, particularly those that utilize mass spectrometry-based detection of protein-protein interactions, are the future of molecular glue discovery.
Probing Degron Specificity to Expand the Reach of Molecular Glue Degraders
Open to view video.  |   Closed captions available
Molecular glue degraders are redefining the druggable target space. Unlike PROTACs or small-molecule inhibitors, molecular glues do not require a canonical binding pocket on the target. Instead, they function by modulating the substrate specificity of E3 ligase substrate receptors. This ability to redirect E3 ligases underscores the central importance of understanding E3–degron interactions in molecular glue discovery. To probe these interactions, we design and implement screening strategies to identify compounds that degrade targets with specific degron mutations. These approaches illuminate degron–ligase–glue relationships and provide insights that guide the design of focused molecular glue libraries. Degron mapping also enables validation of on-target activity by linking compound action directly to degron engagement. By incorporating both identified degrons and mutations that disrupt their recognition, we run counter-screens that generate clean hit lists and uncover weak degraders that may be missed by conventional thresholds. Together, these degron-centric strategies enhance molecular glue library design, strengthen on-target validation, and enable deeper mining of screening data for meaningful hits.
High-throughput biochemical assays identify new small molecule ligands for human ASGR1
Open to view video.  |   Closed captions available
Abstract: Discovering new, potent small molecule ligands for the human hepatocyte-specific asialoglycoprotein receptor (ASGPR) holds significant importance for advancing liver cell-targeted delivery and expanding its applications in Lysosome Targeting Chimeras (LYTACs). However, success in identifying novel, potent ASGPR ligands has been limited by the lack of robust high-throughput assays amenable to High-Throughput Screening (HTS). In this presentation, we describe the design and development of two novel biochemical competition binding assays using recombinant human trimeric ASGR1 (ASGPR subunit 1) protein as a mimic of the native multimeric complex and a reference Alexa-647 fluorophore-labelled tri-GalNAc ligand as a tracer. Both the ASGR1 time-resolved fluorescence resonance energy transfer (TR-FRET) assay and the fluorescence polarization (FP) assay run in 384-well microplate format and have a large detection range (IC50 of 2.5 nM - 100 µM), suitable for both monovalent and multivalent ASGPR ligands as well as oligonucleotide conjugates. The ASGR1 FP assay was miniaturized into a 1536-well format, and a pilot screen of a small molecule library of about 7,500 compounds was conducted, identifying 23 positive hits with IC50 values between 12 and 100 µM. Five of the primary hits were validated in orthogonal TR-FRET and Surface Plasmon Resonance (SPR) binding assays, and one was successfully docked into ASGR1, with the docking pose closely matching the binding mode of a structurally analogous compound co-crystallized with ASGR1. This work provides a new, reliable, and cost-effective platform for HTS campaigns on small molecule collections to discover new small molecule ligands of ASGPR for liver-targeted delivery of therapeutic agents and LYTACs.
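Competition-binding IC50 values like those reported above are conventionally extracted by fitting a four-parameter logistic to the dose-response curve. A generic sketch on synthetic, noiseless data (all values are hypothetical, not the assay's):

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(logc, bottom, top, log_ic50, hill):
    """Four-parameter logistic for a competition binding curve:
    signal falls from `top` to `bottom` as competitor concentration rises."""
    return bottom + (top - bottom) / (1.0 + 10.0 ** (hill * (logc - log_ic50)))

# Synthetic FP readout with a hypothetical 1 uM IC50 (log10 M = -6)
logc = np.linspace(-9.0, -4.0, 12)      # log10 molar concentration
signal = four_pl(logc, 20.0, 200.0, -6.0, 1.0)

popt, _ = curve_fit(four_pl, logc, signal, p0=[10.0, 150.0, -7.0, 1.0])
fitted_ic50 = 10.0 ** popt[2]           # back to molar units
```

Fitting in log-concentration space, as here, keeps the optimization well behaved across the several orders of magnitude spanned by a typical dose series.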
Ultra-High-Throughput Screening of Neo-Cysteine Molecular Glues for Targeting Mutated SMAD4 Protein
Open to view video.  |   Closed captions available
Abstract: Neo-protein-protein interactions (neoPPIs), directed by genetic mutation-encoded neo-amino acid residues, represent a promising class of precision medicine targets. Small molecules can mimic genetic mutational effects, creating neo-surfaces and acting as molecular glues to mediate neoPPIs and reprogram biological circuitry. This convergence of genomic alterations and chemical interventions highlights a strategy for targeting disease-associated mutations using neo-amino acid residue-directed molecular glues. Among these, neo-cysteine at the protein-protein interaction (PPI) interface represents a unique opportunity to develop covalent molecular glues. Despite this promise, identifying neo-cysteine molecular glues (neoCMGs) remains challenging. Here, we report the discovery of a neoCMG through an unbiased chemical screening approach, using SMAD4, a frequently mutated tumor suppressor gene, as a model system. We established a robust PPI biosensor assay for ultra-high-throughput chemical screening, leading to the identification of neoCMG101, a molecule capable of inducing proximity between the neo-cysteine of SMAD4-R361C and SMAD3. Biophysical and biochemical characterization revealed that neoCMG101 selectively and covalently modifies the neo-C361 residue on SMAD4 with its unique covalent warhead through an unexpected 1,4-nucleophilic addition followed by an aminobenzothiazole ring-opening reaction. This discovery demonstrates the feasibility of leveraging neo-cysteine-directed molecular glues to restore mutant PPIs, supporting a generalizable strategy for rapidly identifying neoCMG hits through unbiased chemical screening. Such an approach has the potential to unlock neoPPIs and their altered networks for biological exploration and therapeutic development.
Session: Emerging Technologies for Drug Discovery
Session Chair: David McSwiggen, PhD
The landscape of drug discovery is being rapidly transformed by a wave of breakthrough technologies that are expanding what’s possible in target identification, molecular design, and therapeutic development. This session will showcase pioneering tools and platforms that are redefining how we discover and develop medicines.
Utilizing Affinity Screening Approaches to Accelerate Hit Discovery
Open to view video.  |   Closed captions available
The drug discovery landscape is evolving, with an increased focus on rapid hit identification, novel therapeutic modalities, and the targeting of increasingly challenging disease-associated proteins. While these targets offer innovative ways to treat disease, they often lack distinct binding sites or are catalytically inert, thereby making conventional hit-finding approaches difficult. Here, we share our experiences implementing an Affinity Screening platform, utilising techniques such as Affinity Selection Mass Spectrometry (ASMS) and DNA-Encoded Library (DEL) screening, to enable rapid hit discovery and advance drug development efforts for these challenging targets.
Plug-and-Play: Click chemistry enabled bottom-up complexity
Open to view video.
Abstract: Click chemistry provides powerful, reliable bond-forming reactions for rapidly assembling bioactive molecules. We demonstrate sulfur(VI) fluoride exchange (SuFEx) as a second-generation click platform that leverages robust S(VI)–F hubs to connect diverse nucleophilic “plug-ins” via a modular “plug-and-play” strategy. Iminosulfur oxydifluoride (–NS(O)F₂) serves as a multi-electrophilic, aqueous-compatible connective core that enables staged substitution with phenols and amines; notably, aqueous buffers accelerate sulfamide formation within minutes at near-ambient temperature. Building on this biocompatibility, we integrate SuFEx with high-throughput experimentation: difluoride-bearing diversification handles are appended to an initial hit and coupled against hundreds of amines in plate format, with crude products assayed directly. Case studies targeting diverse disease-causing proteins, including the cysteine protease SpeB from Streptococcus pyogenes, the ENL YEATS epigenetic reader domain, and the glycoimmune checkpoint Siglec-7, will be discussed. Parallel SuFEx diversification is typically implemented as a miniaturized workflow in 1536-well plates at picomole scale. Overnight reactions give clean crude products that can be used directly in biological assays, often yielding markedly improved compounds relative to the starting scaffold. Combined with structural and biophysical validation (X-ray co-crystallography, SPR, DSF), the SuFEx-enabled platform offers a general, efficient route to fast hit-to-lead optimization, with clear potential to power scalable drug discovery.
A Novel High-Throughput Screening Platform Identifies New Antimalarial Compounds Targeting Mosquito Stages of Plasmodium falciparum
Open to view video.  |   Closed captions available
Abstract: Malaria remains a primary global cause of death and disability from infectious disease. Plasmodium falciparum, the deadliest among several human malaria parasite species, is transmitted exclusively by mosquitoes of the Anopheles genus. Given widespread mosquito resistance to insecticides, paralleled by the rise of drug resistance in parasites, the discovery of novel transmission-blocking agents and strategies is essential to achieve global malaria eradication. We recently generated an alternative control strategy based on killing P. falciparum parasites in the mosquito using antimalarial drugs delivered via treated surfaces such as bed nets. This approach requires new, effective compounds with modes of action distinct from standard antimalarials used in humans, to minimize the risk of cross-resistance that could undermine their therapeutic use. However, identification of compounds targeting P. falciparum development in mosquitoes is limited by the lack of a reproducible and robust in vitro system suitable for large-scale screening. In this study, we present the first platform optimized for high-throughput screening of compounds against multiple mosquito stages of P. falciparum development. We use a transgenic parasite line expressing green fluorescent protein and firefly luciferase to generate mature gametocyte cultures (the sexual stages transmitted from humans to mosquitoes), which are seeded in 384-well plates with insect cells and Matrigel to achieve the formation of ookinetes and oocysts (mosquito stages) at a rate comparable to in vivo settings. This system is automated using a liquid handling workflow, allowing reliable Matrigel coating of the plates, parasite seeding, media changes, and non-contact compound dispensing. Activity is assessed four days post-seeding using a bioluminescence-based assay and validated through high-content imaging. 
The platform was first used to test a library from the Medicines for Malaria Venture (MMV), consisting of 107 approved antimalarial drugs and other bioactive compounds with known mechanisms of action, for activity against both ookinete formation and oocyst growth. Next, a high-throughput screen of a library of 89,968 diverse small molecules was conducted to identify compounds that inhibit ookinete formation. The assay was consistently robust, with an average Z' factor of 0.81 and a signal-to-background (S/B) ratio of ~30. In total, 751 compounds (0.83%) inhibited parasite growth by more than 50% at 10 µM, with 256 hits inhibiting growth by more than 90%. Current efforts are focused on secondary and tertiary high-content imaging and PrestoBlue assays to confirm parasite inhibition and evaluate toxicity in insect cells. Chemical clustering identified compounds from the same class, allowing preliminary structure-activity relationship analyses. Concentration-response profiling to determine the IC50 values of the selected hits is ongoing. Parallel efforts are focused on broadening this system to assess drug activity against in vitro oocysts, as well as on validating our top hits in vivo.
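For context, the quoted assay-quality metrics can be computed directly from control wells. A minimal sketch with synthetic bioluminescence readings (hypothetical values, not the study's data), using the standard Z'-factor definition:

```python
import statistics

def z_prime(ctrl_a, ctrl_b):
    # Z' = 1 - 3*(sd_a + sd_b) / |mean_a - mean_b|
    return 1.0 - 3.0 * (statistics.stdev(ctrl_a) + statistics.stdev(ctrl_b)) \
        / abs(statistics.mean(ctrl_a) - statistics.mean(ctrl_b))

# hypothetical luminescence readings (arbitrary units)
growth_ctrl = [30100, 29500, 30400, 29800]  # DMSO wells: full ookinete formation
inhib_ctrl = [980, 1010, 995, 1020]         # kill-control wells: background signal

zp = z_prime(growth_ctrl, inhib_ctrl)                             # ~0.96 for this toy data
s_b = statistics.mean(growth_ctrl) / statistics.mean(inhib_ctrl)  # ~30
hit_rate = 751 / 89968                                            # ~0.83% of the library
```

A Z' above 0.5 is conventionally taken to indicate an HTS-ready assay, so an average of 0.81 sustained across a ~90,000-compound screen is a strong result.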
Session: Utilizing alternative pathways for targeted protein degradation beyond PROTACs
Session Chair: Matthew Calabrese, PhD
This session will focus on pathways and methods that extend beyond the use of heterobifunctional chimeric degraders (PROTACs). This will include contemporary approaches for the design of molecular glues, strategies to modulate protein homeostasis beyond the ubiquitin-proteasome system, and novel screening approaches in TPD.
Unveiling the landscape of degrader targets with chemoproteomics
Open to view video.  |   Closed captions available
Small molecules that induce protein degradation via ligase-mediated ubiquitylation are emerging as a promising pharmacological class. Global expression analysis is the primary method for exploring the target space and selectivity of these molecules and has revealed the degradation of numerous targets, including transcription factors and protein kinases. Despite this progress, structural studies suggest thousands of potential IMiD targets remain undiscovered. To identify these elusive targets, we developed a high-throughput, lysate-based IP-MS workflow for unbiased identification of molecular glue targets of IMiD-CRBN. Our study offers a comprehensive catalog of CRBN-recruited targets (>290 targets) and introduces a scalable workflow for discovering new drug-induced protein interactions in cell lysates. This method enhances our understanding of the breadth and mechanisms of action of IMiDs, potentially opening new avenues for targeted therapy development.
A mechanistic framework for the characterization and optimization of reversible and covalent molecular glues
Open to view video.  |   Closed captions available
Abstract: Monovalent molecular glues are an emerging modality with significant potential for targets broadly considered intractable, such as transcription factors and scaffolding proteins. Heterobifunctional molecules such as PROTACs have established mechanistic and mathematical models linking ternary complex formation to the underlying thermodynamics of the system. Such models are needed, particularly in drug discovery, to inform the design of glues based on derived thermodynamic parameters and to correlate results from biochemical assays with those of cellular assays. However, as molecular glues are usually monovalent and can stabilize pre-existing protein-protein interactions (PPIs), they require distinct theoretical and experimental frameworks. In this work, we describe for the first time a comprehensive analytical framework for characterizing molecular glue-induced ternary complexes in biochemical assay systems (FRET and fluorescence polarization assays). We demonstrate that, by measuring PPI levels from a single matrix experiment, it is possible to determine all of the key thermodynamic parameters driving ternary complex generation: the basal affinity between the protein partners (K1), the glue’s affinity for its binding partner (K2), and the glue-induced cooperativity (α). We have validated our framework using thermodynamic modeling and experimental data across multiple targets, including the published model system β-catenin:β-TrCP:NRX-252262. After establishing and validating our framework and associated analytic equations under pseudo-first-order conditions, we expanded our work to cover most cases a researcher might encounter, including characterizing glues under tight-binding conditions, situations where there is no basal affinity between the two proteins of interest, and practical and theoretical guidance on the treatment of covalent molecular glues. 
In conclusion, we have established a framework and associated analytic equations to determine all of the key thermodynamic parameters driving molecular glue-induced ternary complex formation from a single biochemical ternary complex matrix experiment. We have experimentally validated this approach using different assay technologies and thermodynamic modeling. Lastly, we build upon this work to cover a broad set of experimental situations, with a particular focus on covalent molecular glues.
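As a rough illustration of the parameters named above (not the authors' derivation), here is a minimal pseudo-first-order equilibrium sketch: free [A] and [glue] are approximated by their totals, each bound state of protein B is weighted by its equilibrium factor, and the cooperativity α scales the ternary weight. All numbers are illustrative.

```python
def ternary_fraction(A, G, K1, K2, alpha):
    """Fraction of protein B found in the A:B:glue ternary complex.

    Pseudo-first-order sketch: free [A] and [glue] approximated by totals.
    K1: basal A:B affinity (Kd); K2: glue:B affinity (Kd);
    alpha: cooperativity (alpha > 1 stabilizes the ternary complex).
    """
    w_ab = A / K1                        # binary A:B state
    w_bg = G / K2                        # binary B:glue state
    w_abg = alpha * A * G / (K1 * K2)    # ternary state, alpha-fold enhanced
    return w_abg / (1.0 + w_ab + w_bg + w_abg)

# illustrative numbers: weak basal PPI (K1 = 10), 10-fold cooperativity
base = ternary_fraction(A=1.0, G=1.0, K1=10.0, K2=1.0, alpha=1.0)
glued = ternary_fraction(A=1.0, G=1.0, K1=10.0, K2=1.0, alpha=10.0)
```

Because a monovalent glue occupies a single site on B, this ternary signal rises monotonically with glue concentration rather than showing the hook effect of bifunctional molecules; fitting such an expression over an [A] × [glue] titration matrix is what allows K1, K2, and α to be extracted from one experiment.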
Identification of an allosteric site on the E3 ligase adapter cereblon
Open to view video.  |   Closed captions available
Cereblon (CRBN) is an E3 ligase substrate adaptor that was first reported in 2004 in studies mapping a disease locus associated with intellectual disability.1 The C-terminal domain of CRBN was later identified as the primary binding target of thalidomide and was thus termed the thalidomide-binding domain (TBD).2 Thalidomide and its derivatives engage CRBN and promote the recruitment of chemically induced substrates (“neosubstrates”) for ubiquitination and proteasomal degradation.3,4 The degradation of neosubstrates through CRBN partly underlies the therapeutic efficacy of thalidomide analogs in the treatment of hematopoietic malignancies3,4,5 and has also been implicated in birth defects during prenatal development.6,7 These findings have inspired significant efforts in the targeted protein degradation field to design extensive libraries of ligands that bind to the TBD8,9 toward discovering neosubstrates as therapeutic targets and assessing substrate selectivity. Despite intensive efforts focused on the TBD,10 our understanding of other domains and binding pockets on CRBN that regulate degradation outcomes is limited. Although allostery plays an important role in the regulation of E3 ligases like CRBN,11,12 the identification and development of allosteric ligands for E3 ligases remains limited,13 with no cooperative allosteric ligands identified to date. Here, we describe the discovery of a conserved cryptic allosteric binding pocket, distal to the thalidomide binding site, that is cooperatively engaged by the small molecule ACB to enhance the binding of orthosteric ligands and the recruitment of neosubstrates. Crystal structures of CRBN-DDB1 bound to lenalidomide and ACB reveal the first allosteric binding site on a mammalian E3 ligase substrate adapter. 
The allosteric binding pocket was characterized by biochemical, structural, and cellular assays, revealing a conserved allosteric site that stabilizes cereblon in cells. Engagement of the allosteric site cooperatively modulates the orthosteric thalidomide-binding site, resulting in enhanced recruitment and neosubstrate degradation by thalidomide derivatives. Strategic use of ACB can further accelerate the discovery of neosubstrates recruited by orthosteric thalidomide derivatives, as shown by the identification of neosubstrates such as the therapeutic target Wee1 G2 checkpoint kinase. These data establish the importance of a previously unrecognized conserved cryptic binding pocket for cereblon’s function and orthosteric neosubstrate degradation. Future studies will focus on structure-activity relationships to develop ACB analogues with independent functionality through the allosteric site.
Session: Nucleic Acid-Targeted Therapeutics
Session Chair: Rachel Moore, PhD
As our understanding of the genome and transcriptome deepens, nucleic acids have emerged not only as therapeutic agents but also as high-value drug targets. This session will highlight cutting-edge approaches to modulate RNA and DNA directly, offering new strategies to treat diseases driven by aberrant gene expression, splicing, or non-coding RNA function.
Enabling Technologies for Revealing the Druggability of RNA-Protein Interactions
Open to view video.  |   Closed captions available
Abstract: RNAs are invariably bound to and often modified by RNA-binding proteins (RBPs), which regulate many aspects of coding and non-coding RNA biology. Disruption of this network of RNA-protein interactions (RPIs) has been implicated in many human diseases, and targeting RPIs has arisen as a new frontier in RNA-targeted drug discovery. This talk will highlight technologies newly developed by the Garner laboratory for validating and screening RPIs to enable RNA- and RBP-targeted drug discovery.
Optimization of Three Cell-Based Assays to Identify Small Molecule Up-Regulators of Genes in Haploinsufficiency Diseases with High Throughput Screening
Open to view video.  |   Closed captions available
Precise gene dosage is critical across numerous signaling pathways, which is why haploinsufficiency is a common mechanism of disease. According to the OMIM database (Online Mendelian Inheritance in Man), haploinsufficiency of over 1,200 genes is linked to adverse phenotypes. Alagille Syndrome (ALGS) belongs to this category of diseases. It is a rare autosomal dominant disease characterized by hepatic, cardiovascular, and facial abnormalities. The syndrome is caused by haploinsufficiency due to heterozygous mutations in JAG1 (98%) or NOTCH2 (2%). Though the genetic cause has been identified, we currently lack approved drugs to treat ALGS at its source. A general therapeutic strategy for this class of diseases is to upregulate the wild-type copy of the mutated gene – in this case, JAG1. We therefore aim to perform high-throughput screening (HTS) of our in-house small molecule compound libraries at the National Center for Advancing Translational Sciences (NCATS) to identify hits that up-regulate JAG1 in 2D human cellular models. Here we develop and compare three HTS-compatible cell-based assays that measure the upregulation of JAG1 in human hepatic stellate cells. The first assay uses RNA fluorescence in situ hybridization (RNA FISH) to measure JAG1 mRNA. The second measures JAG1 protein expression via immunofluorescence staining with a fluorophore-conjugated anti-JAG1 antibody. The third method uses Nano Luciferase (NanoLuc) Binary Technology, wherein human hepatic stellate cells are tagged with a small NanoLuc fragment (HiBiT) at the endogenous JAG1 locus via CRISPR-Cas9. After optimizing and miniaturizing these assays to 384-well format, we validated them using a panel of 32 HDAC inhibitor small molecules to directly compare their dose-response curves. 
These complementary methods can be used as orthogonal assays to triage hits from HTS and select the most promising candidates to progress through the drug development pipeline. The assays described here are also broadly applicable in studies to identify up-regulators of causative genes in other haploinsufficiency diseases.
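Dose-response comparisons of this kind typically reduce to fitting a four-parameter logistic (Hill) curve per compound and assay. A dependency-free sketch on synthetic data (hypothetical reporter values; a real pipeline would fit all four parameters by nonlinear least squares rather than this fixed-parameter grid search):

```python
import math

def four_pl(c, bottom, top, ec50, hill):
    # 4-parameter logistic: response at concentration c (same units as ec50)
    return bottom + (top - bottom) / (1.0 + (ec50 / c) ** hill)

# synthetic reporter readout with a true EC50 of 1e-6 M
concs = [10.0 ** e for e in range(-9, -3)]            # 1 nM .. 100 uM
resp = [four_pl(c, 0.0, 100.0, 1e-6, 1.0) for c in concs]

def fit_ec50(concs, resp):
    # grid search over log-spaced EC50 candidates; bottom/top/hill held fixed
    best, best_sse = None, float("inf")
    for x in range(-180, -60):                        # log10(EC50) from -9 to ~-3
        cand = 10.0 ** (x / 20.0)
        sse = sum((r - four_pl(c, 0.0, 100.0, cand, 1.0)) ** 2
                  for c, r in zip(concs, resp))
        if sse < best_sse:
            best, best_sse = cand, sse
    return best

ec50 = fit_ec50(concs, resp)   # recovers ~1e-6 on this synthetic data
```

Running the same fit on each of the three assay readouts (FISH intensity, immunofluorescence, HiBiT luminescence) lets the curves be compared on a common EC50 scale.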
Session: Next-Generation Protein/peptide Therapeutics
Session Chair: Hannah Bolt, PhD, MBA, MRSC, CChem
Proteins and peptides are emerging as powerful therapeutic agents, offering unmatched specificity, potency, and the ability to modulate complex biological pathways. This session explores the latest innovations driving the next wave of protein- and peptide-based therapeutics—from multi-specific antibodies to cell-penetrating peptides, macrocyclic scaffolds, and synthetic biology-inspired constructs.
A paradigm shift in peptide drug discovery: accelerating hit-to-lead optimisation using affinity-selection mass spectrometry
Open to view video.  |   Closed captions available
Abstract: Peptide hit-to-lead optimization is traditionally a time-consuming process, requiring many iterative design-make-test-analysis (DMTA) cycles and multi-year investment to achieve candidate nomination. We have developed a novel platform that combines barcode-free, split-pool synthesis of artificial intelligence-enhanced combinatorial peptide libraries with pooled affinity-selection mass spectrometry (ASMS) to significantly reduce hit-to-lead optimisation timelines from years or months to weeks. Our ASMS workflow enables the rapid synthesis, identification, and affinity ranking of peptides derived from parent binding sequences. This approach is complementary to the display techniques used for peptide hit identification and allows rapid expansion into non-native chemical space. Chemical split-pool synthesis facilitates the inclusion of diverse unnatural amino acid building blocks and cyclisation strategies. While several groups have already showcased the effectiveness of AS-MS as a peptide hit-finding method (Quartaro, 2020; Zhang, 2021; Zhang, 2024; Lee, 2025), our work leverages this powerful technology differently, for the purpose of peptide optimisation in the hit-to-lead phase. We successfully applied our ASMS platform to optimise peptides for use as targeting ligands. By integrating rational design with novel evolutionary algorithms, we harnessed the power of combinatorial chemistry to create diverse peptidomimetic libraries and used ASMS to competitively rank these sequences. In one case, we were able to drive multi-parameter optimisation with a >10-fold affinity enhancement compared to the parent peptide and demonstrated a significant in vivo stability improvement, extending peptide half-life from 3 minutes to 1.5 hours. This platform not only compresses DMTA cycle timelines but also expands access to extensive and diverse chemical space, providing the big data needed to enable further machine learning. 
We show that ASMS has the potential to significantly expedite the hit-to-lead phase of peptide development and deliver differentiated sequences with enhanced potency, proteolytic stability and improved physicochemical properties. By integrating artificial intelligence and machine learning, combinatorial peptide synthesis, and AS-MS pooled screening, we have demonstrated a paradigm shift in peptide optimisation and are transforming the landscape of peptide drug discovery.
Automated Fast-Flow Peptide Synthesis: Rapid and Reliable Production of Complex Sequences
Open to view video.  |   Closed captions available
Abstract: Peptide and protein therapeutics are an increasingly important class of biopharmaceuticals, yet conventional batch solid-phase synthesis is slow and limited in the length and complexity of accessible sequences. To overcome these constraints, Amide Technologies has developed an Automated Fast-Flow Peptide Synthesizer (AFPS) platform that applies flow chemistry and precise engineering control to achieve peptide synthesis with unprecedented speed and quality. AFPS employs continuous reagent delivery through a heated flow reactor, completing couplings within minutes. In-line heating, rapid mixing, and tightly controlled reagent delivery allow residues to be incorporated with high efficiency, minimizing aggregation and side reactions. The platform is engineered for long-term stability, routinely synthesizing sequences exceeding 120 amino acids with high crude purity in just a few hours. Engineering improvements at Amide include optimized fluidics, thermal management, and real-time process monitoring, which enhance efficiency, reliability, and system stability. Automated reagent delivery and integrated safety controls allow AFPS to run syntheses unattended, freeing operator time while maintaining high-quality output. These innovations enable synthesis of complex targets such as mirror-image proteins, branched peptides, macrocycles, peptide libraries, and sequences inaccessible to conventional or biological methods, all within hours. By uniting advanced flow chemistry with robust automation, Amide’s AFPS platform establishes a new benchmark in peptide manufacturing, expanding the possibilities for discovery and therapeutic development.
DELs in Cells - Human Transcription Factor Screening
Open to view video.  |   Closed captions available
Transcription factors (TFs) are increasingly recognized as promising yet challenging targets for pharmaceutical intervention, particularly in oncology. Long considered "undruggable" due to their intrinsically disordered structures and shallow protein–protein interaction (PPI) surfaces, TFs lack the well-defined binding pockets typical of conventional drug targets such as enzymes. Their functional conformation further depends on post-translational modifications, binding partners, and the complex intracellular environment—features poorly mimicked in vitro. Thus, drug discovery strategies that operate within the native cellular environment appear highly desirable. DNA-Encoded Libraries (DELs) have emerged as powerful tools for ultra-high-throughput screening of small-molecule binders, including PPI inhibitors. However, targeting DNA-binding proteins like TFs with DELs poses a unique challenge: false positives arising from specific interactions between the TF and DNA barcodes that resemble its native binding motif. Protein engineering strategies to mitigate this—such as deletion or mutation of DNA-binding domains—can disrupt key structural and functional features of the TF. Hence, if possible, it would be preferable to screen on full-length protein. Here, we present a successful application of DEL screening within living cells to identify small-molecule binders of a full-length transcription factor. By including duplex DNA containing the TF’s cognate binding motif in the screen, we demonstrate that interactions with DEL barcodes can be efficiently suppressed. This confirms both the specificity of the identified hits and the preservation of the TF’s DNA-binding functionality. Our data illustrate that intracellular DEL screening can overcome long-standing barriers in targeting transcription factors.
Session: Difficult Targets - Expanding Target Space
Session Chair: Aleksandra Nita-Lazar, PhD
Despite decades of progress in drug discovery, a significant portion of the human proteome remains "undruggable". This session explores emerging strategies including genomics and proteomics approaches, technologies, and modalities designed to overcome these challenges and redefine what’s considered druggable.
Enhanced Proteomics and Transcriptomics Reveal the Role of DHHC12 in Immune System Palmitoylation
Open to view video.  |   Closed captions available
Abstract: Despite rapid advances in the sensitivity and technology of mass spectrometry-based proteomics, some proteins and post-translational modifications remain difficult to detect and quantify. Protein palmitoylation, or S-acylation, is the most prevalent lipid modification of proteins, with over 5,000 known palmitoylated proteins in humans. This process is catalyzed by a family of 23 acyltransferases characterized by a conserved DHHC motif (zDHHC) and has been implicated in numerous human diseases, including autoimmune disorders. We aimed to overcome challenges in identifying palmitoylated proteins, especially hydrophobic membrane proteins including many essential immunoproteins, and to investigate potential roles of DHHC12 in the immune system. By combining enhanced proteomics with transcriptomic analysis, we sought to elucidate the biological implications of this important modification in immune function. We improved the Acyl-Biotin Exchange (ABE) proteomics method, increasing sensitivity and reducing false positives in the detection of palmitoylated proteins. Novel palmitoylated proteins were validated using an orthogonal click chemistry approach. Transcriptomic analysis focused on DHHC enzymes, using publicly available data from 19 human immune cell types deposited in the Human Protein Atlas. Correlation analysis was performed between DHHC12 expression and that of other genes across the immune cell types. Using the enhanced proteomics approach, we identified over 4,000 non-redundant candidate palmitoylated proteins, of which more than 1,000 were novel. The orthogonal click chemistry approach successfully validated these novel palmitoylated proteins, confirming the reliability of our findings. Transcriptomic analysis revealed that DHHC12 is the most abundantly expressed DHHC enzyme across the 19 human immune cell types, highlighting its potential importance in immune system function. 
Furthermore, we identified hundreds of genes whose expression correlates with DHHC12 expression in human immune cells, providing new insights into the role of palmitoylation in the immune system. Our ongoing investigation into DHHC12-specific substrates under LPS stimulation in macrophages has revealed key targets of this enzyme in immune response pathways. We anticipate that this study will provide crucial information about the role of DHHC12 in regulating the palmitoylation of important immune signaling proteins. The integration of our palmitoylation proteomics data with the transcriptomic analysis yielded valuable insights, including patterns that may suggest DHHC12's involvement in regulating specific immune pathways, particularly those related to innate immune responses and inflammation. These findings collectively advance our understanding of protein palmitoylation in the immune system and highlight the potential of DHHC12 as a key regulator of immune function through its palmitoylation activity. The combination of our enhanced proteomics approach with transcriptomic analysis provides a comprehensive view of DHHC12's role in immune system palmitoylation, setting the stage for future studies to elucidate its mechanisms and targets in immune regulation. The authors declare no conflicts of interest. This work was supported (in part) by the Division of Intramural Research, NIAID, NIH.
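The correlation step described here reduces to computing a Pearson coefficient between DHHC12 and each candidate gene across the profiled cell types. A minimal sketch with made-up expression values (illustrative only, not Human Protein Atlas data):

```python
import statistics

def pearson(x, y):
    # Pearson correlation coefficient of two equal-length expression vectors
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# hypothetical expression values across six immune cell types
dhhc12 = [12.0, 8.5, 15.2, 3.1, 9.8, 11.4]
candidate = [10.1, 7.9, 13.8, 2.5, 8.8, 10.0]  # a putatively co-expressed gene
r = pearson(dhhc12, candidate)                 # ~0.99: strong co-expression
```

In practice one would rank every gene by its correlation with DHHC12 (or use Spearman's rho for robustness to outliers) and apply multiple-testing correction before interpreting candidate pathways.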
Increasing mass spectrometry throughput using time-encoded sample multiplexing via timePlex
Open to view video.  |   Closed captions available
Abstract: Liquid chromatography-mass spectrometry (LC-MS) enables precise and accurate quantification of analytes at high sensitivity, but the rate at which samples can be analyzed remains limiting. Throughput can be increased by multiplexing samples in the mass domain with plexDIA, yet multiplexing along one dimension only linearly scales throughput with plex. To enable combinatorial scaling of proteomics throughput, we developed a complementary multiplexing strategy in the time domain, termed ‘timePlex’. timePlex staggers and overlaps the separation periods of individual samples. This strategy is orthogonal to isotopic multiplexing, which enables combinatorial multiplexing in the mass and time domains when the two are paired, and thus multiplicatively increases throughput. We demonstrate this with 3-timePlex and 3-plexDIA, enabling the multiplexing of 9 samples per LC-MS run, and with 3-timePlex and 9-plexDIA, exceeding 500 samples/day with a combinatorial 27-plex. Crucially, timePlex supports sensitive analyses, including of single cells. These results establish timePlex as a methodology for label-free multiplexing and combinatorial scaling of the throughput of LC-MS proteomics. We project this combined approach will eventually enable an increase in throughput exceeding 1,000 samples/day.
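The multiplicative scaling claim is simple arithmetic: the time-domain and mass-domain plex factors multiply per unit of instrument time. A sketch assuming roughly 19 effective LC-MS runs per day (a hypothetical duty cycle chosen only to reproduce the >500 samples/day figure, not a number from the abstract):

```python
def samples_per_day(runs_per_day, time_plex, mass_plex):
    # combinatorial multiplexing: the two plex dimensions multiply
    return runs_per_day * time_plex * mass_plex

RUNS = 19  # assumed effective LC-MS runs/day (hypothetical)

label_free = samples_per_day(RUNS, 1, 1)  # 19: no multiplexing
nine_plex = samples_per_day(RUNS, 3, 3)   # 171: 3-timePlex x 3-plexDIA
combo_27 = samples_per_day(RUNS, 3, 9)    # 513: 3-timePlex x 9-plexDIA
```

The point of the combinatorial design is visible here: the 27-plex gains a full order of magnitude over the unmultiplexed baseline without any increase in instrument time.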
Protein editing using proximity-inducing molecules
Open to view video.  |   Closed captions available
The information flow in biological systems involves protein editing, including the addition or removal of post-translational modifications (PTMs) by “writer” or “eraser” enzymes. The most widely used protein editors (e.g., PROTACs) are chimeric small molecules formed by fusing a binder of a protein of interest (POI) to a binder of a writer/eraser; these chimeras induce proximity between the POI and the enzyme to add or remove a PTM. However, because these chimeric molecules must recruit a catalytically active writer/eraser, they rely on non-inhibitory binders that are scarce, often of poor quality, and challenging to discover, and some enzymes may not even contain non-inhibitory pockets. Such binders exist for only 4 of 600 ubiquitin ligases and are nearly non-existent for other writers/erasers. A design of chimeric molecules that employs existing enzyme inhibitors would be scalable and generalizable, enabling diverse laboratories to rapidly develop protein editors without engaging in ligand discovery campaigns that can be resource- and time-intensive. I will describe a scalable platform for protein editing using GRoup-transfer chimeras for Inducing Proximity (GRIPs), which consist of an inhibitor of a writer or eraser enzyme connected to the POI binder via a group-transfer handle. The inhibitor end of a GRIP enables the transfer of the POI binder onto a Cys/Lys residue of the enzyme via transferase-type reactivity. Competition with the (co)substrate or protein dynamics releases the inhibitor, enabling the enzyme to modify the POI. We developed 42 group-transfer handles with tunable reactivity toward Cys/Lys side chains, and GRIPs for >50 inhibitor/enzyme pairs comprising kinases, phosphatases, glycosyl transferases, glycosidases, and methyltransferases. To the best of our knowledge, no other chimera platform has shown such scalability. 
GRIPs recruited the endogenous phosphatase SHP2 to STAT3, thereby removing the latter’s phosphorylation and switching off the JAK-STAT pathway. An AKT GRIP induced Liprin phosphorylation, with dose and temporal control, triggering the latter’s phase separation, which is critical for neuronal exocytosis. Finally, using GRIPs that dimerize EGFR and induce its endogenous phosphorylation, we switched on the EGFR pathway. These GRIPs mimicked EGF in promoting the growth of cells used for the biomanufacturing of biologics, offering a proteolytically stable, low-cost alternative to EGF. They also induced the death of cells with oncogenic, but not wild-type, KRAS, potentially by perturbing the cancer cells’ Goldilocks level of oncogenic signaling. Overall, GRIPs enable programmable, scalable, and selective modulation of protein function across diverse biological systems, with applications in basic research, biomedicine, and biotechnology.
Rapid Single-Cancer Cell Encapsulation in Extracellular Matrix-mimicking Micromodel for 3D Cancer Cell Morphology and Chemotoxicity Investigations
Open to view video.  |   Closed captions available
Abstract: 2D cell culture is a staple of cancer research but fails to model pathophysiology accurately, and the standard practice of bulk analysis compounds the problem by confounding data from heterogeneous cell populations, such as those found in tumors. Single-cell techniques allow for in-depth investigation of diverse cell populations in high throughput, but these technologies typically fail to incorporate 3D models. Here, we developed a method for rapidly encapsulating cells in extracellular matrix-mimicking micromodels. We show the micromodel is compatible with immunocytochemistry (ICC), chemotoxicity assays, and single-cell Western blotting (scWB). Bone micromodels were made by combining neutralized collagen 1 (Col1), hydroxyapatite (HA)-functionalized magnetic beads, and GFP-expressing Ewing sarcoma cancer cells, then emulsifying the model in mineral oil supplemented with surfactant, resulting in millions of solid microgels within 5-10 minutes. We optimized sphere size and cell encapsulation rate through a design of experiments and performed custom quantitative image analysis. We performed ICC on cells in spheres after 24 or 48 hours by staining for beta-tubulin and paxillin to visualize cell morphology. Samples were imaged on a confocal microscope, and features were analyzed in CellProfiler. Cells in micromodels and 2D were subjected to 72 hours of doxorubicin dosing to test chemotoxicity. Finally, spheres below 100 µm were isolated using a cell strainer and settled into microwells stamped on a polyacrylamide gel-coated microscope slide for scWB. Cells were lysed for 20 s in 2X RIPA + 10% glycerol; then, the released proteins were electrophoretically separated into the PAG gel for UV-fixation and antibody probing.
In these experiments, agitation speed and surfactant concentration significantly impacted micromodel size and the frequency of single-cell spheres (standard least squares, p < 0.01). Specifically, high spin speeds and surfactant concentrations decreased micromodel diameter and increased the proportion of models containing a single cell. The model was compatible with ICC and scWB procedures, since we could observe 3D cell morphology in spheres and were able to electrophoretically extract proteins for immunoprobing. Cells were additionally observed to be more chemo-resistant when cultured in the micromodel for 24 hrs. before dosing with doxorubicin for 72 hrs. (sum-of-squares on IC50 curve fits, p < 0.0001). We believe such accessible tissue modeling, as shown here, will allow for more in-depth and clinically relevant cancer research than ever before when combined with single-cell technologies. These data additionally lead us to believe that the micromodel platform is applicable to other -omics techniques, given the ability to deliver solutions and extract cell content as demonstrated here. We plan to utilize the micromodel platform to isolate cells and perform proteomics analysis after delivering standard and novel therapies to document therapy resistance mechanisms across diverse cancer cell populations.
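The IC50 comparison described above (a right-shifted dose-response curve in the micromodel versus 2D) can be sketched computationally. The snippet below is an illustrative sketch only: the viability values, Hill-curve parameterization, and variable names are hypothetical and not taken from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(dose, top, bottom, ic50, slope):
    """Four-parameter logistic (Hill) dose-response model."""
    return bottom + (top - bottom) / (1.0 + (dose / ic50) ** slope)

# Hypothetical viability fractions after 72 h of doxorubicin.
doses = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0])  # µM
via_2d = np.array([0.98, 0.95, 0.80, 0.45, 0.20, 0.08, 0.05])
via_3d = np.array([0.99, 0.97, 0.93, 0.80, 0.55, 0.30, 0.15])  # micromodel

# Bounded fit keeps parameters positive so the power term stays well-defined.
p0 = [1.0, 0.05, 0.5, 1.0]
bounds = (1e-6, [2.0, 0.5, 100.0, 10.0])
params_2d, _ = curve_fit(hill, doses, via_2d, p0=p0, bounds=bounds)
params_3d, _ = curve_fit(hill, doses, via_3d, p0=p0, bounds=bounds)

ic50_2d, ic50_3d = params_2d[2], params_3d[2]
print(f"IC50 2D: {ic50_2d:.2f} µM, IC50 micromodel: {ic50_3d:.2f} µM")
# A right-shifted IC50 in the micromodel indicates chemoresistance; in
# practice, significance comes from an extra-sum-of-squares F-test comparing
# a shared-IC50 fit against independent fits.
```

In practice the sum-of-squares comparison cited in the abstract would be run on replicate wells rather than single curves.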
Omics and Spatial Omics
Session: High-Dimensional and Integrated Multi-Omics in Modern Therapeutics
Session Chair: Molly He, PhD
This session will explore the frontier of intelligent multi-omics, emphasizing the integration of multiple layers of spatial omics data from the same sample through streamlined workflows, enabling deeper insights with greater efficiency. Challenging the notion that more data equates to better science, we advocate for purposeful data generation that maximizes insight while minimizing time, effort, and cost. Through case studies and emerging strategies, we’ll showcase how intelligent, integrated omics, combined with AI, is accelerating therapeutic discovery and development in the modern era.
The Living Code: AVITI24 and the story of change
Open to view video.  |   Closed captions available
Abstract: Next-generation sequencing (NGS) technologies have become central to multi-omics studies, yet current approaches typically require multiple assays and instruments to capture segmented layers of information. Each assay introduces challenges—being time-consuming, resource-intensive, expensive, and vulnerable to batch effects. In addition, incompatible sample preparation requirements often force researchers to compromise experimental design, limiting both throughput and dataset quality. These constraints also hinder the development of AI foundation models for systems biology, as data generated from different samples, instruments, and workflows introduces noise that obscures true biological signals. We present AVITI24, the first fully integrated platform capable of capturing five dimensions of biological information from the same sample, using the same instrument and workflow. These dimensions include RNA, proteins, cell morphology, subcellular spatial resolution, and dynamic responses with time-resolved information—all derived directly from native samples with no artificial noise. Powered by Direct In-Sample Sequencing (DiSS™) and enabled through Avidity Sequencing™ chemistry, AVITI24 eliminates the need for library preparation altogether. This innovation dramatically reduces complexity, minimizes sources of error, and enables rapid turnaround times of just 1–3 days, depending on the application. Through case studies, we will highlight how AVITI24 unlocks new insights by enabling continuous observation of cell dynamics across diverse biological contexts. Applications include the discovery of novel combination drug therapies, enhanced high-throughput optical pooled screening for drug target identification, and other scenarios where integrating temporal, spatial, and molecular data provides a more holistic understanding of biological systems.
By unifying multiple layers of biological readouts into a single workflow, AVITI24 represents a transformative step toward building robust, noise-free datasets ideally suited for next-generation AI models. This convergence of multi-omics, spatial biology, and temporal dynamics not only accelerates discovery but also reshapes how researchers approach systems biology—replacing fragmented, error-prone data collection with a single, coherent story of change.
Decoding human cell architecture – from spatial proteomics to cell modeling
Open to view video.  |   Closed captions available
Biological systems are functionally defined by the nature, amount, and spatial location of their proteins. We have generated an image-based map of the subcellular distribution of the human proteome and showed that there is great complexity to the subcellular organization of the cell, giving rise to potential pleiotropic effects. As much as half of all proteins localize to multiple compartments, and around 20% of the human proteome shows temporal variability. Our temporal mapping results show that cell cycle progression explains less than half of all temporal protein variability and that most cycling proteins are regulated post-translationally, rather than by transcriptomic cycling. This work is critically dependent on computational image analysis, and I will discuss machine learning approaches for embedding spatial subcellular patterns, and how such embeddings as well as generative AI can be used to build multi-scale models of cell architecture. In summary, I will demonstrate the importance of spatial proteomics data for improved single-cell biology and present how the freely available Human Protein Atlas database (www.proteinatlas.org) can be used as a resource for life science.
Mapping Neurodegeneration: Multi-Omics Insights into Patient Disease and Preclinical Models
Open to view video.  |   Closed captions available
Abstract: Neurodegenerative diseases represent a major clinical challenge, and effective therapies remain limited. To better understand disease and accelerate biomarker discovery, we applied a multi-omics strategy across human patient clinical samples and preclinical in vivo models. Plasma, cerebrospinal fluid, and brain tissue were analyzed using a combination of targeted and untargeted platforms, including customized multiplex proteomic assays (Luminex, Olink), untargeted proteomics, lipidomics, and metabolomics by mass spectrometry, together with spatial transcriptomic and lipidomic profiling (GeoMx digital spatial profiling and mass spectrometry imaging). Preliminary analyses revealed differentially expressed molecules in plasma and cerebrospinal fluid from patients, with partial concordance in animal models. Lipid metabolism pathways, immune-related signals, and cancer-testis antigens emerged as potential areas of interest. Furthermore, integration of bulk and spatial data identified molecular signatures that have the potential to improve mechanistic insight and inform therapeutic development. This exploratory study demonstrates the value of combining targeted and untargeted omics technologies to generate a more comprehensive view of neurodegenerative biology. The approach shows promise for identifying clinically relevant biomarkers, assessing the translational relevance of preclinical models, and informing the design of more effective, patient-specific treatments. Further analyses are underway to validate candidate biomarkers and to establish their potential for preclinical therapeutic development and clinical application.
A Scalable Platform for Single-Cell Co-profiling of the Transcriptome and Genotype
Open to view video.  |   Closed captions available
Abstract: Single-cell RNA sequencing (scRNAseq) has become a routine tool for profiling cellular composition and transcriptional responses across health and disease. Beyond transcriptomic analysis, scRNAseq has also been used to detect expressed genetic variants and infer perturbation identity in CRISPR–Cas-based high-throughput screens through guide RNA sequencing. However, these transcriptome-based measurements face several limitations. They are restricted to transcribed regions, depend on sufficient expression levels to overcome the inherent sparsity of scRNAseq data, and, in the context of genetic perturbations, the detection of a guide RNA transcript does not confirm successful editing at the target locus. To address these limitations, we developed a high-throughput single-cell RNA & DNA co-assay based on our Semi-Permeable Capsule (SPC) technology, enabling highly parallel, multistep processing of single cells. The method couples whole-transcriptome profiling with targeted genotyping by multiplex PCR amplicon sequencing in the same cells, directly linking genotype to transcriptional state, confirming CRISPR edits at genomic targets, and functionally characterizing engineered or naturally occurring mutations. We profiled >100,000 primary human cells across multiple peripheral blood mononuclear cell (PBMC) donors using co-sequencing of RNA and amplicons targeting 10 SNP-containing loci. Over 85% of captured cells yielded genotypes, enabling donor deconvolution from variant calls in the amplicons and robust mapping of genetic background to cell-state heterogeneity. Designed for scalability, the assay supports user-defined PCR panels targeting transcribed and non-transcribed loci and is readily extensible in both cell numbers and amplicon breadth. We will present results demonstrating the utility of the platform for dissecting genotype–phenotype relationships at single-cell resolution.
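Donor deconvolution from amplicon genotypes, as described above, reduces to matching each cell's sparse SNP calls against donor reference genotypes. A minimal sketch under assumed, hand-made genotypes and a simple best-match rule (none of the numbers or names come from the authors' pipeline):

```python
import numpy as np

# Hypothetical reference genotypes for 4 donors at 10 SNP loci
# (0/1/2 = alt-allele dosage).
donor_gt = np.array([
    [0, 0, 0, 0, 0, 1, 1, 1, 1, 1],
    [2, 2, 2, 2, 2, 1, 1, 1, 1, 1],
    [0, 1, 2, 0, 1, 2, 0, 1, 2, 0],
    [2, 1, 0, 2, 1, 0, 2, 1, 0, 2],
])

def assign_donor(cell_calls, donor_gt, min_calls=3):
    """Assign a cell to the donor whose reference genotypes agree with the
    most of its amplicon-derived calls; np.nan marks loci without coverage."""
    observed = ~np.isnan(cell_calls)
    if observed.sum() < min_calls:
        return None  # too few genotyped loci to deconvolve confidently
    agree = (donor_gt[:, observed] == cell_calls[observed]).sum(axis=1)
    return int(np.argmax(agree))

# Simulate a cell from donor 2 with amplicon dropout at 4 of 10 loci.
cell = donor_gt[2].astype(float)
cell[[1, 4, 6, 9]] = np.nan
print(assign_donor(cell, donor_gt))  # → 2
```

A production pipeline would use likelihood-based assignment that accounts for sequencing error and ambient reads, but the matching principle is the same.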
Session: Omics at Scale: Driving Data-Driven Drug Discovery
Session Chair: Ramy Elgendy, DVM, PhD
As omics technologies mature, the next frontier lies in their systematic, scalable deployment across the drug discovery and development pipeline. This session will focus on the platformization of omics—how organizations are building robust, high-throughput infrastructures to generate, manage, and apply large-scale multiomic data with precision and consistency. Talks will spotlight strategies to industrialize omics, including automation of profiling pipelines, scalable data architectures, cross-program standardization, and integration into decision-making frameworks across target discovery, biomarker development, and translational research. Emphasis will be placed on operational excellence, computational scalability, and real-world implementation of omics at scale to drive reproducible and actionable insights across diverse therapeutic areas.
Application of multi-omics technologies to the development of fibrosis-relevant screening assays for target identification
Open to view video.  |   Closed captions available
Abstract: Systemic Sclerosis (SSc) is a highly heterogeneous rare autoimmune fibrotic condition affecting the skin and internal organs. Current treatments are aimed at reducing inflammation and managing symptoms rather than addressing the underlying causes. Many of these therapies have limited effectiveness or are burdened with significant side effects. Therefore, there is a critical need to develop a comprehensive understanding of the cellular and molecular mechanisms underlying this disorder to identify novel therapeutic targets. Relation Therapeutics integrates single-cell multi-omics, machine learning, and functional assays to interrogate human disease biology at high resolution. Using a data-driven framework, we developed CRISPR/Cas multiparametric functional assays guided by patient genetics and single-cell transcriptomics to test knockouts of genes predicted by our machine learning platform. Within our large-scale observational study, DERMATOMICS, we recruited patients with SSc and healthy controls to generate single-cell RNA sequencing (scRNA-seq) data from skin biopsies, perform spatial transcriptomics, and collect serum and plasma for proteomics. By combining genome-wide association study (GWAS) variants with scRNA-seq, we identified disease effector fibroblast subtypes, including myofibroblasts, a key extracellular matrix (ECM)-producing population. Differential gene expression and receptor–ligand mapping highlighted altered cell–cell communication networks in disease and guided stimulus selection for functional assays. Isolated fibroblasts from SSc biopsies contracted collagen gels more rapidly than healthy controls, consistent with secretion of pro-fibrotic autocrine factors. Supernatant profiling further informed assay endpoint selection.
We performed scRNA-seq of early-passage primary fibroblasts and their hTERT-immortalised counterparts and used scVI to integrate the data with the patient tissue atlas, enabling a direct transcriptional comparison. We validated the integration by deriving transcriptional scores from the patient cells and mapping them onto the in vitro culture clusters. We confirmed transcriptional signatures for a number of patient-derived fibroblast subtypes but noted that several subtypes begin to lose their transcriptional identity early in culture. Donor-dependent variation in fibroblast composition enabled selection of cultures enriched for specific fibroblast subtypes for tailored assays. Pathway enrichment of disease effector genes identified multiple pro-fibrotic cytokines and growth factors that informed stimulus selection for ECM deposition assays. In summary, we combined genetics, transcriptomics, and proteomics to develop screening assays that better mimic the complex environment of fibroblasts in disease biology and inform the capture of multiple disease-relevant endpoints, enabling functional validation of genetically associated targets for SSc.
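Mapping patient-derived transcriptional scores onto in vitro clusters can be illustrated with a simple signature score (mean expression of signature genes minus a background mean, in the spirit of common scRNA-seq toolkits). This is a toy sketch on simulated data, not the scVI-based integration used in the study; all gene and cluster labels are invented.

```python
import numpy as np

rng = np.random.default_rng(5)
genes = [f"g{i}" for i in range(200)]
n_cells = 600

# Simulated log-normalized expression for in vitro fibroblasts; cluster 0 is
# engineered to over-express a hypothetical patient myofibroblast signature.
expr = rng.normal(size=(n_cells, len(genes)))
clusters = rng.integers(0, 3, size=n_cells)
signature = genes[:20]
sig_idx = np.array([genes.index(g) for g in signature])
expr[np.ix_(clusters == 0, sig_idx)] += 1.5

def signature_score(expr, sig_idx):
    """Per-cell mean signature-gene expression minus the background mean."""
    bg_idx = np.setdiff1d(np.arange(expr.shape[1]), sig_idx)
    return expr[:, sig_idx].mean(axis=1) - expr[:, bg_idx].mean(axis=1)

scores = signature_score(expr, sig_idx)
per_cluster = {c: scores[clusters == c].mean() for c in range(3)}
best = max(per_cluster, key=per_cluster.get)
print(f"cluster best matching the signature: {best}")
```

Comparing per-cluster score distributions, rather than just means, is what flags subtypes whose identity fades in culture.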
Discovery of an NRF2 Molecular Degrader Using Massively High-Throughput Transcriptomics
Open to view video.  |   Closed captions available
Abstract: Transcriptomics in drug discovery has the potential to identify safer and more selective molecules without the need for target-centric assays, which can be extremely challenging for notoriously undruggable targets. Here, we present an orally bioavailable NRF2 molecular glue degrader for the treatment of non-small cell lung cancer. Using GRETA, an extremely high-throughput RNA-sequencing methodology, we performed a large small-molecule screen as well as subsequent compound optimization driven directly by transcriptomics. Leveraging this technique, we identified chemotypes more selective for the NRF2 pathway and optimized for extremely potent compounds against it. Leveraging transcriptomics at this scale also allowed us to identify pharmacodynamic biomarkers that were utilized in studies to track compound efficacy in animals.
High throughput transcriptomics for drug screening: an unbiased multidimensional readout to assess efficacy, mechanism of action and safety right from the start
Open to view video.  |   Closed captions available
Abstract: Purpose: In drug discovery, there is a need to make better decisions early on regarding efficacy, safety, and mechanisms of action of compounds in order to reduce failure rates. The emergence of scalable omics technologies promises to rectify this by delivering unbiased, highly multidimensional data for better decision-making. Experimental Procedures: We have developed a proprietary 384-well transcriptomics protocol, ScreenSeq™, that is both economical and highly scalable, with more than 3 million transcriptomes profiled to date, while yielding high-quality data. Summary of data: We applied our ScreenSeq™ platform to characterize small molecule drugs across various stages of discovery—from primary screening to hit expansion and SAR analysis. In a case study involving a library of over 20,000 compounds, ScreenSeq™ enabled precise hit selection by identifying compounds that revert disease-associated transcriptomic signatures. This multidimensional readout not only supports efficacy assessment but also provides insights into mechanisms of action and safety profiles at early drug discovery stages. Conclusion: The new ScreenSeq™ technology allows sizeable drug screens using transcriptomics as the primary readout. The platform’s scalability and data richness make it a powerful tool for guiding compound prioritization and decision-making throughout the drug discovery pipeline. Next steps and future experiments: We continue to explore how we can use ScreenSeq™ during lead optimization, with the ultimate goal of providing a complete transcriptomics-enabled workflow from primary screen to development candidate nomination.
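Hit selection by signature reversal, as described for the 20,000-compound case study, amounts to scoring each compound's transcriptomic signature against a disease signature and ranking by anti-correlation. The sketch below is purely illustrative, with simulated signatures and invented compound names:

```python
import numpy as np

rng = np.random.default_rng(1)
n_genes = 500

# Hypothetical disease signature: per-gene log2 fold change (disease vs healthy).
disease_sig = rng.normal(size=n_genes)

# Hypothetical compound signatures: most are inert noise; one reverts disease.
compounds = {f"cmpd_{i}": rng.normal(scale=0.5, size=n_genes) for i in range(20)}
compounds["cmpd_hit"] = -0.8 * disease_sig + rng.normal(scale=0.3, size=n_genes)

def reversion_score(compound_sig, disease_sig):
    """Pearson correlation with the disease signature; strongly negative
    values indicate the compound pushes expression back toward healthy."""
    return float(np.corrcoef(compound_sig, disease_sig)[0, 1])

ranked = sorted(compounds, key=lambda c: reversion_score(compounds[c], disease_sig))
print(ranked[0])  # most strongly reverting compound
```

Real screens typically use rank-based connectivity scores and replicate-aware statistics, but the ranking logic is the same.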
Deep Plasma Proteomics of 50,000 Samples: A Scalable, Biologically Validated Workflow for Population Health
Open to view video.  |   Closed captions available
Abstract: Purpose: Large-scale plasma proteomics holds transformative potential for biomarker discovery and population health, yet technical barriers in throughput, reproducibility, and data handling have limited widespread adoption. We developed and validated Perchloric Acid with Neutralization (PCA-N) – a high-throughput, low-volume workflow – enabling robust plasma proteomic profiling at unprecedented scale. Applied to the Gates Foundation-funded Multiomics for Mothers and Infants (MOMI) Consortium, this workflow established the world's largest mass spectrometry (MS)-based plasma proteomics study. Description of Experimental Procedures: PCA-N introduces a critical neutralization step following perchloric acid precipitation, eliminating solid-phase extraction and enabling direct enzymatic digestion from just 5 µL plasma. Operating in 384-well format, the protocol supports preparation of >10,000 samples daily, with 1,700 quality control samples interspersed throughout 353 days of continuous acquisition. Summary of Data: PCA-N consistently quantified ~2,000 proteins per sample across 50,000 LC-MS runs, maintaining high reproducibility across cohorts, instruments, and nearly one year of continuous acquisition. Raw data processing was completed in under 72 hours, representing a landmark in large-scale DIA-MS analysis. Despite slightly elevated technical variation versus standard workflows, PCA-N preserved excellent biological resolution with strong long-term stability (r=0.89). Canonical pregnancy proteins (PAPPA, LNPEP, PAEP) showed conserved temporal trajectories across all cohorts, validating biological fidelity. Importantly, outcome-specific proteomic signatures predictive of preeclampsia, preterm birth, and fetal growth restriction emerged weeks to months before clinical onset, demonstrating translational potential. Conclusion Statement: PCA-N democratizes deep plasma proteomics by removing key barriers to scale.
Its simplicity, minimal sample requirements, and low cost make population-scale proteomics accessible globally, including resource-limited settings. The successful application to 50,000 samples establishes feasibility for MS-based proteomics in epidemiological studies and precision medicine initiatives. Next Steps and Future Experiments: We are implementing targeted validation of pregnancy biomarkers and translating findings into multiplexed lateral flow diagnostics for field deployment. Future work includes extending PCA-N to additional biobanks, optimizing for 1536-well formats, and integrating with genomics and metabolomics platforms. The protocol and computational pipeline are freely available to enable global adoption for biomarker discovery and population health applications.
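Long-term stability figures of the kind reported above (r = 0.89 over nearly a year) are typically derived from the interspersed QC injections. A minimal sketch of such QC monitoring on simulated data; the matrix sizes, drift model, and thresholds are hypothetical, not the MOMI pipeline:

```python
import numpy as np

rng = np.random.default_rng(2)
n_proteins, n_qc = 2000, 100

# Hypothetical QC matrix: log2 protein quantities across interspersed QC
# injections, with mild protein-specific instrument drift over run order.
base = rng.normal(loc=20, scale=3, size=n_proteins)
drift = np.linspace(0, 0.5, n_qc)
qc = (base[:, None]
      + drift[None, :] * rng.normal(scale=1.0, size=(n_proteins, 1))
      + rng.normal(scale=0.3, size=(n_proteins, n_qc)))

# Long-term stability: correlation of early vs late mean QC profiles.
early = qc[:, :10].mean(axis=1)
late = qc[:, -10:].mean(axis=1)
r = np.corrcoef(early, late)[0, 1]
print(f"early-vs-late QC correlation r = {r:.2f}")

# Per-protein coefficient of variation across all QC runs flags drifters.
cv = qc.std(axis=1) / qc.mean(axis=1)
print(f"median CV = {np.median(cv):.3f}")
```

Proteins with outlying CV or systematic drift would be candidates for batch correction or exclusion before biomarker modeling.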
Session: Emerging Omics Technologies
Session Chair: Jeffrey Moffitt, PhD
This session will focus on exciting new technologies that represent emerging advances in omics techniques. These advances include, for example, multi-modal single-cell measurements, with a focus on measurements that maintain the spatial organization of intact biological samples. The session will focus on new techniques, their validation, and their proof-of-principle application to a range of biological questions.
Imaging the microbe-host interface with genome-scale microscopy
Open to view video.  |   Closed captions available
Many population-level bacterial behaviors emerge from the heterogeneous, stochastic actions of individual cells. Moreover, many such behaviors are shaped by the complex, often spatially structured environments in which bacteria live. In parallel, the host response to pathogenic interactions is shaped by the massive diversity of different host cell types. Single-cell transcriptomic methods offer an exciting new window into such behaviors by providing genome-wide measures of the behaviors of individual cells. Here I will discuss our efforts to develop complementary techniques that place single-cell behaviors in space. Specifically, I will describe genome-scale microscopy methods that can characterize the single-cell transcriptional response of both bacteria and host in the context of complex microbe-host interactions that occur within the mammalian gut during health and disease. These methods allow us to directly chart how bacteria adapt to the complex micron-scale niches in the gut, and, in turn, how the host remodels both the cellular composition and spatial organization of the gut in response to pathogenic-like interactions between host and microbe. We anticipate the genome-scale microscopy methods we are developing in the context of microbe-host interactions in the gut may prove useful for characterizing a wide variety of questions in bacterial pathogenesis in many different tissues.
Sequencing-free whole-genome spatial transcriptomics at single-molecule resolution
Open to view video.  |   Closed captions available
Abstract: Recent breakthroughs in spatial transcriptomics technologies have enhanced our understanding of diverse cellular identities, compositions, interactions, spatial organizations, and functions. Yet existing spatial transcriptomics tools are still limited in either transcriptomic coverage or spatial resolution. Leading array-capture or array-tagging-based spatial transcriptomics techniques that rely on ex-situ sequencing offer whole-transcriptome coverage, in principle, but at the cost of lower spatial resolution compared to image-based techniques. In contrast, high-performance image-based spatial transcriptomics techniques, which rely on in situ hybridization or in situ sequencing, achieve single-molecule spatial resolution and retain sub-cellular morphologies, but are limited by probe libraries that target only a subset of the transcriptome, typically covering several hundred to a few thousand transcript species. Together, these limitations hinder unbiased, hypothesis-free transcriptomic analyses at high spatial resolution. Here we develop a new image-based spatial transcriptomics technology termed Reverse-padlock Amplicon Encoding Fluorescence In Situ Hybridization (RAEFISH) with whole-genome-level coverage while retaining single-molecule spatial resolution in intact tissues. We demonstrate the spatial profiling of transcripts from 23,000 human or 22,000 mouse genes, including nearly the entire protein-coding transcriptome and several thousand long-noncoding RNAs, in single cells and tissue sections. Our analyses reveal differential subcellular localizations of diverse transcripts, cell-type-specific and cell-type-invariant tissue-zonation-dependent transcriptomes, and gene expression programs underlying preferential cell-cell interactions. Finally, we further develop our technology for direct spatial readout of gRNAs in an image-based high-content CRISPR screen.
Overall, these developments provide the research community with a broadly applicable technology that enables high-coverage, high-resolution spatial profiling of both long and short, native and engineered RNA species in many biomedical contexts.
Performance of foundation cell segmentation models on live microscopy and spatial-omics data
Open to view video.  |   Closed captions available
Abstract: Introduction: Imaging cells both with live microscopy (e.g., cell painting, optical pooled screening) and in tissues (e.g., spatial proteomics and transcriptomics) has transformed biology and our understanding of cells. Yet accurate cell segmentation remains a major challenge for quantitative analysis, even with the rise of machine learning algorithms. Recently, several foundation-model segmentation algorithms have been developed but have not yet been systematically compared. Moreover, prior comparisons focus on mask-specific metrics and lack a biology-centric readout. Methods: To address these challenges, we systematically evaluated Mesmer, Cellpose v2, CellposeSAM, CellSAM, MicroSAM, and InstanSeg on public datasets and our own generated datasets across live microscopy (brightfield, quantitative phase imaging (QPI)) and spatial-omics (multiplexed proteomic fluorescence images). We evaluated models using mask-centric metrics (average precision, segmentation accuracy, F1 score), summary statistics (mask count/area), and biologically grounded metrics (cell type classification and proportions). In our own CODEX spatial proteomics datasets, we extracted marker features post-segmentation, performed unsupervised Leiden clustering, and conducted a single-round cell-type annotation validated on raw images. Results: Our results show that generalist segmentation performance is modality- and dataset-dependent, and mask-centric metrics alone can miss biologically meaningful differences. In brightfield images, CellposeSAM generally outperformed, with more correctly segmented cells and better mask-shape accuracy. In QPI, CellposeSAM produced more accurate shapes in sparse fields, whereas CellSAM performed better in noisy/dense images at higher computational cost compared to other algorithms. MicroSAM frequently produced irregular masks, requiring additional post-segmentation filtering.
In human intestine CODEX data, all models identified cell types with non-overlapping, specific markers (Paneth cells: αDefensin5) but struggled with areas of high cellular density and cell subtypes (e.g., epithelial subtypes and M2 vs. M1 macrophages). CellposeSAM showed the highest agreement with prior annotations by cell type proportions but was sensitive to signal blur; Mesmer produced larger masks capturing more membrane signal but also increased cell type mixing; Cellpose missed many smooth-muscle and stromal cells, reflecting sensitivity to size/shape variation; MicroSAM separated nuclei in densely packed regions well but produced artifact masks on cell-free tiles; CellSAM yielded cleaner, nucleus-containing masks and better handling of blur, but at the cost of significantly slower runtime; InstanSeg delivered the fastest inference but tended to oversegment cells. Overall, we developed biology-based readouts and comparisons of recent foundation segmentation models. Our findings indicate that while models have improved, inherent limitations still exist across segmentation modalities for live microscopy and spatial-omics data. Thus, to achieve high-quality data, model choice should be tailored to the modality, dataset, and targeted biological granularity. In the future, we will advance cell-type annotation across these algorithms, incorporate compensation algorithms and additional information, push forward subclustering pipelines, and provide other biologists with an end-to-end comparison pipeline.
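The mask-centric metrics used in comparisons like this (average precision, F1 at an IoU threshold) reduce to instance matching between predicted and ground-truth label images. A compact, illustrative implementation with greedy IoU matching on toy masks; this is a sketch of the general metric, not the authors' evaluation code:

```python
import numpy as np

def match_f1(gt, pred, iou_thresh=0.5):
    """Mask-centric F1: greedily match predicted to ground-truth instances
    (labeled images, 0 = background) by IoU at a fixed threshold."""
    gt_ids = [i for i in np.unique(gt) if i != 0]
    pred_ids = [i for i in np.unique(pred) if i != 0]
    matched_gt, tp = set(), 0
    for p in pred_ids:
        pm = pred == p
        best_iou, best_g = 0.0, None
        for g in gt_ids:
            if g in matched_gt:
                continue
            gm = gt == g
            iou = np.logical_and(pm, gm).sum() / np.logical_or(pm, gm).sum()
            if iou > best_iou:
                best_iou, best_g = iou, g
        if best_iou >= iou_thresh:
            tp += 1
            matched_gt.add(best_g)
    fp, fn = len(pred_ids) - tp, len(gt_ids) - tp
    return 2 * tp / (2 * tp + fp + fn) if (tp + fp + fn) else 1.0

# Toy label images: two ground-truth cells; the prediction finds one
# exactly but oversegments the other into two fragments.
gt = np.zeros((10, 10), int); gt[1:5, 1:5] = 1; gt[6:9, 6:9] = 2
pred = np.zeros((10, 10), int)
pred[1:5, 1:5] = 1; pred[6:9, 6:7] = 2; pred[6:9, 8:9] = 3
print(f"F1 = {match_f1(gt, pred):.2f}")
```

The biology-centric readouts in the abstract go one step further, comparing downstream cell-type calls and proportions rather than mask overlap alone.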
Extremely High-Throughput Quantitative Phase Imaging for Improved Image-Based Cell Phenotype Characterization
Open to view video.  |   Closed captions available
Abstract: Live-cell imaging provides biologists with the ability to observe dynamic cellular processes in real time, revealing features that are not accessible in static, fixed cells. However, whole-plate experiments still face a hard trade-off between speed, resolution, field of view, and throughput. Furthermore, conventional brightfield imaging modalities encounter challenges in accurately visualizing cell morphology and providing quantitative cell information without exogenous fluorescent labels. Quantitative phase imaging (QPI) addresses many of these concerns, enabling label-free, quantitative readouts such as cell dry mass, growth rate, and accurate cell morphology. Still, QPI has been difficult for many biologists to implement into their experimental workflows due to the complexity of many QPI systems and techniques. Moreover, the experimental throughput of conventional QPI is just as limited as conventional brightfield imaging. Multi-camera array microscopes (MCAMs) are becoming increasingly popular for high-throughput imaging, since they effectively break the space-time-resolution trade-off by using multiple microscopes to image dozens of FOVs at high resolution simultaneously. Here, we describe a multi-camera array microscope (MCAM) with 48 micro-cameras that enables rapid, 3D quantitative phase imaging. The system completes an entire 96-well plate QPI scan in less than 5 minutes, which supports accurate, frequent measurement of cell trajectories. Each micro-camera is equipped with a 0.3 numerical aperture finite-conjugate objective lens enabling 1.2 μm full-pitch resolution across 48 1.5 x 1.5 mm² FOVs simultaneously. The system can capture up to 0.62 gigapixels per stationary snapshot, with data transmission rates of up to 5 GB/s. A new, densely packed LED array (2.25 mm LED pitch) increases the configurability of custom illumination patterns that facilitate our QPI reconstructions.
The addition of a stage-top incubation system permits long-term imaging sessions while maintaining optimal cell culturing conditions. To address the massive datasets acquired over many days, our processing pipeline automatically selects the best-in-focus plane across each well's z-stack, which eliminates the need to save entire z-stacks for each well in a multi-well plate. We utilize through-focus intensity images with differential phase contrast (DPC)-inspired, parallelized illumination to reconstruct refractive index distributions across each well. Our results demonstrate rapid live-cell imaging of B16 melanoma cells co-cultured with T cells to observe T cell-tumor cell interactions and specific cell phenotypes in both cell populations. This work addresses the challenges of imaging entire well plates with enhanced, label-free contrast. This modified MCAM system offers biologists a new tool for enhanced high-throughput, high-resolution imaging and phenotyping of many experimental conditions simultaneously. Future work will further optimize the QPI reconstruction to enable more rapid imaging frequencies, which will improve the extracted cell dynamics and cell-to-cell interaction assessments. Additionally, integrating other cell phenotyping strategies (such as multi-channel fluorescence spatial-proteomic readouts) will further expand the cell phenotyping capabilities of the system.
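Best-in-focus plane selection across each well's z-stack can be sketched with a standard focus metric such as variance of the Laplacian. The pipeline's actual criterion is not specified in the abstract, so this is an assumed, illustrative approach on toy data:

```python
import numpy as np

def best_focus_plane(zstack):
    """Pick the sharpest plane in a z-stack by variance of a 4-neighbor
    Laplacian, so only one plane per well needs to be saved."""
    scores = []
    for plane in zstack:
        lap = (-4 * plane
               + np.roll(plane, 1, axis=0) + np.roll(plane, -1, axis=0)
               + np.roll(plane, 1, axis=1) + np.roll(plane, -1, axis=1))
        scores.append(lap.var())
    return int(np.argmax(scores)), scores

# Toy stack: plane 1 is "in focus" (sharp checkerboard); the others are
# nearly flat, mimicking defocused planes.
rng = np.random.default_rng(3)
sharp = (np.indices((32, 32)).sum(axis=0) % 2).astype(float)
flat = lambda: np.full((32, 32), 0.5) + rng.normal(scale=0.01, size=(32, 32))
stack = np.stack([flat(), sharp, flat()])
idx, _ = best_focus_plane(stack)
print(f"best-focus plane index: {idx}")
```

Running the metric per tile rather than per plane would additionally handle wells where the focal plane tilts across the field of view.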
Session: Functional Genomics in Drug Discovery
Session Chair: Kamran Honarnejad, PhD
This session will explore how genetic perturbation screening technologies, such as CRISPR, are being integrated into complex, disease-relevant models exposed to specific stimuli and microenvironments. Learn how these approaches use holistic high-dimensional readouts and spatial context to reveal genotype-phenotype relationships, decode genetic circuits and disease signatures, map regulatory landscapes linked to disease, and investigate drug-target interactions.
X-Atlas/Orion: Genome-wide Perturb-seq Datasets via a Scalable Fix-Cryopreserve Platform for Training Dose-Dependent Biological Foundation Models
Open to view video.  |   Closed captions available
Abstract: The rapid expansion of massively parallel sequencing technologies has enabled the development of foundation models to uncover novel biological findings. While these models have the potential to significantly accelerate scientific discovery by creating AI-driven virtual cell models, their progress has been greatly limited by the lack of large-scale, high-quality perturbation data, which remains constrained by scalability bottlenecks and assay variability. Here, we introduce “Fix-Cryopreserve-ScRNAseq” (FiCS) Perturb-seq, an industrialized platform for scalable Perturb-seq data generation. We demonstrate that FiCS Perturb-seq exhibits high sensitivity and low batch effects, effectively capturing perturbation-induced transcriptomic changes and recapitulating known biological pathways and protein complexes. In addition, we release X-Atlas: Orion edition (X-Atlas/Orion), the largest publicly available Perturb-seq atlas. This atlas, generated from two genome-wide FiCS Perturb-seq experiments targeting all human protein-coding genes, comprises eight million cells deeply sequenced to over 16,000 unique molecular identifiers (UMIs) per cell. Furthermore, we show that single guide RNA (sgRNA) abundance can serve as a proxy for gene knockdown (KD) efficacy. Leveraging the deep sequencing and substantial cell numbers per perturbation, we also show that stratification by sgRNA expression can reveal dose-dependent genetic effects. Taken together, we demonstrate that FiCS Perturb-seq is an efficient and scalable platform for high-throughput Perturb-seq screens. Through the release of X-Atlas/Orion, we highlight the potential of FiCS Perturb-seq to address current scalability and variability challenges in data generation, advance foundation model development that incorporates gene-dosage effects, and accelerate biological discoveries.
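The dose-stratification idea above (sgRNA UMI abundance as a proxy for knockdown dose) can be sketched as a simple binning analysis: sort cells by sgRNA counts, split into bins, and compare target-gene expression across bins. This is an illustrative reconstruction, not the X-Atlas/Orion pipeline, and all names are hypothetical:

```python
import numpy as np

def stratify_by_sgrna(sgrna_umis, target_expr, n_bins: int = 3):
    """Bin cells by sgRNA UMI abundance (low to high knockdown dose)
    and return the mean target-gene expression per bin.
    A monotone decrease across bins suggests a dose-dependent effect."""
    order = np.argsort(np.asarray(sgrna_umis))
    bins = np.array_split(order, n_bins)
    expr = np.asarray(target_expr, dtype=float)
    return [float(expr[idx].mean()) for idx in bins]
```

With deep sequencing (>16,000 UMIs/cell) and many cells per perturbation, such bins stay well-populated, which is what makes this stratification statistically feasible.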
Spatial transcriptomics for in situ CRISPR screening
Open to view video.  |   Closed captions available
Abstract: Pooled optical screens have enabled the study of cellular interactions, morphology, and dynamics at massive scale, but have not yet leveraged the power of highly plexed, single-cell-resolved transcriptomic readouts to inform molecular pathways. Here, we present Perturb-FISH, which bridges these approaches by combining imaging spatial transcriptomics with parallel optical detection of in situ amplified guide RNAs. We show that Perturb-FISH recovers intracellular effects that are consistent with Perturb-seq results in a screen of lipopolysaccharide response in cultured monocytes, and uncovers new intercellular and density-dependent regulation of the innate immune response. We further pair Perturb-FISH with a functional readout in a screen of autism spectrum disorder risk genes, showing common calcium-activity phenotypes in induced pluripotent stem cell-derived astrocytes and their associated genetic interactions and dysregulated molecular pathways. Finally, we show that Perturb-FISH can identify neighborhood-dependent perturbation effects in complex tissue, demonstrating immune-tumor interactions in a xenograft model engrafted with human PBMCs. Perturb-FISH is thus a generally applicable method for studying the genetic and molecular associations of spatial and functional biology at single-cell resolution.
Functional Genomic Screens to Identify Regulators of CENP-A Expression and Localization for Maintaining Chromosomal Stability
Open to view video.  |   Closed captions available
Abstract: Accurate chromosome segregation during cell division is essential for maintaining genome stability, and the centromere-specific histone H3 variant CENP-A plays a critical role in this process. Dysregulation of CENP-A expression or localization can lead to chromosomal instability, a hallmark of many cancers. Identifying factors that regulate CENP-A homeostasis is therefore crucial for understanding genome maintenance mechanisms and for developing therapeutic strategies targeting chromosomal instability. In collaboration with Dr. Munira Basrai’s laboratory at the National Cancer Institute (NCI), we conducted a genome-wide, image-based high-content siRNA screen to identify genes involved in the regulation of CENP-A. Using a stable cell line expressing YFP-tagged CENP-A, we screened an arrayed siRNA library targeting approximately 21,000 human genes. The assay proved highly robust and reproducible. From this screen, we identified several candidate genes whose knockdown led to aberrant overexpression and mislocalization of CENP-A, indicating a disruption in centromere-specific incorporation. While the siRNA screen identified multiple validated hits, it failed to recover some well-characterized regulators of CENP-A localization. This limitation may stem from incomplete gene silencing or off-target effects inherent to RNAi-based approaches, especially in aneuploid cell lines such as HeLa. To overcome these challenges and identify additional factors potentially missed in the siRNA screen, we initiated a genome-wide, arrayed CRISPR knockout screen using the same YFP-CENP-A reporter system. The CRISPR screen allows complete gene knockout and offers a complementary approach to RNAi, with the potential to reveal additional pathways and regulators—particularly those involved in chromatin organization and centromere identity—that are essential for proper CENP-A localization. 
By comparing results across both screening platforms, we aim to generate a more comprehensive map of CENP-A modulators, validate novel candidates, and deepen our understanding of the molecular networks that maintain centromere function. Together, these functional genomic strategies provide a powerful framework to dissect the regulation of CENP-A expression and localization. Insights from this work could inform new therapeutic approaches that exploit vulnerabilities in chromosomal stability pathways, particularly in cancers characterized by aberrant CENP-A regulation.
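The abstract does not specify the hit-calling statistics for these arrayed screens; a common choice is a robust (median/MAD) z-score on the per-well reporter readout, which resists the outliers that strong phenotypes create. The sketch below is generic, under that assumption, with hypothetical names:

```python
import numpy as np

def robust_z(values: np.ndarray) -> np.ndarray:
    """Median/MAD-based z-scores; 0.6745 rescales the MAD so that
    scores are comparable to standard z-scores for normal data."""
    med = np.median(values)
    mad = np.median(np.abs(values - med))
    return 0.6745 * (values - med) / mad

def call_hits(values: np.ndarray, threshold: float = 3.0) -> np.ndarray:
    """Indices of wells whose robust z-score magnitude exceeds the
    threshold, e.g. wells with aberrant YFP-CENP-A intensity."""
    return np.flatnonzero(np.abs(robust_z(values)) > threshold)
```

Comparing hit lists computed this way across the siRNA and CRISPR platforms would be one straightforward route to the cross-platform map the authors describe.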
Functional Genomics in the Era of Agentic AI
Open to view video.  |   Closed captions available
Functional genomics has revolutionized our ability to interrogate gene function at scale, yet the increasing complexity of multi-omic and phenotypic data demands new ways of reasoning across biological systems. This study explores how AI systems capable of autonomous reasoning and adaptive hypothesis generation can enhance biological discovery by dynamically integrating CRISPR-based perturbation data, organoid phenotypes, and disease-relevant molecular profiles. Our goal is to understand how genetic dependencies and cellular context converge to define therapeutic opportunities in cancer.
Session: Advances in Spatial Omics
Session Chair: John Hickey, PhD
This session will showcase cutting-edge advances in spatial-omics technologies and the transformative biological and therapeutic insights they are enabling. Talks will highlight how diverse spatial-omics platforms—including transcriptomics, proteomics, and metabolomics—are being integrated to create high-resolution maps of human tissues in health and disease. Emphasis will be placed on emerging strategies for combining these modalities with advanced computational methods and AI to decode meaningful tissue architecture, cellular interactions, and molecular dynamics.
Reconstructing the Architecture of Disease Microsystems with Spatial Multiomics and Rosenbridge
Open to view video.  |   Closed captions available
Spatial biology now allows scientists to resolve the cellular and molecular architecture of tissues with unprecedented precision, capturing the dynamics of communication, inflammation, and repair across entire organ systems. Yet, despite these technological advances, analytical tools remain fragmented, limiting integration across transcriptomic, proteomic, and spatial imaging modalities. Rosenbridge was developed to address this challenge as an intelligent orchestration platform that unifies spatial and single-cell multiomics within a single adaptive analytical environment. Built on the AstroSuite framework with modules for interaction mapping, multicellular communication modeling, spatial graph learning, and perturbation analysis, Rosenbridge integrates data ingestion, harmonization, annotation, and interpretation in a reproducible and conversational format. Its architecture centers on two coordinated analytical hubs. The Data Ingestion Hub converts raw multiomic inputs into interoperable single-cell representations, while the Annotation Hub enables both unsupervised and signature-based classification supported by large language model reasoning. This design transforms traditional pipelines into natural-language interactions, allowing researchers to guide analyses, query cell states, and interpret biological meaning through conversation. The first dataset demonstrated through Rosenbridge represents one of the largest spatially aligned single-cell profiles of graft-versus-host disease, encompassing more than one million cells across multiple tissues and disease stages. Within this dataset, Rosenbridge identified multicellular interaction modules that reveal structured immune and stromal ecosystems composed of T cells, fibroblasts, macrophages, and vascular elements. 
These spatial architectures delineate distinct inflammatory and reparative niches whose communication networks correlate with patient outcomes drawn from electronic health records, linking molecular states to clinical recovery, organ involvement, and survival. This demonstration illustrates the complete Rosenbridge pipeline from ingestion to annotation and interactive analysis, showing how conversational biostatistics can connect spatial tissue architecture to real-world outcomes. By merging scalable computation, multimodal reasoning, and human-guided interpretation, Rosenbridge redefines spatial biology as a living analytical process that learns from data and user input to accelerate translational discovery across immunology, oncology, and regenerative medicine.
Proteomics for Tox: quantification of 1000 proteins in a 20,000-sample screen captures compound cytotoxicity and immunotoxicity, and distinguishes on- from off-target bioactivity
Open to view video.  |   Closed captions available
Abstract: With increasing emphasis on New Approach Methodologies (NAMs) to develop in vitro models that predict drug toxicities, HTS programs are increasingly adopting high-content technologies that better inform the selection of drug candidates early in pipelines. Protein profiling is particularly well-suited to this, yielding ground-truth, biologically relevant information that simultaneously captures mechanistic information and identifies toxicity markers. Unfortunately, its application to HTS has been limited by the difficulty of scaling protein content and throughput. Here, we leverage Nomic's Omni 1000, capable of absolute quantification of 1000 proteins in high throughput, to analyze the secretome of 20,000 samples, and demonstrate that this approach is more sensitive and interpretable than traditional in vitro toxicity assessment tools. To achieve this, supernatants from hepatocytes, cardiomyocytes, and microglia treated with 510 bioactive compounds at three concentrations were analyzed. Across all samples, >700 proteins were affected by at least one compound, and 506 compounds regulated at least one protein. To establish the sensitivity of the Nomic platform for detecting signs of toxicity, protein points of departure (PODs) were calculated as the lowest dose at which significant changes in protein expression were detected. We also compared the sensitivity of our protein readout to traditional readouts compiled in the ToxCast database for 152 shared compounds. For 55 compounds, PODs were reported in ToxCast; for 50 of these, protein PODs were more sensitive than the POD among dozens of compiled experiments in ToxCast. Additionally, proteomics revealed bioactivity for 92/97 compounds with no effect in the ToxCast database; importantly, changes in protein levels reflected compound mechanism of action (MoA). 
We also compared protein PODs to previously reported imaging-based PODs: across 59 shared compounds, 58 were detected more sensitively in our assay than by imaging. Surprisingly, for some compounds, protein PODs were >1000x more sensitive than any other assay. This ability of proteomics to identify the pathways affected by bioactive compounds and to discriminate these effects from full-blown toxicity was prevalent in our screen. For example, 5% of compounds tested showed a toxic signature <2 µM, including classic inducers of cell death such as staurosporine. In contrast, PODs for GSK3-beta inhibitors in hepatocytes were <2 µM but characterized by on-target changes in protein levels in the Wnt pathway, which are not expected to lead to hepatotoxicity. Similar findings were seen for numerous compound classes, including statins, mTOR inhibitors, corticosteroids, and CDK inhibitors. Importantly, we detected signs of toxicity in compounds that failed clinical development due to liver toxicity that went undetected preclinically. We will highlight these and many other examples demonstrating that protein PODs are both more sensitive and more interpretable than traditional toxicity readouts or high-content imaging, and discuss the value of the Nomic platform for detecting early signs of toxicity in drug development pipelines.
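The POD definition used here (lowest dose at which a significant protein change is detected) can be illustrated with a toy fold-change threshold; the abstract does not describe the actual significance test, so this is only a schematic with hypothetical names:

```python
import numpy as np

def point_of_departure(doses, fold_changes, fc_cutoff: float = 1.5):
    """Lowest tested dose at which any protein's absolute fold change
    versus vehicle meets the cutoff; None if no dose qualifies.
    `fold_changes` is a (dose, protein) array of |fold change| values."""
    fc = np.asarray(fold_changes, dtype=float)
    active = np.any(fc >= fc_cutoff, axis=1)   # any protein perturbed?
    qualifying = np.asarray(doses, dtype=float)[active]
    return float(qualifying.min()) if qualifying.size else None
```

A real analysis would replace the fixed cutoff with a per-protein statistical test against vehicle replicates, but the min-over-qualifying-doses logic is the same.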
Inflammation-Associated Fibroblasts in the Human and Mouse Inflamed Gut
Open to view video.  |   Closed captions available
Fibroblasts have recently emerged as unexpected central players in pathological conditions such as inflammatory bowel diseases, a group of inflammatory disorders affecting the gastrointestinal tract. In these settings, fibroblasts undergo activation and, in addition to their canonical function of maintaining the extracellular matrix, actively coordinate immune responses, facilitate resolution of inflammation, and contribute to tissue repair. Nevertheless, numerous aspects of these activated states—often termed inflammation-associated fibroblasts (IAFs)—remain poorly understood, particularly with respect to their diversity, functional roles across stages of inflammation, cellular origins, activation signals, spatial organization, and the extent to which the nature of tissue injury modulates their states and functions. By leveraging MERFISH, a transformative microscopy technique for spatial transcriptomics, we systematically characterized IAFs across several murine models of intestinal inflammation, including dextran sodium sulfate-induced colitis, knockout mice, and C. rodentium infection. This analysis revealed multiple IAF populations conserved across models and defined by distinct gene expression signatures. These populations originate from three fibroblast families present in the healthy intestine and exhibit spatial enrichment within histopathological regions such as ulcers or thickened submucosa. Furthermore, analogous fibroblast states—together with previously unrecognized IAF subsets—were detected in biopsies and surgical resections from pediatric Crohn’s disease patients. Collectively, these findings suggest that fibroblasts are intrinsically programmed to activate specialized effector functions in response to diverse gut insults, spanning tissue layers and potentially contributing to the inflammatory response. 
Furthermore, our results indicate that the stimuli responsible for fibroblast activation may be conserved and operate independently of the nature of the inflammatory insult.
Screening Applications and Biomarker Diagnostics
Session: From Detection to Decision: Advancing Critical Patient Care Through Screening
Session Chair: Ricard Martin
Critical care environments demand rapid, accurate diagnostic decisions that can mean the difference between life and death. This session explores how innovative screening technologies are transforming emergency medicine and intensive care settings through breakthrough applications in point-of-care testing, rapid pathogen detection, and real-time biomarker monitoring. Expert speakers will showcase automated screening platforms that reduce diagnostic turnaround times from hours to minutes while maintaining clinical-grade accuracy. Case studies will demonstrate novel detection methods for sepsis markers, cardiac biomarkers, and multi-parameter screening panels. The session addresses regulatory considerations, validation strategies, and cost-effectiveness analysis for deploying advanced screening technologies in high-stakes clinical environments.
From Detection to Decision: Advancing Patient Care Through Screening and Diagnostics
Open to view video.  |   Closed captions available
Abstract: In critical care, where timely and accurate diagnostics directly influence survival, both point-of-care (POC) and laboratory-based technologies are advancing the way clinicians approach high-stakes decision-making. Conditions such as sepsis, acute cardiac events, and severe infections require diagnostic clarity within minutes or hours, not days. This session highlights how innovative screening platforms—ranging from bedside rapid tests to next-generation centralized laboratory systems—are transforming emergency medicine and intensive care practice through improvements in speed, precision, and actionable insight. Recent developments in POC diagnostics have reduced turnaround times dramatically, delivering results for sepsis biomarkers, cardiac markers, and infectious disease panels in minutes. These tools allow frontline clinicians to initiate early interventions, tailor therapies, and improve patient triage in dynamic environments such as emergency departments and intensive care units. Equally important are laboratory-based innovations, where high-throughput systems and multiplexed detection platforms offer unparalleled analytical depth. Centralized labs now provide multi-parameter panels capable of detecting subtle physiological changes, enabling comprehensive assessments that guide long-term care strategies. Case studies will illustrate practical applications across both domains. Examples include rapid sepsis marker detection at the bedside to support immediate intervention, cardiac biomarker monitoring through integrated lab-POC pathways, and infectious disease screening where POC testing ensures fast initial guidance while laboratory confirmation ensures accuracy and breadth. Together, these case studies underscore the complementary nature of POC and laboratory diagnostics: one offering speed and accessibility, the other providing depth and validation. 
Beyond technical capability, the session will address implementation considerations that determine clinical success. Topics include regulatory approval processes, validation strategies across diverse patient populations, and the integration of results into existing digital health infrastructures. Cost-effectiveness analyses will also be discussed, demonstrating how reduced turnaround times, improved diagnostic precision, and better test utilization can shorten hospital stays, optimize antimicrobial stewardship, and reduce overall healthcare expenditure. By showcasing the latest innovations across both POC and laboratory platforms, this session emphasizes a holistic approach to critical care diagnostics. Participants will gain insights into emerging technologies, practical integration strategies, and evolving regulatory and economic frameworks. Ultimately, these advances are bridging the gap between laboratory science and bedside decision-making, empowering clinicians to deliver earlier, more precise interventions for conditions ranging from cardiac emergencies to infectious diseases—improving survival where time and accuracy matter most.
Evaluation of the Analytical Performance of Blood-Based Biomarkers for Improved Screening of Patients with Mild Traumatic Brain Injury
Open to view video.  |   Closed captions available
Abstract: Traumatic brain injury (TBI) is one of the leading causes of disability and mortality worldwide. Diagnosing TBI in the emergency department can be complex, typically relying on neuroimaging, most often CT due to its high sensitivity and specificity in detecting acute intracranial injuries. Given the high frequency and significant costs associated with TBI, efficiently identifying patients who need CT imaging remains a major challenge in clinical practice. Glial fibrillary acidic protein (GFAP) and ubiquitin carboxyl-terminal hydrolase L1 (UCH-L1) have been shown to be reliable biomarkers for identifying patients with TBI; however, their widespread clinical implementation relies on insights from early adopters on the performance of the tests in clinical settings. In this study, we evaluated the analytical and clinical performance of GFAP and UCH-L1 blood-based tests in patients with suspected TBI admitted to the emergency department at a large academic hospital. Plasma and whole blood specimens were measured using Alinity i-STAT TBI cartridges, and results were compared against head CT results. The performance of the biomarkers was assessed for diagnostic accuracy (sensitivity, specificity, negative and positive predictive values) as well as analytical accuracy. Negative and positive agreement between the qualitative GFAP and UCH-L1 results obtained for different matrices was calculated using method comparison at FDA-approved cutoffs. Additionally, Passing-Bablok regression was used to assess the agreement between the quantitative results in plasma and whole blood. GFAP and UCH-L1 measured less than 24 hours post-injury demonstrated high sensitivity (97%) for detecting clinically significant acute traumatic intracranial lesions. GFAP alone had higher specificity than the combined GFAP and UCH-L1 test (48% vs. 30%). 
GFAP and UCH-L1 results measured in plasma and whole blood showed strong overall agreement (95%), with 100% negative agreement and 90% positive agreement. Quantitative method comparison revealed a strong correlation for the UCH-L1 results in both matrices and a small positive bias (~8%) for GFAP in whole blood compared to plasma, supporting the higher cutoff used for whole blood (65 vs. 30 pg/mL). Finally, with the incorporation of whole blood GFAP and UCH-L1 blood-based tests into clinical algorithms for evaluating suspected TBI, about 80% of patients who tested negative for both markers did not undergo CT imaging. Further studies will be performed to assess the impact of the blood-based tests on shortening the length of stay in the emergency department by more efficiently ruling out TBI in low-risk cases.
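The diagnostic accuracy measures reported above (sensitivity, specificity, and predictive values) all derive from a standard 2×2 table of test result versus CT outcome. A minimal reference implementation, purely illustrative and not the study's code:

```python
def diagnostic_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Standard 2x2 diagnostic accuracy summary.
    tp/fn: CT-positive patients with positive/negative biomarker result;
    fp/tn: CT-negative patients with positive/negative biomarker result."""
    return {
        "sensitivity": tp / (tp + fn),   # fraction of lesions detected
        "specificity": tn / (tn + fp),   # fraction of negatives ruled out
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }
```

Note that PPV and NPV, unlike sensitivity and specificity, depend on disease prevalence in the tested population, which is why rule-out performance is best judged by NPV in a realistic emergency department cohort.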
Enhanced Flow Cytometry for Detection of Low-Abundance Neurological Biomarkers
Open to view video.  |   Closed captions available
Abstract: Cytokines and neurological biomarkers hold significant promise for early disease detection and monitoring. However, their typically low abundance in complex biofluids presents a major analytical challenge. Standard flow cytometry approaches often lack the sensitivity required to reliably detect these rare biomarker populations. Our objective is to develop a high-sensitivity, flow cytometry-based assay capable of detecting and quantifying low-abundance cytokines and neurological biomarkers at the single-particle level. To achieve this, we implemented an optimized workflow that enhances signal detection while maintaining compatibility with conventional flow cytometry instrumentation. This approach enabled a 4× increase in signal-to-noise ratio compared to previous flow cytometry and bead-based methods, lowering the detection threshold for both cytokines and neurological analytes and expanding the measurable dynamic range. We demonstrate that the method can detect rare biomarker populations, providing robust and reproducible quantification even at very low concentrations. This work establishes a broadly applicable platform for high-sensitivity biomarker analysis. By enabling detection of cytokines and neurological biomarkers previously inaccessible to conventional cytometry, our approach has the potential to accelerate translational research, facilitate early diagnostic development, and advance precision medicine applications in neurology and immunology.
Session: Next-Generation Sequencing: Breakthroughs and Challenges in Screening Applications
Session Chair: Jeffrey Hung, PhD
Next-generation sequencing (NGS) has revolutionized genetic variation detection and biomarker analysis at unprecedented scale. This session explores cutting-edge NGS applications in screening programs, from population health initiatives to personalized medicine approaches, featuring breakthrough methodologies that expand clinical utility across diverse therapeutic areas. Presentations will cover liquid biopsy screening, comprehensive genetic carrier screening, and pathogen surveillance applications. Speakers will address critical challenges including data interpretation complexity, analytical pipeline standardization, and laboratory workflow integration. The session explores emerging techniques such as single-cell sequencing, long-read sequencing advantages, and multiplexed approaches that maximize information yield while minimizing sample requirements.
Technical and clinical bottlenecks and solutions for ultra-high-sensitivity NGS screening and Dx
Open to view video.  |   Closed captions available
Abstract: Purpose of the study: NGS offers revolutionary potential in clinical diagnosis, especially in areas like oncology for detecting minimal residual disease (MRD) or early cancer recurrence. However, its successful and widespread implementation faces several technical and clinical bottlenecks. These bottlenecks, possible solutions for addressing them, and related results will be presented. Description of experimental procedures and summary of data: Accuracy and reliability at low variant allele frequencies (VAFs): Detecting low-frequency mutations, crucial in MRD and early diagnosis, is prone to errors arising from library preparation, sequencing, and data analysis (e.g., PCR errors, sequencing artifacts). Error-corrected NGS (ecNGS) techniques, which use molecular identifiers/barcodes to tag individual DNA molecules and allow consensus sequencing, have been used to distinguish true mutations from errors. This can significantly increase accuracy, potentially reaching error rates below 1 ppm. Different leading technologies and their technical performance for ecNGS will be presented. Detection of complex variants: Short-read NGS is limited in detecting large-scale genomic changes like structural variants (SVs) and copy number variations (CNVs); incorporating long-read sequencing technologies complements short-read approaches. The leading long-read sequencing technologies and the corresponding data for clinical screening and diagnostic tests will be presented. Bioinformatics pipeline and data interpretation: Managing and interpreting massive and complex NGS multiomic datasets, particularly in the context of clinical decision-making, can be a major bottleneck. Leading bioinformatics pipeline algorithms will be presented. 
Clinical utility and actionable insights: While NGS detects variants, demonstrating their clinical significance and translating them into actionable treatment strategies can be challenging, especially in complex diseases or with variants of unknown significance (VUS). The leading platforms that adopt multiomic approaches, combining DNA sequencing with other data types (methylation, expression) to better understand variant pathogenicity, will be presented. Low sample availability and low-input DNA/RNA (e.g., from circulating cell-free DNA (cfDNA), fine-needle aspiration (FNA) biopsies, or archived formalin-fixed paraffin-embedded (FFPE) tissues) pose significant challenges. Limited starting material can lead to insufficient DNA or RNA for library preparation, a crucial step for NGS. Specialized library preparation kits and tagging techniques have been developed that achieve high-efficiency ligation and greatly improved NGS library quality, empowering meaningful clinical interrogation and data reporting on low-input DNA/RNA samples. Turnaround time: Long turnaround times for NGS results can delay treatment decisions and impact patient care, particularly in time-sensitive situations. Solutions include streamlining workflows, optimizing sample processing and analysis, incorporating rapid sequencing technologies, and leveraging automation and interpretation software to expedite the reporting process. Best practices in clinical NGS will be presented. Conclusion and next steps: By collaboratively addressing these technical and clinical bottlenecks, the field can further advance the integration of ultra-high-sensitivity NGS into clinical practice, ultimately improving patient care and furthering precision medicine.
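The UMI-based consensus idea behind ecNGS, tagging individual molecules so that per-molecule majority voting suppresses random errors introduced during PCR and sequencing, can be sketched as follows. This is a toy illustration; real pipelines also correct UMI errors, group by strand families, and weight by base quality:

```python
from collections import Counter, defaultdict

def umi_consensus(reads):
    """Collapse (umi, sequence) read pairs into one consensus sequence
    per UMI by per-position majority vote. Errors appearing in only a
    minority of a molecule's reads are voted out of the consensus."""
    groups = defaultdict(list)
    for umi, seq in reads:
        groups[umi].append(seq)
    consensus = {}
    for umi, seqs in groups.items():
        # zip(*seqs) yields one tuple of bases per position
        consensus[umi] = "".join(
            Counter(col).most_common(1)[0][0] for col in zip(*seqs)
        )
    return consensus
```

Because a true low-VAF mutation is present in every read of its tagged molecule while a sequencing error is not, this consensus step is what pushes effective error rates toward the sub-ppm range the abstract cites.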
Predicting Genetic Variant Pathogenicity Using Vector Embeddings
Open to view video.  |   Closed captions available
Abstract: Background: Interpreting the pathogenicity of genetic variants remains a major bottleneck in genomic medicine. Millions of variants of uncertain significance (VUS) impede clinical translation of genetic findings. Traditional computational approaches often depend on hand-engineered features and fail to capture the complexity of multidimensional genomic annotations. Methods: We developed VUS.Life, a semantic embedding framework that converts variant annotations into natural language descriptions and uses pre-trained language models to encode pathogenicity-relevant relationships in high-dimensional vector space. This representation-learning approach enables direct pathogenicity prediction without relying on handcrafted feature sets. We curated variants from three disease-associated genes: BRCA1 (n=3,311) and BRCA2 (n=4,074) from BRCA Exchange, and FBN1 (n=1,532) from ClinVar. Variant Effect Predictor (VEP) annotations were transformed into natural language, excluding any features tied to known pathogenicity to prevent data leakage. These descriptions were embedded using three models: MPNet (all-mpnet-base-v2), Google’s text-embedding-004, and MedEmbed-large-v0.1. Pathogenicity was predicted using a k-nearest neighbor (k-NN) classifier (up to 20 neighbors). Embedding spaces were visualized with PCA, t-SNE, and UMAP. Results: Across all genes and embedding models, k-NN classification achieved high accuracy. For BRCA1, overall accuracy ranged from 97.3% (Google) to 97.9% (MPNet), with benign/likely benign classification accuracy of 95.1–97.2% and pathogenic/likely pathogenic accuracy of 97.9–98.4%. For BRCA2, accuracy ranged from 97.9% (Google) to 99.1% (MedEmbed), with benign/likely benign accuracy of 96.6–99.2% and pathogenic/likely pathogenic accuracy of 98.6–99.4%. Validation with FBN1 confirmed generalizability, with accuracy above 96% across all embedding models. 
When applied to not-yet-reviewed BRCA1/BRCA2 variants, the framework placed unknown variants within established benign or pathogenic clusters, demonstrating scalability for real-world variant interpretation. Conclusions: VUS.Life captures pathogenicity-relevant features from complex variant annotations using semantic embeddings, achieving >96% accuracy across multiple genes and embedding models. This framework generalizes beyond well-curated genes, supports scalable and interpretable classification, and offers a promising strategy for reducing the VUS interpretation bottleneck in clinical genomics.
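The classification stage described above can be sketched as follows. This is a minimal illustration, not the authors' code: the random vectors stand in for the sentence embeddings, which in the actual pipeline come from encoding natural-language VEP annotations with MPNet, text-embedding-004, or MedEmbed.

```python
# Sketch of k-NN pathogenicity classification in an embedding space.
# Embeddings here are simulated; real ones come from a language model.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
dim = 768  # all-mpnet-base-v2 embedding dimension

# Simulate two separable clusters standing in for benign vs. pathogenic variants.
benign = rng.normal(loc=-1.0, scale=0.5, size=(200, dim))
pathogenic = rng.normal(loc=+1.0, scale=0.5, size=(200, dim))
X = np.vstack([benign, pathogenic])
y = np.array(["benign"] * 200 + ["pathogenic"] * 200)

# The abstract reports using up to 20 neighbors.
clf = KNeighborsClassifier(n_neighbors=20, metric="cosine").fit(X, y)

# A not-yet-reviewed variant is assigned the label of its embedding neighborhood.
vus_embedding = rng.normal(loc=+1.0, scale=0.5, size=(1, dim))
print(clf.predict(vus_embedding)[0])  # lands in the pathogenic cluster
```

Cosine distance is a common choice for sentence embeddings; the abstract does not specify the distance metric used.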
A Strategy for Data Collection to Solve the Protein Function Problem
Open to view video.  |   Closed captions available
Abstract: The success of AlphaFold2 and similar AI models in predicting protein structure from sequence has demonstrated the revolutionary impact of AI in biotechnology. A key ingredient in that success was the protein structure data in the PDB, which required over four decades and $10B to collect. To achieve similar breakthroughs in Bio-AI on a much shorter timeline, we must develop strategies to rapidly generate and curate the necessary high-quality datasets. NIST and The Align Foundation have collaborated with several other labs to develop a strategy to collect large datasets to tackle the long-standing Protein Function Problem: quantitatively predicting protein function from sequence. The resulting growth-based quantitative sequencing (GROQ-Seq) platform can be adapted to a variety of different protein function types and can produce quantitative functional data for hundreds of thousands of proteins per experiment at a cost of ~$0.05 per sequence. There are several key features of the data collection strategy that we have developed: 1. GROQ-Seq is a deep mutational scanning (DMS) or massively parallel reporter assay (MPRA), in which we use synthetic biology circuits to connect a variety of protein functions to cell growth. 2. GROQ-Seq takes advantage of the scale and reproducibility offered by modern laboratory automation, with distributed data collection across multiple biofoundries, ensuring reproducibility through shared automation protocols. 3. We use DNA barcoding and next-generation sequencing to implement GROQ-Seq in a massively pooled format, with hundreds of thousands of barcoded protein sequence variants in each assay. 4. With each GROQ-Seq measurement, we include a set of ‘protein function ladder’ standards, which provide a reproducible benchmark and enable quantitative calibration of the datasets to meaningful functional and/or biophysical values (rather than raw enrichment scores). 5.
To support the use of aggregated data for AI model training, with each GROQ-Seq dataset, we include a quantitative assessment and validation of the data quality. Here, we will review the ongoing development of GROQ-Seq for different types of proteins including transcription factors, proteases, tRNA synthetases, RNA polymerases, histidine kinases, protein-protein binders, and metabolic enzymes. In addition, we will present results from the first wave of GROQ-Seq datasets, for proteins including a viral protease and three different allosteric transcription factors. Each of those datasets includes calibrated function measurements for >100,000 protein coding variants, including all single amino acid substitutions, insertions, and deletions, plus AI-generated and multi-mutation variants. We will highlight the commonalities and differences across the set of sequence-function relationships for the different proteins and assess the capabilities for AI models to provide generalizable protein function prediction (i.e., train with data from multiple proteins to predict results for new proteins). These calibrated, reproducible, and scalable data represent a critical step toward solving the Protein Function Problem and enabling predictive protein design.
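The core quantification logic, barcode enrichment under growth selection calibrated against ladder standards, can be sketched as below. All counts and ladder activity values are made up for illustration; the real platform's normalization and calibration details are not given in the abstract.

```python
# Sketch of turning pooled barcode counts into calibrated function values:
# growth-coupled selection shifts barcode frequencies, and 'protein function
# ladder' standards with known activities anchor raw enrichment to a
# meaningful scale. All numbers below are illustrative, not real data.
import numpy as np

def log2_enrichment(pre, post):
    """Log2 change in barcode frequency across the growth selection."""
    pre_f = pre / pre.sum()
    post_f = post / post.sum()
    return np.log2(post_f / pre_f)

# Pre- and post-selection NGS counts for 6 barcoded variants;
# the first 3 are ladder standards with known relative activities.
pre = np.array([1000, 1000, 1000, 1000, 1000, 1000], dtype=float)
post = np.array([500, 2000, 8000, 1200, 3500, 700], dtype=float)
enr = log2_enrichment(pre, post)

# Fit a calibration line mapping enrichment to known ladder activities.
ladder_activity = np.array([0.1, 0.5, 1.0])
slope, intercept = np.polyfit(enr[:3], ladder_activity, deg=1)

# Calibrated activities for the remaining (unknown) variants.
calibrated = slope * enr[3:] + intercept
print(np.round(calibrated, 3))
```

A linear fit is the simplest calibration choice; the actual mapping from enrichment to functional or biophysical values may well be nonlinear.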
Integrating Neuron Villages and Multiomics to Uncover Sex Imbalance in Neuropsychiatric Disorder Risk
Open to view video.  |   Closed captions available
Abstract: Neuropsychiatric disorders are common, debilitating, and often present with complex etiologies. Many of these conditions, such as autism spectrum disorder, post-traumatic stress disorder, eating disorders, and major depressive disorder, exhibit sex-specific differences in prevalence, age of onset, symptoms, and treatment response. A critical unmet need lies in elucidating how dynamic, sex-specific biological environments influence disorder risk, especially how fluctuating levels of sex steroid hormones interact with genetic regulation. To address this, we conducted multiomic analyses and the first cross-disorder massively parallel reporter assay (MPRA) in a "village-in-a-dish" model of excitatory neurons. This allowed us to identify sex-specific regulatory responses to the major sex hormones estradiol (E2), testosterone (T), and progesterone (P) at scale. Across nine Psychiatric Genomics Consortium (PGC) genome-wide association studies (GWAS), risk-associated lead single nucleotide polymorphisms (SNPs) were fine-mapped using colocalization and transcriptomic imputation. The resulting biallelic library of 8,860 candidate regulatory sequences (CRSs) was used to transduce an excitatory neuron village derived from 9 female and 9 male donors, followed by hormone treatments. Of all the CRSs tested, none had significant differential regulatory activity between female and male neurons with vehicle treatment, while 15, 15, and 12 CRSs had sex-dependent activity with E2, T, and P treatments, respectively. Among these hormone-responsive variants, only rs59473574—an Alzheimer’s disease (AD)-associated variant predicted to regulate the PVR gene—was identified in all three treatment conditions. Other treatment-specific variants were associated with Bipolar Disorder (NDUFAF7 gene) with E2, Schizophrenia (PABPC1L gene) with T, and Anorexia Nervosa (WDR6 gene) with P.
These results highlight the utility of our MPRA village model for elucidating the context-specific functions of disorder-associated variants in male and female donors. This first multi-disorder, multi-donor screening of neuropsychiatric variants in iPSC-derived neuron villages allows for high-confidence validation of regulatory activity within crucial biological contexts. Future work will integrate these findings with single-cell RNA and ATAC sequencing data to construct an activity-by-contact model that better identifies target genes. Ultimately, this will help disentangle the roles of sex-related biological environments in the genetic risk for multiple neuropsychiatric disorders.
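The per-CRS sex comparison described above can be sketched as a simple differential-activity test. This is a hypothetical analysis: the abstract does not specify the statistical model, so Welch's t-test with Benjamini-Hochberg correction is used here as one common choice, on simulated activity values standing in for normalized MPRA readouts.

```python
# Hypothetical sketch: per candidate regulatory sequence (CRS), compare
# activity (e.g., barcode RNA/DNA ratios) between female- and male-donor
# neurons, then correct for testing many CRSs (Benjamini-Hochberg FDR).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_crs, n_f, n_m = 100, 9, 9  # 9 female and 9 male donors, as in the study

female = rng.normal(1.0, 0.1, size=(n_crs, n_f))
male = rng.normal(1.0, 0.1, size=(n_crs, n_m))
male[0] += 0.5  # one simulated sex-dependent CRS

# Welch's t-test per CRS across donors.
pvals = stats.ttest_ind(female, male, axis=1, equal_var=False).pvalue

# Benjamini-Hochberg step-up FDR correction.
order = np.argsort(pvals)
ranked = pvals[order] * n_crs / (np.arange(n_crs) + 1)
qvals = np.empty(n_crs)
qvals[order] = np.minimum.accumulate(ranked[::-1])[::-1]

hits = np.flatnonzero(qvals < 0.05)
print(hits)  # the simulated sex-dependent CRS (index 0) is recovered
```

With only 9 donors per group, a real analysis would likely model donor effects explicitly rather than treat donors as simple replicates; this sketch shows only the shape of the comparison.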
Session: AI-Powered Screening: Harnessing Machine Learning and Big Data for Diagnostic Innovation
Session Chair: Amir Trabelsi
Artificial intelligence and machine learning are transforming diagnostic screening by enabling pattern recognition capabilities that surpass traditional analytical methods. This session showcases how AI algorithms enhance screening accuracy, accelerate result interpretation, and uncover novel biomarker signatures from complex multi-dimensional datasets. Expert presentations will demonstrate deep learning applications in medical imaging analysis, predictive modeling for disease risk stratification, and automated quality control systems. Speakers will address integration of diverse data types including genomics, proteomics, and clinical metadata to create comprehensive screening algorithms. Discussion includes strategies for training robust models, validating AI performance across populations, and ensuring algorithmic transparency in clinical decision-making.
Integrated Machine Learning Based Pipetting Workflow for High Throughput Blood Screening
Open to view video.  |   Closed captions available
Blood viscosity is an important biomarker in diseases including leukemia, sickle cell disease, and systemic conditions such as diabetes. Conventional tools for quantifying viscosity, such as rheometers and viscometers, provide accurate measurements but are slow, labor-intensive, and not well suited for integration into high-throughput or automated pipelines. This is a key limitation, as diagnostics and clinical research increasingly rely on fast, efficient, high-throughput automation systems for standard laboratory practices. The relationship between fluid flow and viscosity presents an interesting opportunity to address this problem; however, the non-Newtonian nature of blood prevents the use of simple fluid flow algorithms for classification. This project therefore addresses the need for scalable viscosity testing by exploring the relationship between aspiration pressure and blood viscosity in a standard pipette tip. To fully understand and accurately leverage this relationship, a machine learning algorithm is applied to allow fast, automation-friendly viscosity determination. The experimental procedure uses a pipetting system with a pressure sensor to measure pressure profiles throughout aspiration across fluids of varying viscosities. A Random Forest model was first trained on Newtonian standards created with glycerol, allowing the system’s accuracy to be benchmarked against known values. The model is adapted to non-Newtonian fluids by training with aspiration pressure profiles from human whole blood of varying viscosities and integrating traditional non-Newtonian flow models such as the power law. This allows direct prediction of blood viscosity from a single 15 µL aspiration of blood within a standard pipette tip.
Initial results show high accuracy of the Random Forest model, along with distinct and distinguishable pressure profile readings for non-Newtonian fluids compared to the Newtonian standards. These findings support the potential of simple pipette-aspiration fluid flow measurements for viscosity determination as a fast, sensitive, and automatable alternative to traditional methods such as rheometry. This method accounts for the key complexities of biological sample fluid dynamics, providing a solution for a wider range of biological samples. In conclusion, this approach offers a path toward integrating viscosity testing into diagnostic assays and drug development workflows. Future work will extend testing to clinical blood samples to assess performance under physiologically relevant conditions and investigate the ability of the system to detect microstructural changes such as sickled red blood cells or viscosity increases due to hyperglycemia.
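The regression step described above can be sketched as follows. The pressure profiles here are synthetic (an exponential approach to a viscosity-dependent steady-state pressure plus noise), standing in for the pipette-tip pressure sensor traces; the feature representation and model settings are assumptions for illustration.

```python
# Illustrative sketch: treat each aspiration pressure-time profile as a
# feature vector and train a Random Forest to regress viscosity.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(7)
t = np.linspace(0, 1, 50)  # normalized aspiration time

def profile(viscosity):
    # Higher viscosity -> larger pressure drop and slower equilibration.
    return -viscosity * (1 - np.exp(-t / (0.1 * viscosity))) + rng.normal(0, 0.02, t.size)

# Training set: profiles across a range of (arbitrary, relative) viscosities.
viscosities = rng.uniform(1.0, 10.0, size=300)
X = np.array([profile(v) for v in viscosities])

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, viscosities)

# Predict viscosity for unseen low- and high-viscosity samples.
test_v = np.array([2.0, 8.0])
pred = model.predict(np.array([profile(v) for v in test_v]))
print(np.round(pred, 2))
```

Training on Newtonian standards first (as the abstract describes for glycerol) amounts to fitting this model on profiles with known viscosities before adapting it to whole-blood traces.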
A High-Throughput 3D Model to Study Bone Metastasis of Breast Cancer
Open to view video.  |   Closed captions available
Abstract: Breast cancer is a leading cause of cancer-related deaths among women in the United States, with nearly 70% of cases being estrogen receptor-positive (ER+). While endocrine therapy remains the cornerstone treatment for ER+ breast cancer patients, the prognosis worsens significantly once the cancer metastasizes to distant organs such as bone. The bone marrow microenvironment, particularly its mesenchymal stromal cells (MSCs), plays a key role in cancer cell dormancy, therapeutic resistance, and eventual incurable relapse many years after the initial diagnosis. To address the urgent need for better tools for understanding cancer-stromal cell interactions, we developed a fully automated, high-throughput, scalable, and versatile 3D tissue-engineered model using a robotic liquid handler. The model incorporates ER+ breast cancer cell lines (T47D, MCF7, HCC1428) and MSCs (HS5) embedded and dispersed in a human type I collagen matrix in a 384-well format. This format allows real-time biological readouts such as bioluminescence and is compatible with downstream molecular assays. We found that MSCs significantly increase the resistance of ER+ breast cancer cells to standard therapies with fulvestrant and tamoxifen. Protein and gene expression analyses showed decreased estrogen receptor levels and enrichment of cancer stem cells (high CD44 and ALDH1 levels) in cancer cells co-cultured with MSCs. Bulk RNA sequencing further identified upregulation of the PI3K-Akt pathway in all breast cancer cell lines due to interactions with MSCs. Leveraging this information, we performed high-throughput drug screening and found that a dual inhibition strategy combining hormonal therapy (fulvestrant) with a PI3K inhibitor (alpelisib) significantly improved the therapeutic response of ER+ breast cancer cells relative to either monotherapy.
These findings establish the utility of our high throughput 3D model as a platform for investigating interactions between ER+ breast cancer cells and the bone marrow. This model offers a valuable preclinical tool for identifying and screening novel therapeutic targets to improve outcomes of ER+ breast cancer patients. Next steps include expanding the platform and automating it for combinatorial drug screening targeting multiple pathways simultaneously, based on the bulk RNA sequencing result.
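One common way to score a combination such as fulvestrant plus alpelisib against the monotherapies is Bliss independence. The abstract does not state which synergy metric was used, and the fractional inhibition values below are made up for illustration only.

```python
# Hypothetical Bliss independence scoring of a two-drug combination.
# All fractional inhibition values are illustrative, not data from the study.
f_a = 0.40   # fractional inhibition, drug A alone (e.g., fulvestrant)
f_b = 0.35   # fractional inhibition, drug B alone (e.g., alpelisib)
f_ab = 0.75  # observed fractional inhibition of the combination

# Bliss expectation: the two effects combine as independent probabilities.
bliss_expected = f_a + f_b - f_a * f_b

# Excess over expectation > 0 suggests synergy beyond independence.
excess = f_ab - bliss_expected
print(round(bliss_expected, 3), round(excess, 3))  # 0.61 0.14
```

In a 384-well combinatorial screen like the one proposed as a next step, this score would be computed per well across a dose matrix rather than at a single dose pair.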