SLAS2019 International Conference & Exhibition

The SLAS2019 course package contains 81 presentations including:

76 presentations from ten scientific tracks 
2 keynote presentations
1 Ignite Panel Discussion 
2 presentations from our Career Connections series 

Based on presenter permission, 81 of the 144 total SLAS2019 presentations are available on-demand.

The SLAS Scientific Program Committee selects conference speakers based on the innovation, relevance and applicability of their research, and on how well it addresses the interests and priorities of today’s life sciences discovery and technology community. All presentations are published with the permission of the presenters.

Career Connections
Advances in Bioanalytics and Biomarkers
SLAS2019 Innovation Award Finalist: Digging into molecular MOAs with high-content imaging and deep learning
Open to view video.
Machine and deep learning models demonstrate incredible performance when it comes to extrapolating what we know already, in what are collectively called supervised approaches. For example, we’re now able to reduce raw imaging data from large high-content screens, where positive and negative control data exist, into accurate readouts of activity in mere minutes, as well as accurately predict a compound’s MOA if it has already been seen. However, when we’re presented with an unknown MOA, machine learning approaches will typically scan right over it, either missing it entirely or incorrectly assigning it to an existing MOA. By developing novel ‘unsupervised’ deep-learning models alongside high-content assays tailored for computational analysis, we’re able to group compounds by the similarity of their molecular MOA, with no prior knowledge, against a disease phenotype, thus improving our ability to select the most exciting and novel hits for follow-on development.
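The unsupervised grouping step described above can be illustrated with a minimal sketch. Assuming per-compound feature vectors have already been extracted from high-content images (the array shapes and cluster count below are illustrative assumptions, not details from the talk), standard scikit-learn components can group compounds by phenotypic similarity:

```python
# Minimal sketch: group compounds by phenotypic similarity without labels.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(0)
features = rng.normal(size=(96, 512))  # 96 compounds x 512 image-derived features (synthetic)

scaled = StandardScaler().fit_transform(features)     # normalise each feature
reduced = PCA(n_components=20).fit_transform(scaled)  # compress / denoise

# Cluster membership acts as a label-free proxy for shared molecular MOA.
labels = AgglomerativeClustering(n_clusters=8).fit_predict(reduced)
for cluster in range(8):
    print(f"cluster {cluster}: {(labels == cluster).sum()} compounds")
```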
Ultrahigh-Throughput Screening of Chemical Reactions Using MALDI-TOF MS and Nanomole Synthesis
Open to view video.
There is an extremely large published body of work in synthetic organic chemistry describing reactions with high yield. However, negative results, when chemical reactions do not generate the desired product or the product has low yield, are rarely published or presented. Knowledge of the types of starting materials and conditions that do not work for a selected reaction type is very important, and it can be generated quickly by ultrahigh-throughput screening (uHTS) of many starting materials in multiple reaction conditions using MALDI-TOF mass spectrometry. In this work we focused on the Buchwald-Hartwig reaction, a C-N coupling between cyclic secondary amines and N-heterocycle-containing aryl bromides, using four different catalysts. Nanomole-scale reactions were run in 1536-well glass plates using a Cu catalyst, a Pd catalyst, an Ir/Ni photoredox catalyst and a Ru/Ni photoredox catalyst as four different conditions for each of the reactions. The reaction mixtures were spotted on HTS MALDI targets in 1536 format using a 16-channel positive displacement liquid handling robot. These targets were analyzed on a new-generation MALDI-TOF instrument equipped with a 10 kHz scanning beam laser, a significantly faster X,Y stage and a faster target loading/unloading cycle. The readout time for a MALDI target in 1536 format was in the 8-11 min range, depending on the number of laser shots. In the first screening approach we selected a reaction between the simplest cyclic secondary amine and the simplest N-heterocycle-containing aryl bromide and added 383 simple and complex fragment molecules to evaluate catalyst poisoning across the four catalytic methods (1536 experiments). A deuterated form of the product was added for ratiometric quantitation of the MALDI product response. The fragment molecules were classified as catalyst poisons (>50% signal knockdown) or non-poisons (<50% signal knockdown), and the correlation of the MALDI results with UPLC-MS data (R2) was 0.85. In the second screening approach the simplest cyclic secondary amine was reacted with 192 aryl bromides of increasing complexity, and the simplest N-heterocycle-containing aryl bromide was reacted with 192 cyclic secondary amines of increasing complexity, using the same four catalytic methods (1536 experiments). Direct correlation with UPLC-MS data was lower, since the MALDI signals of structurally diverse products were normalized against a single internal standard. Nonetheless, the normalized MALDI signal was successfully used to create a binary reaction success/failure threshold of 20%, and the detected trends were essentially identical to those from the UPLC-MS data. This novel uHTS workflow for synthetic reactions based on MALDI-TOF MS is a first step on the road to predicting chemical reactivity and reaction success, which has the potential to decrease the number of unsuccessful experiments for organic chemists.
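A minimal sketch of the ratiometric readout and the two thresholds quoted above (>50% signal knockdown for catalyst poisons; 20% for the binary success/failure call). The signal counts are invented for illustration, not data from the screen:

```python
# Ratiometric MALDI readout: product signal normalised to a deuterated
# internal standard, classified against the screen's two thresholds.

def normalized_signal(product_counts: float, d_standard_counts: float) -> float:
    """Ratiometric MALDI response: product signal / deuterated standard."""
    return product_counts / d_standard_counts

def is_catalyst_poison(signal_with_fragment: float, control_signal: float) -> bool:
    """A fragment is a poison if it knocks the product signal down >50%."""
    knockdown = 1.0 - signal_with_fragment / control_signal
    return knockdown > 0.5

def reaction_succeeded(signal: float, control_signal: float) -> bool:
    """Binary success/failure call at the 20% threshold used in the screen."""
    return signal / control_signal >= 0.2

control = normalized_signal(1.0e5, 2.0e5)          # uninhibited reaction -> 0.5
with_fragment = normalized_signal(3.0e4, 2.0e5)    # reaction + test fragment -> 0.15
print(is_catalyst_poison(with_fragment, control))  # True (70% knockdown)
print(reaction_succeeded(with_fragment, control))  # True (30% of control)
```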
Screening a secretome library to discover novel biology and targets relevant to drug discovery
Open to view video.
Secreted proteins regulate human physiology by transducing signals from the extracellular environment into cells and regulating different cellular phenotypes. The human secretome represents a small (~2200 proteins) and biologically relevant screening library that can be used in phenotypic assays. Here, we have used a high-throughput mammalian cell factory approach to generate separately purified and quality-assured human secreted proteins. A sample storage and handling process has been established to enable screening of the proteins, at known concentrations, in different cell-based assays. Screening 1000 proteins from the human secretome, we show that the FGF9 subfamily members FGF9 and FGF16 are strong proliferators of cardiac progenitor cells. Using the library, we demonstrate that the effect of FGF16 is specific to the cardiac progenitor cells, with no observed effect on cardiac fibroblast proliferation. Additional biophysical binding experiments, using cardiac fibroblasts and cardiac progenitor cells immobilized on a biosensor surface, showed that the interaction of FGF16 and FGF9 with cells on the surface was additive. This suggests that the proteins signal through different receptors. Altogether, the data demonstrate how a secretome library can be used across a panel of assays to uncover novel functional information and to aid the discovery of novel signaling pathways and targets relevant to drug discovery.
Application of acoustic mist ionisation mass spectrometry for metabolic profiling – case study in hepatic toxicology
Open to view video.
Acoustic Mist Ionization Mass Spectrometry (AMI-MS) is the hyphenation of a Waters Xevo G2XS time-of-flight (ToF) mass detector with an acoustic sampling interface. This new technology enables direct injection of samples from a standard 384-well plate into the mass spectrometer at very high throughput. There are several advantages of using acoustics to load samples into the mass spectrometer: first, it is non-contact, so there is no carry-over between sampling events; second, the sample volume is very small, typically 15 nanolitres per second; and third, the acoustics can fire very quickly (1400 Hz). This technology has been broadly applied within AstraZeneca to support biochemical high-throughput screening (HTS), where we routinely process in excess of 100,000 samples per day. Having established this technology within HTS, we have recently looked to expand its application into cell screening. Cellular applications offer significant challenges to AMI-MS: since this is a direct-infusion MS system, with no chromatography or separation technology between the acoustic sampling and the mass detector, suppression can be a significant issue. Preliminary experiments were carried out with adherent cells grown in standard 384-well plates. The culture medium is removed and the cells are washed with ammonium formate before being lysed in 50 microlitres of water. Using the time-of-flight MS scanning across a ~2000 Da range, it is possible to generate a “fingerprint” spectrum from the cells. This “fingerprint” contains the most abundant metabolites present in the lysate, including lipids, amino acids, sugars and nucleotides. Typically, fewer than 10,000 cells per well are required to generate a “fingerprint”, and sampling times are typically in the range of 6-10 seconds per well (90-150 nL). Since only very small sample volumes are taken from each well, it is possible to generate significant numbers of technical replicates from each well. In addition, since we are working with adherent cell lines, it is possible to have multiple biological replicates on the same plate. While it was interesting to demonstrate the ability to generate reproducible “fingerprints” from cell lysates, our ultimate goal was to demonstrate that compound treatment could perturb the “fingerprint” in a biologically relevant manner. We will share “fingerprint” data from our early work using hepatocytes treated with known DILI compounds. There are multiple examples where metabolic “fingerprints” change on treatment, and these changes are consistent with the known mechanism of action of the DILI compounds.
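The sampling figures quoted above are internally consistent, as a quick back-of-envelope check shows (the ejection rate, sampling times and lysate volume are taken from the abstract; the replicate count is a derived illustration):

```python
# Back-of-envelope check of the AMI-MS sampling figures: ~15 nL ejected per
# second means a 6-10 s acquisition consumes 90-150 nL, so a 50 uL lysate
# supports hundreds of technical replicate reads per well.
EJECTION_RATE_NL_PER_S = 15
LYSATE_VOLUME_NL = 50_000  # 50 microlitres

for seconds in (6, 10):
    consumed = EJECTION_RATE_NL_PER_S * seconds
    replicates = LYSATE_VOLUME_NL // consumed
    print(f"{seconds} s sampling -> {consumed} nL/read, "
          f"~{replicates} reads from one well")
# 6 s sampling -> 90 nL/read, ~555 reads from one well
# 10 s sampling -> 150 nL/read, ~333 reads from one well
```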
Exploiting the Potential of Ultra High Throughput Mass Spectrometry Approaches to Drug Discovery
Open to view video.
As a direct-analysis, label-free technology, mass spectrometry enables assays that monitor native analytes without the requirement for substrate/product modifications or indirect detection methods. It can have a dramatic impact on the hit-to-lead and lead optimization stages of drug discovery by eliminating the false positive/negative results typically associated with fluorescent screening technologies. Traditionally, however, MS-based techniques have been relatively slow and thus not suited to high-throughput applications. Recent advances in mass spectrometry instrumentation, automation, software and low-volume dispensing have enhanced its potential to be adapted to higher-throughput approaches, under physiologically relevant conditions, and at sample volumes compatible with hit identification/lead generation screening. Here we describe the application of a variety of high-throughput mass spectrometry approaches to lead discovery and our strategy for deploying them in a complementary way to create a suite of label-free assay formats to address questions in discovery. This includes the application of Affinity Selection Mass Spectrometry to prioritise small molecule drug targets entering the discovery pipeline, the development of an automated Matrix-Assisted Laser Desorption/Ionization Time-Of-Flight (MALDI-TOF) MS platform for screening and compound profiling, and the evaluation of Acoustic Mist MS to study kinetics.
High-Throughput ESI-MS Enabled by the Acoustic Droplet Ejection to the Open-Port Probe Sampling Interface
Open to view video.
Label-free Liquid Chromatography/Mass Spectrometry (LC/MS)-based screening technology is routinely used in early drug discovery, especially for high-throughput ADME screening. Although the current analysis speed of <30 seconds per sample is quite promising, it still cannot match the throughput provided by plate-reader-based High Throughput Screening (HTS) platforms. Acoustic droplet ejection (ADE) is a droplet transfer technology capable of high speed, reproducibility, and absolute accuracy. In this work, we couple ADE to the standard Electrospray Ionization (ESI) ion source of a mass spectrometer with the open-port probe (OPP) sampling interface. Screening speeds as fast as 0.4 seconds per sample are demonstrated with high sensitivity, high reproducibility, wide linear dynamic range, good quantitation capability, no ion suppression from various biological/reaction matrices, and broad compound coverage. The continuous flow of carrier solvent for the OPP maintained ionization stability and actively cleaned the entire flow system, resulting in no observed carry-over. The advantages of this integrated system have been demonstrated with various drug discovery workflows.
Hit Dissection and Target Identification from a Cell Viability Chemogenomic Screen
Open to view video.
SLAS2019 Innovation Award Finalist: Live-cell Gene Imaging Nanotechnology for Cells, Tissue and Pre-clinical Abnormal Scarring Challenges
Open to view video.
Live-cell imaging is critical to advancing biomedicine. For example, reporter constructs rely on (viral) integration to enable real-time gene monitoring. However, these result in viral-induced mutations, laborious clonal selection processes and gene reporter re-design. In addition, fluorescent proteins have emission wavelengths similar to tissue autofluorescence, and hence suffer from a poor signal-to-background ratio. On the other hand, contrast agent cell-labelling often lacks molecular specificity, resulting in highly misleading false positive signals. Our experience suggests that nanotechnology tools readily enable gene expression imaging and increase biomarker detection specificity. We have shown they are easy to use, have great selectivity and are highly versatile (simple biomarker re-configuration and near-infrared imaging). Abnormal scars are characterized by excessive fibrosis due to dysfunctional wound healing. Despite occurring in 1 in 12 of the developed world’s population, no satisfactory therapy exists. Furthermore, no reliable method prognosticates their emergence during early wound recovery. In response, we developed nanotechnology biosensors (nanosensors) to facilitate the following: 1) efficient drug screening; and 2) non-invasive, early scar detection and monitoring. 1) To date, no drug screening study has identified suitable anti-scarring drugs. We developed a Fibroblast activation protein (FAP)-α probe, FNP1, which is specifically and rapidly activated by gelatinases to trigger NIR fluorescence. We demonstrate screening utility with abnormal scar fibroblasts, TGF-β1, anti-fibrotic drugs, and inhibitors and stimulants with undefined properties. Following validation against known anti-fibrotic treatments, compounds ‘R’ and ‘T’ were discovered to possess anti-scarring properties and were further validated with gene expression and immunoassay analysis. 2) Abnormal scar prognosis prior to full manifestation can currently be achieved only by skin biopsies, with further processing and analysis. However, biopsies are limited by invasiveness, pain, inconvenience, and the complications of further scarring and infection. In response, we pioneered the concept of topically-applied nanoparticles to probe mRNA non-invasively. NanoFlares, highly ordered nucleic acids surrounding a nanoparticle core, were chosen for their skin-penetrative properties. These comprise recognition and reporter elements that alter fluorescence emission properties upon target hybridization. NanoFlares targeting connective tissue growth factor (CTGF) demonstrated specificity in solution, cells, ex vivo (human) engineered tissue and animal models (mice, rabbits). Notably, NanoFlare performance was validated with non-coding (uptake-control) NanoFlares and by gene expression analysis against functional measures of abnormal scarring. I will elaborate on the critical role nanotechnology can play in abnormal scar therapy and diagnostic development. Specifically, FNP1 is an easy-to-use nanosensor that rapidly identifies novel anti-scarring drugs or drug combinations. We also demonstrate the first-ever instance of biopsy-free skin diagnosis using topically-applied NanoFlares, validated across several abnormal scarring models. Crucially, gene-based molecular imaging with nanotechnology may dramatically alter healthcare paradigms for skin diseases.
Ignite Panel Discussion
Keynotes
Molecular Libraries
Micro- and Nanotechnologies
High-Definition Biotechnology
Drug Target Strategies
Data Analysis and Informatics
Automation and High-Throughput Technologies
Cellular Technologies
Biologics Discovery
Assay Development and Screening
Utilizing a modified RapidFire 365 system improves throughput and enables simultaneous mechanistic evaluation of multiple Histone Acetyltransferase enzymes and their small molecule inhibitors
Open to view video.
Utilizing a modified RapidFire 365 HTMS (High Throughput Mass Spectrometry) system from Agilent, coupled to a triple quadrupole mass spectrometer, we have built a platform for the rapid biochemical characterization of the lysine acetyltransferase (KAT) enzyme family. Lysine acetyltransferases are enzymes that catalyze the transfer of an acetyl group from acetyl-CoA to the ε-amino group of lysine residues on a variety of nuclear and cytoplasmic proteins. Acetylation of lysine residues on histone tails facilitates transcriptional access to DNA and thus contributes to the regulation of gene expression. Utilizing varying substrates and small molecule inhibitors, we can extensively evaluate the acetyltransferase activity of multiple specific KAT enzymes and tailor our medicinal chemistry compound design efforts to target the KAT activity of interest. Our approach of using mass spectrometry to interrogate native peptides, proteins and nucleosomes for KAT activity allows rapid evaluation of multiple substrates as well as individual quantitation of multiple acetylated marks in a single analysis. This enables us to directly compare enzyme activity across a series of different substrates with varying acetylation or methylation marks and allows us to tailor the substrate sequences to target specific activities of interest. The platform is enzyme- and substrate-independent and allows swift characterization of a variety of different enzymes utilizing many different substrates, from peptides to proteins to nucleosomes. Speed of analysis is enabled through the implementation of a modified RapidFire 365 HTMS system, which greatly reduces the cycle time to 3 seconds/sample, compared to 15 seconds/sample using traditional RapidFire HTMS. Because of the high sample throughput achieved with the modified system, we are able to perform detailed mechanistic studies as well as compound screening campaigns under timelines not afforded by traditional RapidFire HTMS.
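The quoted cycle times translate directly into daily throughput, assuming uninterrupted operation (a simplifying assumption for illustration):

```python
# Throughput comparison implied by the quoted cycle times: 3 s/sample on
# the modified RapidFire 365 versus 15 s/sample on the traditional system.
SECONDS_PER_DAY = 24 * 60 * 60

for label, cycle_s in (("modified RapidFire 365", 3), ("traditional RapidFire", 15)):
    print(f"{label}: {SECONDS_PER_DAY // cycle_s} samples/day")
# modified RapidFire 365: 28800 samples/day
# traditional RapidFire: 5760 samples/day
```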
Induced pluripotent stem cell-based high throughput screening for modulators of cardiac hypertrophy
Open to view video.
The basis of the platform is Ncardia’s proprietary differentiation technology, which allows reproducible manufacturing of highly purified, relatively mature cardiomyocytes with high predictivity in cardiac toxicity and efficacy assays. Large-scale manufacturing in state-of-the-art stirred-tank bioreactor systems enabled the production of cardiomyocyte batch sizes suitable for high-throughput screening. We demonstrate the development and validation of a scalable assay for induction of cardiac hypertrophy, using NT-proBNP secretion as a readout. To ensure highly reproducible and accurate drug efficacy screening using our hypertrophy assay in a high-throughput mode, we have set up and validated an automated platform for cell culturing, assay readout and data handling. Using verapamil as a reference compound to reduce hypertrophy, we confirm assay robustness (S/B > 2) and reproducibility (Z’-factor > 0.4). A selected panel of anti-hypertrophic compounds was used for further assay validation and demonstrated a high level of predictivity. Finally, the assay was used to screen a collection of 1280 off-patent drugs (95% approved drugs) with high chemical and pharmacological diversity (Prestwick Chemical Library) for anti-hypertrophic activity. Altogether, these results demonstrate an end-to-end solution for more efficient and cost-effective drug discovery using a combination of cutting-edge hiPSC, bioprocessing and high-throughput screening technologies.
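The two quality metrics quoted above (S/B and Z'-factor) follow standard definitions; below is a minimal sketch computing them from positive- and negative-control wells. The control values are synthetic, for illustration only:

```python
# Standard assay-quality metrics from control wells:
#   S/B = mean(pos) / mean(neg)
#   Z'  = 1 - 3*(sd_pos + sd_neg) / |mean_pos - mean_neg|
import numpy as np

def signal_to_background(pos: np.ndarray, neg: np.ndarray) -> float:
    return pos.mean() / neg.mean()

def z_prime(pos: np.ndarray, neg: np.ndarray) -> float:
    return 1.0 - 3.0 * (pos.std(ddof=1) + neg.std(ddof=1)) / abs(pos.mean() - neg.mean())

rng = np.random.default_rng(1)
pos = rng.normal(1000.0, 50.0, size=32)  # e.g. hypertrophic controls (high NT-proBNP)
neg = rng.normal(300.0, 40.0, size=32)   # e.g. verapamil-treated controls

print(f"S/B = {signal_to_background(pos, neg):.2f}")  # ~3.3, passes S/B > 2
print(f"Z'  = {z_prime(pos, neg):.2f}")               # ~0.6, passes Z' > 0.4
```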
3-Dimensional Human Cortical Neural Platforms for Drug Discovery in Neurodevelopmental Disorders
Open to view video.
The human Central Nervous System (CNS) has a unique structural organization that is critical to its complex functions. Efforts to model this intricate network in vitro have encountered major bottlenecks. Recently, much work has focused on obtaining 3D brain organoids in an attempt to better recapitulate brain development and function in vitro. Although self-organized 3D organoids can potentially recapitulate key features of the human CNS more closely, current protocols still need major improvements before being implemented in a drug discovery scenario. We have recently launched a highly homogeneous human induced pluripotent stem cell (hiPSC)-derived cortical spheroid screening platform in 384-well format, composed of cortical neurons and astrocytes. Using high-throughput calcium flux analysis, we showed the presence of quantifiable, robust and uniform spontaneous calcium oscillations, which correlate with synchronous neuronal activity in the spheroid. Our platform is optimized to give a highly homogeneous and consistent functional signal across wells, plates and batches. Finally, we demonstrated the feasibility of using this platform to interrogate large libraries of compounds for their ability to modulate human CNS activity. Here, we describe the use of this platform to investigate neurodevelopmental disorders. When implementing hiPSCs from Rett Syndrome (RTT) patients on our platform, a clear functional phenotype emerged: RTT 3D neural cultures displayed a signal indicative of a compromised neural network, with slow, large, synchronized calcium oscillations. We also performed a pilot screen using a targeted library of 296 compounds for their ability to alleviate the observed RTT phenotypes in vitro. In summary, we demonstrated the feasibility of incorporating a neurodevelopmental disorder into a high-throughput screening platform. The system presented here has the potential to dramatically change the current drug discovery paradigm for neurodevelopmental disorders and other neural diseases.
Pre-clinical drug development progressing towards implementation of multi-dimensional cellular models with increased throughput and enhanced translational relevance
Open to view video.
The pharmaceutical industry continues to face high R&D costs and low overall success rates for clinical compounds during drug development. There is a surge in demand for the development and validation of disease-relevant, physiological human cellular models that can be implemented in early-stage discovery, thereby shifting attrition of future failures to a point in discovery where the costs are significantly lower. The current drug discovery paradigm involves lengthy and costly lead discovery and optimization campaigns, often using simple cellular models with weak translational relevance to human disease or safety. This exemplifies an inability to effectively and efficiently reproduce human disease-relevant states at an early stage to steer target and compound selection. A fundamental question, therefore, is how to recapitulate the biological complexity of a human disease state in robust, translational in vitro assays, so as to increase our success rate in late-stage drug discovery programs. The majority of new complex in vitro technologies that promise to be influential and more predictive in the next few years still need to be qualified and repurposed for drug discovery efforts. I will provide examples of where we have utilized time-dependent, multiplexed and multi-dimensional 3D cell culture approaches in both safety and efficacy settings to better characterize targets and progress molecules earlier in discovery. Furthermore, I will discuss how we have automated and miniaturized 3D oncology models that still mimic certain characteristics of the tumor microenvironment. I will also elaborate on the importance of, and methodology for, qualifying these 3D human in vitro models, and on their translatability to the clinic.
Fully Automated Large Scale uHTS using 3D Cancer Organoid Models for Phenotypic Drug Discovery Applications
Open to view video.
Pancreatic cancer remains a leading cause of cancer-associated death, with a median survival of ~6 months and a 5-year survival rate of less than 8%. The tumor microenvironment promotes tumor initiation and progression, and is associated with cancer metastasis and drug resistance. Traditional high-throughput screening (HTS) assays for drug discovery have been adapted to 2D monolayer cancer cell models, which inadequately recapitulate the physiologic context of cancer. Primary-cell 3D culture models have recently received renewed recognition, not only for their ability to better mimic the complexity of in vivo tumors but also as a potential bridge between traditional 2D culture and in vivo studies. 3D cell cultures are now cost-effective and efficient, and have been developed by combining the use of a cell-repellent surface with a magnetic force-based bioprinting technology. We have previously validated 3D spheroid/organoid-based cytotoxicity assays and now present the results of the first fully automated high-throughput screening campaign performed using this technology on a robotic platform against 150K compounds from the Scripps Drug Discovery Library (SDDL). The active compounds identified in the primary screen were tested in concentration-response curves to determine the efficacy and potency of the compounds in 2D versus 3D cell models. Further SAR analysis of the most interesting compounds will be assessed to reveal the importance of 3D models for the early drug discovery process. These data indicate that a complex 3D cell culture can be adapted for HTS, and further analysis provides a basis for future therapeutic applications, owing to the ease of use and physiological relevance of these models.
Cell Stress Biosensors for Rapid, Live-Cell Detection of Neurotoxic and Cardiotoxic Compounds in iPSC-Derived Neurons and Cardiomyocytes
Open to view video.
Nearly two-thirds of drugs fail prior to Phase II clinical testing, many due to adverse toxicity. Further, the side effects associated with many clinical drugs, especially chemotherapeutics, can force patients to end treatment. Early detection of adverse toxicity of drugs or lead compounds prior to clinical trials would aid rapid, cost-effective drug development; yet few in vitro tools exist to test for cellular toxicity in disease-relevant cell types, and those that do often lack valuable information, such as mechanisms of toxicity, or are not applicable to cell types relevant to disease. We created a genetically encoded fluorescent assay to detect chemically induced stress in living cells and tested its ability to detect neurotoxic and cardiotoxic effects in iPSC-derived peripheral neurons and cardiomyocytes. To assess the propensity of different chemotherapeutics to cause neurotoxicity, we monitored cell stress in iPSC-derived peripheral neurons in a dose-dependent manner. Cell stress responses were monitored using fluorescence imaging and compared with neurite outgrowth. Increased cell stress correlated with reduced neurite outgrowth. IC50 values calculated from each assessment were highly similar for most compounds. However, for some compounds, like oxaliplatin, the cell stress assay produced IC50 values well below those produced by measuring neurite outgrowth. These results suggest an increased sensitivity for neurotoxicity using the cell stress assay. Further, not all compounds tested induced cell stress, indicating that cell stress can be used to glean insight into specific toxicity mechanisms. We next tested the ability of the cell stress assay to identify cardiotoxic side effects from a group of chemotherapeutics known as tyrosine kinase inhibitors, as well as doxorubicin and two cardiac glycosides. In conjunction with monitoring cell stress, we also monitored cellular calcium levels and cardiomyocyte beating using a genetically encoded calcium indicator in iPSC-derived cardiomyocytes. Cell stress again served as a marker of toxicity that correlated with both changes in cardiomyocyte beating and intracellular calcium levels. Combining these data, we were able to classify the cardiotoxic tendency of chemotherapeutics from minimal to highly cardiotoxic. These results demonstrate the ability of this assay not only to identify drug-induced toxicity, but also to provide details on the mechanisms of toxicity, the latter of which is extremely important in identifying means to combat neurotoxic and cardiotoxic side effects.
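IC50 values like those compared above are typically obtained by fitting a four-parameter logistic to dose-response data; here is a minimal sketch with synthetic data (the readout values and parameters are illustrative assumptions, not data from the talk):

```python
# Fit a four-parameter logistic (4PL) to synthetic dose-response data and
# report the fitted IC50.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """4PL curve: response falls from `top` toward `bottom` as conc rises."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

conc = np.logspace(-3, 2, 10)  # concentrations, uM
rng = np.random.default_rng(2)
resp = four_pl(conc, 0.05, 1.0, 0.8, 1.2) + rng.normal(0, 0.02, conc.size)

params, _ = curve_fit(
    four_pl, conc, resp,
    p0=[0.0, 1.0, 1.0, 1.0],
    bounds=([-1.0, 0.0, 1e-6, 0.1], [1.0, 2.0, 1e3, 10.0]),  # keep IC50/Hill positive
)
print(f"fitted IC50 ~ {params[2]:.2f} uM")
```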
Rapid measurement and comparison of multiple cytokines and immune checkpoint molecules from 2D and 3D cell cultures of immune and breast cancer cell populations grown in a complex co-culture model
Open to view video.
Breast cancer tumors can adapt to immune cell infiltration by upregulating immune checkpoint proteins such as Programmed death-ligand 1 (PD-L1) in response to increased local concentrations of cytokines and inflammatory markers secreted by invading T lymphocytes. This allows a tumor to evade immune targeting and reduce the immune response. We developed a culture system to examine the effects of interactions between immune and cancer cell populations on the expression and secretion of a variety of immune checkpoint proteins and cytokines. Further, to determine the influence of cytoarchitecture on the tumor response, we co-cultured human peripheral blood mononuclear cells (PBMCs) with HCC38 cells, a triple-negative breast cancer cell line, in both traditional monolayer format and as 3D spheroids. PBMCs were stimulated with Dynabeads® coated with CD3 and CD28 antibodies to mimic the effects of antigen-presenting cells and to induce the differentiation and expansion of T lymphocytes. Once activated, T cells upregulate a variety of immune checkpoint molecules and secrete cytokines into the tumor microenvironment, which can have varying effects on target cell populations. To investigate the mechanisms involved in the induction of PD-L1 in these cultures, we measured select cytokines and immune checkpoint markers using AlphaLISA no-wash homogeneous assays. Alpha technology is a useful tool for the rapid screening of multiple biomarkers from the same well of a culture plate, requiring very low sample volume (5 µL or less) and being highly amenable to automation. Using this assay, we were able to discriminate between effects on the HCC38 cells caused by secreted factors as opposed to direct cellular contact by including treatment with PBMC-conditioned media. For the most part, different treatments produced similar effects on biomarker expression and cytokine secretion in 2D and 3D cultures, with some notable exceptions which will be discussed. The addition of 3D cell culture models into discovery workflows can reduce downstream costs such as secondary testing, but their adoption in regular screening programs has been hindered by uneven culture growth, high variability, and a lack of methods for high-throughput analysis. The 3D cultures shown here were grown in CellCarrier Spheroid Ultra-Low Attachment (ULA) microplates, high-quality clear plates that enable the formation of consistently round spheroids from various cell types. Cellular health and proliferation were assessed with ATPlite 1step Luminescence assays and through cellular imaging using the Opera Phenix high-content imaging system. These data illustrate and address some of the benefits as well as challenges in developing a biologically relevant culture system for investigating the complex mechanisms involved in tumor evasion of the immune response. We show here how AlphaLISA assays can easily be used to screen for compounds that modulate interactions between immune and cancer cell populations.
Single cell-based investigations of endocrine disrupting chemicals by high content analysis
Open to view video.
Our lab has a longstanding interest in single-cell analysis-based transcription studies. We have developed novel mechanistic and phenotypic approaches to study transcription within a cellular context, but with sensitive, high-throughput approaches. Our main platform allows us to quantify transcription using high-throughput microscopy and image analytics that are designed to link, at the single-cell level, mRNA synthesis to DNA binding and promoter occupancy of nuclear receptors (NRs) and coregulators, histone modifications and large-scale chromatin modeling. A growing list of endocrine disrupting chemicals (EDCs) from the environment have been shown to target NRs, including the estrogen and androgen receptors (ER, AR), with large-scale efforts to develop and test environmental samples for EDC activities. Our multiplex assays with engineered cells or tumor cell lines endogenously expressing ER/AR are currently being used to assess individual or mixtures of known hormones and EDCs via machine learning approaches. These studies are currently being applied to environmental samples obtained from Galveston Bay and the Houston Ship Channel via new funding from the NIEHS Superfund Research Program.
The evolution of a multidimensional approach combining existing and next wave technologies in delivering binding kinetics in early drug discovery
Open to view video.
Since the drug-target residence time concept was first proposed more than 10 years ago, it has received much attention across drug discovery research. The central principle is that the rates of drug-target complex formation (kon) and breakdown (koff) describe target engagement in open systems, and that target occupancy is better predicted by kinetic (kon, koff) rather than thermodynamic (IC50, Kd) parameters in the dynamic in vivo setting. Consequently, the incorporation of binding kinetics into PK-PD models is proposed to lead to better prediction of drug efficacy. A large number of marketed drugs are characterized by long residence times, often equating with extreme potency of the binding interaction. This can confound accurate equilibrium potency measurements, through both the length of time needed to reach equilibrium and tight-binding limitations. Kinetic measurements are not constrained by these limitations and can therefore offer a more practical and accurate measure of intrinsic potency. However, historical methods for determining binding kinetics are of low to medium throughput and limited to certain target classes. Recent advances in plate reader technology and the emergence of new methodologies have enabled the development and prosecution of higher-throughput screens that allow real-time measurement of on-rates, off-rates and affinity. Multidimensional approaches to determining mechanism of action at different points across early drug discovery, from hit validation to candidate selection, will be presented. These case studies will focus on data from complementary biochemical and cellular systems, generated using a variety of well-established technologies and emerging next-wave methods. Opportunities to profoundly impact drug discovery through better decision-making, combined with an ability to effectively reduce attrition at each stage, will be critical in defining future successes as an industry. Improving our understanding of both on- and off-target kinetics that underpin potency will be an important component in achieving this goal.
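For simple 1:1 binding, the kinetic and thermodynamic parameters discussed above are linked by two standard relationships, Kd = koff/kon and residence time tau = 1/koff; a worked example with illustrative rate constants:

```python
# Worked example of the standard 1:1 binding relationships between kinetic
# and thermodynamic parameters. Values are illustrative.
kon = 1.0e6      # association rate constant, M^-1 s^-1
koff = 1.0e-3    # dissociation rate constant, s^-1

kd = koff / kon              # equilibrium dissociation constant, M
residence_time = 1.0 / koff  # mean drug-target residence time, s

print(f"Kd  = {kd:.1e} M (1 nM)")
print(f"tau = {residence_time:.0f} s (~17 min)")
# Two compounds can share the same Kd yet differ greatly in koff, which is
# why kinetic parameters can predict in vivo occupancy better than Kd alone.
```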
High-throughput agonist shift assay development, automation of global curve fit analysis and visualization of positive allosteric modulators (PAM) to support Drug Discovery
Open to view video.
In recent years, allosteric modulation of 7-transmembrane receptors (7TMRs) has become a highly productive and exciting field of receptor pharmacology and drug discovery. Targeting the less conserved and spatially distinct allosteric binding site has been a strategy adopted to overcome selectivity issues with orthosteric ligands. For example, in the case of the five muscarinic receptors, this lack of subtype selectivity has made it extremely difficult to probe the therapeutic potential of activating the predominantly CNS-expressed M4 receptor, due to simultaneous activation of the peripherally expressed M2 and M3 subtypes resulting in severe side effects. To characterize positive allosteric modulator (PAM) pharmacology at the M4 receptor, a PAM shift assay was developed that measures the ability of compounds to shift the dose-response relationship of the native ligand, acetylcholine (ACh), thereby enhancing receptor activity. In response to increased demand, an automated high-throughput agonist shift assay was established to characterize PAM molecules for the muscarinic acetylcholine receptors M₄ and M₂ across several species (human, rat and rhesus). Global curve fit analysis of PAM shift assays is complex, and data handling previously required more resources and time than is feasible in the course of a lead optimization program. Therefore, data analysis was streamlined using an automated workflow manager (AWM), ActivityBase, to populate the data points and modulator concentrations, and then a customized Biovia Pipeline Pilot script to automate GraphPad Prism fitting. Visualization was accomplished with Spotfire, in which the number of compounds was not limited and data were analyzed in an automated manner to produce quantifiable measurements of allosterism. In parallel, these assays were transferred to a HighRes Biosolutions robotics system, resulting in a throughput increase of 300%. The improved and automated workflow provided greater reproducibility owing to higher fidelity of the data. The overall automation of the assay, as well as of the global curve fit analysis, has saved significant time in assay execution, simplified data analysis and reporting, and eliminated errors due to manual work. The derived allosteric parameters can be used to prioritize compounds for in vivo studies and preliminary human dose prediction.
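One common way to model the EC50 shift such an assay measures is the allosteric ternary complex model with affinity cooperativity alpha; a minimal sketch follows (this particular model and the parameter values are assumptions for illustration, not necessarily the global fit the presenters used):

```python
# Allosteric ternary complex model (affinity cooperativity only):
#   EC50_shifted = EC50 * (1 + [B]/KB) / (1 + alpha*[B]/KB)
# where [B] is modulator concentration, KB its affinity, alpha cooperativity.

def ec50_scaling(b_conc: float, kb: float, alpha: float) -> float:
    """Factor by which the agonist EC50 is scaled at modulator conc b_conc."""
    return (1.0 + b_conc / kb) / (1.0 + alpha * b_conc / kb)

kb, alpha = 1.0e-6, 10.0   # modulator KB = 1 uM, 10-fold positive cooperativity
for b in (1e-7, 1e-6, 1e-5):
    print(f"[PAM] = {b:.0e} M -> ACh EC50 scaled by {ec50_scaling(b, kb, alpha):.2f}")
# The scaling saturates at 1/alpha: even a large PAM excess cannot shift the
# EC50 more than alpha-fold, which is what a global curve fit estimates.
```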
Maximizing Chemical Diversity in a Natural Product Screening Library
Open to view video.
High-throughput screening of natural products is a valuable part of drug discovery, as seen by the numerous examples of natural product hits which have become clinically useful drugs. For our screening library within the NCI Molecular Targets Program, we have sought to enhance the base NCI collection of tropical plant and marine invertebrate pre-fractionated samples with extremophilic bacteria and unusual fungi from temperate locations, as well as natural product-like diversity-oriented synthesis libraries from synthetic academic collaborators. Challenges include working with a wide variety of countries while respecting the Convention on Biological Diversity, keeping track of and analyzing tens of thousands of samples from a multitude of sources, and sharing data and collaborating with source organizations. Examples of work in Kazakhstan, Turkey, and Brazil will be discussed, as well as work with academic synthesis groups and US microbial research groups. Funded in part by NCI Contract No. HHSN261200800001E, and by the Intramural Program of the US National Cancer Institute (Project 1 ZIA-BC011469-06).
The Data Journey at GSK: How Biopharm Molecular Discovery has digitalised its workflows to increase data quality, reduce cycle-times and enable data reuse
Open to view video.
In recent years, the Biopharm Molecular Discovery team at GSK has invested in platform technologies and process optimisation, which has dramatically increased the scale and throughput of the organisation. As we have grown, the volume of samples we manage and data we produce has increased greatly, and traditional methods of capturing, analysing and sharing data for teams to interpret are no longer adequate to keep pace. At GSK we passionately believe that our data, and the organisational knowledge it encodes, is one of our greatest assets. To maximise its value, we have modernised our workflows and processes. By focussing on data standards, workflow automation, non-invasive data governance, data integration and upskilling, the "Data Journey Team" has significantly improved the efficiency of our therapeutic programs, increased access to data and enabled new analyses which previously could not have been considered. To deliver best practice across the organisation, we have ensured that every therapeutic program team has a data analyst whose role is to assure the quality of the team's data and to configure dashboards which are accessible to all team members and offer impactful visualisations to facilitate rapid decision-making. To enable this, we have data engineers who integrate data from multiple master data sources, including our legacy systems. Our goal is to remove the need for data silos (spreadsheets and slide presentations) and instead provide our teams with the tools to interrogate the source data directly. The efficiency gains can be remarkable; some critical-path data wrangling tasks which previously took weeks can now be achieved in a few minutes with templating and well-designed visualisations. Data integration has also enabled us to reveal learnings across multiple programs and quickly share high-quality data with business partners for secondary uses such as modelling, prediction and in silico design. The upshot: we have developed a Biopharm Knowledge-base, populated from source data and metadata, accessible across the organisation and generating powerful new insights to drive our science.
A longitudinal analysis over 10 years of enabling technologies for early drug discovery using open source tools
Open to view video.
Progress in 3D printing technology and the evolution of open source hardware/software tools over the past decade have made the development of custom automation much more accessible to laboratory staff. These efforts have democratized access to many tools and technologies which were previously the exclusive domain of engineering and fabrication firms. In addition to the cost reductions due to open source efforts, many domain experts have produced voluminous amounts of free, quality training material and documentation, which further improves the accessibility of these tools and technologies to end users with little to no formal education in related fields. The result of these efforts can be seen in the fabrication of adaptive/customized technologies that have leveraged 3D printers, microcontrollers, electronics design tools and the explosion of open source software. The Lead Identification team at Scripps Florida has leveraged many of these open source tools to deploy a variety of custom in-house platforms which increase productivity, further automate quality control tasks and enable novel HTS workflows. These platforms include a pipetting light guide which makes use of custom 96- and 384-well LED matrices that match the SBS footprint of microtiter plates and were designed using the KiCad suite of open source electronic design automation tools, then fabricated using the OSH Park printed circuit board production service. These custom LED matrices, when coupled with an Arduino microcontroller, provide users with guided step-by-step, well-by-well pipetting indicators to eliminate errors, assist with new employee training and increase productivity when laboratory automation cannot be used. Other platforms developed by the Lead Identification team include an Arduino- and Raspberry Pi-based real-time liquid dispenser QC system which has been fully integrated into the Scripps uHTS platform, custom 3D-printed magnetic incubator shelving for HTS spheroid projects, and an automated platform for the QC of compound plates. These custom-designed in-house platforms have positively impacted screening and compound management operations by reducing staff workload for QC processes, replacing manual spheroid screening processes with automated routines and improving quality assurance in compound management operations; tens of millions of compound wells have been imaged and verified against the Scripps corporate LIMS. The advantages of developing in-house automation using open source tools, and lessons learned over a decade of doing so, are presented.
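At the heart of a pipetting light guide like the one described is a mapping from well names to LED positions; below is a minimal sketch for a 384-well SBS layout (the row-major LED wiring and the serial-streaming note are assumptions for illustration, not details of the Scripps build):

```python
# Map a 384-well SBS position (rows A-P, columns 1-24) to a linear LED
# index on a matching LED matrix, assuming row-major wiring.
ROWS_384, COLS_384 = 16, 24

def well_to_led_index(well: str) -> int:
    """'A1' -> 0, 'A24' -> 23, 'P24' -> 383 (row-major order)."""
    row = ord(well[0].upper()) - ord("A")
    col = int(well[1:]) - 1
    if not (0 <= row < ROWS_384 and 0 <= col < COLS_384):
        raise ValueError(f"not a 384-well position: {well!r}")
    return row * COLS_384 + col

# A host script could stream these indices to a microcontroller over serial,
# one per pipetting step, to light the next target well.
for well in ("A1", "B12", "P24"):
    print(well, "->", well_to_led_index(well))
```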
Guiding the use of small molecule inhibitors in cancer using chemical-genetic interaction maps
Open to view video.
Small molecule inhibitors often have unpredictable mechanisms of action and sensitivity in cancer. We describe new approaches that leverage rapid screening of collections of genetically defined, isogenic cells to map the ways by which tumor genetics can modulate the drug responses of cells in culture. This concept has been applied to both small molecule inhibitors (Martins et al., Cancer Discovery, 2015) and DNA-damaging chemotherapy (Hu et al., Cell Reports, 2018) to identify new synthetic lethal relationships as well as genetic causes of resistance to PARP inhibitors. Our studies define a systems approach using in vitro screening of cell lines for the rational design and optimization of small molecule inhibitors for clinical development.
Generation of ion channel blocking antibodies by fusing venom-derived 'knottins' into antibody CDR loops
Open to view video.
Much effort and expenditure have been devoted by pharmaceutical and biotechnology companies, with little success, in the quest to generate potent and selective antibody inhibitors of ion channels. In contrast, a multitude of venomous animals block ion channels using small cysteine-rich peptides in defence or predation. However, such naturally occurring “knottin” blockers of ion channels often suffer from manufacturing difficulties, short half-lives and a lack of specificity. Using phage display, we have developed a novel molecular fusion format wherein naturally occurring cysteine-rich peptides are inserted into peripheral CDR loops of an antibody while retaining the folding and function of both molecules. In this novel format (termed a KnotBody), the cysteine-rich peptide enjoys the extended half-life of an antibody molecule, and the peripheral CDRs gain additional diversity within a scaffold which is predisposed to blockade of ion channels. We have demonstrated functional insertion of multiple cysteine-rich peptides which block the voltage-gated potassium channel Kv1.3 and the acid-sensing ion channel ASIC1a. The modular nature of the KnotBody binding surface and the amenability of this format to phage display technology will facilitate further optimisation of the potency and selectivity of ion channel blockade by engineering both knottin and antibody loop sequences.
The Degrader Strategy Rule Book
Open to view video.
The classic drug-receptor occupancy model states that the magnitude of the response to a drug is directly proportional to the amount of drug bound. In this scenario, the effect of a drug is limited by its affinity and its residence time on its target. However, a new paradigm in drug mechanism of action is emerging, in which a drug modulates the proteostasis of its target, resulting in decreased protein levels. Degrader theory predicts that when a drug induces the proteolytic degradation of its target protein, one drug molecule can degrade multiple target protein molecules (catalytic degradation). Thus, the magnitude of the drug’s effect is no longer limited by drug affinity or receptor occupancy, and the duration of effect can exceed that of the drug exposure. The first drug known to act through a degrader mechanism was fulvestrant, a selective estrogen receptor degrader (SERD), which consists of an estrogen receptor ligand and a “degradation tail”. We will discuss the development of other novel protein degraders that also use the degradation tail strategy. Another systematic approach to the development of protein degraders is based on bivalent molecules (Chemical Inducers of Degradation; CIDEs) that bring the target protein into close proximity to ubiquitin ligases, resulting in its ubiquitination and subsequent degradation. We will also discuss factors that influence the ability of these types of molecules to induce protein degradation.
Capturing macrophage complexity in vitro: human iPSC models as a platform for disease modelling and drug discovery
Open to view video.
Despite a substantial appreciation for the critical role of macrophages in adaptive and innate immunity, detailed mechanistic understanding of human macrophage biology has been hampered by the lack of reliable and scalable models for cellular and genetic studies. Although commonly used model systems such as macrophage-like leukemic cell lines or primary monocyte-derived macrophages have advanced our understanding of macrophage biology, such cellular systems have differing limitations with regard to human disease relevance, genetic fidelity and practicality of cell supply. Here we discuss the application of human induced pluripotent stem cell (hiPSC)-derived macrophages as an unlimited source of patient genotype-specific cells, which provides a powerful and scalable platform for disease modeling and drug screening. Through multi-parametric high-content cell-based assays, we demonstrate that hiPSC-derived macrophages can be polarized in vitro to both functionally and molecularly distinct ‘classically activated’ and ‘alternatively activated’ subtypes. Furthermore, we compare the molecular fidelity of such subtype-specific cellular models to primary counterparts through phenotypic, functional, transcriptomic and proteomic profiling. Harnessing our hiPSC-derived macrophage platform with CRISPR/Cas9-based genome editing has further enabled the genetic, molecular and cellular exploration of mechanisms underlying immune cell disease phenotypes. Embedding such hiPSC cellular models into the earliest stages of drug discovery, in place of traditional reductionist cellular models, provides both the human physiological relevance and the throughput needed to improve the efficiency of drug discovery and development.
Discerning function from phenotype with high throughput image analysis
Open to view video.
Shared data ecosystem of NCI’s Cancer Target Discovery and Development (CTD^2) Network
Open to view video.
The National Cancer Institute (NCI) supports, among many initiatives, the Cancer Target Discovery and Development (CTD2) Network, which currently comprises 12 Centers nationwide. The primary goal of the Network is “to bridge the knowledge gap between large-scale genomic datasets and the underlying etiology of cancer development, progression, and/or metastasis”. To make progress toward this ambitious goal, member Centers of the Network are necessarily quite diverse in their biological specialties, experimental methods, and informatic or computational sophistication. To encourage sharing and re-use of data between member Centers, and data dissemination to the wider cancer research community, the CTD2 Network has evolved an interconnected ecosystem of web-based tools and data services, including the CTD2 Dashboard, CTD2 Data Portal, and several investigator-initiated web-based resources. Each of these resources serves distinct purposes and audiences, and accordingly requires different compromises to implement and different processes to maintain. I will provide an overview of these resources, lessons learned during their evolution, and prospects for future data integration within and beyond the CTD2 Network.
Data Integration: Genome X Transcriptome X EHR
Open to view video.
Large-scale biobanks linked to electronic health records (EHR) offer unique opportunities for discovery and translation. I will describe how we impute transcript levels using publicly available genomic resources such as GTEx, by building SNP-based prediction models from the measured transcript levels and genotype data in GTEx and then applying these models to the genotype data in BioVU, the biobank at Vanderbilt University. BioVU has DNA on more than 250,000 samples linked to EHRs going back 10-15 years on average, and more than 20 years for some individuals. In addition to BioVU, we also utilize medical phenome data on more than 2.8 million subjects. We have genome interrogation on 120,000 of the BioVU samples and test ~1200 phenome codes against ~18,000 genes with high-quality out-of-sample prediction performance. Studies in BioVU are providing evidence for a continuum from Mendelian to common, complex diseases, and I will summarize some of the most recent evidence.
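At prediction time, SNP-based transcript imputation of the kind described above (in the spirit of PrediXcan-style models) reduces to a weighted sum of allele dosages per gene; a minimal sketch with synthetic weights and genotypes:

```python
# Predicted (genetically regulated) expression for one gene as a weighted
# sum of SNP allele dosages. Weights and dosages are synthetic placeholders
# standing in for a model trained on a reference panel such as GTEx.
import numpy as np

rng = np.random.default_rng(3)
n_individuals, n_snps = 1_000, 40

dosages = rng.integers(0, 3, size=(n_individuals, n_snps)).astype(float)  # 0/1/2 alleles
weights = rng.normal(0.0, 0.05, size=n_snps)  # per-SNP effect sizes from the trained model

# Imputed expression across the cohort, ready to be tested for association
# against EHR-derived phenome codes.
predicted_expression = dosages @ weights
print(predicted_expression.shape)  # (1000,)
```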
Building accurate research models via genome editing
Open to view video.
For drug discovery and therapeutic development, the quality of data relies heavily on the model used to mimic either the disease state or a biological process in humans. The models can be immortalized cancer cell lines, induced pluripotent stem cell (iPSC) lines, primary tissues or whole animals. With rapid advances in gene editing technologies, desired modifications are being introduced more precisely into a wide range of genomes. Yet limitations still exist. I will discuss the various types of editing we have created at our center, such as scar-free point mutations, large deletions, large insertions and conditional alleles; issues we ran into and solutions for obtaining specific on-target modifications; and efforts to increase throughput. We are also examining our large collection of edited single-cell clones for unexpected on-target modifications and off-target events, to better understand the genetic accuracy of the models we create.
SLAS Ignite Panel Discussion: Closing the Academic-Industry Innovation Gap
Open to view video.
How do great ideas become new companies? What types of scientific gaps do pharma, biotech and technology companies have? How can we tighten collaboration between these two tribes? Join a panel of experts from the entrepreneurial and technical worlds as they answer those questions and share their expertise. This discussion is aimed at academics, students and early-stage entrepreneurs; however, anyone interested in innovation “war stories” is invited to attend.
LARA and SiLA2.0/AnIML – an integrated open source laboratory automation planning and evaluation suite
Open to view video.
Patient specific human iPSCs to model fetal hematopoietic anomalies associated with infant leukemia
Open to view video.
Background: Most pediatric cancers arise from immature cell types, and nearly all show a paucity of somatic mutation, indicating a significant link between pediatric cancer, germline variation and aberrant development. Infant leukemia (IL) is a unique and poorly understood pediatric leukemia with a mortality rate >50% and almost no somatic mutation. While IL has a high prevalence of KMT2A rearrangements (MLL-r), these rearrangements fail to induce a short-latency leukemia phenocopying IL in mammalian models when expressed at physiologic levels in HSCs. IL arises in utero, suggesting derivation from a fetal progenitor. Hematopoietic development during embryogenesis is regulated spatio-temporally, with hematopoietic stem cell (HSC)-independent progenitors specified extra-embryonically, and adult-like HSC progenitors specified intra-embryonically. Previously, we found that IL patients, independent of the presence of MLL-r, possess a significant enrichment of germline variation in COMPASS complex members, which strongly suggests that additional germline factors are required for IL transformation. COMPASS protein complexes are nucleated by members of the SET and MLL/KMT2 family. They have specific, non-overlapping and poorly understood roles in mesoderm and hematopoietic differentiation. In addition, several COMPASS members have been found to be recurrently mutated in various cancers and certain developmental defects.

Methods: We have established multiple human iPSC lines (hPSCs) from infant leukemia germline cells in order to characterize how each child’s germline variation impacts hematopoietic specification, compared to human ESCs and hPSCs from healthy individuals with and without COMPASS gene knockouts and inducible MLL-r. Accessing hematopoietic progenitors in a human embryo is very challenging. However, by utilizing a unique, stage-specific human pluripotent stem cell differentiation strategy that can generate the progenitors of each of these programs, we can interrogate genetic and epigenetic mechanisms of normal and IL-associated hematopoietic development and transformation, respectively.

Results: To date, we have genomic, epigenomic and functional results in these hPSCs demonstrating a crucial role for specific COMPASS members in the endothelial-to-hematopoietic transition (EHT), where CD34+ cells without MLL-r expression lose the ability to specify hematopoietic progenitors. Compared to wild-type hPSCs, isogenic hPSCs with single-gene COMPASS family knockouts demonstrate quantitative changes in histone modifications, even in the pluripotent state before any directed differentiation has begun.

Conclusion: Like many pediatric cancers, IL appears to be more a developmental defect, due to combinations of germline variation and acquired mutations that skew mechanisms of cell fate specification, than an aggregation of genetic errors as seen in adult cancers. The functional consequences of these combinations of variants and mutations can best be modeled in cells from the actual patients, reprogrammed to hPSCs and directed to the relevant cell fates. Future work will focus on xenograft studies with these cells to serve as preclinical models.
Cloud-based Data Analytics Brings the Power of Artificial Intelligence to Biologists for High Content Analysis
Open to view video.
Open to view video. While the technologies for the generation of high content data are now widely accessible, many biologists still struggle to make full use of the extracted numeric data. Often this is due to a lack of suitable data analytics tools that they can use independently, without having to engage the services of a data scientist. Here we show how a cloud-based data analytics application can give biologists the ability to analyze the most complex high content data sets. The application allows users to analyze these large numeric data sets with unsupervised analytical methods to identify novel cellular phenotypes. They can then use integrated Artificial Intelligence functionality to build AI models based on these phenotypes that can be used to identify similar phenotypes. Until recently, another problem in the high content analysis field was a lack of publicly available screening data sets that biologists could use for training and analytical method validation. We demonstrate the utility of intuitive web-based data analytics tools in the analysis of data from a publicly available "Cell Painting" chemical screen from the public GigaDB data repository (https://www.gigadb.org/). Phenotypic outliers were identified which could then be rapidly viewed in the IDR (https://idr.openmicroscopy.org) public image data repository for validation of the phenotypes. Hierarchical clustering revealed a group of hits that were used to generate a Random Forest model, which was then applied to identify similar phenotypes. Our data analysis strategy can greatly accelerate high content screening projects by giving biologists the ability to carry out their own data analysis. The ability to apply advanced AI algorithms without having to engage the services of a data scientist is especially valuable, and the application promises to make advanced high content analytical methods accessible to a far wider audience.
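As an illustration of this two-step pattern (unsupervised phenotype discovery followed by supervised model building), the following Python sketch uses scikit-learn on a hypothetical per-well feature matrix; the data, parameters and library choices are illustrative assumptions, not those of the application described above.

import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import AgglomerativeClustering
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-well feature matrix: one row per well, one column per
# extracted image feature (random data standing in for real HCS output)
rng = np.random.default_rng(0)
features = rng.normal(size=(384, 50))
scaler = StandardScaler().fit(features)
X = scaler.transform(features)

# Unsupervised step: hierarchical clustering groups wells into candidate phenotypes
phenotypes = AgglomerativeClustering(n_clusters=8).fit_predict(X)

# Supervised step: a Random Forest trained on the discovered phenotype labels
model = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, phenotypes)

# New wells can then be scored for similarity to the discovered phenotypes
new_wells = scaler.transform(rng.normal(size=(96, 50)))
predicted = model.predict(new_wells)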
An Additive Biomanufacturing Platform to Automate 3D Cell Culture and Screen for 3D Biomaterial Properties
Open to view video.
Open to view video. Three-dimensional (3D) cell culture technologies are emerging as a toolbox to engineer 3D models with physiologically relevant characteristics to bridge the gap between in vitro and in vivo. To incorporate sophisticated biochemical and mechanical cues as mimics of the native extracellular matrix, biomaterials – particularly hydrogels – are implemented to design physiologically relevant models for investigating cell physiology and manufacturing disease models. Even though the relevance of these 3D models has become recognized and has been successfully demonstrated for drug screening applications, current 3D cell culture technologies for hydrogels are not well suited for industrial applications. Preparation and manufacturing of 3D models involve many manual steps that increase the probability of user error; errors that can potentially manifest as inaccurate results. Additionally, manufacturing 3D models with standard laboratory equipment, originally developed for 2D cell culture, is challenging, since automated solutions are missing for 3D model preparation and manufacturing involving hydrogels. In addition, the low throughput of hydrogel-based 3D cell culture applications hampers widespread adoption and integration into the drug development pipeline. For this reason, 3D cell culture can benefit greatly from the integration of automation and high-throughput approaches for 3D model manufacturing and screening. Herein, an 'Additive Biomanufacturing Platform' has been designed and developed for automated production and testing of hydrogel-based 3D models to screen for 3D biomaterial properties. Using a modular platform design, exchangeable and customizable units have been designed to enable a (semi-)automated workflow for preparation, manufacturing, and screening. The biomanufacturing unit converges conventional pipetting capabilities with emerging bioprinting functionalities, resulting in a closed process workflow for the preparation of the various hydrogel compositions, the required mixing steps with cells, and finally the production of the 3D models into SBS-compatible or customized formats. In addition to a well plate transportation unit, the integrated inventory and storage units enable a throughput of up to 20 well plates. To demonstrate the applicability of the developed platform to automate the manufacturing of 3D models, the integration of various hydrogel systems will be presented along with the capability to manufacture microdroplets as well as defined geometric structures. The feasibility of manufacturing physiologically relevant 3D models in a reproducible and high-throughput manner will be demonstrated with breast (MCF-7) and prostate (LNCaP) 3D cancer models. In this presentation, we discuss the current barriers to successfully translating hydrogel-based 3D models to industrial applications: automation, throughput, and validation criteria. Building upon our research objectives, the talk presents how the development of automated solutions for manufacturing and screening of 3D models can foster reproducibility and efficacy. Finally, an outlook will present the capabilities to screen for 3D biomaterial properties to establish a 3D model library.
A novel European Compound Screening Library for EU-OPENSCREEN
Open to view video.
Open to view video. The European Research Infrastructure EU-OPENSCREEN was founded with the support of its member countries and the European Commission. Its distributed character offers complementary knowledge, expertise and instrumentation in the field of chemical biology from 20 European partner institutes, while its open working model ensures that academia and industry can readily access EU-OPENSCREEN's collection of chemical compounds, equipment and associated screening data. This work describes our collaborative effort between different medicinal and computational chemistry sites in Europe to set up a protocol for the rational design of a general-purpose screening library against novel biological targets, consisting of about 100,000 commercial small molecules. The molecules were first pre-selected to ensure chemical stability, solubility and other screening-compliant physicochemical properties as well as the absence of reactive compounds. Then the collaborating groups applied different methods to create diverse sub-libraries according to their own in-house expertise, aimed at providing hits for a wide variety of druggable targets. Quality of approved vendors, cost and compound availability were other factors which shaped the content of the final screening library.
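As a minimal sketch of the kind of physicochemical pre-selection described above, the following Python example uses RDKit; the property thresholds and example molecules are illustrative assumptions, not EU-OPENSCREEN's actual selection criteria.

from rdkit import Chem
from rdkit.Chem import Descriptors, rdMolDescriptors

def passes_prefilter(smiles):
    # Thresholds are illustrative only
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return False
    return (200 <= Descriptors.MolWt(mol) <= 500
            and -1 <= Descriptors.MolLogP(mol) <= 5
            and rdMolDescriptors.CalcTPSA(mol) <= 140
            and rdMolDescriptors.CalcNumRotatableBonds(mol) <= 10)

candidates = ["CCO", "CC(C)Cc1ccc(cc1)C(C)C(=O)O"]  # ethanol fails, ibuprofen passes
library = [smi for smi in candidates if passes_prefilter(smi)]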
Building the Laboratory of the Future: How Digital Tools can Augment Human Scientific Output
Open to view video.
Open to view video. For many decades mankind has looked to automate tedious and error-prone steps of biomedical research in an attempt to improve reproducibility and throughput. While automation has had a large beneficial effect on the overall throughput of biomedical research, reproducibility is still facing a crisis. In this talk, we will discuss automation trends in data generation, collection and analysis in the biomedical sciences and show how carefully designed digital tools can improve reproducibility while keeping humans in the loop. We will review both currently available and future-looking technologies that are key to the laboratory of the future (e.g., augmented reality, machine/statistical learning, voice assistants). We will also discuss the need for a new framework that focuses on using machines and software to augment human scientists in the laboratory.
Mapping the genetic landscape of human cells
Open to view video.
Open to view video. Seminal yeast studies established the value of comprehensively mapping genetic interactions (GIs) for inferring gene function. Efforts in human cells using focused gene sets underscore the utility of this approach, but the feasibility of generating large-scale, diverse human GI maps remains unresolved. We developed a CRISPR interference platform for large-scale quantitative mapping of human GIs. We systematically perturbed 222,784 gene pairs in two cancer cell lines. The resulting maps cluster functionally related genes, assigning function to poorly characterized genes, including TMEM261, a new electron transport chain component. Individual GIs pinpoint unexpected relationships between pathways, exemplified by a specific cholesterol biosynthesis intermediate whose accumulation induces deoxynucleotide depletion, causing replicative DNA damage and a synthetic-lethal interaction with the ATR/9-1-1 DNA repair pathway. Our map provides a broad resource, establishes GI maps as a high-resolution tool for dissecting gene function, and serves as a blueprint for mapping the genetic landscape of human cells.
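To make the underlying quantity concrete, a common way to score a genetic interaction is as the deviation of the observed double-perturbation phenotype from an expectation assuming independence; the sketch below uses a multiplicative expectation and toy numbers, and is not the authors' exact scoring pipeline.

import numpy as np

def gi_score(single_a, single_b, double_ab):
    # Deviation of the observed double-perturbation phenotype from the
    # multiplicative expectation (phenotypes relative to wild type = 1.0)
    return double_ab - single_a * single_b

# Two mildly deleterious perturbations that are synthetic-sick together
print(gi_score(0.9, 0.8, 0.4))  # 0.4 - 0.72 = -0.32, a strong negative interaction

# Gene function is then inferred by correlating whole GI profiles between genes
profiles = np.random.default_rng(0).normal(size=(3, 100))  # 3 genes x 100 partners
print(np.corrcoef(profiles))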
The Practical Application of Natural Language Processing to Improving Data Quality in Drug Discovery
Open to view video.
Open to view video. The new generation of Machine Learning (ML) models is greedy and needs lots of high-quality data. Fortunately, there is a lot of data available in the cloud. Unfortunately, the available data is often unstructured (text) and needs to be transformed to a structured format, or is spread across many data sources, which requires linking data across data sets. Linking and mining both structured and unstructured data is complicated because data suffers from lack of context, different words for the same thing, ambiguous words (Hedgehog the animal or the gene?), abbreviations, personal "shortcuts" and format issues. Our team has focused on named entity recognition (NER) to resolve these issues. NER uses curated dictionaries to relate terms to entities (P2Y2 is a gene). Dictionaries contain synonyms along with rules for refining and disambiguating terms. So, Aripiprazole is treated the same as Abilify; if Hedgehog is in the same sentence as "signaling" it is most likely the gene; and shorthand like IL1/2 is expanded to IL1 and IL2. Often simple terms are not enough for the detection of concepts such as adverse events or the relationship between a gene and an indication. We can group the "cleaned" terms from above into complex queries like "find me documents where a human gene appears near a biological verb that is near an indication", where biological verbs are things like induces, regulates, suppresses, etc. This approach has dramatically improved the ability to find documents based on complex relationships and concepts, for example identifying adverse events. Several groups are using NER to improve data hygiene across the data life cycle. During data entry, NER tools can be used to control data entry by limiting the terms scientists use. Legacy data can be cleaned by replacing synonyms with preferred terms or identifying concepts. An excellent example of NER being used for data mining occurred when a group applied NER to social media and rapidly discovered that people don't use proper medical terminology, so they might enter "my head hurt, and I felt woozy" and not headache and nausea. We worked with this group to develop a dictionary and pattern detector that was sensitive to common terms and expressions and related them to medical terms. This talk will cover the basics of NER and its application to several common issues facing data scientists in drug discovery, where managing and using the disparate forms of data (literature, CROs, HTS, Genomics…) is critical to success.
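The following toy Python sketch illustrates the dictionary-plus-rules flavor of NER described above; the dictionary entries and the single disambiguation rule are invented for illustration and do not reflect the team's actual system.

import re

# Toy curated dictionary: lower-cased synonym -> (preferred term, entity type)
DICTIONARY = {
    "abilify": ("aripiprazole", "DRUG"),
    "aripiprazole": ("aripiprazole", "DRUG"),
    "p2y2": ("P2Y2", "GENE"),
    "hedgehog": ("hedgehog", "GENE"),  # ambiguous: gene or animal
}

def tag(sentence):
    tokens = re.findall(r"[A-Za-z0-9]+", sentence)
    lowered = [t.lower() for t in tokens]
    entities = []
    for i, tok in enumerate(lowered):
        if tok in DICTIONARY:
            # Disambiguation rule: 'hedgehog' counts as a gene only near 'signaling'
            if tok == "hedgehog" and "signaling" not in lowered:
                continue
            preferred, etype = DICTIONARY[tok]
            entities.append((tokens[i], preferred, etype))
    return entities

print(tag("Hedgehog signaling is suppressed by Abilify"))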
Integration of biotherapeutics in combination screening with small molecule libraries
Open to view video.
Open to view video. Biotherapeutics account for 50% (11/22) of the new molecular entities that were approved by the FDA in 2016, with an additional 15 biotherapeutic NMEs approved in 2017, including nine antibodies, one antibody-drug conjugate, and two enzymes. This highlights a trend in pharmaceutical companies to invest significantly in modalities alternative to small molecules. With over 200 approved recombinant protein products on the market and over 1,500 more in clinical trials, there exists a clinical need to identify effective combinations of small molecules and biotherapeutics. We have developed a high throughput screen of annotated compound libraries in combination with biologics to identify tractable synergistic combinations and elucidate underlying biology. These initial studies have led to small molecule / biologic combinations that have shown synergy and promise in animal models. Optimization of experimental procedures will be described and case studies will be discussed.
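One widely used way to quantify synergy in such screens is excess over Bliss independence; the abstract does not specify the scoring method used, so the following Python function is a generic illustration.

def bliss_excess(f_a, f_b, f_ab):
    # Observed combined inhibition minus the inhibition expected if the
    # small molecule (A) and the biologic (B) acted independently.
    # Inputs are fractional inhibitions in [0, 1]; positive excess suggests synergy.
    expected = f_a + f_b - f_a * f_b
    return f_ab - expected

print(bliss_excess(0.30, 0.25, 0.70))  # 0.70 - 0.475 = 0.225 -> synergistic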
Perspectives on the design, deployment and utilisation of modular, mobile screening automation
Open to view video.
Open to view video. At SLAS2017 we presented on a partnership with HighRes Biosolutions to develop a new concept for modular, reconfigurable automation to meet our varied small molecule screening demand. This successful project established the Co-Lab line of products. For SLAS2019 we will review the deployment, validation and impact to AstraZeneca's internal and external facing screening activities. This will include, but is not limited to, our experience from the use of collaborative robotics, Co-Lab system optimisation and HTS performance across flex cart and core platforms. We will discuss the benefits, lessons and pitfalls of undertaking a complete laboratory re-fit to deliver scientist-friendly HTS automation. In addition, we'll outline the enhancements to date with peripheral equipment partners, system scheduling and automated quality control to further assay robustness. Looking further afield, we'll present our ongoing activities to integrate screening with our design, make, test and analyse (DMTA) chemistry automation, reviewing our prototype, its functionality and development to date, along with our future concepts and aspirations for an integrated "lab of the future".
Hierarchical organization of the human cell from a cancer coessentiality network
Open to view video.
Open to view video. Genetic interactions govern the translation of genotype to phenotype at every level, from the function of subcellular molecular machines to the emergence of complex organismal traits. Systematic surveys of genetic interactions in yeast showed that genes that operate in the same biological process have highly correlated genetic interaction profiles across a diverse panel of query strains, and this observation has been exploited to infer gene function in model organisms. Systematic surveys of digenic perturbations in human cells are also highly informative, but are not scalable, even with CRISPR-mediated methods. Given this difficulty, we developed an indirect method of deriving functional interactions. We hypothesized that genes having correlated knockout fitness profiles across diverse, non-isogenic cell lines are analogous to genes having correlated genetic interaction profiles across isogenic query strains, and would similarly imply shared biological function. We assembled genes with correlated fitness profiles across 400 CRISPR knockout screens in cancer cell lines into a "coessentiality network," with up to 500-fold enrichment for co-functional gene pairs. Functional modules in the network are connected in a layered web that recapitulates the hierarchical organization of the cell. Notable examples include the synthesis of oligosaccharide chains (two distinct modules), connected to the N-linked glycosylation complex (a third module), which glycosylates the EGFR and IGF1R receptors, which are further linked to downstream signaling modules. A second subnetwork delineates amino acid regulation of the mTOR complex via signaling through lysosomal transport, the Ragulator complex, and the GATOR2 complex, while classical TSC1/2 regulation is connected through a separate branch. The network contains high-confidence connections between over 3,000 human genes and provides a powerful platform for inference of gene function and information flow in the cell.
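Conceptually, the construction reduces to correlating fitness profiles across screens and keeping strong correlations as edges; a minimal NumPy sketch under that assumption (random stand-in data, illustrative threshold) follows.

import numpy as np

# Hypothetical genes x screens matrix of CRISPR knockout fitness scores
rng = np.random.default_rng(1)
fitness = rng.normal(size=(200, 400))  # 200 genes across 400 cell-line screens

# Pairwise Pearson correlation of fitness profiles across cell lines
corr = np.corrcoef(fitness)

# Keep strongly correlated pairs as network edges (threshold is illustrative;
# random data will yield few or no edges)
ii, jj = np.where(np.triu(corr, k=1) > 0.6)
edges = list(zip(ii, jj))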
Automated System Integration – Some Useful Paradigms
Open to view video.
Open to view video. High content and high throughput automation continues to evolve to address the changing requirements of the scientific community. It is reasonable to assume that this evolutionary process will continue, perhaps even accelerate, especially if recent changes in the drug discovery process demonstrate immediate success. The anticipation of application changes is already part of many organizations' planning processes, and the flexibility of new platforms and modules is assuming a high priority. From the perspective of a supplier, we've seen a virtually complete transition from clients having in-house automation expertise to instrument manufacturers and integrators providing upgraded capabilities, applications knowledge and a greater breadth of support services. Flexibility to meet changing discovery processes will result in an expansion of these services. We have developed a set of working paradigms for the integration of multivendor automation that shortens the time from inception to integration and operation. Each installation for us is a case study allowing us to find ways to improve, and these commonalities will apply to many automation projects. Early planning is critical to success. It focuses on identifying the optimal functionality of the hardware for the application: will standard software suffice or will customization be required, how do we design and deliver the user interface, and is the required capability already in the API and drivers? Further along in the process, there is a review of process optimization within the robotic environment, primarily focusing on space utilization and timing/throughput issues. Once the early planning is complete, the process we use studies efficiency of operation of the entire robotic system, ease of operation, access for maintenance, communication of error detection/recovery, the supply of fluids/reagents, and safety. The user interface design employed helps to lead the operator through typical day-to-day use with helpful graphics and input prompts. Detailed application control settings are accessed from a click-through screen, easily reached as needed using the module interface. The same interface for automated operation is typically run through the API from the scheduler screens using that software's design scheme; the detailed screens are accessed from the scheduler screen by screen mirroring. This design eliminates reprogramming of the API for changes to the operational software settings and customized scheduler software. Uniquely, our automation modules are routinely employed to clean labware for repeated reuse. Labware costs, plus the overhead for ordering, shipping, inventory management, unpackaging and ultimately disposal, can be reduced for most applications without any effect on results. More companies can be expected to offer products and services that address this market. Minimizing waste not only makes financial sense, but it also reduces the burden of drug discovery waste on the environment.
Application of Multi-Organ-Chips to Enhance Safety and Efficacy Assessment in Drug Discovery
Open to view video.
Open to view video. Microphysiological systems have proven to be a powerful tool for recreating human tissue- and organ-like functions at the research level, providing the basis for the establishment of qualified preclinical assays with improved predictive power. However, industrial adoption of microphysiological systems and the respective assays is progressing slowly due to their complexity. In the first part of the presentation, examples of established single-organ, two-organ and four-organ chip solutions are highlighted. The underlying universal microfluidic Multi-Organ-Chip (MOC) platform, the size of a microscope slide, integrating an on-chip micro-pump and capable of interconnecting different organ equivalents, will be presented. Issues in ensuring long-term performance and industrial acceptance of microphysiological systems, such as design criteria, tissue supply and on-chip tissue homeostasis, will be discussed. The second part of the presentation focuses on the establishment of automated MOC-based assays as a robust tool for safety and efficacy testing of drug candidates. These automated assays will allow for increased throughput and higher inter-laboratory reproducibility, thus eventually enabling broad industrial implementation. Finally, a roadmap into the future is outlined to further bring these assays into regulatory-accepted drug testing on a global scale.
Yoga for uHTS: How to be both strong and flexible to adapt to stretching requirements
Open to view video.
Open to view video. High throughput and flexibility are two of the most important attributes of screening automation, and they remain challenging to combine. Highly integrated systems are well known for their high throughput, efficiency and robustness, while modular systems are prized for their flexibility and versatility. Amgen's automation platform was designed to include the best of both worlds: a highly integrated system containing a versatile and future-proof docking station as well as a separate modular MuHTS system adapting to different throughput requests. The docking station enables the flexibility needed to adapt different readers for various assay technologies. It also provides the option to run parallel protocols or to double the throughput of a protocol by docking a second copy of the same reader. The modular MuHTS system, developed in-house, accommodates smaller throughput or lower priority requests. This platform consists of a stationary hotel, a barcode scanner and a liquid dispenser with two flexible docking locations to include different offline readers and compound transferring technologies. This setup represents a meeting point between efficiency and flexibility, covering needs ranging from a few plates for titration to hundreds of plates for a primary screen.
Enabling Endogenous Protein Detection by Acoustic Droplet Ejection: 3D-Printed Labware and Beyond
Open to view video.
Open to view video. While antibody-based assays such as sandwich ELISA and AlphaLISA can detect unmodified proteins for high-throughput screening (HTS), they have constraints due to the quality and availability of two primary antibodies, excessive costs, and technical considerations. An alternative is the reverse phase protein array (RPPA) in which nanoliter spots of cell lysate are arranged in high density onto a protein binding substrate, typically nitrocellulose-coated glass slides using tip-based arrayers, to enable endogenous protein quantification by ubiquitous immunochemical protocols. Although RPPA only requires a single antibody, instrumentation compatibility and costs associated with coated slides can be prohibitive for large-scale HTS. To address this, we first designed a cell-based 1536-well HTS assay utilizing acoustic droplet ejection to sample nanoliter volumes of media and quantify a secreted bioluminescent reporter. We then adapted this technique for RPPA as an orthogonal antibody-based methodology by constructing a 3D-printed, low-cost nitrocellulose membrane plate alternative to coated slides. In parallel seven-concentration LOPAC1280 qHTS experiments, consistent performance between assays orthogonally identified secretory modulators of the reporter. This acoustically arrayed immunoassay, referred to as acoustic RPPA, enabled quantification of native, endogenously secreted protein with picogram sensitivity and multiplexing capabilities with cytotoxicity and imaging assays. The acoustic RPPA methodology can therefore be utilized as a powerful standalone immunochemical technique or component of a robust drug discovery platform that generates extensive biological profiles from individual wells using physiologically relevant cellular systems, as demonstrated with iPSC-derived hepatocytes. Next generation acoustic RPPA plates, designed to be more automation and reader friendly, have been manufactured and are currently under evaluation alongside custom machined HTS labware.
Ultra-large library docking for ligand discovery
Open to view video.
Open to view video. Despite intense interest in expanding chemical space, libraries of hundreds-of-millions to billions of diverse molecules have remained inaccessible. In principle, structure-based docking can address such large libraries, but to do so the molecules must be readily obtained, the computation must be tractable, and new potential ligands must outscore the inevitable decoys. Here, we investigate docking screens of over 170 million make-on-demand, lead-like compounds. The molecules derive largely from 109 well-characterized two-component reactions with a >85% synthesis success rate. The resulting library is diverse, representing over 10.7M scaffolds not found in “in-stock” commercial collections. In benchmarking well-behaved targets, the enrichment of ligands versus decoys improved with library size, suggesting that more ligand-like molecules exist to be found as the library grows. To test this prospectively, 99 and 138 million molecules were docked against the soluble enzyme AmpC β-lactamase and the membrane-bound D4 dopamine receptor respectively. From among the top-ranking docking hits for AmpC, 44 diverse molecules were synthesized and tested. Five inhibited, including an unprecedented 1.3 uM phenolate that is the most potent AmpC inhibitor found from any screen. Within-library optimization revealed a 110 nM analog, the most potent non-covalent AmpC inhibitor known. Crystal structures of these inhibitors confirmed their fidelity to the docking prediction. For the dopamine receptor, 549 molecules were synthesized and tested from among top docking ranks, and also from intermediate and low ranks. On testing, hit rates fell monotonically with score, ranging from 24% for the highest ranking, declining through intermediate scores, and dropping to a 0% hit rate for the lower ranks. Integrating across the resulting hit-rate curve predicts 481,000 D4 active molecules in 72,600 scaffolds. Of the 81 new D4 actives found here, 30 had Ki values < 1 uM. The most potent was a 180 pM Gi-biased, selective, full agonist, among the most potent sub-type selective agonists known for this receptor. Prospects for making over 1 billion lead-like molecules community-accessible, and for their prioritization in structure-based screens, will be considered.
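The extrapolation step at the end can be illustrated with a small calculation: integrate per-score-bin hit rates against the number of library molecules in each bin. The numbers below are invented placeholders, not the study's data.

import numpy as np

# Docking-score bins from best to worst: measured hit rate among the
# synthesized molecules in each bin, and library molecules per bin
tested_hit_rate = np.array([0.24, 0.15, 0.08, 0.03, 0.0])
library_counts = np.array([1e5, 5e5, 2e6, 1e7, 1.5e8])

# Predicted actives = sum over bins of (hit rate x molecules in bin)
estimated_actives = np.sum(tested_hit_rate * library_counts)
print(f"predicted actives in library: {estimated_actives:,.0f}")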
'Self-Driving Experiments': How IoT, AI, and Robotics Can Enable the Future of Science
Open to view video.
Open to view video. Many industries have rapidly adopted technologies integrating connected devices (IoT or the "Internet of Things"), AI-based analysis, and robotics to form closed-loop systems for improving quality, increasing throughput, and reducing costs. In the life sciences, such closed-loop integrations have not been as readily adopted due to the inherently heterogeneous nature of biological research: robotics need to handle sensitive biological materials, gases, and fluids; generations of knowledge are still archived in text-based records; there are countless different manufacturers (each having either no cloud-connected solution or a proprietary one); and finally, the effects of myriad variables that can affect an experimental run (temperature, humidity, storage conditions, etc.) must be deciphered. In this talk, we present a novel system that integrates and provides a unified interface for this heterogeneous world. We demonstrate how seemingly unrelated information and variables can combine to confound experimental results and how a globally-integrated informatics platform can identify and highlight these issues before they become prohibitively complex. We further delve into examples of closing the loop and feeding these insights into automated/robotic systems to auto-correct an experiment in real time to achieve a "self-driving experiment" akin to a self-driving car.
SLAS2019 Innovation Award Finalist: AID.One: An Artificial Intelligence-Driven Digital Health Application for Clinical Optimization of Combination Therapy
Open to view video.
Open to view video. Conventional combination therapy is administered at high drug dosages, and these dosages are commonly fixed. These combinations are designed to achieve drug synergy and enhanced treatment efficacy. However, drug synergy undergoes constant evolution when interacting with patient physiology, and is dependent on time, drug dosage, and patient heterogeneity. As a result, patient response to treatment on a population-wide scale can vary substantially, and many patients will not respond at all to combination therapies with unsuitable drug dosages. The inability to dynamically reconcile the time- and dose-driven basis of synergy is a major driver of treatment failure and of the high costs of drug development and failed trials. Therefore, fixed-dose administration presents a substantial challenge to the global optimization of combination therapy efficacy and safety. To overcome this challenge, we have developed CURATE.AI, a powerful artificial intelligence platform that dynamically modulates drug dosages in time and is capable of optimizing clinical combination therapy for the entire duration of care, an unprecedented advance. CURATE.AI has been validated through multiple in-human studies ranging from solid cancer treatment to post-transplant immunosuppression and infectious diseases. In these studies, CURATE.AI implementation has markedly enhanced patient treatment outcomes compared to the standard of care. To scale the implementation of CURATE.AI, we have developed a digital health application, AID.One (AI Dosing for N-of-1 Medicine), in collaboration with a community of oncologists, surgeons, engineers, and additional researchers at the interface of medicine and technology. AID.One is capable of calibrating a patient's response to combination therapy to create individualized CURATE.AI profiles. These profiles correlate the patient's drug dosages with quantifiable measures of efficacy and safety into a 3-dimensional map that can be used to immediately pinpoint optimal drug doses for a specific patient at a specific time point. As the patient response to treatment changes over time, the CURATE.AI profile also changes so that drug modulation can occur to ensure that optimal efficacy and safety are constantly mediated by the combination therapy regimen. Importantly, AID.One and CURATE.AI use only the patient's own data, and do not rely on population-based algorithms to estimate treatment parameters. This talk will highlight the process of building AID.One, as well as prospective benchmarking results in multiple oncology and transplant immunosuppression studies, and clinical development progress.
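As a loose illustration of how a dose-response map can drive dose selection, the sketch below fits a quadratic surface to hypothetical (dose A, dose B, efficacy) observations and picks the dose pair at its peak; this is an assumption-laden toy, not the actual CURATE.AI formulation.

import numpy as np

# Hypothetical observations for one patient: (dose_a, dose_b) -> efficacy
doses = np.array([[1, 1], [1, 2], [2, 1], [2, 2], [3, 1], [1, 3], [3, 3]], dtype=float)
eff = np.array([0.20, 0.35, 0.30, 0.55, 0.40, 0.45, 0.50])

# Quadratic design matrix: 1, a, b, a^2, b^2, a*b
a, b = doses[:, 0], doses[:, 1]
X = np.column_stack([np.ones_like(a), a, b, a**2, b**2, a * b])
coef, *_ = np.linalg.lstsq(X, eff, rcond=None)

# Evaluate the fitted surface on a dose grid and report its peak
grid = np.array([[x, y] for x in np.linspace(0.5, 3.5, 31) for y in np.linspace(0.5, 3.5, 31)])
ga, gb = grid[:, 0], grid[:, 1]
G = np.column_stack([np.ones_like(ga), ga, gb, ga**2, gb**2, ga * gb])
print("suggested dose pair:", grid[np.argmax(G @ coef)])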
High-Content Imaging of Colorectal Cancer Patient-derived Organoids to study Tumor Microenvironment
Open to view video.
Open to view video. Complex tumor microenvironments lead to intra- and inter-patient heterogeneity across multiple cancer types, which prevents consistent clinical treatment outcomes. Micro-physiological and cellular changes significantly impact tumor cell survival and disease progression. Therefore, it is critical to understand how tumor cells behave and respond to drugs under different environmental conditions. We have established 3D organoid cultures from colon cancer patient tissues. Organoids were treated with known clinical cancer drugs to examine how they respond to various treatments. To keep organoid structures intact, we performed whole-mount immunostaining with tissue clearing techniques. Ki-67 (cell proliferation), cleaved caspase-3 (apoptosis), E-cadherin (cell junctions) and F-actin (cytoskeleton) changes were examined during drug treatments. Cleaved caspase-3-positive cell numbers were significantly increased after drug treatments, but the degree of cell death differed depending on drug type and dose. F-actin was enriched at the border between the outer surface of cells and the inner core in control organoids, but this uniform pattern of actin cytoskeleton was lost in Staurosporine- and Irinotecan-treated organoids. To complement our fixed imaging workflow, we also performed live cell imaging on patient-derived organoids labeled using either lentiviral H2B transduction (stable) or vital dyes (transient). Dynamic imaging of H2B-GFP organoids illustrated cell division events, which were used to quantify cell proliferation rates in individual organoids. Cell proliferation was significantly reduced with Oxaliplatin treatment, suggesting that this drug affects organoid growth by inhibiting cell division. DRAQ7 (dead cell) staining increased rapidly in entire organoids with Staurosporine, while Irinotecan caused cell death at the outer surface of the organoids. Interestingly, we observed organoid contraction and shrinkage with Irinotecan and 5-Fluorouracil (5-FU) treatments, highlighting the advantages of live cell imaging with regards to depicting dynamic drug- and concentration-dependent responses. Given that tumor-stromal cell interactions are key microenvironmental factors influencing drug response, we also isolated cancer-associated fibroblasts (CAFs) from patient tissues to co-culture with patient-matched tumor organoids. H2B-GFP-labeled organoid cultures were seeded on top of the fibroblasts to make physical contact between the two different cell types. By using high-resolution 3D imaging, we examined the impact of CAFs on the drug response of patient-derived organoids. 3D/4D organoid imaging is a powerful method to study the tumor microenvironment and screen preclinical drug compounds using patient tissues. We believe that a high-throughput automated imaging system coupled with a bio-repository of patient-derived organoids will expedite the identification of effective anti-cancer treatment options for individual patients, as well as our understanding of biological questions regarding the complex tumor microenvironment.
Integrative analysis of complex host-microbial interactions in the gut
Open to view video.
Open to view video. Tissue complexity emerges from interactions of components across various biological systems, such as exogenous factors from the microbiota and different host cellular entities. These interactions can be characterized in multiple domains including genetic, spatial, and biochemical. Here we will discuss current tools that enable spatial and transcriptional profiling of individual cells, multiplex microscopy and single-cell RNA-seq, respectively, and their integration to dissect tissue-level complexity. We present several tools to interrogate these data types and demonstrate their applications in characterizing the function of intestinal tuft cells, the sentinel cell type linking the microbiome and the host. These emerging techniques will play a key role in understanding the role of complexity, for example, microbiome-host interactions, in dictating tissue function in homeostasis and dysfunction in human diseases.
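For readers who want a concrete entry point, a typical open-source route to the single-cell RNA-seq half of such an analysis uses scanpy; the input file and marker gene list below are assumptions for illustration, not the speakers' pipeline.

import scanpy as sc

adata = sc.read_h5ad("gut_cells.h5ad")  # hypothetical input file

# Standard preprocessing: depth normalization, log transform, variable genes
sc.pp.normalize_total(adata, target_sum=1e4)
sc.pp.log1p(adata)
sc.pp.highly_variable_genes(adata, n_top_genes=2000)

# Cluster cells and embed for visualization
sc.pp.pca(adata)
sc.pp.neighbors(adata)
sc.tl.leiden(adata)
sc.tl.umap(adata)

# Score a marker set to locate candidate tuft cells (marker list illustrative)
sc.tl.score_genes(adata, gene_list=["POU2F3", "TRPM5", "DCLK1"], score_name="tuft_score")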
Routine lab assembly line 2.0 – 3D-Inspection and manipulation of biological samples in culture dishes with the PetriJet31X platform
Open to view video.
Open to view video. Digitalization, automation and miniaturization are currently changing the way we live and work. They also affect the daily work in laboratories, creating what we perceive as the Lab 4.0 or the Lab of the Future. The disruptive development of new technologies such as open source automation technology, the Internet of Things (IoT) and 3D-printing offers endless possibilities for the in-house engineering of new laboratory devices which are compact, adaptable and smart. In conjunction with automated 3D-image analysis, powerful instruments emerge to create and resolve research data. At the SmartLab systems department of the Technische Universitaet Dresden, Germany, approaches for the laboratory of the future have been developed and implemented. This includes the PetriJet31X platform technology, which was developed to automate all processes associated with culture dishes in environments such as routine laboratories for microbial screening or blood sample testing, as well as culture development for the next generation of antibiotics. Technically, the device is an x-y robot consisting of two linear axles, enabled to transport all kinds of culture dishes from A to B through a 3D-printed gripper system which can also remove the lid of the culture dish. The core part of the programming is a self-learning control software that does not need any teaching – the most time-consuming part of setting up a typical robot. With the presented solution, an experiment conducted on samples is planned only once and executed for all culture dishes in the machine with the right processing station installed – e.g. 3D sample imaging and analysis. It is no longer necessary to specify locations for culture dish piles, treated dishes are allocated dynamically, and user interactions are directed by LED lighting. The system can process more than 1,200 culture dishes in an 8-hour shift and is equipped with a storage unit for these culture dishes. The system includes a 3D-imaging station able to acquire images of the culture dishes and their contained biological samples from 60 different positions. With these images, photogrammetric algorithms form a 3D model of the culture dish contents and non-invasively derive properties such as biomass, volume and shape of the biological sample. With this system, the number of samples needed for growth screenings is brought down, as no samples need to be harvested during the cultivation. The PetriJet31X platform now operates at the Chair of Microbiology at the TU Dresden for the screening of new antimicrobial substances and the next generation of antibiotics. The system enables biologists to screen agent combinations faster and use the gained image data to feed new deep-learning algorithms.
From “at some point in time” towards “in real time”: Emerging opportunities in micro- and nano-fluidic protein characterization
Open to view video.
Open to view video. Developments in DNA sequencing have transformed the field of life sciences over the past two decades. This transformation has in no small part been due to products incorporating microfluidic technologies. There is a growing recognition, however, that the next transformational changes in life science will be catalyzed by products that focus not just on genotypic data detailing how biological systems will develop over years, decades and lifetimes; but also on phenotypic data detailing how these same systems change over the minutes, hours and days of everyday life. Micro- and nano-fluidic technologies for characterising proteins, their conformation and their interactions have a major role to play in this next transformation. This presentation will discuss recent developments, emerging opportunities and a novel microfluidic platform for protein characterization. These topics will be discussed with respect to the specific technical challenges of: conformational analysis (including aggregation and misfolding); protein-protein interaction analysis (both at individual-interaction and proteomic levels); single-molecule protein detection; and multi-biomarker detection (both hypothesis-driven and hypothesis-free). Key technological and commercial challenges relating to the emergence of these technologies at lab, clinic and consumer scale will be discussed in the context of historical precedents in the life sciences industry and current trends in basic science, diagnostics and consumer care.
3D Paper-Based Tumor Models to Characterize the Relationship Between the Tissue Microenvironment and Multidrug Resistant Phenotypes
Open to view video.
Open to view video. Multidrug resistance arises when a small population of cells in a tumor undergoes changes in cellular phenotype that allow them to evade a particular treatment. We are developing a novel 3D paper-based culture screening platform that can stratify cells within tumor-like structures based on either environmental or phenotypic differences. Unlike other 3D culture platforms, we are able to rapidly separate (within seconds) discrete cell populations based on their location. Our platform involves stacking cell- or extracellular matrix (ECM)-laden paper scaffolds together to create a 3D tissue-like structure, which models the tumor microenvironment with defined gradients of oxygen, waste, and nutrients. The stacks are disassembled by peeling the paper scaffolds apart to isolate each discrete sub-population for downstream analyses of cell viability, protein and transcript levels, or drug metabolism. Our system is also highly customizable by: i) altering the size, number, or shape of available seeding areas by wax printing seeding boundaries onto each scaffold; ii) manipulating gradients by changing seeded cell density, stack thickness, or limiting exchange with culture medium; or iii) enabling either monoculture or co-culture setups, which allows the user to simulate a wide variety of in vivo conditions based on the research of interest. Here, we utilize this novel platform to investigate the phenotypic characteristics of chemotherapeutic-resistant sub-populations within each culture. With a tumor cross-section model, we utilize flow cytometry to quantify the differences between cells at the drug-resistant necrotic core of a tumor and cells at the proliferative outer shell. Specifically, we quantify the phenotypes that lead to passive resistance (e.g., decreased metabolic activity) or active resistance (expression of ATP-consuming drug efflux pumps). With an invasion assay setup, we investigate the relationship between drug resistance and the epithelial-to-mesenchymal transition (EMT) by isolating cells based on their level of invasiveness; a well-documented, observable phenotypic change that results in mesenchymal cells being more invasive than epithelial cells. The isolated sub-populations are analyzed for changes in i) drug response with dose-response curves, and ii) epithelial- and mesenchymal-specific protein expression levels with enzyme-linked immunosorbent assays (ELISAs) as a means to connect drug resistance to epithelial or mesenchymal phenotypes. This platform is an inexpensive, customizable screening tool, which can be utilized to study specific subsets of cellular populations for advancements in both targeted chemotherapeutic development and understanding of the heterogeneity of tumor populations.
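The oxygen gradient that defines the necrotic core in such stacks can be approximated with a simple one-dimensional diffusion-consumption model; the sketch below uses the analytical steady-state solution for one-sided supply with a sealed bottom, with all parameter values illustrative.

import numpy as np

n = 6          # number of stacked scaffolds
h = 200e-6     # scaffold thickness, m
D = 2e-9       # effective oxygen diffusivity, m^2/s
k = 1e-3       # zeroth-order volumetric consumption, mol/(m^3 s)
c_top = 0.2    # oxygen concentration at the medium interface, mol/m^3

# Steady state of D*c'' = k with c(0) = c_top and zero flux at the bottom:
# c(x) = c_top - (k/D) * (L*x - x^2/2), clipped at zero for the anoxic core
L = n * h
x = np.linspace(0, L, 100)
c = np.clip(c_top - (k / D) * (L * x - x**2 / 2), 0, None)
print("oxygen at the bottom scaffold:", c[-1])  # ~0 here: an anoxic core forms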
Compound Interest: Accessing the LifeArc compound bank
Open to view video.
Open to view video. LifeArc, formerly MRC Technology, is a medical research charity with over 25 years' experience in helping scientists turn their research into potential treatments. Our Centre for Therapeutics Discovery (CTD) works on drug discovery projects in collaboration with early-stage academic research for medical conditions where there is a clear need for new treatments. With over 80 scientists in CTD, we support early stage target validation, hit identification and lead optimisation for both small molecule and antibody projects. One area where we have focused significant resource is enabling academic access to our chemistry resources and chemical collections. For many academic researchers, access to compounds to support target validation, pathway elucidation and lead discovery can often be a limiting factor in the progress of their research; whether looking for tool compounds with well annotated and understood pharmacology or collections of compounds for varied screening approaches, the cost and logistics of obtaining such compounds can be significant. In addition, the choice of which tool or set of compounds to use can require an understanding of drug discovery and medicinal chemistry that may not be immediately available to a researcher locally. LifeArc has established a simple, minimal-cost model to enable academic access to our drug discovery expertise and, if relevant, to obtain whichever sets of compounds from our collections can potentially further their research. This presentation will highlight the development of the full range of the LifeArc compound collection, with emphasis on the successful application of compound sets to academic drug discovery programmes. Established and continually refined through collaboration between LifeArc's computational and medicinal chemists, with many years of drug discovery experience between the team members, the LifeArc collection is not just a single deck of compounds. Rather, there are sets of compounds to support multiple strategies for target validation or lead discovery, scaled to support all levels of screening capability and capacity, from single compound analysis through to multiple robotic systems. In this talk we will cover diversity strategies, from small-number selections to 100,000+ compound approaches, follow-up mechanisms for various screening strategies (e.g. fragments and annotated collections), and LifeArc strategies outside of the 'drug/hit-like' space necessary for prosecuting targets such as protein-protein interactions or Gram-negative antibacterial target research.
Achieving Reproducibility in Biological Experimentation with an IoT-Enabled Lab of the Future
Open to view video.
Open to view video. Reproducing the results of published experiments remains a great challenge for the life sciences. Fewer than half of all preclinical findings can be replicated, according to a 2015 meta-analysis of studies from 2011-2014. Many variables contribute to this poor rate of reproducibility, including mishandled reagents and reference materials, poorly defined laboratory protocols, and the inability to recreate and compare experiments. The consequences are significant. In drug discovery, failure to reproduce experimental outcomes stands in the way of translating discoveries to life-saving therapeutics and revenue-generating products. In economic terms, at least $28 billion in the U.S. alone is spent annually on preclinical research that is not reproducible. Even a small improvement to the rate of reproducibility could make a large impact on speeding the pace and reducing the costs of drug development. Most recommendations to address the reproducibility crisis have focused on developing and adopting reporting guidelines, standards, and best practices for biological research. Cloud computing combined with the Internet of Things (IoT) offers an opportunity to leapfrog the standards-setting debate and create more precise and reproducible research without adding human capital or increasing process time. An automated, programmatic laboratory reduces error by relying on hands-free experiments that follow coded research protocols. Tracked by an array of sensors and pushed to the cloud, every documented step, measurement, and detail is automatically collected and stored for easy access from anywhere through a web interface. Underpinned by a connected, digital backbone, biological experimentation looks more like an information technology driven by data, computation, and high-throughput robotics, ultimately leading to advances in drug discovery and synthetic biology. At Transcriptic, we have developed the Transcriptic Common Lab Environment to address the current state of large-scale experimentation and to unlock a future where software and technology innovation drive high-throughput scientific research. This laboratory of the future is available to users via the internet, providing a cloud-based ecosystem where collaborators can traceably fine-tune, debug, and optimize iterative biological experiments to achieve reproducible results. As experiments become more complex, datasets larger, and phenotypes more nuanced, transformative technologies like a programmatic robotic cloud lab will be necessary to ensure that these high-value experiments are reproducible, that the results can be trusted, and that the protocols producing these experiments can be compared. This advance builds on a foundation of IoT-enabled innovations that have already contributed to a smarter, more efficient, and safer world.
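The phrase "coded research protocols" can be made concrete with a schematic machine-readable protocol in the spirit of protocol-as-code systems such as Autoprotocol; the Python structure below is an illustrative sketch, not the actual Autoprotocol specification.

# Schematic protocol-as-code: every step is data, so it can be executed by
# robotics, logged by sensors, versioned, and compared across runs
protocol = {
    "refs": {"plate_1": {"type": "96-flat", "storage": "cold_4"}},
    "instructions": [
        {"op": "dispense", "to": "plate_1/A1", "reagent": "growth_medium", "volume": "100:microliter"},
        {"op": "incubate", "object": "plate_1", "where": "warm_37", "duration": "12:hour"},
        {"op": "absorbance", "object": "plate_1", "wells": ["A1"], "wavelength": "600:nanometer"},
    ],
}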
Activity of CRBN and VHL Derived PROTACs Across a Cancer Cell Line Panel
Open to view video.
Open to view video. The success of small molecule therapeutics such as fulvestrant and lenalidomide, which promote ubiquitin-mediated degradation of cancer-related targets, has fueled an intense effort to mimic their activities with larger bispecific molecules called PROTACs (proteolysis targeting chimeras). PROTACs are engineered from two separate ligands joined by a flexible linker; one half engages the E3 ligase while the other half binds the disease-relevant target protein. Proximity of a target to an E3 ligase results in ubiquitination and ultimately degradation of the disease-causing protein. Recent examples of BRD4-degrading PROTACs utilizing the ubiquitin ligases CRBN or VHL are quite potent and possess cellular activity that exceeds that of the parent ligands. Thus, many new PROTACs targeting different proteins for degradation are being generated with the same or similar ligase binders, and the search for new PROTAC ligases is underway in both academia and industry. Hypothetically, the PROTAC strategy can target nearly any intracellular protein for degradation with many different ubiquitin ligases, if small molecule ligands for each half of the chimera are discovered and if both the target and the hijacked ligase are present within the same compartment of a relevant cell type. A difficult and necessary task is to identify the best PROTAC ligases from the hundreds of ligases encoded by the human genome, but little is known as to what makes a ligase particularly viable in the PROTAC strategy or whether the PROTAC-mediated activity of a ligase can be predicted by its RNA or protein expression in normal and diseased tissues. To better understand both the limits and the drivers of ligase activity in cancer cells, we screened a panel of 50 cancer cell lines with BRD4-degrading PROTACs that engage either CRBN or VHL. By comparing the activity of the PROTACs to the expression and mutation of genes that make up the E3 ligase complexes, we drew guiding conclusions about the selection of future platform ligases. To rapidly validate new PROTACs made from novel target and ligase pairs, we developed a click chemistry platform that allows efficient construction of novel PROTACs in small quantities from a library of ligase binders.
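A simple version of the expression-versus-activity comparison is a rank correlation across the cell-line panel; the table below is hypothetical and only illustrates the shape of the analysis, not the study's data.

import pandas as pd

# Hypothetical panel: PROTAC potency (lower IC50 = more active) and ligase expression
df = pd.DataFrame({
    "line": ["A", "B", "C", "D"],
    "brd4_protac_ic50_nM": [12, 450, 30, 900],
    "CRBN_tpm": [85, 5, 60, 2],
})

# A negative rank correlation would suggest that higher ligase expression
# tracks with greater degrader potency across the panel
print(df["brd4_protac_ic50_nM"].corr(df["CRBN_tpm"], method="spearman"))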
Advantages of applying a new biophysical approach to HTS for lead ID, lead validation and optimization
Open to view video.
Open to view video. In a known trend in HTS-driven drug discovery, biophysical methods are becoming an integral part of the workflow as targets have become more diverse, the demand for fragment-based screening projects increases and the need to identify ligands with different mechanisms of action continues to rise. Membrane proteins and targets previously considered undruggable, or that don't exhibit enzymatic activity, have particularly benefited from this trend, as these methods can overcome obstacles often faced with challenging targets, are successful in lead identification and remain a crucial tool in lead validation and optimization. Despite their many advantages, biophysical methods suffer from limitations, particularly in the assay development, throughput and automation needed for lead ID and validation. Temperature-Related Intensity Change (TRIC) technology addresses these shortcomings. Recent developments show that measuring TRIC can make the characterization of binding events for drug discovery more robust, with improved S/N ratios; more sensitive to binding events, from fragments to antibodies; and more efficient, with faster assay development and reduced measurement times, effectively increasing throughput. Here we present the latest findings in TRIC-based detection and biophysical characterization of binding events and the direct application of TRIC to HTS of diverse protein targets. Reference: Gupta, A.J., Duhr, S., Baaske, P. MicroScale Thermophoresis. Encyclopedia of Biophysics. doi:10.1007/978-3-642-35943-9_10063-1
Driven assembly of stem cell-derived human tissues for disease modeling and discovery
Open to view video.
Open to view video. The need for human, organotypic culture models coupled with the requirements of contemporary drug discovery and toxin screening (i.e. reproducibility, high throughput, transferability of data, clear mechanisms of action) frame an opportunity for a paradigm shift. The next generation of high throughput cell-based assay formats will require a broadly applicable set of tools for human tissue assembly and analysis. Toward that end, we have recently focused on: i) generating iPS-derived cells that properly represent the diverse phenotypic characteristics of developing or mature human somatic cells; ii) assembling organotypic cell culture systems that are robust and reproducible; iii) translating organotypic cell culture models to microscale systems for high throughput screening; and iv) combining genomic analyses with bioinformatics to gain insights into organotypic model assembly and the pathways influenced by drugs and toxins. This talk will emphasize recent studies in which we have explored biologically driven assembly of organotypic vascular and neural tissues. These tissues mimic critical aspects of human organs, and can be used for reproducible identification of drug candidates and toxic compounds. The talk will also introduce the use of our assembled human tissues to develop models of rare developmental disorders and degenerative diseases of the brain.
A Microfluidic Trap Array with the Ability to Perform Serial Operations on Nano-liter Samples
Open to view video.
Open to view video. Despite significant advances in microfluidic single cell analysis, serial processing of isolated cells is not trivial. In this work, we present a novel microfluidic platform that can facilitate in situ processing and analysis of single cells in nano-liter volumes by trapping, delivering reagents, and mixing in a seamless operation. Furthermore, it is possible to deterministically release the sample for further off-chip processing and analysis. This platform is composed of microfluidic trap arrays placed along a central channel of a microfluidic chip. The traps are composed of a thin semipermeable membrane on the top, which is exposed to a pneumatic control channel. Vacuum can be applied to the pneumatic control channel to actuate the membrane and withdraw single cell samples from the central microchannel. The semipermeable membrane allows air caught within the trap during withdrawal to escape into the pneumatic control channel, thereby filling all the traps with the same volume of fluid. Plugs of aqueous solutions separated by air are created in the central channel such that nano-liter samples can be withdrawn into the traps by actuation of the membrane. This enables serial operations on the nano-liter samples. Furthermore, filling the central channel with fluorinated oils can create isolated droplets containing the trapped cells/cellular components, which can be released by applying pressure to the pneumatic control channels. The multilayered device is fabricated using a laser-based micro-patterning and lamination method. Commercially available silicone films were integrated as the stretchable, semipermeable membranes. Following proof of principle with food coloring dye, trapping of single fluorescent beads and single cells, and serial operations such as bleaching, labeling, and lysis will be demonstrated. With its ability to integrate multiple serial operation steps on single trapped cells, applications towards single cell toxicological analysis will be presented.
Genetic analysis at scale
Open to view video.
Open to view video. Large-scale population cohorts such as the UK Biobank are transforming the way we consider and pursue genetic analyses, offering the opportunity to learn about a whole range of human traits and diseases. Here I will describe genome-wide association analyses of over 4,000 distinct traits and diseases from the UK Biobank, including estimation of heritability. Further, we performed sex-stratified analyses, enabling us to probe the degree to which genetic influences on complex traits are shared across men and women. Previous sex-stratified work has shown that genetic effects on height and BMI are largely similar across the sexes, but that waist-hip ratio shows clear evidence of sex-specific effects (Randall et al., PLoS Genetics, 2013). Analysis of the initial wave of UK Biobank shows differences in sex-specific estimates of heritability for waist circumference, blood pressure, and skin and hair color (Ge et al., PLoS Genetics, 2017). Initial results suggest that the overwhelming majority of genetic influences are shared across the sexes, with an average genetic correlation estimated to be in excess of 90%. Nevertheless, a number of phenotypes do show sex-specific genetic correlations clearly less than 1, including fat-percentage traits, exercise-related traits such as pulse rate, haemoglobin concentration, and prior history of smoking. These kinds of analyses illustrate the growing potential for biobanks to reveal the impact of genetic variation across all of human health and disease.
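For readers new to the key quantity here, a brief sketch: the cross-sex genetic correlation behind the ">90%" figure is conventionally defined as

$$ r_g = \frac{\operatorname{cov}(g_m, g_f)}{\sqrt{\operatorname{var}(g_m)\,\operatorname{var}(g_f)}} $$

where $g_m$ and $g_f$ denote the additive genetic effects on the trait in men and women, so $r_g = 1$ means the same variants act with proportional effects in both sexes. (The talk does not specify its estimator; in practice bivariate methods such as GREML or LD score regression are typical choices.)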
How to Survive – and Thrive – In Industry
Open to view video.
Open to view video. In this session, attendees will learn more about what it takes to advance in a non-academic career, including leadership and management skills, teamwork, and responding to feedback.
What do you want from me? Getting an Industry Position from the Hiring Manager’s Perspective
Open to view video.
Open to view video. In this interactive session, attendees will learn more about the types of non-academic positions available and how to achieve them, including tips on reviewing job descriptions, structuring the CV, and interviewing.
Recent Developments in Microfluidics and Microtechnologies for Applications in Life Science Research, in vitro Diagnostics and Medical Devices
Open to view video.
Open to view video. This presentation will provide insight into the dynamics and benefits of rapidly evolving microfluidics and microtechnologies, technologies that may have an enormous impact on progress in science and healthcare. Microfluidics is the science of manipulating small volumes of liquids, usually on microchips made with semiconductor manufacturing techniques and containing small channels that enable accurate control of liquids and chemical reactions. Because small volumes are used, quicker temperature shifts and faster liquid displacement are possible. Moreover, microfluidics enables the automation and integration of complex operations on-chip, with reduced volumes of sample and expensive reagents. With such properties, microfluidics naturally fits applications in healthcare, along with other microtechnologies. Breakthroughs in the fields of life science research, in vitro diagnostics and medical devices are only possible because of the progress made by both top designers and top manufacturers in these technology fields. With the right capabilities and growing experience in the field, microfluidics and microtechnology companies offer a wide range of possibilities that may lead to improvements in healthcare. For critical applications, where the stakes can be as high as saving a patient's life, settling for average quality is not an option. It will therefore be key to successfully implement microfabrication technologies into industrial products. This does, however, require a multidisciplinary approach that usually runs from conceptual design to prototyping to a final design and process transfer for volume manufacturing. Moreover, a flexible approach allowing a choice among a variety of fabrication materials and hybrid material combinations, as well as among a variety of fabrication technologies, is key to the successful integration of multiple required functions into next-generation products. This approach is of particular interest for developing relatively complex designs with multiple micro- and nanostructures and functionalities on board. This way, development costs and time-to-market can be reduced and success rates for cost-effective commercialization are increased. During this presentation, examples of developing industrial solutions, together with recent technology developments that may have a significant impact on applications in the fields of Genomics, Point-of-Care diagnostics, Organ-on-a-Chip and cell culturing, will be presented and explained.
Toward Standardizing Automated DNA Assembly
Open to view video.
Open to view video. DNA-based technologies have revolutionized research over the past decade, and industry and academia are both keen to leverage these new capabilities as efficiently as possible. A crucial demand is the ability to assemble basic DNA parts into devices capable of executing a desired function. Equally important is the need to build these devices quickly, robustly, and at scale. Common DNA assembly strategies like Modular Cloning and Gibson Assembly, while well known in the field of Synthetic Biology, often differ greatly in their protocols and execution between research groups, between individuals, and even between specific DNA constructions. This variability necessitates standardization, especially if the DNA assembly process is to leverage automation. Toward this end, we performed a parameter sweep to identify critical factors, such as DNA part concentration, final assembly size, and number of DNA parts in a reaction, which most impact DNA assembly efficiency when automating the process. These efforts included development of a software tool called Puppeteer, which provides high-level in silico specification of assembly products using DNA part repositories and generates both human-readable and machine-readable protocols for DNA assembly reaction preparation. Puppeteer also quantifies the advantage of executing DNA assembly jobs on liquid-handling hardware through a new set of metrics we developed, "Q-metrics". Q-metrics (Q-Time and Q-Cost) quantify the time and cost savings of a particular job done on a liquid-handling robot relative to performing the job manually. We calculated these metrics by running head-to-head experiments comparing the hands-on time and costs of performing identical DNA assembly jobs of different scopes both manually and automated with a liquid-handling robot. Taken together, we have developed protocols, software, and quantifiable metrics that will provide a solid foundation for research groups new to automating the process of DNA assembly. We are continuing to develop our pipeline, expanding the assembly strategies used as well as the number of Puppeteer-compatible liquid-handling devices. It is our hope that we can provide a fully automated, robust, and repeatable DNA assembly package that can be deployed in any academic or industrial setting.
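As a rough illustration of how such metrics might be computed (the abstract does not give the exact formulas behind Q-Time and Q-Cost, so the simple manual-to-automated ratios below are an assumption, and all names and numbers are hypothetical):

```python
# Minimal sketch of Q-metric-style comparisons. The exact Q-Time/Q-Cost
# formulas are not given in the abstract; a manual-vs-automated fold
# reduction is assumed here purely for illustration.
from dataclasses import dataclass

@dataclass
class AssemblyJob:
    name: str
    manual_hands_on_hr: float   # measured hands-on time, manual prep
    robot_hands_on_hr: float    # measured hands-on time, automated prep
    manual_cost_usd: float      # reagents + labor, manual
    robot_cost_usd: float       # reagents + labor + instrument time, automated

def q_time(job: AssemblyJob) -> float:
    """Fold reduction in hands-on time when the job runs on the robot."""
    return job.manual_hands_on_hr / job.robot_hands_on_hr

def q_cost(job: AssemblyJob) -> float:
    """Fold reduction in cost when the job runs on the robot."""
    return job.manual_cost_usd / job.robot_cost_usd

job = AssemblyJob("96-part MoClo build", 6.0, 0.75, 480.0, 310.0)
print(f"Q-Time ~ {q_time(job):.1f}x, Q-Cost ~ {q_cost(job):.1f}x")
```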
Cell growth is an omniphenotype
Open to view video.
Open to view video. Genotype-phenotype relationships are at the heart of biology and medicine. Numerous advances in genotyping and phenotyping have accelerated the pace of disease gene and drug discovery, but now that there are so many genes and drugs to study, prioritizing them is difficult. Disease-model assays are also becoming more complex, which is reflected in the growing complexity of research papers and the cost of drug development. Herein we propose a way out of this arms race. We argue that synthetic interaction testing in mammalian cells using cell growth as a readout is an overlooked approach to judging the potential of a genetic or environmental variable of interest (e.g., a gene or drug). The idea is that if a gene or drug of interest, combined with a known perturbation, causes a strong cell growth phenotype relative to the known perturbation alone, this justifies proceeding with the gene or drug of interest in more complex models, such as mouse models, where the known perturbation is already validated. This recommendation is backed by the following: 1) Most genes deregulated in cancer are also deregulated in other complex diseases; 2) The effects of diverse non-cancer drugs on patient cell growth readily predict clinical outcomes of interest; 3) Gene and drug effects on cell growth correlate with their effects on most molecular phenotypes of interest. Taken together, these findings suggest cell growth could be a broadly applicable, single-source-of-truth phenotype for gene and drug function. Measuring cell growth is robust and requires little time and money, features long capitalized on by pioneers using model organisms that we hope more mammalian biologists will recognize.
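To make the proposed readout concrete, here is a minimal sketch assuming a multiplicative (Bliss-style) null model for combined growth effects; the authors do not specify their scoring scheme, so this is illustrative only:

```python
# Synthetic interaction scoring sketch under an assumed multiplicative null.
def interaction_score(growth_a: float, growth_b: float, growth_ab: float) -> float:
    """
    growth_* are relative growth values (1.0 = untreated control).
    Under a multiplicative null, expected combined growth is
    growth_a * growth_b; deviation from it flags a candidate interaction
    worth pursuing in more complex (e.g., mouse) models.
    """
    expected = growth_a * growth_b
    return growth_ab - expected

# A gene knockdown alone barely slows growth (0.95), the known perturbation
# alone slows it moderately (0.70), but the combination is severe (0.20):
score = interaction_score(0.95, 0.70, 0.20)
print(f"interaction score = {score:+.2f}")  # strongly negative => synergy
```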
The value of normalization and impact of library size on identifying hits from DNA Encoded Library screens
Open to view video.
Open to view video. DNA-encoded small molecule libraries (DELs) can be synthesized by several means (i.e., DNA-recorded, DNA-templated, self-assembling), but regardless of approach most DELs rely on affinity selection to identify hits. Affinity selections coupled with chemically diverse DNA-encoded libraries have demonstrated their value in identifying small-molecule hits for the development of chemical probes and clinical candidates. Selection outputs are interrogated through sequencing of the DNA tags, and analysis of the copy counts of selected molecules helps to identify the ligands enriched specifically for the target of interest. Analysis based on copy counts presents two challenges: 1) copy counts do not correlate with the activity of DEL hits synthesized on- or off-DNA, and 2) hits from very diverse libraries (>100 million members) are rarely identified. At GSK we have adopted several different strategies for selection, sequencing, and analysis, tailored to different DEL diversities, that enable us to identify and rank hits from our entire library collection.
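As a generic illustration of why normalization matters for copy-count analysis (GSK's actual pipeline is not detailed in the abstract; counts-per-million plus fold enrichment over a no-target control is an assumed, standard treatment):

```python
# Depth-aware normalization of DEL sequencing copy counts: raw counts are
# converted to counts per million (CPM) so that libraries of different
# sequencing depth or size are comparable, then ranked by enrichment over
# a bead-only control. Values and tag names are invented placeholders.
def cpm(counts: dict[str, int]) -> dict[str, float]:
    """Normalize raw copy counts to counts per million reads."""
    total = sum(counts.values())
    return {tag: 1e6 * c / total for tag, c in counts.items()}

def enrichment(selected: dict[str, int], control: dict[str, int],
               pseudo: float = 0.5) -> dict[str, float]:
    """Per-tag fold enrichment of the target selection over the control."""
    sel, ctl = cpm(selected), cpm(control)
    return {tag: (sel.get(tag, 0) + pseudo) / (ctl.get(tag, 0) + pseudo)
            for tag in set(sel) | set(ctl)}

hits = enrichment({"A": 900, "B": 40, "C": 6}, {"A": 25, "B": 30, "C": 5})
print(sorted(hits.items(), key=lambda kv: -kv[1])[:3])
```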
An Automated Gene-Editing Platform for Breast Cancer
Open to view video.
Open to view video. About 70% of breast tumours express estrogen receptor alpha (ERa), which is a strong stimulator of breast cancer proliferation. In addition to its complex roles in cancer, ERa also controls a wide range of physiological processes, from regulating the development and function of the female reproductive system to initiating protective functions. Fully understanding the connection between the physiological and molecular functions of the estrogen receptor, however, requires an in-depth understanding of the spectrum of genes regulated in the ER pathway. Hence, there have been studies seeking new small-molecule inhibitors that prevent the upregulation of genes related to the estrogenic signaling pathway. But before designing new inhibitors, a continuing challenge is to find genes related to the ER pathway that regulate the growth and differentiation of these cells. One way to search for such genes is to knock them out using gene-editing techniques. Our group recently developed an automated, droplet-based microfluidic gene-editing platform capable of automated culturing and editing of lung cancer cells using traditional lipid-mediated transfection, with cellular analysis performed through imaging. In the context of breast cancer cells, however, traditional methods of gene editing are not possible: breast cancer cells (like T47D-KBLuc) are typically hard to transfect, which led us to integrate alternative gene delivery methods (e.g., viral transduction and electroporation) on our microfluidic system to improve delivery. Furthermore, to show the versatility of our platform, we tested different types of editing methods: RNA interference (RNAi) using short-hairpin RNA (shRNA), and CRISPR using Cas9 protein. Using these different methods, we present three main findings: (1) results from the optimization of gene delivery by viral transduction and electroporation, assessing the efficacy of the novel integration of these methods on droplet-based microfluidics; (2) results of on-device cell proliferation measurements using immunofluorescence, which maintain the integrity of analysis-on-chip; and (3) targeting of genes expressing ERa, p53 and other important oncogenes, to be assessed in a loss-of-function screen. Overall, with the novel incorporation of alternative gene delivery methods, this platform extends automated gene editing to harder-to-transfect cell lines, further demonstrating the flexibility and efficacy of automated on-device gene-editing tools. We hope this will make finding therapeutic targets for breast cancer treatment easier and faster.
Small-molecule induced protein degradation with proteolysis targeting chimeric molecules (PROTACs)
Open to view video.
Open to view video. Targeted protein degradation using bifunctional small molecules known as proteolysis targeting chimeric molecules (PROTACs) is emerging as a novel therapeutic modality. PROTACs redirect ubiquitin ligases to target specific proteins for ubiquitination and proteasomal degradation. The advantages of the PROTAC technology lie in its modular, rationally designed molecules, capable of producing potent, selective and reversible protein knockdown, as demonstrated both in cells and in vivo. This approach combines the efficacy typically associated with nucleic acid-based approaches with the flexibility, titratability, temporal control and drug-like properties associated with small-molecule agents. The removal of a disease-causing protein is an attractive therapeutic option. PROTACs producing efficient cellular degradation of target proteins with functional activity have been described in the literature, with some examples showing in vivo efficacy in disease-relevant models. This presentation aims to highlight the potential of PROTACs in drug discovery, with a focus on their challenges from our perspective.
A Cloud-based Solution for Automated Device Data Collection, Validation, and LIMS Integration for Analytical QC
Open to view video.
Open to view video. Most modern labs continue to employ time-consuming and error-prone manual processes for collecting, validating, and publishing raw device data. Although there are a number of self-hosted Scientific Data Management Systems (SDMS) that claim to alleviate at least one aspect of these processes, most of these solutions are built on outdated technology, have no support for cloud deployments, and do not facilitate the use of validation and export pipelines. At the Broad Institute, we set out to build a modern, cloud-hosted informatics solution for the automation of our analytical data that does more than just file aggregation, enabling scientists to work smarter by automatically collecting, validating, and exporting all of their data through a flexible, cloud-hosted pipeline. This solution minimizes errors, saves valuable scientist time, and provides a comprehensive, fully auditable record of data as it moves from the device to the LIMS. The analytical lab in the CDoT (Center for Development of Therapeutics) group at the Broad Institute has a set of UPLC devices that can be used interchangeably. Attached to each device is a computer that, by default, serves as the primary storage for both raw and processed data files. To begin a run, an input file is generated by the LIMS and placed into a directory accessible by the device computer. The operator uses this input file to initiate a run on the device. The result of the run is a set of raw result files, some of which store transient data and are ignored. Once the run is complete, the operator performs an integration, effectively producing a chromatogram for each sample in the run. The data collection solution sends all of these files, along with the summarized run output, to the cloud in real time. Collected data is then automatically checked for completeness, and valid results are subsequently transformed and exported to the LIMS in a LIMS-ready format. Over the course of five months, our automated solution has processed chromatograms for more than 7,400 samples, saving countless hours for operators and analysts. The system has captured at least 624 invalid documents, segregating them and informing the lab manager to prevent erroneous data from being imported into the LIMS.
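A minimal sketch of the collect-validate-export flow described above; the file layout, required fields, and LIMS payload shape are all hypothetical stand-ins rather than the Broad's actual schemas:

```python
# Collect -> validate -> export sketch: complete documents are transformed
# to a LIMS-ready shape; incomplete ones are quarantined for review.
import json
from pathlib import Path

REQUIRED_FIELDS = {"sample_id", "injection_volume", "chromatogram"}  # assumed

def validate(result: dict) -> list[str]:
    """Return a list of completeness problems; an empty list means valid."""
    return [f"missing field: {f}" for f in REQUIRED_FIELDS - result.keys()]

def process_run(run_dir: Path, lims_out: Path, quarantine: Path) -> None:
    """Assumes the three directories already exist; one JSON doc per sample."""
    for path in run_dir.glob("*.json"):
        doc = json.loads(path.read_text())
        problems = validate(doc)
        if problems:
            # segregate the invalid document (lab manager notified elsewhere)
            (quarantine / path.name).write_text(
                json.dumps({"doc": doc, "problems": problems}))
            continue
        lims_ready = {"sampleId": doc["sample_id"],   # transform to LIMS shape
                      "peaks": doc["chromatogram"]}
        (lims_out / path.name).write_text(json.dumps(lims_ready))
```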
Reducing the Science Community Plastic Waste Contribution Through Financially Viable Solutions
Open to view video.
Open to view video. Plastic waste is a growing issue for our planet. It is estimated that more than 8.3 × 10^9 metric tons of plastic have been produced since its introduction, and it is thought that less than 10% of total plastics are reused; the rest enter the waste stream, adding toxins and physical debris to the environment. In 2015, Urbina et al. estimated that labs contribute 5.5 million tons a year to the problem. While many industries work to find innovative solutions to their plastic waste contributions, the science industry has been slow to adopt aggressive abatement or repurposing strategies, even as the amount of plastic in the lab continues to increase. Every sample uses multiple plastic tips and tubes to generate results, and as more labs adopt new technology and automation strategies, laboratory plastic consumables waste will continue to grow. For example, by the end of 2018 it is estimated that over 77 million pounds of pipette tips will have been discarded into landfills after a single use. That is enough tips to circle the earth more than ten times, and the electricity used to produce that number of plastic pipette tips could have powered almost 21,000 homes in the US for a year. Multiple technologies and strategies are being employed within automation groups and at the benchtop to help laboratories substantially decrease their plastic waste. Employing these technologies can not only decrease waste but also align with overall laboratory budgetary goals, reducing both financial waste and plastic waste.
Data Directed Approach for Diversity-based HTS library design
Open to view video.
Open to view video. High-throughput screening (HTS) remains the principal engine of small-molecule probe and early drug discovery. However, recent shifts in drug screening toward nontraditional and/or orphan-ligand targets have proved challenging, often resulting in large expenditures and poor prospects from traditional HTS approaches. In an ever-evolving drug development landscape, there is a need to strike a balance between chemical structure diversity, for diverse biological performance, and target-class diversity, with the aim of generating efficient and productive screening collections for HTS efforts. In this presentation, data-driven approaches for compound library design and hit triage are examined. An overview of current developments in data-driven approaches formulated at Scripps and other institutes is presented, demonstrating increased hit rates using iterative screening strategies that also provide early insight into structure-activity relationships (SAR). Scripps Research Molecular Screening Center, having served as one of four principal HTS screening centers during the NIH-funded Molecular Libraries Production Centers Network program, has performed well over 300 HTS campaigns on a large variety of traditional and nontraditional targets. Informatics analyses of the NIH/MLPCN library reveal representative scaffolds for hierarchically related compounds correlated with various target classes. Data mining the HTS results of various campaigns demonstrates that compounds hierarchically related to these scaffolds have enhanced hit rates (up to ~50X), serving as a hit predictor and a guide for compound library selection. Presented are the informatics that support a novel approach to HTS library design, aimed at mitigating risk when screening difficult targets by requiring only small pilot screens for early chemical-landscape discovery while providing guidance for formulating larger HTS efforts. Hits discovered in a targeted pilot screen can drive the selection and retesting of chemically related super-structures, which results in enhanced hit discovery while providing preliminary SAR heredity. Lessons learned in data-driven library selection allow investigators to cover a larger amount of appropriate chemical space, increase the chances of identifying quality hits, provide guidance in library acquisition, and even provide a foundation for combinatorial chemistry design.
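A minimal sketch of scaffold-level hit-rate analysis of the kind described, using RDKit's Bemis-Murcko scaffolds; the Scripps informatics pipeline itself is not public, so treat this as an illustrative stand-in:

```python
# Group pilot-screen results by Bemis-Murcko scaffold and compute per-scaffold
# hit rates; scaffolds beating the overall rate would nominate related
# super-structures for the follow-up screen. Example data is invented.
from collections import defaultdict
from rdkit.Chem.Scaffolds import MurckoScaffold

def scaffold_hit_rates(screen: list[tuple[str, bool]]) -> dict[str, float]:
    """screen: (SMILES, is_hit) pairs from a pilot HTS campaign."""
    tested, hits = defaultdict(int), defaultdict(int)
    for smiles, is_hit in screen:
        scaf = MurckoScaffold.MurckoScaffoldSmiles(smiles)
        tested[scaf] += 1
        hits[scaf] += is_hit
    return {s: hits[s] / tested[s] for s in tested}

screen = [("c1ccccc1CCN", True), ("c1ccccc1CCO", False), ("C1CCNCC1", False)]
print(scaffold_hit_rates(screen))  # hit rate per scaffold SMILES
```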
SiLA & AnIML: Lab of the Future, meet Data Analytics
Open to view video.
Open to view video. Connectivity and seamless interoperability are among the goals for many "lab of the future" projects. Additionally, organizations are looking at analyzing and leveraging their data in new ways. Cloud technology, machine learning and artificial intelligence tools are coming into focus. To bring such efforts to fruition, a seamless data flow between systems must be established. This presentation will explore how standards can provide avenues toward such a digital lab. Standard protocols and data formats serve as infrastructure enablers for integration of instruments, systems, and a unified data flow. We will review the progress of the open ASTM AnIML and SiLA standardization initiatives, which can provide key building blocks to establish interoperability. While AnIML specifies an XML-based standard data format for analytical data, SiLA provides web-service-based communication standards for interfacing with instruments. Software clients can use SiLA to discover and interact with services inside the lab. These services may be instruments or other systems. All services communicate using a common open protocol, built on cutting-edge web technologies. AnIML provides the data format to plan and document the execution of lab experiments. This ensures that data from all experiments is captured in a consistent and easily accessible format, enabling easy consumption of the data via analytics tools and feeding machine learning models. Data becomes readily available and can generate value beyond the original goal of the experiment. This paves the way from the bench to result, and from result to long-term insight. By joining forces, a new ecosystem is emerging around SiLA and AnIML that allows end-to-end integration of instrument control, data capture, and enterprise system (ELN, LIMS) connectivity. This presentation reports on the current state of this ecosystem and discusses the rapid rate of adoption in the vendor community.
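To suggest how such standardized data becomes easy to consume, here is a deliberately simplified, AnIML-style XML fragment and a few lines that read it; the element names are illustrative stand-ins, and the published AnIML core schema is the authoritative reference:

```python
# Parsing a simplified AnIML-style document with the standard library.
import xml.etree.ElementTree as ET

doc = """
<AnIML>
  <SampleSet>
    <Sample name="QC-001" sampleID="S1"/>
  </SampleSet>
  <ExperimentStepSet>
    <ExperimentStep name="UV assay">
      <Series name="absorbance" unit="AU">0.12 0.48 0.95</Series>
    </ExperimentStep>
  </ExperimentStepSet>
</AnIML>
"""

root = ET.fromstring(doc)
for series in root.iter("Series"):
    values = [float(v) for v in series.text.split()]
    print(series.get("name"), series.get("unit"), values)
```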
Antibacterial monoclonal antibodies: A strategy to prevent serious bacterial infections
Open to view video.
Open to view video. Staphylococcus aureus is an opportunistic pathogen that causes numerous debilitating infections such as pneumonia, endocarditis, bacteremia, and complicated skin and soft tissue infections. These infections are associated with high levels of morbidity and mortality, especially in immunocompromised patients. Antibiotic resistance has reduced treatment options, and the increasing incidence of multidrug-resistant isolates underscores the need for alternative antibacterial strategies such as immunotherapy. We have developed three human monoclonal antibodies (mAbs): MEDI4893 against Staphylococcus aureus secreted alpha toxin (AT), SAN481 against the four main bi-component leukotoxins, and SAR114 against surface-expressed Clumping Factor A (ClfA), targeting six virulence factors that exert different activities on the host. AT and the bi-component leukotoxins (LukSF, LukED, HlgAB and HlgCB) are pore-forming toxins that cause immune dysregulation and cell death and promote bacterial dissemination, whereas ClfA is a fibrinogen-binding protein that promotes biofilm formation, bacterial agglutination and complement evasion. The mAb combination showed broad strain coverage in multiple animal models through complementary mechanisms of action, such as toxin neutralization, inhibition of bacterial agglutination and opsonophagocytic killing. MEDI4893 prophylaxis also demonstrated efficacy in a S. aureus + Gram-negative mixed lung infection model. Together, our data hold promise for an alternative therapy for the prevention of S. aureus disease.
On-chip Membrane Protein Cell-free Expression Enables Direct Binding Assay
Open to view video.
Open to view video. Though integral membrane proteins (IMPs) play a pivotal role in the drug discovery process, developing a direct binding assay for monitoring their interactions with therapeutic candidates, particularly small ligands, has been extremely challenging. IMPs are commonly expressed in a cell-based system and, after undergoing a cumbersome multistep process involving extraction, purification, and in vitro stabilization in a soluble format, can be interfaced with a standard biophysical technique such as surface plasmon resonance (SPR) for binding analysis. To circumvent this traditional limitation, we envisaged combining cell-free technology with an SPR biosensor to perform real-time, on-chip monitoring of protein expression and, subsequently, binding detection. SPR-functionalized surfaces were used as a template to capture cell-free expressed IMPs. This bottom-up methodology was employed to express a G-protein coupled receptor, the β2 adrenergic receptor, and the chimeric ion channel KcsA-Kv1.3, for binding characterization. As described herein, we investigated two different in situ strategies: (1) a solid-supported lipid bilayer for incorporating nascent protein directly into a native membrane-mimicking environment, and (2) an anti-green fluorescent protein (GFP)-functionalized surface for capturing in situ expressed GFP-fused protein. Phospholipid-bilayer-functionalized surfaces did not provide any control over the IMP's orientation in the bilayer and demonstrated minimal or no binding to the binding partners. In contrast, the anti-GFP-functionalized capture surfaces, with a defined IMP surface orientation, showed binding to both large and small molecules.
The Digital Lab: Enhancing Data Integrity and Adherence to FAIR Guiding Principles
Open to view video.
Open to view video. Data is the product of any science-based R&D organization, and since data is created in the lab, the lab itself is the very beginning of the data management and stewardship lifecycle. In this talk we will discuss how embedding data standards at a foundational level allows for the creation of a 100% "digital lab" that removes numerous error modes from laboratory processes. The digital lab expands the scope of current automation capabilities and self-documents the relevant context of the process in real time, capturing enriched metadata during the execution of an experiment, data reduction and analysis, and decision capture, including details of the materials and instruments used. Entropy is removed from the process by using normative terms provided by the Allotrope Foundation Ontologies in the laboratory software, enabling correct, consistent and complete metadata capture. Conformance to SOPs and analytical methods is also enforced digitally using standard input (instruction sets) described in a semantic graph. The completed data set connects equipment, materials, processes, results and decisions, and is stored in a semantic graph (RDF triples) along with "raw data" (W3C data cube) in an open, portable, vendor-neutral format. The resulting standardized, highly annotated body of data is more consistent, more searchable and more easily integrated across domains, and also enables the automation of reports and other structured documents, with documented provenance to the original source. When completed, the digital lab approach will greatly enhance data integrity (ALCOA-CCEA rules) and adherence to the FAIR Guiding Principles for scientific data management and stewardship.
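A minimal sketch of what triple-based capture of run context can look like, using rdflib; the namespace and terms below are hypothetical placeholders, not actual Allotrope Foundation Ontology IRIs:

```python
# Recording run context as RDF triples, then serializing to Turtle.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/lab#")   # hypothetical vocabulary
g = Graph()
run = EX["run-2019-042"]

g.add((run, RDF.type, EX.ChromatographyRun))
g.add((run, EX.usedInstrument, EX["uplc-07"]))
g.add((run, EX.analyzedSample, EX["sample-S1"]))
g.add((run, EX.performedBy, Literal("analyst-31")))

print(g.serialize(format="turtle"))  # rdflib >= 6 returns a str
```

Because equipment, sample, and operator hang off the same run node, provenance queries ("which samples touched instrument uplc-07?") become simple graph traversals rather than joins across siloed files.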
One of these things is not like the other… or is it? How to know when to repurpose software solutions for your laboratory
Open to view video.
Open to view video. Often when trying to solve a problem, we find ourselves looking in familiar places for the answer. But when it comes to data and informatics, relying on surface similarities might prove problematic. How is one to know when a familiar technology is appropriate for a new application or when an entirely new technology is needed? To answer these questions, we developed a methodology to review and analyze requirements and assess existing solutions for fitness to repurpose in new areas. This method led to some surprising findings about when and what types of software solutions are appropriate to reuse for new applications and which are not. In this talk, we will outline the steps to review technology and determine fitness to repurpose for new applications. To illustrate the method, we will review a case study of a production software platform used to track trucking data that was repurposed to collect and store global public health surveillance data. Participants will learn actionable steps to analyze, review and determine fitness of their current software solutions for new applications as well as some rules of thumb about when software can be repurposed and when to consider new solutions.
High Throughput Measurement of Antibody Drug Pharmacokinetics: Assessment of Penetration of Antibodies and Conjugates into 3D Cell Culture Models
Open to view video.
Open to view video. In developing an antibody-based therapeutic for a solid tumor target, it is essential that the therapeutic not only binds the target of interest with high affinity but also penetrates the tissue. The ability of an antibody-based therapeutic to penetrate tissue has a significant impact on its in vivo efficacy, so a reliable and inexpensive assay for screening antibodies for penetration has great value in the therapeutic development process. To address this need, an in vitro assay was developed that combines 3D cell culture models with tissue clearing and fluorescent labeling to quantitatively assess antibody penetration. It was demonstrated that this assay allows rapid evaluation of antibody penetration in models that accurately replicate in vivo diffusion kinetics, and that whole antibody libraries can be quickly screened. This assay allows antibodies to be quickly screened, altered and optimized in inexpensive in vitro experiments prior to expensive in vivo studies, where poor penetration can lead to late-stage failures.
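To illustrate the diffusion-kinetics point, here is a toy one-dimensional diffusion-plus-consumption model of antibody transport into a spheroid-like tissue slab, showing how binding consumption limits penetration depth; all parameter values are hypothetical and none come from the talk:

```python
# Explicit finite-difference toy model: free antibody diffuses inward from a
# fixed bath concentration at the surface while being consumed by binding.
import numpy as np

R, N = 200e-6, 100                 # tissue depth (m), grid intervals
dr = R / N
D = 1e-11                          # antibody diffusivity in tissue (m^2/s)
k_consume = 1e-3                   # lumped binding consumption rate (1/s)
dt = 0.2 * dr**2 / D               # stable explicit time step
c = np.zeros(N + 1)                # free antibody concentration profile
c[-1] = 1.0                        # bath concentration at the surface

for _ in range(int(3600 / dt)):    # simulate one hour
    lap = (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dr**2
    c[1:-1] += dt * (D * lap[1:-1] - k_consume * c[1:-1])
    c[0] = c[1]                    # zero-flux center; surface stays fixed

print(f"center/surface ratio after 1 h: {c[0]/c[-1]:.3f}")
```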
Phenotypic discovery and mechanism-of-action studies: a focus on the beta cell
Open to view video.
Open to view video. A loss of beta-cell mass and biologically active insulin is a central feature of both type 1 and type 2 diabetes. A chemical means of promoting beta-cell viability or function could have an enormous impact clinically by providing a disease-modifying therapy. Phenotypic approaches have been enormously useful in identifying new intracellular targets for intervention; in essence, we are allowing the cells to reveal the most efficacious targets for desired phenotypes. A key step in this process is determining the mechanism of action to prioritize small molecules for follow-up. Here, I will discuss how my group is identifying small molecules that promote beta-cell regeneration, viability, and function. We have developed a number of platforms to enable the phenotypic discovery of small molecules with human beta-cell activity, providing excellent starting points for validating therapeutic hypotheses in diabetes. First, we developed an islet cell culture system suitable for high-throughput screening that has enabled us to screen for compounds that induce human beta-cell proliferation, as well as to perform follow-up studies on small molecules that emerge from other efforts. This work identified DYRK1A inhibition as a relevant mechanism to promote beta-cell proliferation, and 5-iodotubercidin (5-IT) was shown to induce selective beta-cell proliferation in NSG mice. We have collected a number of available DYRK1A kinase inhibitors for kinase profiling and correlation with beta-cell proliferation. It appears that inhibition of DYRK1A and inhibition of CLK1 are both correlated with the level of proliferation induced. More recently, we have found that this small-molecule approach actually causes the loss of several markers of beta-cell maturity (PDX1, NKX6.1, MAFA) in human beta cells, as measured by immunofluorescence, suggesting that induction of a proliferative state results in beta-cell dedifferentiation. We are investigating whether dedifferentiation enhances the effects of DYRK1A inhibitors, or whether it is sufficient for proliferation in the first place. We also developed an assay for high-throughput screening to identify small molecules that can protect beta cells from apoptosis induced by pro-inflammatory cytokines. This work led to the discovery that histone deacetylase 3 (HDAC3) is a relevant target for inhibiting beta-cell apoptosis, with beneficial effects in islets in cell culture. Further, isoform-selective inhibitors can delay (and partially reverse) the onset of autoimmune diabetes in NOD mice. Although we know the target for such compounds, the precise mechanism of action is still unclear. Thus, we have been performing gene expression- and proteomics-based experiments to uncover the pathways in beta cells perturbed by HDAC3 inhibition.
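A minimal sketch of the profiling-to-phenotype correlation step described above: correlate each kinase's per-compound inhibition with the measured proliferation. All compound names and values below are invented placeholders, not data from the talk:

```python
# Rank kinases by how well their inhibition profile tracks beta-cell
# proliferation across a small compound panel (hypothetical data).
import numpy as np

# fraction inhibition per compound, per kinase (invented):
kinase_inhibition = {
    "DYRK1A": np.array([0.95, 0.80, 0.30, 0.10]),
    "CLK1":   np.array([0.90, 0.70, 0.40, 0.05]),
    "GSK3B":  np.array([0.20, 0.60, 0.90, 0.30]),
}
proliferation = np.array([3.1, 2.2, 0.9, 0.3])   # fold over control (invented)

for kinase, inhib in kinase_inhibition.items():
    r = np.corrcoef(inhib, proliferation)[0, 1]   # Pearson correlation
    print(f"{kinase}: r = {r:+.2f}")
```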
Improving the Drug Discovery Tool Box
Open to view video.
Open to view video.
Opening Keynote: Teresa Woodruff, Northwestern University
Open to view video.
Open to view video.
Closing Keynote: Eran Segal, Weizmann Institute
Open to view video.
Open to view video.