2021 AI-Powered Drug Discovery Symposium

Learn more about the AI tools available today and discuss future drug discovery collaborations involving cutting-edge advances in AI, machine learning and computational science.

Daniel Ting, M.B.B.S., Ph.D.

Director of Singapore Health Service (SingHealth) AI Program, Head of AI and Digital Innovation

Singapore National Eye Centre

Associate Professor Daniel Ting is the Director of the Singapore Health Service (SingHealth) AI Program, Head of AI and Digital Innovation at the Singapore Eye Research Institute (SERI), and Associate Professor in Ophthalmology at Duke-NUS Medical School, Singapore. Globally, Dr Ting serves on several AI executive committees (American Academy of Ophthalmology, STARD-AI, QUADAS-AI, DECIDE-AI) and AI editorial boards (Associate Editor for Nature Digital Medicine, Frontiers in Medicine and Frontiers in Digital Health), and chairs the AI and Digital Innovation Standing Committee of the Asia-Pacific Academy of Ophthalmology. In 2017, he was a US-ASEAN Fulbright Scholar, visiting Johns Hopkins University to deepen AI collaboration between the US and the ASEAN region.

To date, he has published more than 200 peer-reviewed papers in prestigious journals such as JAMA, NEJM, The Lancet, Nature Medicine, Nature Biomedical Engineering and Lancet Digital Health. In 2021, he was ranked by ExpertScape as the world’s most influential deep learning researcher across all domains in healthcare for the past decade (2010-2021).

For these accomplishments, Dr Ting has been recognized by many top-tier international AI and ophthalmology societies with prestigious scientific awards, including the MICCAI OMIA Prestigious Achievement Award (2020), the ARVO Bert Glaser Award for Innovative Research in Retina (2020), the USA Macula Society Evangelos Gragoudas Award (2019), the APAO Young Ophthalmologist’s Award (2018) and the APTOS Young Innovator Award (2017).

With the advancement of AI technology, Daniel hopes to harness the power of digital technology, including big data, deep learning and blockchain, to improve patient outcomes and experience, reduce the health economic burden and, more importantly, narrow the global gap in equality of healthcare access, standards and delivery.

Kerry Gilmore

Assistant Professor of Chemistry

University of Connecticut

Dr. Kerry Gilmore is an Assistant Professor of Chemistry at the University of Connecticut. He grew up in Brewster, Massachusetts and attended Roger Williams University, where he obtained bachelor’s degrees in Biology and Chemistry. He received his Ph.D. in 2012 from Florida State University, studying cyclization reactions, during which time he was a Fulbright Scholar working at the CNR in Bologna, Italy. He then moved to the Max-Planck Institute of Colloids and Interfaces for postdoctoral work and led the flow chemistry group there from 2014 to 2020, before joining the University of Connecticut in 2020. His current research interests include the development of technology and approaches to advance methodologies of small-molecule synthesis, as well as the use of computational methods and machine learning to better understand and predict the selectivity of organic reactions. He recently won the 2021 ACS Award for Affordable Green Chemistry.

Ola Engkvist, Ph.D.

Head Molecular AI

AstraZeneca R&D

Dr Ola Engkvist is Head of Molecular AI in Discovery Sciences, AstraZeneca R&D. He did his PhD in computational chemistry at Lund University, followed by a postdoc at Cambridge University. After working for two biotech companies, he joined AstraZeneca in 2004. He currently leads the Molecular AI department, whose focus is to develop novel ML/AI methods for drug design, productionize those methods and apply them to AstraZeneca’s small-molecule drug discovery portfolio. His main research interests are deep learning-based molecular de novo design, synthetic route prediction and large-scale molecular property prediction. He has published over 100 peer-reviewed scientific publications. He is an adjunct professor in machine learning and AI for drug design at Chalmers University of Technology and a trustee of the Cambridge Crystallographic Data Centre.

Rafael Gomez-Bombarelli, Ph.D.

Toyota Assistant Professor in Materials Processing

Massachusetts Institute of Technology

Rafael Gomez-Bombarelli is the Toyota Assistant Professor in Materials Processing in MIT’s Department of Materials Science and Engineering. Gomez-Bombarelli is interested in fusing machine learning and atomistic simulations for designing materials and their transformations. His group works across molecular, crystalline and polymer matter, combining novel computational tools in optimization, inverse design, surrogate modeling and active learning with simulation approaches like quantum chemistry and molecular dynamics. Through collaborations at MIT and beyond, his group helps develop new practical materials such as therapeutic peptides, organic electronics for displays, electrolytes for batteries, and oxides for sustainable catalysis. Gomez-Bombarelli’s work has been featured in the MIT Technology Review and the Wall Street Journal. He also co-founded Calculario, a materials discovery company that leverages quantum chemistry and machine learning to target advanced materials in high-value markets. He earned a BS, MS, and PhD in chemistry from Universidad de Salamanca, followed by postdoctoral work at Heriot-Watt University and Harvard University.

Payel Das, Ph.D.

Principal Research Staff Member

IBM Research

Dr. Payel Das is a Principal Research Staff Member and a manager at IBM Research AI, IBM Thomas J Watson Research Center. She has also been an adjunct associate professor in the Department of Applied Physics and Applied Mathematics (APAM) at Columbia University. She received her Ph.D. in theoretical biophysics from Rice University, Texas. Currently, she leads research on trustworthy AI in the low-data regime and on machine creativity. A central focus is developing controllable generative AI and efficient black-box optimization techniques. The goal is to enable reliable modeling of complex systems and efficient synthesis of novel, useful designs for downstream business and scientific applications, including drug discovery and materials design.

Das has co-authored over 40 peer-reviewed publications and several patent disclosures, and has given dozens of invited talks at university colloquia, department seminars, top-rated conferences and workshops. She has served as an editorial advisory board member of the journal ACS Central Science. She is the recipient of two IBM Outstanding Technical Achievement Awards (the highest technical award at IBM), two IBM Research Division Awards, one IBM Eminence and Excellence Award and six IBM Invention Plateau Awards.

Dean Ho, Ph.D.

Head, Department of Biomedical Engineering

National University of Singapore

Provost’s Chair Professor

Director, The N.1 Institute for Health (N.1)

Director, The Institute for Digital Medicine (WisDM)

Xiao Liu, M.B.Ch.B., Ph.D.

Postdoctoral Researcher

University of Birmingham and University Hospitals Birmingham NHS Foundation Trust

Dr Xiao Liu is an ophthalmologist and a postdoctoral researcher at the University of Birmingham and University Hospitals Birmingham NHS Foundation Trust. She is interested in evidence standards for AI in healthcare, to ensure AI innovations can safely and effectively improve patient care. Xiao co-led the development of SPIRIT-AI and CONSORT-AI, the first international reporting standards for clinical trials of AI interventions, and is contributing to other AI reporting standards in development, including STARD-AI, DECIDE-AI and TRIPOD-AI. She also works with regulatory and commissioning bodies, including the MHRA, NICE, the UK National Screening Committee and the WHO/ITU AI4H focus groups, on their approaches to evaluating AI in healthcare.

Edward Chow, Ph.D.

Editor-in-Chief, SLAS Technology; Associate Professor Cancer Science Institute of Singapore

National University of Singapore

Associate Professor Edward Kai-Hua Chow is a Principal Investigator at the National University of Singapore (NUS) in the Cancer Science Institute of Singapore and the Department of Pharmacology. He is also the Research Director for the Institute for Digital Medicine (WisDM) and the Department of Pharmacology. He received his B.A. in Molecular and Cellular Biology from UC Berkeley and his Ph.D. from UCLA. Prior to joining NUS, A/P Chow was an American Cancer Society Postdoctoral Fellow under the guidance of Prof. J. Michael Bishop (1989 Nobel Prize in Physiology or Medicine) at UCSF. His research group is interested in understanding how to treat specific oncology patients or patient subtypes, as well as the oncogenic drivers that determine subtype-specific therapy. In particular, his group is interested in applying engineering-based analytics to personalised and precision drug combination design. Through the development of the quadratic phenotypic optimisation platform (QPOP), his group has demonstrated that drug combination design for specific disease indications, as well as for specific patient groups, can be achieved quickly in a cost-, time- and sample-efficient manner. With an emphasis on haematological malignancies and gastrointestinal cancers, this work has been translated into multiple clinical studies as well as into the NUS start-up KYAN Therapeutics.

Elodie Pronier, Ph.D.

Lead Biologist

Owkin

Elodie Pronier, PhD, Lead Biologist at Owkin, graduated from Université Denis Diderot (Paris 7) with a European Magister in Genetics before completing her PhD in Oncology and Hematology at Institut Gustave Roussy, where she focused on epigenetic mutations involved in leukemia development. Elodie then moved to New York as a postdoctoral research fellow at Memorial Sloan Kettering Cancer Center in Ross Levine’s laboratory, where she co-led several projects on the genetics of leukemia. She is now a Lead Biologist within Owkin’s Biotech team, in charge of creating Owkin’s early drug discovery pipeline.

Lucas Pelkmans

Endowed Research Chair

University of Zurich

Lucas Pelkmans studied Medical Biology at the University of Utrecht in The Netherlands and did his PhD in Biochemistry at the ETH Zurich in Switzerland. He was then a postdoctoral fellow at the Max Planck Institute of Molecular Cell Biology and Genetics in Dresden, Germany. In 2005, he returned to Switzerland to start his research group in image-based systems biology at the ETH Zurich as an assistant professor, and in 2010 was elected to the Ernst Hadorn Foundation-endowed Chair at the University of Zurich, where he has led his research group since. Lucas Pelkmans has received multiple awards for his pioneering work to establish the field of image-based systems biology, has received junior, consolidator and advanced ERC grants, and is a member of EMBO.

Leonidas Bleris, Ph.D.

Cecil H. and Ida Green Chair in Systems Biology Science and Professor

The University of Texas at Dallas

Leonidas Bleris is a Cecil H. and Ida Green Chair in Systems Biology Science and Professor in the Bioengineering Department of the University of Texas at Dallas. Before joining UTD, Bleris was a Postdoctoral Fellow at the FAS Center for Systems Biology at Harvard University. Bleris earned a Ph.D. in Electrical Engineering from Lehigh University in 2006 and received a Diploma in Electrical and Computer Engineering in 2000 from the Aristotle University of Thessaloniki, Greece. Bleris was awarded the Christine Mirzayan Science and Technology Policy Graduate Fellowship from the National Academy of Sciences (NAS) and served with the Board on Mathematical Sciences and Their Applications. During 2009-2010, Bleris was a Visiting Scientist at the FAS Center for Systems Biology at Harvard University and an Independent Expert with the European Commission under the "Science, Economy and Society" directorate. Bleris served as the University of Texas at Dallas representative on the 2011-2012 Tuning Oversight Council for Engineering and Science, Committee on Bioengineering, and received the 2014 Junior Faculty Research Award from the Erik Jonsson School of Engineering and Computer Science. In 2018 Bleris was awarded a Cecil H. and Ida Green Chair in Systems Biology Science. His research focuses on systems biology, mammalian synthetic biology and genome editing.

Chris Bakal

Professor of Cancer Morphodynamics

Institute of Cancer Research

Chris Bakal is Professor of Cancer Morphodynamics at the Institute of Cancer Research in London, UK, where he leads the Dynamical Cell Systems Laboratory. His team aims to understand how cancer is driven by changes in cell shape.

Chris was born in Calgary, Canada. He received his BSc in Biochemistry from the University of British Columbia, and his PhD in Medical Biophysics from the University of Toronto. Chris’ postdoctoral work was performed in the Department of Genetics at Harvard Medical School and in the Computer Science and Artificial Intelligence Laboratory (CSAIL) at the Massachusetts Institute of Technology (MIT). In 2007, Chris was named one of the most promising postdoctoral fellows or junior faculty members at Harvard Medical School, receiving the Dorsett L. Spurgeon award. After being awarded a Wellcome Trust Career Development Fellowship, Chris established his laboratory at the Institute of Cancer Research in London in 2009. In 2015 he was awarded the prestigious Cancer Research UK Future Leaders Prize.

Outside of science, Chris is a competitive track cyclist, a former national-level runner and a former world-ranked downhill ski racer. He has run a mile in just over four minutes and aims to compete in an Ironman next year.

Finton Sirockin, Ph.D.

Associate Director

Novartis

Finton Sirockin, PhD, is an Associate Director in Global Discovery Chemistry at Novartis, based in Basel.

Finton is a theoretical chemist and molecular modeller by training, with a keen interest in exploring how data science can make drug discovery more efficient.
He obtained his Ph.D. in Theoretical Chemistry from the University Louis Pasteur of Strasbourg in 2003, in Prof. Martin Karplus’ group under the supervision of Prof. Annick Dejaegere. After a year of postdoctoral work on structural aspects of bioinformatics, he joined Novartis’ CADD group in Basel, where he delved into cheminformatics and machine learning through their application in MedChem projects.
He provides MedChem project support across different Disease Areas. Since 2020, he has led the Project Digital Lead initiative, which fosters Data Science and Digital Tools literacy in the Medicinal Chemistry community at NIBR. He also provides CADD support to MedChem projects and to a machine learning collaboration applied to chemistry. Finally, he co-leads, at the MedChem project application level, the Generative Chemistry exploration collaboration between Microsoft Research and Novartis.

David Leslie

Ethics Lead

The Alan Turing Institute

Dr David Leslie is the Ethics Theme Lead within the public policy programme at the Alan Turing Institute. He is the author of the UK Government’s official guidance on the responsible design and implementation of AI systems in the public sector, Understanding artificial intelligence ethics and safety (2019), and a principal co-author of Explaining decisions made with AI (2020), co-badged guidance on AI explainability published by the Information Commissioner’s Office and the Alan Turing Institute. He is PI of a UKRI-funded project, PATH-AI: Mapping an Intercultural Path to Privacy, Agency and Trust in Human-AI Ecosystems, a research collaboration with RIKEN, Japan, as well as of a BEIS- and GPAI-funded research project entitled Advancing Data Justice Research and Practice. David is also lead author of “Does AI stand for augmenting inequality in the COVID-19 era of healthcare?” (2021), published in the British Medical Journal, and lead author of Artificial intelligence, human rights, democracy, and the rule of law (2021), a primer translated into Dutch and French. His other recent publications include the Harvard Data Science Review articles “Tackling COVID-19 through responsible AI innovation: Five steps in the right direction” (2020) and “The Arc of the Data Scientific Universe” (2021), as well as Understanding bias in facial recognition technologies (2020).

Before joining the Turing, David taught at Princeton’s University Center for Human Values. Prior to teaching at Princeton, David held academic appointments at Yale and at Harvard, where he received over a dozen teaching awards. He now serves as an elected member of the 9-person Bureau of the Council of Europe’s Ad Hoc Committee on Artificial Intelligence (CAHAI). He is on the editorial board of the Harvard Data Science Review and is a founding editor of the Springer journal, AI and Ethics.

Brent Mittelstadt, Ph.D.

Senior Research Fellow

University of Oxford

Brent Mittelstadt is a Senior Research Fellow in data ethics at the Oxford Internet Institute. He is an ethicist and philosopher focusing on auditing, interpretability, and ethical governance of complex algorithmic systems, as well as data protection, privacy, and non-discrimination law. He coordinates the Governance of Emerging Technologies (GET) research programme at the OII, which investigates ethical, legal, and technical aspects of AI, machine learning, and other emerging technologies.

Carlos Maria Galmarini, M.D., Ph.D.


Founder

Topazium Artificial Intelligence

Carlos María Galmarini is Topazium’s founder. He earned his Medical Degree from the University of Buenos Aires (Argentina), after which he completed an internship in clinical oncology at the Marie Curie Cancer Center (Buenos Aires). He pursued two post-doctoral fellowships in oncology, at the City of Hope Cancer Center in Los Angeles (USA) and at the Léon Bérard Cancer Center (Lyon, France). He obtained his PhD in oncology and his qualification to lead research teams (HDR) at Claude Bernard Lyon-1 University (Lyon, France). Carlos María practiced medicine for several years at the Marie Curie and Carlos G. Durand Hospitals in Buenos Aires. He has also held positions as a medical researcher in oncology at the Lyon Rockefeller Medical School and the Lyon-Sud Medical School (France), where he was appointed Associate Professor. In 2008 he joined PharmaMar (Madrid, Spain) as Head of Cell Biology and Pharmacogenomics, where he was in charge of the drug screening, biochemistry, molecular biology and pharmacogenomics areas. He has authored over 130 publications in leading international scientific journals. In recent years, he has devoted his research to the application of artificial intelligence in healthcare, molecular biology and other related medical fields, particularly the generation of diagnostic-assistance and predictive tools applied to clinical trials and real-world data.


Keynote Presentation
Artificial Intelligence in Health: Sky is the Limit
At present, we are facing an existential global health crisis: the outbreak of the novel coronavirus disease (COVID-19). This symposium describes how data science and artificial intelligence (AI) can be applied in digital transformation to tackle major clinical problems and diseases. These digital technologies include the internet of things (IoT) with next-generation telecommunication networks (e.g., 5G), big data analytics and AI using deep learning. They are highly interrelated: the proliferation of IoT devices and instruments in hospitals and clinics enables the establishment of a highly interconnected digital ecosystem, allowing real-time data collection at scale, which can then be utilized by AI and deep learning systems to understand healthcare trends, model risk associations and predict outcomes. This symposium shares the principles, concepts and examples of data science, AI, machine learning and deep learning in tackling various unmet clinical needs during the COVID-19 pandemic crisis.
AI for Automated Discovery
Designing and Combining Automated Synthesis Platforms
Automated chemical synthesis platforms can significantly accelerate and standardize the development and study of organic chemistry reactions and processes. One limitation of the general approach, however, is the design of custom systems for specific targets or processes, which requires physical reconfiguration of the system to perform the next “unique” process. Several approaches to the design and execution of automated synthesis also exist, and these different approaches provide an opportunity for integration, allowing systems to work on either collaborative or disparate problems. In this talk we will discuss the design and development of an automated platform for chemical synthesis and study using a radial arrangement of continuous flow modules, which by design have in-built flexibility and efficiency due to the decoupling of sequential steps. We will then discuss how this system can fit into a broader future utilizing a variety of synthesis platforms.
AI for Drug Design: Where Are We Now?
Artificial intelligence has become impactful in chemistry and the life sciences during the last few years, pushing scientific boundaries forward, as exemplified by the recent success of AlphaFold2. In this presentation I will provide an overview of how AI has impacted drug design in the last few years, where we are now and what progress we can reasonably expect in the coming years. The presentation will focus on deep learning-based molecular de novo design; however, aspects of synthesis prediction, molecular property prediction and chemistry automation will also be covered.
Generative Models
Representing and Optimizing Small Molecules and Biologics
Given adequate training data, machine learning models trained on experimental outcomes enable virtual screening and inverse design of therapeutic (macro)molecules. However, activity data is typically expensive and slow to acquire. Thus, finding representations of (macro)molecular structure that allow learning structure-activity relationships in a fast, robust and data-efficient way is key. Here, we will show our recent work in representation learning and design of therapeutics. In particular, we will report 3D and 4D deep learning models that leverage conformation information to achieve better transferability. Furthermore, we will describe hierarchical representations for biomacromolecules. By combining cheminformatics-like descriptors with linear and graph representations of the macromolecule, we will describe how it is possible to generalize similarity metrics, inspired by sequence alignment, as well as structure-property relationships, to macromolecules of any topology and monomer composition. These representations allow generating state-of-the-art cell-penetrating peptides that boost unassisted delivery of antisense polymorpholino oligonucleotides by 50 and surpass the best training data points by 100% in activity.
Learning to Control AI Models for Accelerating Discovery
Scientific discovery is one of the primary factors underlying the advancement of the human race. However, the traditional discovery process is slow compared to the growing need for new inventions. In this talk, I will present a closed-loop paradigm to accelerate scientific discovery, which can seamlessly integrate machine learning, physics-based simulations and wet-lab experiments, enabling new hypothesis and/or artefact synthesis and validation. The development of novel deep generative models and black-box optimization methods for designing novel antimicrobials, drug candidates and functional metamaterials will be discussed. Finally, I will discuss the importance of adding crucial aspects, e.g. creativity, robustness and interpretability, to infuse elements of trust into machine learning models in order to enable and add value to AI-driven discovery.
Augmenting Drug Hunters with Generative Chemistry Models
Small-molecule drug discovery is a multi-objective optimization problem in which finding the next drug candidate depends on various characteristics of compounds, including efficacy, pharmacokinetics and safety. In the design process of small-molecule drugs, medicinal chemistry project teams routinely face this complex multidimensional optimization challenge. Given the massive size of the relevant 'chemical space' (estimated to contain up to 10^60 drug-like molecules), the key question for medicinal chemists is: “What is the best compound to make and test next?” While humans are extremely good at understanding the bigger picture, computers and algorithms are potentially much better at generating and evaluating a large body of complementary solutions to problems such as the described multidimensional optimization. Novartis has partnered with Microsoft Research to explore the potential of Generative Chemistry in “real life” conditions. We have built an in silico decision-support system that assists medicinal chemists in multi-objective compound design, selection and prioritization. Unlike humans, GenChem is not biased by past experience; it thus complements the chemist’s experience with independent, data-driven ideas. This presentation will describe the computational workflow being developed. It combines a diverse set of generative models to suggest novel, high-quality compounds. The compounds are optimized in a continuous latent space to fit a pre-defined property profile. Predictive models and scoring functions guide the generation of promising candidate molecules, and a post-processing workflow annotates the generated molecules, providing the medicinal chemistry teams with optimised compounds that can be selected and prioritised for synthesis.
The Clinical Validation of AI for Next Generation Medicine
The emergence of AI and digital medicine has led to promising advances along the entire continuum of therapeutic development, spanning discovery, development and administration. Each of these segments includes interventional and diagnostic platforms that, when seamlessly integrated, can optimize and streamline how novel therapies are taken to patients. Examples of these technologies include AI optimization platforms for combination therapy design, diagnostic platforms for patient-drug matching and treatment guidance, treatment monitoring and other methodologies that can collectively redefine how medicines are made. This session will provide a unique look at field-defining AI-based interventional and diagnostic technologies already being validated and/or deployed in clinical settings. Importantly, it will feature clinicians and technology developers who have successfully taken their AI-driven innovations into first-in-kind trials. Additional insights that are vital to broad AI deployment (e.g. regulatory engagement, implementation sciences, healthcare economics, policy and beyond) will also be explored.
Machine Learning in Drug Discovery
ML Analysis of Histological and Genomic Data from Patients
Cancer neoplasms are heterogeneous diseases with multiple molecular and histological aspects. Cancer heterogeneity is one of the main factors explaining the failure of many treatments in oncology. Molecular subtypes are defined today using RNA profiling alone, which is limited: it requires high-quality samples in large quantities and is too slow to be used in routine care. In addition, tumours are a mixture of several subtypes, and deconvoluting these subtypes using bulk transcriptomic approaches is not always feasible, thereby limiting their clinical use. We are thus developing deep learning models combining histology images and omics data to predict patient prognosis and diagnosis. In particular, our models can define innovative patient subgroups based on novel cellular and/or molecular biomarkers, further improving the stratification of cancer patients. These biomarkers can be used to identify genes and proteins associated with patient prognosis that could serve as novel therapeutic targets.
Machine Learning in Drug Discovery
A fundamental property of cells is that they make decisions adapted to their internal state and surroundings. This context-aware behavior requires the processing of large amounts of information, but it is unclear how cells can reliably achieve this using heterogeneous signaling responses. To study the information processing capacity of human epithelial cells, we apply epidermal growth factor stimulation combined with multiplexed quantification of signaling responses and multiple markers of the cellular state across multiple spatial scales. We find that signaling nodes in a network display adaptive information processing, which leads to heterogeneous growth factor responses and enables nodes to capture partially non-redundant information about the cellular state. Collectively, as a multimodal percept, this provides individual cells with a large amount of information to accurately place growth factor concentration within the context of their cellular state and make cellular state-dependent decisions. We propose that heterogeneity and complexity in signaling networks have co-evolved to enable specific and context-aware cellular decision making in a multicellular setting.
Cell Morphology-based Machine Learning Models for Human Cell State Classification
We present machine learning architectures that ascertain models differentiating healthy from apoptotic cells using exclusively forward (FSC) and side (SSC) scatter flow cytometry information. We discuss and highlight differences in classifier performance and compare the results to the standard practice of forward and side scatter gating, typically performed to select cells based on size and/or complexity. We demonstrate that our model, a ready-to-use module for any flow cytometry-based analysis, can provide automated, reliable and stain-free classification of healthy and apoptotic cells using exclusively size and granularity information. Additionally, we present two different cell morphology-based machine learning approaches that can be used to identify cells that harbor custom CRISPR-based genetic modifications.
The Shape of Things to Come. Predictive Models of Cancer Fate Fuelled by Image-'omics
The genomics era generated unprecedented biological insights. In particular, genomics has established unambiguous correlations between phenotypes and sequence variation in specific genes. This has been particularly true in cancer biology, where individual mutations, and combinations of mutations, are associated with disease. However, we still have little mechanistic understanding of how the interaction of genes with each other, and with their environment, leads to different cellular fates. Insight into these interactions has implications far beyond cancer; it will impact our textbook understanding of biology as well as improve human health. To understand how genes and cell shape interact, we are building predictive frameworks in which we have embedded detailed ‘omics data, allowing us to compute predictive outcomes. These frameworks describe how the shape of cells interacts with genes to determine cell fates, for example, how stem versus senescent cells emerge in cancer populations. We test whether our theories are right by leveraging our ability to make parallel measurements of the dynamic behaviour of individual components in the system. Importantly, these models span the nanoscale (mutation, amino acid modifications, protein conformation), the microscale (the interaction of proteins in space and time) and the macroscale (cell geometry and tissue organization). Taken together, these findings present opportunities to ‘toggle’ cells between different states for therapeutic benefit or bioengineering; for example, inducing a toggle between states may increase the efficacy of established therapies or promote reprogramming.
Responsible Data Science for Health and the Life Sciences in the COVID-19 Era
Open to view video.
During the first two waves of COVID-19, data scientists were put under an unprecedented amount of pressure to produce rapid-response insights that could assist clinicians, epidemiologists, and health officials to tackle the pandemic. On the whole, this stress test yielded mixed results. Often pushed beyond the limits of their normal practices, researchers faced myriad challenges around data management, data quality, methodological interoperability, model reporting and validation, consent, algorithmic bias, and interpretability. In this talk, I explore this range of issues, and I offer some constructive steps toward building resilience and readiness in the data science community through the development of responsible data innovation practices and protocols.
Bias Preservation in Machine Learning: The Meaning of Fairness in Medical AI
Open to view video.
Concerns over fairness and bias have marked much discussion around the ethics of AI. Ensuring fairness in algorithmic technologies supporting drug discovery, patient diagnostics, and clinical decision-making is critical to close existing gaps in access to healthcare and biomedical research. Western societies are marked by diverse and extensive biases and inequality that are unavoidably embedded in the data used to train machine learning. Algorithms trained on biased data will, without intervention, produce biased outcomes and increase the inequality experienced by historically disadvantaged groups. Recognising this problem, much work has emerged in recent years to test for bias in machine learning and AI systems using various fairness and bias metrics. Often these metrics address technical bias but ignore the underlying causes of inequality and take for granted the scope, significance, and ethical acceptability of existing inequalities. In this talk I will introduce the concept of “bias preservation” as a means to assess the compatibility of fairness metrics used in machine learning against the notions of formal and substantive equality. The fundamental aim of EU non-discrimination law is not only to prevent ongoing discrimination, but also to change society, policies, and practices to ‘level the playing field’ and achieve substantive rather than merely formal equality. Based on this, I will introduce a novel classification scheme for fairness metrics in machine learning based on how they handle pre-existing bias and thus align with the aims of substantive equality. Specifically, I will distinguish between ‘bias preserving’ and ‘bias transforming’ fairness metrics. This classification system is intended to bridge the gap between notions of equality, non-discrimination law, and decisions around how to measure fairness and bias in machine learning. Bias transforming metrics are essential to achieve substantive equality in practice.
To conclude, I will discuss how to choose appropriate metrics to measure bias and fairness in practice.
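The mechanical difference between two widely used fairness metrics can be made concrete on a toy cohort (the data and grouping below are invented). Which metrics fall on which side of the bias-preserving/bias-transforming classification is the argument of the talk itself; the snippet only shows what each metric conditions on:

```python
# Invented toy cohort: (group, observed_label, model_prediction).
records = [
    ("A", 1, 1), ("A", 1, 1), ("A", 0, 0), ("A", 0, 1),
    ("B", 1, 0), ("B", 1, 1), ("B", 0, 0), ("B", 0, 0),
]

def positive_rate(rows, label=None):
    # Share of positive predictions, optionally restricted to one
    # observed-label stratum.
    selected = [r for r in rows if label is None or r[1] == label]
    return sum(r[2] for r in selected) / len(selected)

group_a = [r for r in records if r[0] == "A"]
group_b = [r for r in records if r[0] == "B"]

# Demographic parity compares positive-prediction rates between groups
# without any reference to the observed labels.
dp_gap = abs(positive_rate(group_a) - positive_rate(group_b))

# An equalized-odds-style check compares true-positive rates: it
# conditions on the observed labels, so any historical bias encoded in
# those labels is treated as ground truth and carried forward.
tpr_gap = abs(positive_rate(group_a, label=1)
              - positive_rate(group_b, label=1))
```

Conditioning on observed labels is exactly the step at which pre-existing bias in the labels can be preserved, which is why the choice of metric is not a purely technical decision.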
From Hippocrates to Artificial Intelligence: Moving Towards a Collective Intelligence
Vendor Scientist Roundtable
Open to view video.
“A physician must insert wisdom into medicine and medicine into wisdom.” (Hippocrates, physician, 460 BC – 370 BC). Modern medicine is based upon the work of Hippocrates and his disciples. In the Hippocratic tradition, the ability to diagnose, predict, and treat specific diseases stemmed from a combination of solid scientific knowledge and being able to intuitively perceive the patient in his/her entirety. Hippocrates believed every doctor should view the patient as a unique physical, mental, and spiritual being. Knowing and understanding the illness alone was not enough without a similar level of knowledge about the patient (a concept called “the art of medicine”). The Hippocratic tradition transformed medicine into a real, experience-based science. Doctors in today’s world continue to use the same methodology described by Hippocrates, but they also have access to new, vast, and valuable sources of information. In fact, laboratory tests, imaging, DNA sequencing, molecular pathology, and the technological advances of hyper-connectivity allow for the analysis of new individual health characteristics. It has become impossible for doctors to process all this information and glean useful knowledge from it. As a result, the ancient mix of knowledge and wisdom is being lost: technology dominates over the “art” and physical facts over the human. It is difficult to discern if we are treating people or diseases. Until now, the tools needed to analyse this massive amount of data and produce meaningful knowledge from it did not exist. But with artificial intelligence (AI), this is changing at a dizzying speed. AI-powered systems can now process huge amounts of data, generating information that facilitates the creation of new knowledge. AI, however, instills fear and gives rise to all sorts of apocalyptic predictions. On the contrary, in the field of medicine, artificial intelligence should be viewed as a tool that can improve medical work.
The term “augmented intelligence” is more appropriate to describe the true role of AI, given that this technology is designed to improve human intelligence, not supplant it. Indeed, AI cannot create wisdom, which is nothing other than the correct way of using the knowledge gained. Therefore, AI cannot replace a doctor’s professional opinion. The advent of AI in medicine should rather be viewed as a paradigm shift: medicine will evolve into a collaborative work environment where machines and humans interact in medical decision making. Doctors will have more time to perfect their “art of medicine”, spending more time interacting with their patients, analysing more complex situations, and deciding the course of action to follow. As Hippocrates stated, different patients have different needs, and humans can better respond to these needs. The combination of human and artificial intelligence will create a new type of collective intelligence capable of solving problems that until now have been unfathomable to the human mind alone. Finally, it is worth remembering that fact-based sciences are divided into natural and human disciplines. Medicine occupies a special place, straddling both. It can be difficult to establish the similarities between a doctor who works, for example, with rules defined by specific clinical trials and a traditional family practitioner. The former would be more related to a natural science, and the latter to a more human science – “the art of medicine.” The combination of human and artificial intelligence in a new type of collective intelligence will enable doctors themselves to be a combination of the two. In other words, the art of medicine – human science – based on the analysis of big data – natural science. A new collective intelligence working on behalf of a wiser medicine. Patients deserve nothing less.