The 11th Virtual World Congress (WC11) 3Rs in Transition from Development to Application

Maastricht, The Netherlands.

Read our report on the 11th Virtual World Congress (WC11) of 3Rs in Transition from Development to Application. Use the Index links below to jump to each section.

23/08/21 – 02/09/21

The 11th Virtual World Congress Index:

Day 1

S161: Validation redefined: needs and opportunities for the validation of (r)evolutionary non-animal approaches for chemical safety assessment (Theme: Safety)

S111: Modern, Mechanistic Approaches to Cancer Risk Assessment (Theme: Safety)

Day 2

KEYNOTE: Donald Ingber – Human Organ Chips: From Experimental Models to Clinical Mimicry

S115: Scientific highlights in emulating human biology on chips (Theme: Innovative Technologies)

Day 3

S199: Can non-animal models identify environmental endocrine disruptors? (Theme: Innovative Technologies)

KEYNOTE: Jason Ekert – Accelerating The Development and Adoption of Complex In Vitro Models in Early Drug Discovery

S105: The role of clinical research on the understanding and treatment of diseases (Theme: Disease)

Day 4

S118: A global movement to improve science using animal-free antibodies (Theme: Disease)

S117: Personalized medicine through human organoid models (Theme: Innovative Technologies)

Day 5

S314: Applications of New Approach Methods in Genotoxicity and Developmental Toxicity Testing

Day 6

S162: Novel cell-based technologies for predicting drug-induced liver injury (Theme: Innovative Technologies)

S84: Beyond the 3Rs: Expanding the Use of Human-Relevant Replacement Methods in Biomedical Research (Theme: Disease)

Day 7

S313: Roadmap to integration (Theme: Safety)

S300: (Multi-)organ models-1 (Theme: Innovative Technologies)

Day 8

S154: NASH, the liver disease of the 21st century? Alternative technology in the spotlights (Theme: Disease)

S305: (Multi-)organ models-3 (Theme: Innovative Technologies)

Day 9

KEYNOTE: Joseph Wu – Stem Cells & Genomics For Precision Medicine

Abbreviations

S161: Validation redefined: needs and opportunities for the validation of (r)evolutionary non-animal approaches for chemical safety assessment (Theme: Safety)

This session saw talks from six presenters on the general topic of validating new approach methods (NAMs). Aldert Piersma of the Center for Health Protection, RIVM, kicked off the session. He first gave an overview of the classical validation process for alternative models. Traditionally, a diverse set of chemical structures is used to define the chemical applicability domain for validation. In most cases, the validation of alternative methods presumes a one-to-one replacement for in vivo tests. It is limited to 10-50 chemicals, and predictivity is largely dependent on the test chemicals chosen. The reductionist nature of alternative assays limits their predictivity and hampers their chances of validation, while bottom-up approaches still leave uncertainty.

He focused on how adverse outcome pathway (AOP) networks can be modified to aid validation. These pathways are generally a unidirectional representation of the route through the body from compound administration to clinical outcome. They include basic molecular components in the form of molecule-cell-tissue-organ-organism physiology. Each organizational level has specific tests associated with it, which the absorption, distribution, metabolism, and excretion (ADME) model undergoes. The data from these tests are used to create in silico models of human biology for detailed safety profiles. Aldert says that to get the full understanding we need to include the full toxicological pathways, including the likes of homeostatic feedback loops. Toxicological models can be made from the biology of the AOP system, using the data to create computational vascularization models, for example, which have identified the toxic effects of thalidomide exposure on vascularization.

He then talked us through the test system requirements, which fall under four headings. The biological domain describes the biology of the system through mechanisms of action (MOAs), AOPs, and the endpoints measured. Technical performance looks at the compatibility of the test system with standardization and transferability, and at the variability within the test system. The chemical domain, as you can imagine, deals with the properties of the chemical, such as solubility and volatility. The final heading is sensitivity/specificity, which involves validating the test battery against in vivo toxicity with sufficient biological and toxicological mechanistic coverage. Additionally, dynamics and kinetics have to be explored: dynamics is tested in silico, looking for molecular targets at each level of cellular organization, while kinetics looks at ADME characteristics at the target exposure in the ADME model. These models take a different standpoint; they go back to basics – look at the biology – rather than just adding on to the information we already have. The recent advances in artificial intelligence (AI) and big data have the potential to revolutionize these toxicological tests. Aldert finished his presentation by emphasizing the need to validate each individual assay for fit-for-purpose applications.
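
Since an AOP network is essentially a directed graph running from a molecular initiating event (MIE) through key events to an adverse outcome (AO), a small sketch can make the structure concrete. The pathway below is a simplified, hypothetical example (the node names are illustrative, not taken from the talk); Python is used here and in the other sketches in this report.

```python
# A minimal sketch of an AOP network as a directed graph.
# Node names are hypothetical, for illustration only.
aop = {
    "receptor binding (MIE)": ["altered gene expression"],
    "altered gene expression": ["impaired vascularization"],
    "impaired vascularization": ["tissue malformation"],
    "tissue malformation": ["developmental toxicity (AO)"],
}

def paths(network, node, trail=()):
    """Enumerate every route from `node` to a terminal adverse outcome."""
    trail = trail + (node,)
    downstream = network.get(node, [])
    if not downstream:          # terminal node: an adverse outcome
        yield trail
    for nxt in downstream:
        yield from paths(network, nxt, trail)

for p in paths(aop, "receptor binding (MIE)"):
    print(" -> ".join(p))
```
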
The next presentation was from Maurice Whelan of the European Commission's Joint Research Centre (JRC). He first drew attention to the Tracking System for Alternative methods towards Regulatory acceptance (TSAR), a very useful information portal holding a collection of validation study reports from both failed and accepted studies, which serves to prevent researchers from repeating failed studies. This and other validation resources are freely available via the website of the EU Reference Laboratory for alternatives to animal testing (EURL ECVAM). Encouragingly for non-animal approaches, the Organisation for Economic Co-operation and Development (OECD) has approved a combination of an in vitro and a computational model for testing chemicals, which also provides a guideline for researchers on combining approaches. This study was validated by a ladder validation process and saw a thorough investigation of the test model over three stages: internal lab validation, OECD validation, and US Food and Drug Administration (FDA) validation.

Current JRC projects include the introduction of 18 new methods into 14 of their labs to develop a reproducible thyroid disruptor battery; their Scientific Advisory Committee (ESAC) has been working on publishing a review of the genomic allergen rapid detection (GARD) skin sensitization testing methods; they are working with the OECD to provide guidance on how to assess the potential developmental neurotoxic (DNT) effects of chemicals using NAMs (specifically AOP networks, in vitro test batteries, and IATA); and they have been working on the validation of physiologically based kinetic (PBK) models built from ADME models. He mentioned that the JRC has an interest in validating organ-on-chip (OOC) models, but at present their lack of standardization is an obstacle. They want to improve standardization in the characterization of models and in definitions, and to introduce standards for engineering, materials and data management, best practices, and biological performance; they organized a workshop to that effect in collaboration with EUROoCS and CENELEC in April of this year.

The JRC also performed a meta-analysis of validation and scientific credibility, finding that validation is already embedded in the process of seeking scientific credibility. The concepts and terms vary between contexts, but the validation principles can be mapped onto seven credibility factors: assumptions confirmed, quantitative concordance, external consistency, explanatory power, simplicity, internal consistency, and qualitative concordance. Finally, he highlighted the areas that require additional attention. We need to work on integrating NAMs, as well as benchmarks from completed studies using animal, human, in vitro, and in silico data. The use of mechanistic reasoning in the validation of NAMs could build confidence in the results from NAM studies, which is needed for increased uptake of NAMs in research.
Rebecca Clewell from 21st Century Toxicology Consulting took over from there to talk about new paradigms in chemical testing and the validation approaches they call for. The future of risk assessment lies with in vitro assays, which are not all that different from in vivo assays: both perform dose-response assessments to obtain a point of departure (POD) for the drug. Additionally, quantitative in vitro to in vivo extrapolation (QIVIVE) is used to translate in vitro results to a human-equivalent dose. Validation strategies rely on defined MOAs and recapitulation of specific phenotypes. Results from responses to a selection of controls are compared to animal data, and reproducibility is also assessed before a method is sent for validation. Many of the assays are not complex enough to recapitulate the phenotypes of interest and, more often than not, are low throughput. Over an 11-year period, only 28 animal-alternative assays have been approved for regulatory use, with ocular toxicity assays having the most approvals.

To speed up this process, she pointed to the required comparison against animal assays as a major consumer of time that serves little purpose, as animal data has itself proven unreliable. NICEATM evaluated rodent use (the current gold standard) against in vitro approaches for skin sensitization and confirmed that the in vitro assays were just as good, if not better in most cases. To improve the validation process, AOP, IATA, and defined approach (DA) strategies are being employed to focus on the biology of the systems being tested, and performance-based standards provide quick validation of approaches that are very similar to already validated ones. Rebecca also proposed that less focus be put on the number of control compounds and animals used, and that this focus be redirected to the inclusion of dose-response feedback pathways in assays.

The estrogen receptor pathway study by the US EPA was the example she gave for this. It used ToxCast data across 16 estrogen receptor assays and identified 133 active compounds. They wanted to develop a model that could be used to find the POD for a quantitative risk assessment. The medical literature shows that ERβ is a negative regulator of cellular proliferation, whereas ERα is a positive regulator, but ERα-36, found in the uterus, has the opposite effect. It is these receptor interactions that determine the dose-response curve and the POD: the monotonic response is in the positive direction with positive regulators and in the negative direction with negative regulators. They used this to develop a model using human uterine epithelial cells, which was then validated against the in vivo human response and shown to be a more sensitive assay.

She then gave two more examples of this approach, for adipogenesis and spermatogenesis. The adipogenesis assay matured pre-adipocytes over 8 days into adipocytes that recapitulate the phenotype, allowing prediction of the therapeutic index. The spermatogenesis assay identified that toxicity to spermatogonia arises by direct or indirect routes, which need to be included in in vitro assays. For their study of human tissue versus rats, they had to substitute non-human primate (NHP) tissue for their human stem cell tissue because of COVID-19 supply issues. They saw a qualitative difference in response and quantitatively determined that the NHPs/humans showed no decrease in spermatocyte markers at doses at which rats showed decreased markers.
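
The ER pathway example above boils down to fitting a concentration-response curve and reading a POD off it. Here is a minimal sketch of that step using a Hill curve; the data points and the 10% benchmark response are invented for illustration and are not the EPA's actual model.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, top, ec50, n):
    """Hill curve: fractional response as a function of concentration."""
    return top * conc**n / (ec50**n + conc**n)

# Synthetic dose-response data (illustrative only), concentration in uM.
conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0])
resp = np.array([0.02, 0.05, 0.18, 0.42, 0.70, 0.88, 0.95])

(top, ec50, n), _ = curve_fit(hill, conc, resp, p0=[1.0, 0.5, 1.0])

# A simple activity-concentration POD: the concentration producing a 10%
# response, solved analytically from the fitted Hill equation.
target = 0.10
pod = ec50 * (target / (top - target)) ** (1.0 / n)
print(f"EC50 = {ec50:.3f} uM, POD(10%) = {pod:.3f} uM")
```
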
Carl Westmoreland was next up, from the Safety and Environmental Assurance Centre at Unilever. His talk focused on the role of next-generation risk assessment (NGRA) in validation. NGRA is increasingly used in cosmetic safety decisions, where the margin of safety (MOS) is very important. The MOS is based on in vitro bioactivity-derived PODs from NAM and animal test batteries; when compared, NAM data give a more conservative MOS. In general, the chemicals used in the pharmaceutical industry are high risk, while the majority of food and cosmetic chemicals are low risk. The MOS can also be expressed as a distribution to show uncertainty. A SARA mathematical model and benchmark information were used to combine data on biological sensitivity for a skin sensitization study, which classified a collection of chemicals based on the margin of exposure. The general trend was that a chemical with a larger MOS was more likely to be low risk for causing skin sensitization; shampoo, for example, was identified as causing no skin sensitization.

NGRA is a hypothesis-driven approach. The fundamental principle of the NGRA framework is striving for the protection of human health, not prediction. It works under the hypothesis that if no bioactivity is observed at consumer-relevant concentrations, then no adverse health effects can occur. NGRA does not attempt to predict the results of toxicology studies in animals; it merely uses the bioactivity data to form a better understanding of human biology using new exposure science. The aim of this project was to compare PODs from high-throughput predictions of bioactivity, exposure, and hazard information across a large selection of chemicals. They assessed these predictions in a data-focused way, using a large toolbox of assays to evaluate their application in risk assessment. They used two sets of approaches, and both could calculate the MOS distribution (the fold difference between the Cmax and the in vitro POD). Toolset evaluation is ongoing, aiming to maximize synergy with other evaluation activities such as EU-ToxRisk, RISK-HUNT3R, Cosmetics Europe, and the EPA.

He emphasized the need to validate non-standard assays and computational methods to continue harnessing new approaches. Aspects of reproducibility and transferability are already part of the modular approaches to validation. The evaluation of NGRA has to be in the context of combining different bioactivity and exposure data to give reproducible and transparent safety decisions. This requires documented safe/non-safe histories of exposure scenarios, risk assessments, or chemical screening in humans on which to test the framework. This evaluation of MOS values derived from NGRA approaches will add to what we already know about the degree of protection afforded by prediction based on human exposure and biology, rather than on dose effects in animals, which in turn will help to increase confidence in the method.
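
Since the MOS distribution is just the fold difference between the in vitro POD and the predicted plasma Cmax, it can be sketched with a few lines of Monte Carlo sampling. The distributions and the flagging threshold below are assumed values for illustration, not Unilever's actual figures.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative lognormal uncertainty distributions (assumed values):
# in vitro POD and predicted plasma Cmax, both in uM.
pod  = rng.lognormal(mean=np.log(10.0), sigma=0.5, size=100_000)
cmax = rng.lognormal(mean=np.log(0.05), sigma=0.8, size=100_000)

mos = pod / cmax   # fold difference between bioactivity POD and exposure

p5, p50 = np.percentile(mos, [5, 50])
print(f"median MOS = {p50:.0f}, 5th percentile = {p5:.0f}")
# A risk assessor might flag a chemical when the lower tail of the MOS
# distribution falls below a protective threshold (e.g. 100).
```
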
Peter Bos of RIVM then took over to speak about temporal characteristics in combining methods for human exposure scenarios. Exposure situations are defined by the duration of exposure, the rate of exposure, and the type of chemical. The current EU framework requires that potential human health risks be evaluated in animal experiments. These models over-simplify the exposure scenarios: where exposure in humans can be intermittent, the animal models experience exposure once, at a daily average dose. This requires additional conversion steps to translate to humans; for example, a 6 h exposure period in animals could convert to as much as 24 h for humans. Animal models also do not cover all disease types, such as the relationship between pesticide exposure and Parkinson's disease. NAMs overcome these limitations, specifically improving risk assessment through the inclusion of real-time exposures and combined exposures; NAMs can generate data that animals cannot. Peter mentioned that researchers often focus solely on creating models with the same data capabilities as animals for replacement, when these NAMs are capable of more. One of the main challenges is the inclusion of temporal characteristics in NAMs, as Haber's rule (the assumption that concentration multiplied by exposure time gives a constant toxic effect) is not a toxicological law and can cause mistakes in the estimation of health risks. Many questions need to be answered before these temporal characteristics can be included with any certainty. New risk assessment paradigms require new validation strategies that include validation of single and combined approaches and prioritize human relevance.

The final speaker was Mark Cronin of Liverpool John Moores University, who talked to us about how to characterize uncertainty in NAMs and how to validate them as fit-for-purpose. To do this, he has been looking at in silico methods such as read-across, quantitative structure-activity relationships (QSAR), and AI/machine learning. There are guidelines in place for the statistical validation of QSAR, but many find them confusing; as a result, we see a lack of consistency in the validation of these and other models. "Uncertainty is the limitations in the knowledge that affect the probability of answers to an assessment question" – the European Food Safety Authority (EFSA). Mark and his colleagues have been characterizing and quantifying uncertainty to assist in model validation. They do this by identifying all the possible uncertainties in the in silico approaches, covering the key elements of each approach, including problem formulation, creation, and application. By identifying these uncertainties they can pinpoint the high-uncertainty elements and add additional data to reduce the uncertainty. For read-across, they identified 12 key elements of the process; 3 of these were of high uncertainty, which can be reduced by additional NAM data. They did the same for QSAR, defining 10 criteria that they deemed important risk factors to include in validation processes, along with the acceptable and unacceptable uncertainty levels for fit-for-purpose validation. For example, for an application in risk assessment all 10 criteria had to be low-uncertainty, whereas for screening or prioritization of compounds each criterion had a different acceptable level, ranging from low to high uncertainty. This is also being applied to machine learning, where it is being developed further, and it could be extended to virtual organs and organisms, though that has not been done yet.
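
The fit-for-purpose logic Cronin described – every criterion low-uncertainty for risk assessment, looser per-criterion limits for screening – is easy to express as a rule check. The criterion names and thresholds below are hypothetical stand-ins for the published ones.

```python
# Sketch of fit-for-purpose checking against uncertainty thresholds.
# Criterion names and levels are hypothetical stand-ins for the
# 10 published QSAR criteria.
ORDER = {"low": 0, "medium": 1, "high": 2}

model_uncertainty = {
    "endpoint definition": "low",
    "mechanistic interpretability": "medium",
    "applicability domain": "low",
}

acceptable = {
    # Risk assessment demands low uncertainty on every criterion.
    "risk assessment": {c: "low" for c in model_uncertainty},
    # Screening tolerates more uncertainty, criterion by criterion.
    "screening": {
        "endpoint definition": "medium",
        "mechanistic interpretability": "high",
        "applicability domain": "medium",
    },
}

for use, limits in acceptable.items():
    fit = all(ORDER[model_uncertainty[c]] <= ORDER[limit]
              for c, limit in limits.items())
    print(f"{use}: {'fit for purpose' if fit else 'not acceptable'}")
```
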

S111: Modern, Mechanistic Approaches to Cancer Risk Assessment (Theme: Safety)

The first to present was Warren Casey of the National Institute of Environmental Health Sciences. He talked about modernizing the carcinogenicity program within the National Toxicology Program (NTP). He first mentioned that the protocols that are used today were developed in 1976 and have yet to be updated. So we are relying on techniques from the 70s to protect public health in 2021 – it doesn’t make sense. Cancer is one of the most widely studied disease indications around the world which means we have an abundance of human data to create human models, whereas for other diseases this may not be possible. This data is publicly available from sources such as The Cancer Genome Atlas (TCGA), Pan-Cancer Atlas, and Pan-cancer Analysis of Whole Genomes (PCAWG).

He mentioned two ongoing projects that will be very helpful for this cause. The first is Applied Proteogenomics Organizational Learning and Outcomes (APOLLO), a project in which active-duty service members who have cancer have complete expression profiling performed on their tumor tissue. It is a collaboration of the National Cancer Institute, the Department of Defense, and the Department of Veterans Affairs.

The second is the Department of Defense Serum Repository (DoDSR), which involves active-duty members having their blood taken every two years (starting in 1989), and both before and after deployment. These specimens are stored with eHealth records and records of exposure, and the repository currently represents more than 11 million service members. When a member develops cancer, the recorded exposures can be examined to determine what caused it, enabling body-location-specific identification of carcinogens and revealing common sites for specific carcinogen-associated cancers.

Then Chris Corton, from the US Environmental Protection Agency (EPA), took to the virtual podium to discuss replacing the 2-year rat assays with a series of shorter NAM assays. The aim of the current research is to bridge the gap between the 2-year rat assays and the goal of completely replacing animals in a prospective manner. Short-term assays need to provide information such as chemical-dose relationships, the identification of combinations that cause tumors, and descriptions of the tumor's MOA.

Gene biomarker expression can be very useful for cancer diagnosis: the relationship between a list of genes and their fold-change values can indicate cancer. Biomarkers can also identify the mechanism of toxicity and the molecular initiating events (MIEs) in AOPs through transcriptomic profiling. They have identified 6 biomarkers to identify MIEs in mouse and rat livers, and these biomarkers had a balanced accuracy of >90%. They compared the gene lists to the 6 biomarkers to get a p-value, which separated the non-tumorigenic and tumorigenic groups. They found that the activation levels in the non-tumorigenic group defined a maximum threshold value, and the activation levels accurately predicted liver tumors. These activation levels, along with the biomarkers, have the potential to model liver tumorigens, and the activation levels of individual genes can also be used to predict cancer.
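Balanced accuracy, the figure quoted for the biomarkers, is simply the mean of sensitivity and specificity, which matters when tumorigenic and non-tumorigenic chemicals are unevenly represented. A minimal sketch with made-up classifier calls:

```python
# Balanced accuracy = mean of sensitivity and specificity.
def balanced_accuracy(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    positives = sum(y_true)
    negatives = len(y_true) - positives
    return 0.5 * (tp / positives + tn / negatives)

# Hypothetical calls: 1 = tumorigenic, 0 = non-tumorigenic.
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0]
print(f"balanced accuracy = {balanced_accuracy(y_true, y_pred):.2f}")  # 0.79
```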

This AOP-guided computational approach can be used to identify tumorigens and has the potential to bridge the gap between animal and non-animal model use. It also has potential for toxicity prediction using robust datasets of compounds with known toxicity status.

Kate Guyton from the US National Academies of Sciences, Engineering, and Medicine (NAS) took over to speak to us about the key characteristics of carcinogens and how they can be used for hazard identification. Current hazard identification relies on integrating evidence from cancer in humans, cancer in animals, and carcinogen mechanisms. By investigating the mechanisms of Group 1 carcinogens, they were able to identify that these carcinogens act through multiple mechanisms. Mechanistic data are not perfect, and these studies identified some challenges with their use. Questions that formed included: how to search systematically for relevant mechanisms? How to analyze the very large mechanistic database with ease? And how to avoid bias towards favored mechanisms?

A workshop saw talks on the 10 key characteristics (KCs) of human carcinogens. These reflect the chemical and biological properties of already established carcinogens. Data on these KCs can provide evidence of carcinogenicity, which can be used to assemble data relevant to these mechanisms without the need for a prior hypothesis about the mechanism involved. Ethylene oxide, for example, exhibits the first 4 of the 10 characteristics: it is electrophilic, it is genotoxic, it binds to DNA, and it can induce epigenetic alterations.

KCs enable a systematic approach to support hazard identification studies, allowing targeted searching of the databases for the chemical of interest. The 10 KCs support the long-standing IARC monographs and have now been integrated into the monograph process. Under this updated process, evidence integration is a single step, Group 2B is based on one stream of evidence, and Group 2A on two streams. KCs are now enabling prioritization of agents from most hazardous to least, and they prevent a narrow focus on specific pathways, ensuring broad consideration of the mechanistic evidence.
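One way to picture the KC framework is as a per-agent profile over the 10 characteristics that can be searched and ranked. The sketch below encodes the ethylene oxide example from the talk; the KC names follow the published key-characteristics framework, and "agent X" is a hypothetical comparator.

```python
# The 10 key characteristics (KCs) of human carcinogens, per the
# published KC framework.
KCS = [
    "is electrophilic or can be metabolically activated",
    "is genotoxic",
    "alters DNA repair or causes genomic instability",
    "induces epigenetic alterations",
    "induces oxidative stress",
    "induces chronic inflammation",
    "is immunosuppressive",
    "modulates receptor-mediated effects",
    "causes immortalization",
    "alters cell proliferation, cell death, or nutrient supply",
]

# KC profiles as sets of indices; ethylene oxide shows the first 4 KCs.
profiles = {
    "ethylene oxide": {0, 1, 2, 3},
    "agent X": {4, 7},            # hypothetical comparator
}

# Rank agents by how many KCs they exhibit (a crude prioritization).
for agent in sorted(profiles, key=lambda a: len(profiles[a]), reverse=True):
    hits = [KCS[i] for i in sorted(profiles[agent])]
    print(f"{agent} ({len(hits)} KCs): {'; '.join(hits)}")
```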

Next, Gina Hilton of the People for the Ethical Treatment of Animals (PETA) Science Consortium presented the rethinking carcinogenicity assessment for agrochemicals (ReCAAP) project. This project began because of the large number of animals used in food-use pesticide registration. The assays involved are both time- and resource-intensive: they require a minimum of 3 dose levels plus a control, tested on at least 50 male and 50 female rodents, which adds up to around 400 animals per assay and approximately 1,000 per chemical, tested over 18 months in mice or 24 months in rats. The aim of the project is to move away from the checkbox method of testing chemicals towards a weight-of-evidence (WoE) paradigm to support carcinogenicity testing, specifically for food-use pesticides. They needed to determine which assays are necessary to establish a substance's carcinogenicity without animal methods and without causing a data overload. The goal is to streamline and organize data in a way that improves these tests.

They have identified areas in the guidelines of the Australian Pesticides and Veterinary Medicines Authority (APVMA), Health Canada's Pest Management Regulatory Agency (PMRA), and the United States EPA that enable the use of this WoE paradigm. These agencies also offer the chance of a presubmission consultation early in the development of the chemical databases, so that submissions meet each agency's expectations and needs.

Their working group has developed case studies using publicly available information to evaluate the effectiveness of the ReCAAP reporting framework. Within these case studies they determined that it is critical to define the inclusion criteria that support the selection of chemicals; read-across was very useful for this and for the WoE analysis. Knowledge of a chemical's mechanism of action was deemed helpful for evaluating human relevance. The final element identified as critical was proposing risk estimates to gauge how health-protective the points of departure used for risk assessment are. The general findings from this framework, while targeted at agrochemical rodent tests, have potential applications for other endpoints.

Nathalie Delrue then took over to tell us about work within the OECD, specifically the development of an integrated approach to testing and assessment (IATA) for non-genotoxic compounds. The IATA for non-genotoxic carcinogens was included in the OECD Test Guideline Programme work plan in 2015. It is led by the UK and started from discussions of the fact that there are no validated alternatives for testing non-genotoxic carcinogens, and of the limitations of the rodent cancer assay. The cell transformation assay that they had been working on previously was not sufficient on its own, so a more comprehensive battery of tests is needed.

The OECD formed an expert working group to develop the IATA initiative. The IATA was developed based on cancer models and the AOP framework. It relies on a model of the general cancer pathway, which serves as the framework for developing assays for each key event within the IATA. The key events start at the molecular initiating event, then move through inflammation, immune response, mitogenic signaling, cell injury, sustained proliferation, and change in morphology, with tumor formation as the endpoint.

This working group is currently creating a database of assays in parallel with the IATA. The assays are grouped according to the 13 key events/cancer hallmarks, and the database is structured to give the critical information requirements for each assay. It focuses on in vitro assays, though in vivo assays that provide useful information are also included, but only for short-term use. Each assay is being evaluated against set criteria from the OECD test guidelines and ranking parameters.
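A plausible shape for such a database is a mapping from key event to assay records carrying the critical information fields. The entries below are hypothetical placeholders, not the working group's actual records.

```python
# Hypothetical structure for an assay database grouped by key event.
assay_db = {
    "sustained proliferation": [
        {"name": "proliferation marker assay",        # placeholder entry
         "system": "in vitro", "duration_days": 3,
         "meets_oecd_criteria": True},
        {"name": "short-term rodent proliferation study",
         "system": "in vivo", "duration_days": 28,
         "meets_oecd_criteria": True},
    ],
    "inflammation": [
        {"name": "cytokine release assay",
         "system": "in vitro", "duration_days": 1,
         "meets_oecd_criteria": False},
    ],
}

def candidate_assays(key_event, in_vitro_only=True):
    """Return qualifying assays for a key event, optionally in vitro only."""
    assays = assay_db.get(key_event, [])
    if in_vitro_only:
        assays = [a for a in assays if a["system"] == "in vitro"]
    return [a["name"] for a in assays if a["meets_oecd_criteria"]]

print(candidate_assays("sustained proliferation"))
```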

The working group is currently at the point of reviewing and assessing the assays, and the next steps will see the formation of the IATA with links between the assays. The IATA may not find use in the characterization of non-genotoxic substances, but it may prove useful in robust read-across assays. A guidance document for the IATA is expected to be developed shortly.

Lastly, Federica Madia of the European Commission JRC took the stage to present a methodological approach to integrating information. Cancer has been a priority of the JRC for many years, and at present it still kills approximately 4 million people in Europe alone; some of these deaths are preventable through reduced exposure to carcinogenic factors. Current regulations differ across sectors, and many require the use of 2-year rodent bioassays, which have questionable translatability to humans and are, moreover, expensive and time-consuming. These models also struggle to differentiate between genotoxic and non-genotoxic substances. These factors highlight the need for better use of toxicity studies to protect human health. The JRC has been looking at the move towards mechanistic key event data and novel methods, and away from the traditional apical toxicity endpoints. They aim to use already existing data to analyze several toxicological effects.

The approach focuses on stored information, looking back at in vivo study data, especially for complex systemic endpoints such as repeat-dose studies that use metrics like the no observed adverse effect level (NOAEL) and the benchmark dose (BMD). The information is being used to answer a few critical questions: is a specific test necessary? Is the full potential of the required in vivo tests being exploited? And what mechanistic data can be used to fill the gaps in knowledge? The approach uses the IARC KCs mentioned previously to organize information derived from many endpoints, building on information extracted from many tests, such as toxicokinetics, skin sensitization, and acute toxicity. A comparative analysis is performed to identify gaps in the distributed information; these can assist in identifying biomarkers that can serve as the basis for designing new approach methods. KCs in this context are being used for carcinogens, but they are also useful for male and female reproductive toxicants as well as endocrine disruptors, whose profiles can be compared to find overlaps where specificity is lower.

KEYNOTE: Donald Ingber – Human Organ Chips: From Experimental Models to Clinical Mimicry

Dr. Ingber started his keynote presentation by emphasizing the amount of time, money ($3 billion), and resources spent to develop and approve drugs, many of which never make it to market. A main cause of this high attrition is the FDA requirement for NHP studies in vaccine and biologics development: biologics are now so specific to humans that they do not (cross-)react with NHP and other animal cells. Many alternative models are available, such as the human OOCs his lab at the Wyss Institute is developing. These are created with microchip manufacturing techniques, using flexible rubber/silicone materials for the chip. The ultimate goal of his research is to recapitulate human physiology and disease to predict drug response.

The alveolus-on-chip has been developed to include the stretching associated with in vivo breathing motions, recreated by applying cyclic vacuum to either side of the flexible chip material. Interleukin-2 (IL-2) was used to induce pulmonary edema in the chip: on day 2 of treatment they saw the formation of a liquid meniscus in the airspace, which filled in over subsequent days. In chips without breathing motions they saw no IL-2 toxicity, indicating that breathing motions play a role in increasing drug toxicity. This OOC proved an effective drug efficacy model and influenced the passage of a TRPV4 inhibitor into phase 2 clinical trials for the inhibition of edema.

Their lung small-airway chip uses highly differentiated stem cells and can use patient-derived chronic obstructive pulmonary disease (COPD) cells for personalized models. On these chips they studied the effects of cigarette smoke and found that smoking doubled IL-8 expression in COPD patients. These smoking studies improved on human clinical studies because they allowed matched before-and-after comparisons, showing the gene expression changes associated with smoking alone; clinical studies can pick up gene expression changes from other exposures, i.e. they are less controlled.

They also have a cystic fibrosis model that has more ciliated cells and shows increased ciliary beating, mucus production, IL-8 release, Pseudomonas aeruginosa growth, and immune cell recruitment compared to normal lung models.

Their orthotopic lung cancer chip uses non-small cell lung cancer (NSCLC) cells grown at low density in a medium that only just keeps them alive; this showed that the growth of the cancer was driven by the microenvironment and not the medium. Breathing motions in this model suppressed cancer growth by 50%.

They can also model cancer invasion in their lab using green fluorescent protein (GFP), which allowed quantification and showed that breathing motions also inhibited invasion by 50%. The breathing motions regulated epidermal growth factor receptor (EGFR) signaling.

His intestine chip uses primary human intestinal organoids to form a villi-on-chip model. This chip recapitulates the duodenum at the transcriptional level better than organoids outside the chip and allows the formation of a mucus bilayer within the chip.

They can culture the living microbiome within the living colon chip forming hypoxia gradients on-chip.

Organoids-on-chip modeling environmental enteric dysfunction (EED) can mimic the nutritional conditions seen in Pakistan, where the disease is prevalent. By subjecting chips to nutritional deficiency, with nutritionally replete chips as controls, they showed that nutritional deficiency impairs the absorption of fatty acids and proteins and stunts the villi.

A vagina/cervix chip is being created to test probiotic-based therapeutics. They tested three formulas that showed decreased IL-6, IL-8, and IL-1 alpha.

They created bone marrow chips using patient-derived CD34 progenitor cells exposing them to clinically relevant doses to show normal neutrophil and erythroid maturation. They can also use this platform to model rare genetic disorders giving new insights into their MOAs.

A lymphoid follicle chip is being developed for vaccine testing. So far they can measure cytokine production from them.

Patient-specific liver chips are being used to recapitulate hepatocellular injury, steatosis, cholestasis, fibrosis, and species-specific toxicities with many compounds.

They are now looking to make body-on-chip models for in vitro to in vivo translation of pharmacokinetic parameters.

S115: Scientific highlights in emulating human biology on chips (Theme: Innovative Technologies)

The first speaker of this session was Uwe Marx, the CSO of TissUse. He talked us through the ethical aspects of MPS and discussed the multi-organ chips being developed at TissUse. Body-on-chip (BOC) platforms can also be referred to as organismoids. Their BOCs downscale the human body by a factor of 1/100,000 and allow injection of drugs into the circulation or topical administration onto the skin layer of the BOC. They have 10+ compartments and are tested in 1-4 week co-culture experiments.
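To get a feel for that 1/100,000 downscaling, a few lines of arithmetic help. The human reference values below are rough textbook figures, not TissUse specifications.

```python
# Back-of-envelope arithmetic for 1/100,000 downscaling of the human body.
SCALE = 1.0 / 100_000

human = {
    "blood volume (mL)": 5000.0,
    "liver mass (g)": 1500.0,
    "cardiac output (mL/min)": 5000.0,
}

for quantity, value in human.items():
    print(f"{quantity}: {value * SCALE:.3f}")

# e.g. 5 L of blood scales to ~0.05 mL (50 uL) of circulating medium,
# which is why on-chip channel volumes sit in the tens of microliters.
```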

In creating these BOC models they encountered both technical and biological challenges. They had to ensure the correct physiological arrangement of the organ compartments; integrate pH, pO2, and barrier-integrity sensors, as well as mechanical forces and electrical stimuli; ensure immunological compatibility of the organ models, as incompatibility could lead to a risk of rejection; and ensure stable endothelialization of all microphysiological system (MPS) surfaces for stem-cell organismoids, to give the most biologically relevant models. BOCs have not yet been accepted by regulators, but they are working towards this. There are 23 fit-for-purpose assays available for internal decision-making, but none has yet been used successfully for an investigational new drug (IND) application. The bone marrow models specifically are making an impact on the pharmaceutical industry, as they support long-term culture, which in turn enables long-term drug testing.

Their HUMIMIC models can produce a total of 16 different organoids across 3 levels (1-, 2-, and 3-chip models), with 22 assays in total. Their liver-pancreas model showed signs of diabetes through increases and decreases in insulin production in response to a glucose stimulus. Readouts from these models can indicate the therapeutic efficacy of the tested compound. Model size is a limiting factor: it prevents full emulation of bone, heart, and brain specifically, which in turn prevents studies of mental illnesses, as these only manifest in our full-size 1.4 kg human brains.

He predicts that organismoids will replace 70-80% of animals in research and development, enabling the collection of clinical data at the preclinical testing stage.

Next, Andries van der Meer of the University of Twente took to the stage. His talk focused on the vascularization of OOCs, which can be personalized by using patient-derived stem cells and biomarkers. He emphasized the importance of blood vessels in disease pathophysiology and noted that perfusion of in vitro systems is difficult because fully endothelium-lined microchannels are needed: in humans, blood interacts with endothelium, not with plastic microchannels. To achieve this, they treated cultured endothelium with tumor necrosis factor (TNF)-α to increase the activation of the cells and their manipulability, then perfused the lined channels with whole blood. In non-endothelium-lined chips, platelets adhered to the bare collagen microchannel walls; this does not happen in vivo and would distort the results.

Their vascularized OOC models can be applied to drug efficacy testing. The heart-on-chip model was created using iPSC cardiomyocytes. As a proof-of-concept study, they administered a drug in two ways: by diffusion into the chip from a channel above, and by administration into the blood circulation. Administration via the blood circulation had the greater effect on the chip. Current work is focused on making a 3D blood vessel using a vertical chip with an integrated lumen that promotes cellular growth into a hollow tube through both single and co-culture.

Peter Loskill of the Fraunhofer Institute for Interfacial Engineering and Biotechnology IGB then talked to us about their eye-on-chip platform. He mentioned that OOCs are mostly applied in pharmaceutical research and development but have many more applications that could benefit from their use. He chose to focus his research on the eye because explants are currently the best model for eye studies, animal eyes being distinctly different from human eyes.

For their model, they combined retinal organoids and OOCs to overcome the missing physiological layering and vascularization. The retinal pigment epithelium (RPE) is first seeded into the chip until it shows physiological function. The retinal organoid is then added on top of a thin hydrogel layer, which prevents the organoid from growing out of shape; photoreceptors can then be seen in the model. These photoreceptors did not appear by magic: they grew into contact and communicated with the retinal pigment epithelial cells (RPEs), which phagocytose their outer segments, accurately recapitulating the in vivo interaction. On this model they tested gene therapy as a subretinal treatment; the organoid wells allowed separation of nutrient and drug delivery, letting them monitor the adeno-associated virus (AAV) efficacy.

They are also looking at another platform to enable the automation of OOCs. The OrganDisc can integrate a peristalsis-like pump by using magnetic beads that can form a closed-loop perfusion system. This disc looks like a CD on a CD player in that it can rotate to allow the machine-operated pipette to seed the cells into each well or to administer the test compounds.

Hitoshi Naraoka gave us a brilliant insight into Japan's take on the introduction of MPS into research and the organizations spearheading it. They, like us, have three phases of development: development, optimization, and research. They design MPS based on their users' needs. For example, they have an integrated liver-gut model that includes intestinal cells, from which drugs can be transported to hepatic cells, recreating in vivo metabolic processes in vitro. Their MPS system includes standardization of the models so that they are reproducible. Their aim for the future is to attend more world meetings to keep up to date and to create more MPS products.

The final speaker of the session was Gunnar Cedersund of Linköping University. He presented the opportunities to use the data we collect from MPS in computational models to accurately predict health.

Insulin resistance is a key factor in the progression of metabolic disorders, for both the pancreas and the liver. Type 2 diabetes cannot be studied in mice, as they do not react to insulin resistance in the same way and their mealtimes are irregular, whereas humans generally follow a routine. To overcome this, they have created a model based on data from healthy controls and type 2 diabetes patients. The models aim to describe the crosstalk seen between the pancreas and the liver and have been FDA approved.
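As a flavor of what such crosstalk models look like, here is a toy glucose-insulin feedback loop written as a pair of ODEs; the equations and parameters are invented for illustration and are far simpler than the presented model.

```python
from scipy.integrate import solve_ivp

# Toy glucose-insulin feedback loop (all units arbitrary). The form and
# parameters are invented; the presented pancreas-liver crosstalk model
# is far more detailed.
def glucose_insulin(t, y, sensitivity=1.0):
    g, i = y
    meal = 1.5 if (t % 360.0) < 30.0 else 0.0          # a meal every 6 h
    dg = meal - 0.01 * g - 0.05 * sensitivity * i * g  # insulin-driven uptake
    di = 0.02 * g - 0.10 * i                           # secretion vs. clearance
    return [dg, di]

# Simulate 24 h; lowering `sensitivity` emulates insulin resistance.
sol = solve_ivp(glucose_insulin, (0.0, 1440.0), [5.0, 1.0], max_step=1.0)
print(f"glucose at t=24h: {sol.y[0, -1]:.2f}")
```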

In humans, beta-cell failure causes the insulin response to taper off, whereas in mice beta-cell failure does not occur at all under insulin resistance. They therefore created a digital twin for mice that includes insulin-resistant beta-cell failure, which has potential use in the MPS system.

By rescaling the organ proportions they were able to bring the meal times down to normal human times in the chip models. This model allows feedback from clinical steps to pre-clinical steps to increase its relevance.

Using the new FDA guidelines, they were able to quantify the (low) uncertainty, test predictions of drug effects in new independent experiments, and find applications close to clinical testing.

S199: Can non-animal models identify environmental endocrine disruptors? (Theme: Innovative Technologies)

Natalie Burden of the UK NC3Rs kicked off this session on non-animal models for identifying endocrine disruptors (EDs) by discussing the findings of a recent workshop. Animals are used extensively in ED research; to help reduce this, the Health and Environmental Sciences Institute (HESI) has created an animal alternatives committee. Their 2020 workshop evaluated the proposed ED studies in fish and amphibians and also identified methods to increase confidence in NAMs. Discussions highlighted different areas in each sector that could be targeted. For full replacement of animals in ED research, they recommended the use of truly tiered approaches, which also increase confidence in NAMs, along with embryo assays and transcriptomics. Conclusions from the workshop included the need for more effort and investment in NAMs to address their missing endpoints, and the need to validate test guideline performance. A summary of the workshop has been published.

Stefan Scholz from the Helmholtz Centre for Environmental Research (UFZ) was the next to present. He spoke about the use of fish embryos to identify environmental EDs that affect the receptor-binding pathways through which hormones act. Transactivation assays in particular are very useful for these studies.

Eleutheroembryos do not feel pain or distress, so their use is considered ethical. They can detect effects caused by direct receptor binding, but they cannot pinpoint an ED's MOA, as many mechanisms by which a chemical can act, such as effects on hormone synthesis or metabolism, are captured together. Omics and gene analyses are used to measure the effects of ED chemicals on gene expression and may, in the future, help determine MOAs. Reporter transgenes are used to test for changes in embryo heart rate, angiogenesis, and chemical effects on behavior.

These assays use genetic constructs in which a GFP reporter gene is coupled to promoters targeted by endocrine receptors. This enables monitoring of endocrine receptor activation or inhibition by the test chemical: when the receptor is activated, the cells fluoresce, whereas inhibition prevents this.
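The readout from such a reporter is typically a fold change in fluorescence versus a vehicle control. The sketch below classifies a measurement with assumed cutoffs; the 1.5x/0.67x thresholds are invented, not from the talk.

```python
# Classify reporter fluorescence relative to a vehicle control.
# The fold-change cutoffs are assumed values for illustration.
def classify(signal, control, up=1.5, down=0.67):
    fold = signal / control
    if fold >= up:
        return "activation"
    if fold <= down:
        return "inhibition"
    return "no effect"

print(classify(420.0, 200.0))  # activation (fold = 2.1)
print(classify(110.0, 200.0))  # inhibition (fold = 0.55)
```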

Maria Arena of EFSA discussed their take on non-animal ED testing. This testing is governed by Regulation (EU) 2017/2100 for biocidal products, which requires that, for official classification of a chemical as an ED, its adverse effects must be shown in an intact organism, it must have an endocrine MOA, and the adverse effect must be a consequence of that endocrine MOA. For human and mammalian EDs there are two validation options: a two-generation reproductive toxicity study or an extended one-generation reproductive toxicity study.

The ToxCast ER bioactivity model is an alternative to the uterotrophic rodent bioassays. The Level 2 framework studies lack metabolic competence and considerations of ADME. Eleutheroembryos are the ethical compromise between in vivo and in vitro studies: these embryos are not protected under law, as they are not independently feeding organisms. Their use requires consideration of their metabolic competence, of limitations such as protocol standardization, and of the lack of MOA data mentioned previously by Dr. Scholz. For example, these embryos do not have thyroid glands and are therefore unable to reveal the MOA of thyroid-inhibiting chemicals.

The OECD is in the process of validating other fish embryo studies. The XETA model is not yet validated but also takes ToxCast data into account, which is advantageous. Non-animal assays should be considered in ED assessment, but at the moment they are not sufficient to exclude endocrine activity. EFSA has collated recommendations on when the XETA model can be incorporated into the assessment strategy and will make proposals on how to include eleutheroembryo assays in assessments once the OECD has standardized and published the protocols.

The final speaker was Laurent Lagadic from Bayer Crop Science. He began with the WHO definition of an endocrine disruptor: an exogenous substance or mixture that alters the functions of the endocrine system and consequently causes adverse health effects in an intact organism or its progeny. The Level 2 OECD framework consists of various in vitro mammalian assays to determine endocrine MOAs. Vertebrate testing is supposed to be a last resort, used only when other assays do not exist for biocides and plant protection products, yet many vertebrate tests are used regardless. The European ban on animal testing of chemicals used in cosmetics unfortunately does not apply to compounds that have multiple uses. The US EPA is ahead of Europe in that respect, having made a policy commitment to eliminate requirements and funding for mammalian testing by 2035.

As Dr. Arena mentioned, EU lab animals are protected; this protection covers non-human vertebrates that are independently feeding, including larval forms, and the fetal forms of mammals in the last third of their development. Earlier larval and fetal forms are not protected by law. The XETA model, for example, uses Xenopus laevis embryos, which are well suited to probing ED mechanisms. Cross-species extrapolation is being used to reduce animal use in this application, but additional information on evolutionary genetics is required for it to be successful. There is limited availability of validated non-animal models, but EFSA and the OECD are encouraging the development of NAMs for ED research.

KEYNOTE: Jason Ekert – Accelerating The Development and Adoption of Complex In Vitro Models in Early Drug Discovery

Dr. Ekert is the head of the Complex In Vitro Models (CIVM) group at GlaxoSmithKline (GSK). CIVMs are applied across safety, ADME, and pharmacology, but additional characterization is required to increase confidence in these NAMs. The liver is an organ of particular interest for drug toxicity studies. There has been increased interest in multi-organ MPS, but determining their predictive value is a challenge, even as they improve translational relevance. The CIVM group at GSK consists of more than 200 people who are utilizing new technologies and techniques to further develop NAMs, such as using bioprinted organoids in MPS.

CIVMs are being incorporated into development studies to identify clinical endpoints and organ pathophysiology. From lead compounds through candidate selection, they are beginning to use MPS to obtain pharmacokinetic and pharmacodynamic data.

GSK focuses on oncology. They have been building solid tumor models; drug safety and drug metabolism and pharmacokinetics (DMPK) models; and epithelial and immune-competent models. In oncology and pharmacology there is room for CIVMs in early target identification and validation, and lead candidate selection is an area of constant development. GSK is specifically looking at ways to reduce mouse models in oncology through replacement with more patient-relevant organoids, organoids in MPS, and multi-layered avascular models. They have developed a patient-derived tumor organoid with epithelial cells embedded in a basement membrane gel, which enables long-term passage, retains the native tumor heterogeneity, and can predict the in vivo therapy response.

A workflow for assay development has been devised, in which they tested poly(ADP-ribose) polymerase (PARP) inhibitors. PARP inhibitors interfere with DNA repair, leading to single- and double-strand breaks that cause cell death, in turn increasing the cytotoxicity of the compound. When PARP inhibitors were tested with other compounds, they saw differences in the response to inhibition, indicating resistance similar to that noted clinically. These compound-dose studies help increase confidence in the data obtained from organoids. The tri-culture model, which includes tumor cells, stromal cells, epithelial cells, and extracellular matrix (ECM) layers, was also used to test PARP inhibitors; results were obtained after 7 days, with cell viability read out via GFP expression.

They have also been working on vascularized tumor models, creating endothelial tubes. With these models they are studying T-cell migration; T-cells usually migrate from the endothelial wall through to the ECM. A lung tumor model was also used to study this: the model was put on a chip to study CD3-activated T-cells, and quantitative changes in T-cell adhesion to the walls and extravasation were noted. They could monitor and quantify tumor cell death over 96 hours of treatment. They are now looking to study chimeric antigen receptor (CAR) T-cells and chemokines.

They have developed a 3D primary human hepatocyte (PHH) spheroid model that can be used as a screening tool for identifying early hepatotoxicity. Increased predictivity was seen in the 3D models when compared to 2D models.

Their colorectal cancer organoid was used to study CAR T-cell activity. Within this model they showed an increase in HER2 expression, among other changes; the cells were placed into MPS to study why HER2 increases in colorectal cancer (CRC) versus normal models. Results are not yet available, as the study is ongoing.

The IQ MPS affiliate is only 3 years old. They aim to be a thought leader and to drive the qualification and implementation of MPS. They are working on making a platform for data sharing and have a manuscript series to help understand the requirements of MPS and their context of use in the pharmaceutical industry. CIVM often takes part in the NC3Rs CRACK IT challenges to help support the greater adoption of OOC technology.

Future directions aim to use CIVMs, such as patient-derived OOCs, bioprinted tissues, and the many models mentioned, to replace the currently used cell line models. At present they use manual, low-throughput platforms, which they hope to miniaturize and scale in the future into automatable, flexible systems. These are currently used for testing MOA in single compartments, with the hope that future applications will include multiple compartments, pharmacokinetic/pharmacodynamic (PK/PD) assays, in vitro therapeutic indices, and patient stratification.

S105: The role of clinical research on the understanding and treatment of diseases (Theme: Disease)

The first presenter of this session was Beatriz Silva-Lima of the University of Lisbon. She provided an overview of the research undertaken for potential treatments and basic research. She mentioned that the research and testing that happens before candidate selection lasts longer than the development, preclinical, clinical, and industrial testing combined on the selected candidate.

The Symdeko drug combination of tezacaftor and ivacaftor, acting on specific Cystic Fibrosis Transmembrane conductance Regulator (CFTR) mutants, was investigated in vitro, in vivo, and with a combination of both for proof-of-efficacy testing. The FDA accepted the in vitro data as sufficient to allow a patient to be treated with this combination. This was an exception, as the FDA usually requires predictive and prospective studies in rodents, but it could be an indication of what is to come.

Patient samples provide the most important matrix for explaining adverse drug effects. Through diabetes research in human beta-cell lines, they showed that patients can now be further stratified, for example by severe autoimmune disorder, an associated tendency towards obesity, or a higher risk of hypoglycemia.

A new paradigm needs to be built on newly developed tools, such as computational models, eTOX, the MIP-DILI model, MARCAR, EBiSC, and ORBITO, to predict gastrointestinal (GI) uptake in patients. She says that this shift will most likely happen through precision medicine, whether we like it or not.

Next to present was Alexandra Maertens of Johns Hopkins University. She spoke to us about how annotations allow the mapping of genes to biological processes and functions. Gene ontology is an example of this, but it has limitations, such as an overemphasis on well-known genes and non-randomly missing information. Most genes associated with cancer have minimal literature published on them, so much so that 67% of cancer-associated genes go essentially unmentioned in the literature.

Genes are not studied in proportion to their clinical significance. An R package called WGCNA can help overcome this, as it clusters expression data into groups, or modules, of shared biological function. A lot of cancer research focuses on a set of popular genes, which is not always the best approach, as it takes a narrow view of causal genes. For example, the APOL6 gene is present in most human cancers but is generally overlooked because it is absent in mice; although it is present in rats, this limits the models that can be used.
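WGCNA itself is an R package, but the core idea – clustering genes into modules by co-expression – can be sketched in a few lines of Python on toy data (real WGCNA uses soft-thresholded topological overlap rather than plain correlation):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Toy co-expression module detection in the spirit of WGCNA.
rng = np.random.default_rng(1)
driver_a, driver_b = rng.normal(size=50), rng.normal(size=50)
genes = np.vstack([
    driver_a + 0.3 * rng.normal(size=(10, 50)),   # module 1: 10 genes
    driver_b + 0.3 * rng.normal(size=(10, 50)),   # module 2: 10 genes
])

corr = np.corrcoef(genes)                   # gene-gene correlation
dissim = 1.0 - np.abs(corr)                 # co-expression distance
condensed = dissim[np.triu_indices(len(genes), k=1)]
modules = fcluster(linkage(condensed, method="average"),
                   t=2, criterion="maxclust")
print(modules)   # genes 1-10 and 11-20 fall into separate modules
```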

Joost Boeckmans of the University of Brussels took over from there to present his work using stem cells: hepatic cells derived from skin stem cells can be used as a tool for toxicity testing. In his lab, they use amplified and cryopreserved human skin-derived multipotent stem cells differentiated into hepatic cells (hSKP-HPC), obtained from foreskin circumcisions, which undergo 24 days of differentiation into a hepatic model. To test these models they used paracetamol to monitor the development of hepatotoxic liver injury. Drug-induced fatty liver disease was also modeled in the cells, and these models were used to understand the mechanisms of lipid accumulation, which occurs through increased fatty acid oxidation, increased fatty acid and triglyceride synthesis, and decreased expression of CD36, which is linked to fatty acid transport.

When they recreated drug-induced phospholipidosis in the models, they noted gene expression changes in response to amiodarone. To recreate non-alcoholic steatohepatitis (NASH), they exposed cells to factors of metabolic syndrome, such as insulin, glucose, fatty acids, and TNF-α, and noted a permanent increase in inflammatory gene expression. This NASH model showed potential for drug testing when exposed to known drugs. The hSKP-HPCs also have applications in translational research, as shown by their research into lipid accumulation mechanisms.

Lindsay Marshall of the US Humane Society International (HSI) was our final speaker. She introduced us to the BioMed 21 collaboration, a project bringing together scientists and institutions to work towards a more human-focused paradigm for health research. One of the major issues in biomedical research is the reliance on animal models, which are poor proxies for humans. Around 50,000 dogs are used per year in the US; the top research area for their use is toxicity testing for IND applications, and the top biomedical research area is, of course, cancer. Research is thankfully moving away from dogs: a Californian bill, number 252, which would prohibit toxicity testing in dogs, is soon to be presented for approval.

AOP training is freely available on their website, which she highly recommends if you are interested in learning how AOPs work or how they can help in the movement to replace animals. The starting point in disease AOPs is a healthy model and the endpoint is the disease. The position of the drug in drug-safety AOPs can be replaced by disease-causing mutations for their evaluation. Case studies are used to demonstrate their effectiveness.

Current areas of work include publication bias and respiratory tract models. The respiratory tract work covers 264 models from over 700 papers. The disease models recapitulate specific disease features and have applications in drug development studies. There is a disconnect between lung and lung cancer models, and additional models are needed to enable more commercial use alongside the likes of MucilAir and OncocilAir (Epithelix).

She ended the session by mentioning the need for NAM-focused funding to support standardization, and for animal-on-chip models, which can bridge the gap between humans and animals and reduce the need for live animal studies in veterinary medicine. Emphasis was put on building a body of evidence to increase researcher confidence; confidence in animal chips may also translate into confidence in human chips.

S118: A global movement to improve science using animal-free antibodies (Theme: Disease)

The first presenter of this session on animal-free antibodies was Afability's Alison Gray. She discussed the barriers and challenges associated with the replacement of animal-derived antibodies (ADA). Monoclonal antibodies (mAbs) come from the spleen of immunized animals (usually mice, rats, and rabbits) and polyclonal antibodies (pAbs) are extracted from their blood. Producing antibodies via immunization creates similar but unique batches of antibodies for each new animal immunized, which makes it difficult to reproduce experiments that use these antibodies.

Animal-free antibodies (AFA) have many advantages over ADAs: a one-time library production, easy access to genetic sequences for engineering specific antibodies, no batch-to-batch variation, and quicker production. The real question is why we are not already using them. She says that some people are under the impression that AFAs are not yet mature, have poor affinity, and, for therapeutic antibody production, require greater developability, which may cause larger attrition rates. These perceptions are not true; lack of expertise and motivation are the major factors behind their non-use. In terms of therapeutics, many researchers prefer to use technology they are familiar with rather than risk a failed outcome with new technology. The industry is making good progress on getting regulatory approval for AFAs; once more are approved, interest is bound to spike.

Other challenges to their progression include project licenses being refused for AFA projects, the lack of commercial antibodies accessible for research, and the lack of support and resources. Resources should include equipment, services, training facilities, bioinformatics support, and funding to support adoption of the 3Rs.

ADAs are prone to failure, and no replacements are available for the failed in vitro tests. Animals used in research need to add value that cannot be achieved with non-animal methods, or else their use is not justifiable.

Joao Barroso of the European Commission's JRC was the next to present. He discussed the scientific validity of AFAs. Antibodies are among the most commonly used reagents in research, and animal-derived antibodies, as Alison mentioned, lack reproducibility. Despite the EU 2010/63 directive, use of the ascites method increased by 65% between 2015 and 2017. The ESAC is working to change this through their research and evaluation of NAMs and AFAs; their evaluation concluded with an endorsement of the use of AFAs, noting both scientific and economic benefits. Guided selection was particularly beneficial, as it enabled the selection of antibodies with specific characteristics, which the ESAC plans to promote. They recommend replacing animals with AFAs in many situations, as AFAs greatly improve reproducibility, among other factors that can improve the quality of research performed.

The ECVAM recommendations don't ban anything or propose new legislation; they determine whether the therapeutic antibody is recommended or not, without requiring the working group to evaluate them. AFAs are available from catalogs and from custom generation. They feature in an abundance of literature and are equivalent to, if not better than, ADAs for most applications. They require only standard equipment for production but do need a significant time investment to generate the library. However, this library only has to be created once for a lifetime supply of recombinant antibodies. The cost of production is similar to that of animal mAbs but, to put this in perspective, more money is currently spent on animal antibodies of questionable quality, whose hybridoma clones can die off, killing the supply of antibodies with them.

AFAs have been promoted via education and training through webinars and e-courses, and encouraged by the provision of funding to support the transition. The last thing mentioned was that the manufacturing industry should put a timescale on when they plan to phase out the production of ADAs.

The next speaker was Stefan Dubel from both Yumab and Abcalis. He spoke about how AFAs are a solution to the antibody crisis, as they are sequence-defined and therefore more reproducible. Phage display libraries contain billions of human antibody genes, making it likely that a match to the antigen of interest will be found. Phage display is advantageous because its panning step allows complete control at each stage, enabling the identification of very specific antibodies.

He mentioned that ADAs cause a lot of false positives in research. mAbs are offered as the solution, but they are not always the best choice, as hybridomas also have specificity issues. Abcalis makes multi-clonal, sequence-defined antibodies, using the advantages of both mAbs and pAbs in phage display to combine more epitopes. These antibodies are stable after freeze-thaw cycles and have a lower background than ADAs. The COVID-19 antibodies they are developing are also more reproducible, and diphtheria antibodies have been developed to replace the traditionally used horse sera; these human antibodies have better neutralizing properties than the horse antibodies.

They are developing antibodies that recognize common epitopes of different viruses in a clade, regardless of their fragment crystallizable (Fc) component. This is an attempt to make broad antibodies that bind to all virus subpopulations, as a general treatment for that viral clade.

Antibodies can also be enhanced through the production method. Longer washing steps select for antibodies with slower dissociation, and competitive panning can prevent antibodies from binding to other epitopes of the antigen, making them more specific. AFAs are not more widely used due to a misconception stemming from old data generated with low-affinity bacterial libraries. Awareness of commercial sources of antibodies, of new technology, and of new information on producing and using antibodies from non-animal sources is lacking.

Pierre Cosson of the University of Geneva then took over to talk to us about the ABCD project, which aims to supply recombinant antibodies to academic researchers. It consists of a database of 23,000 antibodies against 4,000 antigens. This database is an open-access, searchable platform that links antibodies (and their unique identifiers) to their specific page on UniProt, where more information can be found. It also includes a link to a manufacturing facility that can make them on demand, with the option to change the Fc domain to the species of interest for specific research needs.

This project is a collaborative effort: if you use the platform to source AFAs, they encourage you to publish your results, whether good or bad, to build data on each of the antibodies. They have a dedicated journal to make this easier, called Antibody Reports, which publishes technical reports from researchers stating which tests the AFAs were or were not compatible with. This new information is then added to the ABCD website under the specific antibody's detailed information. It is a community project that characterizes the antibodies over time.

Recombinant antibodies will replace classical antibodies in time, but this requires action, such as the ABCD project, to increase researcher use of and confidence in AFAs. As a last note, Pierre recommended that anyone with hybridomas stored away get them sequenced before they are lost, and that any other antibody sequences be deposited on the ABCD platform to help it grow.

The next speaker was Lia Cardarelli from the Toronto Recombinant Antibody Centre (TRAC). Currently, there is a high approval rate for antibody therapeutics. The variable heavy (VH) and variable light (VL) chains are responsible for specificity, and there are many fragment display libraries available, such as fragment antigen-binding (Fab), single-chain variable fragment (scFv), diabody, scFab, and VH. Starting with phage selection, the selected antibodies are re-formatted via genetic engineering to IgG (or other formats) and then validated. The antibodies can be adapted to suit different species by fusing the VH and VL chains to the Fc of the new species. Validation is done through standardized assays such as size-exclusion chromatography (SEC) to confirm antibodies are monodisperse, surface plasmon resonance (SPR)/bio-layer interferometry (BLI) for affinity and specificity, and T-cell killing assays to test killing capability.

There are many disease indications within the TRAC portfolio, with regenerative fibrosis and viral infection as new areas of study. One of their case studies is COVID-19 antibodies, designed to compete with angiotensin-converting enzyme 2 (ACE2) for binding. They identified 6 lead antibodies, each targeting a different epitope; one was more effective than the others at blocking infection. This antibody then underwent optimization by affinity maturation, creating a sub-library of antibodies based on it. In this sub-library, they found a daughter antibody with a higher binding affinity than the parent, which was chosen as the lead.

The final speaker from this session was Katherine Groff from PETA. She talked us through PETA's work and some of the findings of the NICEATM expert meeting on increasing the use of recombinant antibodies. The meeting identified three characteristics that need improvement for wider adoption: interchangeability, versatility, and marketability. All agreed that the following challenges emerge often: a lack of awareness of the advantages of AFAs, a lack of off-the-shelf recombinant antibodies available for purchase, and a lack of quality antigens from which to find useful antibodies. Basic research is the largest antibody market, so availability is of big importance here.

To transition to AFAs, commonly used ADAs need to be sequenced to allow recombinant production, and already-developed recombinant antibodies need to be validated for specific applications. For long-term actions, we need to develop recombinant antibodies for new targets, as well as a central resource, run by antibody experts, with links to useful information and quality manufacturers. Of course, transition-specific funding was also mentioned.

In the Q&A some very interesting questions were answered and thought-provoking discussions were had. One question asked why France was such a large contributor to the increased use of the ascites method. There was no specific explanation, but the speakers concluded that in France, as in many other countries, researchers work on 5-year licenses that are generally renewed without being checked, so areas that routinely use the ascites method have had no incentive to change; a program is needed to drive the move to in vitro production. Routine production of approved antibodies was another possible explanation for the heavy use of the ascites method: production has simply never been switched over to recombinant methods.

When asked about the advantages of using Fab libraries over other libraries, the answer was quite simple: there are none in particular, as the VH and VL can be rearranged at any point to change the library type, from Fab to scFv for example. Another speaker added that the antibodies are like Lego; they can change library formats by just changing their layout.

S117: Personalized medicine through human organoid models (Theme: Innovative Technologies)

Chris Denning from the University of Nottingham started this session on personalized medicine through human organoid models. He spoke about the cardiotoxicity model he has been working on. It is created from human induced pluripotent stem cells (hiPSCs) and models long QT syndrome, an elongation of the QT interval of the PQRST waveform. He has differentiated the hiPSCs into cardiomyocytes to give a genetic model of long QT. This is a very useful model, as it allows allele-specific RNAi to be performed, and arrhythmias can also be induced.

Using CRISPR to make the mutants, they can target the MYH7 gene for hypertrophic cardiomyopathy (HCM); inducing the R453C mutation gives the model the severe phenotype. A suite of tools can be used to obtain quantitative data, and they noted a decrease in contraction force in the mutants at the myocyte level. HCM patients generally have more cardiac muscle and smaller ventricles, which reduces the amount of blood that can be pumped around the body. Actin mutants are hypercontractile, whereas myosin mutants are hypocontractile. They overloaded the models with calcium and sodium and included GFP in the mutant models to visualize those that do not show increasing arrhythmias. They found potential gene modifiers residing in the mitochondrial genome, which will now be cross-referenced against databases for identification.

They are working with GSK on an NC3Rs-GSK drug development pipeline. This uses an iPSC-based system that includes many modalities focusing on contraction. They blind-tested 28 drugs, supplied by GSK, to examine the different test systems. The results were not what they hoped for, but they were helpful for refining the system. The main take-home message from this project is that the models need to work at a slower beating pace. This would have been missed in mice, as they have a higher tolerance to chemotoxicants.

The next speaker was Alfonso Martinez Arias from the University of Cambridge and, more recently, Spain. He spoke about his research on gastrulation using gastruloids. He says that cells are the link through which to interpret genes and DNA; engineering approaches are implemented to learn what the cells can do, not how we can grow them. There are already models of early mammalian pre-implantation development, but they are not complex enough for thorough research, and the bulk of our information on pre-implantation development comes from in vitro fertilization (IVF). This is why he created the gastruloid, an embryonic organoid that starts as an aggregate and elongates, representing the axial elongation of the embryo. The Hox genes correctly expressed in these organoids are associated with axial elongation, the organization of other genes also mirrors that of embryonic development, and Wnt coordinates fate decisions. The system is both quantitative and reproducible, as more than 80% of the aggregates are polarized. They can model the endoderm, spinal cord, hind-/midbrain axis, etc. These organoids arise from mouse cells, so there is only so much we can understand of human biology using them.

He is also interested in post-implantation human development, which has been hindered by ethical and legal regulations. In vitro research must stop at day 14, before gastrulation starts, as this is when the embryo becomes a protected life; this prevents gastrulation studies in embryos. To overcome these limitations they used human gastruloids. These show inhibition of Nodal signals and loss of both endoderm and cardiac structures; the endoderm fails to influence primary and secondary structures that affect the heart. 2D and 3D models can also have very different development patterns. These models can be used for fundamental biology, integral organ systems, and disease modeling, and ongoing work with them is looking at teratogenicity in toxicity testing.

Christine Mummery from the University of Leiden then took over to talk to us about OOCs produced with stem cells. There are no drugs for many chronic diseases and no validated models of the human body or its tissues, as existing models lack vascularization, mature cells, and tissue elasticity. Personalized-health OOCs do include vascularization and elasticity, and have applications in biomedical drug development and regenerative medicine. There are several initiatives to standardize these models, including CiPA, the International Society for Stem Cell Research (ISSCR), and smart multiwell plates.

hPSCs are immature and differentiate into immature cardiomyocytes. Immature cardiomyocytes beat constantly, whereas mature cardiomyocytes only beat in response to stimuli such as a pacemaker. Immature cardiomyocytes also lack a 3D environment and other cardiac-specific cells. Cardiac microtissues, by contrast, match the gene expression profiles of adult hearts. They included T-tubules, which indicate at least postnatal maturity, and cardiac fibroblasts, which increase contraction, in their model. Tight junctions connecting fibroblasts to cardiomyocytes, which are usually present in mature heart tissue, are not seen. Structural, electrical, mechanical, and metabolic maturation were required to recreate these structures. These models cost only 22 cents per microtissue to produce and are compatible with high-throughput screening.

PKP2 mutations inhibit trafficking in arrhythmogenic cardiomyopathy (ACM) iPSC mutant lines. ACM results from incorrect connections that can be attributed to fibroblasts, and they are working on blood vessel models for this condition. These are isogenic models of vasculature with both macrophages and monocytes included, mimicking inflammation. The model also has applications in hereditary hemorrhagic telangiectasia (HHT), to identify ways of stabilizing weak and leaky blood vessels. These vessels include a lumen and tight junctions, and are perfusable. Pericytes stick to the smooth muscle cells in the control; this is not seen in HHT, indicating weak and leaky vessels. Thalidomide initially worked, but they are looking for alternatives due to the adverse effects associated with that drug. Engineered heart muscles have been miniaturized, but cells are not their limiting factor; throughput is. At present the models reach medium throughput, but work is being done to increase this.

The final speaker of this session was Steven Kushner of the University of Rotterdam, who presented his work on 3D adherent brain organoids. He gave us an overview of the types of diseases he intends to model. One interesting fact he mentioned about schizophrenia (SZ), which is associated with global white matter microstructure abnormalities, is that identical twins have a 48% higher risk than non-twins.

He performed a family-based analysis to uncover novel Mendelian mutations associated with SZ. No real electrophysiological changes between SZ and control siblings were identified, but he noted abnormal oligodendrocyte progenitor cell (OPC) morphology and reduced viability of these cells. OPCs are involved in the myelination of the brain.

He created a 3D platform to study these: a cortical spheroid. Some issues he encountered when moving to 3 dimensions were heterogeneity, necrotic cores, and difficulty with analysis. These were fixed with 3D micro-brains, grown in 384-well plates with a single structure in each well. They start with a monolayer of frontally patterned neural progenitor cells and have clear, topographically separated axons and dendrites with cortical layering. The model doesn't have all 6 layers as seen in vivo, but upper and deep layers can be distinguished. They also see signs of astrocytes and radial glia, along with synapsin-expressing mature dendritic spines. These models have a clear function and physiology that can be monitored over time using calcium imaging. Myelin production was enhanced when OPCs increased. The effect of taurine on myelination is a current project they are working on with Danone and Nutricia. The microglia can be introduced by titration and retain their engulfing properties. At present these models lack vascularization, but this is an ongoing project that they aim to fix.

S314: Applications of New Approach Methods in Genotoxicity and Developmental toxicity testing

The first presentation of this session on the application of NAMs in genotoxicity and developmental toxicity testing came from Amer Jamalpoor of Toxys. He spoke about the Toxys ReproTracker assay.

Animal results haven’t had the best correlation with clinical outcomes in humans. One reason for this is that they don’t take into consideration maternal pharmacokinetics or placental influences as pregnant women are usually excluded from clinical trials. Alternative models should cover all of these developmental stages in addition to those already studied. To make their ReproTracker assay Toxys used iPSCs as opposed to ESCs as there are no ethical considerations with somatic cells. ReproTracker is a human test system. In this system, monocytes differentiate in 14 days, hepatocytes in 21 days (treated with an endodermal induction medium and further treatments to form a cuboidal shape), and neural cells in 13 days. Effective differentiation of each cell type is checked via marker gene expression.

This system was exposed to different compounds. Exposure caused a reduction in biomarker gene expression, indicating disruption of the developmental program by the compound, so the system can be used to accurately identify teratogens. It determined that thalidomide is a teratogen: thalidomide affected the beating of cardiomyocytes, decreased the myosin marker, and decreased the hepatocyte marker alpha-fetoprotein (AFP). Saccharin was classified by ReproTracker as a non-teratogen, as it did not affect the expression of the biomarkers. The model has a 79% predictivity rate and is being further developed with neural rosettes as an additional read-out to improve performance.

Next, Iris Muller of Unilever took over to talk about the integration of ReproTracker into the NGRA framework.

The NGRA framework is adaptable. It includes four essential components: in vitro PBK modeling, pharmacological profiling, cell stress paneling, and high-throughput transcriptomics. They are working on integrating developmental and reproductive toxicity (DART) into the framework to cover gametogenesis, fertilization, embryonic development, postnatal development, and multi-generation effects for developmental toxicity testing. The exposure component of the framework includes occupational exposure, which enables them to build a pregnancy model. For this specific application, in vitro bioactivity includes pharmacological profiling and cell stress assays.

The DevTox quick predict assays require additional assays, and ReproTracker can be useful here. ReproTracker is used to classify chemicals as teratogens or not, on a simple rule: if one or more differentiation processes are inhibited by a compound, it is teratogenic (sketched below). They aim to extend their point-of-departure modeling by adding an additional 60 chemicals to those already tested, and they hope to develop an osteoblast differentiation process soon.
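As a rough illustration of that decision rule (the threshold and structure below are assumptions for illustration, not Toxys' actual values):

```python
# Minimal sketch of a ReproTracker-style decision rule.
# The 0.5 inhibition threshold is an illustrative assumption.
def classify_compound(biomarker_fold_change, inhibition_threshold=0.5):
    """biomarker_fold_change: dict mapping lineage -> biomarker expression
    relative to vehicle control (1.0 = unchanged)."""
    inhibited = [lineage for lineage, fc in biomarker_fold_change.items()
                 if fc < inhibition_threshold]   # differentiation inhibited
    return ("teratogen" if inhibited else "non-teratogen"), inhibited

# e.g. a thalidomide-like profile: cardiomyocyte and hepatocyte markers reduced
result = classify_compound({"cardiomyocyte": 0.3, "hepatocyte": 0.4, "neural": 0.9})
print(result)   # ('teratogen', ['cardiomyocyte', 'hepatocyte'])
```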

Then Inger Brandsma took over to introduce another Toxys platform called ToxTracker.

Current toxicity assays have low specificity and sensitivity and offer no mechanistic insight into toxicity. Genotoxicity testing is performed in two parts: first, in vitro assays of mutagenicity and clastogenicity; second, in vivo assays that determine tissue specificity. Hazard screening is performed by in silico QSAR analysis, miniature versions of the standard tests, and biomarker assays.

The ToxTracker system is a stem cell-based platform that includes 6 unique reporter lines which can detect carcinogens and provide insight into the MOA. The reporter lines are activated by DNA damage, protein damage, p53 activation, and oxidative stress: there are two reporters each for DNA damage and oxidative stress, and one apiece for protein damage and p53 activation. ToxTracker assays can be performed in 96- or 384-well plates, compatible with high-throughput screening, and cell survival can be evaluated using GFP and flow cytometry. The platform has been validated against ToxCast and other databases, showing 94% sensitivity and over 90% specificity. OECD inter-laboratory validation is currently ongoing, expected to be published later this year, based on a suite of systems such as ToxTracker, ACE, AO, and TubulinTracker. A 2-fold induction is used as the threshold for a positive response.

The goal is to move beyond qualitative calls to quantitative assays, including both bootstrapping the variation in vehicle controls and benchmark dose modeling with a predefined response level over background. They performed principal component analysis (PCA), and the axis scores reflected the toxicological MOA. They applied this to ToxTracker and an in vivo micronucleus test (MNT) to quantify the genotoxicity potency correlations. With this system they can calculate the threshold for a positive response.
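A minimal sketch of what such a quantitative threshold might look like, combining the fixed 2-fold rule with a bootstrap over vehicle-control variability; this is an illustrative reconstruction, not Toxys' actual implementation:

```python
# Illustrative sketch: derive a positive-response threshold by bootstrapping
# vehicle-control reporter signals, floored at the fixed 2-fold rule.
import numpy as np

rng = np.random.default_rng(0)

def positive_threshold(vehicle_controls, n_boot=10_000, percentile=99):
    """vehicle_controls: reporter (e.g. GFP) signals from vehicle-only wells."""
    boots = rng.choice(vehicle_controls, (n_boot, len(vehicle_controls)))
    null_fold = boots.mean(axis=1) / np.mean(vehicle_controls)  # null fold-changes
    # threshold = larger of the 2-fold rule and the bootstrap upper bound
    return max(2.0, np.percentile(null_fold, percentile))

controls = np.array([1.0, 1.1, 0.9, 1.05, 0.95, 1.02])
print(positive_threshold(controls))
```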

Lastly, Marc Beal from Health Canada spoke about their work. Points of departure (PODs) can be hazard-agnostic or endpoint-specific. They applied in vitro to in vivo extrapolation (IVIVE) to derive PODs from in vitro bioactivity data. The bioactivity PODs were around 100-fold lower than the animal PODs, therefore providing a more protective basis for human health.

A Health Canada case study compared IVIVE-based PODs with ToxCast data, ranking the chemicals by bioactivity exposure ratios (BERs). They identified quinoline as genotoxic, whereas in the ToxCast database it would have appeared low risk, owing to ToxCast's poor ability to detect genotoxicity. They use high-throughput toxicokinetic (HTTK) models for IVIVE, to determine the steady-state plasma dose associated with general toxicity in the ToxCast database. IVIVE supports the derivation of PODs that are protective relative to in vivo values and can be used to support chemical safety evaluation. They do need to account for in vitro disposition, though, which will help with the smaller molecules that did not have a protective POD.
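To make the BER idea concrete, here is a minimal sketch assuming a simple linear steady-state IVIVE conversion; the values and the Css model are illustrative stand-ins, not Health Canada's actual workflow:

```python
# Illustrative BER ranking: convert an in vitro bioactivity concentration to an
# administered-dose POD via a linear steady-state (Css) IVIVE assumption, then
# divide by estimated exposure. All numbers are made up for illustration.
from dataclasses import dataclass

@dataclass
class Chemical:
    name: str
    ac50_uM: float              # in vitro bioactivity conc. (e.g. ToxCast AC50)
    css_uM_per_mgkgday: float   # plasma steady-state conc. per unit dose (HTTK)
    exposure_mgkgday: float     # estimated human exposure

def bioactivity_pod(chem: Chemical) -> float:
    """IVIVE: dose (mg/kg/day) producing the bioactive plasma concentration."""
    return chem.ac50_uM / chem.css_uM_per_mgkgday

def ber(chem: Chemical) -> float:
    return bioactivity_pod(chem) / chem.exposure_mgkgday

chems = [Chemical("A", 10, 2.0, 0.001), Chemical("B", 1, 0.5, 0.01)]
for c in sorted(chems, key=ber):        # lowest BER = highest priority
    print(c.name, round(ber(c), 1))
```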

S162: Novel cell-based technologies for predicting drug-induced liver injury (Theme: Innovative technologies)

This session described the use of cell-based technology for drug-induced liver injury (DILI) prediction. The first to present was Bruno Filippi of InSphero. He talked about their work modeling liver steatosis, a disease that affects 22-30% of the global population.

They produced a 3D model of steatosis to determine how the disease affects the toxicity of compounds, using spheroid microtissues built from PHHs, liver endothelial cells, Kupffer cells, and stellate cells. These models were viable for up to 28 days. They generated both lean and steatosis models, using different media, for efficacy and toxicity testing applications: the lean model was grown in a lean medium, whereas the steatosis model was grown in a diabetic medium with or without low-density lipoprotein (LDL). The lean model had no inclusions and displayed normal physiological levels of glucose and insulin, while the steatosis models had cytoplasmic inclusions. The steatosis model without LDL had increased glucose and insulin expression, and the model with LDL had increased glucose, insulin, and LDL expression. Hematoxylin and eosin (H&E) staining at day 16 showed that the diabetic model without LDL led to a partial reversion of steatosis. Through histological staining they determined that all tissues remained functional and produced albumin. No canalicular structure was seen in the diabetic model with LDL, while the model without LDL showed a partial reversion, with canalicular structures forming again; steatosis also increased the production of triglycerides. PCA of transcriptomic profiles (sketched below) showed changes in the gene expression of the diabetic model with LDL, particularly up-regulation of extracellular matrix remodeling, homeostasis, and regeneration genes, and down-regulation of lipid-metabolizing genes. Bile salt export pump (BSEP) staining was weaker in the steatosis model, as BSEP is also downregulated, and cytochrome P450 (CYP450) expression changes in the steatosis model correlated with activity changes.
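As an aside, a PCA of transcriptomic profiles of the kind described above can be sketched in a few lines; the data and labels below are random stand-ins, not InSphero's data:

```python
# Illustrative PCA of transcriptomic profiles comparing two model conditions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
expression = rng.normal(size=(12, 5000))     # 12 samples x 5000 genes (toy data)
labels = ["lean"] * 6 + ["steatosis_LDL"] * 6

scaled = StandardScaler().fit_transform(expression)   # per-gene standardization
scores = PCA(n_components=2).fit_transform(scaled)
for label, (pc1, pc2) in zip(labels, scores):
    print(f"{label}: PC1={pc1:.2f}, PC2={pc2:.2f}")   # groups separate along PCs
```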

They tested 5 compounds in both models and found that 3 of the drugs (including paracetamol) had significantly different toxicological profiles in the steatosis model compared to the lean model. Steatosis leads to an accumulation of cytosolic lipid droplets, an increase in cellular adenosine triphosphate (ATP), and changes in both transcriptomic and metabolic profiles, all of which can affect the toxicity of a drug. This model is proving to be a promising component of precision medicine for drug safety evaluation.

Next up was Hans Clevers of the Hubrecht Institute, who presented his group's work on liver organoids. They have created human cholangiocyte organoids with a hollow bowl shape, and first used them to study the association of BAP1 with cholangiocarcinoma. The wild-type (WT) model without the BAP1 mutation was highly polarized; when the BAP1 gene was knocked out, the organoids lost all aspects of their structure due to a disorganized epithelium, and ZO-1 (a tight junction protein) also failed to localize properly. They determined that cholangiocarcinoma is a result of the nuclear function of BAP1, not its cytoplasmic function as previously thought. The genes affected are all involved in cell adhesion and cytoskeletal rearrangement, contributing to cancer progression in vivo.

PHH organoids can form bile canaliculi. Bile produced by the cells is transported via the canaliculi to a few centralized locations where it is deposited; they suspect these locations may be where bile ducts would usually form. They used these organoids to model hepatitis B virus infection in the human fetal liver, and modeled non-alcoholic steatohepatitis (NASH) both through high levels of free fatty acids and via clustered regularly interspaced short palindromic repeats (CRISPR) engineering.

Manoj Kumar from KU Leuven then spoke about their work on liver models for drug discovery. They use PSCs, as these can be fated to cells with hepatocyte features, although such cells are not mature enough when compared with PHHs. Genome and metabolic engineering create more mature hepatocytes from PSCs, but they still lack structurally defined microenvironments. To overcome this they screened microenvironments and looked at synthetic matrices such as polyethylene glycol (PEG), identifying the optimal microenvironment to support the maturation of hepatocyte-like cells (HLCs): the HepMat hydrogel, which supports further maturation, allowing increased gene expression and increased rifampicin tolerance, both indicating maturation. In terms of other liver cells, they evaluated non-parenchymal cells, which quickly lose markers ex vivo and accumulate lipids. Stellate cells stored lipids and vitamin A and, when exposed to transforming growth factor beta (TGF-β), showed an increased fibrotic marker response, which, along with the macrophages, was maintained for a month.

They co-cultured the HLCs and non-parenchymal cells (NPCs) and, in a 7-benzyloxy-4-trifluoromethylcoumarin (BFC) assay, noted an increase in CYP activity. They used MILAN to identify the unique cell types in culture. For metabolic-associated fatty liver disease (MAFLD) and NASH, there are no approved drugs, due to a lack of predictive models. This model was tested by inducing liver steatosis, fibrosis, and inflammation with oleic acid (OA); the fibrosis model showed an increase in inflammatory cytokines. As a benchmark, they tested two late-phase anti-NASH drugs and saw a decrease in fibrotic and inflammatory gene expression in response, showing the model's effectiveness for drug testing. Their future aim is to test more anti-MAFLD/NASH compounds.

The final presentation was from Eva-Maria Dehne of TissUse. She spoke about the platforms and research performed for liver models at TissUse. Their chips are produced from polydimethylsiloxane (PDMS) and utilize iPSCs, cell lines, primary cells, biopsies, and precision cut organ slices (PCOS) but frequently use HepaRG cells too. They mostly stick to a 2-organ co-culture per chip but they also possess chips that can hold more.

They co-cultured pancreatic islets and liver spheroids on a single chip. This co-culture ran for 15 days, after which a decrease in glucose to physiological levels was noted, as the pancreatic islets activated feedback loops, secreting insulin to maintain glucose homeostasis.

They co-cultured primary liver cells and intestine cells to create a gut-on-a-chip. For this, they use two commercial models called EpiIntestinal and 3D InSight. They tested two substances on this model. Compound A was readily absorbed into the liver and blood where it was metabolized. Compound B was not absorbed into the liver, it stayed in the intestine where it was metabolized and secreted into the blood. To allow for perfusion of this system they lined all surfaces with endothelial cells and made channels containing hydrogel for vascularization. They are looking to integrate kidney cells to model the secretion of the compounds.

They have also developed a 4 organ chip for ADME testing and have filled it with iPSC-derived cells to model the intestinal barrier. In the future, they hope to make a full human-on-chip to enable basic research and substance testing.

S84: Beyond the 3Rs: Expanding the Use of Human-Relevant Replacement Methods in Biomedical Research (Theme: Disease)

This session addressed expanding the use of human-relevant replacement methods in biomedical research beyond the 3Rs, which are now 62 years old, having been formulated in a time of less complex technology.

The first to take to the stage was Lindsay Marshall of Humane Society International (HSI) who introduced us to the BioMed21 collaboration. She started by emphasizing that even if we could scale up mouse models to human size they still would not be a perfect translation of human processes. She also mentioned that fewer animals are used for regulatory purposes than in previous years.

It’s time to move to new methods for toxicity testing to get over the backlog of chemicals that need to be tested but with better biological relevance. These NAMs can cost less especially if computational and in silico methods are used. AOPs work well for toxicity testing and they now want to use them for biomedical testing too. Key events that cause disruption should be included to build a pattern capable of identifying relationships between the initiating events and the endpoints. This is key to finding diseases that have similar interconnections so that drugs with good efficacy can be identified as broad treatments for similar diseases.

A case study within the BioMed21 collaboration evaluated human biology-based disease research, including a workshop series about a disease-mapping initiative. Common themes across the workshops were the need for NAM-prioritized funding, the need for high-quality, open-access human data, the need for a common reporting format, and the need for more frequent use of case studies.

So far they have worked with the JRC to generate a knowledge service about respiratory tract diseases, and with the Centre for Predictive Human Model Systems (CPHMS) Indian webinar series to build NAM capacity across India specifically. They are currently working with academic partners to create educational resources on NAMs, and with pharmaceutical companies to make an assessment tool to identify translational failures within their protocols.

The next to present was Cristian Trovato from the University of Oxford. He presented his work on a human cardiac computational model. Contractility is driven by electrical signals, which can be portrayed by an electrocardiogram (ECG), and electrical abnormality can be a cause of cardiac failure. Testing the cardiac safety of drugs uses >500,000 animals per year.

Human multiscale modeling and simulation can just as effectively describe everything from the subcellular level to the whole heart. The ToR-ORd and Trovato2020 models used action potential recordings from healthy cells and were capable of examining the effects of drugs on action potentials and calcium transients. The Margara2020 model is an electromechanical model that looks at the effects of antiarrhythmic drugs. These models are all freely available in MATLAB and CellML formats.

Single-cell models cannot reproduce whole hearts, as cell-to-cell variability is an issue; human in silico models, on the other hand, include the experimental variability seen in vivo. Virtual assay software is now sophisticated enough to evaluate drug-induced effects on cardiac output using action potential, calcium, and contractility biomarkers. The input to these models is the IC50 of the drug's effect on each ion channel.
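For context, the standard simple pore-block approach scales each ion-channel conductance by the fraction left unblocked at a given drug concentration, computed from the channel-specific IC50 and an assumed Hill coefficient. The sketch below uses illustrative values, not the speaker's actual parameters:

```python
# Illustrative pore-block model: scale channel conductances by the fraction
# not blocked at a simulated drug concentration, given per-channel IC50s.
def fraction_unblocked(conc_uM: float, ic50_uM: float, hill: float = 1.0) -> float:
    """Fraction of channel conductance remaining at drug concentration conc."""
    return 1.0 / (1.0 + (conc_uM / ic50_uM) ** hill)

# baseline conductances (arbitrary units) and assumed IC50s for a made-up drug
baseline = {"IKr (hERG)": 0.046, "INa": 75.0, "ICaL": 0.0001}
ic50s = {"IKr (hERG)": 1.0, "INa": 30.0, "ICaL": 15.0}   # uM, illustrative

conc = 3.0  # simulated free plasma concentration, uM
scaled = {ch: g * fraction_unblocked(conc, ic50s[ch]) for ch, g in baseline.items()}
print(scaled)  # scaled conductances would feed the action-potential model
```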

They have collaborated with industry and regulators on personalized whole-organ modeling and on simulation of cardiac activation under different electrical conditions. These models are useful for disease modeling, enabling differentiation between causes and effects. They can include scars and fibrosis, and work is ongoing to find the link between proarrhythmia and disease phenotype. Machine learning is being used to identify HCM phenotypes and find risk factors using data from magnetic resonance imaging (MRI) scans; an inverted wave was identified as being caused by ionic remodeling. This model integrated clinical and experimental data, showing high accuracy and high spatiotemporal resolution.

Francesca Pistollato from the JRC was next, presenting the JRC's work on the replacement of animals in Alzheimer's disease (AZ) research. Dementia (including AZ) affects 50 million people globally, and the last 15 years have seen no approval of disease-modifying drugs, only drugs that treat symptoms. The animal models used are based on gene-disease links from human studies, but these models cannot recapitulate the clinical-pathological complexity of AZ and tend to generate false-negative results. It is also hard to recapitulate in animal models the lifestyle risk factors of late-onset AZ, such as age, low activity levels, smoking, and poor sleep. Their AOP concept can identify which signaling pathways are perturbed at the onset of AZ, showing potential applications in drug screening studies.

The AZ non-animal models being studied include hiPSCs for an epigenetic background, 3D cell models with improved tissue qualities, MPS in high-throughput screening, next-generation sequencing for omics studies, computer modeling, and observational studies in AZ patients. She says that the integration of these models is the key to moving beyond 3R reductionist approaches.

Helena Hogberg from CAAT then talked to us about neuroscience research. The inter-individual variability within humans and the species differences between humans and animal models are issues that need to be addressed; animal models also do not include environmental risk factors.

Human models have been posed as the solution to these limitations. Human iPSC-derived neuronal models include microglia, are reproducible in size and composition, and can have different genetic backgrounds thanks to the iPSCs' characteristics. These neurons have some functionality due to calcium signaling in the glial cells, neurons, and the oligodendrocytes that produce the myelin sheath wrapped around the axons. The models have been used for developmental neurotoxicity and degeneration research, cancer research, myelination, blood-brain barrier, resilience, autism, and infectious disease research. One particular study looked at pediatric anesthetic neurotoxicity, noting that early exposure to anesthetics can impact brain development and lead to cognitive deficits; the developmental switch was inhibited by isoflurane (an anesthetic) and was also perturbed in the developmental toxicity AOP. Reverse transcriptase-polymerase chain reaction (RT-PCR) performed on these iPSC models enables the quantification of RNA. Immature oligodendrocyte markers decreased in NPCs over the full 8 weeks. They are looking into lipidomics for quicker characterization than assays such as western blot and immunohistochemistry (IHC). The microglia in BrainSpheres are used to examine cytokine release at the gene expression level, studying the role of cellular communication in cytokine release.

Next, John Greenman of the University of Hull presented his work using lab-on-chips to assess biopsies. He puts slices of a biopsy specimen into individual chips to identify the correct treatment plan. The tissues are perfused at all times, with controlled gas concentrations in the media, to keep them viable. There are many different chip types, and many different tumors can be used in these platforms.

One particular study looked at the effectiveness of irradiation treatment of head and neck squamous cell carcinoma (HNSCC). They stained the cells with CK2 and the apoptosis marker M30 to assess tumor cell viability after treatment, noting reduced viability in treated cells; increased doses produced an increased apoptotic effect and greater cellular damage.

PCOS tissues can be subjected to any assay, ultimately showing that radio- and chemotherapy affect the tissue. In thyroid slices, they noted lactate dehydrogenase (LDH) release at the start, and benign tissues made more thyroxine than the cancerous tissues.

The last presentation was from Cleber Trujillo from VyantBio. His presentation looked at stem cell-derived brain cortical organoids that recapitulate the molecular and cellular stages of corticogenesis. The formation of neural networks in vivo is not well understood.

Multi-electrode arrays (MEAs) can measure and quantify the neural firing activity of cortical organoid networks over time. Oscillatory patterns emerge over time; at the start, very little patterning is seen. This is a powerful model of brain activity and dysregulation.

Epilepsy is associated with CDKL5 gene deficiency, altered morphology, changing connections, and neural activity in a constant state of hyperexcitability. They made a high-throughput screening platform from stem cells, forming 3D astrocyte and neuron spheroids whose activity was monitored by calcium oscillations. They employed CDKL5 deficiency disorder (CDD) phenotyping for activity testing to find disease-specific changes, and from these changes built a platform to test compound impact and drug-associated characteristics such as efficacy. They noted the rescue of normal expression in the CDD-specific profiles. Their future hope is to utilize these for personalized medicine.

S313: Roadmap to integration (Theme: Safety)

The first speaker of this session was Jan Willem van der Laan of the Medicines Evaluation Board in the Netherlands. He spoke to us about animal-free testing of cell-based medicinal products (CBMPs). The starting point for these tests is 100% uncertainty, as no data are available, and the clinical relevance of animal models in this context is questionable. The toxicity studies for CBMPs evaluate the clinical impact of the animal dose. The products are classified by the location and type of cells present; most of the animal toxicity studies had no findings. Heterologous systems refer to human cells tested in animal models, whereas homologous systems refer to animal cells tested in animal models. No animal toxicity studies have been performed for dendritic cells. The animal-free safety package without in vivo tests is composed mainly (90%) of clinical experience, of which 33% was derived from in vitro tests. Human cells have been tested in rats and mice for general toxicity. Animal toxicity studies were informative 53% of the time for local tolerance, and NSG mice were considered relevant for CD3+ products, although opinions were split on this (36% of those who voted).

The next to present was Carl Westmoreland of Unilever, who discussed their Chinese scientific workshops on chemical safety. They aim to move the world towards using NAMs through discussion of political, social, and technological drivers. The social aspect, in particular, is an encouraging driving force, as in today's society many more people, particularly of the younger generations, are demanding cruelty-free products. Significant changes in safety and cosmetic regulation allow the importation of cosmetics into China without animal testing, but a safety assessment must be performed and production must be good manufacturing practice (GMP) compliant. NGRA can be used as part of this safety assessment.

Unilever organized a workshop in 2019 in Shanghai, to assess the role of NGRA and NAMs in safety assessment. NAMs are not new to China, there have been 3 books published in Chinese on NAMs. In this workshop, they identified the challenges they face when trying to integrate NAMs and NGRA. This included increasing their lab capabilities and updating the safety regulations to adopt NAMs. They then saw presentations of Chinese scientists’ work using NAMs. Dr. Gao is working on a tiered approach in an exposure-led framework for NGRA. Dr. Zhang is researching dose-dependent transcriptomic approaches for screening and chemical toxicity prediction. Dr. Xu discussed his work using multi-omics for chemical risk assessment. Other research looked at the increased prevalence of OOCs for chemical safety assessment.

BioCell has developed a 3D corneal epithelial model called BioOcular for use in predictive toxicology. Chinese regulators rely on animal tests, but they are taking steps towards using QSAR and read-across when possible. The workshop then discussed undergraduate and graduate toxicology education, identifying places where NAMs could be included.

Emma George from Cruelty Free International then took over to discuss the REACH program and the 3Rs. REACH promotes alternative assessment methods. Polymer substances are being brought under REACH legislation; this will see 2 million animals used for testing them, and a further 2 million to test for endocrine-disrupting properties, on top of the many animals already used under REACH. ECHA has to identify and demonstrate potential risks to human health or the environment to make improvements. They do encounter challenges, particularly with replacing in vivo tests with read-across or weight-of-evidence assessments, as these are often seen as poorly documented. NAMs have lower acceptance and are rarely used as stand-alone replacements, usually serving as a precursor to in vivo testing. Animal testing methods are rarely evaluated for their validity as test systems that mimic humans, and there is no target in place to phase them out.

To overcome these shortcomings they have developed a 10-point plan to reinvigorate REACH. First, investigate the low uptake of in vitro, in silico, and in chemico techniques to understand how to encourage their use. Second, acknowledge the total number of animals used under REACH, to evaluate its success and identify areas that can be targeted. Third, make data sharing mandatory between registered substances to avoid repeat animal tests, leading to fewer animals being used. Fourth, review the use and acceptance of waiving arguments under Annex XI. Fifth, revise the legislation to ensure it promotes alternatives and treats animal use as a last resort only, with appropriate actions directed to the relevant bodies. Sixth, commit member states and the Commission to direct funding towards the development of alternative methods. Seventh, enforce the last-resort principle that member states can apply under the 2010/63 directive. Eighth, honestly assess the uncertainty produced by animal tests and, ninth, embrace alternative approaches that can offer the same level of protection, if not better. The final point of the plan to revitalize REACH is to set a deadline for phasing out animal tests that can be worked towards.

The penultimate speaker was Shannon Bell of Integrated Laboratory Systems, who spoke to us about tools and resources that can be used to increase the adoption of alternative methods. Knowledge gaps impede the adoption of new methods, but missing technical information is only half of the problem; lack of confidence is the other half. User-friendly, open-access resources can help to bridge the information gap. Data for compounds can be found through the ICE website or other databases, and the Chemical Quest tool can find information on a chemical by identifying structurally similar compounds. Most of the data in this database come from high-throughput in vitro sources, but some in vivo data are also present. This information can be put into context using the IVIVE tool within ICE. The data can then be interpreted against other similar compounds to assess the effect of additional chemical features on bioactivity, which is possible with the EPA chemical dashboard. Improving access to diverse data, and being able to put it into context, is very useful for bridging data gaps. User-friendly tools such as these can help to increase confidence and allow communication between domain experts, all of which works in favor of promoting NAMs.

The final speaker was Lorenzo Del Pace from FerSci. He presented the findings of their 22-question survey. The survey consisted of ethically unloaded questions with no judgment or positive spin, aiming for maximum inclusivity, and it targeted people with higher seniority in their fields, as they would have the most impact on which models are used in research. For years we have been working on the perception that there is a single, direct link between research and innovation, but this is not true, and it most certainly is not linear. Stakeholders are not merely investors, employees, and customers; societal approval also counts. From their survey they were able to build a more detailed model of how socio-economic approval comes about: the old model assumed that researchers and results alone led to approval, whereas they now understand that the models used, the regulators involved, and societal approval all contribute to socio-economic approval.

S300: (Multi-)organ models-1 (Theme: Innovative Technologies)

This session aimed to highlight the broad spectrum of organ models available today and the many ongoing studies using and improving them.

The first speaker in this session was Katharina Schimek from TissUse. She started by emphasizing why we need these organ models as opposed to animals, stating that only 8% of translation from animals to humans is useful.

TissUse’s solution to this is to combine organ models with test assays. They have three multi-organ chips available with the capacity to hold 2, 3, and 4 different sets of tissues. Their breathable multi-organ chip is based on a HUMMIC chip set up. It has a microscopic glass slide base, PDMS middle, an adapter plate top, and has a pumping frequency of 0.5 Hz. It has one 96 well compartment and two 24 well compartments, this enables the volume in the chip to be increased to 4 mL.

In their liver spheroids, they use a combination of HepaRG cells and primary human stellate cells in a ratio of 24:1, and their bronchial epithelial model uses mostly primary human bronchial epithelial cells. These were put in a chip together, and increased oxygenation was noted. They measured the PO2 by adding oxygen sensors to the chip: PO2 was stable in the compartments, and CO2 decreased when MucilAir was added. The model was suitable for long-term culture over 14 days. Co-cultures saw decreased LDH release; specifically, in the lung monoculture they noted almost no LDH release, but LDH was produced in the liver spheroids. While assessing the model they noted a 35% decrease from baseline ATP levels in the static and dynamic chip cultures, but saw no effect on tissue viability or albumin release over the 14 days. Through aflatoxin B1 (AFB1) exposure they were also able to look at the crosstalk between the tissues: after exposure to AFB1, they noted a decrease in albumin production, indicating that AFB1 caused some impairment of liver function in the chip.

Their future plans are to look at the inclusion of a micro-respirator and to combine the InHALE system with HUMIMIC technology to study the systemic effects of drug exposure.

The next to present was Julia Scheinpflug from BfR and Bf3R, Berlin. She spoke to us about her work on a human bone-on-a-chip model. Bone is a complex tissue, as it has both physical and biological parameters, and hypoxia can affect its phenotype. She studied the differentiation from osteoblasts to osteocytes; osteoblasts become embedded in the ECM, which has potential as an ossification model. In her lab, they use a bioprinter to build osteoblast models in both static and dynamic cultures. They witnessed no difference between the models in survival, but the static system had the highest cell proliferation rate, and all underwent anaerobic glycolysis.

All models were then exposed to mechanical load and hypoxia. In osteoblasts, the expression of alkaline phosphatase (ALP) increased, while osteocytes saw increased expression of ORP150 and up-regulation of vascular endothelial growth factor A (VEGFA). She also noted intramembranous ossification. Some cells died under mechanical load in the first few days. No increase in glucose consumption was seen, so no anaerobic glycolysis was occurring. Lipoprotein lipase (LPL), CAPG, sclerostin (SOST), and ORP150 were up-regulated, which explained the mixed phenotype seen in some models.

This bioprinted model system shows viability, metabolic activity, decreased osteoblast activity, increased osteocyte differentiation, and obvious biological responses to stimuli, making it a good candidate for mechano-biological studies on osteocytes.

They also created a micro-mass culture that was likewise viable and metabolically active, with both increased and decreased osteocyte differentiation and a mixed phenotype with local distributions. This model is still a work in progress, with extending the cultivation time and potential preculture in osteogenic medium at the top of their priority list.

Guillermo Alberto Gomez from the Centre for Cancer Biology was next to present his work on brain cancer models. Glioblastomas are notoriously hard to model due to their intratumor heterogeneity. To overcome this, Guillermo has been focusing on patient-derived tumor tissues to discover new targets using cutting-edge technology, performing in-depth analyses of patient-derived biopsy samples. The novel patient-derived glioblastoma preclinical models are based on organoid technology, and they use high-content imaging techniques to study the role of the tumor microenvironment in these organoids. To ensure growth on the bottom of the well (important for imaging) they used microplate inserts, which also fix growth at a set XY position, enabling high-resolution whole-organoid imaging. These microplate inserts enable long-term live organoid cultures with very minimal effects on organoid growth rate and no effect on gene expression profiles or structure.

Their studies have looked at tumor microenvironment interactions by analyzing tumor cell behavior within brain organoids, looking at patterns of migration, and monitoring division and growth. This is done in the hopes of better patient stratification and improved outcomes as a result.

The next presentation was from Madalena Cipriano of the Organ-on-a-chip Group, who presented her work on a choroid-on-chip model. The choroid is a pigmented tissue between the sensory retina and the sclera. The chip models include retinal pigment epithelial (RPE) cells, melanocytes, microvascular endothelial cells (MVEC), and immune cells. Melanocytes have a role in pharmacokinetics and pharmacodynamics, so it is important to include them. Testing drugs for ocular side effects is important but commonly not done, even though up to 70% of people who take biologics present with ocular side effects.

She is working with an MPS that includes vascular-like perfusion and human tissues: iPSC-derived RPE cells, primary endothelial cells, pigmented melanocytes, and immune cells. It has three layers and is cultured with laminin, fibronectin, and collagen. The model is viable for over 2 weeks in culture.

When testing the model she found tight barrier junctions (shown by ZO-1 staining) and pigmentation (stained with TYRP1) in the RPE cells. The endothelium formed a fenestrated barrier with two layers, and retention was two-fold higher across the outer blood-retinal barrier (oBRB) than across the endothelial stromal barrier. Studies of the melanocytes showed pigmentation, adjustable cell density, cytokine release (specifically IL-6), and 3D tissue structures. The peripheral blood mononuclear cells (PBMCs) were active, also released cytokines, and had migratory function. Through her studies, Madalena discovered that T-cell activation alone was not enough to trigger migration, and the cells retained high viability after perfusion. Activation promoted migration into both endothelium and melanocytes, and whole-PBMC recruitment increased. Melanocyte density promoted PBMC and specific T-cell migration. When the models were exposed to Cyclosporin A together with activation beads, cell viability increased and immune cell retention decreased. Cytokine release was dose-dependent for TNF-alpha and similar compounds. The model is sensitive enough to evaluate bispecific T-cell engagement, with decreased cytokine release and decreased T-cell recruitment detectable. Madalena's future research will focus on creating a toolbox for in vitro models.
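
Dose-dependent cytokine readouts like these are typically summarized by fitting a four-parameter logistic (Hill) curve to estimate an EC50. The sketch below is a minimal illustration of that general approach in Python; the doses, IL-6 values, and starting guesses are hypothetical and are not data from the talk.

    import numpy as np
    from scipy.optimize import curve_fit

    def hill(c, bottom, top, ec50, n):
        """Four-parameter logistic (Hill) dose-response curve."""
        return bottom + (top - bottom) / (1.0 + (ec50 / c) ** n)

    # Hypothetical dose (uM) and IL-6 (pg/mL) readouts -- illustrative only.
    dose = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0])
    il6 = np.array([12.0, 15.0, 31.0, 80.0, 160.0, 210.0, 225.0])

    # Initial guesses: baseline, plateau, mid-range EC50, Hill slope of 1.
    p0 = [il6.min(), il6.max(), 0.3, 1.0]
    (bottom, top, ec50, n), _ = curve_fit(hill, dose, il6, p0=p0, maxfev=10000)
    print(f"EC50 = {ec50:.2f} uM, Hill slope = {n:.2f}")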

Our penultimate speaker was Oussama El-Baraka from BASF, who presented his work on hair bulb models. The human microfollicle model has restored mesenchymal-epithelial cross-talk and bulb polarization, but such models have the drawbacks that they are not autologous and are of limited use for high-throughput and high-content screening.

Oussama's group has been studying iPSC models to overcome these limitations. iPSCs differentiated into keratinocytes expressed cytokeratin genes, epithelial markers, and bulb markers. Differentiation into melanocytes yielded a star-shaped morphology, expression of melanogenesis markers, melanosomes, and transcription factors, and melanin transfer into keratinocytes, which they confirmed with flow cytometry. Differentiation into dermal papilla cells yielded a fibroblastic morphology and expression of mesenchymal markers, extracellular matrix components, and signaling pathway markers.

They combined these three differentiated cell types into a 3D autologous hair bulb with a core of neo-papilla surrounded by keratinocytes and melanocytes. This model showed improved maturation compared to the other models but no polarization after 12 days. To enhance maturation they improved the co-culture medium, which also enabled exploration of crosstalk and single-cell RNA-seq analysis. The model is continually being validated by testing known drug compounds. It is designed for high-throughput screening, and they are considering up-scaling production with bioreactors. In the future, they hope to incorporate a skin model for further maturation and biological similarity.

The final speaker of this session was Ilka Maschmeyer from TissUse, whose assay development protocol has four main areas: model preparation, a repeated-dose exposure regimen, in-process sampling, and analysis using quantitative PCR, histology, omics, and microarrays.

Their recent endeavors include a thyroid-liver co-culture that aims to assess direct and indirect thyroid toxicity simultaneously. In a 21-day culture of thyroid and liver spheroids, the liver produced normal albumin levels and normal gene expression was seen in both tissues.

Their pancreas-vasculature tumor model utilized the vascularized two-organ chip (Chip2). They loaded the tumor and islets and cultured them for 10 days; the tumors grew dramatically while the islets displayed normal insulin production. They applied an oncolytic virus on day 2 or day 5 to study the effects of infection: it inhibited tumor growth, and the virus did not infect the islets.

They also developed a two-organ chip system that includes skin and liver for toxicokinetics and toxicodynamics, allowing both topical and systemic dosing of test drugs. They tested the hair dye ingredient 4-amino-2-hydroxytoluene (AHT) and noted more prominent AHT metabolites after dosing. In the epidermis-only model, topical application increased N-acetyl AHT, demonstrating that the route and frequency of application cause different metabolic changes. They also tested a full-thickness model with epidermis and dermis; topical application (rinsed off to mimic normal use) again increased N-acetyl AHT, which was attributed to first-pass metabolism of AHT.

They then went on to make a four-organ chip model for ADME testing that included liver, intestine, proximal tubule, and skin. In this model they saw a reduction in the glucose concentration of the media over each 24-hour period, and they were able to demonstrate long-term homeostasis and stable mRNA expression over the full 28-day culture.

S154: NASH, the liver disease of the 21st century? Alternative technology in the spotlights (Theme: Disease)

Filip Beirinckx of Galapagos was the first speaker in this session on alternative technology for NASH research. He spoke about the use of human hepatocytes to model lipogenesis, among other things, starting with Lipostem, an insulin-dependent de novo lipogenesis (DNL) pathway model for NASH and non-alcoholic fatty liver disease (NAFLD). NAFLD in particular requires better therapeutics: despite >50 clinical candidates, the current primary treatment is weight loss. Better models are also needed to unravel NAFLD pathways and to identify suitable therapeutic targets. Metabolic defects lead to the development of hepatic steatosis, uptake of fatty acids from adipocytes and the gut, and increased insulin production resulting in hyperglycemia and hyperinsulinemia.

This is also known as the lipogenic pathway. Insulin is a trigger of this pathway and acts via sterol regulatory element-binding protein 1C (SREBP-1C) to enable transcription of DNL enzymes. Glucose is the substrate of the pathway and enters via the GLUT2 transporter; its glycolytic intermediates, from glucose-6-phosphate to pyruvate, can activate carbohydrate-response element-binding protein (ChREBP), which carries out the same function as the sterol regulatory element-binding proteins (SREBP), or pyruvate can enter the Krebs cycle to enable the reactions that drive transcription of DNL enzymes.

Studies of NAFLD mostly use rodent models of DNL, but these are low throughput. Human models use free fatty acids to induce lipid accumulation, but current models lack the sensitivity for insulin-dependent DNL modeling and lack reproducibility. His group has been developing models using human skin-derived precursors (hSKPs) differentiated into hepatic cells (hSKP-HPCs), a type of hepatocyte-like cell (HLC), to recapitulate DNL. These cells are obtained from foreskin specimens, undergo sphere formation and digestion, and are then cryopreserved until use; they exhibit a mixed phenotype. The models were triggered with glucose, insulin, and fructose. Insulin increased the expression of fatty acid synthase (FASN) and other genes, and the cells demonstrated reproducible, insulin-driven lipid accumulation. The triggering compounds were added 21 days after differentiation along with the test compounds, and undifferentiated SKPs were used as the control to ensure the response was hepatic.

Their unique DNL model used hSKP-HPCs, iPSC-derived hepatocytes, cryopreserved primary human hepatocytes, and HepaRG cells. They validated this model against known small-molecule acetyl-CoA carboxylase (ACC) inhibitors, which reduced the incorporation of acetate into the fatty acids while leaving stearoyl-CoA desaturase 1 (SCD1), insulin response, and lipid accumulation unchanged, showing that the compounds were active. The models were miniaturized into 384-well format to increase throughput. They then screened 1600 kinase- and G-protein coupled receptor (GPCR)-focused small molecules with existing pharmacological annotation to confirm the pharmacology within their model. From this they obtained 192 lipid accumulation inhibitors, which were whittled down to 134 inhibitors using dose-response assays. Of the 134, they removed the less selective compounds that inhibited more than 5 targets, leaving 92 inhibitors, which were further reduced by analyzing the targets modulated, leaving 10 inhibitors, 8 of which were linked to steatosis. Variability across triplicates was low and the responses produced clean dose-response curves. In conclusion, these models are capable of evaluating novel therapeutics that inhibit the DNL pathway.
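
To make the shape of such a screening cascade concrete, here is a minimal pandas sketch of sequential hit filtering. The table, values, and thresholds are hypothetical (apart from the stated >5-target selectivity cutoff) and do not represent Galapagos' actual pipeline.

    import pandas as pd

    # Hypothetical screening table: one row per compound from the 1600-member
    # kinase/GPCR-focused library. Values and column names are illustrative.
    hits = pd.DataFrame({
        "compound_id":        ["cmpd_001", "cmpd_002", "cmpd_003", "cmpd_004"],
        "primary_inhibition": [72, 15, 88, 65],   # % inhibition of lipid accumulation
        "dose_response_ok":   [True, False, True, True],
        "n_targets":          [3, 1, 9, 2],       # annotated targets modulated
        "steatosis_linked":   [True, False, True, True],
    })

    # Stage 1: primary hits inhibiting lipid accumulation (192 compounds in the talk).
    primary = hits[hits["primary_inhibition"] >= 50]

    # Stage 2: confirmation by dose-response assays (134 in the talk).
    confirmed = primary[primary["dose_response_ok"]]

    # Stage 3: selectivity -- drop compounds inhibiting more than 5 targets (92 left).
    selective = confirmed[confirmed["n_targets"] <= 5]

    # Stage 4: target-level triage toward steatosis-linked mechanisms (10, then 8).
    final = selective[selective["steatosis_linked"]]
    print(final["compound_id"].tolist())  # -> ['cmpd_001', 'cmpd_004']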

The next speaker was Joost Boeckmans from the University of Brussels. He spoke about how type 2 diabetes and dyslipidemia are factors that influence NASH. The in vivo models of both are generally dietary or genetically modified, while in vitro models utilize primary human hepatocytes, carcinoma-derived cells, stem cells, and co-cultures. The cell source of his model is the same as Filip's: foreskin-derived hSKPs. These are differentiated into hSKP-HPCs and then into hSKP-HPC NASH models that mimic the metabolic syndrome, by exposing the cells to increased insulin and glucose levels together with Kupffer and stellate cell factors. The model displayed increased interleukin secretion in response to these factors. To determine disease relevance, they compared the chemotaxis, cell recruitment, and lipid accumulation in the models with those seen in clinical samples.

After exposure to Elafibranor they noted a restriction of lipid accumulation in the cells, a decrease in neutral lipids, and a reduction in lipid load. Analysis of the inflammatory secretions led them to conclude that the compound has anti-inflammatory properties. More compounds were tested on this model alongside other models, including HepaRG, PHH, LX-2, and HepG2 cell line models. The lipid load increase was least pronounced in the HepaRG model, potentially explained by the different mechanisms present in different models; inflammation occurred in all models but was also least pronounced in HepaRG. By scoring the anti-NASH efficacies of the compounds, they created an algorithm to predict efficacy from the in vitro test results, with scores based on potencies and other factors. Combinations of different in vitro tests on NASH models can thus contribute to testing potential drug candidates.
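
Composite efficacy scores of this kind are often built as a weighted sum of normalized per-assay readouts. The sketch below illustrates only that general idea; the assay names, weights, and normalization are assumptions, not the algorithm described in the talk.

    from typing import Dict

    # Hypothetical per-assay weights (illustrative; not from the talk).
    WEIGHTS: Dict[str, float] = {
        "lipid_load_reduction": 0.5,   # anti-steatotic effect
        "il8_reduction": 0.3,          # anti-inflammatory effect
        "potency_pEC50": 0.2,          # potency of the response
    }

    def anti_nash_score(readouts: Dict[str, float]) -> float:
        """Combine normalized in vitro readouts (each scaled to 0..1)
        into a single weighted anti-NASH efficacy score."""
        return sum(WEIGHTS[k] * readouts[k] for k in WEIGHTS)

    # Example: strong lipid reduction, moderate anti-inflammatory
    # activity, and good potency.
    score = anti_nash_score({
        "lipid_load_reduction": 0.8,
        "il8_reduction": 0.5,
        "potency_pEC50": 0.7,
    })
    print(f"{score:.2f}")  # -> 0.69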

The third and final speaker of this session was Siobhan Malany of the University of Florida, who discussed drug discovery, disease modeling, and therapeutic evaluation of NAFLD compounds. NAFLD is a spectrum disorder that ranges from normal liver, to liver steatosis, to NASH, to cirrhosis, and finally to hepatocellular carcinoma (HCC). hiPSC technology can be beneficial for studies of toxicity, function, and phenotype, and for screening.

For drug discovery applications they developed a model of steatosis under stress. They increased fat accumulation and added triglycerides (TG) to cause endoplasmic reticulum (ER) stress, which is known to induce NAFLD by disrupting protein folding, Ca2+ storage, and lipid synthesis. Test compounds were added on day 7, lipids on day 8, and staining was performed on day 9. They used an Opera high-content imaging system with spot detection to quantify lipid uptake. ER stress induced steatosis in a dose-responsive manner, which they analyzed by RNA-seq. Obeticholic acid was used to inhibit the ER stress that causes lipid accumulation; it also up-regulated the FGF19 and SLCF1B genes and down-regulated the CYP7A1 gene. Once compounds are identified as hits, they are screened against annotated compound libraries for quick target identification. Phenotypic pilot screening showed that this is a robust platform that can identify lipid accumulation and primary hits and can filter for cytotoxicity. They ran 2 days of testing with cyclin-dependent kinase (CDK) inhibitors and then created a dendrogram by hierarchical clustering. CDK4 was identified as part of the complex that causes hepatic steatosis in mice, and inhibition of CDK4 prevented steatosis. Gene analysis of these models identified a significant decrease in the regulation of CDK4 and CbpA.
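
Hierarchical clustering of phenotypic screening profiles, as used here to build the CDK inhibitor dendrogram, can be sketched in a few lines of SciPy. The profile matrix below is random placeholder data, not results from the screen.

    import numpy as np
    from scipy.cluster.hierarchy import linkage, dendrogram
    import matplotlib.pyplot as plt

    # Placeholder data: 12 hypothetical compounds x 6 phenotypic readouts
    # (e.g. lipid spot count, cytotoxicity, nuclei count, ...).
    rng = np.random.default_rng(0)
    profiles = rng.normal(size=(12, 6))
    labels = [f"cmpd_{i}" for i in range(12)]

    # Ward linkage on Euclidean distances between compound profiles.
    Z = linkage(profiles, method="ward")

    # The dendrogram groups compounds with similar phenotypic signatures,
    # e.g. CDK4-selective inhibitors clustering together.
    dendrogram(Z, labels=labels)
    plt.tight_layout()
    plt.show()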

For disease modeling, patient-specific cells and iPSCs were utilized. Models carrying the PNPLA3 SNP, a variant associated with an increased risk of NASH, were included. They are currently developing spheroids for this type of testing, which is advantageous as it requires only small quantities of cells.

For the evaluation of candidate molecules, they looked at CXCR6 antagonists to inhibit HCC. Such an antagonist does not inhibit HCC in mice but does potently inhibit HCC in humans, highlighting one downfall of mouse models in drug evaluation studies. They identified compound number 457 as an inhibitor of tumor growth that caused no adverse effects. CXCR6 inhibitors have the potential to block the inflammatory response that can lead to fibrosis progression. Current work includes using mixed-cell spheroids to look for anti-inflammatory, anti-steatosis, and anti-fibrotic compounds and to learn more about their mechanisms of action.

S305: (Multi-)organ models-3 (Theme: Innovative Technologies)

This session was the third and final installment of the multi-organ models series, consisting of presentations from 6 presenters from around the world. The first was Chandani Sen from the University of California, Los Angeles (UCLA), who presented her work on lung cancer models. She has focused specifically on small-cell lung cancer (SCLC), an aggressive neuroendocrine tumor that is metastatic in two thirds of cases. This type of cancer is chemoresistant, and there is a lack of therapeutic information from existing preclinical models. These preclinical models have technical challenges: 2D cell cultures oversimplify the tumor as a whole and underrepresent the microenvironment, while patient-derived xenograft models suffer from the obvious differences between the mouse and human lung microenvironment as well as long development times.

Organoids are being used to overcome these downfalls, as they have the potential to model the complex alveolar lung architecture in an inverse opal geometry. Her group has developed a microbead base to mimic the lung, consisting of AI-beads formed by electrostatic droplet generation and given a collagen and poly-dopamine coating. The cells adhere to the beads in the bioreactor and completely coat them, and the system can be scaled to high-throughput screening formats (96-well or 384-well). Through comparison of the structure and function of the lung organoids with in vivo data, they confirmed that the histology is comparable, with functionally similar stretching being an added benefit. Their validation compared the expression of neuroendocrine markers between the organoid and xenograft models, and the xenograft models were much inferior to the organoids for marker expression. The model's ability to capture SCLC relapse and chemoresistance is also showing great potential for therapeutic screening.

The next speaker was Liisa Vilen of AstraZeneca, who is working on multi-organ models for cardiovascular disease. For this research she chose a pump-driven recirculating platform. The model included human cardiac spheroids created from hiPSC-derived cardiomyocytes and primary cells; the protocol had to be modified to increase the number of cells to accurately fit the model. The models were characterized in terms of viability, maturation, beating, and bioenergetics. Cardiac functionality was preserved over the long-term culture, and structural maturation over time was shown by gene expression. Hyperglycemia alone did not induce a disease phenotype in the model; there were no signs of fibrosis or hypertrophy.

Another part of this study was the inclusion of primary liver spheroids in a HUMIMIC Chip4: 15 cardiac spheroids and approximately 125 liver spheroids were added. To induce disease states, they exposed the models to free fatty acids and hyperglycemia. Beating frequency was maintained over the full two weeks, although the rate was lower than in single cultures. The liver spheroids were more functional in the co-culture; in single cultures they saw no albumin secretion, and urea levels decreased over time. Ketone levels were higher in the diseased chip media, which became the steatotic disease media. The cardiac spheroids did not accumulate lipids.

After disease medium treatment, proliferation increased in single cultures but not in co-cultures. Proliferation was also stimulated in the healthy chip co-cultures but not in the healthy single cultures, which could be indicative of a chip effect or organ crosstalk. Their next steps with these models are to create a mathematical version of their co-culture MPS and to integrate electrodes into the chip for in-chip contractility monitoring.

Isabell Durieux from TissUse was next to discuss her work, which focuses on a human transplantation platform. She used liver cells, vasculature, and PBMCs from one donor, and liver cells from another donor as the transplant organ. Differentiated iPSCs were used to generate organoids, which were placed into their endothelialized Chip2 platform, and regulatory T cells were added to mimic a rejection therapy.

One area TissUse is still working on in this project is creating microvessels, such as capillaries, within the organ models. They bioprinted microvessels into the chip compartments with a dissolvable gel and covered them with a hydrogel to create hollow endothelialized structures (beads in the medium compartment also assisted with this).

One adaptation they are looking into is using more numerous, smaller channels in the Chip2 and connecting the sprouting endothelial cells to the organ models.

The next speaker was Carolina Motter Catarino of Rensselaer Polytechnic Institute, whose work looks at the use of 3D bioprinting in the creation of hair follicle models. Current models lack the diversity of biomolecules and cell types needed to represent hair follicles accurately; follicles contain more than 15 different cell types that are important for skin regeneration and permeation. In the 3D bioprinter, the bioink consists of keratinocytes, melanocytes, dermal papilla cells, human umbilical vein endothelial cells (HUVECs), and fibroblasts. The cell layers were grown one on top of the other, with the epidermis on top, resulting in a structure that closely resembles the human dermal papilla both morphologically and biochemically. The hair follicles were injected into the dermal filler layer; they did not witness hair growth, as the culture time was not long enough (hair can take months to grow). Nonetheless, the model resembled the natural root, including a root sheath and inner root sheath surrounding the hair core, representing both temporal and tridimensional organization. With this model she achieved more than 90% cell viability, and because it is made with a bioprinter it is fully automated and easily reproducible.

Then Alex Bastiaens of InnoSer Netherlands spoke to us about his research on neurodegenerative diseases (NDD) using organ-on-chip models. Before organ-chip models, NDD models lacked translatability. Typical preclinical readouts focus on synapse formation, dopamine production/release, electrophysiological activity, and neuromelanin formation. Exposure to Alzheimer's disease risk factors can cause aberrant signaling leading to synaptic failure, which in turn can cause brain atrophy. Mutational risk factors include cytosolic mutations and mutations that affect Ca2+ and mTOR signaling as well as Tau phosphorylation, ultimately leading to cell death. His research aims to make a 3D brain chip to overcome the limitations of previous NDD models. He started with a 2D neuronal cell network that was mature and intact, with readouts including tau secretion and mitochondrial membrane potential, intending to build this up to a 3D model.

For amyotrophic lateral sclerosis (ALS) he is looking into a combination of technologies to make these advanced 3D models. Parkinson's disease required recapitulation of the heterogeneous and complex pathophysiology of the disease, including investigation of the brain-gut hypothesis, alpha-synuclein, and mitochondrial dysfunction. The models included mutations in the SNCA, LRRK2, PINK1/Parkin, and GBA genes, the missing link in previous models, enabling more accurate recapitulation of patient physiology. Overall, these models still require better preclinical predictivity to create sufficient overlap with the clinic.

Additionally, he is using a unique library of patient-derived material that can be used to make human midbrain organoids with readouts of patient-derived disease hallmarks. This method employs artificial intelligence and machine learning for readouts such as TH+ neuron counts. He validated this PD human midbrain organoid model with a rare DJ1 mutation in a personalized screening manner; the recovery rate for dopaminergic markers was significantly different. Current projects make use of hiPSCs.
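
Automated marker-positive cell counting of the kind used for TH+ neurons is commonly implemented as a threshold-and-label image analysis step. The following scikit-image sketch illustrates that general approach only; the file name and size threshold are hypothetical, not details from the talk.

    import numpy as np
    from skimage import io, filters, measure, morphology

    # Hypothetical TH-immunofluorescence image of a midbrain organoid section.
    img = io.imread("th_staining.tif", as_gray=True)

    # Otsu thresholding separates TH+ signal from background.
    mask = img > filters.threshold_otsu(img)

    # Remove small debris, then label and count connected objects as cells.
    mask = morphology.remove_small_objects(mask, min_size=50)
    labels = measure.label(mask)
    print(f"TH+ objects detected: {labels.max()}")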

The final speaker of this session was Birthe Dorgau from Newcastle University, who presented her work on retinal organoids made from hiPSCs. This research was part of the NC3Rs CRACK IT challenges. The organoids she created contain an outer nuclear layer (ONL), outer plexiform layer (OPL), inner nuclear layer (INL), inner plexiform layer (IPL), and ganglion cell layer (GCL). The overall goal of the project was to use these organoids for toxicity studies as a proof of concept, focusing on 4 drugs: digoxin, thioridazine, sildenafil, and ketorolac. The organoids were differentiated for 150-200 days, and immunofluorescence was used to show the drugs' effects on the photoreceptor cells, with both rods and cones stained. Thioridazine reduced the number of rods at day 200; all drugs reduced the number of cones present; ganglion cell numbers decreased after digoxin; and astrocytes increased after all drugs, with thioridazine causing the most significant increase. From this study they determined which genes were up- or down-regulated, highlighting biomarkers that are already used for diagnostics and showing the relevance of these models. Exposure to light increased activity in ON-center ganglion cells, whereas OFF-center responses decreased.

These retinal organoids recapitulated the multi-layered form of the in vivo organ, displayed the same drug effects as reported in the literature, and responded as expected to light exposure. From the identification of biomarker genes they will be able to build a toxicological panel for future studies.
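
Biomarker gene identification from such studies usually comes down to a differential expression filter over fold changes and adjusted p-values. The sketch below is illustrative only; the genes shown are canonical retinal cell-type markers used as placeholders, and the values and thresholds are assumptions, not results from this work.

    import pandas as pd

    # Hypothetical differential expression table from drug-treated vs.
    # control organoids (gene names are placeholders, values invented).
    de = pd.DataFrame({
        "gene":   ["RHO", "OPN1SW", "GFAP", "POU4F2"],
        "log2fc": [-1.8, -2.3, 1.5, -0.4],
        "padj":   [0.001, 0.0005, 0.01, 0.30],
    })

    # Keep genes with |log2 fold change| >= 1 and adjusted p < 0.05
    # as candidate toxicity biomarkers for the panel.
    biomarkers = de[(de["log2fc"].abs() >= 1) & (de["padj"] < 0.05)]
    print(biomarkers["gene"].tolist())  # -> ['RHO', 'OPN1SW', 'GFAP']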

KEYNOTE: Joseph Wu – Stem Cells & Genomics For Precision Medicine

On the final day of the 11th World Congress on alternatives to animal methods, we were treated to a fantastic keynote presentation from Joseph Wu of Stanford University on the use of stem cells and genomics for precision medicine. Precision medicine has been improved over the years by a few breakthroughs, such as DNA sequencing, hiPSC platforms as a testbed for validating NAMs, and CRISPR-Cas9 genome editing. His group addresses key biological questions via two topics: elucidating disease mechanisms and the use of hiPSCs in cardio-oncology.

To elucidate disease mechanisms, they generated iPSCs from diverse patient populations to create 3D engineered heart tissue and organoids. To study dilated cardiomyopathy (DCM), they used a family with a TNNT2 mutation across three generations, as mouse models of the disease do not recapitulate the disease phenotypes. They generated iPSCs from the whole family, differentiating them into cardiomyocytes to compare affected and unaffected members. Family members carrying the mutation displayed altered calcium regulation, decreased contractility, and abnormal distribution of alpha-actinin. Treatment with beta-blockers improved the function of the affected models.

Familial hypertrophic cardiomyopathy (HCM), a leading cause of sudden cardiac death in young adults, was another disease studied. It is caused by >500 mutations spread across >30 genes. They did a similar study using a large family with a known HCM diagnosis; some of the younger family members were phenotypically negative but genotypically positive, indicating that they were too young to display physical symptoms of HCM at that time. The MYH7 mutation in particular causes increased intracellular calcium, leading to defects and subsequently arrhythmia and hypertrophic responses. Treatment with calcium channel blockers and beta-blockers could reverse the cell hypertrophy and arrhythmia.

Left ventricular non-compaction (LVNC) cardiomyopathy was also studied in a similar way using two families with known LVNC. Mutations in the cardiac transcription factor TBX20 cause abnormal activation of the TGF-beta signaling pathway, leading to decreased proliferative capacity of the cardiomyocytes and causing the arrested development seen in LVNC patients.

A large family with an LMNA mutation was used to study the link between the mutation and DCM. As with the other DCM family, they generated iPSC-CMs, which displayed both arrhythmia and impaired contractility; the LMNA mutation activated the platelet-derived growth factor (PDGF) signaling pathway, and blocking this pathway with tyrosine kinase inhibitors rescued the abnormal phenotype.

They also studied the aldehyde dehydrogenase 2 (ALDH2) SNP, which they determined to be the cause of alcohol flush in Asian populations. (A drug used to deter alcoholics from drinking works by a similar mechanism of action.) The SNP has also been linked to increased risks of coronary artery disease, hypertension, and cancer. They collected cells from Asian donors to create iPSC-CMs. Heterozygous carriers experience headaches and facial flushing when they drink alcohol, whereas homozygous carriers display the worst symptoms. The iPSC-CMs were exposed to hypoxia to model myocardial infarction, and the heterozygous CMs had significantly increased apoptosis compared to the wild type. They hypothesize that the enzyme not only detoxifies alcohol metabolites but also plays a role in the removal of reactive oxygen species. As you might imagine, this ethnic diversity is very hard to study in mice, so these models are proving very useful.

Human genetics is complex and messy, and physicians currently do not know how to approach genetically mutated but phenotypically normal patients. To address this, they did a study with iPSCs from family members, applying genome editing to these cells. The study allowed the characterization of individual variant types within the family: if a variant was characterized as benign, the mutation would not be pathogenic later in life and would therefore need no intervention.

The second topic dealt with the application of hiPSCs in cardio-oncology. Patient-derived stem cells are being used for breast cancer research: the cells were exposed to doxorubicin, and through in vitro assays they were able to determine which patients were more susceptible to doxorubicin toxicity, their cells showing more apoptosis and arrhythmias. These phenotypes are hard to predict clinically based on SNPs. The hiPSCs were also used to model trastuzumab-induced cardiac dysfunction; trastuzumab toxicity causes metabolic derangement, mitochondrial dysfunction, and energy depletion. For the evaluation of tyrosine kinase inhibitors (TKIs), they used the 21 most commonly used TKIs to create a cardiac safety index, again using hiPSCs.

They have been integrating iPSCs with multi-omics approaches to further understand the biology of heart disease. Their "clinical trial in a dish" used iPSCs from a large biobank, which reduced trial error and therefore increased confidence in its use. Transcriptomic profiling has been used to predict patient-specific drug safety and efficacy in vitro, alongside toxicological analysis, CRISPR genome editing, and prediction of drug responses.

In development they have a human encyclopedia of drug-gene signatures from HMG-CoA reductase inhibitors and calcium channel blockers, and a double reporter system to purify cardiac lineage subpopulations with distinct drug profiles and functions.

They used the LMNA family lineage again to focus on cardiomyopathy, but instead of differentiating the iPSCs into cardiomyocytes they differentiated them into endothelial cells. From their screening studies they identified lovastatin as a drug that can reverse the effects of the mutation. In the associated clinical trial, EndoPAT was used to measure the reactive hyperemia index (RHI), confirming what was seen in vitro: patients started the trial with low RHIs (<1), and after 6 months RHIs had increased (>1). The last thing Joseph mentioned was that the Stanford Cardiovascular Institute iPSC biobank holds more than 1500 iPSC lines from patients such as the families mentioned in this keynote presentation.

Abbreviations

AAV – Adeno-Associated Virus

ACC – Acetyl-CoA Carboxylase

ACE2 – Angiotensin-Converting Enzyme 2

ACM – Arrhythmogenic CardioMyopathy

ADA – Animal-Derived Antibodies

ADME – Absorption, Distribution, Metabolism, Excretion

AFA – Animal-Free Antibody

AFB1 – AFlatoxin B1

AFP – Alpha-FoetoProtein

AHT – 4-Amino-2-HydroxyToluene

AI – Artificial Intelligence

ALP – ALkaline Phosphatase

ALS – Amyotrophic Lateral Sclerosis

AOP – Adverse Outcome Pathway

APOLLO – Applied Proteogenomics Organizational Learning and Outcomes

APVMA – Australian Pesticides and Veterinary Medicines Authority

ATP – Adenosine TriPhosphate

BER – Bioactivity Exposure Ratios

BFC – 7-Benzoyloxy-4-triFluoromethyl Coumarin

BLI – Bio-Layer Interferometry

BOC – Body-On-Chip

BSEP – Bile Salt Export Pump

CAR T-cells – Chimeric Antigen Receptor T-cells

CBMP – Cell-Based Medicinal Product

CDD – CDKL5 Deficiency Disorder

CDK – Cyclin-Dependent Kinase

CFTR – Cystic Fibrosis Transmembrane Regulator

ChREBP – Carbohydrate-Response Element-Binding Protein

CIVM – Complex In Vitro Models

COPD – Chronic Obstructive Pulmonary Disease

CPHMS – the Centre for Predictive Human Model Systems

CRC – ColoRectal Cancer

CRISPR – Clustered Regularly Interspaced Short Palindromic Repeats

CYP450 – Cytochrome P450

DART – Developmental And Reproductive Toxicity

DCM – Dilated CardioMyopathy

DMPK – Drug Metabolism and PharmacoKinetic

DNL – De Novo Lipogenesis

DNT – Developmental NeuroToxic

DoDSR – Department of Defense Serum Repository

ECG – ElectroCardioGram

ECM – ExtraCellular Matrix

ED – Endocrine Disruptor

EFSA – the European Food Safety Authority

EGFR – Epidermal Growth Factor Receptor

EPA – Environmental Protection Agency

ER – Endoplasmic Reticulum

ESAC – EURL ECVAM Scientific Advisory Committee

EURL ECVAM – EU Reference Laboratory for alternatives to animal testing

Fab – Fragment antigen binding

FASN – Fatty Acid Synthase

Fc – Fragment crystallizable

GARD – Genomic Allergen Rapid Detection

GCL – Ganglion Cell Layers

GFP – Green Fluorescent Protein

GI – GastroIntestinal

GMP – Good Manufacturing Practices

GPCR – G-Protein Coupled Receptor

H – Heavy

H&E – Hematoxylin and Eosin

HCC – HepatoCellular Carcinoma

HCM – Hypertrophic CardioMyopathy

HESI – the Health and Environmental Sciences Institute

HHT – Hereditary Hemorrhagic Telangiectasia

hiPSC – human induced Pluripotent Stem Cells

HIS – the US Humane Society International

HLC – Hepatocyte-Like Cell

HNSCC – Head and Neck Squamous Cell Carcinoma

HSI – Humane Society International

hSKP-HPC – human Skin-derived Precursor-derived Hepatic Cells

HUVEC – Human Umbilical Vein Endothelial Cells

IATA – Integrated Approach to Testing and Assessment

IHC – ImmunoHistoChemistry

IL-2 – InterLeukin-2

IND – Investigational New Drug

INL – Inner Nuclear Layer

IPL – Inner Plexiform Layer

iPSC – induced Pluripotent Stem Cells

ISSCR – the International Society for Stem Cell Research

IVF – In Vitro Fertilization

JRC – Joint Research Center

KC – Key Components

L – Light

LDH – Lactate DeHydrogenase

LDL – Low Density Lipoprotein

LPL – LipoProtein Lipase

LVNC – Left Ventricular Non-Compaction

mAb – monoclonal antibody

MAFLD – Metabolic-Associated Fatty Liver Disease

MEA – Multi-Electrode Array

MIE – Molecular Initiating Events

MNT – MicroNucleus Test

MOA – Mechanism Of Action

MOS – Margin Of Safety

MPS – MicroPhysiological System

MRI – Magnetic Resonance Imaging

MVEC – MicroVascular Endothelial Cells

NAFLD – Non-Alcoholic Fatty Liver Disease

NAM – New Approach Method

NAS – the US National Academies of Sciences, Engineering, and Medicine

NASH – Non-Alcoholic SteatoHepatitis

NDD – NeuroDegenerative Diseases

NGRA – Next-Generation Risk Assessment

NHP – Non-Human Primate

NOAEL – No Observed Adverse Effect Level

NPC – Non-Parenchymal Cells

NSCLC – Non-Small Cell Lung Cancer

OA – Oleic Acid

OECD – Organisation for Economic Co-operation and Development

ONL – Outer Nuclear Layer

OOC – Organ-On-Chip

OPC – Oligodendrocyte Progenitor Cell

OPL – Outer Plexiform Layer

pAb – polyclonal antibody

PARP – Poly(ADP)-Ribose Polymerase

PBK – Physiologically-Based Kinetic

PBMC – Peripheral Blood Mononuclear Cells

PCA – Principal Component Analysis

PCAWG – Pan-Cancer Analysis of Whole Genomes

PCOS – Precision Cut Organ Slices

PD – PharmacoDynamic

PDGF – Platelet-Derived Growth Factor

PDMS – PolyDiMethylSiloxane

PEG – PolyEthylene Glycol

PETA – People for the Ethical Treatment of Animals

PHH – Primary Human Hepatocyte

PK – PharmacoKinetic

PMRA – Health Canada Pest Management Regulatory Agency

POD – Point Of Departure

QIVIVE – Quantitative In Vitro In Vivo Extrapolation

QSAR – Quantitative Structure Activity Relationship

ReCAAP – Rethinking Carcinogenicity Assessment for Agrochemicals

RHI – Reactive Hyperemia Index

RPE – Retinal Pigment Epithelium

RT-PCR – Reverse Transcriptase-Polymerase Chain Reaction

SCC – Squamous Cell Carcinoma

SCD-1 – Stearoyl-CoA Desaturase 1

scFv – single-chain variable fragment

SCLC – Small-Cell Lung Cancer

SOST – Sclerostin

SPR – Surface Plasmon Resonance

SREBP – Sterol Regulatory Element Binding Proteins

SZ – SchiZophrenia

TCGA – The Cancer Genome Atlas

TG – TriGlycerides

TGF – Transforming Growth Factor

TKI – Tyrosine Kinase Inhibitors

TNF – Tumor Necrosis Factor

TRAC – Toronto Recombinant Antibody Centre

TSAR – Tracking System for Alternative methods towards Regulatory acceptance

V – variable

VEGFA – Vascular Endothelial Growth Factor A

WoE – Weight of Evidence

WT – Wild-Type
