Examining Target Identification and Validation in Drug Discovery

Target identification and validation form the backbone of efforts to discover new drugs. Our expert panel discussed the challenges in this field and the solutions needed to improve drug safety and efficacy.

One highlight of Discovery Europe 2022 was a stirring panel discussion on target identification and validation in drug discovery. This conversation went right to the heart of drug design and discovery and tackled some common challenges and inconvenient truths in the field.

Meet the Panellists

Magnus Walter, Senior Director of Drug Discovery, Medicinal Chemistry and Screening Biology at AbbVie, joined our panel; he leads AbbVie’s drug discovery organisation in Germany. Walter specialises in neuroscience drug discovery, working specifically on Alzheimer’s and Parkinson’s disease.

Walter said that one challenge in treating neurodegenerative diseases was that they manifest and evolve over decades. He said the question that he and his colleagues are asking is “what does treating those diseases actually mean?” Furthermore, “how should we think about curing a long-term degenerative disease like Alzheimer’s or Parkinson’s?”

Laura Starkie, a Senior Group Leader at UCB, also joined our panel discussion. Starkie heads a target validation team that works across small molecules, large molecules, and gene therapy to validate new targets for UCB’s portfolio.

She said that the biggest challenge was coming up with new targets, linking them to the disease of interest, and showing that they are actually causal. “The future is reset and drug-free remission,” said Starkie, adding: “so that patients don’t have to take a drug every day to stay in remission—but that’s not easy.”

Finally, David Chambers, Systems Biology Head at Grunenthal, focuses on pain therapeutics. “My role is to fundamentally support target discovery and disease understanding,” explained Chambers. He said that he likes to think of the role as putting in place as many data-capture methodologies as possible for the right patient samples and human-relevant translational models. “Then we can put the right analytical pipelines in place with the right subsequent processing so we can empower data-driven decisions, both on target identification and disease understanding,” Chambers added.

Chambers’s biggest challenge is “linking up what we know about genetic causality with the benefits or disadvantages these modifications offer and having models that reflect these effects.” His team works on deepening the understanding of the biology and layering that onto the genetics. “How do you do that in terms of the biology, and how do you do that computationally?” Chambers asked. “In this respect we are very fortunate at Grunenthal to have established a group of dedicated, highly talented bioinformaticians and data scientists who can apply the latest iterations of machine learning to integrate genetic and genomic information to enable and accelerate target discovery.”

Target Identification After the Human Genome Project

The next point of discussion concerned the disconnect between what is currently known about the human genome and the proportion of known validated targets. Since the human genome has been sequenced, why is there still such difficulty in finding targets?

Chambers explained that this was “certainly the view when the human genome was first released — but clearly that’s only the very first level of instruction-set.” He added that the issue becomes a question of mathematical complexity: of 25,000 genes, scientists know the function of an estimated 12,000 to 16,000.

“That information flows down through the central dogma, adding layers of complexity the whole time,” said Chambers. Therefore, function emerges on the journey from gene to protein, to modified protein. Chambers described the complexity divide between the sequenced genome and the emergent function of biological targets: “it’s something that we’re still only scraping the surface of.”

When asked whether he thought the problem was the fact that the biology was simply too complex, Chambers said “it’s a problem of complexity, but it’s not too complex.” He continued: “You have an obvious endpoint; function emerges, so there’s a logic to it. The question is how do you unpick that logic in a way that is meaningful to whatever you’re trying to drug?”

Walter weighed in on this question from the perspective of time and cost: “Another way of looking at this is with a thought experiment. Let’s say you have 30,000 targets; we could prosecute them all in parallel. But assume that, conservatively, a clinical proof-of-concept from target to read-out costs somewhere around 10,000,000 USD (it’s probably a little bit more than that). If you multiply that by 30,000 targets, even if you could, this would not be a feasible undertaking.” Walter said that, because of this, the field is currently focused on narrowing down the number of targets, fast-tracking the more promising ones, and reducing this cost.
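To put a rough figure on Walter’s thought experiment: 30,000 targets multiplied by USD 10,000,000 per clinical proof-of-concept comes to around USD 300 billion, and that is before accounting for the cost of programmes that fail along the way.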

Starkie then built on these points. She said that often the problem is not just the target itself, but rather “what the whole biology does to compensate, once you have knocked down that target in the disease phenotype.” Therefore, although the genome unravels a good deal of useful information, the task of finding and drugging targets requires more involved processes and further biological understanding.

Why Do Drugs Fail Clinical Trials So Often?

It is a well-known and unfortunate fact that only one in ten drugs that reach clinical trials ever makes it to the market. Around 50% of these failures are for efficacy reasons – meaning that the drug does not achieve what it set out to do. The panel was asked what they thought was the reason for this, and whether this failure rate could be reduced.

Chambers said that a key reason for the failure rate in clinical trials is that not enough is known about the pathophysiology of disease contextually in humans. “Not just in the cells, but also the environment that those cells find themselves in,” he explained. Chambers added that signalling mechanisms and cell-to-cell contact also play a critical role in determining the context within which a target has a chance of operating. “When you understand that, you have a better chance of increasing efficacy,” he said. “But without knowing that, you’re just throwing things out there.”

Walter added his opinion on this topic: “I take a different view on these data,” he said. “Obviously, it’s not good that 90% of clinical candidates don’t work out, but back in the early 2000s when similar analyses were done, there was a very high rate of failure due to ADME (absorption, distribution, metabolism, and excretion) reasons.” In those cases, Walter said, the candidates failed because the compounds were simply not good enough.

“My proposal would be that if we fail for efficacy reasons, and we are sure that we have really engaged the target, we have tested the therapeutic hypothesis.” Walter suggested that such an experiment should then be considered to have a valid outcome, and therefore a success. “I would like this to be more than 10%, of course,” said Walter. “It is our job to make sure that we are putting forward the right agents that test the biological hypotheses in a competent way.”

Walter asserted that this should be the objective for drug discovery scientists. “I’m not so optimistic that we will ever understand the complexity of human biology sufficiently to predict the outcome of these clinical experiments,” he explained. But he thought that the overall focus for scientists should be on running the right experiments. “We also know from the work done across the industry – the five Rs and the three pillars – that we are on the path to getting better but, for now, we are nowhere near as good as we need to be,” Walter concluded.

Join and network with over 200 industry leaders at Discovery US: In-Person, where we will address the latest advancements in target identification, validation, and hit optimisation.