Speeding the Process for Pharmaceutical Drug Approvals
The need for rapid drug discovery and approval has never been more urgent as large pharmaceutical companies navigate rising costs, substantial price pressure, and record lows in both drug approval rates and the proportion of pipelines in phase III. COVID-19 has only intensified the pressure on pharma margins, with sharp reductions in detailing activity, hesitation among trial participants, widespread deferral of treatments, and the unexpected costs of setting up remote working.
Large pharmaceutical companies are battling on two fronts: to reduce the roughly 10-year average drug development timeline, and to increase the share of drugs that make it to approval, which currently sits at only 12%.
What’s driving such low new drug development and approval rates?
- 39% of terminated trials fail due to insufficient participant enrollment.
- 21% are terminated because new information raises safety or efficacy concerns.
- The remainder are terminated due to insufficient funds or administrative pressure.
Speeding up drug development timelines and increasing the number of drugs that get approved is the only way for large pharmaceutical companies to survive in an increasingly commoditized and competitive market.
In short: we need more drugs, and faster.
Producing More Pharmaceutical Drugs Faster
The pharmaceutical industry needs to adopt the approach of fewer participants in more specific and less costly trials. That is, trials must be more targeted.
Trials can be more targeted if clinical research teams can utilize more existing information to aid discovery, research, and prep. Teams can build on what already exists and trial only the specific gaps in information, meaning less duplication, less cost and waste, and much faster results.
If it’s that simple, what’s the problem?
The sheer quantity of information available globally demands lengthy synthesis processes, in which teams must spend months, or even years, combing through every possible source to find, analyze, and organize the relevant information.
For example, to make use of historical trial data, the team must find studies that meet research criteria, annotate them, and perform statistical analyses to summarize the findings. And clinical trial data isn't the only data source the team must comb through. Other data sources could include, for example, genomics databases, medical journals, medical databases, patent databases, or published competitor data.
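As a minimal sketch of that first step, here is what filtering historical trial records against research criteria and summarizing an outcome might look like. The record fields and values are hypothetical, chosen purely for illustration.

```python
# Illustrative sketch: filter historical trial records against research
# criteria, then summarize an outcome. All field names and values are
# hypothetical.
from statistics import mean

trials = [
    {"id": "T1", "condition": "melanoma", "phase": 3, "n": 420, "response_rate": 0.31},
    {"id": "T2", "condition": "melanoma", "phase": 2, "n": 80,  "response_rate": 0.25},
    {"id": "T3", "condition": "nsclc",    "phase": 3, "n": 610, "response_rate": 0.19},
    {"id": "T4", "condition": "melanoma", "phase": 3, "n": 150, "response_rate": 0.28},
]

def matching(trials, condition, min_phase):
    """Return only the trials that meet the research criteria."""
    return [t for t in trials if t["condition"] == condition and t["phase"] >= min_phase]

selected = matching(trials, "melanoma", 3)
summary = mean(t["response_rate"] for t in selected)
print(len(selected), round(summary, 3))  # 2 trials, mean response rate 0.295
```

Trivial at this scale, but the manual version of this step, across thousands of heterogeneous documents, is where the months and years go.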
Manual vs. Automatic Information Synthesis
In theory, everything can be automated, but there are a few things that make automating this synthesis process difficult.
First, the information lives in different databases, in varying formats, and it's mostly unstructured. That generally means no single computer application can meaningfully search through it all at once.
Second, the relevant information often has to be found through inference. Clinical research teams can combine learnings from two different sources into a third, inferred piece of information that is relevant for the new scenario. Most computer applications can't infer information easily, especially not across multiple sources in multiple formats.
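The kind of inference described above can be sketched in a few lines: combine a drug-to-gene mapping from one source with a gene-to-disease mapping from another to derive drug-to-disease candidates that neither source states directly. All names and mappings here are made up for illustration.

```python
# Hypothetical cross-source inference: neither source says "drugA relates
# to psoriasis", but joining them through shared genes infers it.
drug_targets = {"drugA": {"GENE1", "GENE2"}, "drugB": {"GENE3"}}       # e.g. from patent data
gene_disease = {"GENE2": {"psoriasis"}, "GENE3": {"asthma", "copd"}}   # e.g. from a genomics DB

def infer_indications(drug_targets, gene_disease):
    """Join the two sources on genes to infer candidate drug-disease links."""
    inferred = {}
    for drug, genes in drug_targets.items():
        diseases = set()
        for gene in genes:
            diseases |= gene_disease.get(gene, set())
        if diseases:
            inferred[drug] = diseases
    return inferred

result = infer_indications(drug_targets, gene_disease)
print(result)  # drugA -> psoriasis (via GENE2); drugB -> asthma, copd (via GENE3)
```

The hard part in practice is not the join itself but getting both sources into a joinable shape in the first place.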
Third, even when all information has been transformed and combined into a single data store or lake of some sort, often the information has differing levels of confidentiality which must be respected in the access permissions given to users. Balancing the ability to search all of the data holistically with the ability to construct privacy walls throughout the data is a complex technical challenge for most technologies.
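One common shape for such privacy walls, sketched here with a hypothetical schema, is to label every record with a confidentiality level and filter every query by the caller's clearance before returning results.

```python
# Sketch of permission-filtered search over a single store. The labels,
# records, and clearance model are all hypothetical.
LEVELS = {"public": 0, "internal": 1, "restricted": 2}

records = [
    {"doc": "published trial summary", "level": "public"},
    {"doc": "internal safety memo",    "level": "internal"},
    {"doc": "unfiled patent draft",    "level": "restricted"},
]

def search(records, clearance):
    """Return only the documents the caller's clearance permits."""
    limit = LEVELS[clearance]
    return [r["doc"] for r in records if LEVELS[r["level"]] <= limit]

print(search(records, "internal"))  # public + internal docs, never restricted
```

The tension the paragraph describes shows up as soon as searches span the whole store: the filter must be applied inside every query path, not bolted on afterwards.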
Finally, (as if all of that wasn’t complex enough!), the superset of information that needs to be synthesized for this process is not a static, finished library. There are new trials, discoveries, patents, and learnings being created and published all the time, all over the world, and all of these must be factored into the process. The ability to constantly add new data into the process can create major automation headaches, especially when the automation rests on some pre-defined or hard-coded rules.
TigerGraph connects all of your data within its own database and then lets you use its native and customizable algorithms to search all, or any part, of that data at once. For example, you could load many types of data – trial, medical, patent, drug, or patient – into the same graph database and then ask any question you need of that connected data. It presents the results of your search both as visualized insight and as machine-readable output that can feed another automated process or visualization tool.
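To make the "connect everything, then query across it" idea concrete, here is a toy, stdlib-only sketch. This is not TigerGraph's actual API; it just shows heterogeneous records becoming nodes and edges in one graph, with a traversal answering a cross-source question.

```python
# Toy illustration (not TigerGraph's API): heterogeneous data in one graph,
# queried with a simple breadth-first traversal. All data is made up.
from collections import defaultdict

edges = defaultdict(set)

def link(a, b):
    edges[a].add(b)
    edges[b].add(a)

# Load several data types into the same graph.
link(("drug", "drugA"), ("gene", "GENE2"))         # from patent data
link(("gene", "GENE2"), ("disease", "psoriasis"))  # from a genomics database
link(("trial", "T1"), ("disease", "psoriasis"))    # from trial records

def neighbors_of_type(node, node_type, depth=2):
    """Find nodes of a given type within `depth` hops of `node`."""
    frontier, seen, found = {node}, {node}, set()
    for _ in range(depth):
        frontier = {n for f in frontier for n in edges[f]} - seen
        seen |= frontier
        found |= {n for n in frontier if n[0] == node_type}
    return found

# Which trials sit three hops from drugA, via its target gene's disease?
print(neighbors_of_type(("drug", "drugA"), "trial", depth=3))
```

The point of a graph model is that this kind of question needs no schema change or pre-joined table: new data types just become new node and edge types.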
Insights from Data Connected by TigerGraph
There are so many different questions you can ask of your connected data that we find it easiest to group them into categories (different types of algorithms). For example, you could identify whether specific genes or phenotypes are associated with specific symptoms using community detection algorithms, or you could understand how well-cited an article is, and how well-cited its own citations are, using our centrality algorithms.
Basically, with TigerGraph you can automate information synthesis at scale, in real time. And we know that automated information synthesis is what will accelerate drug discovery timelines and enable much more targeted, less costly trials.
Getting Started with TigerGraph
You can download our free product if you’d like to get your hands on it straight away. Or you can reach out directly to our sales team if you’d like to see a demo, and talk about how we could run a proof of concept with you using some of your data.