1 EMA-approved medicines

How do we figure out how many clinical trials likely failed because of poor methodological design? While pondering this question, I came up with the stream graph below. It shows the body of past experience that we, as research ethicists, could learn from. If you have any idea or suggestion, please leave a comment using the annotation function on this website (simply select a text passage with the mouse).

In such a stream graph (Byron and Wattenberg 2008), the thickness of each stream is proportional to the value in the corresponding category (i.e. the number of approvals per year and therapeutic area). This plot is useful for quickly assessing “trends” in the approval rate per therapeutic area over time.

For instance, if you hover over the graph, you will quickly notice a disproportionately large number of recent approvals for Arthritis. Shown are the therapeutic indications mentioned in EPARs (Papathanasiou et al. 2016). Data freeze: 25 November 2019. Now imagine we had retrospective access to the clinical trial application packages.
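As an aside, if you want to build this kind of figure yourself, here is a minimal sketch in Python with matplotlib. The therapeutic areas and counts below are made up purely for illustration; they are not the EPAR data shown above.

```python
# Minimal stream-graph sketch with made-up counts (not the EPAR data above).
import numpy as np
import matplotlib.pyplot as plt

years = np.arange(2010, 2020)
# Hypothetical approvals per year for three therapeutic areas.
approvals = {
    "Arthritis": [1, 1, 2, 2, 3, 4, 5, 6, 8, 9],
    "Diabetes":  [3, 2, 4, 3, 3, 2, 4, 3, 2, 3],
    "Oncology":  [5, 6, 6, 7, 8, 9, 9, 10, 11, 12],
}

fig, ax = plt.subplots(figsize=(8, 4))
# baseline="wiggle" stacks the series around a shifting centre line,
# which turns an ordinary stacked area chart into a stream graph.
ax.stackplot(years, list(approvals.values()),
             labels=list(approvals.keys()), baseline="wiggle")
ax.legend(loc="upper left")
ax.set_xlabel("Year of approval")
ax.set_ylabel("Approvals (stream thickness)")
ax.set_yticks([])  # absolute y positions carry no meaning in a stream graph
plt.tight_layout()
plt.show()
```

The `baseline="wiggle"` option implements the layout criterion from Byron and Wattenberg (2008); with `baseline="zero"` the same call produces an ordinary stacked area chart.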

1.1 Approvals per therapeutic area

If you are more interested in “exact numbers”, then here is a donut plot of the same data, focusing on the most active therapeutic areas. Note, however, that the “therapeutic areas” are not necessarily on the same level of abstraction in terms of disease ontology. All data comes with absolutely no warranty.
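A donut plot is simply a pie chart with the centre cut out. A minimal sketch in Python/matplotlib, again with made-up counts rather than the actual EPAR data, could look like this:

```python
# Minimal donut-plot sketch with made-up counts (not the EPAR data).
import matplotlib.pyplot as plt

areas = ["Oncology", "Diabetes", "Arthritis", "Other"]
counts = [83, 41, 27, 112]  # hypothetical approval counts

fig, ax = plt.subplots(figsize=(5, 5))
total = sum(counts)
# A wedge width below 1 leaves a hole in the middle: pie chart -> donut plot.
# The autopct callable converts the percentage back into an absolute count.
ax.pie(counts, labels=areas, startangle=90,
       autopct=lambda pct: f"{pct * total / 100:.0f}",
       wedgeprops=dict(width=0.4))
ax.set_title("Approvals per therapeutic area (toy data)")
plt.tight_layout()
plt.show()
```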

1.2 Past regulatory applications

With increasing experience of CTAs (clinical trial applications) or INDs (investigational new drug applications), there may be associations between “success” metrics and certain formal characteristics of the application materials. According to BfArM, 3,647 CTAs were submitted in Germany between 2014 and 2018, and according to EMA, 132,700 CTAs have been submitted to the EudraCT database in total. What great potential to learn from past experience!

Unfortunately, as Wong, Siah, and Lo (2019) have demonstrated, estimating “success rates” involves considerable methodological complexity. The biggest barrier, however, is simply that the data are not open access. This severely limits research ethicists like me, who work with scarce resources and under high academic pressure to produce findings that “sell”.
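To make one of those complexities concrete: a common back-of-the-envelope estimate simply multiplies phase transition probabilities,

$$
\widehat{\mathrm{POS}} \;\approx\; p_{1\to 2} \times p_{2\to 3} \times p_{3\to \text{approval}},
$$

whereas Wong, Siah, and Lo (2019) track individual development paths precisely because such phase-wise multiplication can bias the estimate. Take this formula as an illustrative simplification, not as a summary of their method.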

For further information, please refer directly to the original source: Pei 2019.

References

Byron, Lee, and Martin Wattenberg. 2008. “Stacked Graphs–Geometry & Aesthetics.” IEEE Transactions on Visualization and Computer Graphics 14 (6): 1245–52.

Papathanasiou, Peter, Laurent Brassart, Paul Blake, Anna Hart, Lel Whitbread, Richard Pembrey, and Jill Kieffer. 2016. “Transparency in Drug Regulation: Public Assessment Reports in Europe and Australia.” Drug Discovery Today 21 (11): 1806–13.

Wong, Chi Heem, Kien Wei Siah, and Andrew W. Lo. 2019. “Estimation of Clinical Trial Success Rates and Related Parameters.” Biostatistics 20 (2): 273–86.

