Assay sensitivity
Assay sensitivity is a property of a clinical trial, defined as the trial's ability to distinguish an effective treatment from a less effective or ineffective intervention.[1] Without assay sensitivity, a trial is not internally valid and cannot meaningfully compare the efficacy of two interventions.
Importance
Lack of assay sensitivity has different implications for trials intended to show a difference greater than zero between interventions (superiority trials) and trials intended to show non-inferiority. Non-inferiority trials attempt to rule out some margin of inferiority of a test intervention relative to a control intervention, i.e., to show that the test intervention is not worse than the control intervention by more than a chosen amount (the non-inferiority margin).
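In the usual statistical notation (an illustrative formulation; the symbols are not drawn from the cited sources), non-inferiority with margin M > 0 is framed as a one-sided hypothesis test:

H_0:\; \theta_C - \theta_T \ge M \qquad \text{versus} \qquad H_1:\; \theta_C - \theta_T < M,

where \theta_T and \theta_C denote the efficacy of the test and control interventions. Equivalently, non-inferiority is concluded when the lower limit of the confidence interval for \theta_T - \theta_C lies above -M.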
If a trial intended to demonstrate efficacy by showing superiority of a test intervention over a control lacks assay sensitivity, it will fail to show that the test intervention is superior and will therefore not support a conclusion of efficacy.
In contrast, if a trial intended to demonstrate efficacy by showing a test intervention is non-inferior to an active control lacks assay sensitivity, the trial may find an ineffective intervention to be non-inferior and could lead to an erroneous conclusion of efficacy.[2]
When two interventions within a trial are shown to have different efficacy (i.e., when one intervention is superior), that finding itself directly demonstrates that the trial had assay sensitivity (assuming the finding is not due to random or systematic error). In contrast, a trial that demonstrates non-inferiority between two interventions, or an unsuccessful superiority trial, generally does not contain such direct evidence of assay sensitivity. However, the idea that non-inferiority trials lack assay sensitivity has been disputed.[3][4]
Differences in sensitivity
Assay sensitivity for a non-inferiority trial may depend upon the chosen margin of inferiority to be ruled out and upon the design of the planned trial. The chosen margin cannot be larger than the effect size that the control intervention has reliably and reproducibly demonstrated compared to placebo or no treatment in past superiority trials. For instance, if there is reliable and reproducible evidence from previous superiority trials of a 10% effect size for a control intervention compared to placebo, an appropriately designed non-inferiority trial that rules out the test intervention being as much as 5% less effective than the control would have assay sensitivity. On the other hand, with the same data, a non-inferiority trial designed to rule out the test intervention being as much as 15% less effective than the control may not have assay sensitivity, since the margin ruled out is larger than the effect of the control compared to placebo, and the trial therefore would not ensure that the test intervention is any more effective than placebo.[5]

The choice of the margin is sometimes problematic in non-inferiority trials. Because investigators may wish to choose larger margins to decrease the sample size needed to perform a trial, the chosen margin is sometimes larger than the effect size of the control compared to placebo. In addition, a valid non-inferiority trial is not possible when there is a lack of data demonstrating a reliable and reproducible effect of the control compared to placebo.
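As a rough arithmetic sketch of the example above (assuming the historical 10% effect of the control over placebo, \Delta_{CP}, carries over to the new trial), a successful non-inferiority finding with margin M implies a worst-case effect of the test intervention over placebo, \Delta_{TP}, of approximately the historical effect minus the margin:

\Delta_{TP} \gtrsim \Delta_{CP} - M = 10\% - 5\% = 5\% > 0 \qquad (M = 5\%:\ \text{test still shown superior to placebo})

\Delta_{TP} \gtrsim \Delta_{CP} - M = 10\% - 15\% = -5\% < 0 \qquad (M = 15\%:\ \text{test may be no better than placebo})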
In addition to choosing a margin based upon credible past evidence, to have assay sensitivity the planned non-inferiority trial must be designed similarly to the past trials that demonstrated the effectiveness of the control compared to placebo, the so-called "constancy assumption". In this way, non-inferiority trials share a feature with externally (historically) controlled trials. This also means that non-inferiority trials are subject to some of the same biases as historically controlled trials: the effect of a drug observed in a past trial may not be reproduced in a current trial because of changes in medical practice, differences in disease definitions or in the natural history of the disease, differences in the timing and definition of outcomes, use of concomitant medications, and so on.[6]
The finding of "difference" or "no difference" between two interventions is not a direct demonstration of the internal validity of the trial unless another internal control confirms that the study methods are able to show a difference, if one exists, over the range of interest (for example, when the trial contains a third group receiving placebo). Since most clinical trials do not contain such an internal "negative" control (i.e., a placebo group) to internally validate the trial, the data used to evaluate the validity of the trial come from past trials external to the current trial.
References
- Chuang-Stein, Christy (2014). "Assay Sensitivity". Wiley StatsRef: Statistics Reference Online. John Wiley & Sons. doi:10.1002/9781118445112.stat07119. ISBN 978-1-118-44511-2. Retrieved 2020-01-21.
- Snapinn, SM (2000). "Noninferiority trials". Current Controlled Trials in Cardiovascular Medicine. 1 (1): 19–21. doi:10.1186/cvm-1-1-019. PMC 59590. PMID 11714400.
- Howick, J (2009). "Questioning the Methodologic Superiority of 'Placebo' over 'Active' Controlled Trials". The American Journal of Bioethics. 9 (9): 34–48. doi:10.1080/15265160903090041. PMID 19998192. S2CID 41559691.
- Anderson, JA (2006). "The ethics and science of placebo-controlled trials: Assay sensitivity and the Duhem–Quine thesis". Journal of Medicine and Philosophy. 31 (1): 65–81. doi:10.1080/03605310500499203. PMID 16464770.
- Temple, Robert J (2002-02-19). "Active Control Non-Inferiority Studies: Theory, Assay Sensitivity, Choice of Margin". Food and Drug Administration. Retrieved 2007-09-16.
- International Conference on Harmonization Guidance E-10 (2000). "Choice of Control Group and Related Issues in Clinical Trials". Archived from the original on 2005-02-16. Retrieved 2007-10-21.