Validation of a transparent decision model to rate drug interactions
- Elmira Far†1,
- Ivanka Curkovic†1,
- Kelly Byrne1,
- Malgorzata Roos2,
- Isabelle Egloff1,
- Michael Dietrich3,
- Wilhelm Kirch4,
- Gerd-A Kullak-Ublick1 and
- Marco Egbring1 (corresponding author)
© Far et al.; licensee BioMed Central Ltd. 2012
Received: 30 January 2012
Accepted: 30 July 2012
Published: 20 August 2012
Multiple databases provide ratings of drug-drug interactions. The ratings are often based on different criteria and lack background information on the decision making process. User acceptance of rating systems could be improved by providing a transparent decision path for each category.
We rated 200 randomly selected potential drug-drug interactions by a transparent decision model developed by our team. The cases were generated from ward round observations and physicians’ queries from an outpatient setting. We compared our ratings to those assigned by a senior clinical pharmacologist and by a standard interaction database, and thus validated the model.
The decision model rated consistently with the standard database and the pharmacologist in 94 and 156 cases, respectively. In two cases the model decision required correction. Following removal of systematic differences in model construction, the decision model was fully consistent with the other rating systems.
The decision model reproducibly rates interactions and elucidates systematic differences. We propose to supply validated decision paths alongside the interaction rating to improve comprehensibility and to enable physicians to interpret the ratings in a clinical context.
Keywords: Algorithm, Severity, Validation, Drug interaction, Decision model, MMX, Epha.ch
The management of adverse drug events (ADEs) is an important issue in healthcare. While some ADEs are unpredictable (e.g. anaphylaxis), ADEs caused by drug-drug interactions (DDI) are likely to be preventable. Nevertheless, DDIs continue to present a major problem in medical treatment. One Swiss study estimated that 17% of all ADEs occurring in hospitalized patients are provoked by DDIs, while a Dutch study found that 28% of patients admitted to the hospital experienced at least one DDI. Clinical decision support software (CDSS) has been used as a supportive measure to improve medication safety[5, 6]. The information provided by CDSS focuses on management advice rather than alerts, since more prevalent alerts may dominate less common but equally dangerous ones.
In the past, DDIs were classified according to their potential severity, e.g. minor, moderate, or major. In 2001 a new management-oriented approach to DDI classification was advanced by Hansten and Horn. More than 75% of interactions rated as major are considered manageable; this approach therefore seems reasonable. Recently, a separate group in our department developed ZHIAS (Zurich Interaction System), an extension of the clinical management approach based on the Operational Classification of Drug Interactions (ORCA)[9, 10]. Another management-oriented classification system is based on types of adverse drug reactions. Even with multiple classifications available, the assessment of DDIs depends both on the experience and interpretation of the assessor and on the sources of information used in the assessment. The discrepancies between different DDI ratings are well documented[7, 12–14]. No two DDI databases use the same set of criteria to assign severity ratings. For example, the assigned interaction severity between alprazolam and digoxin ranges from “no interaction” to “major interaction”, depending on the database[16–19]. It remains unclear whether these rating discrepancies arise from inconsistent study results or from the use of different DDI classification algorithms. One case report and one study showed that plasma digoxin concentrations increase significantly in the presence of alprazolam. A separate study involving healthy volunteers reported no clinically relevant change in digoxin plasma concentrations. In the past 30 years, more than 15,000 papers on DDIs have been published. The problem we face today is not a lack of information on DDIs or of classification types, but the incompatibility of DDI rating systems. Alerts are often disregarded by physicians if background information on the decision process and practical management recommendations are lacking[22, 23].
In order to increase user acceptance, the DDI rating must be consistent and comprehensible, and the decision model must be transparent.
To improve rating comprehensibility, we developed a transparent decision model (DM) to rate drug interactions. The model is based on previous research by van Roon and colleagues. The aim of our current research is to validate the transparent decision model in terms of reproducibility and identification of systematic differences between DDI ratings.
Design of decision model
Apparent interaction (AIA) comprises two sub-questions. Only one “yes” answer is required to progress down the decision path to the next question.
Has this interaction been described in the scientific literature (e.g. credible clinical studies and credible case reports)?
Can one postulate a plausible, hypothetical mechanism of pathogenic interaction?
Serious adverse event (SAE) inquires into the clinical severity of the interaction: Is there an increased risk for the occurrence of an SAE within the normal patient population?
Action (ACT) determines whether medical intervention is necessary: Does the interaction outcome necessitate medical intervention, other than simple precautionary measures?
Surveillance (SUR) ascertains whether the consequences of the interaction can be easily monitored: Is the interaction risk difficult to assess in an out-patient setting and within a short time-frame?
Alternative (ATE) questions whether a safer alternative to either one of the drugs exists. It comprises two sub-questions. Both questions must be answered “yes” in order to proceed to the final step of the decision model.
Does a suitable alternative exist (within the same ATC category), which carries a lower potential for interaction?
Are credible dose adjustment guidelines unavailable?
Risk-benefit ratio (RBR): Does the risk outweigh the potential benefit?
The DM presents 13 possible decision paths leading to 5 possible interaction ratings: DM: A (no action required), DM: B (precautionary measures), DM: C (clinical monitoring), DM: D (avoid) and DM: E (contraindicated). For statistical analysis, the numbers 1 to 5 were assigned to these ratings. The ratings are defined to avoid ambiguity and are based on clinical management. A rating of DM: A indicates that co-administration is safe, based on currently available scientific data. When an interaction is rated DM: B, precautionary monitoring for unusual side effects is sufficient. DM: C signifies that, although no alternative therapies are available, the likely effect of the interaction is easily monitored. Necessary medical action will be guided by the relevant published medical guidelines. DM: D indicates that co-administration should be avoided and only undertaken when deemed imperative. DM: E states clearly that the drugs must not be co-administered in any clinical situation. The interaction ratings were standardized to ensure consistency in rating outcomes by different physicians/pharmacists. The DDI rating was designed for integration into a network of additional decision support systems that account for patient-specific risk factors (e.g. old age, obesity, or renal insufficiency) or drug-disease state contraindications, whereas the DM itself refers to the low-risk normal population. A serious adverse event is defined as a life-threatening or debilitating event, resulting in death, inpatient hospitalization or prolongation of existing hospitalization, or persistent or significant disability/incapacity. Risk/benefit defines the balance between the effectiveness of a medicine and the risk of harm as specified by the World Health Organization Uppsala Monitoring Centre (WHO-UMC) in Sweden.
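The sequence of questions above can be sketched as a small function. The ordering follows the text, but the full mapping of the 13 possible decision paths onto the five ratings is not reproduced in this section, so the branch logic and the function name `rate_interaction` below are illustrative assumptions, not the published algorithm.

```python
def rate_interaction(aia: bool, sae: bool, act: bool,
                     sur: bool, ate: bool, rbr: bool) -> str:
    """Sketch of the decision model (DM) questions described in the text.

    aia: apparent interaction (literature evidence or plausible mechanism)
    sae: increased risk of a serious adverse event in the normal population
    act: medical intervention beyond simple precautions is necessary
    sur: the risk is difficult to monitor in an outpatient setting
    ate: a safer alternative exists AND no credible dose-adjustment guideline
    rbr: the risk outweighs the potential benefit

    The mapping of paths to ratings A-E is an assumption for illustration.
    """
    if not aia:
        return "A"                  # no apparent interaction: no action required
    if not sae and not act and not sur:
        return "B"                  # precautionary measures suffice
    if not ate:
        return "C"                  # no safer alternative: clinical monitoring
    return "E" if rbr else "D"      # avoid; contraindicated if risk > benefit
```

A path such as `rate_interaction(True, True, True, True, True, False)` then yields DM: D (avoid), mirroring the "suitable alternative exists" branch discussed later in the paper.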
One of our assessors, a clinical pharmacologist, classified DDIs into five categories, namely: “no interaction”, “minor”, “moderate”, “major” and “contraindicated”, based on her personal clinical experience and interpretation of the available literature relating to drug interactions. The Micromedex DrugDex (MMX) database classifies DDIs as “unknown”, “minor”, “moderate”, “major” or “contraindicated”. MMX also estimates the quality of DDI documentation, rating it as either “excellent”, “good”, “fair” or “unknown”.
Validation of decision model
In our study we randomly selected 200 potential drug interactions and compared the individual rating outcomes generated by three different rating methods. Clinical relevance of the drug interactions was assessed from queries received at the Department of Clinical Pharmacology and Toxicology at the University Hospital in Zurich, raised by pharmacists and physicians in primary and secondary care and from ward rounds at the University Hospital. In the first rating method, one pharmacist applied our DM to manually rate the 200 interactions. The ratings were then reviewed and revised for plausibility by a team comprising two clinical pharmacologists and one physician. The second rating was performed by an independent senior clinical pharmacologist who was blinded with respect to the DM and who assigned each interaction rating based on her clinical experience and knowledge. The clinical pharmacologist was not permitted to use an interaction database, but was allowed access to available scientific sources such as the PubMed database, the Excerpta Medica database (Embase), European Public Assessment Reports (EPARs) and summaries of product characteristics. The same information sources were accessible to the pharmacist. In the third rating method, a physician rated the 200 interactions using the commercially available MMX database.
The concordance between all three ratings was determined using cross-tables, together with ordinary and weighted Cohen’s kappa coefficients. Cohen’s kappa measures the extent to which two rating systems agree beyond what would be expected by chance alone. It ranges from zero (agreement no better than chance) to one (perfect agreement). In the tables, values adjacent to the diagonal (ratings differing by a single category) are considered less serious than deviations of two or more categories. Cohen’s kappa is conventionally interpreted as follows: 0.01–0.2 slight agreement; 0.21–0.40 fair agreement; 0.41–0.60 moderate agreement; 0.61–0.80 substantial agreement and 0.81–1 almost perfect agreement. To identify systematic differences between the rating systems, Bland–Altman plots, which illustrate agreement limits, were constructed. Identified systematic differences were reviewed individually by the aforementioned review team and were excluded from further analysis. The relative frequencies and confidence intervals of the remaining disagreements were determined by the Wilson method.
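The statistics above can be reproduced with short, standard formulas. The linear weighting scheme in `weighted_kappa` is a common choice but is our assumption (the paper does not state which weights were used), as is the form of the Bland–Altman limits; `weighted_kappa`, `wilson_ci` and `limits_of_agreement` are illustrative helper names.

```python
import numpy as np
from math import sqrt

def weighted_kappa(r1, r2, n_cat=5):
    """Linearly weighted Cohen's kappa for ratings coded 0..n_cat-1
    (the text assigns the numbers 1-5 to the categories; here 0-4)."""
    obs = np.zeros((n_cat, n_cat))
    for a, b in zip(r1, r2):
        obs[a, b] += 1
    obs /= obs.sum()                                   # observed proportions
    exp = np.outer(obs.sum(axis=1), obs.sum(axis=0))   # chance agreement
    i, j = np.indices((n_cat, n_cat))
    w = 1 - np.abs(i - j) / (n_cat - 1)                # linear weights
    po, pe = (w * obs).sum(), (w * exp).sum()
    return (po - pe) / (1 - pe)

def wilson_ci(k, n, z=1.96):
    """Wilson score interval for a proportion of k events in n trials."""
    p = k / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

def limits_of_agreement(r1, r2):
    """Bland-Altman limits: mean difference +/- 1.96 SD of the differences."""
    d = np.asarray(r1, float) - np.asarray(r2, float)
    return d.mean() - 1.96 * d.std(), d.mean() + 1.96 * d.std()
```

For instance, `wilson_ci(14, 200)` returns approximately (0.042, 0.114), which rounds to the reported 95% CI of [4, 11]% for the non-systematic disagreement rate.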
The pharmacist, the physician and the clinical pharmacologist independently assessed all cases of potential drug interactions (n = 200). For 62 of the interactions, MMX yielded no information regarding possible DDIs. The ratings assigned by the pharmacist and the clinical pharmacologist ranged from DM: B (precautionary measures) to DM: E (contraindicated).
Cross correlation of drug-drug interaction ratings for clinically identified cases (n = 200) between the proposed decision model (DM) and a clinical pharmacologist
Cross correlation of drug-drug interaction ratings for clinically identified cases (n = 200) between the proposed decision model (DM) and Micromedex (MMX)
We corrected the rating of the pharmacist in two cases, where the DM was applied incorrectly; this application error rate amounts to 1% of all 200 cases (95% CI [0, 3]). The first error, in the assessment of roxithromycin and simvastatin co-administration, was caused by incorrect interpretation of the DM question. The pharmacist applied the serious adverse events (SAEs) question to the “at-risk” population instead of to the “normal patient” population. Therefore the rating of DM: D (avoid), assigned to this interaction by the pharmacist, required correction to DM: B (precautionary measures). No further information about this rating was extracted from MMX, so a third rating was unavailable for comparison. The second error regarded the combination of atenolol and bupropion. The pharmacist did not use all available information to rate the interaction and in particular did not consider that co-administration can induce blood pressure changes, and thus may alter the effect of atenolol. Therefore the rating of DM: A (no action required), assigned to this interaction by the pharmacist, required correction to DM: B (precautionary measures).
In the remaining 14 of the 200 ratings, the DM and the clinical pharmacologist disagreed for reasons not explained by systematic differences; these non-systematic discrepancies account for 7% (95% CI [4, 11]) of all ratings. The remaining 19 non-systematic disagreements between the DM and MMX constitute 9.5% (95% CI [6, 14]).
We evaluated a transparent decision model that reproducibly rates drug interactions and identifies systematic rating discrepancies. Altman suggests that kappa is the appropriate means of judging agreement or reproducibility between classification categories obtained by two different rating methods; the higher weighted kappa values support this approach in the present study. Following removal of the identified systematic differences, no further systematic differences appeared on the Bland–Altman plot of DM versus MMX. Divergence in decision making remains an issue and review of certain cases is unavoidable. The review time, however, decreases as a result of standardization. When comparing two ratings, our visualization of the decision path enables rapid comprehension of one side of the differences, thus clarifying (at least partially) the rating discrepancies. Such transparency improves the clinical value of the interpretation of the rating[29, 30]. To our knowledge, this is the first published visualized decision model whose ratings can be compared with those of other systems. Previously published ratings, though based on expert group decisions, are not guided by the specified rules of an algorithm. The output of the decision model, corrected for systematic differences between rating systems, closely resembles that of other ratings. To illustrate the systematic nature of these differences, we summarize the most important ones (highlighted in the cross tables) below.
If more than simple precautionary measures are required in first-line therapy, or if complex monitoring of a likely side effect is required, we assume that a suitable drug alternative precludes co-administration, because the latter disproportionately raises patient risk or health care costs. This explains why the DM rated 30 cases as more severe than the clinical pharmacologist (Table 1) and 25 cases as more severe than MMX (Table 2).
Interactions requiring complex monitoring were rated as more severe by the DM than by either the clinical pharmacologist (18 of the 30 cases) or MMX (21 of the 25 cases). (i) The clinical pharmacologist assigned a rating of C (“moderate”) to the combination of citalopram and tramadol, whereas both the DM and MMX recommended avoiding this combination (ratings: DM: D and “major”, respectively), since co-administration increases the risk of serotonin toxicity. Monitoring for SAEs such as hyperreflexia, CNS symptoms, myoclonus, sweating and hyperthermia is imperative and is complex in an outpatient setting. (ii) The risk of amiodarone and phenytoin co-administration was rated C (“moderate”) by MMX and C (“precautionary measures”) by the clinical pharmacologist. The DM assigned a rating of D (“avoid”), since plasma amiodarone concentrations may be reduced to as low as 30% in the presence of phenytoin. This effect can occur several weeks into phenytoin therapy, therefore amiodarone concentrations must be monitored for several weeks to enable dose adjustment. Furthermore, phenytoin toxicity can occur and surveillance requires considerable effort. (iii) Co-administration of duloxetine and amitriptyline increases the risk of anticholinergic or serotonin syndrome and may lead to elevated amitriptyline plasma concentrations. Because of the complex clinical surveillance required, this interaction was rated D by the DM, whereas MMX assigned a C rating.
The inclusion of suitable treatment alternatives in the decision process caused the DM to rate an interaction more severely than the clinical pharmacologist in 12 of the 30 cases, and more severely than MMX in 4 of the 25 cases. (i) Co-administration of digoxin and alprazolam was rated C by the clinical pharmacologist, since alprazolam interferes with digoxin levels and therefore requires drug concentration monitoring at the initiation and discontinuation of alprazolam therapy. The DM rated this interaction as D, because a suitable alternative (lorazepam) exists. (ii) MMX rated the combination of midazolam and phenytoin as “moderate”. Although phenytoin depresses midazolam levels, alternative benzodiazepines with a lower potential for interaction are available.
In one case, a rating discrepancy of two categories was found: the combination of fluconazole and fluvastatin was rated B by MMX and D by the DM. Co-administration of these drugs increases the risk of severe myopathy, and an alternative to fluvastatin exists.
This study focused solely on the decision making process, and the positive contribution of the rating output to medical therapy was not evaluated. Although every attempt has been made to ensure that the categories are objective (i.e. they represent a consensus between four clinical specialists in three different fields), they are nonetheless subject to user interpretation and should not be regarded as a “gold standard”, but as an approach to standardize ratings with defined rules. We hope that publication of this decision model will stimulate other groups to test the model’s reproducibility. The ability of the decision model to illustrate systematic differences has been tested against a single database, MMX. In future, the DM may elucidate the systematic differences underlying other rating discrepancies reported in the literature[11, 13, 14]. Concordance between the DM and expert assessment has been validated by only one pharmacist from our group.
The agreement between the DM and MMX was evaluated as “fair”, which can be explained partly by systematic differences in 25 cases, but must also take into account the missing information from MMX in 62 cases. The omission of information in MMX regarding a specific drug combination cannot be taken to indicate the absence of a DDI. Therefore our database distinguishes between missing information and a safe combination (DM: A). MMX yielded no information on the following consequences of drug co-administration. (i) The combination of phenobarbital and acetaminophen increases the risk of hepatotoxicity. (ii) The concurrent use of phenobarbital and mirtazapine may inhibit mirtazapine efficacy and therefore requires clinical monitoring. (iii) Duloxetine increases the area under the plasma concentration-time curve (AUC) of metoprolol 1.8-fold. As a result, blood pressure and heart rate monitoring are required, particularly at the start and cessation of duloxetine therapy. Drugs that are used in Europe but not in the U.S. account for a portion of the missing data.
The decision model reproducibly rates interactions and identifies systematic differences. Ratings are based on critical indicators of clinical significance, namely: the risk of an SAE, the extent of medical intervention required, the clinical surveillance required, the existence of a safer alternative and the risk-benefit ratio. The decision model is consistent with other rating systems following removal of systematic differences between methods. We propose to supply the decision path alongside the interaction rating, both to facilitate rating comprehensibility and to enable assessment of mortality and morbidity rates in a clinical setting. If factors such as length of hospital stay or risk of complications are improved by using the model, then the model represents a significant advance over existing models.
All authors are funded by their listed institutions and did not receive any additional funding.
- Bates DW, Spell N, Cullen DJ, Burdick E, Laird N, Petersen LA, Small SD, Sweitzer BJ, Leape LL: The costs of adverse drug events in hospitalized patients. Adverse Drug Events Prevention Study Group. JAMA. 1997, 277 (4): 307-311. doi:10.1001/jama.1997.03540280045032.
- Juurlink D, Mamdani M, Iazzetta J, Etchells E: Avoiding drug interactions in hospitalized patients. Healthc Q. 2004, 7 (2): 27-28.
- Krahenbuhl-Melcher A, Schlienger R, Lampert M, Haschke M, Drewe J, Krahenbuhl S: Drug-related problems in hospitals: a review of the recent literature. Drug Saf. 2007, 30 (5): 379-407. doi:10.2165/00002018-200730050-00003.
- Zwart-van Rijkom JE, Uijtendaal EV, ten Berg MJ, van Solinge WW, Egberts AC: Frequency and nature of drug-drug interactions in a Dutch university hospital. Br J Clin Pharmacol. 2009, 68 (2): 187-193. doi:10.1111/j.1365-2125.2009.03443.x.
- Bates DW, Leape LL, Cullen DJ, Laird N, Petersen LA, Teich JM, Burdick E, Hickey M, Kleefield S, Shea B: Effect of computerized physician order entry and a team intervention on prevention of serious medication errors. JAMA. 1998, 280 (15): 1311-1316. doi:10.1001/jama.280.15.1311.
- Garg AX, Adhikari NK, McDonald H, Rosas-Arellano MP, Devereaux PJ, Beyene J, Sam J, Haynes RB: Effects of computerized clinical decision support systems on practitioner performance and patient outcomes: a systematic review. JAMA. 2005, 293 (10): 1223-1238. doi:10.1001/jama.293.10.1223.
- Hansten PD, Horn JR, Hazlet TK: ORCA: Operational Classification of drug interactions. J Am Pharm Assoc (Wash). 2001, 41 (2): 161-165.
- Bergk V, Gasse C, Rothenbacher D, Loew M, Brenner H, Haefeli WE: Drug interactions in primary care: impact of a new algorithm on risk determination. Clin Pharmacol Ther. 2004, 76 (1): 85-96. doi:10.1016/j.clpt.2004.02.009.
- Guzek MZO, Semmler A, Gonzenbach R, Huber M, Kullak-Ublick GA, Weller M, Russmann S: Evaluation of Drug Interactions and Dosing in 484 Neurological Inpatients Using Clinical Decision Support Software and an Extended Operational Interactions Classification System (ZHIAS). Pharmacoepidemiol Drug Saf. 2011, in press.
- Frolich T, Zorina O, Fontana AO, Kullak-Ublick GA, Vollenweider A, Russmann S: Evaluation of medication safety in the discharge medication of 509 surgical inpatients using electronic prescription support software and an extended operational interaction classification. Eur J Clin Pharmacol. 2011, 67 (12): 1273-1282. doi:10.1007/s00228-011-1081-9.
- Vitry AI: Comparative assessment of four drug interaction compendia. Br J Clin Pharmacol. 2007, 63 (6): 709-714. doi:10.1111/j.1365-2125.2006.02809.x.
- Chan A, Tan SH, Wong CM, Yap KY, Ko Y: Clinically significant drug-drug interactions between oral anticancer agents and nonanticancer agents: a Delphi survey of oncology pharmacists. Clin Ther. 2009, 31 (Pt 2): 2379-2386.
- Olvey EL, Clauschee S, Malone DC: Comparison of critical drug-drug interaction listings: the Department of Veterans Affairs medical system and standard reference compendia. Clin Pharmacol Ther. 2010, 87 (1): 48-51. doi:10.1038/clpt.2009.198.
- Fulda TR: Disagreement among drug compendia on inclusion and ratings of drug-drug interactions. Curr Ther Res. 2000, 61 (8): 540-548. doi:10.1016/S0011-393X(00)80036-3.
- Horn JR: Reducing Drug Interactions Alerts: Not So Easy. Available at http://www.hanstenandhorn.com/hh-article06-07.pdf
- Drug-Reax System: Micromedex Healthcare Series (database on CD-ROM), Version 5.1. 2007, Thomson Reuters (Healthcare) Inc, Greenwood Village, Colorado.
- Pharmavista Interactions: 2010. Available at: http://www.pharmavista.ch
- Drug Interactions Analysis and Management. Edited by: Hansten PD, Horn JR. 2011, Facts & Comparisons, St. Louis, MO.
- Stockley’s Drug Interactions. Edited by: Baxter K. 2009, Pharmaceutical Press, London, 8th edn.
- Tollefson G, Lesar T, Grothe D, Garvey M: Alprazolam-related digoxin toxicity. Am J Psychiatry. 1984, 141 (12): 1612-1613.
- Ochs HR, Greenblatt DJ, Verburg-Ochs B: Effect of alprazolam on digoxin kinetics and creatinine clearance. Clin Pharmacol Ther. 1985, 38 (5): 595-598. doi:10.1038/clpt.1985.230.
- Hansten PD: Drug interaction management. Pharm World Sci. 2003, 25 (3): 94-97. doi:10.1023/A:1024077018902.
- Isaac T, Weissman JS, Davis RB, Massagli M, Cyrulik A, Sands DZ, Weingart SN: Overrides of medication alerts in ambulatory care. Arch Intern Med. 2009, 169 (3): 305-311. doi:10.1001/archinternmed.2008.551.
- Smithburger PL, Buckley MS, Bejian S, Burenheide K, Kane-Gill SL: A critical evaluation of clinical decision support for the detection of drug-drug interactions. Expert Opin Drug Saf. 2011, 10 (6): 871-882. doi:10.1517/14740338.2011.583916.
- van Roon EN, Flikweert S, le Comte M, Langendijk PN, Kwee-Zuiderwijk WJ, Smits P, Brouwers JR: Clinical relevance of drug-drug interactions: a structured assessment procedure. Drug Saf. 2005, 28 (12): 1131-1139. doi:10.2165/00002018-200528120-00007.
- Altman D: Practical Statistics for Medical Research. 1991, Chapman & Hall, London.
- Wilson EB: Probable inference, the law of succession, and statistical inference. JASA. 1927, 22: 209-212. doi:10.1080/01621459.1927.10502953.
- Larkin JH, Simon HA: Why a Diagram is (Sometimes) Worth Ten Thousand Words. Cogn Sci. 1987, 11 (1): 65-100. doi:10.1111/j.1551-6708.1987.tb00863.x.
- Weingart SN, Seger AC, Feola N, Heffernan J, Schiff G, Isaac T: Electronic drug interaction alerts in ambulatory care: the value and acceptance of high-value alerts in US medical practices as assessed by an expert clinical panel. Drug Saf. 2011, 34 (7): 587-593. doi:10.2165/11589360-000000000-00000.
- Seidling HM, Phansalkar S, Seger DL, Paterno MD, Shaykevich S, Haefeli WE, Bates DW: Factors influencing alert acceptance: a novel approach for predicting the success of clinical decision support. J Am Med Inform Assoc. 2011, 18 (4): 479-484. doi:10.1136/amiajnl-2010-000039.
- The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/2050-6511/13/7/prepub
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.