Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a recommendation algorithm that predicts the likelihood that a urinary tract infection (UTI) will respond to treatment with first- or second-line antibiotics. With this information, the model recommends a particular treatment, choosing a first-line agent as often as possible while keeping the predicted risk of treatment failure acceptably low.
UTIs, which affect half of all women, account for nearly $4 billion in healthcare costs annually. Doctors often treat them with antibiotics called fluoroquinolones. However, fluoroquinolones have been found to put patients at risk of contracting other infections, and they are also associated with a higher risk of tendon injuries and life-threatening conditions such as aortic rupture. This has led medical associations to issue guidelines recommending fluoroquinolones only as a “second-line treatment.” (A second-line treatment is one used after the initial treatment has failed, stopped working, or caused intolerable side effects.) Even so, with limited time and resources, doctors continue to prescribe fluoroquinolones at high rates.
The CSAIL team claims that its model, trained on data from more than 10,000 patients at Brigham & Women’s Hospital and Massachusetts General Hospital, would enable clinicians to reduce second-line antibiotic use by 67%. In cases where clinicians chose a second-line drug but the algorithm recommended a first-line drug, the first-line drug worked more than 90% of the time. And when clinicians selected an inappropriate first-line drug, the algorithm selected an appropriate first-line drug almost half the time.
The system takes a threshold approach that the team hopes will be intuitive for clinicians to apply across a range of drugs. To that end, the model is structured so that it can be embedded directly in electronic health records (EHRs). A doctor could set the threshold for acceptable treatment failure relatively high, at 10%, reflecting the fact that failed UTI treatments are unlikely to result in life-threatening side effects. In contrast, treatments for certain bloodstream infections carry a much higher risk of death, so a doctor could set the failure threshold much lower (e.g., 1%) in those cases.
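The threshold logic described above can be sketched in a few lines of Python. This is an illustrative sketch only, not the CSAIL model or its actual API: the drug names and predicted failure probabilities are hypothetical placeholders, and the selection rule (prefer the first-line drug with the lowest predicted failure rate under the clinician-set threshold, otherwise fall back to the overall safest option) is an assumption inferred from the article’s description.

```python
def recommend_antibiotic(failure_probs: dict[str, float],
                         first_line: set[str],
                         failure_threshold: float = 0.10) -> str:
    """Pick a first-line agent whenever one falls under the clinician-set
    failure threshold; otherwise fall back to the drug with the lowest
    predicted failure probability, which may be a second-line agent."""
    # First-line drugs whose predicted failure rate is acceptable.
    eligible = [d for d in first_line
                if failure_probs.get(d, 1.0) <= failure_threshold]
    if eligible:
        # Among acceptable first-line options, prefer the safest one.
        return min(eligible, key=lambda d: failure_probs[d])
    # No first-line option clears the threshold: pick the overall safest drug.
    return min(failure_probs, key=lambda d: failure_probs[d])
```

With a lenient 10% threshold (as for UTIs), a first-line drug under the bar is chosen even if a second-line fluoroquinolone scores slightly better; tightening the threshold to 1% (as for a bloodstream infection) would instead trigger the fallback.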
The team concedes that it hasn’t tested the algorithm on more complicated forms of UTI, and that the system wasn’t evaluated in a randomized controlled trial. Indeed, studies show that much of the data used to train disease-diagnosis algorithms can perpetuate inequalities. A team of British scientists found that almost all eye-disease datasets come from patients in North America, Europe, and China, meaning algorithms for diagnosing eye diseases are less reliable for racial groups from under-represented countries. In another study, Stanford University researchers found that most of the U.S. data used in studies of medical AI came from California, New York, and Massachusetts.
In the future, the MIT team will focus on studies that compare common clinical practice with algorithmic decision-making. It also plans to increase the diversity of its sample to improve recommendations for patients of different races, ethnicities, socioeconomic statuses, and more complex health backgrounds. “The exciting thing about this research is that it provides a blueprint for the correct path to retrospective assessment,” said research co-author and MIT professor David Sontag. “We do this by showing that an apples-to-apples comparison can be made within existing clinical practice. When we say that we can reduce the use of second-line antibiotics and inappropriate treatment by certain percentages, we trust these numbers when compared to doctors.”