Scientists working at the intersection of AI and cancer care need to be more transparent about their methods and publish research that is reproducible, according to a new commentary co-authored by CSAIL's Tamara Broderick.
“The foundation of the scientific method is that research results must be testable by others. Testability is even more important in clinical applications because we need a high level of confidence in our methods before they are used with patients,” says Harvard professor John Quackenbush, co-author of the new piece. “In [AI] applications this requires that the models, software code, and data are available for independent validation. Transparency will accelerate research, advance patient care, and will build confidence among scientists and clinicians.”
The article, co-authored by more than two dozen researchers from around the world, was published online in Nature on October 14, 2020.
Quackenbush and several colleagues organized the commentary in response to a January 2020 study led by researchers at Google Health, whose authors claimed that an AI system they developed was, in certain settings, better at screening for breast cancer than trained radiologists. The Google Health study also claimed that the AI system improved the speed and reliability of breast cancer screenings, and it enjoyed wide media coverage at the time of its publication.
Researchers not involved with the original study, however, have been unable to reproduce its findings because key details about the methods and algorithm code were not disclosed. AI methods run the risk of “overfitting,” or performing well on the specific dataset used to develop them while failing to generalize to new data. This risk can only be assessed by understanding and testing the methods outside of the original study. The lack of reproducibility impedes cancer research and could lead to unwarranted and even potentially harmful clinical trials, according to the commentary.
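To make the overfitting risk concrete, here is a minimal, hypothetical sketch (not drawn from the study itself): a high-degree polynomial fit passes exactly through its noisy training points, yet errs badly on data it has never seen. All names and numbers below are illustrative assumptions.

```python
import random

def lagrange_predict(xs, ys, x):
    # Evaluate the Lagrange interpolating polynomial through (xs, ys) at x.
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

random.seed(0)
# Hypothetical data: the true relationship is y = x, observed with noise.
train_x = [float(i) for i in range(8)]
train_y = [x + random.gauss(0, 0.5) for x in train_x]
test_x = [i + 0.5 for i in range(7)]     # unseen points between the training x's
test_y = list(test_x)                    # noise-free targets for evaluation

# A degree-7 polynomial through 8 points fits the training set exactly...
train_err = max(abs(lagrange_predict(train_x, train_y, x) - y)
                for x, y in zip(train_x, train_y))
# ...but oscillates between those points, so it fails on new data.
test_err = max(abs(lagrange_predict(train_x, train_y, x) - y)
               for x, y in zip(test_x, test_y))
print(train_err, test_err)  # training error is ~0; test error is much larger
```

The gap between the two errors is invisible if outsiders can only see performance on the original dataset, which is why the commentary argues that code and data must be available for independent validation.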
The authors of the commentary wrote that “transparency in the form of the actual computer code used to train a model and arrive at its final set of parameters is essential for research reproducibility.” They also raised concern that the Google Health study relied on two large datasets that are under license and cannot be easily accessed by outside researchers.
While there are numerous obstacles to improving transparency and reproducibility when applying AI methods in medicine, the commentary noted that a growing number of effective frameworks and platforms exist for sharing code, overcoming the software challenges of large-scale machine learning applications, and protecting patient privacy.
“Transparency and reproducibility in artificial intelligence,” Benjamin Haibe-Kains, George Alexandru Adam, Ahmed Hosny, Farnoosh Khodakarami, Massive Analysis Quality Control (MAQC) Society Board of Directors, Levi Waldron, Bo Wang, Chris McIntosh, Anna Goldenberg, Anshul Kundaje, Casey S. Greene, Tamara Broderick, Michael M. Hoffman, Jeffrey T. Leek, Keegan Korthauer, Wolfgang Huber, Alvis Brazma, Joelle Pineau, Robert Tibshirani, Trevor Hastie, John P. A. Ioannidis, John Quackenbush, Hugo J. W. L. Aerts, Nature, online October 14, 2020, doi: 10.1038/s41586-020-2766-y