Each year, more than 300,000 unnecessary breast biopsies are performed in the United States alone due to false-positive mammogram results. But software recently developed at Houston Methodist Cancer Center in Texas may change this paradigm.
The study, “Correlating mammographic and pathologic findings in clinical decision support using natural language processing and data mining methods,” published in the journal Cancer, shows that the software can analyze mammograms and patients’ histories to determine breast cancer risk faster and more accurately than a pathologist, with the potential to save time and reduce unnecessary biopsies.
“This software intelligently reviews millions of records in a short amount of time, enabling us to determine breast cancer risk more efficiently using a patient’s mammogram. This has the potential to decrease unnecessary biopsies,” Stephen Wong, chair of the department of systems medicine and bioengineering, and team co-leader, said in a press release.
According to the federal Centers for Disease Control and Prevention, nearly 12 million mammograms are performed annually in the United States. However, about half of those mammograms yield false-positive results, meaning that one in every two healthy women screened is incorrectly flagged as possibly having breast cancer.
This is important because when a mammogram shows suspicious findings, the patient is referred for a breast biopsy, which can cause complications such as bruising, prolonged bleeding, or infection near the biopsy site. More than 1.6 million biopsies are currently performed annually in the U.S., but an estimated 20 percent of those are unnecessary, as they follow false-positive mammogram results.
The new software was tested on data from more than 500 breast cancer patients, evaluating both the mammograms and pathology reports to assess the risk for breast cancer. Results revealed that the software achieved 99 percent accuracy and could analyze the data 30 times faster than doctors.
In fact, Houston Methodist Cancer Center researchers said that while two doctors took 50 to 70 hours to review the charts of 50 patients, the software reviewed the charts of 500 patients in just a few hours, saving more than 500 hours of physicians’ working time.
“Accurate review of this many charts would be practically impossible without [artificial intelligence],” Wong said.