Study: AI faster, more accurate than humans at analyzing heart scans

By Allen Cone  |  March 22, 2018 at 2:05 PM

March 22 (UPI) -- A form of artificial intelligence can analyze heart ultrasound tests more quickly and accurately than board-certified echocardiographers, according to a study.

Researchers at the University of California San Francisco trained a computer to assess the most common echocardiogram views and tested it against skilled human technicians. Their findings were published Wednesday in the journal npj Digital Medicine.

The researchers used 180,294 real-world echo images to train the machine learning model, finding that the computer accurately assessed echocardiogram images and videos 91.7 percent to 97.8 percent of the time, compared with 70.2 percent to 83.5 percent when humans reviewed them.

"These results suggest our approach may be useful in helping echocardiographers improve their accuracy, efficiency and workflow, and also may provide a foundation for better analysis of echocardiographic data," senior author Dr. Rima Arnaout, a cardiologist and assistant professor in the UCSF Division of Cardiology, said in a press release.

In an echo, numerous video clips, still images and heart recordings are captured from more than a dozen different angles, or "views," several of which may have only subtle differences.

Interpreting medical images, including echocardiograms, typically requires extensive training.

Although deep learning has been used to detect abnormalities for radiology, pathology, dermatology and other fields, it hasn't been widely applied to echocardiograms. The researchers said this is because of the complexity of their multi-view, multi-modality format.

"The versatility of training in deep learning represents a significant advantage over earlier machine-learning methods, which have sometimes been applied to echocardiography," the researchers wrote.

Arnaout and her colleagues used 223,787 images from 267 UCSF Medical Center patients, aged 20 to 96, collected from 2000 to 2017. Eighty percent of the images were used for training, while the rest were used for validation and testing.
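The 80/20 division described above can be illustrated with a minimal sketch in Python. This is not the study's actual code, and the real split was likely more careful (for example, keeping all images from one patient on the same side of the split); the function name `split_dataset` and the fixed seed are illustrative assumptions.

```python
import random

def split_dataset(items, train_frac=0.8, seed=0):
    """Shuffle a list of items and split it into a training set and a
    held-out set used for validation and testing."""
    rng = random.Random(seed)  # fixed seed keeps the split reproducible
    shuffled = list(items)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]

# With 223,787 images, an 80 percent cut puts 179,029 in training
# and 44,758 in the held-out pool.
train, holdout = split_dataset(range(223787))
```

In practice, the held-out pool would itself be divided again into separate validation and test sets, so that tuning decisions never touch the final test images.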

Each board-certified echocardiographer participating in the study was given 1,500 randomly selected images, 100 from each of the 15 views, drawn from the same test set given to the model.

The computer classified images from 12 video views with 97.8 percent accuracy, compared with 70.2 percent for humans. On single, low-resolution images, accuracy among 15 views was 91.7 percent for the computer and 83.5 percent for the echocardiographers.

The researchers also found that the file size could be reduced without losing accuracy, allowing for less storage space and easier transmission. They accomplished this by removing color and standardizing the sizes and shapes of videos and still images.
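The preprocessing the paragraph above describes, removing color and standardizing image shapes, can be sketched in plain Python. The function name `standardize_frame` and the target dimensions are assumptions for illustration, not the study's actual parameters.

```python
def standardize_frame(frame, height=60, width=80):
    """Collapse an RGB frame (rows of [r, g, b] pixels) to grayscale and
    crop or zero-pad it to a fixed height x width, mirroring the kind of
    simplification described: color removed, sizes and shapes standardized."""
    # Grayscale: average the three color channels of each pixel.
    gray = [[sum(px) / 3.0 for px in row] for row in frame]
    # Crop to the target shape, padding any missing area with zeros.
    out = []
    for r in range(height):
        row = gray[r] if r < len(gray) else []
        out.append([row[c] if c < len(row) else 0.0 for c in range(width)])
    return out

# A hypothetical 120 x 160 RGB frame reduced to a 60 x 80 grayscale image.
frame = [[[0.2, 0.5, 0.8]] * 160 for _ in range(120)]
small = standardize_frame(frame)
```

Dropping the color channels alone cuts storage by roughly two thirds per frame, which is one reason files shrink without the model losing the structural information it needs.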

"Our model can be expanded to classify additional sub-categories of echocardiographic view, as well as diseases, work that has foundational utility for research, for clinical practice, and for training the next generation of echocardiographers," Arnaout said.
