
Duke researchers leverage deep learning on Google Cloud to improve medical imaging quality

Using Colab and Google Compute Engine, a research team mimics clinical-grade ultrasound imaging and gets results in one day instead of three weeks.

Medical professionals rely on ultrasound imagery for a variety of key diagnostics—from fetal health to cardiac function—so the images they work with must be perfectly clear. Clinical scanners generate raw data that go through advanced post-processing to enhance contrast and reduce electronic noise and speckle, which occurs when sound waves interfere with each other. That post-processing transforms the “beamformed” image into a drastically improved “clinical-grade” image that medical providers can interpret.

Yet clinical post-processing is typically proprietary and varies across manufacturers, making it difficult to establish current imaging baselines. In turn, this hampers clinical translation for medical researchers working in fields ranging from hardware to algorithm development. Ouwen Huang, an M.D./Ph.D. candidate in Duke University’s Biomedical Engineering program, wanted to solve that problem by designing a universal open-source framework that researchers anywhere could use to process raw data from any scanner into images that match those of a clinical-grade scanner.

On Google Cloud I can run one hundred experiments in parallel, and I can get results in one day that might have taken three weeks before.

Ouwen Huang, M.D./Ph.D. candidate, Biomedical Engineering, Duke University

Making medical research workflows easier

With his Principal Investigator, Associate Professor of the Practice Mark Palmeri, along with Will Long, Marcelo Lerendegui, Drs. Nick Bottenus, Gregg Trahey, and Sina Farsiu, Ouwen came up with a solution: a deep-learning tool called MimickNet, which he trained on Google Compute Engine and developed on Colab, Google’s Jupyter notebook environment in the cloud. “Our results with MimickNet show that research-grade scanners can mimic the post-processing found on some of the best clinical-grade scanners,” Ouwen says. “So hypothetically, cheaper scanners with comparable hardware can produce images similar to the commercial scanners widely used in clinical practice. Google Compute Engine and Colab have helped tremendously in smoothly being able to disseminate our findings to the research community.”

Ouwen was already familiar with Google Cloud from his work at an imaging startup, but when he moved into academia he had to manage without a personal IT team. “Google Cloud became my DevOps team,” he says. “I was able to spin up servers very quickly whenever I wanted. There’s a smooth onboarding process and everything is guaranteed to work.” He also found that Google Cloud fit his research workflow best: “As a researcher, your compute needs typically operate in bursts. You'll spend three weeks formulating a hypothesis and debugging an idea on a modest single GPU. Once your framework is ready, you might want to run it on 100 GPUs for two hours. Most universities can provide a lab five GPUs 24/7, but it is challenging to support the 100 GPU burst mode use case.” With funding from the National Institutes of Health Medical Scientist Training Program and the National Institute of Biomedical Imaging and Bioengineering, along with in-kind technical support from Siemens Healthineers, Ouwen quickly got the framework up and running.

Optimizing image quality and accelerating results

Ouwen and his colleagues started by training their model on 1,500 fetal, liver, and heart ultrasound images acquired under an IRB-approved protocol on Siemens and Verasonics scanners. Using Google Compute Engine, they were able to run batch training in parallel, which dramatically accelerated their run time: “On Google Cloud I can run one hundred experiments in parallel, and I can get results in one day that might have taken three weeks before,” Ouwen reports.
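For readers curious what that burst pattern can look like in practice, here is a loose sketch, not the team's actual setup, that uses the gcloud CLI to launch many short-lived GPU VMs, one per experiment. The instance names, zone, machine type, GPU type, image family, and startup script are all assumptions for illustration.

```python
import subprocess

# Rough sketch: launch one preemptible GPU VM per experiment, each running a
# hypothetical training script via a startup script. All values are placeholders.
for i in range(100):
    subprocess.run(
        [
            "gcloud", "compute", "instances", "create", f"ultrasound-exp-{i:03d}",
            "--zone=us-east1-c",
            "--machine-type=n1-standard-8",
            "--accelerator=type=nvidia-tesla-t4,count=1",
            "--maintenance-policy=TERMINATE",   # required when attaching GPUs
            "--preemptible",                    # cheaper, acceptable for short bursts
            "--image-family=common-cu110",      # assumed Deep Learning VM image family
            "--image-project=deeplearning-platform-release",
            "--metadata-from-file=startup-script=train_experiment.sh",
        ],
        check=True,
    )
```

Each VM runs its experiment from the startup script and can be deleted as soon as it finishes, which is what makes the 100-GPU, two-hour burst affordable compared with keeping hardware on 24/7.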

To assess the accuracy of their images, Ouwen used the structural similarity index measure (SSIM), a metric that rates image similarity by luminance, contrast, and structure. Ultrasound images post-processed by MimickNet achieved a mean SSIM of 0.940 against those processed by the proprietary system, where a score of 1 means the two images are identical and indistinguishable to the human eye. With MimickNet, beamformed data largely understood only by research domain experts can be easily translated into clinical-grade images familiar to medical providers.
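As a rough illustration of how such a comparison can be scored, the sketch below computes SSIM between a MimickNet-style output and a clinical-grade reference using scikit-image. The file names and loading step are placeholders, not the team's actual evaluation code.

```python
# Minimal sketch of an SSIM comparison, assuming two co-registered grayscale
# ultrasound frames saved as image files (file names are placeholders).
import numpy as np
from skimage import io
from skimage.metrics import structural_similarity

mimicked = io.imread("mimicknet_output.png", as_gray=True).astype(np.float64)
clinical = io.imread("clinical_reference.png", as_gray=True).astype(np.float64)

# SSIM combines luminance, contrast, and structure terms; 1.0 means identical images.
score = structural_similarity(
    mimicked, clinical, data_range=clinical.max() - clinical.min()
)
print(f"SSIM: {score:.3f}")
```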

Our results with MimickNet show that research-grade scanners can mimic the post-processing found on some of the best clinical-grade scanners. So hypothetically, cheaper scanners with comparable hardware can produce images similar to the commercial scanners widely used in clinical practice. Google Compute Engine and Colab have helped tremendously in smoothly being able to disseminate our findings to the research community.

Ouwen Huang, M.D./Ph.D. candidate, Biomedical Engineering, Duke University

Benefitting medical researchers everywhere

As an open-source platform, MimickNet is accessible to researchers anywhere. The training model is already available as a Google Colab notebook to make it easy to get started. “Colab is a fully managed system, which helps scientific reproducibility,” Ouwen says. “With Colab I can write a notebook and scientists can immediately run it without worrying about underlying code dependencies. It’s an instant quick start. You just click and it’s ready to go.”
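The quick start he describes might look something like the hypothetical sketch below, assuming the notebook ships a trained Keras model file; the file name, input shape, and scaling are illustrative and not MimickNet's actual interface.

```python
import numpy as np
import tensorflow as tf

# Load a trained post-processing generator saved as a Keras model
# (the path is a placeholder).
model = tf.keras.models.load_model("mimicknet_generator.h5", compile=False)

# Stand-in beamformed ultrasound frame scaled to [0, 1]; a real notebook
# would load actual scanner data here.
beamformed = np.random.rand(1, 512, 512, 1).astype(np.float32)

# Run post-processing; the output is a clinical-grade-style image.
clinical_style = model.predict(beamformed)
print(clinical_style.shape)
```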

MimickNet is an innovative solution for a field that is changing rapidly. Ouwen points out that more and more ultrasound image processing is moving to mobile devices for ease of use and affordability. “We’re moving away from giant bulky machines. We have some preliminary data that’s promising for MimickNet’s use on mobile platforms in real time.” Eventually, he sees the MimickNet framework becoming compatible with magnetic resonance imaging (MRI) and computed tomography (CT) as well, making image post-processing more affordable and accessible for everyone. “It would be amazing if you could just plug an ultrasound device into your phone and have different post-processing right there to produce a high-quality medical image,” he says. “Any researcher trying to match today’s clinical imaging pipelines can benefit from MimickNet. Its main goal is to enable more clinically translational research.”
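One plausible route to the on-phone, real-time use he describes, though not necessarily the team's approach, is converting a trained Keras model to TensorFlow Lite so it can run on-device; the sketch below reuses the hypothetical generator file from the earlier example.

```python
import tensorflow as tf

# Convert the hypothetical Keras generator to a TensorFlow Lite model that a
# mobile app could run on-device (model file name is a placeholder).
model = tf.keras.models.load_model("mimicknet_generator.h5", compile=False)
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # optional size/latency optimization
tflite_model = converter.convert()

with open("mimicknet_generator.tflite", "wb") as f:
    f.write(tflite_model)
```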
