The Future of Healthcare is Data-driven

By Rudeon Snell, Global Partner Lead: Customer Experience & Success at Microsoft

April 17, 2023

As analytics tools and machine learning capabilities mature, healthcare innovators are speeding up the development of enhanced treatments supported by Azure’s GPU-accelerated AI infrastructure powered by NVIDIA.

Improving diagnosis and elevating patient care

Humanity’s search for cures and treatments for common ailments has driven millennia of healthcare innovation. From traditional medicine in early history to the rapid medical advances of the past few centuries, healthcare providers have been locked in a constant search for effective solutions to old and emerging diseases and conditions.

The pace of healthcare innovation has increased exponentially over the past few decades, with the industry absorbing radical changes as it transitions from a health care to a health cure society. From telemedicine, personalized wellbeing, and precision medicine to genomics and proteomics, all powered by AI and advanced analytics, modern medical researchers can access more supercomputing capability than ever before. This quantum leap in computational capability enables healthcare services to be delivered and consumed in ways, and at a pace, that were previously unimaginable.

Today, health and life sciences leaders leverage Microsoft Azure high-performance computing (HPC) and purpose-built AI infrastructure to accelerate insights into genomics, precision medicine, medical imaging, and clinical trials, with virtually no limits to the computing power they have at their disposal. These advanced computing capabilities are allowing healthcare providers to gain deeper insights into medical data by deploying analytics and machine learning tools on top of clinical simulation data, increasing the accuracy of mathematical formulas used for molecular dynamics and enhancing clinical trial simulation.

By utilizing the infrastructure as a service (IaaS) capabilities of Azure HPC and AI, healthcare innovators can overcome the challenges of scale, collaboration, and compliance without adding complexity. And with access to the latest GPU-enabled virtual machines, researchers can fuel innovation through high-end remote visualization, deep learning, and predictive analytics.

Data scalability powers rapid testing capabilities

Take the example of the National Health Service, where the use of Azure HPC and AI led to the development of an app that could analyze COVID-19 tests at scale, with a level of accuracy and speed that is simply unattainable for human readers. This drastically improved the efficiency and scalability of analysis as well as capacity management.

Another advance worth noting: with Dragon Ambient Experience (DAX), an AI-based clinical solution offered by Nuance, doctor-patient conversations are digitized into highly accurate medical notes, helping ensure high-quality care. By freeing doctors to engage with their patients in a more direct and personalized manner, DAX improves the patient experience, reducing patient stress and saving doctors time.

“With support from Azure and PyTorch, our solution can fundamentally change how doctors and patients engage and how doctors deliver healthcare.”—Guido Gallopyn, Vice President of Healthcare Research at Nuance.

Another exciting partnership, between Nuance and NVIDIA, brings medical imaging AI models developed with MONAI, a domain-specific framework for building and deploying imaging AI, directly into clinical settings. By providing healthcare professionals with much-needed AI-based diagnostic tools, across modalities and at scale, medical centers can optimize patient care at a fraction of the cost of traditional healthcare solutions.

“Adoption of medical imaging AI at scale has traditionally been constrained by the complexity of clinical workflows and the lack of standards, applications, and deployment platforms. Our partnership with Nuance clears those barriers, enabling the extraordinary capabilities of AI to be delivered at the point of care, faster than ever.”—David Niewolny, Director of Healthcare Business Development at NVIDIA.

GPU-accelerated virtual machines are a healthcare game changer

In the field of medical imaging, progress relies heavily on the use of the latest tools and technologies to enable rapid iterations. For example, when Microsoft scientists sought to improve on a state-of-the-art algorithm used to screen blinding retinal diseases, they leveraged the power of the latest NVIDIA GPUs running on Azure virtual machines.

Using Microsoft Azure Machine Learning for computer vision, scientists reduced misclassification by more than 90 percent, from 3.9 percent to a mere 0.3 percent. Deep learning model training over 83,484 images was completed in 10 minutes, achieving better performance than a state-of-the-art AI system. These are the kinds of improvements that can help doctors make more robust and objective decisions, leading to better patient outcomes.
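To put that "more than 90 percent" figure in perspective, the relative reduction can be verified with a few lines of Python (only the two error rates quoted above are used; this is an illustrative check, not code from the study):

```python
# Relative reduction in misclassification rate, using the figures quoted above.
baseline = 3.9   # misclassification rate (%) before the improvement
improved = 0.3   # misclassification rate (%) after retraining

relative_reduction = (baseline - improved) / baseline * 100
print(f"Relative reduction: {relative_reduction:.1f}%")
```

A drop from 3.9 percent to 0.3 percent works out to roughly a 92 percent relative reduction, consistent with the claim in the text.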


For radiotherapy innovator Elekta, AI could help expand access to life-saving treatments for people around the world. Elekta believes AI can assist physicians by freeing them to focus on higher-value activities such as adapting and personalizing treatments. The company accelerates the overall treatment planning process for patients undergoing radiotherapy by automating time-consuming tasks such as advanced analysis, contouring targets, and optimizing the dose delivered to patients. In addition, Elekta relies heavily on the agility and power of on-demand infrastructure and services from Microsoft Azure to develop solutions that empower its clinicians, helping deliver the next generation of personalized cancer treatments.

Elekta uses Azure HPC powered by NVIDIA GPUs to train its machine learning models with the agility to scale storage and compute resources as its research requires. Through Azure’s scalability, Elekta can easily launch experiments in parallel and initiate its entire AI project without any investment in on-premises hardware.

“We rely heavily on Azure cloud infrastructure. With Azure, we can create virtual machines on the fly with specific GPUs, and then scale up as the project demands.”—Silvain Beriault, Lead Research Scientist at Elekta.

With Azure high-performance AI infrastructure, Elekta can dramatically increase the efficiency and effectiveness of its services, helping to reduce the disparity between the many who need radiotherapy treatment and the few who can access it.

Learn more

Leverage Azure HPC and AI infrastructure today or request an Azure HPC demo.

Read more about Azure Machine Learning.

#MakeAIYourReality
#AzureHPCAI
#NVIDIAonAzure
