
[Use Cases] How to make the most out of HPC – health

High-Performance Computing (HPC) can be used to create simulations, reducing the need for physical testing, and it can perform large-scale computations in minutes instead of weeks or months, saving time and money.

The main benefits are a significant improvement in clinical diagnosis through faster diagnoses, higher diagnostic quality, greater diagnostic throughput and more accessible storage of samples.

To better understand how HPC solutions can be used in the health sector, we have put together 6 use cases, each with a different application.

1. Medical Diagnosis and Treatment

For this use case, we chose VIRTUM-DP, which has the potential to remove obstacles and bottlenecks in current oncological diagnostics. The use of HPC has reduced the time required to process test cases from a day to a few hours, a typical reduction by a factor of 5. In addition, a simple, cross-platform user interface was developed that supports visualising and managing the data from almost any device.

VIRTUM-DP can reduce staff costs by 50% through increased efficiency. Extrapolated to the USA alone, this amounts to an overall saving of $1.7 billion per annum.

Read all about this use case here: https://www.fortissimo-project.eu/en/success-stories/420/hpccloudbased-molecular-modelling

2. Drug and Vaccine Discovery

This test case addresses the identification of existing drugs to treat illnesses other than those for which they are currently prescribed. This has the potential to make a significant impact in drug discovery where the costs of developing new treatments are becoming prohibitive. The assessment of target compounds requires the use of Cloud-based HPC because the search space is so large and complex.

This resulted in a significant reduction in the time and cost of evaluating a single compound. The Cloud-based approach made substantial computational resources available without the need to purchase and maintain expensive hardware.

Read more about the use case here: https://www.fortissimo-project.eu/en/success-stories/508/cloudbased-simulation-of-the-binding-capacities-of-target-drug-compounds

3. Medical Data Analysis

An upgraded HPC system, originally built by TGen, Dell Technologies and Intel in 2012, is enabling TGen to handle the massive quantities of data involved in genome sequencing. Building the system was no small engineering feat: next-generation sequencing (NGS) typically captures a terabyte or more of data each time a sequencer runs, and TGen runs multiple sequencers around the clock. TGen needed an HPC system capable of storing, processing and computing against those massive data sets, with the goal of dramatically reducing time to results.

The upgrade cuts the data processing time from two weeks to just eight hours. This dramatic performance improvement enables TGen to deliver personalized treatments that save lives sooner. It is also helping TGen unravel the secrets of infectious diseases, which are caused by organisms whose DNA can likewise be sequenced.
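
To put that figure in perspective, the short sketch below works out the speedup implied by those turnaround times. It is illustrative arithmetic only, not a representation of TGen's actual pipeline.

```python
# Back-of-the-envelope check of the speedup implied by the reported
# turnaround times (two weeks before the upgrade, eight hours after).
# Illustrative arithmetic only, not TGen's actual pipeline.

HOURS_PER_DAY = 24

before_hours = 14 * HOURS_PER_DAY  # two weeks of wall-clock time
after_hours = 8                    # reported post-upgrade turnaround

speedup = before_hours / after_hours
print(f"Implied end-to-end speedup: roughly {speedup:.0f}x")  # ~42x
```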

Read all about it here: https://www.delltechnologies.com/en-us/blog/using-power-of-hpc-to-save-lives/

4. Medical Imaging

GE Healthcare partnered with Hewlett Packard Enterprise to create the imaging chain engine for the Revolution CT, its flagship scanner that shows bones and organs in stunning 3D detail, while slashing patients’ radiation exposure. The faster a scanner spins, the more data it collects.

GE also developed powerful algorithms to process the raw data coming off the scanner and generate human-readable 3D images. GE paired the scanner with a customized IT solution architected by HPE. As an on-premises, high-performance computing (HPC) platform deployed at the edge, the HPE solution avoids the latency issues that would arise if the data were transmitted over a network. And it has the horsepower and capacity to ingest, store, and process the large volume of data coming off the Revolution CT scanner without losing a single byte.

The system can scale up to 12 GPUs working in concert to reconstruct 3D images for instant review. After the CT process is completed, the images are stored on the hospital's picture archiving and communication system (PACS) for future reference, completing the edge-to-core loop.

Read more about this use case here: https://www.hpe.com/ae/ar/customer-case-studies/ge-hpc-healthcare.html

5. Biomedical Research

The first objective of this use case was to demonstrate and assess the strong scaling performance of HemeLB to the largest core counts possible. This was to enable CompBioMed to evaluate current performance and identify improvements for future exascale machines. The second main objective was to demonstrate the ability of HemeLB to utilise this performance to study flows on human-scale vasculatures. The combination of both aspects will be essential to enabling the creation of a virtual human able to simulate the specific physiology of an individual for diagnostic purposes and evaluation of potential treatments.

In collaboration with the POP CoE, the CompBioMed team was able to demonstrate HemeLB's capacity for strong scaling behaviour up to the full production partition of SuperMUC-NG (>300,000 cores) whilst using a non-trivial vascular domain. This highlighted several challenges of running simulations at scale and also identified avenues for improving the performance of the HemeLB code. They have also run self-coupled simulations on personalised 3D arteries and veins of the left forearm, with and without an arteriovenous fistula being created. The initial flow from the modified model showed good agreement with that seen in a clinical study.
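
Strong scaling means keeping the problem size fixed while adding cores, and it is usually summarised as speedup and parallel efficiency relative to a baseline run. The sketch below shows that bookkeeping with hypothetical core counts and runtimes; the numbers are not HemeLB or SuperMUC-NG measurements.

```python
# Minimal sketch of strong-scaling bookkeeping: a fixed-size problem is run
# on increasing core counts. All timings below are hypothetical, not
# HemeLB / SuperMUC-NG measurements.

# (core_count, wall_clock_seconds) for the same fixed-size problem
runs = [(1_000, 3600.0), (10_000, 400.0), (100_000, 55.0), (300_000, 25.0)]

base_cores, base_time = runs[0]
for cores, time in runs:
    speedup = base_time / time
    # Efficiency compares the achieved speedup to the ideal (linear) speedup
    efficiency = speedup / (cores / base_cores)
    print(f"{cores:>7} cores: speedup {speedup:6.1f}x, efficiency {efficiency:5.1%}")
```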

Read all about this use case here: https://www.hpccoe.eu/2021/03/17/compbiomed-strong-scaling-performance-for-human-scale-blood-flow-modelling/

6. Neuroscience

Researchers at the Salk Institute (San Diego, CA) are using supercomputers at the nearby NSF-funded San Diego Supercomputer Center to investigate how the synapses of the brain work. In addition, the use of supercomputers is helping to change the very nature of biology – from a science that has relied primarily on observation to a science that relies on high-performance computing to achieve previously impossible in-depth quantitative results.

The supercomputing-simulation-driven approach is one of the key ways in which the Salk Institute is building computational bridges between brain levels, from the biophysical properties of synapses to the function of neural systems. This research could ultimately help reduce the overwhelming cost of treatment and long-term care for brain-related disorders. Modeling-driven precision in circuit information will help researchers understand the scale and scope of problems while enabling them to test and develop targeted therapies that are ultimately more effective.

Read all about this use case here: https://www.hpcuserforum.com/wp-content/uploads/2021/03/HPCSuccessStories.pdf
