The convergence of cheaper and more easily accessible computing power, the explosion of data, and the rise of applications that require big data analytics, machine learning, cognitive computing and AI is making high-performance computing a trend CIOs need to keep an eye on.

Think about high-performance computing. What comes to mind? Research into the beginnings of the universe? Scientists analysing massive amounts of data to discover the Higgs boson? The behaviour of anti-matter?

Or maybe, less esoterically, weather forecasting, sussing out intelligence from piles of seismic data for oil and gas exploration, or simulating wind flows over different terrain to figure out the best place to plonk a wind farm?

Not exactly the stuff of IT leadership. Nor the realm of enterprise IT departments.

Or is it?

What if high-performance computing (HPC) and its applications aren’t as arcane as we assume they are? What if high-performance computing is actually the sort of thing enterprise IT departments need to be looking at today—if they want to compete tomorrow?

“HPC has become a competitive weapon,” Earl Joseph, IDC’s Program VP for High-Performance Computing and Executive Director of the HPC User Forum, said recently.

HPC has become a competitive weapon.
Earl Joseph, Program VP for High-Performance Computing, IDC

Really? What would an enterprise IT department want with an HPC system that functions above a teraflop, or 10¹² floating-point operations per second?

A lot, apparently.

Companies that want to set the gold standard for efficiency and customer experience, that are interested in predicting buying patterns or detecting fraud and sales opportunities in real time, or that want to create new revenue models based on data are all turning to high-performance computing systems.

Take the example of the auto insurer GEICO. According to an IDC presentation, High-Performance Data Analysis: HPC Meets Big Data, it uses a high-performance computing cluster to pre-calculate quotes for every adult and household in the United States so that it can offer accurate, automated phone quotes in just 100ms. GEICO is a wholly-owned subsidiary of Warren Buffett’s Berkshire Hathaway, and its application of HPC to its business won it the HPCwire Readers’ and Editors’ Choice Awards at the 2016 Supercomputing Conference.
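The engineering pattern behind that 100ms figure is, at heart, precompute-and-lookup: price every possible combination offline on the cluster, then answer each call with a simple retrieval. The sketch below is a minimal, hypothetical illustration of that pattern in Python, not GEICO’s actual system; the Household fields, the price_quote function and the use of a local process pool as a stand-in for a real cluster are all invented for the example.

```python
from dataclasses import dataclass
from multiprocessing import Pool


@dataclass(frozen=True)
class Household:
    """Illustrative rating factors only -- not GEICO's real inputs."""
    driver_age: int
    state: str
    vehicle_class: str


def price_quote(h: Household) -> float:
    """Toy pricing function standing in for the real actuarial model."""
    base = 600.0
    age_factor = 1.8 if h.driver_age < 25 else 1.0
    vehicle_factor = {"economy": 0.9, "suv": 1.1, "sports": 1.4}[h.vehicle_class]
    return round(base * age_factor * vehicle_factor, 2)


def precompute(households):
    """Offline batch job: fan the pricing work out across local cores,
    a stand-in for fanning it out across an HPC cluster."""
    with Pool() as pool:
        quotes = pool.map(price_quote, households)
    return dict(zip(households, quotes))


if __name__ == "__main__":
    # Enumerate a (tiny) universe of households and price them all up front.
    universe = [
        Household(age, state, vehicle)
        for age in range(18, 90)
        for state in ("CA", "TX", "NY")
        for vehicle in ("economy", "suv", "sports")
    ]
    quote_table = precompute(universe)

    # The online path is now a dictionary lookup -- cheap enough to answer
    # an automated phone quote well inside 100ms.
    caller = Household(driver_age=30, state="TX", vehicle_class="suv")
    print(f"Quote for {caller}: ${quote_table[caller]}")
```

The design choice is the point: the expensive, embarrassingly parallel work happens in batch on HPC hardware, so the customer-facing path never has to do it at all.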

The explosion of data from billions of sensors, the advent of cheaper and more easily accessible computing power, and the need for deeper big data analysis and man-to-machine interaction are creating a perfect storm for the commercial use of high-performance computing.

HPC Is Under the Hood

Most technology predictions will tell you that machine learning and AI are important trends to watch out for. Take Gartner’s top strategic technology trends for 2017, for example. At the top of the heap is AI & Advanced Machine Learning.
According to the research firm’s report, “AI and machine learning (ML), which include technologies such as deep learning, neural networks and natural-language processing, can also encompass more advanced systems that understand, learn, predict, adapt and potentially operate autonomously.”

But, strangely enough, few predictions ever cover high-performance computing, which, for the most part, forms the infrastructural basis for the compute-intensive operations required by artificial intelligence and cognitive computing.

HPC: Real Enterprise Use Cases

US-based auto insurer GEICO uses a high-performance computing cluster to pre-calculate quotes for every American adult and household, so that it can offer accurate, automated phone quotes in just 100ms. The project won it the HPCwire Readers’ and Editors’ Choice Awards at the 2016 Supercomputing Conference.

GM’s OnStar Go “is the industry’s first cognitive mobility service and will use machine learning to understand user preferences, and recognise patterns found in your decision data.” Based on that analysis, OnStar Go will provide personalised marketing offers from its partners.

PayPal uses high-performance data analytics to detect fraud in real time and find suspicious patterns it doesn’t even know exist. Its high-performance data analytics solution saved the company over $700 million in its first year.

To Gartner’s credit, the very next paragraph reads, “The combination of extensive parallel processing power, advanced algorithms and massive data sets to feed the algorithms has unleashed this new era.”

Extensive parallel processing power: that’s the very definition of high-performance computing.

The report continues, “In banking, you could use AI and machine-learning techniques to model current real-time transactions, as well as predictive models of transactions based on their likelihood of being fraudulent. Organizations seeking to drive digital innovation with this trend should evaluate a number of business scenarios in which AI and machine learning could drive clear and specific business value and consider experimenting with one or two high-impact scenarios.”
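To make Gartner’s banking scenario concrete, here is a minimal sketch of scoring transactions by their likelihood of being fraudulent, assuming scikit-learn is available. The training data, the features (amount, hour of day, distance from home) and the 0.5 review threshold are all invented for illustration; a production system would train on vastly larger datasets over exactly the kind of parallel infrastructure described above.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy historical transactions: [amount_usd, hour_of_day, km_from_home]
# Labels: 1 = known fraud, 0 = legitimate. All values are invented.
X_train = np.array([
    [12.50,    9,   2],
    [48.00,   13,   5],
    [2300.00,  3, 800],
    [15.75,   19,   1],
    [990.00,   2, 450],
    [60.00,   11,   8],
])
y_train = np.array([0, 0, 1, 0, 1, 0])

# Fit a simple predictive model of fraud likelihood.
model = LogisticRegression()
model.fit(X_train, y_train)

# Score incoming transactions in "real time": flag anything whose
# predicted fraud probability crosses an (arbitrary) review threshold.
incoming = np.array([
    [25.00,   14,   3],   # looks routine
    [1800.00,  4, 600],   # large, late-night, far from home
])
probabilities = model.predict_proba(incoming)[:, 1]
for tx, p in zip(incoming, probabilities):
    flag = "REVIEW" if p > 0.5 else "ok"
    print(f"amount={tx[0]:>8.2f}  fraud_probability={p:.2f}  {flag}")
```

The model itself is deliberately simple; what makes the use case an HPC problem is running something like it across millions of transactions per minute without adding latency to each payment.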

Enterprise use of high-performance computing.

That’s just the beginning.

Not Your Father’s HPC

The concept of high-performance computing has been around for a long time, and that’s part of the problem. High-performance computing suffers from a bit of a branding problem: because it has been used primarily for abstract modelling and simulation applications in government institutions and scientific research, most people have pigeonholed the technology as something enterprise IT departments simply can’t use. It’s interesting and all, but when CIOs hear HPC, they tune out.
Here’s the status update on HPC: We’re upgrading the definition.

In the last couple of years, HPC has evolved thanks to the data-intensive nature of business. Because of all that data, we’re seeing a merging of traditional HPC and analytics.
Jonathan Wu, CTO, Data Center Group, Lenovo Asia Pacific

In his piece IDC: Searching for Dark Energy in the HPC Universe, Bob Sorensen, research vice president in IDC’s High-Performance Computing group, questions the definition of high-performance computing.

He talks about relatively new phenomena shaping the high-performance computing market and suggests revisiting the definition of HPC. Some of these trends include: “New hardware to support deep learning applications that, with their emphasis on high computational capability, large memory capacity, and strong interconnect schemes, can rightly be called HPC systems.”

He also points to “New big data applications that are running in non-traditional HPC environments but that use HPC hardware, such as in the finance or cyber security sectors.”

There is a need to re-examine what defines high-performance computing, he says. “The HPC universe is expanding in ways that are not being directly observed using traditional HPC definitions, and that new definitions may be needed to accurately capture this phenomenon… Ultimately…the sector needs to consider what exactly an HPC is.”


Jonathan Wu
CTO, Data Center Group, Lenovo Asia Pacific

Based in Beijing, Jonathan leads High-Performance Computing for Lenovo in the Asia Pacific region. He has over 20 years of experience in the IT industry.