The convergence of cheaper and more easily accessible computing power, the explosion of data, and the rise of applications that require big data analytics, machine learning, cognitive computing and AI is making high-performance computing a trend CIOs need to keep an eye on.

Think about high-performance computing. What comes to mind? Research into the beginnings of the universe? Scientists analysing massive amounts of data to discover the Higgs boson? The behaviour of antimatter?

Or maybe, less esoterically, weather forecasting, sussing out intelligence from piles of seismic data for oil and gas exploration, or simulating wind flows over different terrain to figure out the best place to plonk a wind farm?

Not exactly the stuff of IT leadership. Or in the realm of enterprise IT departments.

Or is it?

What if high-performance computing (HPC) and its applications aren’t as arcane as we assume they are? What if high-performance computing is actually the sort of thing enterprise IT departments need to be looking at today—if they want to compete tomorrow?

“HPC has become a competitive weapon,” Earl Joseph, IDC’s Program VP for High-Performance Computing and Executive Director of the HPC User Forum, said recently.

Really? What would an enterprise IT department want with an HPC system that runs at more than a teraflop, or 10¹² floating-point operations per second?

A lot, apparently.

Companies that want to set the gold standard for efficiency and customer experience, that are interested in predicting buying patterns or detecting fraud and sales opportunities in real time, or that want to create new revenue models based on data are all turning to high-performance computing systems.

Take the example of the auto insurer GEICO. It uses a high-performance computing cluster to pre-calculate quotes for every adult and household in the United States, so that it can accurately offer automated phone quotes in just 100ms, according to an IDC presentation, High-Performance Data Analysis: HPC Meets Big Data. GEICO is a wholly-owned subsidiary of Warren Buffett’s Berkshire Hathaway, and its application of HPC to its business won it HPCwire Readers’ and Editors’ Choice Awards at the 2016 Supercomputing Conference.
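
The pattern behind that 100-millisecond figure is worth understanding: the heavy actuarial modelling runs offline on the cluster, and the phone channel only performs a lookup. Here is a minimal sketch of that precompute-then-serve idea, using entirely hypothetical profile fields and a toy rating function rather than GEICO’s actual model:

```python
# Hypothetical precompute-then-serve pattern: the expensive rating model runs
# offline (e.g. on an HPC cluster); the phone channel only does a dictionary lookup.
from itertools import product

AGE_BANDS = ["18-25", "26-40", "41-65", "65+"]            # assumed profile dimensions
VEHICLE_CLASSES = ["economy", "sedan", "suv", "sports"]
ZIP_PREFIXES = ["100", "303", "606", "900"]

def rate_policy(age_band: str, vehicle: str, zip_prefix: str) -> float:
    """Stand-in for the expensive actuarial model that justifies the cluster."""
    base = 600.0
    base *= {"18-25": 1.8, "26-40": 1.1, "41-65": 1.0, "65+": 1.3}[age_band]
    base *= {"economy": 0.9, "sedan": 1.0, "suv": 1.2, "sports": 1.6}[vehicle]
    base *= 1.0 + (int(zip_prefix) % 7) / 100.0            # toy regional adjustment
    return round(base, 2)

# Offline batch job: enumerate every profile combination and store the quote.
quote_table = {
    (a, v, z): rate_policy(a, v, z)
    for a, v, z in product(AGE_BANDS, VEHICLE_CLASSES, ZIP_PREFIXES)
}

# Online path: answering a caller is a constant-time lookup, easily within 100ms.
print(quote_table[("26-40", "suv", "900")])
```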

The explosion of data from billions of sensors, the advent of cheaper and more easily accessible computing power, and the need for greater big data analysis and man-to-machine interaction are creating a perfect storm for the more commercial use of high-performance computing.

HPC is Under the Hood

Most technology predictions will tell you that machine learning and AI are important trends to watch out for. Take Gartner’s top strategic technology trends for 2017, for example. At the top of the heap is AI & Advanced Machine Learning.
According to the research firm’s report, “AI and machine learning (ML), which include technologies such as deep learning, neural networks and natural-language processing, can also encompass more advanced systems that understand, learn, predict, adapt and potentially operate autonomously.”

But, strangely enough, few predictions ever cover high-performance computing—which, for the most part, forms the infrastructural basis for the compute-intensive operations required by artificial intelligence and cognitive computing.

To Gartner’s credit, the very next paragraph reads, “The combination of extensive parallel processing power, advanced algorithms and massive data sets to feed the algorithms has unleashed this new era.”

Extensive parallel processing power: that’s the very definition of high-performance computing.

The report continues, “In banking, you could use AI and machine-learning techniques to model current real-time transactions, as well as predictive models of transactions based on their likelihood of being fraudulent. Organizations seeking to drive digital innovation with this trend should evaluate a number of business scenarios in which AI and machine learning could drive clear and specific business value and consider experimenting with one or two high-impact scenarios.”

Enterprise use of high-performance computing.

That’s just the beginning.

Not Your Father’s HPC

The concept of high-performance computing has been around for a long time, and that’s part of the problem. High-performance computing suffers from a bit of a branding problem: because it’s been used primarily for abstract modelling and simulation applications in government institutions and scientific research, most people have pigeonholed the technology as something enterprise IT departments simply can’t use. It’s interesting and all, but when CIOs hear HPC, they tune out.
Here’s the status update on HPC: We’re upgrading the definition.

In his piece IDC: Searching for Dark Energy in the HPC Universe, Bob Sorensen, research vice president in IDC’s High-Performance Computing group, questions the definition of high-performance computing.

He talks about the relatively new phenomena shaping the high-performance computing market and suggests revisiting the definition of HPC. Some of these trends include “new hardware to support deep learning applications that, with their emphasis on high computational capability, large memory capacity, and strong interconnect schemes, can rightly be called HPC systems.”

He also points to, “New big data applications that are running in non-traditional HPC environments but that use HPC hardware, such as in the finance or cyber security sectors.”

There is a need to re-examine what defines high-performance computing, he says. “The HPC universe is expanding in ways that are not being directly observed using traditional HPC definitions, and that new definitions may be needed to accurately capture this phenomenon… Ultimately…the sector needs to consider what exactly an HPC is.”

In the meantime, enterprises are tapping high-performance computing for big data analytics, creating an HPC offshoot called High-Performance Data Analytics (HPDA).

“In the last couple of years, HPC has evolved thanks to the data-intensive nature of business. Because of all that data, we see a merging of traditional HPC and analytics,” says Jonathan Wu, CTO, Data Center Group, Lenovo Asia Pacific. One example is PayPal. It uses high-performance data analytics to detect fraud in real time and find suspicious patterns it doesn’t even know exist. Its HPDA solution saved the company over $700 million in its first year.
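
Finding “suspicious patterns it doesn’t even know exist” is, in machine-learning terms, unsupervised anomaly detection. The article doesn’t describe PayPal’s actual models, so the following is purely illustrative: an isolation forest flagging outliers in made-up transaction features.

```python
# Minimal unsupervised anomaly-detection sketch (illustrative only; not PayPal's
# actual pipeline). An isolation forest flags transactions that look unlike the rest.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic features: transaction amount (USD) and seconds since the last login.
normal = rng.normal(loc=[60, 3600], scale=[25, 900], size=(10_000, 2))
suspicious = rng.normal(loc=[4_000, 5], scale=[500, 2], size=(20, 2))
transactions = np.vstack([normal, suspicious])

model = IsolationForest(contamination=0.005, random_state=0).fit(transactions)
flags = model.predict(transactions)        # -1 = anomalous, 1 = looks normal

print(f"flagged {int((flags == -1).sum())} of {len(transactions)} transactions for review")
```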

It doesn’t stop there. According to Adam Christensen, Head of Data Technology, PayPal, the company is also using big data to provide a frictionless customer experience, deliver relevant and customised offers, and assess creditworthiness and offer access to credit within minutes.

Another recent example is how media company Condé Nast is using HPDA to offer a totally new service—and a new revenue stream. Condé Nast brands such as The New Yorker and Vogue want to create social media campaigns, and its solution offers them the ability to find the best brand ambassador for a specific campaign based on traits. The system locates the best influencer based on the analysis of thousands of words and emojis.
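
The core matching step can be pictured as scoring how closely each influencer’s published text sits to a campaign brief. Condé Nast’s actual system isn’t described in detail here, so this is only a toy sketch, using TF-IDF vectors and cosine similarity over invented posts:

```python
# Toy influencer-matching sketch (hypothetical data; not Condé Nast's actual system).
# Rank candidate influencers by how similar their posts are to a campaign brief.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

campaign_brief = "sustainable luxury fashion, runway trends, vintage couture"

influencer_posts = {
    "alex":   "thrifted vintage couture haul and sustainable fashion tips",
    "brooke": "game day recipes and easy tailgate snacks",
    "chen":   "runway trends from fashion week, luxury street style",
}

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform([campaign_brief] + list(influencer_posts.values()))

# Row 0 is the brief; rows 1..n are the candidates.
scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()
ranking = sorted(zip(influencer_posts, scores), key=lambda pair: pair[1], reverse=True)

for name, score in ranking:
    print(f"{name}: {score:.2f}")
```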

The fact is, high-performance computing is coming out of institutions like the Los Alamos National Laboratory in the US and the RIKEN Institute in Japan, where it is used to study arcane subjects, and is increasingly finding a new home in enterprise IT departments, where it’s creating provable financial impact.

There’s a growing body of proof that HPC-driven high-performance data analytics is already within the enterprise, and it’s only going to expand its presence, driven by the growth of data (especially from billions of sensors, millions of transactions and social media feeds) and the need for real-time answers.

HPC: What CIOs Should Know

However the future decides to define high-performance computing, it’s important that CIOs understand how high-performance computing is already driving many of the applications customers and enterprises want.

According to an IDC survey, the ROI on high-performance computing installations is high. Every dollar invested in high-performance computing returns an average of $356 in revenue and $38 in profit, and the average time to return is 1.9 years.
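
Those averages are easy to sanity-check against a planned spend. Here is a back-of-envelope calculation using the IDC figures (the investment amount below is made up purely for illustration):

```python
# Back-of-envelope ROI check using the IDC averages cited above:
# $356 in revenue and $38 in profit per dollar invested, ~1.9 years to return.
# The investment figure is hypothetical, not from the article.
investment = 2_000_000                      # illustrative HPC spend, in USD

revenue_per_dollar = 356
profit_per_dollar = 38
avg_time_to_return_years = 1.9

print(f"Projected revenue: ${investment * revenue_per_dollar:,.0f}")
print(f"Projected profit:  ${investment * profit_per_dollar:,.0f}")
print(f"Average time to return: {avg_time_to_return_years} years")
```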

These enterprise use cases tend to cluster around big data for real-time predictive analytics, cognitive computing, artificial intelligence, machine learning, and natural language processing. All of these require significant amounts of computing power.

US retailer Macy’s, for instance, is testing what it calls “Macy’s On-Call”, a mobile service that lets shoppers ask, in natural language, about a store’s products, services and facilities, according to Computerworld.

Another example is how GM teamed with a large technology provider to add artificial intelligence to its cars. According to media reports, its solution, called OnStar Go, “is the industry’s first cognitive mobility service and will use machine learning to understand user preferences, and recognise patterns found in your decision data.” Based on its analysis, OnStar Go will provide personalised marketing offers from its partners.

These are just a few of the many uses of high-performance computing within the enterprise, driven by the need to create competitive differentiation.

“HPC systems are becoming an increasingly necessary ingredient in any industry’s ability to develop new and innovative products,” says Steve Conway, Research VP, IDC’s High-Performance Computing Group, in CIO Review.

It’s probably all this enterprise interest that’s driving the high-performance computing market. According to IDC, the HPC market was expected to grow by 6 to 7 percent in 2016, making it worth about $24.6 billion at the end of the year. Between 2015 and 2019, the research firm forecasts a compounded annual growth rate (CAGR) of 8 percent for HPC.

Of course, high-performance computing is still being used by researchers to make game-changing discoveries. Recently, at the World Economic Forum, Katharina Hauck, Senior Lecturer, Department of Infectious Disease Epidemiology, School of Public Health, Imperial College London, spoke about how her team used high-performance computing to assess something critical: which variable (healthcare, sanitation, schooling, etc.) has the greatest effect on life expectancy.

There are over 40 determinants that affect life expectancy, which makes it difficult to figure out where to best invest limited resources to achieve maximum impact. For example, policy-makers have taken it for granted that healthcare is a key determinant of life expectancy. It’s a fairly safe assumption. Hauck’s research, using HPC systems, showed that there were 20 “robust” determinants of life expectancy—and healthcare wasn’t one of them.
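
Hauck’s methodology isn’t spelled out here, but the flavour of a “robust determinants” analysis can be sketched: fit a regularised model many times on resampled data and keep only the variables that are selected consistently. The toy version below runs on synthetic data with a bootstrapped Lasso; it is not the team’s actual method.

```python
# Toy "robust determinants" sketch on synthetic data (illustrative only; not the
# methodology used in Hauck's research). A determinant counts as robust if it keeps
# a non-zero coefficient across most bootstrap resamples.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(42)
n_obs, n_determinants = 500, 40               # ~40 candidate determinants, as in the article

X = rng.normal(size=(n_obs, n_determinants))
true_coefs = np.zeros(n_determinants)
true_coefs[:5] = [2.0, -1.5, 1.0, 0.8, -0.6]  # only a few truly matter in this toy data
life_expectancy = X @ true_coefs + rng.normal(scale=1.0, size=n_obs)

n_resamples = 200
selection_counts = np.zeros(n_determinants)
for _ in range(n_resamples):
    idx = rng.integers(0, n_obs, size=n_obs)  # bootstrap resample
    model = Lasso(alpha=0.1).fit(X[idx], life_expectancy[idx])
    selection_counts += (model.coef_ != 0)

robust = np.where(selection_counts / n_resamples > 0.9)[0]
print("robust determinant indices:", robust)
```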

That’s the sort of new thinking that high-performance computing makes possible. Maybe it’s time we changed the way we think about HPC, too.

Jonathan Wu
CTO, Data Center Group, Lenovo Asia Pacific

Based in Beijing, Jonathan leads High-Performance Computing for Lenovo in the Asia Pacific region. He has over 20 years of experience in the IT industry.