Hyperconverged Infrastructure is one of the fastest-growing approaches to data management and control in the data center. Here, we take a deep dive past its promises and into how it works, so you can understand it and maximise its uses.

Anyone who has anything to do with IT has heard of Hyperconverged Infrastructure, or HCI. As infrastructure systems go, this technology is delighting customers and changing the way they architect and buy their infrastructure.

Nutanix, for example, had its early successes around VDI deployments, but it quickly shifted its overall strategy from desktop virtualization to enterprise workloads, and then to building for the cloud.

VMware offers software-defined storage through its Virtual SAN (vSAN) product. Combined with virtualized compute and VMware NSX for networking, it becomes a complete software-defined stack. The drawback of this solution, however, is that running bare-metal workloads is simply not possible.

While architectures may differ, the fundamentals remain the same. To understand this, let’s take a step, or maybe several steps, back into the past.

The early days of virtualization

Back then, virtual machines (VMs) were stored on the servers they ran on. By running ten to twelve VMs on a single system, businesses could see virtualization’s intended benefit of a lower TCO.

But while this setup was good for testing and development, IT managers were worried about failure rates. If the main server went down, all the VMs running on it would follow suit.

To address this concern, VM files were moved to centralised storage, i.e. a SAN. To provide enough bandwidth for all the VMs, a 4/8Gbps Fibre Channel fabric was needed, which included HBAs, transceivers, cables, and switches.

Though it increased the cost of the infrastructure significantly, the lower TCO benefit still applied to software licenses, servers, storage, and networking. This was the genesis of converged systems, and later integrated systems—all of which are derivatives of 3-tier architectures.

While beneficial, these architectures had several limitations. Their deep complexity meant they were difficult to implement, scale, and manage, particularly as they grew. Enter Google, Facebook et al., who worked around this problem using distributed file systems.

Today’s solutions for yesterday’s problems

The raison d’être of 3-tier architectures was to preserve the VM data and maintain high availability, on-site or across sites. But what if we copied whatever data was written, as it was being written, to another server?

That way, there would always be a copy of the data on another server. As a result, SAN storage, including its fabric and components, is ultimately not required. This is exactly what the file system in a hyperconverged system does.
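
To make the idea concrete, here is a minimal sketch of synchronous write mirroring in Python. The node names and the `Node` class are hypothetical; real HCI file systems replicate at the block or extent level with quorums, checksums, and rebalancing on top.

```python
# Minimal sketch of synchronous write mirroring, the core idea behind an
# HCI distributed file system. Node names and the Node class are illustrative
# assumptions, not any vendor's actual implementation.

class Node:
    """A storage node that simply keeps blocks in a local dict."""
    def __init__(self, name):
        self.name = name
        self.blocks = {}          # block_id -> bytes

    def write_local(self, block_id, data):
        self.blocks[block_id] = data


def replicated_write(block_id, data, local_node, replica_node):
    """Acknowledge the write only after both copies are stored,
    so the data survives the loss of either server."""
    local_node.write_local(block_id, data)
    replica_node.write_local(block_id, data)   # the copy on another server
    return "ack"                               # the VM sees the write as complete


if __name__ == "__main__":
    server1, server2 = Node("server-1"), Node("server-2")
    replicated_write("vm42-block-0001", b"...vm disk data...", server1, server2)
    # If server-1 fails, server-2 still holds the block.
    assert "vm42-block-0001" in server2.blocks
```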

Now, the resulting file system requires minimal compute resources (at least compared with what today’s processors offer). Hence we end up with a lot of idle compute that can be used for other purposes.

For example, if the file system only required two to four cores out of a total of 24 cores, you could run the VMs stored on the server to utilise the idle capacity.

Still more idle cores? Your options would include, but not be limited to, running de-dupe, compression, or maybe even a cloud portal.
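
As a back-of-the-envelope illustration: the 24-core and two-to-four-core figures come from the example above, while the cores-per-VM figure below is purely an assumed workload profile.

```python
# Back-of-the-envelope core budgeting for a single HCI node.
# TOTAL_CORES and FILESYSTEM_CORES come from the example in the text;
# CORES_PER_VM is an assumption used only for illustration.

TOTAL_CORES = 24
FILESYSTEM_CORES = 4          # upper end of the two-to-four core range
CORES_PER_VM = 2              # assumed average VM size

idle_cores = TOTAL_CORES - FILESYSTEM_CORES
vms_that_fit = idle_cores // CORES_PER_VM

print(f"Idle cores after the file system takes its share: {idle_cores}")
print(f"VMs that could run on those idle cores: {vms_that_fit}")
# Any cores still left over could go to de-dupe, compression, or a cloud portal.
```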

To enhance overall performance, you can always add SSDs to act as a first-tier read/write cache.
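
Conceptually, that tiering looks something like the sketch below, assuming a simple write-back scheme; real products differ in how and when they destage data to the capacity tier.

```python
# Illustrative sketch of an SSD tier sitting in front of capacity disks.
# Writes land on the fast tier first and are destaged later; reads are
# served from the SSD when possible. This is an assumed write-back scheme,
# not any particular vendor's implementation.

class TieredStore:
    def __init__(self):
        self.ssd = {}     # hot tier: recent reads and writes
        self.hdd = {}     # capacity tier

    def write(self, block_id, data):
        self.ssd[block_id] = data          # acknowledged from the fast tier

    def read(self, block_id):
        if block_id in self.ssd:           # cache hit: served at SSD speed
            return self.ssd[block_id]
        data = self.hdd[block_id]          # cache miss: fetch from capacity tier
        self.ssd[block_id] = data          # promote for future reads
        return data

    def destage(self):
        """Periodically flush the SSD tier down to the capacity tier."""
        self.hdd.update(self.ssd)
```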

Do you need a reliable disaster recovery system? Easy. Simply copy the data from the file system in Site 1 to the one in Site 2.
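
In essence, disaster recovery becomes a periodic copy of changed data from one site’s file system to the other’s, along the lines of the hypothetical sketch below (the site dictionaries and block names are assumptions; real products ship snapshots or deltas on a schedule, with WAN optimisation and retention policies on top).

```python
# Conceptual sketch of file-system-level disaster recovery between two sites.
# Site names, block IDs, and the dict-based "file systems" are assumptions.

site1 = {"vm42-block-0001": b"latest data"}   # primary site's file system
site2 = {}                                    # DR site's copy

def replicate_changes(src, dst):
    """Copy across any blocks the DR site is missing or that have changed."""
    for block_id, data in src.items():
        if dst.get(block_id) != data:
            dst[block_id] = data

# In practice this would run on a schedule (or continuously); one pass here.
replicate_changes(site1, site2)
assert site2 == site1   # Site 2 now holds a copy it could restart VMs from
```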

Not enough RAM? Not a problem. Today’s systems can scale up to 1.5TB of RAM for every two sockets.

New technologies need new platforms

Every new technology needs a platform from which to launch itself.

For HCI, that platform was virtual desktop infrastructure (VDI). In a VDI deployment, we can run approximately 100 VMs on a single system, all of them small VMs with minimal compute requirements.

In a 3-tier architecture, all of the VMs, across all of the nodes, access data simultaneously. The trouble is that as the number of logged-in users increases, the performance requirements increase sharply as well.

If many users logged in at the same time (a scenario known as a “boot storm”), the SAN storage would get heavily loaded, causing other workloads running on it to slow down or grind to a halt.
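
To put rough numbers on it (every figure below is an illustrative assumption, not a measurement): a desktop that needs only a handful of IOPS at steady state can demand many times that while it boots, so a hundred desktops booting together multiplies the load on the shared SAN.

```python
# Rough boot-storm arithmetic. All figures are assumed for illustration only;
# the point is simply that simultaneous logins multiply the storage load.

DESKTOPS           = 100   # VDI desktops on the cluster
STEADY_IOPS_PER_VM = 10    # assumed steady-state IOPS per desktop
BOOT_IOPS_PER_VM   = 100   # assumed IOPS per desktop while the OS boots

steady_load = DESKTOPS * STEADY_IOPS_PER_VM
boot_storm  = DESKTOPS * BOOT_IOPS_PER_VM

print(f"Steady-state load: {steady_load} IOPS")
print(f"Boot-storm load:   {boot_storm} IOPS "
      f"({boot_storm / steady_load:.0f}x the steady state)")
# A SAN sized for the steady state gets swamped, and every other workload
# sharing it feels the slowdown.
```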

This is why IT administrators always prefer a dedicated environment just for running VDI. Additionally, when sizing the SAN, they would choose the largest one possible, since it could only scale up, not out.

Origins of HCI

This is where HCI was conceived. Its low TCO gave customers ample justification for trying it, and since VDI was not a critical workload, they were willing to take a chance on it.

All VMs were made to run on the same servers that ran the HCI software. Thanks to this architecture and the built-in SSDs, customers started to see better performance and zero contention amongst the VMs compared with older 3-tier systems.

The natural evolution of technology took place, and customers moved from VDI VMs to server VMs, from test and dev to production.

And that’s where we were two years ago.

Today it’s about how to move the VMs, and how to manage them. Some of the solutions used today are:

  • Integrating with public cloud providers to offer a hybrid cloud environment
  • Integrating with OpenStack for more capabilities and flexibility
  • Using containers
  • Integrating with backup software vendors
  • Integrating with UPS vendors like Emerson

The story today is all about OEMs working together, integrating as much of the hardware and software at the back end as possible to provide customers with a seamless experience.

With the older 3-tier architectures, integration was very restrictive, because the layers (server, storage, networking, OS and software) were all independent, yet trying to speak the same language.

With HCI, the SAN fabric is not required, the network layer is virtualized, and the hardware integrates seamlessly with specialised software that can deliver more flexibility, easier management, and faster adaptation to new technologies.

Looking back, the difference between then and now is like moving from Microsoft’s Windows 95 to Apple’s iOS: all customers need to worry about are their applications. Simple.

Welcome to the World of Hyperconverged Infrastructure!

Kashish Karnick
Product Manager Storage and Software Defined, Lenovo Asia Pacific

New technologies crop up every day, creating opportunities for innovative solutions that can truly add value. But the complexity of technology is a roadblock to that value, and Kashish loves being able to simplify it, every day.