Nutanix – a vendor to keep an eye on

I’ve been working in and exploring many areas of the virtualisation industry since my first deployment of VMware’s ESX 2.5 Server back in 2005. So much has changed in the past seven years with VMware and their portfolio. Seemingly every annual VMworld conference brings a new innovation, an updated roadmap and an insight into their vision of the future data centre. The IT industry listens, the solution providers and integrators skill up, and the vendors up their game on complementary technologies. In recent years the vendors have taken note as adoption of virtualisation technologies has increased dramatically. Greater adoption creates opportunity for point solutions and bespoke products that can drive further cost savings and better utilisation of the newly deployed virtual infrastructures. In the past few years more and more vendors have appeared on the scene, shaping their offerings to meet the latest market demands. If you’ve ever taken a stroll around a VMware or Citrix annual conference you’ll understand what I mean; the presence of vendors is quite overwhelming.

[Disclaimer] I’d like to clarify to you, the reader, that I’m not affiliated with Nutanix, and have not been sponsored, paid or beaten into submission to produce this article. The images and statistics included below were provided to me by Nutanix for the sake of clarity and accuracy.

What’s that to do with Nutanix?

So, I’ve set the scene. Virtualisation is big, and the market for complementary vendor products is bigger still. Great, so what? Well, a few products have caught my eye in the past 18 months, and one of them comes from Nutanix. The offering from this vendor is simple: wrap compute, network and disk into one physical blade-like server, drop a small-footprint hypervisor onto it, then put four of these blade-like servers into each server-looking box. Combine a few of these boxes with a clever operating system managing the communications and hey presto – you have a highly available cluster. Not only that, but the underlying operating system is ‘virtual machine aware’ (VM-aware) and sympathetic to the VMs’ demands. No really, it’s that straightforward.

But what’s the point, that’s just a powerful rack server?

Yes and no. Yes, it’s a small form factor server (2U in fact) but it’s more than that. Each server-looking box is called a ‘Block’, and each Block contains four blade-like servers called ‘Nodes’. Each Node has local storage comprising a mixture of PCIe SSD (320GB), SATA SSD (300GB) and SATA HDD (5 x 1TB). As well as the local storage (don’t fret) you’ll find a couple of CPUs (Xeon X5640), some RAM (max 192GB) and network capability (1GbE & 10GbE) in each Node. You read correctly, by the way – I did write ‘local’ storage. The Nutanix developers have enabled the underlying operating system to spread data amongst all Nodes, so once you’re up and running with a single- or multiple-Block deployment, the disk is carved up and presented logically as a single drive built from disks across Nodes. With local disk presented as shared, all the benefits of distributed storage come into play. The hypervisor is layered on the same box to allow virtual machines to run locally.
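To make the idea of pooled local disks concrete, here’s a tiny conceptual sketch – my own illustration, not Nutanix’s actual placement algorithm – of how data chunks belonging to one logical drive could be spread across Nodes, with a second copy on a neighbouring Node for redundancy. The node names and replica count are assumptions for the example.

```python
# Conceptual sketch: place each chunk of a logical disk on a Node
# chosen by hashing the chunk id, plus a copy on the next Node.
# This is an illustration only; the real NOS placement logic is proprietary.
import hashlib

NODES = ["node1", "node2", "node3", "node4"]   # one Block's worth of Nodes

def place_chunk(chunk_id: str, replicas: int = 2):
    """Map a chunk to `replicas` distinct Nodes, deterministically."""
    digest = hashlib.sha256(chunk_id.encode()).hexdigest()
    start = int(digest, 16) % len(NODES)
    return [NODES[(start + i) % len(NODES)] for i in range(replicas)]

# Every chunk of 'vmdisk' lands on two different Nodes, so losing one
# Node (or its disks) still leaves a copy of each chunk available.
placements = {f"vmdisk-chunk-{i}": place_chunk(f"vmdisk-chunk-{i}")
              for i in range(4)}
```

Because placement is a pure function of the chunk id, any Node can work out where a chunk lives without asking a central directory – one reason distributed-local designs scale by just adding Nodes.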

Immediately this means you don’t need a SAN, separate rack or blade server provision – one Block does it all. But while one Block does it all, the real benefits come into play when you plug many Blocks together.

No SAN? But what about redundancy?

The Nodes are highly available by design through their underlying operating system, and storage presentation is redundant because it’s distributed across many Nodes within one Block or across many Blocks.

No separate servers or blades for virtual machines?

The Node is populated with dual CPUs and 48GB RAM (by default) so there’s no need to use other hardware to host virtual machines (VMs).

Nutanix architecture

They (Nutanix) refer to their architecture as Hyperconvergence; the diagram below (courtesy of Nutanix) shows the Nodes with the operating system ‘wrapper’ (in blue) managing them all.

Nutanix Architecture

The next generation

Today, Nutanix publicly announced the release of their updated product and operating system, a significant milestone for them. Not only are they announcing an updated, more powerful and capable piece of tin but also a completely updated and new version of their proprietary operating system. It’s this that really defines the capability of the product.

NX-3000 Series

If you’ve seen the NX-2000 series you’ll immediately notice that aesthetically it’s changed, but what about under the bonnet (or hood, if you’re in the USA)?

Nutanix NX-2000 vs NX-3000

As the table above shows, the storage remains unchanged but the raw power and network capabilities have been given a considerable boost. Increased raw power means greater virtual machine capacity and capability – which is of course subjective, but you could halve the quoted number of VMs and still have a high consolidation ratio. The graph below clarifies that the VM count is VDI-based. The other big change is the NICs: dual 10GbE is now offered, which really opens up the redundancy and throughput capabilities.

Nutanix Linear Scale.png

Importantly, the graph above highlights that the building-block approach increases performance in a linear fashion without detriment to the existing provision.

NOS 3.0

The Nutanix Operating System is the real ‘man behind the curtain’ here; without it the 2U Blocks would be standalone servers (nice looking but not much use). Today’s announcement focused on the following areas of the greatly enhanced OS:

Operating System highlights:

  • Native Replication & Disaster Recovery (DR)
  • Compression, in-line and post-processed
  • Bonjour-based dynamic cluster grow
  • Hypervisor agnostic platform (KVM support)
I’ll attempt a little paraphrasing below for each one.

Native Replication & DR

During a discussion with one of the Nutanix guys at VMworld this year we were talking products and roadmaps. It was ‘hinted’ then that DR and replication were high on their agenda. Well, ‘high’ it was. Let me break down what they’re able to achieve and explain the diagram below.


If you’re familiar with replication technologies used to create a DR solution within your current virtualised infrastructure, you’ll know that you replicate at a SAN (block / LUN) or a VM (delta change) level. Typically, a primary and secondary site are defined, with failover being either one-way or two-way based upon rules (and licensing). SAN replication relies upon vendor-specific technologies and plug-ins to a hypervisor, while VM replication requires the hypervisor to do the work. So, depending on your hardware, connectivity and, importantly, budget, you choose accordingly.

I’ve always seen this as a retro-fix to provide a DR solution, but don’t misunderstand me: it can work incredibly well and is better than nothing. Still, it’s an either/or between two sites.

Nutanix have approached this from a slightly different angle. There isn’t a typical primary or secondary site definition preventing those sites from taking on other ‘DR’ roles; any site can be a counterpart to another. Replication is performed at the VM level, with all Nodes participating in a multi-way distribution. The NOS handles all multi-way replication of VM changes simultaneously to all Nodes in the Cluster. Basically, it’s master–master, multi-way replication, enabling replication of VMs to more than one site – clever stuff.
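The master–master idea is easier to see in miniature. Below is a toy model – emphatically not Nutanix code, and using last-writer-wins where the real NOS would track versions properly – of sites in a full mesh where any site can originate a VM change and push the delta to every peer.

```python
# Toy model of multi-way, master-master replication: every site pushes
# VM delta changes to all peers, so any site can act as "primary" for
# any VM. Illustration only; conflict handling is deliberately naive.

class Site:
    def __init__(self, name):
        self.name = name
        self.peers = []      # other Site objects in the mesh
        self.vm_state = {}   # vm name -> latest delta id

    def connect(self, *peers):
        self.peers.extend(peers)

    def write_vm_delta(self, vm, delta_id):
        """A local VM change: apply it, then replicate to every peer."""
        self.vm_state[vm] = delta_id
        for peer in self.peers:
            peer.receive_delta(vm, delta_id)

    def receive_delta(self, vm, delta_id):
        # Last-writer-wins here; a real system would use versioning
        # (e.g. vector clocks) to resolve concurrent writes.
        self.vm_state[vm] = delta_id

# Three sites, fully meshed: no fixed primary/secondary roles.
a, b, c = Site("A"), Site("B"), Site("C")
a.connect(b, c); b.connect(a, c); c.connect(a, b)

a.write_vm_delta("vm-web01", "delta-7")   # site A originates a change
c.write_vm_delta("vm-db01", "delta-3")    # site C originates another
```

After both writes, every site holds both VMs’ latest deltas – which is exactly why no site needs to be designated the ‘secondary’.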


Disk capacity and performance will make or break a service provision. Listening to the Nutanix techies, there’s a lot of optimisation integral to ensuring the ‘VM-aware’ aspects of their underlying OS deliver on their claim, but what about I/O and laying down the data? This is where the three types of storage technology I referred to at the top of this article come into play. Simply put, there are three ways in which data is handled:

  1. Performance optimised – Adaptive compression that learns I/O patterns to ‘best fit’ the VM requirements
  2. In line – Data is compressed during writes (ideal for sequential workloads)
  3. Post processed – Compression is delayed until data is cold
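To illustrate the difference between points 2 and 3, here’s a small hedged sketch – my own illustration using `zlib`, with an assumed one-hour cold threshold, not Nutanix’s actual compression engine – of the two paths: sequential writes compressed in-line, random-write data stored raw until it goes ‘cold’ and a background pass picks it up.

```python
# Illustration of in-line vs post-processed compression paths.
# zlib and the one-hour cold threshold are stand-ins for the example.
import time
import zlib

COLD_AFTER_SECS = 3600  # assumed threshold; a real system makes this tunable

def write_extent(data: bytes, sequential: bool):
    """Return (stored_bytes, compressed_flag, written_at)."""
    if sequential:
        # In-line: compress on the write path (good for sequential workloads)
        return zlib.compress(data), True, time.time()
    # Random I/O: store raw now to keep write latency low
    return data, False, time.time()

def post_process(extent, now):
    """Background pass: compress extents that have gone cold."""
    stored, compressed, written_at = extent
    if not compressed and now - written_at >= COLD_AFTER_SECS:
        return zlib.compress(stored), True, written_at
    return extent

# A sequential write (think Hadoop-style streaming) compresses immediately;
# a random-write extent is only compressed once it has gone cold.
seq = write_extent(b"A" * 4096, sequential=True)
rnd = write_extent(b"B" * 4096, sequential=False)
rnd_later = post_process(rnd, now=rnd[2] + 2 * COLD_AFTER_SECS)
```

The design trade-off this captures: in-line compression saves disk immediately but costs CPU on the write path, while post-processing keeps writes fast and spends the CPU later, when the data is no longer hot.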

The diagram here shows the data flow.

Nutanix Compression.png

Of course, this is just about disk. Network bandwidth isn’t cheap, whether in-house or over a WAN, and I’m not just referring to the price here either. Any traffic requiring transmission has the potential to constrain bandwidth to the detriment of another piece of infrastructure. Where you can, always consider the bigger picture of connectivity. Nutanix haven’t discussed this, but I’m hoping they may read this article and think, ‘he’s got a point’. (Royalties gratefully received)

Bonjour-based dynamic cluster grow


Increasing capacity shouldn’t mean impacting service provision, regardless of the technology you’re using. If you’re familiar with VMware’s vCenter and its ability to add new ESX(i) hosts to HA Clusters without disruption to the running VMs, then the same applies to a Nutanix Cluster. New Blocks are discovered once they’re introduced to the network and, using the management console, are added with a couple of clicks; provisioning happens in minutes rather than hours. Zero downtime is something Nutanix are advertising with this feature, but it’s really something I think should be standard for any new ‘building block’ style technology. Fortunately for them, I don’t have to labour the point.
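The discover-then-add flow can be modelled in a few lines. This is a toy model of the workflow, not the actual NOS API – in the real product the announcement is done over Bonjour/mDNS on the local network, which I’ve reduced here to a simple in-memory list.

```python
# Toy model of Bonjour-style dynamic cluster grow: new Blocks announce
# themselves, the console lists them, and joining them extends capacity
# while the existing Nodes keep serving. Class and method names are mine.

class Cluster:
    def __init__(self, nodes):
        self.nodes = list(nodes)   # active Nodes serving VMs
        self.discovered = []       # Blocks announced but not yet joined

    def announce(self, block_nodes):
        """A new Block broadcasts its presence on the network."""
        self.discovered.append(list(block_nodes))

    def add_discovered(self):
        """The 'couple of clicks': join every discovered Block in place."""
        while self.discovered:
            self.nodes.extend(self.discovered.pop())

# Start with one Block (four Nodes), then grow by a second Block.
cluster = Cluster(["node1", "node2", "node3", "node4"])
cluster.announce(["node5", "node6", "node7", "node8"])  # new Block appears
cluster.add_discovered()                                # capacity doubles
```

The point of the two-step shape (announce, then add) is that discovery is passive and disruption-free: nothing about the running cluster changes until the administrator explicitly joins the new Block.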

Hypervisor agnostic

With this latest announcement comes word of support for another hypervisor, KVM. Does this echo the start of more hypervisor support? Only time will tell, but once it starts and demands increase there’s every possibility.

If you’re wondering quite what KVM has to offer in terms of Guest OS support, the KVM project’s guest support page lists the current versions. Once you’ve seen the supported list you’ll understand there’s scope for a broader reach of Linux variants that play in and around the ‘Big Data’ arena. Think ‘Hadoop’ as an example, which also plays nicely into the Compression section above, specifically point 2, where sequential workloads are handled.

My summary of all this

I’m hoping that, having read this article, you can understand my enthusiasm for their product. To be honest, I’ve never deployed an NX-2000, so I’m certainly not in an authoritative position to confirm it’s ‘the best thing since sliced bread’, but having spoken to some people at Nutanix and quizzed them on technologies and use-cases, I’m on board with their solution. Because Nutanix have this layer that wraps around, but can also sit in between, the storage and hypervisor, all sorts of parallel processes become feasible: there’s a lot of scope and potential for virus scanning, backup hook-ins, garbage collection and so on. Then there’s the building-block approach Nutanix offer – I’ve waxed lyrical about this method with VMware since the days of just ESX. You really can just plug in another Node, then another, then another… The physical reclaim potential from decommissioning siloed SAN and blade technologies makes their proposition, I believe, worth investing in.


