Big Dawg

Hardware – the big dawg, but for how much longer?

It’s been a slow and drawn-out process, but I think it’s realistic to say the days of doubting the capability of virtual machines and virtualisation are finally over. In the dawn of x86 virtualisation, sceptical IT departments were hesitant to accept that virtual machines were suitable (and indeed capable enough) to deliver services for “Live” and “Production” workloads. If push came to shove when virtualisation was introduced into a company, it was immediately ring-fenced for “Test” and “Development” environments, with doubt still looming like a cloud over the idea. As a veteran of server infrastructure I’ve seen this first-hand, both in my previous employer’s environments and, in recent years, in customer environments. Even today I’m witnessing organisations taking their first leap from physical to virtual. It’s a bold leap for some, especially where financial barriers need to be overcome and internal mindsets changed. The major barrier I’m hearing about at the moment seems to be naive ‘box-shifting value-add’ suppliers placing doubt in their customers’ minds about the capability of virtualising servers.

Today we know that server applications of all shapes and sizes can be virtualised. Applications that demand vast amounts of CPU and RAM are no longer a challenge, and as IT departments continue to deliver larger services and demand more hypervisor capability, this in turn fuels the battle for supremacy amongst the hypervisor vendors. Each year we see updated versions delivering greater capacity and scaling capability – primarily driven by customer demand. So where does hardware fit into this? Everything I’ve discussed so far is driven by software evolution. Software has evolved to meet application requirements, and hardware just provides the raw elements. Hardware vendors produce faster and, in most cases, smaller-footprint solutions, but in reality they’re flooding the market with dumb bits of tin. Marketing and promotion events would have you believe that new ranges of hardware have been designed to address the latest business problem – whatever that may be – and that without this new product your existing infrastructure is useless, dog-slow and legacy.

I’m not for one moment suggesting that we don’t need hardware; the need for physical tin and raw processing power will never go away. The point I want to make is that hardware shouldn’t be overly complex, attempting to solve problems that software is clearly very capable of responding to. You may immediately think of GPU cards and their purpose of delivering rich multimedia experiences, and I’d agree with you – yes, they have a place. But when I start to think about RAID controllers and disk layouts, I wonder why these are still core components of servers and SANs. “It’s for data distribution and protection,” you may say. Only because, years ago, hard disks were slow, so distributing data across spindles backfilled the I/O requirement as well as mitigating drive failures, which happened to be more common back then. If we ditched RAID controllers and let software distribute the data, what’s the difference? Software vendors are already making direct I/O calls to the hardware today, so surely it’s only a matter of time before this becomes the accepted norm?
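To make that concrete, here’s a minimal, purely illustrative sketch of RAID-1-style mirroring done in software: every block is written to two plain backing devices and read back from whichever copy is still healthy. The file paths and block size are assumptions for the example, not anyone’s real product – the point is simply that the “controller” logic is just a few lines of code.

```python
# Illustrative sketch only: software-level mirroring of block writes,
# the kind of data protection a RAID-1 controller does in firmware.
# The backing "disks" here are ordinary files; the paths and block size
# are assumptions for the example, not a real storage layout.

import os

BLOCK_SIZE = 4096                      # assumed block size
DISKS = ["disk0.img", "disk1.img"]     # stand-ins for two raw devices


def write_block(block_no: int, data: bytes) -> None:
    """Mirror one block to every backing device (software RAID-1)."""
    assert len(data) == BLOCK_SIZE
    for path in DISKS:
        mode = "r+b" if os.path.exists(path) else "w+b"
        with open(path, mode) as disk:
            disk.seek(block_no * BLOCK_SIZE)
            disk.write(data)
            disk.flush()
            os.fsync(disk.fileno())    # make sure it really hit the disk


def read_block(block_no: int) -> bytes:
    """Read from the first healthy copy; fall back if one 'disk' fails."""
    for path in DISKS:
        try:
            with open(path, "rb") as disk:
                disk.seek(block_no * BLOCK_SIZE)
                return disk.read(BLOCK_SIZE)
        except OSError:
            continue                   # that copy is gone, try the mirror
    raise IOError("all mirrors failed")


if __name__ == "__main__":
    write_block(0, b"x" * BLOCK_SIZE)
    os.remove(DISKS[0])                # simulate losing one drive
    assert read_block(0) == b"x" * BLOCK_SIZE
    print("data survived the 'drive failure' - protection handled in software")
```

Real software-defined storage stacks (Linux md, ZFS, vSAN and the like) do this with far more sophistication, but the principle is the same: data distribution and protection can live in software, with the hardware reduced to providing raw capacity.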

Hardware vendors need to accept that they exist to provide a platform for software to run on. Hardware will always need some intelligence to function – I’m certainly not advocating that it ships without any operating code of its own. My point is that providing ‘bespoke’ offerings with inherent architectural limits within a model range doesn’t support rapidly expanding data centres, and it undermines customer trust when buying into an architecture. Why not go back to grass roots, focus on providing the core product, and let the software deliver the functionality? The Tesla car is a marvellous example: a large piece of tin that relies solely on software to function, and the car gets new features whenever software updates are released and applied.

Tesla dashboard

Putting software in control offers a wealth of opportunity for product enhancements – enhancements that become available upon installation, with no need to wait for hardware upgrades, along with new features and functionality. It’s something we’re already familiar with on our smartphones: the device receives an operating system update delivering new applications and features. In the case of Apple’s iOS, the camera flash became a torch (flashlight, to the US readers) overnight after an OS update. If the software delivering services within the data centre is architected correctly, there’d also be no downtime.

Every day we hear vendors talking about the ‘Software Defined Data Centre’. It’s nothing new, and it’s as tiring as the use of the word ‘Cloud’ in social media and marketing, but there’s a common theme here too. Hardware is moving further down the conversation, and I believe we’re starting to care less and less about the bits of tin and more about what the “…software can do for my data centre”. There’s much work to do around the loose use of the SDDC abbreviation, as many vendors tout the phrase in their sales pitches to promote their brand while still having a dependency on specific hardware. But if there’s a bandwagon for a hardware vendor to ride in on, they’ll certainly give it a go.

Ultimately, the push to put software in front of hardware will be driven by new companies delivering products that aren’t hardware-bound and constrained. As more companies emerge to contribute to the SDDC, the hardware Big Dawg will be put back in its place by the quiet thinker that is the cat – and we all know how clever and scheming cats can be. :-)
