With the arrival of virtualisation in the data centre, owners of enterprise infrastructure have valuable new opportunities to consider, evaluate and deploy solutions that virtualise components of their environments.
Virtualisation promises greater agility, greater efficiency and lower operational cost. All of this may be true, but with it comes a wave of complexity that many IT buyers do not expect or foresee.
The enterprise buyer is under pressure. The CFO has read articles claiming significant fiscal advantages in a virtualised world, the MD is looking for greater agility and flexibility from the IT team, and the IT team itself is keen to get the word ‘virtual’ on their CVs.
However, hidden complexity aside, virtualisation does offer compelling advantages, which is why the market for cloud services is growing so quickly in a time of global recession. It is also why so many solutions are available to virtualise servers and applications, and why so many resellers carry VMware, Citrix, Microsoft or similar hypervisor platforms.
But are we really empowering the enterprise buyer of virtualisation to be successful? Are there gaps in the management and operational world that will cause challenges, confusion, outages and sleepless nights?
Go back six years or so, before virtualisation took hold in the data centre, and consider a simple and often-repeated diagnostic challenge. A user calls complaining that a web-based application the IT team deployed is not working. Tracing the issue is a logical and physical series of steps from the user’s desktop through to the web server, on to the application server and then to the database server. The servers are labelled in the data centre, their IP addresses are available in DNS, and the user’s desktop is still at their desk, so diagnosis, whether with a network management system or with hands and feet, is a logical and physical process.
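That tier-by-tier walk can be sketched in a few lines of code. This is an illustration only: the hostnames and the health-check stub below are hypothetical stand-ins for whatever probe an IT team would actually use (a ping, a TCP connect, a query to the network management system).

```python
# Illustrative sketch: walk a fixed service chain tier by tier and report
# the first tier whose health check fails. Hostnames are hypothetical.

SERVICE_CHAIN = [
    ("desktop", "user-pc-042.corp.example"),
    ("web", "web01.dc.example"),
    ("app", "app01.dc.example"),
    ("db", "db01.dc.example"),
]

def first_failing_tier(chain, is_healthy):
    """Return (tier, host) of the first unhealthy hop, or None if all pass."""
    for tier, host in chain:
        if not is_healthy(host):
            return tier, host
    return None

# Example with a stubbed probe: pretend the application server is down.
result = first_failing_tier(SERVICE_CHAIN, lambda h: h != "app01.dc.example")
print(result)  # → ('app', 'app01.dc.example')
```

The point of the pre-virtualisation world is that each name in the chain mapped to one physical, labelled box, so the loop above mirrors exactly what an engineer did by hand.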
In today’s virtualised world, it is a very different story. The user’s desktop is running on any one of three back-end virtual desktop infrastructure servers, the web server could be on one of 20 servers, the application server on one of two, and the database server on one of those big end-of-row servers that run core applications for the whole company. So where’s the problem? What is causing the issue?
Although the desktop and applications may be virtualised, the information flow and traffic between all of the components still has to traverse a physical network. This means the power and capabilities of the physical-world diagnostic, monitoring and management technologies can still be applied to understand the virtual world. Although the application may be ‘mobile’ around the data centre as the virtualisation infrastructure adjusts to demand and physical server responsiveness, the traffic flow from the application remains visible on the physical network.
The opportunity is to leverage visibility solutions that ensure the monitoring and diagnostic tools can see the traffic flows across the network. This opportunity has given rise to a new market: the network traffic visibility market. It provides enterprises, governments and telecommunications providers in New Zealand and globally with the ability to use physical and static monitoring and management tools to diagnose and manage infrastructure in both the physical and virtual worlds.
Solutions from vendors in this market provide an architecture whose tentacles reach across the whole network, then intelligently filter traffic and communication flows to deliver the valuable and empowering visibility so essential to maintaining and supporting both virtual and physical worlds.
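To make the filtering idea concrete, here is a minimal sketch, not any vendor’s API, of the kind of rule such a visibility layer applies: of all the flows seen on the physical network, only those touching the service ports of the application under diagnosis are forwarded to the monitoring tool. The addresses, ports and field names are hypothetical.

```python
# Illustrative sketch, not a vendor API: filter observed flow records so a
# monitoring tool receives only traffic for the application being diagnosed.
# All field names, addresses and ports are hypothetical.

flows = [
    {"src": "10.0.1.15", "dst": "10.0.2.20", "dport": 443},   # user -> web
    {"src": "10.0.2.20", "dst": "10.0.3.30", "dport": 8080},  # web -> app
    {"src": "10.0.3.30", "dst": "10.0.4.40", "dport": 5432},  # app -> db
    {"src": "10.0.9.99", "dst": "10.0.9.98", "dport": 22},    # unrelated SSH
]

MONITORED_PORTS = {443, 8080, 5432}  # the tiers of the app under diagnosis

def to_monitoring_tool(records, ports):
    """Forward only the flows on the monitored service ports."""
    return [f for f in records if f["dport"] in ports]

filtered = to_monitoring_tool(flows, MONITORED_PORTS)
print(len(filtered))  # → 3 (the SSH flow is dropped)
```

However mobile the virtual machines are, a rule like this keeps the monitoring tool’s view stable, because it selects on the traffic itself rather than on which physical host happens to be running the workload.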
The answers do exist to support enterprise owners as they move into the virtualised, cloud world. They simply need to know whom to ask.