As a reseller, it’s often incumbent on you to help your clients achieve their business goals by guiding them toward informed buying decisions. In a competitive market, though, that’s not enough. You need to take the next step as a strategic consultant and, in doing so, advise your clients on the best outcomes, even if that means straying from the product portfolios you sell.
Storage is one such area that falls squarely into the strategic category. The idea of centralising storage and moving it between different data centres for business continuity purposes would have been practically impossible not too long ago; today it’s a necessity. The spate of natural disasters New Zealand has endured recently has only heightened this, to the point where business continuity is becoming a sub-category of product in itself.
Practically speaking, an effective disaster recovery strategy requires that all data be stored redundantly at multiple physical locations. In the event of a disaster that destroys data stored at one location, there will always be a second, redundant copy of the lost data at a different physical location. Such a strategy ensures that a single cataclysmic event will never destroy all of the data, and the business can recover fairly quickly – often in real time.
Gone are the days of trucking redundant tapes from one location to another; not only is this approach antiquated, but it’s also more costly and time consuming than available alternatives. Businesses in general have outgrown the ‘nightly backup’ routine, and the risk of losing tapes in transit is almost as unsavoury as having to wait for hours to recover lost data from offsite tapes.
So with everything going digital, backups included, the clear path ahead is digital storage, both onsite (local) and offsite (via the WAN). But – and there’s always a but – large volumes of data inevitably put major strain on a WAN, and the limited bandwidth available to most organisations is constrained by the high cost of fat WAN pipes. Overloading the WAN results in poor network performance, not only for data recovery but also for the general applications that share the network.
The advent of cloud computing complicates matters further, and the efficient use of network capacity becomes even more important because cloud services rely on the Internet, meaning they are even more subject to congestion and latency than applications on the corporate WAN.
The solution to an overcrowded WAN in general – and latency issues in particular – is WAN optimisation.
This takes many shapes and forms in different products, hardware and software, but the most effective of these are so-called WAN optimisation appliances. These devices plug in at each end of a WAN link and intelligently manage the information crossing it, avoiding duplication and reducing the amount of data applications need to send and receive. The end result is reduced reliance on WAN capacity, faster response times from applications and an efficient platform for digital backup and recovery.
Moreover, in highly virtualised and cloud computing environments, WAN optimisation can also be delivered as a software-only virtual appliance that can be loaded into a virtual machine like any other application.
Delving into the mechanics of WAN optimisation can be somewhat confusing given the sheer number of different protocols and file systems in play. That detail may not make the buying decision any easier for your clients, but at least some knowledge of how it works goes a long way towards explaining the ‘magic’ behind it.
The root of the problem lies in the network file systems that were specifically developed for local area networks (LANs).
These systems were designed to transmit files around buildings or across campuses, not around the world, and so are often unnecessarily ‘chatty’, relying on rapid-fire conversations between client and server systems. That’s fine when each exchange takes milliseconds on a LAN, but it becomes a problem when those exchanges are stretched over a long-distance connection, where each one introduces a significant delay.
Another oft-misunderstood problem is latency. How often have you clicked on a website only to watch a spinning wheel while your computer waits for the information to be found and delivered? That delay is latency, and it is inherent in any long-distance transmission, regardless of how much bandwidth you have on your WAN.
The simplest solution to both of these bottlenecks to efficient digital storage is to deploy an optimisation appliance at either end of the WAN connection, which transparently streamlines the file transfer conversations and minimises the effect of network delays. The best of these devices can eliminate up to 98% of network round trips.
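The scale of the problem is easy to demonstrate with a back-of-envelope calculation. The sketch below uses entirely hypothetical numbers (link speed, round-trip time and round-trip counts are illustrative assumptions, not figures from any particular product), but it shows why chatty protocols, not bandwidth, dominate transfer times over long distances:

```python
# Back-of-envelope model (hypothetical numbers): for a chatty file-transfer
# protocol, total time is dominated by round trips x latency, not bandwidth.

def transfer_time_s(file_mb, round_trips, rtt_ms, bandwidth_mbps):
    """Rough model: time on the wire plus one full RTT per protocol exchange."""
    serialisation = (file_mb * 8) / bandwidth_mbps   # seconds sending the bits
    chatter = round_trips * (rtt_ms / 1000.0)        # seconds spent waiting
    return serialisation + chatter

# A 345MB file over a 100Mbps trans-global link with 300ms round-trip time:
naive = transfer_time_s(345, round_trips=5000, rtt_ms=300, bandwidth_mbps=100)
optimised = transfer_time_s(345, round_trips=100, rtt_ms=300, bandwidth_mbps=100)

print(f"chatty protocol: {naive:.0f} s")    # ~25 minutes, mostly waiting
print(f"98% fewer trips: {optimised:.0f} s")  # ~1 minute, now bandwidth-bound
```

Note that adding bandwidth to the naive case barely helps: the sending time is under 30 seconds, while the waiting time is 25 minutes. Cutting round trips is what changes the outcome.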
Think of it this way – with a network appliance installed at your local office, and one at your office in Europe, the information common to your organisation is constantly shared and stored locally between the appliances. So, when it comes time for the chief executive to download the latest market forecast from Europe – a 345MB PDF file – the file already resides at your local office, making the download equivalent to a LAN call.
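The deduplication behind that scenario can be sketched in a few lines. This is a deliberate simplification, not any vendor’s actual implementation (real appliances use far more sophisticated chunking and compression), but it shows the core idea: each appliance caches data chunks by their hash, so only chunks it has never seen need to cross the WAN.

```python
# Minimal sketch of WAN deduplication (hypothetical simplification): chunks
# are cached by hash, so repeated data never crosses the WAN a second time.

import hashlib

class WanOptimiser:
    def __init__(self):
        self.chunk_cache = {}     # chunk hash -> chunk bytes
        self.bytes_over_wan = 0   # running total of WAN traffic

    def send(self, data, chunk_size=4096):
        """Transfer data, counting only cache misses as WAN traffic."""
        out = []
        for i in range(0, len(data), chunk_size):
            chunk = data[i:i + chunk_size]
            key = hashlib.sha256(chunk).hexdigest()
            if key not in self.chunk_cache:   # unseen chunk: must cross the WAN
                self.chunk_cache[key] = chunk
                self.bytes_over_wan += len(chunk)
            out.append(self.chunk_cache[key])  # seen chunk: served from cache
        return b"".join(out)

link = WanOptimiser()
report = b"quarterly forecast " * 10_000   # ~190KB of repetitive data
link.send(report)
first_pass = link.bytes_over_wan
link.send(report)                          # second transfer: all cache hits
print(first_pass, link.bytes_over_wan - first_pass)  # second pass adds 0 bytes
```

Even the first transfer crosses the WAN with less than the full file, because repeated chunks within it are deduplicated; the second transfer of the same file adds no WAN traffic at all, which is exactly the chief executive’s ‘LAN-speed’ download.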
From a product perspective, it’s important you source a solution, or solutions, that best complement your, and your clients’, existing and future requirements. Look for products that support a wide range of network file system and replication protocols, including those associated with specific network storage products. For example, users of EMC’s Symmetrix V-MAX and DMX storage systems will want support for SRDF/A, EMC’s protocol for asynchronous file replication, often used for replication between data centres or to disaster recovery facilities.
Product selection is only the first step. By clearly understanding the need for WAN optimisation, and being able to articulate it in the context of New Zealand’s recent history, you put yourself in the seat of trusted advisor to your clients. This is far more valuable to your business than even the best of WAN optimisation products.