How is software-defined storage an opportunity for resellers?
Over recent years, the rate of change in the IT industry has been exceptional. One reason is that start-ups and vendors are able to take traditional hardware products and turn them into software solutions that are typically easier to deploy, easier to manage, less expensive and more functional than the old hardware versions.
Consider some of the sectors: servers are now mostly virtualised; networking is software-defined; WAN connectivity is moving to software-defined WANs; telephony is moving to VoIP; even the data center is now available as a software-defined data center. The one area still ripe for change is the storage market.
Traditional SAN and NAS products were developed over 20 years ago for physical servers. They simply cannot deliver the performance, scalability, reliability, or value that the new enterprise requires.
SAN and NAS products will rapidly become obsolete as distributed, software-based storage solutions come to dominate.
Apps drive the business: data feeds the apps
In the new enterprise, data placement will be determined by the needs of the application. Data needs to move freely and ubiquitously across all your storage, regardless of vendor, make or model. Storage hardware should not be the bottleneck to using corporate data. Storage should be invisible to your data.
Hot data is moved to RAM, warm data to flash, and cold data to high-capacity, inexpensive disks or to the cloud – again without regard to the vendor, make or model of the storage products. In most businesses, typically between 2 and 10 per cent of data is hot, 10 to 20 per cent is warm, and 50 to 80 per cent is cold. How much cold data today is sitting on a customer's expensive storage? Intelligent data-tiering technologies move data according to the application's objectives, using all your available storage. Data should not care what type of storage is being used.
For each application, simply set the performance, capacity and protection objectives required; everything else is fully automated.
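To make this concrete, here is a minimal sketch of what declaring per-application objectives might look like. All names and values are hypothetical illustrations, not ioFABRIC's actual API: the administrator states what the application needs, and the platform works out placement, tiering and replication from there.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class StorageObjectives:
    """Hypothetical per-application storage objectives (illustrative only)."""
    app_name: str
    min_iops: int          # performance: minimum sustained IOPS
    max_latency_ms: float  # performance: worst acceptable latency
    capacity_tb: float     # capacity: elastic, can grow over time
    live_copies: int       # protection: live instances of the data
    locations: List[str]   # protection: where those instances live

# State the objectives once; placement, tiering and replication across
# whatever hardware is available are then automated by the platform.
erp = StorageObjectives(
    app_name="erp-prod",
    min_iops=50_000,
    max_latency_ms=2.0,
    capacity_tb=20.0,
    live_copies=3,
    locations=["datacentre-a", "datacentre-b", "public-cloud"],
)
```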
The new enterprise is typically heavily [server] virtualised, runs critical applications, uses big data and IoT, and is investigating or deploying containerisation and DevOps. It needs storage that is fast, reliable, always on, scales up and out easily, and reduces cost and complexity.
Remember when customers first deployed virtual servers? Server virtualisation didn't care whether you had HPE, DELL, IBM or any white-box server. The same can now happen to storage – the storage platform should not care what storage hardware is running underneath. The application's use of data should drive the business, not the storage hardware.
What's in it for me?
Feedback from most customers is that they really don't like forklift upgrades every three to four years. They dislike storage silos, which create vendor lock-in, which in turn translates into increased cost, complexity and lack of choice.
Even the 'new' products coming to market today, including all-flash arrays and hyperconverged infrastructure, actually suffer from many of the same vendor lock-in issues.
Ask your all-flash array or hyperconverged infrastructure vendor what vendor and model of disk drives or SSDs they use. Then ask if you can buy that exact same disk or SSD yourself [online] and save thousands or even tens of thousands of dollars. The answer will almost certainly be the same: "No, of course not. You must buy your hardware from us." Even though the disk drive is exactly the same, they charge a substantial premium for it. That is vendor lock-in. In the new enterprise, storage hardware should be open, with the software storage platform delivering the intelligence that hardware vendors have historically bundled in.
The new enterprise expects, in fact is now demanding, solutions that deliver performance, value and reliability while remaining affordable and easy to use. That can only be delivered via an open, hardware-agnostic software platform.
Evolution of objective-defined storage
Curiously, software-defined storage is not new. Almost every storage vendor has had some degree of 'software-defined storage' embedded in its hardware for years. The problem is that they have historically locked it down to work only on their own hardware.
A true software-defined storage 2.0 solution completely decouples a customer's data from the hardware. Naturally, storage hardware vendors dislike the new, open software-defined solutions – customers no longer need to rely on, or buy, their hardware!
The latest software-defined storage 2.0 solutions are called objective-defined storage, which goes beyond traditional software-defined products. Objective-defined storage is a software-only, totally hardware-agnostic platform that delivers the performance of an all-flash array, the reliability and simplicity of hyperconverged infrastructure, the flexibility of software-defined storage, the mobility of data virtualisation, and the resilience that only multiple live instances of data in multiple locations can deliver – all at a fraction of the cost of buying even one of these products individually.
Revenue opportunities for resellers
As with all new industry segments, some resellers are fast to adopt, test, prove and build new solutions, while others are slow to change. The latter typically continue selling the large, expensive quarter-million, half-million or multi-million-dollar 'enterprise' storage frames, even though the storage vendors are squeezing partner margins lower and lower.
We are seeing that the partners who started introducing and selling server virtualisation, hyperconverged infrastructure and all-flash arrays early on are the ones looking to rapidly adopt objective-defined storage, as it allows them to differentiate themselves from other resellers and deliver a competitive advantage.
As a reseller, how many times have you presented a 'high-end' SAN or NAS, an all-flash array or a hyperconverged infrastructure product to a prospective customer, only to hear the customer say, 'Wow, this is amazing, exactly what we need – but wow, it is also terribly expensive…'? Imagine going back to all those customers and offering them an affordable software-only solution that delivers more, at a fraction of the cost originally quoted. In fact, you can also go back to the customers who did buy and offer them the ability to share that product across their entire organisation, driving even better utilisation and value with an objective-defined storage platform.
Objective-defined storage delivers a compelling value proposition to the customer and is often a powerful door-opener and a way to win valuable new customers. Additionally, we see partners generating incremental services revenue and, of course, selling additional performance or capacity to customers. Solving the storage piece of the puzzle will enable customers to move more rapidly to a fully software-defined data center. The storage world is in a state of change. Software will absolutely define the new enterprise and deliver the performance, scalability, reliability and value that customers are now demanding.
Why are customers interested?
Cost. Performance. Reliability. Scalability. Ease of use. The reasons are many and varied. The key customer benefits include:
- Interconnect all existing SAN and NAS products as a single storage pool, regardless of vendor, make or model (e.g. DELL-EMC, HPE, IBM, NetApp, HDS, Pure Storage, Tintri, Nimble). 100 per cent storage-agnostic, with no certified lists or validated builds.
- Share all existing RAM, SSD, SAN, NAS, DAS, JBOD and cloud storage across all applications without changes to the applications
- Provision storage in minutes at the application level by simply selecting the performance (IOPS or latency), capacity (elastic scale up and scale out) and protection (how many live instances of data are required, and in what locations). Everything else is fully automated and orchestrated: policies are created, deployed and adapted in real time without user involvement.
- Centralised management of data from one dashboard, regardless of how many and which storage products lie underneath
- Increase application performance dramatically by using commodity SSDs and adding more RAM – because excess performance can be shared across hosts and applications, the performance paradigm changes entirely
- Automatically, and with zero user involvement, tier data across all storage devices according to the performance, capacity and protection required (a minimal sketch of this tiering logic follows this list). Hot data is moved to RAM and SSD, warm data to SSD and your fastest spinning disks, and cold data to high-capacity, inexpensive disks or your preferred private or public cloud, including AWS, Azure or Google.
- Hybrid cloud out of the box: objective-defined storage enables customers to implement a hybrid cloud by moving their cold data – typically between 50 and 80 per cent of their data – to a private or public cloud such as AWS or Azure.
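To make the tiering idea concrete, here is a minimal sketch of classifying data by access recency and mapping it to a tier. The thresholds and tier names are hypothetical illustrations, not any vendor's actual implementation; a real platform would adapt them continuously to each application's stated objectives.

```python
import time
from typing import Optional

# Hypothetical thresholds (seconds since last access), for illustration only.
HOT_WINDOW = 60 * 60             # touched within the last hour -> RAM / SSD
WARM_WINDOW = 7 * 24 * 60 * 60   # touched within the last week -> SSD / fast disk

def choose_tier(last_access_ts: float, now: Optional[float] = None) -> str:
    """Classify a piece of data as hot, warm or cold by access recency."""
    now = time.time() if now is None else now
    age = now - last_access_ts
    if age <= HOT_WINDOW:
        return "ram_or_ssd"            # hot: serve from RAM, stage on SSD
    if age <= WARM_WINDOW:
        return "ssd_or_fast_disk"      # warm: SSD or the fastest spinning disks
    return "capacity_disk_or_cloud"    # cold: cheap disk or public cloud

# A block last touched 30 days ago is cold, so it can move to inexpensive
# high-capacity disk or a public cloud such as AWS or Azure.
print(choose_tier(time.time() - 30 * 24 * 60 * 60))  # capacity_disk_or_cloud
```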
Typically, the investment in an objective-defined storage solution should start at about A$50 per TB per month at end-user pricing and scale down rapidly to about A$20 per TB per month, making it exceptionally affordable – especially when a customer can repurpose all existing storage and never again needs to buy an expensive SAN, NAS or all-flash array product; they can simply add commodity storage or cloud as needed for capacity, and RAM and SSD for performance.
Objective-defined storage delivers an always on, agile and affordable solution for customers of all sizes.
By Greg Wyman, vice president Asia Pacific at ioFABRIC