Perfect storm warning: Look out for dramatic storage changes
Thu, 20th Apr 2017

The year 2017 will see a perfect storm across the storage industry, in which three events will change the way companies purchase, deploy and manage their storage.

First, storage capacities will explode as 32TB, 60TB and 100TB SSDs are released that will change the way data centers are structured, and how storage is purchased and consumed.

Second, software-defined storage platforms will replace traditional SAN and NAS products, enabling incredible cost savings and substantial performance improvements.

Finally, hybrid (private/public) clouds will become the norm for cold, aged and stale data, while hot and warm data will reside on-premise for performance and control.

Exploding capacities

During 2017 the storage industry will undergo its biggest revolution – or rather evolution. Solid-state drives (SSDs) will change the storage paradigm fundamentally – forever.

At last year's Flash Memory Summit in Santa Clara, CA, Samsung announced plans to ship a 32TB SSD. Then Seagate flagged a 60TB SSD for release during 2017, while Toshiba disclosed plans to ship a whopping 100TB SSD later this year.

More interestingly, Seagate said its 60TB SSD will have the lowest price per gigabyte of any SSD. Backblaze estimates the price will be in the region of US$19,995 for the 60TB drive.

Just 17 of the Seagate 60TB 3.5in SSDs will give 1PB of high-performance, low-latency storage for about US$340,000. So how much was that quote for your 100TB All Flash Array, or even that Hyperconverged Infrastructure product? How many shelves or racks does it take to provide 1PB of usable data with an incumbent SAN or NAS provider? What savings will be made in floor/rack space, electricity, and cooling by eliminating spinning disks?
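As a rough back-of-the-envelope check, the short Python sketch below reproduces those figures from the Backblaze price estimate quoted above; the numbers are indicative only.

```python
import math

# Rough check of the figures above, using the Backblaze price estimate quoted
# for the Seagate 60TB SSD. All numbers are indicative only.
DRIVE_CAPACITY_TB = 60
DRIVE_PRICE_USD = 19_995
TARGET_TB = 1000  # roughly 1PB

drives_needed = math.ceil(TARGET_TB / DRIVE_CAPACITY_TB)        # 17 drives
total_cost = drives_needed * DRIVE_PRICE_USD                    # ~US$340,000
cost_per_tb = total_cost / (drives_needed * DRIVE_CAPACITY_TB)  # ~US$333/TB

print(f"{drives_needed} drives, ~US${total_cost:,}, ~US${cost_per_tb:,.0f}/TB")
```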

Will companies need multiple petabytes of SSD? It depends on the amount and types of data each manages. As a rule of thumb, 2 to 10 per cent of data is hot, and should reside in RAM or SSD as close to the bus as possible to minimise latency. Some 10 to 20 per cent of data is warm, and should ideally be stored on SSDs or on an organisation's fastest spinning disks, while typically 50 to 80 per cent of data is cold. It will probably never be accessed again, and can reside on inexpensive high capacity drives, or in a private or public cloud. All data must be searchable and retrievable regardless of the tier it is stored on.
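The small sketch below simply applies those rule-of-thumb percentages to a total dataset size; the 5 and 15 per cent figures are assumed mid-range values, and any real environment will differ.

```python
# Illustrative only: applies the rule-of-thumb percentages above to a total
# dataset size. The 5% and 15% figures are assumed mid-range values.
def tier_estimate(total_tb, hot_pct=0.05, warm_pct=0.15):
    """Split capacity into hot / warm / cold tiers."""
    hot = total_tb * hot_pct        # RAM or bus-attached SSD
    warm = total_tb * warm_pct      # SSD or fastest spinning disk
    cold = total_tb - hot - warm    # cheap high-capacity disk or cloud
    return {"hot_tb": hot, "warm_tb": warm, "cold_tb": cold}

print(tier_estimate(1000))  # 1PB -> {'hot_tb': 50.0, 'warm_tb': 150.0, 'cold_tb': 800.0}
```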

It may be more prudent to deploy one or more SSDs of varying performance and capacities on each host. Increase the RAM for ultra-hot data and deploy an Objective-Defined Storage solution that can tier data across any storage hardware, including RAM, SSD, SAN, NAS, DAS, JBOD, and cloud, regardless of the vendor, make or model. Excess performance on one host can be shared and utilised by applications and VMs on other hosts.

The introduction of supersized SSDs will make it feasible for an Objective-Defined Storage platform to eliminate the use of traditional spinning disks on main production systems, as hot and warm data can reside on RAM and SSD. Cold data can reside in private or public cloud, which will contain low cost, high capacity spinning disks.

As larger-capacity SSDs become available, latency will become more important than IOPS for most applications. We will see SSDs attached directly to the bus, with high-performance 40Gbps, 100Gbps and faster switches moving data across the backbone network at lightning speed to minimise latency.

What all this means is that the days of expensive SAN and NAS products are quickly coming to an end. Performance will be as cost effective and simple as adding more RAM and commodity SSD to a host. Capacity will be provided via public/private cloud offerings or high capacity, low cost 10TB drives. Storage costs will plummet as businesses move to the new enterprise with Objective-Defined Storage solutions. Storage silos and vendor lock-in will be eliminated completely. Commodity SSD and inexpensive high capacity disks will deliver performance and unlimited scalability for all sizes of companies.

Software to replace hardware

SAN and NAS products are old technology, developed over 20 years ago for physical servers. Their vendors use exactly the same disks as everyone else, then wrap add-ons around their hardware to deliver competitive advantage. Typically, the add-ons include replication, deduplication, snapshots, cloning, encryption and compression products. For this, SAN and NAS vendors charge a substantial premium.

Yet since these products are locked into their hardware alone, they cannot deliver the scalability, performance or ease of use required by the new enterprise. Virtualisation, agile computing, and Dev/Ops require storage that is flexible, affordable and easy to scale while delivering exceptional performance.

End user demands are forcing the storage hardware industry to transition to Software-Defined Storage and distributed storage solutions. In order to drive true benefits, including scalability, performance, affordability, reliability, and ease of use, it is critical that data is fully de-coupled from storage hardware. Moving forward, data must be independent of storage hardware, and able to move freely and ubiquitously across all storage types, regardless of hardware, vendor, make or model. Hardware vendor lock-in and storage silos must be eliminated.

Objective-Defined Storage solutions work at the application level. Data resides on storage. Data drives applications. Applications drive business. Logically, applications will control data placement.

SAN and NAS products will be rapidly replaced by storage software platforms like Objective-Defined Storage due to business demands for more cost effective, higher performance, and easier to manage storage products.

In the new enterprise, the storage software platform will include intelligence to drive and control data. Applications will control the data as the platform moves data across available storage resources to achieve objectives set at the application level.
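As a purely hypothetical illustration of objective-driven placement (not ioFabric's actual API), per-application objectives and the placement decision might look something like this:

```python
from dataclasses import dataclass

# Hypothetical illustration of objective-driven placement; the objective fields
# and tier latencies below are assumptions, not any vendor's actual API.
@dataclass
class StorageObjective:
    app: str
    max_latency_ms: float    # performance objective
    min_copies: int          # protection objective
    capacity_tb: float       # capacity objective

def place(objective: StorageObjective, tiers: dict) -> list:
    """Pick tiers that meet the latency objective, replicated for protection."""
    eligible = [name for name, latency_ms in tiers.items()
                if latency_ms <= objective.max_latency_ms]
    return eligible[:objective.min_copies]

tiers = {"ram": 0.001, "local_ssd": 0.1, "san": 2.0, "cloud": 50.0}
oltp = StorageObjective(app="oltp-db", max_latency_ms=1.0, min_copies=2, capacity_tb=5)
print(place(oltp, tiers))  # ['ram', 'local_ssd']
```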

MarketsandMarkets estimates the Software-Defined Storage market will grow from US$4.72 billion in 2016 to a massive US$22.56 billion by 2021. It also forecasts the Hyperconverged Infrastructure market will grow from US$0.79 billion to US$12.6 billion by 2022.

Intelligence that needs to be included in the Objective-Defined Storage platform includes key capabilities such as self-healing, automatic data tiering across any storage hardware, snapshot and cloning technology, deduplication, encryption, compression, and a new uber-powerful replication technology called Live Instances.

Live Instances are exactly that – a live instance of company data. A business should be able to have between two and eight live instances, stored in multiple locations for high availability and remote resilience. An absolutely critical application may have three live instances locally, two in a remote data center and one each in AWS, Azure, and Google. All live instances are updated in unison and able to serve up data if needed. In addition to incredible resilience, this allows multi-pathing to an application to deliver extra performance. To deliver true affordability, all these capabilities should be included in the platform at no extra charge.
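To make the concept concrete, here is a hypothetical sketch of the Live Instances idea: every write fans out to all configured instances, and any instance can serve reads. It illustrates the behaviour described above, not the vendor's implementation.

```python
# Hypothetical sketch of the Live Instances idea: every write is applied to all
# configured instances in unison, and any instance can serve reads. This is an
# illustration of the concept only, not the vendor's implementation.
class LiveInstanceSet:
    def __init__(self, locations):
        if not 2 <= len(locations) <= 8:
            raise ValueError("expect between two and eight live instances")
        self.instances = {loc: {} for loc in locations}  # location -> key/value copy

    def write(self, key, value):
        # All instances are updated together, so each holds a full, current copy.
        for store in self.instances.values():
            store[key] = value

    def read(self, key):
        # Any instance can serve the read (multi-pathing, failover).
        for store in self.instances.values():
            if key in store:
                return store[key]
        raise KeyError(key)

replicas = LiveInstanceSet(["local-1", "local-2", "local-3", "remote-dc",
                            "aws", "azure", "google"])
replicas.write("invoice-42", "...")
print(replicas.read("invoice-42"))
```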

The result of an Objective-Defined Storage platform will be a storage environment that is far easier to manage, where all current and future storage products can be managed from one dashboard. Data will move freely across all storage to deliver the performance, capacity, and protection objectives of the application at that specific point in time. Protection will improve by having true Live Instances of data in multiple locations. Costs and complexity will fall dramatically, while the performance and reliability of the storage infrastructure improve, as storage silos and vendor lock-in are eliminated.

Hybrid cloud

Data storage will see rapid growth of the cloud, specifically the Hybrid Cloud. More and more new companies are being 'born in the cloud' with minimal on-premise infrastructure. For those starting from scratch, this is the logical decision. However, most companies have a substantial investment in local infrastructure.

A Hybrid Cloud offers a fast and simple use case for deploying an Objective-Defined Storage platform. Let me define a Hybrid Cloud as hot and warm data stored locally on-premise, and cold, aged and stale data stored in a private or public cloud.

The latest Objective-Defined Storage solutions deliver as standard the ability to migrate cold, aged and stale data to a preferred cloud infrastructure provider as part of built-in automated data tiering capabilities. This powerful capability typically frees up between 50 and 80 per cent of expensive SAN and NAS capacity by moving cold data off the array into the cloud.
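A minimal sketch of how age-based cold-data identification might work is shown below; the 180-day threshold and the /mnt/primary path are illustrative assumptions, not settings from any particular product. A tiering engine would then migrate the candidates it finds to the chosen cloud provider.

```python
import time
from pathlib import Path

# Illustrative sketch of age-based cold-data identification. The 180-day
# threshold and the /mnt/primary path are assumptions for this example only.
COLD_AFTER_DAYS = 180

def cold_candidates(root, cold_after_days=COLD_AFTER_DAYS):
    """Yield files whose last access time is older than the cold threshold."""
    cutoff = time.time() - cold_after_days * 86400
    for path in Path(root).rglob("*"):
        if path.is_file() and path.stat().st_atime < cutoff:
            yield path

candidates = list(cold_candidates("/mnt/primary"))
freed_tb = sum(p.stat().st_size for p in candidates) / 1e12
print(f"{len(candidates)} cold files (~{freed_tb:.2f}TB) could move to the cloud tier")
```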

Build vs buy a hybrid cloud

Some huge networks, including Google, Facebook, and Amazon, do not buy traditional SAN and NAS products. They have built their own highly redundant, highly available, highly affordable storage using commodity products. Objective-Defined Storage platforms deliver similar capabilities out of the box – fast, easy, and cost effective storage platforms for all sizes of business, from massive cloud-scale environments down to small or medium businesses.

Should a business want to change its cloud provider, it can simply set up a new Live Instance destination, and let data migrate from its previous cloud to the new cloud. Once the dashboard goes green, all data has been migrated, so IT can turn off the previous cloud provider as the business is now live with the new provider. Simple, easy, and affordable.

The Perfect Storm looms

Organisations embracing Objective-Defined Storage platforms will dramatically cut storage costs, improve performance, and receive substantially improved reliability. The arrival of 32TB, 60TB and 100TB SSDs will enable companies to have hot and warm data in RAM and SSD to deliver incredible performance. High capacity 10TB spinning disks will enable exceptionally cost effective, high capacity storage for cold data. Objective-Defined Storage platforms will deliver a single dashboard and complete data tiering, protection, and capacity management of your data with minimal manual intervention.

For management looking to transform storage, and thus their business, it is time to look at, pilot, and invest in an Objective-Defined Storage platform that delivers the always-on, agile, and affordable storage platform of tomorrow, today.

By Greg Wyman, ANZ Vice President, ioFabric