Veeam eyes big data; pushes importance of applications and services
Wed, 12th Oct 2016

Veeam is turning its focus to big data – and how to protect it – as companies increasingly seek to harness it.

Clint Wyckoff, Veeam Software global technical evangelist for technical product marketing, says big data risks becoming a significant source of loss for any business that doesn't consider whether its infrastructure can handle the volume of data involved.

“It's definitely an area Veeam is looking into from a data lakes perspective – being able to protect large amounts of object storage,” Wyckoff says.

Wyckoff, who spends much of his time talking with the IT professional community, says another key area of focus for Veeam recently has been the ITIL (IT Infrastructure Library) framework, and the importance of IT professionals – both in-house and resellers – understanding that ‘the most important thing to any business is applications and services'.

“That's what enables the business,” Wyckoff says.

“IT needs to enable the business, and it needs to understand that and be responsive to what the business' needs are, whether that is deploying apps and services, spinning up VMs, or making sure there is enough capacity in the environment to withstand, for instance, a peak holiday season.

“That's one of the first pillars of what cloud computing is about – the elasticity, the ability to expand out and contract back based on business demand.”

The vendor has seen high uptake of its Veeam Cloud Service Provider offerings across Australia and New Zealand.

Wyckoff advocates regular meetings with the business to ascertain what different departments have coming up that IT needs to know about, so it can be adequately prepared.

He says Veeam offers a holistic view of not just the physical environment but also the virtualised environment, providing users with deep application monitoring.

“If you think about the purpose of [Microsoft] System Center – providing network operations with a green light if the system is good, break-fix type activities – we allow that relationship from the application level: is it running SQL Server, what applications are running on here?

“And we can create that relationship down to the virtualised environment related to data stores and things of that nature,” he says.

“But the most important part of that is ensuring applications are available and putting SLAs around them. That's been one of the big focuses of my background, because I was successful in doing that as an end user.”

When it comes to cloud, Wyckoff says companies need to look beyond the ‘low-hanging fruit' of backup and disaster recovery and consider how data backed up off site can be utilised in different ways.

“Backup and disaster recovery is easy to get off site because it is non-disruptive to any business processes – you're just sending business data out to a secondary location.

“And there are a lot of different ways that can be utilised. I can use it for development environments, I can use it for test infrastructure, perhaps as a failover mechanism out to these secondary data centres, where I can test disaster recovery.”

That testing is something he cautions IT departments and resellers to stay focused on.

“You need to be not only making sure you're backing up and protecting things, but also asking how often you're testing to make sure that the backups and recovery actually work.”
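Veeam's own tooling (SureBackup, for instance) automates this kind of verification for its backups. As a generic illustration of the underlying idea – not Veeam's mechanism – here is a minimal Python sketch that checks a restored file set against a SHA-256 manifest captured at backup time; the manifest format, paths, and command-line usage are all assumptions for the example.

```python
# Illustrative restore-verification sketch (not Veeam's tooling): after
# restoring a backup to a scratch location, compare each file's SHA-256
# hash against a manifest recorded when the backup was taken.
import hashlib
import sys
from pathlib import Path

def sha256(path: Path) -> str:
    """Hash a file in 1 MB chunks so large files don't exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_restore(restore_root: Path, manifest: Path) -> bool:
    """Assumed manifest line format: '<hex digest>  <relative path>'."""
    ok = True
    for line in manifest.read_text().splitlines():
        if not line.strip():
            continue
        digest, rel_path = line.split(None, 1)
        target = restore_root / rel_path.strip()
        if not target.exists() or sha256(target) != digest:
            print(f"FAILED: {rel_path.strip()}")
            ok = False
    return ok

if __name__ == "__main__":
    # e.g. python verify_restore.py /mnt/restore-test manifest.sha256
    sys.exit(0 if verify_restore(Path(sys.argv[1]), Path(sys.argv[2])) else 1)
```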

However, he says IT still ‘really struggles with that off-site piece'.

“It's really easy to back it up locally, I can do all that, but how do I get it off site? Some people use tape, some use colocation in another data centre where they have, for instance, a cage of equipment; others look to a third party to do that.

“If I'm sending backups off site, is there infrastructure there for me to restore to, so I don't have to pull it back down? Do I have hardware sitting there idle, ready to restore to? Or, if I do need to pull data back down, what is the recovery time going to look like?

“It's just making sure you understand what the recovery time objective is. So if I'm sending off my most mission-critical information to a site where there's no hardware and I have to pull it back down, that could take days if it's a large set of data. Is the business OK with that? If yes, then that meets my business's requirements. If no, then maybe I need to figure something else out.”
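To put rough numbers on that trade-off: the time to pull a backup set back over a network link is approximately its size divided by the link's effective throughput. A minimal sketch of that arithmetic follows, where the 20 TB data set, 100 Mbps link, and 80% efficiency figure are all assumed purely for illustration.

```python
# Rough recovery-time estimate for pulling backups back from an off-site
# location with no standby restore hardware. All figures below are
# illustrative assumptions, not from the article.

def transfer_time_days(data_tb: float, link_mbps: float,
                       efficiency: float = 0.8) -> float:
    """Estimate days to transfer data_tb terabytes over a link_mbps link.

    efficiency discounts protocol overhead and contention (assumed 80%).
    """
    bits = data_tb * 1e12 * 8                  # terabytes -> bits
    effective_bps = link_mbps * 1e6 * efficiency
    return bits / effective_bps / 86_400       # seconds -> days

# Example: 20 TB of mission-critical data over a 100 Mbps internet link.
print(f"{transfer_time_days(20, 100):.1f} days")   # ~23.1 days
```

At that scale the restore really can take weeks rather than hours, which is the point of asking the business up front whether that recovery time is acceptable.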