
Veeam eyes big data; pushes importance of applications and services

12 Oct 16

Veeam is turning its focus to big data – and how to protect it – as companies increasingly seek to harness it.

Clint Wyckoff, Veeam Software global technical evangelist for technical product marketing, says big data risks becoming a source of loss for any business that doesn’t consider how its infrastructure will handle the volume of data involved.

“It’s definitely an area Veeam is looking into from a data lakes perspective – being able to protect large amounts of object storage,” Wyckoff says.

Wyckoff, who spends much of his time talking with the IT professional community as a thought-leadership evangelist, says another key area of focus for Veeam recently has been the ITIL (IT Infrastructure Library) framework, and the importance of IT professionals – both in house and resellers – understanding that ‘the most important thing to any business is applications and services’.

“That’s what enables the business,” Wyckoff says.

“IT needs to enable them, and they need to understand that and be responsive to what the business’ needs are, whether that is deploying apps and services, spinning up VMs, or making sure there is enough capacity in the environment to withstand, for instance, a peak holiday season.

“That’s one of the first pillars of what cloud computing is about – the elasticity, the ability to expand out and contract back based on business demand.”

The vendor has seen high uptake of its Veeam Cloud Service Provider offerings across Australia and New Zealand.

Wyckoff advocates regular meetings with the business to ascertain what different departments have coming up that IT needs to know about, so it can ensure it is adequately prepared.

He says Veeam offers a holistic view not just of the physical environment but also of the virtualised environment, providing users with deep application monitoring.

“If you think about the purpose of [Microsoft] System Center – providing network operations with a green light if the system is good, break-fix type activities – we allow that relationship from the application level. Is it running SQL Server? What applications are running on here?

“And we can create that relationship down to the virtualised environment related to data stores and things of that nature,” he says.

“But the most important part of that is ensuring applications are available and putting SLAs around them. That’s one of the large focuses of my background because I was successful in doing that as an end user.”

When it comes to cloud, Wyckoff says companies need to look beyond the ‘low hanging fruit’ of backup and disaster recovery and consider how data backed up off site can be utilised in different ways.

“Backup and disaster recovery is easy to get off site because it is non-disruptive to any business processes – you’re just sending business data out to a secondary location.

“And there are a lot of different ways that can be utilised. I can use it for development environments, I can use it for test infrastructure, perhaps as a failover mechanism out to these secondary data centres, where I can test disaster recovery.”

That testing is something he cautions IT departments and resellers to be focused on.

“You need to be not only making sure you’re backing up and protecting things, but also asking how often you’re testing to make sure the backups and recovery work.”
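As a concrete illustration of what such a test could involve (a minimal, hypothetical sketch, not a description of Veeam’s own verification tooling), restored files can be compared against their originals by checksum; the directory paths below are placeholders:

import hashlib
from pathlib import Path

def sha256(path: Path) -> str:
    # Hash the file in 1 MB chunks so large backup files don't need to fit in memory.
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_restore(source_dir: Path, restored_dir: Path) -> list[str]:
    # Return relative paths of files whose restored copy is missing or differs from the original.
    mismatches = []
    for src in source_dir.rglob("*"):
        if src.is_file():
            rel = src.relative_to(source_dir)
            restored = restored_dir / rel
            if not restored.is_file() or sha256(src) != sha256(restored):
                mismatches.append(str(rel))
    return mismatches

# Example with hypothetical paths:
# print(verify_restore(Path("/data/production"), Path("/mnt/restore-test")))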

However, he says IT still ‘really struggles with that off site piece’.

“It’s really easy to back it up locally, I can do all that, but how do I get it offsite? Some people use tape, some people use colocation in another data centre where they have, for instance, a cage of equipment, others look to a third party to do that.”

“If I’m sending backups offsite, is there an infrastructure there for me to restore to so I don’t have to pull it back down – do I have hardware sitting there idle, ready to restore to? Or if I do need to pull data back down, what is the recovery time going to look like?”

“It’s just making sure you understand what the recovery time objective is. So if I’m sending off my most mission critical information to a site where there’s no hardware and I have to pull it back down, that could take days if it’s a large set of data. Is the business ok with that? If yes, then that meets my business’s requirements. If no, then maybe I need to figure something else out.”
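That recovery-time trade-off can be made concrete with rough arithmetic. The sketch below estimates how long pulling a backup set back down over a WAN link would take; the data size, link speed and efficiency figures are illustrative assumptions, not numbers from the article:

# Back-of-the-envelope restore-time estimate. The data size, link speed and
# link efficiency below are illustrative assumptions, not figures from the article.

def restore_time_hours(data_tb: float, bandwidth_mbps: float, efficiency: float = 0.7) -> float:
    # Convert terabytes to bits, apply an effective throughput, and return hours.
    data_bits = data_tb * 1e12 * 8
    effective_bps = bandwidth_mbps * 1e6 * efficiency
    return data_bits / effective_bps / 3600

# Example: 20 TB of mission-critical data pulled back over a 500 Mbps link
hours = restore_time_hours(20, 500)
print(f"Estimated restore time: {hours:.0f} hours (about {hours / 24:.1f} days)")

At those assumed figures the restore lands in the multi-day range, which is the kind of result Wyckoff suggests weighing against the business’s recovery time objective.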
