Why backing up your data could still leave you vulnerable

Posted by Joviam Administrator on 28 August 2017.

 

‘Backup’ is something of an umbrella term – backing up data properly requires a multi-layered approach. It’s no secret that backups play a critical role in business success, but reliance on a single method does not a backup strategy make.

Backup adoption rates have increased globally, including a doubling of cloud-based solutions since 2016 (from 16% to 33%). Despite this, over a third of 1,000 surveyed Kroll Ontrack customers reported data loss[1], regardless of whether their backups were cloud-based.

An oft-overlooked point is that many backup strategies rely on end-user diligence – alarming, considering 20% of users back up their data once a month or less, and 24% report never testing a restore. With manual intervention required in most recovery scenarios, response time and human error further influence how much data is ultimately saved.

These stats are also noteworthy because they highlight how much data is left exposed. Snapshot or incremental backups capture data at certain points in time, so any changes made between backups are missed. For example, weekly backups don’t safeguard against a server outage mid-way through the following week.
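
To make that exposure window concrete, the short sketch below (using made-up dates) works out how much data would be unrecoverable if a failure struck part-way through a weekly snapshot cycle.

```python
from datetime import datetime, timedelta

# Hypothetical weekly snapshot schedule and a failure mid-week,
# purely to illustrate the exposure window between snapshots.
snapshots = [datetime(2017, 8, 6) + timedelta(weeks=i) for i in range(4)]
failure = datetime(2017, 8, 25, 14, 30)  # outage part-way through a backup cycle

# The restore point is the latest snapshot taken before the failure.
restore_point = max(s for s in snapshots if s <= failure)
exposure = failure - restore_point

print(f"Restoring to {restore_point:%Y-%m-%d}; "
      f"up to {exposure} of changes are unrecoverable.")
```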

With cloud-based backup solutions, architecture plays a vital role. The Australian Taxation Office (ATO) lost 1 petabyte (1 million GB) of data in late 2016 after one of its newly acquired SANs collapsed[2,3]. A blocking, single-point-of-failure cloud architecture exposed major vulnerabilities and significantly reduced reliability. The total cost has yet to be determined, but data losses cost Australian businesses US$55bn in 2014 alone.[4]

 

So what’s the solution?

The gold standard for backup strategies is the 3-2-1 Rule: three copies of your data, on two different mediums, with one kept off-site. To achieve this, backups can be performed at three layers: the block level, the file system, and within applications.
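
As a simple illustration of the rule itself, the sketch below checks a hypothetical inventory of backup copies against the three conditions; the records and field names are invented for the example.

```python
# Hypothetical inventory of backup copies; the fields are invented for illustration.
copies = [
    {"location": "on-site",  "medium": "block-storage"},
    {"location": "on-site",  "medium": "tape"},
    {"location": "off-site", "medium": "object-storage"},
]

def satisfies_3_2_1(copies):
    """Three copies, on at least two different mediums, with at least one off-site."""
    enough_copies = len(copies) >= 3
    enough_mediums = len({c["medium"] for c in copies}) >= 2
    one_off_site = any(c["location"] == "off-site" for c in copies)
    return enough_copies and enough_mediums and one_off_site

print(satisfies_3_2_1(copies))  # True for the inventory above
```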

At the block level, synchronous data replication overcomes the end-user dependence and architectural limitations involved in common backup procedures. Data is written simultaneously to at least two nodes, creating a mirror copy of the data immediately.
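
Conceptually, a synchronous write only succeeds once every replica has acknowledged it, which is what closes the time gap that snapshot schedules leave open. The sketch below illustrates the idea with invented in-memory stand-ins for storage nodes; it is not how any particular storage layer implements this.

```python
class StorageNode:
    """Stand-in for a storage node; a real node would persist to disk."""
    def __init__(self, name):
        self.name = name
        self.blocks = {}

    def write(self, block_id, data):
        self.blocks[block_id] = data
        return True  # acknowledge once the block is durable

def synchronous_write(nodes, block_id, data):
    """Write to every replica and only report success when all acknowledge.

    The caller never sees a successful write that exists on fewer than
    len(nodes) copies, which removes the backup 'gap' in time.
    """
    acks = [node.write(block_id, data) for node in nodes]
    if not all(acks):
        raise IOError("replica failed to acknowledge; write is not durable")
    return True

primary, mirror = StorageNode("node-a"), StorageNode("node-b")
synchronous_write([primary, mirror], block_id=42, data=b"customer-record")
assert primary.blocks[42] == mirror.blocks[42]
```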

To achieve this without introducing a performance penalty, a high-speed, low-latency connection is mandatory, such as block storage replication over InfiniBand. Unfortunately, owing to prohibitive cost and implementation complexity, this technology has thus far been limited to supercomputers and enterprise environments. Most large public clouds work around this by employing Storage Area Networks over Ethernet, but that only introduces single points of failure and latency under load[5].

Another option is to architect synchronous replication into the block layer of a cloud solution. Joviam does this by leveraging InfiniBand’s higher throughput and lower latency, exporting blocks across the cluster to a distributed pool that can be allocated by any node with a minimum of n+1 redundancy. No user configuration is required – a critical point given the earlier discussion about minimising user intervention.
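
The distributed-pool idea can be pictured as placing every block on more than one node, so that losing any single node still leaves a complete copy. The sketch below is a deliberately simplified round-robin placement for illustration only, not Joviam’s actual allocator.

```python
from itertools import cycle

def place_blocks(block_ids, nodes, copies=2):
    """Assign each block to `copies` distinct nodes (n+1 redundancy when copies=2).

    A simplified round-robin placement, purely for illustration.
    """
    placement = {}
    ring = cycle(range(len(nodes)))
    for block in block_ids:
        start = next(ring)
        placement[block] = [nodes[(start + i) % len(nodes)] for i in range(copies)]
    return placement

nodes = ["node-a", "node-b", "node-c"]
placement = place_blocks(range(6), nodes, copies=2)

# Losing any single node still leaves at least one copy of every block.
for failed in nodes:
    assert all(any(n != failed for n in owners) for owners in placement.values())
```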

Above this, backups should be performed at the file system level. These can include additional intelligence about how often a file should be backed up, whether deltas (only the modified portions of a file) are sufficient, and how restores are handled. On clouds such as Joviam that support live disk attach and detach, file-level backups can be written to temporary disks, to a hot standby on another virtual server over private networking, or to warm standbys that are powered on and off by triggers to minimise cost.
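
At the file layer, that intelligence might look something like the sketch below, which walks a directory and decides between a full copy and a delta based on modification time and file size. The thresholds and policy are invented for the example.

```python
import os
import time

# Invented policy thresholds, purely for illustration.
BACKUP_INTERVAL = 24 * 3600          # back up files changed in the last day
DELTA_THRESHOLD = 50 * 1024 * 1024   # files larger than 50 MB get delta backups

def plan_backup(root):
    """Return (path, action) pairs describing what a file-level backup would do."""
    now = time.time()
    plan = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            stat = os.stat(path)
            if now - stat.st_mtime > BACKUP_INTERVAL:
                continue                      # unchanged since the last cycle
            action = "delta" if stat.st_size > DELTA_THRESHOLD else "full copy"
            plan.append((path, action))
    return plan

for path, action in plan_backup("/var/www"):
    print(f"{action:9s} {path}")
```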

Applications such as databases can also be configured to export their own data in real time, by shipping transactions across multiple installations or to a remote warm standby. If an issue is encountered, data can be restored back to the database. Refer to our earlier posts on High Availability for another approach to this.
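
A minimal way to picture transaction shipping: the primary records every change in a log that a standby replays, so the standby stays close behind and can serve a restore or take over. The toy classes below are an invented illustration, not any particular database’s replication API.

```python
class Database:
    """Toy key-value store that records every change as a transaction."""
    def __init__(self):
        self.data = {}
        self.log = []

    def execute(self, key, value):
        self.data[key] = value
        self.log.append((key, value))   # transaction to ship downstream

    def replay(self, transactions):
        for key, value in transactions:
            self.data[key] = value

primary, standby = Database(), Database()

primary.execute("invoice:1001", "paid")
primary.execute("invoice:1002", "pending")

# Ship the accumulated transactions to the warm standby in (near) real time.
standby.replay(primary.log)
assert standby.data == primary.data   # standby can now serve a restore or failover
```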

By leveraging a multi-layered approach to backups, a well-designed system can be architected to minimise and mitigate the effect of data loss, regardless of the level of the stack at which an issue occurs. For businesses relying on 24/7 operations, this is critical.

[1] https://www.krollontrack.co.uk/resources/press/details/world-backup-day-2017-survey-shows-users-lose-data-despite-backup/

[2] http://www.theregister.co.uk/2016/12/13/hpe_3par_takes_ato_offline/?mt=1481623788162

[3] http://lets-talk.ato.gov.au/ato-systems-update

[4] https://www.arnnet.com.au/article/561295/data-loss-downtime-costs-aussie-business-us55-billion-2014/

[5] https://www.nextplatform.com/2015/04/01/infiniband-too-quick-for-ethernet-to-kill-it/
