Just because organisations have backup systems in place does not always mean their data is fully protected in the wake of a loss-causing event, says Dr Johannes at SANS Technology Institute.
It is no secret that backup systems are critical to protecting sensitive data from ransomware, theft, sabotage, and accidental loss. However, backups alone are not a complete solution to a complex challenge. Just because organisations have backup systems in place does not always mean their data is fully protected in the wake of a loss-causing event. And amid sharp rises in the volume and velocity of attacks, the consequences of poor data backups are too severe to overlook.
For example, IBM’s 2022 Cost of a Data Breach Report found:
- Globally, the average total cost of a data breach rose to a record-high $4.35 million, an increase of nearly 13% over the preceding two years.
- Identifying and containing a data breach took an average of 277 days, roughly nine months.
As attackers have grown more skilled and sophisticated, they are now leveraging hard-to-detect tactics, techniques, and procedures that capitalise on backup system vulnerabilities to either steal data or disrupt recovery operations.
Remote-access backup systems, for instance, often rely on password protection alone. Weak password hygiene or the absence of two-factor authentication can make these backup systems easy entry points for threat actors to use as attack vectors against the systems they protect.
When exploited, backup software vulnerabilities can also give attackers direct access to live system environments. Take the CVE-2022-36537 vulnerability that was publicised in early 2023, for example. Threat actors used it to access additional servers that were backed up on the same system, essentially pivoting backward into live environments to exfiltrate data and distribute malware.
That same scenario is playing out across organisations of all sizes and sectors, underscoring the importance of implementing safe and secure backup storage to maximise protection and agility.
Organisations should consider data assets at risk if they are not backed up in at least three different locations. It does not help to have three copies of data using the same cloud provider, even if the data is in different zones. Organisations should aim to maintain an on-premises copy, a cloud or online-remote copy, and an offline remote copy of critical data.
Known as the 3-2-1 rule, this approach combines a diverse mix of cloud, on-premises, and offline-remote copies to ensure data can be preserved even if an online backup is disrupted. Sophisticated ransomware will attempt to disrupt recovery from backups, and any online backup, remote or local, is at risk.
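As a minimal sketch, the rule described above can be expressed as a check over an inventory of backup copies. The class, labels, and location names here are illustrative assumptions, not tied to any particular backup product:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BackupCopy:
    name: str       # hypothetical label, e.g. "nightly-nas"
    location: str   # e.g. "on-premises", "cloud", "offline-remote"
    online: bool    # reachable over the network, and thus attackable

def satisfies_3_2_1(copies: list) -> bool:
    """Check the rule as described above: at least three copies,
    spread across at least two distinct location types, with at
    least one copy kept offline."""
    if len(copies) < 3:
        return False
    distinct_locations = {c.location for c in copies}
    has_offline_copy = any(not c.online for c in copies)
    return len(distinct_locations) >= 2 and has_offline_copy
```

Note that three copies held with the same cloud provider fail the check, because they share a single location type and none of them is offline, matching the point above about same-provider copies.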
Among all forms of backup systems, cloud-based backups are often the most vulnerable, and controls used to monitor access to on-premises backups do not always translate one-to-one to cloud-based systems. Organisations should therefore also maintain an on-premises backup that can drive rapid restoration at scale, especially where there is a high volume of critical data to recover.
When designing a cloud-based solution, organisations need to consider how access is controlled, how requests to retrieve or store data are authenticated, and how the backup lifecycle, from creation through retrieval to eventual deletion, is managed.
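One way to make that deny-by-default intent concrete is a small authorisation check over lifecycle actions. The principal names and actions below are hypothetical, not a real cloud provider's API; the point is that each lifecycle stage is granted separately:

```python
# Hypothetical allow-list: which principal may perform which
# backup lifecycle action. Note no single principal holds all three.
PERMISSIONS = {
    "backup-agent": {"store"},        # creates backups
    "restore-operator": {"retrieve"}, # reads backups during recovery
    "retention-job": {"delete"},      # expires backups per policy
}

def authorise(principal: str, action: str) -> bool:
    """Deny by default: a request is allowed only if the principal
    is explicitly granted that lifecycle action."""
    return action in PERMISSIONS.get(principal, set())
```

Splitting store, retrieve, and delete across separate identities means a compromised backup agent, for example, cannot erase or read existing backups.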
One of the main reasons to invest in on-premises backups is to speed up recovery. Cloud and offsite backups will almost always be slower. In some cases, cloud backup providers may have mechanisms to accelerate the recovery of large amounts of data by shipping hard drives instead of using slower internet connections.
Recovery speeds should be tested ahead of time to better estimate how long it will take to retrieve large amounts of data. This provides an accurate barometer of how long it will take to recover sensitive files in the wake of a breach when extended downtime durations can translate to millions in financial losses.
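A test restore yields a throughput figure that can be turned into a rough recovery-time estimate. This is a back-of-the-envelope sketch that ignores overheads such as decryption, verification, and connection variability:

```python
def estimated_restore_hours(data_gb: float, measured_mbps: float) -> float:
    """Estimate how long a restore of data_gb gigabytes will take,
    given a throughput (in megabits per second) measured during a
    test restore."""
    megabits = data_gb * 8 * 1000   # GB -> gigabits -> megabits
    seconds = megabits / measured_mbps
    return seconds / 3600

# Example: restoring 1 TB over a link that sustained 1000 Mbps in
# testing takes a little over two hours, before any overheads.
```

Running the estimate against realistic data volumes ahead of time makes it clear when shipping physical drives, as mentioned above, will beat the network.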
Any data leaving the direct control of an organisation, whether it is physical backup files being shipped offsite or online backups migrating to the cloud, must always be encrypted before exiting the environment.
Encrypting backups adds an additional layer of security by converting sensitive information into an unreadable format: even if attackers intercept data in transit, they cannot read it without the decryption key. Beyond transit, data should also be encrypted while at rest at the secondary backup location.
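As one illustration, using the third-party `cryptography` package (an assumption for this sketch; any authenticated encryption scheme serves the same purpose), a backup payload can be encrypted before it leaves the environment and decrypted only where the key is held:

```python
from cryptography.fernet import Fernet

def encrypt_backup(plaintext: bytes, key: bytes) -> bytes:
    """Encrypt a backup payload before it exits the environment.
    Fernet provides authenticated symmetric encryption, so tampering
    with the payload in transit is also detected on decryption."""
    return Fernet(key).encrypt(plaintext)

def decrypt_backup(token: bytes, key: bytes) -> bytes:
    """Recover the plaintext at restore time, given the key."""
    return Fernet(key).decrypt(token)
```

The key itself must be stored separately from the backups; an attacker who obtains encrypted backup files alone gains nothing useful.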
In addition, organisations should allocate equal prioritisation to the three foundational components of effective data management:
- Actively protect both primary and secondary data backups from loss, theft, compromise, and corruption, with the ability to rapidly restore data after an incident.
- Create a well-defined security architecture that promotes the safe storage of data backups both on-premises and in the cloud.
- Ensure all backup systems and network users continuously follow access policies that comply with federal and industry regulations.
It is still important to understand that primary and secondary backup systems were not initially designed to defend against cybercrime, especially not against expert threat actors who treat encrypted malware, double extortion, and phishing campaigns as core competencies of their tactics, techniques, and procedures.
At their inception, backups were made to preserve data in cases of file corruption or accidental removals, not ransomware. However, as cyber threats targeting data assets have intensified, backups have emerged as a must-have tool within the enterprise data security arsenal. By implementing effective backup practices at scale, organisations can take proactive steps to strengthen their data security posture and safeguard sensitive files.