Use These Five Backup and Recovery Best Practices to Protect Against Ransomware
Reprinted from ITG’s January issue of Tech News
Analysts: Robert Rhame, Roberta J. Witty; June 8, 2016
Ransomware is on the rise, and its perpetrators are effectively evading countermeasures. I&O and business continuity management leaders should plan for the inevitable ransomware incident, whether limited or widespread.
- Incumbent antivirus prevention techniques cannot be relied upon to detect and stop all ransomware.
- A single infected client can encrypt every file share it has access to, potentially including cloud storage locations.
- Once files are encrypted, organizations have two choices: restore from a backup or pay up.
- Ransomware is generating huge revenue for criminals, and these attacks should be expected to intensify in both volume and sophistication.
- Ensure that your organization has a single dedicated crisis management team.
- Implement an enterprise endpoint backup product to protect user data on laptops and workstations.
- Build a list of storage locations that users can connect to that are inherently vulnerable, such as file shares.
- Evaluate the potential business impact of data being encrypted due to a ransomware attack, and adjust recovery point objectives (RPOs) to more frequently back up these computer systems.
- Align with the information security, IT disaster recovery and network teams to develop a unified incident response that focuses on resiliency, not only prevention.
Users are only a click away from a drive-by download of malware from a compromised web page, or [the] launch of a trojan attachment from a ransomware spam campaign. The rapid-release nature of the malware underground means that antivirus vendors are playing a game of catch-up. The ransomware authors only have to be successful in bypassing defenses once, and they change their tactics constantly in order to do so. Organizations must assume accidents will happen, and that their data will be held for ransom.
Ransomware is a form of malware where files are encrypted and then a bitcoin ransom is demanded in return for the decryption key. There are two types of attack mechanisms for ransomware:
- In the more common scenario, an end user is duped into clicking an attachment or visits the wrong web page resulting in his/her laptop or workstation and all connected file shares being encrypted.
- The less common scenario to date is a targeted approach where hackers get inside the organization and then use encryption of data as a tool to force payment.
So far, most ransomware authors prefer to cash out, so they immediately and prominently inform the victim that files have been encrypted. Some use threats or scare tactics, such as setting a deadline after which the data will be permanently lost, to create a sense of urgency and keep the victim off balance. Some ransomware may even try to avoid detection long enough for backup retention to expire before demanding a ransom.
Your first impulse might be to increase backup retention, but, on reflection, it is hard to imagine having to restore a backup that is older than 90 or 120 days. Instead of making these kinds of blanket changes, it is important for organizations to first understand what type of data storage is typically affected by a ransomware attack.
Typical Data Storage Affected
In most cases, the initial ransomware attack occurs on a user’s laptop or workstation. Therefore, locally stored data in files and folders, file shares, cloud storage via gateways, as well as any mapped network drives, is inherently vulnerable.
Data Affected Because of Replication
Enterprise file synchronization and sharing (EFSS) is not in and of itself vulnerable, since an agent handles communication with the on-site or in-the-cloud sync-and-share server; there is no mount point for the ransomware to traverse. However, the replication mechanism will, by design, replicate changes made locally, thereby propagating the encrypted files (and, possibly in the future, the malware itself) to the shared directories. EFSS typically offers versioning capabilities, but not bulk restore. A laptop restored using endpoint backup will replicate the last good versions as a new file change, but there may be scenarios where cleaning up the versions to a known clean state is desired.
Not Vulnerable Today
SharePoint or any web application where end users’ access is through an authenticated web browser session is not vulnerable to a ransomware attack yet. As the countermeasures evolve, ransomware attackers might begin including a remote access trojan (RAT) in the malware in order to manually remote control the infected host and overcome limitations of an automated attack. A similar tactic was used with banking trojans when countermeasures began to reduce effectiveness of the automated approach. This is a very manual process for the attackers, requires a connection to the infected host and does not scale.
Follow the five backup and recovery best practices documented in this research to ensure that you are as protected as possible from ransomware attacks.
Step 1: Form a Single Crisis Management Team
An effective response to the ransomware threat must be a holistic and multilevel one — reducing the likelihood of a successful attack to the bare minimum, while simultaneously ensuring the ability to recover from an unprevented attack. IT operations and IT disaster recovery (IT DR) must work with their counterparts in information security to develop an integrated response and recovery approach, including a framework for responding to all new threats and a continuously updated risk assessment of the IT infrastructure vulnerable to a ransomware attack.
Step 2: Implement Endpoint Backup
Without a backup, years of locally stored files and folders on a laptop/workstation would be lost; that is, unless the organization wants to pay to release them, fueling the ransomware economy. Even without ransomware, complications and costs from potential disclosure resulting from loss, theft and hard drive crashes can quickly help build a compelling case for deploying laptop and workstation backup. Therefore, implementing endpoint backup solutions will ensure you have a safe copy of your data that can be restored once faced with the threat.
Depending on the endpoint backup product’s capabilities, backup schedules can be configured to run at intervals of several times an hour, several times a day, or during idle laptop/workstation cycles. The decision must be made as to what timeframe is an acceptable loss for the organization based on recovery requirements.
Endpoint backup can provide two key functions:
- Laptop or workstation restore — after the ransomware infection has been remediated, all files up to the last backup can be restored.
- EFSS upstream replication — once the restore is completed, the administrator can reconnect the user to his/her sync-and-share application. The restored files will synchronize from the local EFSS folder to the user's directories, thereby replacing the encrypted files.
Endpoint backup solutions can be configured to back up mapped drives (such as home folders or file shares) to accelerate returning a single employee back to production, but they do not replace a centralized solution in the case of an overall storage failure or wider infection.
Justification for the investment in endpoint backup can be calculated using the following metrics: productivity loss per employee for all involved; salaries of each employee involved; time required to recreate content; and the estimated number of ransomware incidents, accidental deletions, hard drive crashes and laptop losses/thefts.
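As a rough illustration, the metrics above can be combined into a simple cost model. The function names and the example figures below are hypothetical, a sketch of the calculation rather than a definitive formula:

```python
# Illustrative cost model for justifying endpoint backup, built from the
# metrics listed above. All names and figures are assumptions.

def incident_cost(hours_lost, hourly_salary, recreate_hours):
    """Cost of one data-loss incident for one employee: lost productivity
    plus the time spent recreating content, valued at the same rate."""
    return (hours_lost + recreate_hours) * hourly_salary

def annual_exposure(incidents_per_year, hours_lost, hourly_salary, recreate_hours):
    """Expected annual cost across ransomware incidents, accidental
    deletions, hard drive crashes and laptop losses/thefts."""
    return incidents_per_year * incident_cost(hours_lost, hourly_salary, recreate_hours)

# Example: 12 incidents/year, 8 hours of downtime per incident,
# a $40/hour salary, and 16 hours to recreate lost content.
exposure = annual_exposure(12, 8, 40.0, 16)  # → 11520.0
```

Comparing a figure like this against the licensing and storage cost of an endpoint backup product gives a first-order business case.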
Refer to “How to Address Three Key Challenges When Considering Endpoint Backup” to learn more about this cost calculation algorithm.
Step 3: Identify Network Storage Locations and Servers Vulnerable to Ransomware Encryption
- Enumerate Obviously Vulnerable Storage Locations
The most important task is to revisit RPOs for potentially vulnerable storage locations. Following the laptop or workstation infection, the ransomware traverses all mount points configured in Windows Explorer in an attempt to encrypt everything it finds. A first assessment can be done by talking to the Active Directory and/or PC deployment group to find out what the standard Group Policy Mapped Drives are for each new laptop or workstation image. This task provides an inventory of servers for further investigation and audit for overly permissive inherited permissions.
- Don’t Forget the Not-So-Obvious Vulnerable Storage Locations
A single mapped drive could cause unexpected servers to be affected. It is common for database and application administrators to map drives with full system privileges at the file system level in order to perform installs, maintenance, upgrades or troubleshooting of the software/applications that they are working on. If an administrator has a drive mapped "persistently" (the box "reconnect at logon" is checked) and his/her workstation gets infected, then data on any mapped drive will also be encrypted. If cross-zone drive mapping is allowed, you must communicate to all privileged users that they should not use persistent mapping, and that they should disconnect these drives rather than leave them open for their entire session.
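One way to start this inventory is to parse the output of the Windows `net use` command, which lists a workstation's mapped drives. The helper below is a hypothetical sketch: it assumes the English-locale column layout, and the captured output would come from running `net use` on each audited machine.

```python
# Hypothetical helper for auditing mapped drives. It parses captured
# output of the Windows `net use` command; the row layout assumed here
# (status, drive letter, UNC path, provider) is the English-locale default.

def parse_mapped_drives(net_use_output):
    """Return (drive_letter, unc_path) pairs found in `net use` output."""
    drives = []
    for line in net_use_output.splitlines():
        parts = line.split()
        # A mapped-drive row looks like: OK  Z:  \\server\share  ...
        if len(parts) >= 3 and parts[1].endswith(":") and parts[2].startswith(r"\\"):
            drives.append((parts[1], parts[2]))
    return drives
```

Aggregating these pairs across workstations yields the list of servers to investigate and audit for overly permissive inherited permissions.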
Step 4: Develop Appropriate RPOs and Backup Cadences for Network Storage and Servers
The next step is to re-examine your organization's RPOs for appropriateness to the business function. File shares are likely backed up only nightly; if they are actively used as an ad hoc collaboration system, a loss could therefore hurt the organization more than expected because of the greater potential for losing new and modified data. There are two steps to this task:
- First, determine how much data loss the organization will accept. While never a comfortable exercise, the reality is that the less data loss you are willing to tolerate, the more resources a solution is likely to require.
- Second, set the RPOs for each server deemed to be at greater risk to ransomware, and according to organizational requirements based on a data loss time frame that is acceptable to the organization.
The primary goal is to leverage newer backup methodologies to achieve more frequent recovery points. This may mean acquiring new technology, or simply fully deploying capabilities of the storage and backup solutions already in place. The goal here is backing up more often.
If available, leverage fast-scan capabilities to back up only changed files or changed block tracking for storage arrays and/or virtual machines (VMs) in order to schedule more frequent backups. This will allow for more frequent backups while requiring fewer resources, thus offering greater protection.
It is advisable to implement less predictable backup times, with at least one RPO during the day, when new infections are most likely to occur. Rudimentary time-based encryption/decryption cycles have been observed in some ransomware attacks, most likely to mask the ransomware's presence for as long as possible.
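A less predictable schedule can be sketched in a few lines: spread the runs evenly across the day, then jitter each start time. The function and parameter names below are illustrative and not tied to any backup product.

```python
import random

def jittered_schedule(runs_per_day, jitter_minutes=45, seed=None):
    """Spread backup runs evenly over 24 hours, then shift each start by a
    random offset so run times are harder to predict. Names and defaults
    are illustrative assumptions, not product settings."""
    rng = random.Random(seed)
    interval = 24 * 60 // runs_per_day  # even spacing, in minutes
    times = []
    for i in range(runs_per_day):
        minute = (i * interval + rng.randint(-jitter_minutes, jitter_minutes)) % (24 * 60)
        times.append(f"{minute // 60:02d}:{minute % 60:02d}")
    return sorted(times)
```

Feeding a schedule like this into the backup product's scheduler (regenerated periodically) keeps at least one daytime RPO while denying the attacker a fixed window to work around.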
For selected workloads, tactically implement new technologies that can step backward to recovery points, such as continuous data protection (CDP), hyperconverged integrated systems (HCIS), hypervisor-based replication products, or DR replication that includes change journaling.
There have been a few reports that perpetrators are encrypting backed-up data before triggering the ransomware attack to encrypt production data. The result of this added step in the attack process could mean that the most current backups won’t be of value, and restore will have to be done from older or offline versions.
As an overall defense, Gartner’s best practice for backup is to have at least two copies of your backed-up data geographically dispersed to mitigate against a broad range of natural and man-made disasters. Ideally, at least one copy of the backed-up data is offline and off-site to reduce the impact of accidental or malicious destruction.
Step 5: Create Reporting Notifications for Change Volume Anomalies
For future ransomware attacks, there might not be a ransom demand immediately; therefore, it is imperative that the activity be noticed quickly. Combined with running select backups during the day, reporting on storage anomalies can help identify that an attack has occurred or is actively underway. Implementing such reports includes three tasks:
- Create a report in your enterprise backup application that will trigger an alert when a high number of changes occurring on servers results in a sudden and marked increase in storage.
- Create reports based on capacity thresholds for devices that use deduplication, such as backup target appliances and HCIS, since unexpected encryption will result in 100% change rate and a large increase in storage consumption.
- Examine the reporting capabilities available in your endpoint backup application and EFSS, and implement a storage anomaly report.
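The anomaly check behind such reports can be as simple as comparing the latest run's change volume against a recent baseline. The function name and the threshold below are assumptions to tune for your environment, not recommended values.

```python
def change_volume_alert(recent_change_bytes, latest_change_bytes, factor=3.0):
    """Flag a backup run whose changed-data volume exceeds `factor` times
    the average of recent runs. A sudden near-100% change rate on a file
    server is a strong hint of mass encryption. Illustrative sketch only."""
    if not recent_change_bytes:
        return False  # no baseline yet, nothing to compare against
    baseline = sum(recent_change_bytes) / len(recent_change_bytes)
    return latest_change_bytes > factor * baseline
```

Wiring a check like this to the backup application's post-job reporting, and alerting on it, helps surface an attack that is actively underway rather than waiting for a ransom demand.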
Additional research contribution and review by Pushan Rinnen and Dave Russell.
Courtesy of BARC Research ©2016 BARC – Business Application Research Center, a CXP Group Company
The IT industry and the world at large have always been subject to technology and business trends, sometimes undergoing major changes, such as the development of the personal computer, client/server computing and the evolution of the Internet.
Over the last few years, new trends have emerged that have had an enormous influence on how organizations work, interact, communicate, collaborate and protect themselves. Eight IT ‘meta-trends’ influence organizations’ strategies, operations and investments in a wide variety of ways:
- Artificial Intelligence
These meta-trends can be considered as the main drivers behind a number of important trends either related to the usage of software and technologies for business intelligence/analytics (BI) and data management or to the way BI is organized. They generally shape the future of business intelligence and – more specifically – the BI and data management trends we analyzed.
BARC’s BI Trend Monitor 2017 reflects on the business intelligence and data management trends currently driving the BI market from a user perspective.
In order to obtain useful data for the BI Trend Monitor, we asked almost 2,800 users, consultants and vendors for their views on the most important BI trends. Their responses reveal a comprehensive picture of the future of BI as well as regional, company and industry-specific differences, delivering an up-to-date, objective perspective on the business intelligence market.
The Most (and Least) Important BI Trends in 2017
Data discovery/visualization, self-service BI and data quality/master data management are the three topics BI practitioners identify as the most important trends in their work.
At the other end of the spectrum, data labs/data science, cloud BI and data as a product were voted as the least important of the twenty-one trends covered in BARC’s survey.
This shows that 'hyped' topics, or initiatives confined to early-moving companies, have not yet won greater mindshare than more mainstream business intelligence trends such as data discovery and self-service BI, or than fundamentally important topics that have been around for a while, like data quality and master data management.
Our View on the Results
Overall, there are no significant changes in the ranking of importance of BI trends compared to last year. This is a good indicator that our survey participants are not seeing any major market shifts or disruptions impacting their work.
Data discovery, self-service BI and master data/data quality management are currently the top business intelligence trends. While self-service BI and data discovery increased moderately in importance, master data and data quality management decreased slightly.
Self-service BI has been on organizations’ wish lists for a long time as IT departments struggle to satisfy steadily growing demand from end-users for faster changes and new developments to meet their BI needs. Enabling the business user community through ‘self-service BI’ is a good idea. Data discovery and visualization, as well as predictive analytics, are among the typical functions users want to consume in a self-service mode. However, an agreed data and tool governance framework is paramount to avoid losing control over data.
End-users recognize the need for data quality and master data management and, in our experience, initiatives in this area are often announced with a fanfare before quickly moving down the list of priorities for a variety of reasons. But at least organizations seem to be aware that the best-looking dashboard is worth nothing if there are flaws in the data it is based on. Business intelligence will not work without comprehensive data integration and data quality initiatives, but these have to be backed up with the right level of attention, resources and funding.
In the next few weeks, we will post a series of articles looking at each BI trend in more detail. You will learn how different regions, industries, user types, company sizes and best-in-class companies rate the various trends and how their views have changed since last year. Sign up for our newsletter below and we’ll keep you informed about the latest articles.
Click here to download the full BI Trend Monitor 2017 report.
This is the time of year that we all tend to clean things out, spruce things up and get ready for the months ahead. While we all concentrate on our closets, garages, and gardens, are you looking at your computers?
There’s an annual check-up for your automobile’s health, one for your physical health, and one for your pet’s health. So why not schedule a check-up for the items we probably spend more time with than our cars or our pets (sad to say!)?
Your home and work computers, tablet, and smartphone are probably the first things you turn on every day and the last things you turn off. We simply assume they will be there when we need them. But can you remember the last time you had an issue with one of these devices and didn’t have access for hours, or maybe a day? It seems like our entire life is thrown off balance. In a work setting, hours of time are lost, most often resulting in lost revenue.
Scheduling an annual review of your business computer systems just makes sense. For those of you not using an automated managed services platform, are you certain that all of your employees are performing updates as they should, or that you are on top of those for your servers? When did you actually buy that server that runs your company every day? Might it be time for an upgrade before it dies in the middle of a workday?
You’ve probably been using the same technology to manage your email and your spam for some time now, but are you aware of more efficient and perhaps more cost-effective ways to handle these? Are your employees accessing your work computers from home or on a tablet or smartphone? Are you aware of the new file sync and share services, which are not only easy to use but also increase productivity and security?
So, as we jump ahead to spring, you may want to meet with your business technology provider to review exactly what is running your business every day. Such a meeting can save time down the road, prevent lost productivity, and perhaps reduce your costs through greater efficiency.