At the turn of the 20th century, coal miners were in a tough spot: their livelihoods, and the continued operation of their employers, required working in dangerous conditions where exposure to invisible toxic gases killed large numbers of miners every year. Then Scottish physiologist John Scott Haldane proposed the use of sentinel species – small animals with fast metabolisms that were more sensitive to atmospheric contaminants – to give miners an earlier warning that toxic gases were present so they could escape. This is the origin of the well-worn "canary in the coal mine" idiom.
“But wait, didn’t the miners already have a warning? If their buddy falls over dead, then they know they should get out. Why do they need a canary?” [cue appalled silence]
The miners are the thing being protected, so we’d never accept the loss of one as an early warning system for the rest; the goal is to protect all of them. But what if the thing being protected wasn’t individuals? What if it was sensitive data distributed across the computers on a corporate network? A highly accurate indicator of a ransomware infection is a computer with all its contents encrypted, demanding a ransom. Of course, at that point, you’ve already lost at least one of your miners. This clearly isn’t an ideal approach, but it is effectively how many organizations become aware of ransomware infections today.
Ransomware is arguably one of the biggest challenges facing security teams today. Just a few years ago, a typical malware infection moved relatively slowly: the attacker navigated the environment looking for the crown jewels while trying to remain undetected. In the case of ransomware, however, endpoints, servers, and file shares are attacked and encrypted indiscriminately and with very little delay. The problem is amplified by the relatively low barrier to entry for creating ransomware and the proliferation of ransomware-as-a-service offerings. With this change in attacker tactics, security teams must re-evaluate their detection and prevention systems to ensure they are prepared for the new landscape of ransomware attacks.
When looking at security infrastructures for addressing ransomware, an obvious place to implement detection solutions is on the endpoints themselves. After all, it’s the scene of the crime and it’s where antivirus protection traditionally sits – what better place to detect ransomware? Turns out, the obvious approach is less than ideal.
To begin with, detection of ransomware on an endpoint has the same challenges as detecting other types of malware. Namely, through custom repacking of the malware, attackers can easily circumvent even the most up-to-date signature-based detection. This evolution of AV evasion over the past decade has dramatically reduced the efficacy of AV solutions and has led many to declare that “AV is dead.” It’s hard not to see endpoint detection of ransomware in the same light.
The second challenge to endpoint detection of ransomware is that newer strains of ransomware are not strictly file-based. For example, encryption of the host’s files can be achieved by using PowerShell to download malware directly into memory. Without a file to analyze and check against known signatures, ransomware detection and prevention tools are at a significant disadvantage, as process and active-memory analysis is extremely challenging.
The third, and arguably the biggest, challenge to detecting ransomware on the endpoint is that once an attacker establishes a presence there, it’s largely game over for that endpoint. Because the attacker sits on the same endpoint as both the data to be encrypted and the tools designed to protect that data, a savvy attacker will first attempt to disable those protections before encrypting anything. Some may argue that ransomware delivery and execution is a largely automated process, so disabling local protections is unlikely. That might once have been true, but attackers have learned from previous malware development and can automate attacks on endpoint ransomware protections in the same way they automated AV evasion.
So what would a better ransomware detection approach look like? It should be independent of production workloads, so that no critical data is impacted when an attack occurs; it should generate high-fidelity alerts, so that security teams don’t waste time chasing down false positives; and it should provide broad detection coverage across the environment, so that infections are detected reliably. Deception technologies, though not often thought of as a countermeasure for ransomware, can provide early detection without impacting production workloads.
Deception solutions implemented at the network, rather than running on the endpoints themselves, can create the illusion of additional endpoints, file shares, and other vulnerable services across the environment. By creating additional “fake” endpoints that contain no production data, any ransomware attacks on these systems are a win for the organization’s security teams as they are able to identify, quarantine, and remediate the ransomware prior to it impacting real production workloads and data.
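The core idea behind a network decoy can be reduced to a very small amount of code. The sketch below is a minimal, illustrative example (not any vendor's implementation): a listener on a port where no legitimate service lives, so that any inbound connection is suspicious by definition. The function name and alert fields are assumptions made for the example.

```python
# Minimal sketch of a network decoy: listen where no legitimate client
# should ever connect, and treat ANY connection as a suspicious event.
# All names here (start_decoy, alert fields) are illustrative.
import socket
import threading


def start_decoy(host: str, port: int, alerts: list) -> tuple:
    """Start a decoy listener in a background thread.

    Returns the bound port (useful when port=0 picks an ephemeral one)
    and the serving thread. The first connection produces one alert.
    """
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(5)
    actual_port = srv.getsockname()[1]

    def serve() -> None:
        conn, addr = srv.accept()
        # No production service runs here, so the mere fact of a
        # connection is a high-fidelity signal -- no payload analysis
        # or signatures required.
        alerts.append({"source_ip": addr[0], "source_port": addr[1],
                       "decoy_port": actual_port, "verdict": "suspicious"})
        conn.close()
        srv.close()

    t = threading.Thread(target=serve, daemon=True)
    t.start()
    return actual_port, t
```

A real deception platform layers protocol emulation, fleet management, and SIEM integration on top of this, but the detection logic is exactly this asymmetry: legitimate traffic never arrives, so everything that does arrive is an alert.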
As deception solutions are designed to lure attackers outside the realm of legitimate network traffic, any communications intercepted by the deception can immediately be classified as suspicious, if not outright malicious. In addition to identifying suspicious activity where no traffic should ever be, network-based deception solutions are also isolated from the inherent noise of endpoint processes, further increasing the accuracy of any resulting alerts. Deception-based ransomware detection approaches also have an advantage over many other types of detection systems as they are able to monitor for changes to the deception file system with total disregard for the contents of the data. This provides further certainty to the ransomware alerts generated, still without impacting any production data or workloads.
Lack of broad coverage has historically inhibited wider adoption of deception technologies: although they were isolated and accurate, it was challenging to ensure an attacker or infection would stumble upon the deception. In recent years, however, deception vendors have introduced various technical solutions to increase the breadth of coverage their products provide, in some cases without significantly increasing the solution’s footprint.
Like computer worms, botnets, and remote access trojans before it, ransomware has come to the forefront of threats facing organizations in recent years, and although new approaches are required to address this threat, solutions exist that can better detect and respond to ransomware without impacting production workloads or data. By diverting attackers away from production endpoints, deception technologies are able to better protect the organization with high-fidelity alerts and broad environmental coverage. Traditional anti-malware approaches tend to focus on looking deep within each endpoint for suspicious activity, but in the case of ransomware particularly, this equates to monitoring the coal miners: when an event happens, you’ve just lost a miner. Ransomware is an aggressive, fast-moving threat facing organizations of all sizes, so rapid, accurate, and isolated identification of potential infections is a necessity in today’s threat landscape.
This post was originally published on http://www.infosecisland.com/rss.html.