Any engineer or physicist will tell you that entropy is like gravity - there's no fighting it, it's the law! However, both can be used to advantage in lots of situations. In the IT industry, a file's entropy refers to a specific measure of randomness called "Shannon entropy", named for Claude Shannon. This value is essentially a measure of the predictability of any specific character in the file, based on the preceding characters (full details and math here: http://rosettacode.org/wiki/Entropy). In other words, it's a measure of the "randomness" of the data in a file, measured on a scale of 0 to 8 (8 bits in a byte), where typical text files will have a low value, and encrypted or compressed files will have a high one.

How can we use this concept? On its own, it's not all that useful, when you consider that many data file types (MS Office for one) are highly compressed, and so already have a high entropy value. So using entropy alone, there's no telling a good MS Office file from one encrypted by ransomware. However, most data files have a specific file header, which includes a set of identification bytes (called "magic bytes") that identify what the data file is. For instance, those bytes for PKZIP files are "PK", and PE32 executable files use "MZ". You can identify these files by using the "file" command. While this command is native to Linux, there are nice ports of it for Windows. If "file" can't identify a file type, we can then use the entropy value as a second check. If a file is encrypted, it will have a higher entropy value than one that isn't - in this case you are looking for a file where the character distribution is random, or at least much more random than in a "normal" data file. You need both of these checks to identify suspect files - as mentioned, today's complex data files are becoming much more compressed - Office files are actually PKZIP compressed these days, so they will identify as PKZIP when checked with "file".
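To make the two-check idea concrete, here's a minimal C sketch of the decision logic. The signature list is a tiny illustrative sample (the real "file" command knows thousands of types), and the 6.0 cutoff is the threshold used later in this article - both are assumptions of this sketch, not a complete implementation:

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Best-guess file type from the leading "magic bytes",
   or NULL when the signature is unrecognized.
   (A tiny sample list - the real "file" command knows thousands.) */
const char *magic_type(const unsigned char *buf, size_t n) {
    if (n >= 2 && memcmp(buf, "PK", 2) == 0) return "PKZIP";
    if (n >= 2 && memcmp(buf, "MZ", 2) == 0) return "PE32 executable";
    return NULL;
}

/* Flag a file as suspect only when BOTH checks fail:
   no recognizable header AND high entropy (scale is 0 to 8). */
bool is_suspect(const unsigned char *header, size_t n, double entropy) {
    if (magic_type(header, n) != NULL) return false; /* known type - trust it */
    return entropy > 6.0;                            /* headerless AND random */
}
```

Note how a compressed-but-recognized file (say, a modern Office document, which starts with "PK") is never flagged, no matter how high its entropy - that's the whole point of combining the two checks.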
Using these two checks to identify suspect files depends on a couple of things:
Using these two checks, I started with a simple PowerShell script that copies a subdirectory from one location to another, but leaves all of the suspected infected files behind. A log file is created that lists all files copied, and all files that are suspect and should be looked at. I used Sigcheck (from Sysinternals) to compute the entropy value. If the entropy is above 6 (8 is the maximum) and the magic bytes are unknown, I flag the file as "suspect". For a test "ransomed" file, I'm using Didier Stevens's sample file: ransomed.bin (see the bottom of this story for links).

I used PowerShell because most ransomware-affected shops that I've worked with have been infected via MS Office files and other Windows data files, so PowerShell seemed simpler than adding "install Python" into a customer's crash IR situation. You could certainly take this same logic and code an equivalent Python script that would cover Windows, Linux and OSX.

After finishing my replication script and taking a step back, I realized a few things:
So I re-wrote the script to be more single-minded. The final script simply lists suspect files rather than copying them. And I wrote a short C program to compute the entropy value (thanks to Rosetta Code for the starting point on this!), which simply spits out the numeric value, rather than lots of other stats I don't need for this job. The final output list of suspect files can be used in a few ways:
From this script, you can see that "cleaning" ransomware-infected files isn't an insurmountable problem. A simple script like this feeding rsync can be used to create a "clean" copy of a datastore and identify suspect files. Just be SURE to keep up with evaluating that "suspect" file list - as noted, depending on your data store there might be lots of clean files in that list at the moment (give me a few weeks to improve this). If you run the script as-is to blindly feed rsync, you won't have a complete copy of your datastore.

As always, this was a one-evening coding effort, so I'm sure that there is more "elegant" PowerShell syntax for one thing or another. Also, you'll see that my C code chooses readability over efficiency in a few spots. For either piece of code, if you find any errors, or if you identify better syntax to get the job done, please do use our comment form and let me know. Of more interest, if you find this code useful in your environment and you want to see a "version 2" - let me know in the comments also!

As I work on this code, you'll find the most up-to-date version at: https://github.com/robvandenbrink/Ransomware-Scan-and-Replicate

Didier's diaries on Ransomware and Entropy can be found here:
Rob VandenBrink | ISC Handler | Aug 8th 2016
I'm trying to get this to work - how do I set it up and run it? Am I just supposed to be running identify-ransomed-files.ps1?
Anonymous | Aug 9th 2016
Thank you Rob for such a detailed implementation. I did something similar for scanning network file shares.
Instead of PowerShell, I used Ent (from http://www.fourmilab.ch/random/ - there is a Windows version as well) and a Bash script. Using a combination of numbers such as entropy, chi-square, and the arithmetic mean, I was able to consistently differentiate all Microsoft Office files from their encrypted versions. So much better than using only entropy. The only file types that still look too random are PNG and GIF files - very compressed images. JPGs haven't been a problem. I have also had some trouble with false positives from compressed archives like gzip, 7zip, winzip, etc. For those, I do use the *nix "file" command to get the file signature, and just ignore them. I only rely on the magic bytes for these very few false positives.

I don't want to use magic bytes or any other "expected signature" as a first check; I would rather it be a last check, for a few false positives. One reason for this is that I don't know when ransomware authors will eventually start leaving file signatures alone. Many have already stopped changing file extensions. Although it may be quicker to just encrypt the whole file, it may be worth it to skip the header in an attempt to be stealthy.

Also, you should keep in mind that reading the entire file is not necessary. I perform the entropy check only on a couple of select 1-kilobyte blocks inside the file. This means it is scalable and just as fast on large files as on small files.

My continued concern, which will evade this type of detection, is ransomware that merely uses a common archive program and its built-in crypto to simply "zip" up a group of files with passwords. This encryption will be detected by both our methods, but will be ignored because of the check for common compressed file types. At that point we'll need behavioral analysis to see if this could be a slow human vs. a fast ransomware program. If the ransomware is written to behave, and zip/encrypt as a human would... then this cat and mouse may have a dead end.
Perhaps if we had a way to determine whether the encryption was done with a public key (common for ransomware) or a shared key (common in archiving software). But I've heard of fast ransomware encrypting with a shared (symmetric) key, and then encrypting that shared key with a public (asymmetric) key to be ransomed.
Anonymous | Aug 16th 2016