If you find yourself in such a situation, like I recently did, here is a throwback to the forensics methodology from two decades ago: creating a disk image, and getting a forensic timeline off an affected computer. That the computer is a VM in the Cloud makes things marginally easier, but with modern disk sizes approaching terabytes, disk image timelining is neither elegant nor quick. But it still works.

Let's say we have a VM that has been hacked. In my example, for demonstration purposes, I custom-created a VM named "whacked" in an Azure resource group named "whacked". The subscription IDs and resource IDs below have been obfuscated to protect the not-so-innocent Community College where this engagement occurred.

If you have the Azure CLI installed, and have the necessary privileges, you can use command line / PowerShell commands to forensicate. I recommend this over the Azure GUI, because it allows you to keep a precise log of what exactly you were doing.

First, find out which OS disk your affected VM is using:
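A minimal sketch of that lookup, assuming the resource group and the VM are both named "whacked" as in my example:

    # returns the resource ID of the managed OS disk
    az vm show --resource-group whacked --name whacked \
        --query "storageProfile.osDisk.managedDisk.id" --output tsv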
Then, get more info about that OS disk. This will show, for example, the size of the OS disk, when it was created, which OS it uses, etc.
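Something along these lines works, assuming the disk resource ID from the previous command was saved into a shell variable named OSDISK_ID (my choice of name, not an Azure convention):

    # save the OS disk resource ID for the following steps
    OSDISK_ID=$(az vm show --resource-group whacked --name whacked \
        --query "storageProfile.osDisk.managedDisk.id" --output tsv)

    # show size, creation time, OS type, etc. of the affected OS disk
    az disk show --ids "$OSDISK_ID" \
        --query "{name:name, sizeGb:diskSizeGb, os:osType, created:timeCreated}" --output table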
Create a snapshot of the affected disk. Nicely enough, this can be done while the VM is running. All you need is "Contributor" or "Owner" rights on the resource group or subscription where the affected VM is located.
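A sketch of that step; the snapshot name "whacked-snap" is my placeholder, and the location below is just an example:

    # snapshot the running VM's OS disk
    az snapshot create --resource-group whacked --name whacked-snap \
        --source "$OSDISK_ID" --location eastus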
Take note of the "location" parameter: it has to match the location of the disk, otherwise you'll get an obscure and unhelpful error like "disk not found".
Next step, we create a temporary access signature to this snapshot.
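Roughly like this, assuming a 24-hour (86400 seconds) validity is long enough for the copy operation that follows:

    # request a read-only SAS URI for the snapshot
    SAS_URI=$(az snapshot grant-access --resource-group whacked --name whacked-snap \
        --duration-in-seconds 86400 --query accessSas --output tsv)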
This allows us to copy the snapshot out of the affected subscription and resource, to a storage account that we control and maintain for forensic purposes:
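A sketch of the copy, with <Account-Name> standing in for your forensics storage account and "whacked-snap.vhd" as an example blob name; how you authenticate to that account (account key, --auth-mode login, etc.) depends on your setup:

    # copy the snapshot into the "images" container of the forensics storage account
    az storage blob copy start --account-name <Account-Name> \
        --destination-container images --destination-blob whacked-snap.vhd \
        --source-uri "$SAS_URI"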
The "Account-Name" that I removed is the name of your forensics Azure Storage Account, where you have a container named "images". The copy operation itself is asynchronous, and is going to take a while. You can check the status by using "az storage blob show", like this:
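For instance, querying just the copy progress, which is reported as "bytes copied / total bytes":

    az storage blob show --account-name <Account-Name> \
        --container-name images --name whacked-snap.vhd \
        --query "properties.copy.progress" --output tsv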
Once both numbers match, all bytes have been copied. In our case, the disk was ~127GB. Next step, create a new disk from the image. Make sure to pick a --size-gb that is bigger than your image:
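Something like the following, created in the resource group of my forensics SIFT VM; the blob URL and the 150 GB size are examples to go with the ~127GB image:

    # create a managed disk from the copied VHD blob in our own storage account
    az disk create --resource-group forensicdemo --name whacked-copy --size-gb 150 \
        --source "https://<Account-Name>.blob.core.windows.net/images/whacked-snap.vhd"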
Then, attach this new disk to a SANS SIFT VM that you have running in Azure for this purpose. In my example, the VM is called "sift" and sits in the resource group "forensicdemo":
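A sketch of the attach step, assuming the disk created above was named "whacked-copy":

    # attach the evidence disk as a data disk to the SIFT VM
    az vm disk attach --resource-group forensicdemo --vm-name sift --name whacked-copy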
Once this completes, you can log in to the SIFT VM, and mount the snapshot:
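On the SIFT VM, lsblk (or the kernel log) shows which device node the freshly attached data disk received:

    lsblk
    # or look at the kernel messages for the newly attached disk
    sudo dmesg | tail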
Looks like our image ended up getting linked as "sdd2". Let's mount it:
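For example, read-only onto a mount point of your choosing (/mnt/whacked is just my example):

    sudo mkdir -p /mnt/whacked
    sudo mount -o ro /dev/sdd2 /mnt/whacked
    # for an ext4 filesystem, "-o ro,noload" additionally skips journal replay on the evidence copy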
Once there, you can run Plaso / log2timeline.py (a minimal sketch follows at the end of this post), or forensicate the disk image in any other way desired. If live forensics is more your thing, you can also re-create a VM from the snapshot image (az vm create --attach-os-disk ..., with --admin-password and --admin-username parameters to reset the built-in credentials), and then log into it. Of course, doing so alters any ephemeral evidence, because you actually boot from the affected disk. But if there is something that you can analyze faster "live", go for it. After all, you still have the original image in the Azure Storage Account, so you can repeat this step as often as necessary until you get what you need.

If the VM was encrypted with a custom key, there is an additional hurdle. In this case, $disk.EncryptionSettingsCollection will be "not null", and you additionally need access to the affected subscription's Azure Key Vault to retrieve the BEK and KEK values of the encrypted disk. If this is the case in your environment, I recommend taking a look at the Microsoft-provided workbook for Azure forensics at https://docs.microsoft.com/en-us/azure/architecture/example-scenario/forensics/, which mostly encompasses the commands listed above, but also supports private-key-encrypted VM disks.
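To round out the timeline step mentioned above, here is a minimal Plaso sketch against the mounted image; the output paths are just examples:

    # build a Plaso storage file from the mounted evidence, then export a timeline CSV
    log2timeline.py --storage-file /cases/whacked.plaso /mnt/whacked
    psort.py -o l2tcsv -w /cases/whacked-timeline.csv /cases/whacked.plaso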
Daniel, ISC Handler, Feb 25th 2021