Published: 2011-10-29

The Sub Critical Control? Evidence Collection

In Critical Control 18 we discussed incident handling, which encompasses planning for and implementing incident response procedures. Fortunately, or unfortunately depending on your perspective, there is a large body of both experience and material on the subject. [1]

The quick win list [1] provides a great initial roadmap to success for this control, some of which I would like to call out. But first, evidence handling procedures.

A couple of employers ago, I was tasked, along with a couple of other talented security engineers, with updating the evidence handling procedures for the company. It is important to understand that during an incident, evidence collection is just as critical as getting to the bottom of what happened.

One rule that we adhered to, even when we were sure that an incident had been downgraded to an “event”, was to treat it as if it were going to be reviewed in a court of law.

Interestingly, there is also an RFC you can follow in this regard [2]: RFC 3227 outlines guidelines for evidence collection and archiving.

I would like to call out section 2.4 of RFC 3227, as it lists some basic things to think about when doing incident handling:


2.4 Legal Considerations

   Computer evidence needs to be

      -  Admissible:  It must conform to certain legal rules before it
         can be put before a court.

      -  Authentic:  It must be possible to positively tie evidentiary
         material to the incident.

      -  Complete:  It must tell the whole story and not just a
         particular perspective.

      -  Reliable:  There must be nothing about how the evidence was
         collected and subsequently handled that casts doubt about its
         authenticity and veracity.

      -  Believable:  It must be readily believable and understandable
         by a court.
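To support the "Authentic" and "Reliable" properties above, one simple habit is to cryptographically hash evidence as soon as it is collected and log who handled it and when. Here is a minimal sketch of the idea; the directory and file names are made up for the example, not part of any particular procedure:

```shell
# Hypothetical evidence directory; substitute your own collection location.
mkdir -p /tmp/evidence_demo
printf 'sample evidence data\n' > /tmp/evidence_demo/disk.img

# Record a SHA-256 manifest at collection time...
sha256sum /tmp/evidence_demo/disk.img > /tmp/evidence_demo/manifest.sha256

# ...and note who handled it and when (UTC) for the chain of custody.
printf '%s %s collected disk.img\n' "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$USER" \
    >> /tmp/evidence_demo/custody.log

# Any later verification should report "OK"; a mismatch casts doubt on the
# evidence and must itself be investigated and documented.
sha256sum -c /tmp/evidence_demo/manifest.sha256
```

Real procedures add much more (write blockers, signed custody forms, duplicate working copies), but even this much makes it far easier to show a court that the evidence was not altered after collection.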


So, in honor of our critical controls month, I would like to know what you do for evidence handling.

[1] http://www.sans.org/critical-security-controls/control.php?id=18

[2] http://tools.ietf.org/html/rfc3227

Richard Porter

--- ISC Handler on Duty


Published: 2011-10-28

Critical Control 20: Security Skills Assessment and Training to fill Gaps

There are two parts to this control - one focuses on users, the other on security and IT staff.

Keeping your users abreast of current threats and how to steer clear of these dangers is definitely important. But in today's compliance-driven corporate world, the average staff member already has to sit through many trainings and e-learnings on topics ranging from corporate records management to HR policies to anti-bid-rigging rules, etc. Hence, the first hurdle that every security training has to overcome is to actually get the initial attention of the audience.

If you had the choice between attending a "Security Awareness Training", and a presentation called "How to keep your kids safe on the Net" .. which one would you join? The latter can impart just about the same lessons as the former, but hardly anyone in the audience will catch on to the fact that you are teaching them to be careful on the Net just as much as you empower them to watch their kids.

In other words, as in all marketing endeavors, packaging is everything. Once you have the users' initial attention, the easiest way to keep them interested is by using real life examples from your own company or institution. Even if the audience happens to be already aware of a certain attack or threat, and would otherwise be bored, they will always be interested in what REALLY happened, close to home.

You might find out that users come with three levels of security clue:

1. Those who just don't know better
2. Those who do know better, but take shortcuts, don't care, or have an "it won't happen to me" attitude
3. Those who do know better, and stick to being careful

For Group #1, train them, patiently and repeatedly.
For Group #2, make a gory example out of one or two trespassers. The others will catch on. If you can't get away with gory examples "pour encourager les autres", then patiently treat Group #2 like Group #1.
For Group #3, thank them for every risk that they spot and report, and empower them to act as coaches for Group #1 staff in their team.

SANS Control #20 http://www.sans.org/critical-security-controls/control.php?id=20 and the SANS "Securing the Human" project (http://www.securingthehuman.org) are two good starting points for further information.

Now, for training of security and IT staff. For most readers of this ISC diary, this will mean yourself, and maybe also people that you manage in your team. With training budgets for 2012 currently getting drawn up in many companies, and the economic situation making it unlikely that the budget will be a brimming bucket of money, now is a good time to honestly assess where the gaps are and how to most effectively fill them.

Ask yourself:
- Do I have the know-how to oversee the implementation of some or all of the 20 critical controls? Where are my gaps?
- Would I have the know-how to actually implement, hands-on, some or all of the 20 critical controls? Where are my gaps?

If you are a manager of a security team, I'd recommend you do the above assessment for each of your staff members. Not everyone can be an expert in everything. But, sadly, the recent years of paperwork compliance (SOX, the old FISMA, etc) have bred a large caste of security staff whose main and only competency seems to be "to track open issues". In the past couple months though, senior executives have definitely started to catch on to the surprising delta between what the "security compliance report" suggests, and what the reality is.

SANS training is doing a great job teaching people (and even managers :) hands-on security skills of value. But this isn't a SANS training commercial. Just an encouragement with emphasis to all security specialists out there to make sure that you keep your skills up to snuff. And to all managers of security specialists, that you make sure to have the right people for the job on the team.

Because one thing's for certain: The job ain't gonna get any easier anytime soon.



Published: 2011-10-28

Critical Control 19: Data Recovery Capability

Incident responders may not always keep the business continuity planning (BCP) or management (BCM) team on their speed dial, but I can tell you it’s worthwhile to do so in consideration of Critical Control 19: Data Recovery Capability.

Successful data recovery is as much a part of reliability as it is security, so embrace the process as paramount to successful response. Whether it is a significant outage from operational data loss (the SQL server ate that data) or that moment that leaves us all shuddering and queasy (attackers have tweaked our data and it is no longer reliable), you have to know you can recover.
This control does mention testing restorations from backups twice, once in the measurements section and once in the procedures and tools section, but I humbly submit that every possible measurement and procedure should be tested quarterly at a minimum. 
Much as one might with incident response, drilling the recovery/restoration process is critical. And not tabletop exercises; I mean real data to real systems in real scenarios that mimic your production environment. Clearly, testing the process directly in production may be difficult, but a staging (or dev/test) environment is ideal for this testing.
Unfortunately for them, you need someone expert in the restoration/recovery process on call as part of your incident response planning.
Here’s a scenario to chew on. Imagine responding to a reported incident where critical system configurations have gone missing (operational snafu, not malicious). The next day, you respond to another incident where a particular configuration has put an environment at risk and the extent of exposure needs to be identified. As a result, you ask for the offline configuration, only to learn that it went missing in the incident from the day before, and that restoration was not immediately possible due to another unrelated systemic shortcoming. Aargh!
How to avoid this? Short answer: test, drill, validate. Regularly. More than regularly on critical systems. 
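To make the "test, drill, validate" point concrete: even a scripted smoke test that restores a backup into a staging area and compares it against the source beats assuming the tapes are good. A minimal sketch with tar follows; the directory names and sample data are invented for the example, and a real drill would of course use your actual backup tooling:

```shell
# Made-up source data standing in for a real system.
mkdir -p /tmp/drill/source /tmp/drill/staging
printf 'listen_port=443\n' > /tmp/drill/source/app.conf

# "Backup": archive the source tree.
tar -czf /tmp/drill/backup.tgz -C /tmp/drill source

# "Restore drill": extract into staging, never into production.
tar -xzf /tmp/drill/backup.tgz -C /tmp/drill/staging

# Validate: the restored tree must match the original byte for byte.
diff -r /tmp/drill/source /tmp/drill/staging/source && echo "restore verified"
```

The same shape scales up: restore into staging, diff (or checksum) against a known-good manifest, and alert on any mismatch, on a schedule, not just when disaster strikes.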
Another ugly problem that comes out of incident response but is directly affected by or is subject to data recovery practices is the "when did we get pwned?" scenario. This is where backup design is so important. 
As the control mentions, you have to factor for operating system, application software, and data recovery. Yet each of these three is influenced by full, differential, and incremental methodology, depending on need, scheduling, and planning as well as the retention period.
Can you conduct a successful, relatively painless recovery today if you found out you were compromised two weeks ago and all data since is suspect? If no, keep working towards that goal. There is a light at the end of that tunnel, and it may not be a train. ;-)
Been through this? Succeeded? Failed? Let us know via the comment form.
Russ McRee - russ at holisticinfosec dot org - http://holisticinfosec.blogspot.com - Twitter: @holisticinfosec



Published: 2011-10-27

Software Update Potpourri

A couple of updates were released recently that are worth calling to your attention. 

  • Quicktime - APPLE-SA-2011-10-26-1 QuickTime 7.7.1

This patch addresses several critical issues affecting QuickTime running on Windows.


More information is available at the Apple Security Updates
web site: http://support.apple.com/kb/HT1222

  • Chrome 15.0.874.102 to the Stable Channel for Windows, Mac, Linux, and Chrome Frame

This patch addresses a number of issues including XSS, Origin Policy violations, cookie theft and more. Chrome users should look at the details here:   http://googlechromereleases.blogspot.com/2011/10/chrome-stable-release.html

  • Java 6 Update 30 prerelease

A prerelease version of Java 6 Update 30 is now available to Java developers. This is a prerelease and not recommended for production systems. Java developers can check it out here: http://jdk6.java.net/6uNea.html

Thanks to Dave and Jim for bringing these to my attention.

Mark Baggett - @markbaggett



Published: 2011-10-27

Critical Control 18: Incident Response Capabilities

Some time ago I was brought in to help an organization create their Incident Response Team. Working together we defined an incident response procedure, assigned the various roles and responsibilities, worked with executive management to ensure the appropriate supporting policies and controls were in place and 'let her rip'. A few people in the management team had initially commented that they didn't see a need for all of the formality, as the organization had never experienced even a minor breach in security.

The first few months went by and everything seemed to be working fine. The head of the incident response team wrote up a summary of all the incidents for the last thirty days and distributed it to all employees at the end of each month. Most of the incidents were pretty innocuous. A virus infection here or there, a targeted web attack that was thwarted by mod_proxy, or other malicious but minor deviations from the norm.

Then, in the third month, someone reported a corporate laptop was lost by an employee. I'm told that after that report was distributed to all employees, the email account that was designated for reporting incidents got two separate emails asking if they should be reporting lost or stolen company assets. They were told yes. The following month 5 laptops were reported lost or stolen. The next month 6 laptops were reported lost or stolen. Over the first year of the incident response team's existence, 2.5 laptops were reported lost or stolen every month. Prior to having an incident response team the organization had "never had a laptop lost or stolen".

So was the creation of an IRT (Incident Response Team) responsible for the theft of all those laptops? Of course not! If you don't measure risk you can't manage it. If you don't have a formal process for capturing and responding to incidents, you will not know they are occurring. No matter your size, you should have internal incident response capabilities.
As they say, a failure to plan is a plan for failure. Here are some tips for ensuring the success of your organization's incident response capabilities.

  1. Write down your incident handling procedures. If it is written down, then it is easier to explain to the business why you are doing what you are doing in the heat of battle. If you don't have a written procedure you can use the NIST guideline as a framework. http://csrc.nist.gov/publications/nistpubs/800-61-rev1/SP800-61rev1.pdf

  2. Document the roles and responsibilities of people on your incident response team. This will often include representatives from Legal, Human Resources, Public Relations, and Compliance, your executive sponsor, and the usual suspects in the networking and information technology engineering groups, along with your security team.

  3. Management support is critical to the success of most business initiatives. It is especially important when dealing with potentially politically explosive issues that are often associated with security incidents. Maintaining excellent and frequent communications with your executive management is critical to the success of your team.

  4. Establish requirements for all personnel to report suspicious incidents to the incident response team.

  5. Generate a regular report that summarizes the incidents that have occurred and how you handled them. Distribute the report to all employees in the organization.

  6. Require all incident responders to report in within a predefined amount of time once an incident has been declared. Periodically test the team to make sure everyone can be reached in a timely manner. Once you have your team together, conduct training exercises with various scenarios that test the team's ability to access and identify evidence on various systems throughout the networks they are responsible for protecting.

If you would like more information here are some helpful resources:





Mark Baggett - Handler on Duty

Twitter @markbaggett



Published: 2011-10-26

The Theoretical "SSL Renegotiation" Issue gets a Whole Lot More Real!

For years, we have been taught (warned?) that establishing an SSL session consumes much more in the way of CPU resources than the actual sessions do, once established. We've also been warned that there is a theoretical vulnerability in SSL Renegotiation in many web server implementations.  Combined, they make for a nice "it'd be bad if someone wrote such a tool" story in many security classes.

These two situations are evident in the specifications for SSL offload and Load Balancer devices, which are typically rated in "sessions established per second" rather than a total session count or data throughput value. It's also very much "in our face" when doing vulnerability assessments, when web server after web server comes back with a vulnerability named something like "SSL Renegotiation saturation" (or similar).   We've been told, over and over, that there is a "theoretical" problem here, waiting for an exploit to happen.

Since there hasn't been much in the way of exploits in this area, efforts towards resolving the SSL Renegotiation problem haven't been on anyone's front burner. That's all changed now - THC (The Hackers Choice), has released another tool - THC-SSL-DOS. This tool targets the problem of SSL Renegotiation. With very limited bandwidth, a single host can DOS almost any vulnerable web server.  Even offload devices such as load balancers are vulnerable (though more attacking hosts are required). In their release notes, THC makes the excellent point that the SSL renegotiation feature has never been widely used, and arguably should be simply disabled on almost all webservers.

Unfortunately, SSL Renegotiation is enabled by default on many servers, and we all know what happens with defaults - systems get installed with default settings, then NEVER get changed.

Just to emphasise the point, THIS IS NOT A NEW SECURITY EXPOSURE, it's simply a handy proof of concept tool to demonstrate a problem that's been hanging around for quite some time, hopefully with the goal (and with luck, the result) of getting this setting changed on vulnerable systems. 

Take a peek at this new tool.  Hopefully it will serve as a catalyst, proving that this is one setting that should be changed post-install.  It'd be nice if the developers of affected web server applications would take this as a cue to modify their installation scripts to change the default value of this setting as well.
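If you are wondering where your own servers stand, one quick check is openssl s_client: after the handshake, a line consisting of "R" asks it to renegotiate, and whether the second handshake completes or errors out tells you whether client-initiated renegotiation is allowed. A hedged sketch follows; the hostname is a placeholder, and you should only point this at servers you are authorized to test:

```shell
# Placeholder target -- substitute a server you own or are authorized to test.
TARGET="www.example.com:443"

# "R" on its own line tells s_client to attempt a renegotiation.
# If you see RENEGOTIATING followed by a completed handshake, the server
# permits client-initiated renegotiation and the setting is worth reviewing;
# an error after RENEGOTIATING means the server refused.
# "|| true" keeps the sketch from aborting when the host is unreachable.
printf 'R\n' | timeout 10 openssl s_client -connect "$TARGET" 2>&1 \
    | grep -i 'RENEGOTIATING' || true
```

This is only a spot check, not a replacement for a proper scan, but it is enough to confirm whether a given server still has the default enabled.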




Rob VandenBrink


Published: 2011-10-26

Critical Control 17: Penetration Tests and Red Team Exercises


Another diary compliments of Handler in training Russ McRee:
Penetration testers and red teamers rejoice! We have a control for you: Critical Control 17: Penetration Tests and Red Team Exercises (hereafter referred to as PT & RT for brevity).
A few thoughts in support of your efforts:
1)     Before taking on this activity, formalize it with management (in writing) to include vision, mission statement, and statements of work (SOW) in order to set clear expectations (and keep you from being fired or jailed). Prepare to write reports and present findings. PT and RT activity is only as good as the dissemination of results and the subsequent remediation. Sure, the fun part is going after systems and resources with permission, but the documented follow-up is just as important.
2)     A formalized process inclusive of best practices and documentation also supports PT & RT on behalf of compliance requirements (PCI, etc.). Trust me when I say, it’s a lot easier to win the argument for a PT & RT program when you can tell your leadership that it supports meeting compliance requirements. Yes, compliance is often a “min bar” but if it helps get your program underway, you’re winning right?
3)     A great resource and good starting point: Open Source Security Testing Methodology Manual 3.0
4)     If you’re going to red team, then blue team while you’re at it. A well-devised, concerted offensive engagement against your enterprise is also an ideal opportunity for your defenders to validate their monitoring and hardening practices.
5)     While it’s nice to have resident expertise, it’s hard to imagine that every organization has the resources to dedicate personnel exclusively to PT & RT, much as may be the case with dedicated IR resources. Often these duties fall on network engineers and systems administrators with a penchant for security. If so, great; how better to tune red team/blue team chops.
6)     The social engineering (SE) aspect of PT & RT activity inevitably includes an organizational political component you should be sensitive to. I’ll cut to the chase: people fall for SE tactics all the time, and there is always shame associated with it. Making enemies will not help your cause. Devise SE tactics (educational intranet sites, metrics generators) that don’t automatically relegate people to the wall of shame/sheep. If you must actually compromise someone, dot your i’s and cross your t’s: non-invasive recon for likely or ideal targets, with management sign-off before total pwnzorship, is in your best interest. Again, your get-out-of-jail-free card is very important here, as malfeasance or anomalous behavior from systems belonging to your “victims” can potentially be attributed to you.
7)     Virtual environments, while not ideal, make for an inexpensive test bed for PT & RT activity. Build attacker VMs (largely done for you; BT5 or Samurai WTF anyone?) and victim VMs (unpatched Windows, vulnerable LAMP, etc.).
Have fun, but be careful.
What successes have you had structuring penetration testing and red teaming as a repeatable, sustained activity in your organizations? Let us know via the comment form.



-- Rick Wanner - rwanner at isc dot sans dot org - http://namedeplume.blogspot.com/ - Twitter:namedeplume (Protected)


Published: 2011-10-25

Recurring reporting made easy?

Most of us have to generate recurring reports on the state of security, system uptime or general performance at our respective workplaces.
Solid, clear reports may not appear to be one of the foundations that security is built on, but many voices would strongly disagree, and management is usually one of them. You have to clearly report on data, trends, issues and events, for reasons ranging from simple best-practice diligence to justifying why IT security is critical to your business, and everything in between. Good reporting won’t get you fame and fortune, but it can provide historical context, trending and a clear value for your efforts. Developing these mystic skills comes down to defining what, why and when these reports are for and who reads them, having conversations with those receiving the reports to tailor them to requirements, and finally plenty of practice.

Recently I've had to review a fair number of monthly reports from different sources and I'm always on the lookout for great ways to steal ideas from others and claim it as my own, er, learn and adapt from. I thought I'd share some observations on what I liked and what worked for me.

One quick note; these were all from the private sector, so I haven't reviewed any government or public sector reports but the theory is hopefully the same.

Rather than suggest some sort of universal template to fit every situation, I've observed the concepts behind these four words being applied to reporting. Using them correctly just made it so much easier to effectively understand and absorb the information:


  • Clarity
  • Consistency
  • Concise
  • Colourful

Here’s my interpretation, from hours of wading through those reports, to avoid your next report being used to test the speed of the office shredder.


Clarity

Know your audience and write the report for them, applying the correct level of terminology and detail; as general groups, think: your peer group, your boss, management or the general public.
No weird or wacky fonts. I have no issue with standard bolding, italics or capitals, but gothic or super-fancy fonts are distracting and slow the flow of reading.
Try to avoid jargon and the dreaded, unexplained three-letter acronym (TLA) as well. I spent an hour attempting to work out what ARE, ARM, POP and HIS meant and, despite some pretty good guesses, was completely wrong. The author had used team-only terminology for bespoke systems, so I may as well have played the lottery instead.
Avoid complex language and stick to plain language and terms, if possible. I do enjoy attempting to slip defenestration into certain reports, but if the reader has to look up the word it loses that “wit”. Reports aren’t about showing off, so keep the language practical and uncomplicated.


Consistency

How can someone review previous or future reports without the same points of reference? Using the same template, with the same headings, for the same recurring report makes it a breeze to see trends, and the reader gets used to what to expect to see.
If you are using tables or charts, keep using the same data, not random facts and figures. Keeping people guessing about what might be in next month's report is an interesting approach, but one that may well force them to form “that doesn’t make any sense” and “but, but the last report does mention this…” lynch mobs.


Concise

As technical people we love the details, but those recurring reports may not want every bit of detail, so summarise. You can refer to data, but sixteen pages of eight-point, densely compressed text of server alert messages is hard to read, let alone understand.
Keep away from opening statements that could come from a novel: “It was a dark and cold night. The winds were howling. Lightning forked wickedly across the skies, spearing the landscape surrounding the datacentre. With a final, silence-choked gasp the UPS failed, mere seconds before the haggard night shift crew took their seats.” “UPS failure at datacentre X, 23:45” covers that much more effectively.


Colourful

This isn’t a reference to fruity language, although a smattering of four-letter words will get the reader’s attention, but for all the wrong reasons. I’m, of course, meaning those eye-catching images stuffed into reports to make assimilating complex data simple and quick. You have to be a bit careful with this, but good use of colour in tables or charts can draw in the reader. Well-executed charts and tables can make absorbing complex data much easier and get across points very quickly. Clashing colours or, just as bad, tones that blend together, making it impossible to work out what’s happening, are annoying and distract from the reader’s ability to understand the report, or drain the will to continue through it.


Wrapping up, take pity on those you’re writing a report for and try to avoid making it one of those government forms Ph.D.s struggle to comprehend. A book that I really enjoyed reading was by Jon Moon [1], an English author who is determined to rid the world of confusing, poorly written paperwork of all ilks.
For those wondering what they could be writing these reports on, have a look through the SANS critical security controls [2], which my fellow handlers have spent this month expanding on, and turn the points into items to report back on about your environment.

As always, if you have any suggestions, insights or tips please feel free to comment.


[1] http://www.jmoon.co.uk/book.cfm
[2] http://www.sans.org/critical-security-controls/

Chris Mohan --- Internet Storm Center Handler on Duty


Published: 2011-10-24

Critical Control 16: Secure Network Engineering

We are now down to the last 5 controls, which are also labeled "Additional Controls". The reason they are labeled "additional" is not that they are less important; rather, these controls are processes that are harder to measure and automate. Controls 1-15 focused on issues that may be automated.

Control #16 illustrates the automation problems pretty well. Secure Network Engineering is a process that relies on qualified humans designing and maintaining a network with security in mind.

Many issues we discussed before are easier if the network was designed securely. For example, the last control, data leakage prevention, works best if egress points in your network are clearly defined and regulated. A good network design will also make it easier to block access to devices if they are found to be infected with malware, and it will make it harder for malware to spread internally.

Another problem that has come up before: How do you apply secure network engineering to an existing network? I have run into this many times. A network is supposed to be "re-designed" on the fly without interrupting current operations. Usually I have to say that this is just not possible without immense costs, and in some cases, it may be simpler and cheaper to build a new network from scratch.

There are some possibilities to automatically monitor at least part of this process. For example, if we receive an alert about a new server or a change to the network configuration, we may be able to automatically compare this to a change control system to ensure that the change was properly approved and went through a process reviewing our network design. In short: Make sure your actual network matches the network design, and don't allow the actual network to deviate from the secure design.


Johannes B. Ullrich, Ph.D.
SANS Technology Institute


Published: 2011-10-23

tcpdump and IPv6

I have been experimenting with IPv6 and tcpdump/libpcap over the past several weeks, and here are some of the filters that I have found work for me when looking for certain types of IPv6 traffic. tcpdump's IPv6 support still has some limitations, but it is still able to zoom in on some of the data you might be looking for. Here is the list of libpcap filters:

IPv6 and TCP
tcpdump -nr ipv6_traffic.pcap ip6 proto 6
tcpdump -nr ipv6_traffic.pcap ip6 protochain 6

IPv6 and UDP
tcpdump -nr ipv6_traffic.pcap ip6 proto 17
tcpdump -nr ipv6_traffic.pcap ip6 and udp

IPv6 and host fec0:0:0:bebe::2
tcpdump -nr ipv6_traffic.pcap ip6 host fec0:0:0:bebe::2

IPv6, host fec0:0:0:bebe::2 and TCP port 22
tcpdump -nr ipv6_traffic.pcap ip6 host fec0:0:0:bebe::2 and tcp port 22

IPv6, host fec0:0:0:bebe::2 and everything except TCP port 22
tcpdump -nr ipv6_traffic.pcap ip6 host fec0:0:0:bebe::2 and not tcp port 22
tcpdump -nr ipv6_traffic.pcap ip6 host fec0:0:0:bebe::2 and protochain 6 and not tcp port 22

IPv6, host fec0:0:0:bebe::2, and all traffic to destination port TCP 22
tcpdump -nr ipv6_traffic.pcap ip6 host fec0:0:0:bebe::2 and tcp dst port 22

IPv6, host fec0:0:0:bebe::2, and all traffic from source port TCP 22
tcpdump -nr ipv6_traffic.pcap ip6 host fec0:0:0:bebe::2 and tcp src port 22
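A couple more filters in the same spirit, for ICMPv6 and specifically for Neighbor Discovery. Note the assumption baked into the second one: `ip6[40]` only lands on the ICMPv6 type byte when no extension headers precede it. The empty capture file created below is just a stand-in so the filter syntax can be checked without real traffic; substitute your own pcap:

```shell
# Skip gracefully if tcpdump is not installed on this host.
command -v tcpdump >/dev/null || exit 0

# Stand-in capture: a bare pcap global header (little-endian, Ethernet link
# type, snaplen 65535) -- enough for tcpdump to validate filter syntax.
printf '\324\303\262\241\002\000\004\000\000\000\000\000\000\000\000\000\377\377\000\000\001\000\000\000' \
    > /tmp/ipv6_demo.pcap

# All ICMPv6 traffic (next header 58):
tcpdump -nr /tmp/ipv6_demo.pcap icmp6

# Neighbor Discovery only: ip6[40] is the ICMPv6 type field when there are no
# extension headers (135 = Neighbor Solicitation, 136 = Neighbor Advertisement).
tcpdump -nr /tmp/ipv6_demo.pcap 'icmp6 and (ip6[40] == 135 or ip6[40] == 136)'
```

The byte-offset workaround is needed because libpcap's `icmp6[icmp6type]` style indexing into upper-layer headers is not supported for IPv6 the way it is for IPv4.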

If you have tested other libpcap filters not listed here and would like to share them, post them in the comment form or email them via our contact form.


Guy Bruneau IPSS Inc. gbruneau at isc dot sans dot edu


Published: 2011-10-22

Oracle Java SE Critical Patch Update

Earlier this week we released a diary on the Oracle Critical Patch Update [3]. We would also like to emphasise that Oracle has also released a Java SE Critical Patch Update that patches multiple vulnerabilities (it also includes non-security fixes); the complete list is available here [1][2].

[1] http://www.oracle.com/technetwork/topics/security/javacpuoct2011-443431.html
[2] http://www.oracle.com/technetwork/topics/security/javacpuoct2011-443431.html#AppendixJAVA
[3] http://isc.sans.edu/diary/Oracle+Critical+Patch+Update/11839


Guy Bruneau IPSS Inc. gbruneau at isc dot sans dot edu


Published: 2011-10-21

Critical Control 15: Data Loss Prevention

Ever wondered whether events like Wikileaks pertain only to government agencies or large companies? Information is a precious commodity. Many institutions, regardless of size, hold information of interest to many people, and those people are willing to pay large sums of money for it, or even to commit major criminal acts to get it.

How can anybody get access to information in an unauthorized manner? There are attackers who at all times seek to exploit the vulnerabilities of information systems, but there are also users who, once they have been authorized to access a specific information asset, may have unrestricted access to the information and carry out actions such as copying and stealing it through removable storage media, e-mail, Dropbox, among others.

This means it is necessary to put in place controls that allow a user who has been authorized to access the information to manipulate it only in the ways allowed by the information asset classification. This is known as Data Loss Prevention (DLP). Under what criteria can we classify information? We can use the classics: confidentiality, integrity and availability, and we can also add other important properties such as traceability and non-repudiation. Traceability is the property of information that helps determine the operations performed on it at all times, and non-repudiation is the feature that ensures a transaction was carried out by the person whose user ID performed it and no one else. Depending on the classification on each variable, the operations allowed on the information asset can be defined: read-only, e-mail transmission, copy to a shared resource, among many others.

Data Loss Prevention software allows monitoring of the following:

  • Data in motion: When you have a network security perimeter in place, you can put the DLP device just before traffic reaches the firewall to monitor incoming and outgoing traffic, and then identify which users are violating information security rules by performing unauthorized transmission of information assets.
  • Data at rest: Information assets are stored on servers located inside datacenters. DLP software can be installed on servers to discover sensitive information stored in insecure locations such as open Windows shares and unencrypted storage devices.
  • Data in use: DLP software can be installed on endpoint devices to control the transmission of information assets via instant messaging, desktop e-mail clients, and web uploads.
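As a rough illustration of the data-at-rest case, a discovery scan can be as simple as walking a file tree and matching content against patterns for sensitive data. This is only a minimal sketch - the patterns and helper names are hypothetical, and real DLP products use far more sophisticated identification to keep false positives down:

```python
import os
import re

# Hypothetical patterns for illustration: US SSN-style and 16-digit card-style numbers.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
}

def scan_tree(root):
    """Walk a directory tree and report files containing sensitive-looking data."""
    findings = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "r", errors="ignore") as fh:
                    text = fh.read()
            except OSError:
                continue  # unreadable file; skip rather than abort the scan
            for label, pattern in PATTERNS.items():
                if pattern.search(text):
                    findings.append((path, label))
    return findings
```

In practice you would tune the patterns against your own information asset classification before trusting the results, exactly as the accuracy-testing advice below suggests.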

DLP implementations are very challenging because of information identification. If information is not correctly identified, false positives arise, and they can be very painful because they can stop the information flow inside the whole company. That is why you should perform several accuracy tests against the information asset classification and solve problems before deploying.

Please keep in mind that business needs come first and must be satisfied. You cannot implement controls that make company operations slow and painful. Check the control 15 implementation tips for more information.

Manuel Humberto Santander Peláez
SANS Internet Storm Center - Handler
Twitter: http://twitter.com/manuelsantander
Web: http://manuel.santander.name
e-mail: msantand at isc dot sans dot org


Published: 2011-10-21

JBoss Worm

A worm is making the rounds infecting JBoss application servers. JBoss is an open source Java based application server and it is currently maintained by RedHat.

If you do run JBoss, please make sure to read the instructions posted by RedHat here:




Johannes B. Ullrich, Ph.D.
SANS Technology Institute


Published: 2011-10-21

New Flash Click Jacking Exploit

Feross Aboukhadijeh posted a blog post about a vulnerability in Flash that allows a clickjacking attack to turn on the client's camera and microphone. The attack is conceptually similar to the original clickjacking attack presented in 2008, which also abused the Flash settings control panel.

The original attack "framed" the entire Flash control page. To prevent the attack, Adobe added frame busting code to the settings page. Feross' attack doesn't frame the entire page, but instead includes just the SWF file used to adjust the settings, bypassing the frame busting javascript in the process. 

Update: Adobe fixed the problem. The fix does not require any patches to client-side code. Instead, Adobe modified the settings page and applet that users load from Adobe's servers.

Details from Adobe: http://blogs.adobe.com/psirt/2011/10/clickjacking-issue-in-adobe-flash-player-settings-manager.html


Johannes B. Ullrich, Ph.D.
SANS Technology Institute


Published: 2011-10-20

Critical Control 14: Wireless Device Control

Mobility is one of the biggest challenges for information security professionals. Our offices are now full of users relying on wireless technology - not only laptops, but phones, tablets, and other devices used for corporate purposes. How can we provide access to the company's wireless network for devices belonging to staff members and third parties?

We have to select a proper authentication and cipher mechanism for the wireless network. Known authentication schemes are:

1. PreShared Key (PSK): This is known as the standard "personal network" authentication scheme. The client must supply the PSK to gain association and connectivity to the wireless network.

2. Certificates | Username/password: This is known as the "Enterprise" authentication scheme. The client must supply valid credentials to log in, including but not limited to username and password and certificates. A RADIUS server is mandatory for this type of authentication, and it must include the appropriate dictionary to interact smoothly with the network equipment you have in your company. 802.1X is the best option you can use to enforce secure authentication to the wireless network. To determine which level of security you want to implement at the authentication layer, there is a wide range of authentication protocols within the Extensible Authentication Protocol (EAP) standard to choose from, such as:

  • Lightweight Extensible Authentication Protocol (LEAP): A proprietary Cisco protocol that sends the authentication information using MS-CHAP, which makes it vulnerable to password-cracking attacks. I have seen this implementation widely deployed in my country because it is easy and fast to implement. I mention this option only because it should never be used in corporate production environments.
  • Protected Extensible Authentication Protocol (PEAP): Encapsulates the authentication information (username and password) in a TLS tunnel so it travels securely to the authentication server. It is an interesting alternative with a reasonable degree of implementation complexity, because it is not necessary to deploy certificates to all clients that connect to the network, which allows mobile devices like phones and tablets to connect without major trouble.
  • EAP-Transport Layer Security (EAP-TLS): Provides strong authentication security for the wireless network, because each client must present a valid certificate issued by the domain's certification authority. One of its cons is the difficulty of implementation on mobile devices, since not all operating system versions support it and some require additional software to work. Because client and server authenticate each other with certificates, it also resists man-in-the-middle attacks.
  • EAP-Tunneled Transport Layer Security (EAP-TTLS): The difference from the previous protocol is how clients authenticate: it is discretionary for the client device to present a valid certificate from the domain certificate authority. Instead, the server authenticates to the client with a valid certificate, and once the secure tunnel is established, the client authenticates by sending a username and password. This protects the credentials against eavesdropping and man-in-the-middle attacks. Many operating systems also need additional software to successfully authenticate to wireless networks using this protocol.
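As an illustration of the Enterprise scheme, a PEAP client configuration for the open-source wpa_supplicant might look like the fragment below. The SSID, identity, password, and certificate path are placeholders - adjust them to your environment:

```
network={
    ssid="CorpWLAN"
    key_mgmt=WPA-EAP
    eap=PEAP
    identity="jdoe"
    password="secret"
    ca_cert="/etc/ssl/certs/corp-ca.pem"
    phase2="auth=MSCHAPV2"
}
```

Note that pinning the server certificate via ca_cert is what protects the tunneled credentials from rogue access points impersonating your RADIUS server.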

How can we protect the WLAN traffic against eavesdropping? Known protection mechanisms are:

1. Wired Equivalent Privacy (WEP): A weak security algorithm that uses the RC4 stream cipher for confidentiality and the CRC-32 checksum for integrity. The vulnerability of this protocol lies in how the stream cipher is used, because the same key must never be used to encrypt traffic more than once. Since in practice the protocol has no scheme that provides a truly distinct key for each packet, you can recover the network's encryption key by monitoring wireless network packets. There are several documented attacks against this protocol, and many tools, such as aircrack and kismet, implement them. This protection mechanism is deprecated and should never be used in production environments where preventing unauthorized access is critical.

2. Wi-Fi Protected Access (WPA): This protocol is part of the IEEE 802.11i standard. It solves the encryption key problem by using the Temporal Key Integrity Protocol (TKIP), which generates a new 128-bit key for each packet transmitted on the network. This protocol was deprecated by the IEEE in January 2009.

3. Wi-Fi Protected Access 2 (WPA2): This protocol is also part of the IEEE 802.11i standard. As TKIP is insecure, WPA2 replaces it with Counter Mode with Cipher Block Chaining Message Authentication Code Protocol (CCMP), an AES-based protocol that combines counter mode (CTR) for data confidentiality with Cipher Block Chaining Message Authentication Code (CBC-MAC) for integrity and authenticity.
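The WEP weakness described above - reusing an RC4 keystream - can be demonstrated in a few lines. This is a toy sketch, not an attack on real WEP traffic: it implements plain RC4 and shows that when the same IV and key encrypt two packets, XORing the two ciphertexts cancels the keystream and leaks the XOR of the plaintexts without any knowledge of the key:

```python
def rc4(key, data):
    """Encrypt/decrypt `data` (bytes) with RC4 under `key` (bytes)."""
    # Key-scheduling algorithm (KSA)
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation (PRGA), XORed against the data
    out = bytearray()
    i = j = 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

# Two packets encrypted under the same IV || key reuse one keystream:
key = b"\x01\x02\x03" + b"secret-wep-key"   # a reused IV prepended to the key
c1 = rc4(key, b"attack at dawn!")
c2 = rc4(key, b"attack at dusk!")
# XOR of the ciphertexts equals XOR of the plaintexts -- no key needed
leak = bytes(a ^ b for a, b in zip(c1, c2))
```

This is exactly why per-packet keying (TKIP, and later CCMP) was introduced.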

Which combination of authentication and encryption scheme should you choose? Decide according to the level of risk to which you are exposed. I always recommend Enterprise PEAP authentication with WPA2, because it is not difficult to implement and provides a good level of security with broad interoperability for devices that want to connect to the network. If you are paranoid, you can always use Enterprise authentication with EAP-TLS/EAP-TTLS and WPA2.

Please don't forget to review the quick wins list for this control. They are really helpful when developing a plan to implement a Wireless Device Control Architecture.

Manuel Humberto Santander Peláez
SANS Internet Storm Center - Handler
Twitter: http://twitter.com/manuelsantander
Web: http://manuel.santander.name
e-mail: msantand at isc dot sans dot org


Published: 2011-10-20

Evil Printers Sending Mail

A reader reported receiving the following e-mail (modified to anonymize):

From: support@example.com
To: iscreader@example.com
Subject: Fwd: Scan from a HP Officejet #123456

A document was scanned and sent
to you using a Hewlett-Packard HP Officejet 28628D
Images: 4
Attachment Type: Image (.jpg) Download

I do not have a printer like this, but it is possible that a multifunction device will send scanned documents as an e-mail in this form. In this case, the links, which I simulated above using a blue underlined font, both lead to a now defunct URL: http://freebooksdfl (dot) info/main.php . The domain is marked as "suspended for spam or abuse" in whois. One of our handlers reports seeing similar e-mail but not being able to capture any of the content on related links so far.

Johannes B. Ullrich, Ph.D.
SANS Technology Institute


Published: 2011-10-20

Critical Control 13: Limitation and Control of Network Ports, Protocols, and Services

Observing never-ending port scans against my systems was one reason I started DShield.org back in 2000, and DShield shows that these scans continue today. The goal of a port scan is to find vulnerable services; later, the attacker will use this reconnaissance to exploit them.

In order to protect yourself, two basic measures need to be taken:

1 - Limit listening services.

As part of your standard configuration, you should turn off all unneeded services. A service that is not running cannot be attacked. Of course, you will also need to monitor any changes to this standard configuration. The control of listening services should not stop at controlling services commonly installed on the particular host; it should include rogue services as well.

Here are a few ideas to review listening services on hosts:

  • Review the output of "netstat" regularly. Netstat will show any listening services. Of course, in the case of rogue services, an attacker may use rootkits to mask these services from tools like netstat.
  • Review ephemeral port usage. If a port is used by a listening service, it cannot be used as an ephemeral port for outbound connections. You will see a "gap" if you plot all used ephemeral ports on a system.
  • Run regular port scans. Periodically scan your systems for listening ports. However, be aware that an attacker may have masked the use of the port so that it only responds to requests from a particular source.
  • Network monitoring: Tools like "pads" are able to detect new services on a network passively. This may enable you to detect hidden services as soon as the attacker connects to them.
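As a companion to the port-scan suggestion, here is a minimal sketch of checking a host for listening TCP ports using only the standard library. The default port range and timeout are arbitrary illustrative choices, and of course this shares the limitation noted above: a service hiding behind source filtering will not answer:

```python
import socket

def listening_ports(host="127.0.0.1", ports=range(1, 1025), timeout=0.2):
    """Return the TCP ports on `host` that accept a connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports
```

Diffing the output of repeated runs against your standard configuration is a cheap way to spot a newly appeared service.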

2 - Apply firewall rules.

Back in 2000, firewalls were a lot less common than they are today. Today, systems ship with host-based firewalls, often already enabled to block all inbound traffic by default. In addition to host-based firewalls, a well designed network should include network firewalls and take advantage of capabilities in devices like switches to further limit network traffic.


Johannes B. Ullrich, Ph.D.
SANS Technology Institute


Published: 2011-10-19

House for rent! Observing an Overpayment Scam

About a month ago, my wife posted a "House for Rent" ad on Craigslist. (real nice house in a great area btw... in case someone is moving to Jacksonville ;-) ). A couple responses came in, among them, one from a person in England. Odd, but there are actually a couple British living in the neighborhood, so she responded:

From: C M [*** names altered ***]
Subject: Rent Inquiry
Hello - 
  I'm inquiring about the rental property, I will like to get some more details about the property,
I'll like you to give me the below detail ...
[*** questions about property ***]
Certainly not a native speaker of English (the questions I omitted were normal questions someone would have about a house: cost, when will it be available, utilities included, address...). Some were answered already in the Craigslist ad, but OK. If you deal with prospective tenants, that isn't unusual. At this point, we didn't know that we were dealing with someone who isn't local.
My wife's response:
From: H
Subject: your inquire about ...

Hi C

thanks for your interest. Please see the answers to your detailed questions below. 
Please feel free to call my cell phone *** if you would like to see the property 
in person

... answers to questions removed ....
And another email from the prospective renter. Again, sort of routine questions. At this point, the renter identifies he lives in England:
From: C M

Subject: Re: your inquire about ...

Hello H -

      Thanks for your respond, firstly I would want you to know that the property 
is OK with me and I would like to rent the property. I will be staying in the 
property for 1 year after which I will extend my contract on the property if OK 
with my need. 

I work with '*** ENGINEERING LIMITED' in England as a CNC 5 axis machining centre 
setter/operator/programmer and I'm on transfer to the USA. 

I will be moving with my wife, I'd like to know how far is the place from bus station, 
police station and gas station. 

At this point I want you to know that my company will handle the first month 
and the deposit which is ($2470) after which other payment for the property will 
be handle by me in person. 

I would also want you to know that all application and lease papers will be sign 
by me in person when I arrive. 

If this is OK with you, kindly send me the following details listed below ...

'Full Name that will be on the check'
'Mailing Address where you can receive the check'
'Home Phone'
'cell phone'

Once I receive these details from you, I'll send it to my employer, so that the
payment can be issued out to you immediately. We'll be moving in on the 1st of 
November 2011. Looking forward to your reply.

Best Regards


My wife responded (with the PO Box address she uses for the rental business; she did not provide a home phone number). This was WAY too easy. A person so quick to sign up for a house unseen? We must have been too cheap!

And a few days later, the check arrived:


The check was written in the name of a person listed as an accountant / notary public in the town of Temecula, but the number I found is now used by a different company. The bank, Temecula Valley Bank, failed in July 2009 (http://fdic.gov/bank/individual/failed/temecula.html) and has since been acquired by First Citizens. It is not clear if the check would be honored (if it were real). We didn't try to cash it.

It didn't take long to find out why we got such a "generous" check. First month's rent + deposit was only around $2,000. Instead, we got almost $7,000!! An e-mail arrived essentially the same day the check did, apologizing for the overpayment and asking us to split the excess and send it via Western Union to two different addresses in the UK.

Luckily no damage was done to us. I am still trying to figure out if the person named as the origin of the check actually exists and was harmed. I have no reason to believe that this person, if they exist, is aware of or profiting from this scam. We did report this to http://www.ic3.gov .

According to the FBI's Internet Crime Complaint Center (IC3), 3.6% of the complaints relate to overpayment fraud. 


Johannes B. Ullrich, Ph.D.
SANS Technology Institute


Published: 2011-10-19

Oracle Critical Patch Update

Those of you that are Oracle product users will be used to the quarterly Critical Patch Update. In case you missed it, it was released on the 17th.  There is a patch out for most of the major products.  Detailed information can be found here http://www.oracle.com/technetwork/topics/security/cpuoct2011-330135.html 

The appendix of the above note shows the affected CVEs and the associated CVSS scores.  The criteria for the scores are shown, so you should be able to determine the local impact for your organisation. 

If you are running Oracle I suggest you start looking at these sooner rather than later, especially if you need to comply with PCI DSS and your onsite audit is getting near.

Mark H


Published: 2011-10-19

The old new Stuxnet...DuQu?

Yes, the title probably makes no sense at first, but keep reading...:)

Today was a pretty good day if you like malware and RE...

Symantec, McAfee and F-Secure, to name a few security vendors, released information about what they are calling "DuQu"...yes, I agree that it is a terrible name, but it stems from the fact that this malware creates files in the user's temp folder that start with ~DQXXX.tmp (where XXX can be any number)...

There are several common aspects between DuQu and Stuxnet that lead to the conclusion that they were written by the same group.

While the original Stuxnet was focused on industrial systems, aka SCADA, this DuQu malware is mostly used in a recon process, acting as an advanced RAT (Remote Administration Tool).  Forget about Gh0st RAT or BlackShades RAT, to name two "famous" ones...those are total amateurs compared to DuQu.

DuQu receives commands via an encrypted config file, and seems to download a password stealer that is able to record several behaviors from the user and machine and send them to a command-and-control IP in India.

Like some of the components of the original Stuxnet, this one was also able to decrypt and extract additional components embedded into other PE files...fantastic! 

Oh, and like Stuxnet, some components had a VALID digital signature...:)

And before I forget, according to Symantec's report, new samples with a compilation time of October 17th were discovered and are still being checked...

Agree that it is a good day for Reverse Engineers?


Pedro Bueno (pbueno /%%/ isc. sans. org)
Twitter: http://twitter.com/besecure


Published: 2011-10-18

Critical Control 12 : Malware Defense

This diary has been posted on behalf of Russ McRee.

Those of you who are regularly committed to the task of protecting your enterprise from malware are
well aware of the pain points. Critical Control 12: Malware Defenses offers nine prospects for success in
the battle against a continuous and pervasive challenge.

Amongst the quick wins are easier methods such as preventing auto-run content; in the context of share
jumping worms such as Harakit/Renocide this will definitely help, but there are additional tactics that
supplement the list found in Critical Control 12.

1) In general, is there really a need to allow outbound sessions initiated from the likes of
production web servers? Preventing web browsing from production environments will definitely
cause whining, but it can reduce the attack surface significantly.

2) Commercial SIEMs clearly support #6 (automated monitoring tools should use behavior-based
anomaly detection) but correlation needn’t be limited to expensive solutions. The likes of
LogParser for Windows users or some strong grep, sed, and awk kung-fu for *nix users can be
utilized to create simple correlation tasks.

3) As part of #8 (continuous monitoring on outbound traffic) consider monitoring DNS and making
use of blocklists. While this entails much work (Advanced), particularly at scale, the ability
to blackhole hosts making requests for malicious domains has clear value. Guy Bruneau just
provided an update for this tactic a few days ago. Also, in support of correlation activity, if at all
possible, some semblance of a network baseline (known good egress) can be very useful even if
only utilized in high value networks.

4) To supplement #9 (an Incident Response process that allows for malware sample handling)
implement a very clearly defined process; deviation from established, tested standards risks
further outbreak. There are somewhat dated resources to draw from to help define initial
process, including NIST and CERT but there’s a critically important component related to the
overall malware incident response process for you to consider. DRILL! If you don’t practice
this activity (actual response as well as transport and analysis) on a regular basis you can’t
know what you don’t know. Operate under the premise that “no battle plan survives contact
with the enemy” every time you conduct a drill, improve via lessons learned, and implement
enhancements; you’ll be more likely to “survive contact.” Undertake this activity with other
teams upon which you have dependencies. Trying to quarantine an outbreak on specific VLANs
without the help of your network team, or deploying an emergency hotfix or patch without your
systems admins won’t make for a very good drill or tabletop exercise. Varying scenarios (worms
vs. Zeus vs. APT) will help test the boundaries of your skillsets as well.
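The DNS blocklist tactic from point 3 above can be sketched in a few lines. The blocklist entries here are made-up placeholders; a production deployment would pull a real feed (such as the sinkhole lists mentioned) and route the hits into your blackhole or alerting pipeline:

```python
BLOCKLIST = {"evil.example", "malware-c2.test"}  # hypothetical feed entries

def is_blocked(domain, blocklist=BLOCKLIST):
    """Match a queried domain against the blocklist, including subdomains."""
    labels = domain.lower().rstrip(".").split(".")
    # Check the domain itself and every parent suffix (www.evil.example -> evil.example)
    return any(".".join(labels[i:]) in blocklist for i in range(len(labels)))

def flag_queries(query_log, blocklist=BLOCKLIST):
    """Given (client_ip, domain) pairs from a DNS log, return hits to investigate."""
    return [(ip, d) for ip, d in query_log if is_blocked(d, blocklist)]
```

Even this naive suffix match illustrates the payoff: the infected client identifies itself the moment it resolves its command-and-control domain.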

What’s working for you in the fight against malware? Let us know via the comment form.


Published: 2011-10-17

Critical Control 11: Account Monitoring and Control


Both Account Monitoring and Account Control are things that "slide by" in many organizations, and come up over and over (and over) again in security assessments.
Things that often get missed or overlooked:

Too many Administrative Accounts. 
All too often, we see that everyone in the IT group has "Administrator"-equivalent rights in Active Directory.  If you are an application developer, you don't need Admin (ever).  If you mainly reset passwords, you also do not need Admin rights.

Using the Administrator or Root Account directly.  To add to the first point, everyone who needs admin rights should have a named account that has those rights.  So, for instance, Jane Doe might have an account "jdoe" for day-to-day application use, but an admin account of "admin.jdoe".  If people use the administrator accounts directly, then there is no way of ever finding out "who did what" in the event that you need that information (and believe me, someday you will need that information).  If you can do this with a single admin account for multiple platforms (for instance, an Active Directory account), it also means that when an admin leaves the company, you can revoke their access by deactivating their account from a single location.

Using an Admin level account for day-to-day tasks.  Let's paint a scenario - if you check your email with an admin level account, and some malware gets past your SPAM filter (like that doesn't happen every day), the malware now has admin rights in your domain.  If it's a keylogger, and you now SSH to a router or fire up vCenter to admin your VMware Infrastructure, they've now got credentials and access to a whole lot more of your Datacenter.  Really, use sudo or su in Linux, or use "run as administrator" in Windows to flip back and forth.  Or if you really need admin, keep a VM running that has that right so you can flip back and forth easily!

Work with HR for account creation and deletion.  In all too many cases we see dozens of accounts (sometimes hundreds) that haven't been used in months, only to find that people have left the organization and the IT group wasn't told.  Even if their account data needs to be kept around, create a "data transition procedure" to move data to the person who needs it next after someone leaves.

Shared accounts are EVIL (really). 
Too many times we see clerical accounts shared between dozens of people in a group.  These folks generally have direct access to customer information and to data input that affects prices.  I've seen one example where a temp wasn't sure what a field was, so they put "1" in to close out orders.  Unfortunately, it was the "dollars per square foot" value for the material selected - it took accounting weeks to untangle that mess!  Without named accounts, it would have been impossible to figure out who was making this error!  Shared email accounts can create similar problems with accountability.

Password complexity is a must-have these days.  While we can have a flame-fest about whether complex passwords or passphrases are better (I'd lean towards passphrases, but that's not workable in every environment), you simply can't have people use "password" or their kids' names anymore for access - it's simply too easy to crack. 

Account Lockout is a must have. 
If someone is trying to brute-force your CEO's webmail account, yes, you do want the account locked until you can speak with them.  Better they lose access for an evening than have their account compromised and confidential information disclosed (next year's products, mergers or acquisitions, salaries, etc.).

If you don't have a Password Policy (or have it covered in your Acceptable Use Policy), it's probably time you put one together that covers all of these issues, as well as enforcement of periodic changes.  Make sure that whatever is in the policy, you can enforce it in the OS (it's not a bad thing to mirror the default Windows password complexity setup; that way enforcement and audit are built into the OS).

While you are at it, put one-way encryption into your password policy.  We recently had a lively discussion about user passwords for an application being stored in a database "in case someone needed it".  You should never need a user's password; if you do, you need to revisit how your application is written.  If you keep users' passwords, they immediately have deniability for anything that happens, which could mean that system administrators are then suspected or found liable in the case of illegal activity.  So, really, get with the 90's and use the OS passwords wherever possible, or, second choice, use hashes and salts to govern your app accounts.
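To make the one-way storage point concrete, here is a minimal sketch of salted, iterated password hashing using only the Python standard library. The iteration count is an illustrative choice; tune it for your hardware, and prefer a maintained library in production:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=100_000):
    """Store only a random salt and an iterated one-way hash -- never the password."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password, salt, stored_digest, iterations=100_000):
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, stored_digest)
```

With this scheme nobody - administrator included - can recover a user's password from the database, which is exactly the accountability property argued for above.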

You can enforce and monitor all of this with native logging and controls in most popular operating systems (Windows, Linux, Unix).  If you have a legacy system that cannot do this, it's probably a system that should be revisited.

As always, any tools you might use, solutions you may have found or "war stories" you'd like to share are welcomed - please use our comment form !

PS - a handy "su" for Windows (if you have a neater "su" or "sudo" solution, please share via our comment form):

========== su.cmd ==============

@echo off
if "%1" == "" goto HELP
if "%1" == "?" goto HELP
runas /env /user:%1 "cmd"
goto :EOF
:HELP
echo ===========================================================
echo SU.CMD - start a shell as another user (usually admin)
echo Usage:   su USERID
echo Where USERID is the target user
echo It is recommended that you do NOT SU to or login as native
echo Administrator accounts
echo ===========================================================



Rob VandenBrink


Published: 2011-10-15

DNS Sinkhole Parser Script Update

Those using the DNS Sinkhole ISO that I have made available on the Whitehats.ca site can now download the most current version of sinkhole_parser.sh script between new ISO releases. The script contains new lists that were not part of the 7 July 2011 release. The script is available on the handler's server here with the MD5 here.

DNS Sinkhole using your own BIND Server

I have posted all the necessary scripts used in the ISO if you want to use your own BIND setup. The tarball is available here with the MD5 here. Follow the instructions posted on this page to get started.

[1] http://handlers.dshield.org/gbruneau/


Guy Bruneau IPSS Inc. gbruneau at isc dot sans dot edu


Published: 2011-10-14

Critical Control 9 - Controlled Access Based on the Need to Know

For the full description, please see: http://www.sans.org/critical-security-controls/control.php?id=9

Whenever we talk security and assign access control lists, the principle of least privilege comes up. Our firewalls should block all ports but the ones we need to do business. The same is true for file access control lists (ACLs): we should only allow read or write access to files as needed.

The principle of least privilege is fundamental to information security, and closely related to the idea of "need to know". This term tends to be used more in government and military contexts, but it is very valid in commercial networks as well.

For example, in order to obtain certain information, a user needs a certain "clearance" (usually a position in the company) AND a need to know the information. In a hospital setting, for example, all nurses are likely considered trusted enough to read any patient's information. However, they should still only access information for the patients they deal with.
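The hospital example boils down to a two-part test: sufficient clearance AND a need to know, and denial if either is missing. A toy sketch, with all names and levels hypothetical:

```python
def can_access(user, record):
    """Access requires both sufficient clearance and a need to know."""
    cleared = user["clearance"] >= record["classification"]
    need_to_know = record["patient"] in user["assigned_patients"]
    return cleared and need_to_know

# A trusted nurse is still denied a chart for a patient she is not assigned to:
nurse = {"clearance": 2, "assigned_patients": {"pat-001", "pat-002"}}
chart = {"classification": 2, "patient": "pat-003"}
```

The hard part in practice is not this check but keeping the `assigned_patients` association and the record labels accurate, which is exactly the labeling problem discussed next.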

Fine-grained access controls like this depend critically on the correct labeling of data. In most cases I have seen, the labeling of data is actually the main problem. Consider a spreadsheet with patient data in a hospital. To provide proper access control, the access control system needs to take into account which patients are listed in the spreadsheet, then compare that list to the list of patients a nurse is associated with before providing access. Realistically, this is not going to happen. Data needs to be properly segmented; once data of various classifications ends up in the same spot (like an Excel spreadsheet), it is usually too late.

As a start, one should probably first define the different roles in the organization and figure out what each role needs to know to get its work done. Later, the roles may be refined and access control further restricted. The same is true for data labels: initially you may break data down into rough categories, and as your system matures you may come up with finer-grained categories.

But don't rush this. Nothing is more frustrating than security getting in the way of normal business processes, and this is probably the fastest way to lose steam for your initiative. This control should be considered one for a more mature organization that has already covered most of the other controls. Start this one slowly, and consider implementing detective controls before enforcement.

To go back to our hospital case: if you come into the emergency room bleeding, your priority is that the nurse has fast and proper access to your medical record. Getting proper help quickly is more important (at least at that time) than the confidentiality of your patient record. Instead of focusing on enforcing access controls, a hospital may deploy log analysis to monitor nurses who accessed more files than others, or, for example, to review who accessed the records of a celebrity visiting the hospital.



Johannes B. Ullrich, Ph.D.
SANS Technology Institute


Published: 2011-10-13

Critical Control 10: Continuous Vulnerability Assessment and Remediation

This control, Continuous Vulnerability Assessment and Remediation, is an important mechanism to detect known vulnerabilities and, where possible, patch them or use additional host or network controls to prevent exploitation until a patch or update is released. Preferably, the assessment tools should categorize the discovered vulnerabilities using industry-recognized standards such as CVE, so the data can be correlated with other network devices such as a SIEM to detect attempted or successful exploitation of the vulnerability.

There are a large number of vulnerability management tools available on the market (free and commercial) which can be used to evaluate system configuration on a continuous basis. A first step would be to run a daily discovery scan against network devices and a full credentialed audit of the systems on a weekly basis, taking into consideration the impact on the network (i.e. scanning when the network devices are least busy). This ensures that newly found vulnerabilities are taken care of in a timely manner soon after they are discovered. Whenever possible, it is important that the patch be tested in an environment that mimics the production system before being pushed enterprise-wide. If the patch fails the tests, other mitigating controls should be tested and put in place to prevent exploitation.

In order to put in place an effective continuous vulnerability assessment plan, the enterprise scanner should be able to compare the results against a baseline and alert the security team when significant changes are detected. This can be done via a ticketing system, with email, etc.
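A baseline comparison of this sort can be as simple as diffing the host/CVE pairs from consecutive scans. A minimal shell sketch follows; the file contents and paths are illustrative, and a real deployment would feed the alerts into the ticketing system or mail the security team:

```shell
# Sketch: compare a new scan result against a baseline and alert on
# significant changes. File contents are illustrative (host CVE pairs).

cat > /tmp/baseline.txt <<'EOF'
10.0.0.5 CVE-2011-3368
10.0.0.9 CVE-2011-1234
EOF

cat > /tmp/current.txt <<'EOF'
10.0.0.5 CVE-2011-3368
10.0.0.7 CVE-2011-9999
EOF

# comm needs sorted input; -13 leaves only lines unique to the
# current scan, i.e. new findings.
sort /tmp/baseline.txt -o /tmp/baseline.txt
sort /tmp/current.txt  -o /tmp/current.txt

comm -13 /tmp/baseline.txt /tmp/current.txt |
while read host cve; do
    # In practice: open a ticket or mail the security team here.
    echo "ALERT: new finding $cve on $host"
done
```

Running `comm -31` instead lists findings that have been resolved since the baseline, which is equally worth tracking.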

All systems identified in CC1 should be scanned for known vulnerabilities, and the scanner should alert the security team upon the discovery of new devices. To ensure CC10 is effective, the security team must periodically verify that the daily and weekly assessments are working as configured and have completed successfully.

There are many more audit tools out there than those listed below; let us know which have been the most effective in your environment.

Commercial Audit Tools

Retina: http://www.eeye.com
GFI LanGuard: http://www.gfi.com
nCircle: http://www.ncircle.com
Nessus: http://www.tenable.com
Qualys: http://www.qualys.com

Freeware Audit Tools

IPScanner: http://www.radmin.com/products/ipscanner/index.php
PSI: http://secunia.com/vulnerability_scanning/personal/
Nmap: http://insecure.org
OpenVAS: http://www.openvas.org

[1] http://www.sans.org/critical-security-controls/control.php?id=10


Guy Bruneau IPSS Inc. gbruneau at isc dot sans dot edu



Published: 2011-10-13

Dennis M. Ritchie (1941 - 2011)


The news that Dennis M. Ritchie, the creator of the C Programming language and well known for contributing to the creation of the UNIX Operating System, died on October 8, 2011, hit the Internet headlines today. 

He is also very well known to all UNIX/C programmers for co-authoring the book The C Programming Language [1]. I will not profess to know much about Dennis M. Ritchie to speak of here, but I do recognize his contribution to my career and all the UNIX that flows through my blood stream.

I have read many stories today covering the life of Dennis M. Ritchie.  The one I found most credible and interesting to read, was ironically an autobiography [2]. Take a moment of appreciation and read through it when you have a chance.  Bell-Labs also hosts a page for dmr [3]. Those pages are my recommended reading for the day.

The loss of Steve Jobs last week is recognizably an enormous loss to society and the world. A few days later, we have lost Dennis M. Ritchie. It is an understatement to say that Steve Jobs and all like him have been standing on Dennis M. Ritchie's shoulders for years. Dennis M. Ritchie was a giant and should be recognized as such.

Simply put, this world is a better, more productive and richer place because of Dennis M. Ritchie. We all owe him a debt of gratitude.


#include <stdio.h>

int main () {

   printf("goodbye, dmr. RIP.\n");

   return 0;
}

[1] http://cm.bell-labs.com/cm/cs/cbook/index.html
[2] http://cm.bell-labs.com/cm/cs/who/dmr/bigbio1st.html
[3] http://cm.bell-labs.com/who/dmr/   


Kevin Shortt
ISC Handler on Duty


$ gcc dmr.c
$ ./a.out
goodbye, dmr. RIP.


Published: 2011-10-13

Critical OS X Vulnerability Patched

With today's focus on the release of iOS 5, and people worldwide refreshing the UPS shipping status page to check if the iPhone 4S left Hong Kong or Anchorage yet, a patch released for OS X Lion (10.7) came in under the radar. In addition to bringing us iCloud support and a good number of other security related patches, one issue sticks out as SUPER CRITICAL, PATCH NOW, STOP THAT iOS 5 DOWNLOAD.

The exploit can be implemented in a single line of JavaScript and will launch arbitrary programs on the user's system. It does not appear that the attacker can pass arguments to the software, which may make real malicious exploitation a bit harder, but I am not going to wait for an improved proof of concept to prove me wrong.

That said: It is our policy not to link to exploit code. Search twitter and other outlets for links. We may reconsider if we see the code used maliciously. At this point, I am only aware of the PoC site. Please let us know if you spot it anywhere else.

NB: My Macbook failed to boot after applying the update. Still debugging why :(

Update: In my case, the Macbook boot failed because I had Symantec's PGP software installed. I didn't use the whole disk encryption, but PGP still installed drivers that turned out to be the problem. My recovery process:

- hold Command+R during boot to boot into recovery mode (if you have a recovery partition)
- if you are using FileVault 2, launch Disk Utility to unlock the disk
- remove the following files from your system disk (which is now mounted under /Volumes)


This did it for me. The next reboot went fine. For more details see the following sites that helped me get this working:


Johannes B. Ullrich, Ph.D.
SANS Technology Institute


Published: 2011-10-12

Critical Control 8 - Controlled Use of Administrative Privileges


Next up, Critical Control 8.  This one shines a spotlight on the need to place tight controls around the use of Admin or any Powerful Privileges on all of your systems.  Essentially, what this means is Admin access (root/Administrator accounts) should be tightly controlled and monitored for use and abuse.

Exploiting the Control 

The Admin privileges can always be exploited when controls are not present. Here are some quick examples of why these controls are important.

  1.  When Admin accounts are used regularly, they can be exploitable...

         - when a malicious email is opened.
         - when a malicious file is downloaded and opened.
         - when the user visits a site that can exploit the browser.
             (these exist whether unwittingly or inadvertently)

        …which gives enough access to own your system and your data.

   2.  When user accounts with Admin privileges are configured with standing access to privilege escalation and little accountability, they can be exploitable…

         - by exploiting the user account through one of the methods above…
         - by password guessing the user account with standing privileges…

    …then escalating access to own your system and your data.


The definition of Critical Control 8  identifies 8 QUICK WINS.  I will not cover them here.  Read through them, and do not be shy about sharing ideas in the comments on how to implement them.  We all want to read more!

One example...

I can provide some detail on the use of sudo to assist in mitigation of risk and the use of the root account on your UNIX servers.  
One method is to implement the following controls to the root account in order to minimize its use and abuse of privilege.

  1. Automate the changing of the root password on a regular basis. Daily is my recommendation.  There are many ways to accomplish this, so please share your ideas.
  2. Limit access to the operational staff to an “as needed” basis. When crisis/incident/support needs arise, provide a mechanism for them to “check out” or “look up” the root password.  Again many ways…share your ideas.  
  3. A way to keep the revolving need at bay and minimize the exposure to root for any ops support team is to create a list of common commands the systems administrator staff use daily.  Take this list and configure sudo to provide standing access to an exhaustive but limited command set.  This mechanism provides two things:  

               - Lessens the opportunity for the abuse of privilege.
               - Provides accountability to the user that executes the commands.
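A minimal sketch of the password rotation in step 1 follows, e.g. run daily from cron. The escrow path below is purely illustrative; a real deployment must put the password into a properly secured "check out" store, not a world-readable location:

```shell
#!/bin/sh
# Sketch: automated daily root password rotation.
# Escrow location is illustrative only -- lock it down in practice.

NEWPW=$(openssl rand -hex 8)        # 16 random hex characters

ESCROW=/tmp/root-pw-escrow          # placeholder path for this sketch
umask 077
printf '%s %s\n' "$(date +%F)" "$NEWPW" >> "$ESCROW"

# Apply it (commented out so this sketch is safe to run as-is):
# printf 'root:%s\n' "$NEWPW" | chpasswd

echo "rotated root password (${#NEWPW} chars, escrowed in $ESCROW)"
```

How staff "check out" the escrowed password (and how check-outs are logged) is where the accountability comes from, so design that part first.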

A brief example of implementing this sudo rule set can be:

(NOTE: this is NOT an exhaustive list, it is brief only to illustrate.)

Place this rule set into your sudoers file, create an "admins" group containing all of your system admins, and they will have the ability to use sudo to execute these commands as root.

     User_Alias SYSTEM_ADMINS = %admins

     Cmnd_Alias ADMIN_COMMANDS =       \
                       /bin/date,      \
                       /bin/kill,      \
                       /bin/mount,     \
                       /bin/umount,    \
                       /sbin/ifconfig

     SYSTEM_ADMINS ALL = (root) ADMIN_COMMANDS

I have in the past been involved with efforts of this nature in sizeable shops.  It is difficult at first, but it can provide good efficiencies and it always keeps the auditors happy when it comes to US SOX laws and the like. 

Please feel free to share what other ideas are being used out there.

Implementation, Metrics, and Testing

Controlling the use of Admin Privileges is no small task, and only gets harder as your environment continues to grow.  So if your shop is small, get to it.  It will never get easier than it is today. 

The control definition on the URL above provides some insight on Metrics, Testing and Monitoring the use of admin privs.  Read through it and please use the comment button to provide some ideas and feedback on the following:

  • Any other examples of gaps that this control proposes to mitigate? (I offered two above.)
  • What can be used to accomplish the 8 QUICK WINS?
  • What controls can be used for Windows Powerful Privilege? (Many of us want to hear what you do!)
  • Any variations of sudo that can provide some good control?
  • What are you using for Sensors, Measurement, and Scoring? (See CC8 definition: http://www.sans.org/critical-security-controls/control.php?id=8)

The more we share these ideas the safer all of our systems and data will be.

Kevin Shortt
ISC Handler on Duty


Published: 2011-10-11

Microsoft Security Intelligence Report (SIR) - Volume 11

Microsoft released today volume 11 of its Security Intelligence Report covering the first half of 2011.

Swa Frantzen -- Section 66


Published: 2011-10-11

Apple iTunes 10.5

Apple released iTunes 10.5 for Windows and Mac OS X. For those following Apple this comes as no big surprise, as functionality changes were expected due to the imminent release of a new iPhone model. What is a bit surprising, however, is that they also released an impressive list of fixed vulnerabilities in the Windows version of iTunes.

Even more interesting is that the list also mentions, e.g., "For Mac OS X v10.6 systems, this issue is addressed in Security Update 2011-006" and "For OS X Lion systems, this issue is addressed in OS X Lion v10.7.2". Those are, respectively, a security update and an OS update that had not yet been released at the time of writing.

Swa Frantzen -- Section 66


Published: 2011-10-11

Microsoft Black Tuesday Overview October 2011

Overview of the October 2011 Microsoft patches and their status.

For each bulletin: the affected component, known exploits, Microsoft rating(**), and the ISC rating(*) for clients / servers.

MS11-075 (Active Accessibility)
A vulnerability allows random code execution with full system rights through loading a hostile library from a WebDAV network share. Related to SA 2269637.
KB 2623699. No publicly known exploits. Microsoft severity: Important. ISC rating: clients Critical / servers Important.

MS11-076 (Media Center)
A vulnerability allows random code execution with full system rights through loading a hostile library from a network location. Related to SA 2269637.
KB 2604926. Exploits are trivial to find on the Internet. ISC rating: clients Critical / servers Less Urgent.

MS11-077 (Windows drivers)
Multiple vulnerabilities in Windows drivers allow Denial of Service, privilege escalation and random code execution. Replaces MS11-054.
KB 2567053. No publicly known exploits. ISC rating: clients Critical / servers Important.

MS11-078 (.NET framework)
A vulnerability in .NET (XAML Browser applications) and Silverlight allows random code execution with the rights of the logged-on user. Also affects IIS servers configured to process ASP.NET pages. Replaces MS09-061, MS10-060 and MS10-070.
KB 2604930. No publicly known exploits. Microsoft severity: Critical. ISC rating: clients Critical / servers Critical.

MS11-079 (Forefront Unified Access Gateway (UAG))
Multiple vulnerabilities in Forefront Unified Access Gateway allow Denial of Service, privilege escalation and random code execution with the rights of the logged-on user. It affects both the client and server components; the impact is greater on the clients.
KB 2544641. No publicly known exploits. Microsoft severity: Important. ISC rating: clients Critical / servers Important.

MS11-080 (Ancillary Function Driver (AFD))
An input validation vulnerability in the afd.sys driver allows privilege escalation. Replaces MS10-046.
KB 2592799. No publicly known exploits. Microsoft severity: Important. ISC rating: clients Important / servers Less Urgent.

MS11-081 (Internet Explorer)
The usual monthly collection of vulnerabilities in Internet Explorer. Cumulative patch; all versions from IE6 to IE9 are affected. Replaces MS11-057.
KB 2586448. No publicly known exploits. Microsoft severity: Critical. ISC rating: clients Critical / servers Important.

MS11-082 (Host Integration Server)
Vulnerabilities in Host Integration Server allow denial of service. The Host Integration Server listens on udp/1478, tcp/1477 and tcp/1478.
KB 2607679. Both vulnerabilities are publicly known. Microsoft severity: Important. ISC rating: clients Less Urgent / servers Important.
We will update issues on this page for about a week or so as they evolve, and we appreciate updates.
US-based customers can call Microsoft for free patch-related support at 1-866-PCSAFETY.
(*): ISC rating
  • We use 4 levels:
    • PATCH NOW: Typically used where we see immediate danger of exploitation. Typical environments will want to deploy these patches ASAP. Workarounds are typically not accepted by users or are not possible. This rating is often used when typical deployments make it vulnerable and exploits are being used or easy to obtain or make.
    • Critical: Anything that needs little to become "interesting" for the dark side. Best approach is to test and deploy ASAP. Workarounds can give more time to test.
    • Important: Things where more testing and other measures can help.
    • Less Urgent: Typically we expect the impact if left unpatched to be not that big a deal in the short term. Do not forget them however.
  • The difference between the client and server rating is based on how you use the affected machine. We take into account the typical client and server deployment in the usage of the machine and the common measures people typically have in place already. Measures we presume are simple best practices for servers such as not using outlook, MSIE, word etc. to do traditional office or leisure work.
  • The rating is not a risk analysis as such. It is a rating of importance of the vulnerability and the perceived or even predicted threat for affected systems. The rating does not account for the number of affected systems there are. It is for an affected system in a typical worst-case role.
  • Only the organization itself is in a position to do a full risk analysis involving the presence (or lack of) affected systems, the actually implemented measures, the impact on their operation and the value of the assets involved.
  • All patches released by a vendor are important enough to have a close look if you use the affected systems. There is little incentive for vendors to publicize patches that do not have some form of risk to them.

(**): The exploitability rating we show is the worst of them all due to the too large number of ratings Microsoft assigns to some of the patches.

Swa Frantzen -- Section 66


Published: 2011-10-11

Critical Control 7 - Application Software Security

[the following is a guest diary contributed by Russ McRee]

Given the extraordinary burst in headlines over the last six months relating to "hacktivist" exploitation of web application vulnerabilities, Critical Control 7: Application Software Security deserves some extra attention.

The control describes WAF (Web Application Firewall) use, input validation, testing, backend data system hardening, and other well-defined practices. Not until the 6th suggested step does the control state: “Organizations should verify that security considerations are taken into account throughout the requirements, design, implementation, testing, and other phases of the software development life cycle of all applications.”
For your consideration: it can be argued that, as a canonical principle, strong SDL/SDLC practices woven into the entire development and deployment process lead to a reduction of attack vectors. Reduce said vectors, and the mitigations provided by enhanced controls become less of a primary dependency. Long story short, moving SDL/SDLC practices to the front of the line, while not a “quick win,” can be a big win. That’s not to say that SDL/SDLC replaces or supplants controls, but a reduction in risk throughout the development process puts the onus on secure code, where controls become an additional layer of defense rather than the only layer of defense.
One of the advantages to a strong SDL/SDLC practice is the prescription of threat modeling where classification schemes such as STRIDE or DREAD help identify issues early as part of the development lifecycle rather than reactively or as part of controls-based activity.

OWASP offers excellent resources to help with SDL/SDLC efforts.

As you take a look at testing “in-house-developed and third-party-procured web applications for common security weaknesses using automated remote web application scanners” don’t fall victim to vendor hype. Test a number of tools before settling on one as some tools manage scale and application depth and breadth very differently. If you’re considering monthly or ongoing scans of applications that may serve thousands of unique “pages” but with very uniform code, you’ll want a scanning platform that can be configured to remove duplicate items (same URL and parameters) as well as items with media responses or certain extensions.
There is a wide array of offerings, commercial and free/open source, so test well; you may want to adopt more than one, particularly if you’re leaning toward inexpensive or free tools. Static code analysis tools are more often commercial, but there are some free/open source offerings there as well. Plenty of search results will get you pointed in the right direction but, again, test more than one. The diversity of results you’ll receive from different tools for both dynamic and static testing will surprise you.
Always glad to share experience with some of the tools in these categories should you have questions via russ at holisticinfosec dot org.


  • A strong SDL/SDLC program reduces dependencies on controls.
  • Test a variety of dynamic and static web application testing tools.


Published: 2011-10-10

What's In A Name?

"What's in a name? That which we call a rose
By any other name would smell as sweet."
– Juliet, Romeo and Juliet (II, ii, 1-2)

"A good name is more desirable than great riches; to be esteemed is better than silver or gold." – Proverbs 22:1 (NIV)

A rose is a rose is a rose

What if I could hack your organization and abuse your company’s reputation – and what if I could do it without your firewall, IDS, IPS, or your host-based badware detection making a peep?

What if I could use your organization’s good name to sell ED drugs, questionable Facebook "apps," shady online "personal ads," or to distribute porn that would make a sailor blush?

What if I did all of that, and you didn’t know? What if the hack itself took place on a machine you didn’t directly control and only accessed rarely?  And what if the hack was so subtle, so obscure, and so difficult to find that once I had it in place, it might be years before you ever stumbled across it – if you ever stumbled across it?

This nightmare scenario is, unfortunately, reality for at least 50 organizations – ones that I’ve been able to uncover – and I'm certain that there are many, many more.  Each of these organizations has been a victim of a malicious alteration of their domain information – an alteration that added new machine names to their existing information, and allowed bottom-feeding scam artists to abuse their good reputation to boost the search-engine profile of their drug, app, "personal ad," or porn sites.

Take a look at the following table:

These sites... Resolve To While the main site... Resolves To
buy-viagra.4kidsnus.com www.4kidsnus.com
drugs-1501.abingtonurology.com www.abingtonurology.com
payday-loans.accessbank.com www.accessbank.com
cialis.advancedsynthesis.com www.advancedsynthesis.com
cialis.apptech.com www.apptech.com
buy-cialis.asfiusa.com www.asfiusa.com
facebook.blueagle.com www.blueagle.com
buy-cialis.boothscorner.com www.boothscorner.com
24-buy-cialis.campsankanac.org www.campsankanac.org
viagra.cccsaa.org www.cccsaa.org
buy-cialis.cfi.gov.ar www.cfi.gov.ar
mg-drugs.chesarda.org www.chesarda.org
viagra.cranehighschool.org www.cranehighschool.org
buy-cialis.dollardiscount.com www.dollardiscount.com
buy-cialis.eap.edu www.eap.edu
buy-cialis.ejercito.mil.do www.ejercito.mil.do
buy-cialis.elbertcounty-co.gov www.elbertcounty-co.gov
cheap-viagra.ellerbecreek.org www.ellerbecreek.org
cialis-buy.esad.org www.esad.org
buy-cialis.fabius-ny.gov www.fabius-ny.gov
1-facebook.fwbl.com www.fwbl.com
facebook-i.georgetownky.gov www.georgetownky.gov
rx-drugs.golocalnet.com www.golocalnet.com
mg-drugs.goodhope.com www.goodhope.com
buy-cialis.hamwave.com www.hamwave.com
buy-cialis.haskell.edu www.haskell.edu
cialis.hiwassee.edu www.hiwassee.edu
buy-viagra.hothouse.net www.hothouse.net
buy-cialis.iiehk.org www.iiehk.org
buy-viagra.karen.org www.karen.org
facebook.lisboniowa.com www.lisboniowa.com
cialis.medpharmsales.com www.medpharmsales.com
buy-cialis.menalive.com www.menalive.com
buy-viagra.mvas.org www.mvas.org
buy-cialis.nywolf.org www.nywolf.org
buy-cialis.okgolf.org www.okgolf.org
loans.omill.org www.omill.org
cialis.onyvax.com www.onyvax.com
drugs-1501.pattywagstaff.com www.pattywagstaff.com
1-payday-loans.qunlimited.com www.qunlimited.com
1-facebook.rivcoems.org www.rivcoems.org
buy-cialis.sacmetrofire.ca.gov www.sacmetrofire.ca.gov
buy-cialis.santafeproductions.com www.santafeproductions.com
cialis.saturdaymarket.com www.saturdaymarket.com
buy-cialis.seabury.edu www.seabury.edu
buy-cialis.symspray.com www.symspray.com
buy-cymbalta.tcsys.com www.tcsys.com
buy-viagra.ubf.org www.ubf.org
drugs-1801.uhsurology.com www.uhsurology.com
buy-cialis.uniben.edu www.uniben.edu
buy-cialis.viethoc.org www.viethoc.org
drugs.williamson.edu www.williamson.edu
payday.yanceycountync.gov www.yanceycountync.gov
Note: These IP addresses can (and should) change.  The above information was gathered 10-7-2011 13:00 UTC

Over 150 "new" entries have been created in the zone information for these organizations.  Each of these new "sites" inherits whatever good reputation the parent domain may have accumulated, and is, therefore, valuable as a means of search engine optimization (SEO).

The following table shows that these hacks occurred at multiple DNS providers with a few being somewhat more "popular" than others:

Domain DNS Provider
4kidsnus.com dnsexit.com
ejercito.mil.do hostmonster.com
apptech.com ipage.com
qunlimited.com justhost.com
advancedsynthesis.com lunariffic.com
compliancemedical.com myhostcenter.com
menalive.com nocdirect.com
fabius-ny.gov pipedns.com
chesarda.org powweb.com
nywolf.org wiredtree.com
karen.org yourhostingaccount.com
Down the Rabbit Hole

Finding these sites was a matter of luck and perseverance.  Initially, I happened across a single, odd-sounding site name while looking for organizations that had been compromised by the bad guys for SEO purposes.  Using tools that attempt to list all of the domain records pointing to a particular IP address led me to more.  Google searches for sites linking to these domains led me further.  Unquestionably, there are more of these types of sites out there – some not currently in use.   However, because there is no good way to truly search DNS information, attempting to find these from the "outside" is difficult and frustrating.

"Round up the usual suspects..."

How did this happen? Unsurprisingly, no one I talked to about this was standing at the front of the line, ready to take the blame for these issues: Domain owners swear they used good passwords and are sure that the DNS providers were hacked, DNS providers are certain that the Domain owners used lousy passwords on their accounts... 'round and 'round we go. 

My gut tells me that the truth lies somewhere in between: bad passwords combined with poor account lockout controls on something like a cPanel-type web interface probably led to successful brute force attacks on most of these... I could, however, be completely wrong. Unfortunately, I just don't have the time to chase every one of these to ground.

Don’t Let This Happen To You

  • Check your DNS zone file information periodically, just to make sure nothing has been added without your knowledge.
  • Choose passwords wisely, especially on interfaces where brute-force attacks are likely (i.e. pretty much anything accessible from the internet).  Never use dictionary words.  And remember: while "qwertyuiop" may not be in your dictionary, it IS in mine...
  • Periodically take a look at your website as Google sees it (Google search: "site:yoursite.com" – NOT "site:www.yoursite.com"), and look through the results for anything out of the ordinary.  Toss a few choice keywords in as well ("Viagra," "Cialis," "drugs," "personals," etc.).  This kind of search can help you discover many different types of issues with your site.
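The first bullet can be largely automated: keep an approved list of host names and diff the live zone against it. A sketch with illustrative names follows (how you dump the current records, whether from a zone file copy, your provider's control panel, or AXFR where permitted, is up to you):

```shell
# Sketch: flag zone entries that are not on an approved list.
# Both file contents below are illustrative placeholders.

cat > /tmp/approved.txt <<'EOF'
mail.example.com
www.example.com
EOF

cat > /tmp/zone-dump.txt <<'EOF'
buy-cialis.example.com
mail.example.com
www.example.com
EOF

# Anything in the live zone but not on the approved list deserves a look.
grep -F -x -v -f /tmp/approved.txt /tmp/zone-dump.txt |
while read name; do
    echo "UNEXPECTED RECORD: $name"
done
```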


Tom Liston
ISC Handler
Senior Security Consultant, InGuardians, Inc.
Twitter: @tliston



Published: 2011-10-10

Critical Control 6 - Maintenance, Monitoring, and Analysis of Security Audit Logs

The next of our critical controls for Cyber Security Awareness Month is log management/monitoring/analysis.  This has been an interest/passion of mine for a long time. As Eric Cole (among others) is fond of saying in SEC 401, prevention is ideal, but detection is a must.  If you aren't logging as much as possible, how will you ever know when something bad happens?

As mentioned in a couple of previous diaries this month, one of the keys for this control is that the clocks on all of the log-generating devices (routers, switches, firewalls, servers, workstations, ...) be synchronized, so NTP is your friend.
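A minimal NTP client configuration might look like the following (the pool servers are placeholders; point your devices at your own internal time sources where possible):

```
# Minimal /etc/ntp.conf sketch
server 0.pool.ntp.org iburst
server 1.pool.ntp.org iburst
driftfile /var/lib/ntp/ntp.drift

# Restrict who can query or modify the daemon:
restrict default kod nomodify notrap nopeer noquery
```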

Another key is to collect the logs somewhere other than the device that generates them: our "central log server."  This server should be one of the most locked down, best protected servers in the enterprise.  This way, even if the bad guys breach one of the servers and are able to modify the logs on it to hide their tracks, there will still be an unmodified copy of the logs on the log server.

All of this does you no good if you aren't actually looking at the logs, and this is where you need both some software to automate things and an experienced analyst.  The software is necessary because sheer volume can quickly overwhelm an analyst.  This doesn't necessarily mean you need to spend a lot of money, though.  While the commercial SIEM packages are good, you can accomplish a lot with free software like awk and grep.  In 1997, Marcus Ranum introduced the notion of "artificial ignorance," the idea of using software to remove the "known good" entries so the analyst can concentrate on the new/unusual stuff.  For a number of years, I used his nbs (never before seen) software on my home system (though I recently tried to recompile it and ran into an issue that I haven't taken the time to track down yet).  Just last week I saw the announcement of some new software, called LogTemplater, that implements a similar idea.  I've just started looking at it, but it looks like it has some promise.
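The "artificial ignorance" idea needs nothing fancier than grep: maintain a file of known-good patterns and review only what falls through. A sketch with illustrative patterns and log lines:

```shell
# Sketch of Ranum's "artificial ignorance": strip known-good log lines
# so the analyst only reviews what's new. File names are illustrative.

cat > /tmp/known-good.re <<'EOF'
sshd\[[0-9]+\]: Accepted publickey for backup
CRON\[[0-9]+\]: \(root\) CMD
EOF

cat > /tmp/today.log <<'EOF'
Oct 10 03:15:01 host CRON[1234]: (root) CMD (logrotate)
Oct 10 03:17:44 host sshd[2345]: Accepted publickey for backup from 10.0.0.2
Oct 10 03:19:02 host sshd[3456]: Failed password for root from 203.0.113.7
EOF

# Everything matching a known-good pattern is removed; what remains is
# "never before seen" and worth a human look.
grep -E -v -f /tmp/known-good.re /tmp/today.log
```

Each time the analyst clears a new benign pattern, it gets added to the known-good file, so the daily residue keeps shrinking.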

Once you've cut the logs down to a manageable volume, the analyst is still crucial.  Analysis is an area where I personally think you are doing your enterprise a disservice by making it the job of the newbie.  An analyst who knows the environment and has developed a feel for what is normal can much more quickly home in on where the real problems are.  On the other hand, if the newbie can work with an experienced analyst, this is a good way to quickly learn the environment.

There is no point in me repeating everything that is already at the SANS critical controls page, so please check out the page linked below.

So, what do you use for your log analysis?  Let us know either in the comments section below or via our contact page.





Jim Clausing, GIAC GSE #26
jclausing --at-- isc [dot] sans (dot) edu


Published: 2011-10-07

Critical Control 5 - Boundary Defence


The next control on the list is boundary defence. Many organisations have recognised that protecting the perimeter, whilst important, is no longer what it is all about.  Many organisations have what we generally consider a hard crunchy outside and a soft squishy centre. The "internal" network is expanding into people's homes via VPN, onto mobile devices, into partner organisations and more. So boundary protection is nowadays more appropriate than perimeter protection. This is reflected in some of the standards that are around (think PCI and various government-specific standards). A few years ago internal network segmentation was not very common.  Today we are starting to see more network segmentation within organisations, and people are exercising more control over traffic that flows through the network.

Many of the more spectacular breaches in the past year or two have been traced back to client-side attacks.  This is where good boundary defences can help reduce the risk.  For example, an organisation that has thought about the different types of uses for its network, the location of its data and how that data is to be accessed can start segmenting the network. It can implement measures to control the traffic or monitor it at the different boundaries.  Client-side attacks may still work, but the exfiltration of data may be detected, and the impact of the breach is reduced because the infected machine no longer has full access to the whole network.

When thinking about boundary defence it also pays to think about how traffic is supposed to flow through the environment.  As part of this make sure you have policies in place that help you enforce this flow, e.g. no direct connections to the internet, all traffic must flow through a DMZ, etc. Once you have the architecture straight and you understand how information flows within the environment and how people access it, then it is time to start adding controls.

To control flows between network segments:    

  • Firewalls, external facing and internal.
  • Routers with ACLs (OK for certain internal uses, but you might want to steer clear of using this as your only defence at the perimeter).
  • Intrusion Prevention System (IPS)
  • Consider jump servers for management of sensitive network segments.

Controlling specific Traffic flows:

  • Web traffic - Web filter to detect malware, filter access to malicious domains, perform URL filtering.
  • Mail - Mail relay in the DMZ. Implement Sender Policy Framework (SPF) and/or DKIM to help others identify your authorised mail senders. Use AV/malware and anti-spam filtering in the DMZ (you might want to do the same on the internal mail filter).
  • Remote Access - Use 2 factor authentication, and control network traffic
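An SPF policy is published as a TXT record in your zone; a sketch, where the domain and the relay address are placeholders:

```
; Illustrative SPF record: mail may come from the domain's MX hosts
; and one named relay; everything else should be rejected (-all).
example.com.  IN TXT  "v=spf1 mx ip4:192.0.2.25 -all"
```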


  • DLP solutions - Monitor all traffic for information regarding your crown jewels.
  • Intrusion Detection - look for threats in traffic flows on the network or use a host IDS to identify specific host threats.
  • Central logging and review (e.g. SIEM).

There are many other ways of defending the boundary, let us know what you have found to be effective.



Published: 2011-10-06

Apache HTTP Server mod_proxy reverse proxy issue

The reverse proxy feature (mod_proxy) has a new vulnerability.  If pattern matching is used, a crafted request (using invalid input - even though this does not involve SQL, the "Little Bobby Tables" XKCD comes to mind again, for like the 3rd time this week!) can expose information on internal hosts.
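Based on the advisory, the affected configurations are proxying rules whose pattern does not anchor on a leading slash. A hedged sketch (the host name is a placeholder; see the advisory link below for the authoritative guidance):

```apache
# Reported vulnerable style: the unanchored pattern lets a crafted
# request URI such as "@internal-host/..." be spliced into the target
# URL, turning the intended host into mere userinfo.
RewriteRule ^(.*) http://internal.example$1 [P]

# Safer style: require the leading "/" so only well-formed URIs match.
RewriteRule ^/(.*) http://internal.example/$1 [P]
```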

Full details (and remediation) here ==> http://seclists.org/fulldisclosure/2011/Oct/232

Patch is available for 2.2.21 here==> http://www.apache.org/dist/httpd/patches/apply_to_2.2.21/

The CVE is pretty sparse, but look for more content soon ==> CVE-2011-3368

Rob VandenBrink


Published: 2011-10-06

Critical Control 4 - Secure Configurations for Network Devices such as Firewalls, Routers, and Switches


Hardening network infrastructure is an often overlooked step.  For some reason, switches and routers often fall into the category of "it works, we must be done".  Or, if it was hardened when installed, it'll be checked off as "done" (as in "done forever"). 

If you think about it, your routers, switches and firewalls touch *everything*.  We really should put a sustained effort into securing these devices as vital parts of the infrastructure. Don't limit yourself to routers, switches and firewalls here - be sure to include Fiber Channel switches, load balancers, and IPS servers and appliances (yes, I see these get missed all the time!) in this category as well.

This sustained effort should have all the usual suspects:

  • Change control
  • Logging and time synchronization
  • Named user accounts (often using a back-end directory for authentication)
  • Encrypted administration protocols (no more telnet!)
  • Verify boot images before installing, and periodically after
  • Periodically update to remediate security exposures
  • Harden the device using a public or custom benchmark (yes, even firewalls are not hardened out of the box)
  • Audit the final configs against the benchmark

Hardening Steps
We've had numerous diaries on hardening steps, including the ones below (please let me know if I've missed any - I've only included the ones I could remember):

  • Logging  ( https://isc.sans.edu/diary.html?storyid=6100 )
  • Implementing ARP inspection to prevent Man in the Middle Attacks (use with caution) ( http://isc.sans.edu/diary.html?storyid=11650 , http://isc.sans.edu/diary.html?storyid=7567 )
  • Implementing DHCP Snooping to prevent Rogue DHCP Servers (usually these home routers gone bad, but they can be real attackers too) ( http://isc.sans.edu/diary.html?storyid=7567 , http://isc.sans.edu/diary.html?storyid=8233 )
  • Implementing encrypted management protocols - i.e. stamping out telnet and HTTP, and migrating to SSHv2 and HTTPS ( http://isc.sans.edu/diary.html?storyid=11434 )

There have also been some recent papers in the reading room on scripting capabilities on routers, which can also be exploited:

  • Using routers for port scanning and reconnaissance (IOSMAP) ( http://www.sans.org/reading_room/whitepapers/tools/iosmap-tcp-udp-port-scanning-cisco-ios-platforms_32964 )
  • Bypassing or hijacking firewall functions on routers (IOSCAT) ( http://www.sans.org/reading_room/whitepapers/tools/iosmap-tcp-udp-port-scanning-cisco-ios-platforms_32964 )
  • Using routers to host malware (yes, really !! ) ( http://www.sans.org/reading_room/whitepapers/malicious/iostrojan-owns-router_33324 )

(note that forcing signature of scripts is the remediation for all of these)

Looking for specific documents that you can use as Benchmarks to Audit or as Guides in hardening your infrastructure?  The most common ones referred to are:

  • CIS Router Benchmark ( http://benchmarks.cisecurity.org/en-us/?route=default )
  • CIS Switch Benchmark ( http://benchmarks.cisecurity.org/en-us/?route=default )
  • CIS Firewall Benchmarks ( http://benchmarks.cisecurity.org/en-us/?route=default )
  • RFC 3871 - Operational Security Requirements for Large Internet Service Provider (ISP) IP Network Infrastructure (http://www.faqs.org/rfcs/rfc3871.html )

The CIS Benchmarks have an advantage here, in that they also have an assessment tool to compare, audit and score a captured configuration against a benchmark.
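The scoring idea behind such assessment tools can be sketched in a few lines: count how many required statements appear in a captured config. (The rules and config below are invented for illustration; real benchmarks have hundreds of weighted, context-sensitive checks.)

```python
# Hypothetical benchmark rules and a captured (partial) config:
required = [
    "service password-encryption",
    "ip ssh version 2",
    "no ip http server",
]

config = """hostname rtr01
ip ssh version 2
no ip http server
"""

config_lines = set(config.splitlines())
passed = [rule for rule in required if rule in config_lines]
score = len(passed) / len(required)
print("passed %d of %d checks" % (len(passed), len(required)))
```

Run this after every change window and a dropping score is an immediate flag that a change weakened your posture.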

Don't neglect vendor documentation in your efforts (Cisco, Brocade, Extreme, Juniper and all the rest).  Vendor docs will include their own security and hardening guides - in many cases the same recommendations are covered, but the specific commands will of course vary from vendor to vendor.  In other cases, they'll have security guidance that is specific to that vendor's features, platform or technology (Fiber Channel, for instance, will have quite different security guidance compared to Ethernet).

Below are some example config lines for common recommendations (Cisco syntax is shown; most other vendors' syntax is pretty similar - check your documentation and assess for impact in your environment before implementing any of these blindly).  Note that these examples do not constitute a complete hardening guide (use the links above for that); they fall into the low-hanging-fruit category - things that are easy to change and will make a significant difference.

NTP (Network Time Protocol)
On most gear, setting up NTP for time sync is dead easy.  In most environments, you'll have a redundant pair of routers or switches that you can set up as the main NTP servers for the infrastructure (other sites might use Linux or Unix hosts, or dedicated timeserver appliances).  Normally these get 2 "reliable" NTP time sources (often we'll pick 2 unrelated, reliable NTP servers on the 'net - dedicated NTP servers will often have an atomic clock on board).  Everything else in the environment will point back to these hosts for their time.

ntp source GigabitEthernet0/0.1           ! setting the source is optional but recommended
ntp server
clock timezone EST -5
clock summer-time EDT recurring

Similarly, logging everything back to a common syslog host is usually a one-liner (or close to it)

service timestamps log datetime localtime show-timezone   
logging buffered 8192 debugging

logging source-interface GigabitEthernet0/0.1      ! also optional but recommended

Setting the source interface for NTP, syslog and the rest is important: if you don't, and a backup link is activated, the source IP address for these will change and potentially mess up any log management you have in place.  Note that the source interface should be either a loopback or an interface that will always be live (in this case it's a WAN router, so I used the inside interface).

Also, the decision about what timezone to use in logging can differ from company to company.  If the entire network resides in a single timezone, normally local time is used (as seen above).  However, in larger organizations that span multiple timezones, a single timezone for all equipment can make troubleshooting a lot simpler.  In cases like this, it is common to see all the network gear log in GMT (Greenwich Mean Time), with the SYSLOG server perhaps adding a local timestamp to make it easier for the admins to find things in the gigs of logs.  Using GMT is the recommendation in most hardening guides.  In other organizations, all gear will be sync'd to the timezone that "head office" is in.  This accomplishes the same thing, but troubleshooting individual gear can get complex, especially if you are also factoring in information from end users who are in that timezone.  Because of these varying perceptions of "what time is it?", it's generally best to operate in GMT and have the gear report its timezone in the log entry.  (Thanks Don for highlighting this omission in the original story.)
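The GMT recommendation is easy to demonstrate: the same instant logged by a device in EST and one in GMT only lines up once you normalize. A quick sketch with Python's standard library (the timestamps are invented):

```python
from datetime import datetime, timedelta, timezone

EST = timezone(timedelta(hours=-5), "EST")

# The same event, as a GMT device and an EST device would timestamp it:
event_utc = datetime(2011, 10, 6, 14, 30, tzinfo=timezone.utc)
event_est = event_utc.astimezone(EST)

# The wall-clock fields differ (14:30 vs 09:30)...
assert event_est.hour != event_utc.hour
# ...but normalized back to GMT they are the same instant, so the logs
# from both devices correlate cleanly.
assert event_est.astimezone(timezone.utc) == event_utc
```

Logging everything in GMT (with the timezone in the log entry) skips the normalization step entirely at search time.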

Tying back to an external authentication source is a bit more complex, but it's usually only a few lines on the infrastructure gear.  If your back-end is AD, your authentication server is probably Microsoft IAS or NPS, and your config lines will look like this:

First, set up a RADIUS host that you have already configured for this:

radius-server host auth-port 1645 acct-port 1646 key <some key>
radius-server key <some key>
ip radius source-interface GigabitEthernet0/0.1           ! optional

Note again the source-interface thing.  If you miss this in RADIUS and another interface gets used (backup route activated or whatever), RADIUS will break unless you have all possible IPs defined on the RADIUS host - it's just way easier to use source interface commands.

Next, set up AAA (Authentication, Authorization, Accounting):

aaa new-model
aaa authentication login default group radius local

Once the login part is done, let's secure the remote admin access:

ip access-list standard ACL-VTY-IN  ! define authorized mgt stations and subnets

hostname routername
ip domain-name domainname.com     ! need an FQDN to define RSA keys
crypto key generate rsa general-keys modulus 2048      ! be sure to set a decent length
ip ssh version 2                      ! be sure to force SSH v 2

line vty 0 15
transport input ssh                 ! force SSH only
access-class ACL-VTY-IN in          ! enforce mgt station ACL

Let's say you want to back up your routers or switches daily (or Fiber Channel switches, or firewalls - really anything that has a decent CLI).  While we're at it, let's back up the version info, and also the md5 hash of the OS image to verify its integrity (yes, I know all about MD5 collisions, but MD5 hashes are what we have to work with on this platform).  We'll use plink (the text-based putty client) to collect the data via SSH:

plink -l %1 -pw %2 %3 "sho ver"  >inventory\%3_inventory.txt
plink -l %1 -pw %2 %3 "sho config" >> inventory\%3_inventory.txt
plink -l %1 -pw %2 %3 "verify /md5 flash:/c2800nm-advipservicesk9-mz.124-8.bin" >> inventory\%3_inventory.txt

%1 is the userid
%2 is the password
%3 is the ip address or resolvable name of the device
%3 is also used for the filename.  You could get fancy and include a date / time stamp in the filename as well.

I tend to use plink to collect input for RAT, as opposed to the SNARF method in RAT (which uses telnet).  This example is on Windows (putty / plink), but you can certainly write a similar script on Linux or OSX.  Note also that there are tools out there that do exactly this (CATOOLS, RANCID).
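On the collection side, you can compute the same MD5 over your locally archived copy of the image and compare it to what the device's "verify /md5" reported; a mismatch on either side means the image needs a closer look. A minimal sketch (the demo file merely stands in for a real archived IOS image):

```python
import hashlib

def md5_of(path: str) -> str:
    """MD5 of a file, read in chunks so large OS images don't eat RAM."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Demo against a throwaway file standing in for the archived image:
with open("image.bin", "wb") as f:
    f.write(b"not really an IOS image")
local_hash = md5_of("image.bin")
print(local_hash)
# Compare local_hash against the hash captured from "verify /md5" above.
```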

Let's combine all of this (NTP, SYSLOG, AAA Back-end Authentication, config backups) to illustrate why this is all important, and how it all inter-relates:

Now that you have your configurations backed up, you can run DIFF reports to see any changes from yesterday.

  • Were the changes made approved in your change management procedure? (Oops - that's a good thing to have too!)
  • Did the changes happen in the approved time? Did the change run long, or was it just plain applied outside the change window?
  • Did the approved changer make the change (the person who made the change should show up as a named user in the log, both when they logged in and when they made the changes)
  • If it's NOT an approved change, who made it (ditto, you'll see their name in the log)
  • And finally, now that the change is complete, does RAT indicate that you may have weakened your overall security posture?  Note that RAT audits you against the CIS Benchmark(s) - if you use a different hardening standard, you'll need either a different tool or script, or some manual translation to make the final call on this.  Note also that RAT is for audit, it's not a full security assessment tool - you'll need different tools for a full security assessment.
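The daily diff step behind the questions above can be sketched in a few lines (the config content is invented; in practice you'd feed it today's plink output and yesterday's):

```python
import difflib

yesterday = """hostname rtr01
service password-encryption
ip ssh version 2
""".splitlines()

today = """hostname rtr01
ip ssh version 2
no ip http server
""".splitlines()

# Every line surfacing here should map to an approved, in-window change.
changes = [line for line in difflib.unified_diff(yesterday, today, lineterm="")
           if line.startswith(("+", "-")) and not line.startswith(("+++", "---"))]
for line in changes:
    print(line)
```

Here the removal of "service password-encryption" would be exactly the kind of posture-weakening change you want flagged before the auditors find it.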

Again, this diary is all about catching the easy stuff that we see get missed all the time.  For a complete set of things to consider, including procedures, use the hardening Benchmarks, or define your own benchmark for a more complete picture that encompasses your organization's business requirements, policies and procedures.

If I've made any errors or especially typos, please use our comment form to set me straight !  More importantly, please use our comment form to let us know what you do in your environment - any tips or war stories are very welcome !  As always, our comment form is open 7x24.


PS - As an FYI, all IP addresses in this story are formatted as per RFC 5737 ( http://tools.ietf.org/rfc/rfc5737.txt ), which reserves specific IPv4 address spaces for documentation

Rob VandenBrink


Published: 2011-10-05

VMware Advisory - UDF file system handling

VMware has released security advisory VMSA-2011-0011 which describes a remote code execution vulnerability in VMware Workstation 7.1.4 and earlier, VMware player 3.1.4 and earlier, and VMware Fusion 3.1.2 and earlier.  Note, VMware released Workstation 8 and Fusion 4 late last month, so if you have upgraded to the bleeding edge, you are not affected.

Jim Clausing, GIAC GSE #26
jclausing --at-- isc [dot] sans (dot) edu


Published: 2011-10-05

Cisco Advisories - FWSM, ASA, and NAC

On the heels of the 11 bulletins (mostly IOS) that they released last week, Cisco has released 3 more today.

 The FWSM bulletin covers 4 DoS issues and one authentication bypass.  The ASA bulletin covers 2 of the same DoS issues, the same auth bypass, plus 1 additional DoS.  The NAC bulletin covers a directory traversal issue (by an unauthenticated user) against the (HTTPS) management interface.



Jim Clausing, GIAC GSE #26
jclausing --at-- isc [dot] sans (dot) edu


Published: 2011-10-05

Adobe SSL Certificate Problem (fixed)

Tuesday morning, we received a number of reports from readers indicating that the SSL certificate used for "settings.adobe.com" was out of date. Initially, we had a hard time reproducing the finding. But some of our handlers in Europe were able to see the expired certificate.

The expired certificate was valid from Oct 6th 2009 to Oct 6th 2010, which is somewhat unusual. Typically, we would expect a certificate that "just expired yesterday" and that someone forgot to renew. In this case, it looked more like someone installed an older certificate instead of the new one.

The correct certificate was pretty much exactly a year old and valid for another year. Everything indicated that the Adobe certificates indeed expire in the first week of October.

In the end, we narrowed the affected geography down to Europe and contacted Adobe. Adobe responded promptly and as of this evening, the problem appears to be fixed. Thanks everybody who helped via twitter narrowing down the affected geography and thanks to the readers reporting this initially.

Johannes B. Ullrich, Ph.D.
SANS Technology Institute


Published: 2011-10-04

Critical Control 3 - Secure Configurations for Hardware and Software on Laptops, Workstations and Servers


Like the two prior controls, this is all about gaining control of your network. Control 1 and 2 identify all the hardware and software you own. With control 3, we now start configuring this software (and hardware) securely.

In my opinion, there are really two problems you have to solve here:

- establish a baseline configuration

There are a number of well respected organizations that publish standard configurations. For example the Center for Internet Security, the NSA and DISA hardening guides, and of course guides provided by vendors like Apple and Microsoft. In most cases, these configuration guides will serve as a starting point, and you will have to adjust them to your local preferences and needs. Usually you will need a couple of different configuration templates for different roles. A laptop traveling with a salesperson from customer to customer needs to be configured differently than a server or a desktop in the IT department.

Once you've decided on a benchmark and customized it, you can build standard images used to build new machines. If you are a large enough customer, you may be able to convince your vendor to deliver systems already preconfigured to your specifications. If you decide to go this route: you still need to verify that the vendor followed your guidelines.

Hardened configurations are known to cause problems with patching and some advanced software features. The closer you stick to one of the well established guidelines, the more likely you are going to find help in working around these problems.

- maintain the baseline configuration 

Nothing is static, in particular in IT. Configurations will change, patches need to be applied, and new threats will require you to reconsider some of the choices you made when originally setting your default system configuration. However, all changes made to systems need to be carefully controlled and applied consistently. Configuration management tools will help get this job done. The configuration needs to be monitored continuously with tools like Aide or Tripwire to identify unauthorized changes quickly.
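The monitoring half of this can be sketched in a few lines: record a hash of every file in the baseline, re-hash on a schedule, and alert on drift - the same core idea as Aide or Tripwire, minus their databases and rule engines (the filename and contents below are invented for the demo):

```python
import hashlib

def snapshot(paths):
    """Map each file to its SHA-256 digest."""
    result = {}
    for p in paths:
        with open(p, "rb") as f:
            result[p] = hashlib.sha256(f.read()).hexdigest()
    return result

# Demo with a throwaway "config" file:
with open("sshd_config", "w") as f:
    f.write("PermitRootLogin no\n")
baseline = snapshot(["sshd_config"])

with open("sshd_config", "w") as f:
    f.write("PermitRootLogin yes\n")   # an unauthorized change
current = snapshot(["sshd_config"])

changed = [p for p in baseline if current[p] != baseline[p]]
print(changed)   # ['sshd_config']
```

The real tools add the pieces that matter operationally: a protected hash database, file-attribute checks, and rules for files that are supposed to change.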


Johannes B. Ullrich, Ph.D.
SANS Technology Institute


Published: 2011-10-04

Critical Control 2 - Inventory of Authorized and Unauthorized Software

From a support point of view, when someone calls the Helpdesk with a "there's something going on with my pc" question, very early in the process you'll want to know what is installed on that computer, and then what versions of each installed application.  It's also handy to know *when* things were installed - if things just started to go wrong, knowing what was just installed is a must-know.  Of course, the person making the call will always say "I didn't install anything", but once you have that list, the hasty "oh, except for that" is generally quickly forthcoming.

So a software inventory is useful for support, but why is it second on the Security Critical Controls list?  Well, if one of your users has rights to install their own software, they will.  As time goes by, is it likely that they'll install patches and updates?  What about version updates?  Did they pay for that app at all?  This highlights the big gaping hole in the "I'll admin my own machine" end-user argument.  Six months after they are given rights to administer their own computer, their software will be 6 months out of date, and the machine will have 6 months worth of security vulnerabilities on it (and most likely the exploits to match them).  Not a good thing to plug back into the head office LAN.

So, in a Windows environment, how can you easily get a listing of installed software?  Luckily, you can script all this stuff, and better yet, script it to run daily, from a central location, and store the results centrally.

Note, all the script examples are pretty much lifted from a semi-recent GIAC Gold Paper of mine ( http://www.sans.org/reading_room/whitepapers/auditing/admins-documentation-hackers-pentest_33303 ).  I won't put it forth as the be-all and end-all reference for this (I didn't invent any of this stuff), but it was a handy place for me to go for this stuff, since it's all in one place.  Another great place to go for LOTS of information on scripting is the Command Line Kung Fu site, at http://blog.commandlinekungfu.com/

We'll use WMIC (the Windows Management Instrumentation Command-line) for the Windows software inventory.  No surprise there - years ago, I would have used VBS files (and I still keep them around, just in case), but these days WMIC is just too easy for this type of reporting.  What you'll also find is that many of the fancy-dancy, for-sale-for-real-dollars inventory applications out there are simply a collection of WMI calls with a tuxedo on (a cool menu and pretty reports) - so you can save yourself some budget dollars and run these reports yourself, using your knowledge of your environment and a little bit of script development time.

To get all installed software in a Windows Domain or subnet range,  you'll only need a few commands:

wmic os get name
  Get the OS installed on the station
wmic os get servicepackmajorversion
  Next, the Service Pack
wmic qfe list brief
  And the list of patches and updates (QFE = Quick Fix Engineering)
wmic service list
  Next, list the services installed
wmic product get vendor, name, version, installdate, packagecache, description, identifyingnumber
  Finally, for all installed applications, list the information we might find useful in this context

For all of these commands, we'll tack on a formatting option to pretty up the output, presenting the report as an HTML table (you'll see /format:htable and /format:hform in the script below).

Now we have the bare bones of a script.  Let's put it all together in a short script we'll call inven.cmd (short for inventory).
inven.cmd will run all the associated reports and drop them into a separate subdirectory named for each host being inventoried.  (Note from the environment variables that I pulled this little script out of a much larger one.)

=========== inven.cmd =========

set HOST=%1
set DIRNAME=%1
set UID=%2
set PWD=%3

rem create the per-host output directory if it's not already there
if not exist %DIRNAME% md %DIRNAME%


wmic /output:%DIRNAME%\patches.htm /user:%UID% /password:%PWD% /node:%HOST% qfe list brief /format:htable
wmic /output:%DIRNAME%\os.htm /user:%UID% /password:%PWD% /node:%HOST% os list full /format:hform
wmic /output:%DIRNAME%\products.htm /user:%UID% /password:%PWD% /node:%HOST% product get vendor,name,version,installdate,packagecache,description,identifyingnumber /format:htable
wmic /output:%DIRNAME%\services.htm /user:%UID% /password:%PWD% /node:%HOST% service list /format:htable


Wait, but I said we'd run that for the entire Windows AD Domain - how do we do that?

First, get a simple list of all computers in the domain - we'll use DSQUERY for that.  The easiest way to run this is from a Domain Controller.  Note that I'm using "cut" to only give me just the names of the computers in a list - you can get "cut" by installing Microsoft Services for Unix (SFU), or use GNUTILS like throwbacks like me (I'm still getting around to installing SFU everywhere I need it, whereas the GNU utilities are all self-contained exe's)

dsquery computer  -s DCname  -u domainname\administrator -p adminpassword -limit 10000  | cut -d "," -f1 | cut -d "=" -f2 >>hostlist.txt

(you could use "dsquery servers"  to inventory server class computers)

Now that we have inven.cmd and the list of hosts in hostlist.txt, let's combine them and get the full report by creating DOMLOOP.CMD, which will contain a single line along these lines (note that USERID and PASSWORD will need rights to log in to the remote hosts and run the WMIC commands):

for /f %%H in (hostlist.txt) do call inven.cmd %%H USERID PASSWORD

Now, you say, what about malware?  Oddly enough, lots of the malware out there (many of the fake-AV packages, for instance) actually use the Windows installer and register themselves.  You can find any apps that *don't* register using variations on "dir c:\*.exe /s", or, if you are looking for hidden and/or system files, variants on "attrib c:\*.exe /s" (or whatever file type, not just exe's).
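The same "dir c:\*.exe /s" idea in script form, so the results can be diffed against the WMIC product report (the demo directory layout is invented):

```python
import os

def find_executables(root):
    """Walk a tree and return every .exe path; compare this list against
    the installed-products report to spot software that never registered
    with the Windows installer."""
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if name.lower().endswith(".exe"):
                hits.append(os.path.join(dirpath, name))
    return sorted(hits)

# Demo tree with one executable and one non-executable:
os.makedirs("demo/tools", exist_ok=True)
open("demo/tools/nc.exe", "w").close()
open("demo/readme.txt", "w").close()
print(find_executables("demo"))   # one hit: the nc.exe under demo/tools
```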

What's that - you have Linux stations and servers?  Even easier; there are only about a dozen ways to get the same info out of Linux.  I'll hit one method for each of the variants I normally see.

============ Redhat ==============

For startup services (and when they are configured to start), use chkconfig --list

[robv@pt01 ~]$ chkconfig --list
NetworkManager     0:off    1:off    2:on    3:on    4:on    5:on    6:off
abrtd              0:off    1:off    2:off    3:on    4:off    5:on    6:off
acpid              0:off    1:off    2:on    3:on    4:on    5:on    6:off
atd                0:off    1:off    2:off    3:on    4:on    5:on    6:off
auditd             0:off    1:off    2:on    3:on    4:on    5:on    6:off
avahi-daemon       0:off    1:off    2:off    3:on    4:on    5:on    6:off
bluetooth          0:off    1:off    2:off    3:on    4:on    5:on    6:off
btseed             0:off    1:off    2:off    3:off    4:off    5:off    6:off
... and so on ...

to list all installed packages:
rpm -qa

... (and so on)

for more information on a specific package, use rpm -qi

[robv@pt01 ~]$ rpm -qi python
Name        : python                       Relocations: (not relocatable)
Version     : 2.6.4                             Vendor: Fedora Project
Release     : 27.fc13                       Build Date: Fri 04 Jun 2010 02:22:55 PM EDT
Install Date: Sat 19 Mar 2011 08:21:36 PM EDT      Build Host: x86-02.phx2.fedoraproject.org
Group       : Development/Languages         Source RPM: python-2.6.4-27.fc13.src.rpm
Size        : 21238314                         License: Python
Signature   : RSA/SHA256, Fri 04 Jun 2010 02:36:33 PM EDT, Key ID 7edc6ad6e8e40fde
Packager    : Fedora Project
URL         : http://www.python.org/
Summary     : An interpreted, interactive, object-oriented programming language
Description :
Python is an interpreted, interactive, object-oriented programming
language often compared to Tcl, Perl, Scheme or Java. Python includes
(and so on)

For more information on all packages (perhaps too much), use rpm -qia

================ Debian, Ubuntu and the like ===================
To get a list of installed applications:

dpkg --get-selections
to get more information in the list:
dpkg -l

Name                                 Version                                         Description
acpi-support                         0.136.1                                         scripts for handling many ACPI events
acpid                                1.0.10-5ubuntu2.1                               Advanced Configuration and Power Interface e
adduser                              3.112ubuntu1                                    add and remove users and groups
adium-theme-ubuntu                   0.1-0ubuntu1                                    Adium message style for Ubuntu
adobe-flash-properties-gtk                              GTK+ control panel for Adobe Flash Player pl
.... and so on ....

to get startups, I often just install chkconfig, and run it as on Redhat variants:
sudo apt-get install chkconfig
chkconfig --list

Alternatively, sysv-rc-conf is another package that does this (it will often also need an install):
sudo apt-get install sysv-rc-conf
sysv-rc-conf  --list
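Whichever distribution, the payoff is the same as on Windows: snapshot the package list daily and diff it. A sketch with simple set arithmetic (the package names are invented; in practice you'd feed it the saved output of rpm -qa or dpkg --get-selections):

```python
# Yesterday's and today's package snapshots, as sets of names:
yesterday = {"openssh-server", "rsyslog", "python", "postfix"}
today = {"openssh-server", "rsyslog", "python", "john"}

added = sorted(today - yesterday)      # installs to reconcile with change mgmt
removed = sorted(yesterday - today)    # removals to reconcile, too
print("added:", added)
print("removed:", removed)
```

An unexplained appearance of a password cracker (or the disappearance of your syslog daemon) is exactly what this kind of daily diff is for.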



Finally, we're interested in how you tackle the software inventory problem.  Please use our comment form - let us know of any cool tools you use, or post any scripts you may have written to help out !



Rob VandenBrink


Published: 2011-10-03

Security 101 : Security Basics in 140 Characters Or Less

It was one of THOSE gigs: an internal penetration test against a client that, considering the amount of personal information they held on their customers, should have been well prepared.  And yet, we went from "you-can-plug-your-laptop-in-over-there" to "Domain Admin" in... well, let's just say a "shockingly small" number of hours.  And it just went downhill from there...

For me, writing up the resulting report triggered what I could only describe as a "crisis of faith." While I don't fool myself that, as a security community, we have it all "figured out," I had – up until now – strongly believed that we were making progress.  And yet, I had just spent a week immersed in a corporate culture that seemed to have focused itself on so many higher-level security issues that the basics – the "Security 101" stuff – were just plain overlooked.

The more I thought about it, the more it bothered me.  It wasn't some fancy-schmancy 'leet h@x0r 0-day that let us take down this organization from the inside: it was stupid-simple low-hanging fruit.  I spent a bit of time chatting over Twitter with the ever-insightful Brian Honan (@BrianHonan) and came to the conclusion that the security community may have reached an awkward age at which we're grown up enough to be focusing on the golly-gee/whiz-bang/cool stuff (vis-à-vis the "APTification" of all that passes for security discussion) and, as a result, we're neglecting the basic, "Security 101" stuff that raised the bar in the first place.

Think about it: Over the past year, how many high-profile hacks have been the result of awesome cutting edge skillz?  How many have happened because someone just flat-out did something dumb?  Take a quick gander at back issues of SANS NewsBites and I think you'll be convinced as well: We truly are neglecting the basics.

Since October is "Security Awareness Month," a few weeks back, I sent out a call on Twitter for folks to submit pithy, 140 character-long, chunks of Security 101 wisdom.  Below, I've compiled together the resulting list, along with the Twitter name of the submitter.

If you're feeling a little shaky on your security knowledge, then heeding this advice might just save your behind.  Even if you're confident that you "know it all," a quick review might have you discovering stuff you've inadvertently overlooked.  Either way, I heartily recommend that you read (and heed) this advice.  Also, if something particularly strikes your fancy, you might consider following the author on Twitter... you never know – you might learn even more.

One last "housekeeping" note: I lightly edited these to remove some of the more blatant "Twitterisms" used to stuff big thoughts into limited character lengths.  If anything got messed up, I'll take the blame.

@ChrisJohnRiley If you can guess where PHPmyAdmin is installed, then so can attackers.
@DavidJBianco You are already pwn3d. The question is, "What will you do about it?"
@Keldr1n Don't leave default passwords on the administrative interfaces of your 3rd party web applications.
@Keldr1n Know your network - and all devices in it - well enough to spot unusual activity.
@Keldr1n Users are almost always the weakest link. Make it a priority to educate them. Do most of yours even know what phishing is?
@averagesecguy Security 101: If you don't need it, turn it off.
@bowlesmatt Passphrases are the new passwords. Make a sentence that is long, hard to guess, and easy to remember. ihatepasswordsseewhatididthere?
@bowlesmatt Patch your systems and disable any unused services to reduce attack surface.
@bradshoop Never trust a host you can't trust.
@bradshoop Computers remember a lot. Even more if you contact security personnel before you reboot.
@bradshoop Dedicate personnel to prevention AND detection. Preferably the same personnel in rotation to breed familiarity and contempt.
@connellyuni It's more important to know what you don't know than it is to know what you do know.
@cutaway Try to avoid saying "We are investigating... why equipment that we have a destruction certificate for was... sold online" to the media.
@cutaway Assets using secure authentication are directly and adversely impacted by your assets using plain text authentication.
@cutaway Complacency: 1) Self-satisfaction especially when accompanied by unawareness of actual dangers or deficiencies. 2) You will be hacked.
@cutaway Default SSL Certs for internal management interfaces should be replaced with valid certificates associated with the organization.
@cutaway Don't be afraid of your incident response plan. Conducting investigations will give your team experience and eventually reduce costs.
@cutaway How do you "Find Evil" in your organization? Seriously, go "Find Evil" and report back to me.
@cutaway IT environments are complex systems. They require a System Development Life Cycle to effectively manage AND secure.
@cutaway If your product allows remote connections somebody WILL write a python/perl/ruby script to connect to it and send whatever THEY want.
@cutaway Monitor and alert to new accounts and accounts being added to Domain Administrator, SUDO, or root groups.
@cutaway Product certification does not mean it has been deployed correctly. Review placement, logging, access, input validation, etc...
@cutaway Service accounts should adhere to corporate password policies and be monitored for modifications including lockout.
@eternalsecurity Make sure you're protecting the right thing. A belt AND suspenders doesn't help if you're not wearing pants.
@hal_pomeranz "A backup is not a backup until you do a restore." #sysadminkoan
@hy2jinx Attack vectors and regulatory requirements change. "That's how we've always done it" is a poor and lazy excuse.
@hy2jinx Scanner "infos" can turn up bigger issues than you'd guess. Look at overall results, not just singles.
@hy2jinx Five missing patches across 100 devices does not equal "five vulnerabilities."
@hy2jinx It's cheaper to consult a security professional from conception than mere days before "go live."
@hy2jinx Security professionals should be empowered to point the business towards good decisions and reserve the power of "No" for a last resort.
@itinsecurity In your encryption system, your key is the weakest link. If it isn't, you're doing it wrong.
@itinsecurity Security is not a box you buy or an app you write. It's an emergent property, a sum greater than its parts.
@jarocki "Dear User: Millions of $$ of software won't keep you from clicking that link. Only YOU can prevent link clicking."
@jarocki When it comes to security controls, Trust But Verify... nah, forget the Trust... just Verify.
@jimmyzatl If you don't log "accepts" in your FW logs for admin protocols you will have no way of knowing when those accounts are abused.
@jimmyzatl An encryption algorithm that has to be hid from the public is by definition a weak algorithm...
@ken5m1th That successful PCI DSS Report On Compliance will not save you from Zombies.
@kentonsmith When setting up any new system, Step 1: Change default admin password.
@kill9core Security through obscurity, or the practice of hiding flaws hoping they won't be found, has proven time and time again not to work.
@mattdoterasmus Just because your security teams work from 9-5, doesn't mean attackers aren't looking the rest of the time.
@omegadefence The attitude that "it won't or can't happen to us" because "we're too small/big/have nothing to offer" is dangerous.
@omegadefence The attitude that "I can't do anything about it so I won't even bother with security or reporting" is also dangerous.
@omegadefence Analyse your logs in detail, it is those with their heads buried in your logs that hold the key to prevent, detect and recover.
@omegadefence Give only the permissions required to do the normal daily duties, nothing more. Special logons for special occasions.
@omegadefence Best: using high-speed trend analysis with custom searches as well as automated reporting AND followup.
@rob_bainbridge Security teams that work in isolation and without transparency will fail. Collaborate with other risk mgmt - audit, ops risk, etc...
@tccroninv Those that store passwords in plain-text invite catastrophe.
@tliston "We can't implement strong passwords/two-factor authentication. Our users aren't capable," says more about your competence than theirs.
@tliston Developers: Never roll your own encryption, authentication or session management schemes. You're not that smart. Trust me.
@tliston If you don't have written authorization to perform security-type testing in your organization, don't. You're too pretty for prison.
@tliston If you're not putting as much thought into your outbound firewall rules as you are for your inbound rules, you're doing it wrong.
@tliston If you're not supporting a legacy Windows OS, for the love of all that is Holy, turn off LANMAN hashes.
@tliston If you've never tested restoring from your backups, then you don't have backups - you have a crapload of data and hope.
@tliston If your internal security posture is based on,"our employees wouldn't know how to do that," then you're likely already 0wned.
@tliston Remember: As an attacker, I exploit misplaced trust. There's nothing mystical or magical about it.
@tliston Run scans against your network. It's the only way to really know what's out there. I've yet to see a fully accurate network diagram.
@tliston Sanity check security spending. A $500 lock on a cheap wood door doesn't buy security. It just gives a thief something to laugh at.
@tliston Security isn't just about preventing compromise. It's about maintaining confidentiality, integrity & availability despite compromise.
@tliston Security-through-obscurity doesn't work against anything with intelligence, but there's lots of dumb sh*t out on the 'net.
@tliston Taking nude photos of yourself? Don't store them on an always-connected device with little-to-no security. #forscarlett
@tliston Teach your users not to click on unknown links. DON'T send links to your users in email. More info: http://t.co/bdNTRI3O
@tliston Web developers: Give the exact same answer whether you're given a bogus username or password on logins. EXACT. SAME. ANSWER.
@tliston WebApp Devs: Just because you have a <SELECT> with A, B, C, & D as options doesn't mean you'll only ever get A, B, C, or D back.
@tliston Webhosting Companies: Web servers shouldn't be making many *outbound* connections. TCPDump is your friend.
@tliston Your organization's AUP should explicitly prohibit Copyright abuse. You do HAVE an Acceptable Use Policy, right?
@tliston Centralize your logging - you have no idea how helpful it will be.
@tliston Companies who use the same Windows Local Admin password on large numbers of machines are ripe for picking by malicious insiders
@tliston Developers: Input, even data you think you control, can never be trusted. Consider all input a threat and process accordingly.
@tliston Diligent change management practices have saved more asses than a Beverly Hills plastic surgeon.
@tliston Ensure that user accounts are disabled as part of your termination process. Audit all accounts at least semi-annually for "misses."
@tliston High privilege level accounts should be used only for administrative functions, not for day-to-day activities.
@tliston High privilege level accounts should have kick-ass passwords or two factor authentication. Or both.
@tliston If at all possible, disable password authentication for SSH. SSH is a huge brute force target. Keys are your friend.
@tliston If it plugs into your network, know why. The last thing you ever want to hear an admin say is, "That thing has a web interface?!?"
@tliston Learn how to manipulate text files. Learn how to use sed, cut, wc, and grep as a minimum. Text is your friend.
@tliston Logging authentication failures is NOT enough. Log successes and failures.
@tliston Mr. CxO: Your employees are not a "family." Some are untrustworthy. FYI: Some of the people in your real family are pretty sketchy too.
@tliston Never rely on the fact that you "own" anything: data, a communication path, etc... If you do - I 0wn it, I 0wn you. Trust nothing.
@tliston Nothing is more important to the long-term survivability of your organization than a fully functional backup process.
@tliston Packets to or from RFC-1918 addresses should not be allowed to traverse your border firewall in either direction.
@tliston Passwords are no longer security measures. They are merely speed-bumps. Treat them accordingly.
@tliston Physical access trumps most security measures.
@tliston Remember to always think in terms of "defense in depth." A belt AND suspenders is always better than a belt OR suspenders.
@tliston Shared accounts are never a good idea.
@tliston Telnet, FTP, and any other clear-text protocol developed in simpler, more naive times has no business on a modern network.
@tliston There is no excuse - NONE - not to use full disk encryption on laptops. Data breaches due to lost/stolen laptops are inexcusable.
@tliston Unencrypted WiFi is never secure. WEP = Unencrypted WiFi. Trust me. Stop using it. Now. Really.
@tliston Web Developers: Remove comments from your production website code. They serve NO purpose and can give away too much info.
@vaudajordan Total loss of Sony Breach $171M, I wonder how many salaries, code reviews, software, hardware that could have bought.
@zanis1 Assign only those privileges that are required to do the job.

Also, I want to extend a great big "thank you" to all of the people who submitted these tweets using the #sec101 hash tag.  I tried really hard to grab them all... If I missed anyone, I apologize.

Tom Liston
Senior Security Consultant, InGuardians, Inc.
Handler: SANS ISC

Note: Matt (@0xznb) has kindly made a fortune-mod zip file available here of the #sec101 wisdom.



Published: 2011-10-03

Beauty and the BEAST

(This is a bit longer diary – if you are just interested in conclusion and recommendations, skip below to the “Is SSL broken?” section. I recommend that you read the whole diary – and let us know if you have any comments).

Unless you’ve been hiding on a deserted island, you have heard about the latest attack on SSL, named BEAST. We wrote several diaries (first, second, third) on this topic. I got very interested in the attack and finally had some time to go through all the details.

So, first of all – big props to Duong and Rizzo for implementing this in practice. While the idea itself is really cool (a bit more about it below), what really impressed me was the implementation, and all the effort they invested in the research.

Some basics about the attack

As has already been written in a million places, the BEAST attack targets SSL 3.0 and TLS 1.0, in particular their implementation of the cipher-block chaining (CBC) mode for block encryption algorithms.

CBC is probably the most widely used mode for block encryption algorithms today, so it is obvious that any attack on it (and on SSL/TLS overall) can have a huge impact.

In a nutshell, BEAST very cleverly uses predictable IV (initialization vector) values in order to set up particular input values for SSL. By very carefully modifying these input values, the attacker can exploit BEAST to guess the value of a single byte in an encrypted block. Block encryption algorithms fragment input messages into blocks, usually 8 or 16 bytes long.

The IV is initially a random number; after that, every subsequent block uses the previous ciphertext block as its IV. The IV is XORed with the input plain text – this produces the input for the encryption algorithm. So normally, in CBC mode, encrypted block C4 = encryption ( C3 XOR P4 ), where C is an encrypted block and P is a plain text block.

According to this, the last block's (CN) IV will be CN-1: CN = encryption ( CN-1 XOR PN ). Duong and Rizzo cleverly used this: they leave the channel open and append a next block (N+1) whose content is derived from one of the previous blocks. Imagine that we supply P4 (with only 1 byte modified), XORed with CN and its original IV C3:

CN+1 = encryption ( CN XOR ( CN XOR C3 XOR P4 ))

This results in:

CN+1 = encryption ( C3 XOR P4)

For which we know the result, as it is C4! Now, if we can influence P4 so that only one byte needs guessing (by supplying, for example, 7 known bytes) we can try to guess what the last byte was: if CN+1 is equal to C4, we guessed the byte; otherwise we didn’t.
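
The XOR bookkeeping above is easy to check with a few lines of code. The sketch below is purely illustrative: it uses a truncated hash as a stand-in for the block cipher (not a real or reversible cipher), and because the toy message is only four blocks long, CN happens to be C4 here, but the algebra is unchanged. It shows that the injected block CN XOR C3 XOR P4 encrypts to C4 exactly when the guess for P4 is right:

```python
import hashlib

BLOCK = 8

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def E(block):
    # toy stand-in for the keyed block cipher: a real TLS connection uses
    # e.g. AES or 3DES, but any deterministic function demonstrates the algebra
    return hashlib.sha256(b"key" + block).digest()[:BLOCK]

def cbc_encrypt(blocks, iv):
    out, prev = [], iv
    for p in blocks:
        prev = E(xor(prev, p))   # Ci = E(Ci-1 XOR Pi)
        out.append(prev)
    return out

P = [b"GET /AAA", b"AAAA HTT", b"P/1.1\r\nH", b"ost: a.b"]   # P1..P4
C = cbc_encrypt(P, b"\x00" * BLOCK)                           # C1..C4
C3, C4, CN = C[2], C[3], C[-1]    # CN = last ciphertext block = next IV

guess = b"ost: a.b"                    # attacker's guess for P4
injected = xor(xor(CN, C3), guess)     # PN+1 = CN XOR C3 XOR guess
CN1 = E(xor(CN, injected))             # = E(C3 XOR guess)
assert CN1 == C4                       # matches C4, so the guess was right
```

A wrong guess produces a different ciphertext block, which is exactly the attacker's yes/no oracle.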

This is just a brief overview – for more information read the leaked paper – it is written very well.

Guessing HTTP values

As you’ve seen above, the attacker can now guess byte by byte. With HTTP, creating this boundary is actually simple since we know what each HTTP request will look like:

GET /AAAAA HTTP/1.1<cr><lf>

If we want to guess the first character of the next header, we can make the request line 23 bytes long, including the CRLF (with blocks 8 bytes each):

GET /AAAAAAA HTTP/1.1<cr><lf>H

Notice how only the H makes it into the 3rd block, as its final byte. The attacker knows the content of the first two blocks (and the first seven bytes of the third) and can try to guess that character by using the attack described above.
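
The alignment is easy to see if you split the request into 8-byte blocks yourself (the Host header below is just an example of what might follow the request line):

```python
# slice the request into 8-byte CBC blocks to see where the unknown byte lands
req = b"GET /AAAAAAA HTTP/1.1\r\nHost: example.com\r\n"
blocks = [req[i:i + 8] for i in range(0, len(req), 8)]
# blocks[0] = b"GET /AAA", blocks[1] = b"AAAA HTT", blocks[2] = b"P/1.1\r\nH"
```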

Attack prerequisites and implementation

As if the attack itself was not impressive enough, Duong and Rizzo managed to actually do all of this in the browser. Let us revisit what they have to do for this attack:

  1. They need to pull a MITM attack on the victim. This is needed for two things: first, they need to monitor the network traffic in order to guess bytes. Second, they need to somehow influence the browser to make it issue requests such as the one shown above that will let them do the guessing. For the demo they used a Java applet, but there are other ways of exploiting this (more below).
  2. Once they have injected the Java applet into the victim’s browser, they wait for the victim to log in to the target site. Now the Java applet will open an SSL connection to the target site and send a specially crafted request as above (i.e. GET /AAAAAAA …). The SSL connection must stay open so they can feed new blocks in real time, as they monitor network traffic. This will allow them to guess the content of bytes encrypted by the browser. So, their oracle in this case is the browser itself – the web server that they are attacking is irrelevant; it is the victim’s browser that lets them guess encrypted content.

As you can see from 2), the crucial requirement is that the SSL connection stays open (so they are able to append the data and use the last block as the IV). This proved to be very difficult to do (and is one of the things in Duong and Rizzo’s research that impressed me the most).

There are many ways that can be used in a browser to open a new connection. The easiest way is to use JavaScript’s XmlHttpRequest (XHR). There are some limitations here though. First, Internet Explorer does not support XmlHttpRequest level 2 (which is needed in order to send cookies) and instead has an XDomainRequest object. XDomainRequest will never send cookies so, in theory, Internet Explorer users are more protected than Mozilla Firefox or Chrome users (is this a first or what!?!).

Firefox and Chrome support XHR level 2. It is worth pointing out here that the attacker is not able to read the request through active scripting due to the fact that the server will not set the correct Access-Control-Allow-Origin header, but the attacker does not care about that since he just wants to be able to use the browser as an Oracle for guessing encrypted stuff.

The biggest problem, it appears, is that XHR cannot be used to create streaming requests, which are needed to perform the guessing. Many other possible exploitation vectors, such as plain IFRAMEs, WebSockets or Silverlight, have similar issues that prevented Duong and Rizzo from using them – keep in mind that this does not mean these are “safe” against BEAST, just that current attempts to use them failed.

Is SSL broken?

Simple question, simple answer – NO. As you can see above, there are many prerequisites that the attacker needs to satisfy in order to conduct the BEAST attack.

While the attack is inherent to block encryption algorithms in CBC mode, it requires the attacker to be able to append these specially crafted input blocks to an active session. In other words, it is very difficult, or even impossible, to exploit BEAST against other protocols that use SSL, such as POP3s, IMAPs and similar. Duong and Rizzo did it with browsers because there are many scripting (extension) possibilities with browsers and the HTTP protocol.

A couple of things I would suggest doing:
-    Be careful about switching to TLS 1.1 or TLS 1.2, because you might break things for many clients. While this definitely fixes the vulnerability, test thoroughly before you switch.

-    Prefer RC4 over CBC. RC4 also has its own issues, but the fact that Google prefers RC4 says something too – you can use the nice sslscan utility to see what ciphers are supported by a server; here are the results for mail.google.com:
# sslscan --no-failed mail.google.com:443
           ___ ___| |___  ___ __ _ _ __
          / __/ __| / __|/ __/ _` | '_ \
          \__ \__ \ \__ \ (_| (_| | | | |
          |___/___/_|___/\___\__,_|_| |_|

                  Version 1.8.2
        Copyright Ian Ventura-Whiting 2009

Testing SSL server mail.google.com on port 443

  Supported Server Cipher(s):
    Accepted  SSLv3  256 bits  AES256-SHA
    Accepted  SSLv3  128 bits  AES128-SHA
    Accepted  SSLv3  168 bits  DES-CBC3-SHA
    Accepted  SSLv3  128 bits  RC4-SHA
    Accepted  SSLv3  128 bits  RC4-MD5
    Accepted  TLSv1  256 bits  AES256-SHA
    Accepted  TLSv1  128 bits  AES128-SHA
    Accepted  TLSv1  168 bits  DES-CBC3-SHA
    Accepted  TLSv1  128 bits  RC4-SHA
    Accepted  TLSv1  128 bits  RC4-MD5

  Prefered Server Cipher(s):
    SSLv3  128 bits  RC4-SHA
    TLSv1  128 bits  RC4-SHA

-    Do not accept unsigned Java applets or allow them to run. You should always do this, not only in this case. The same goes for any other active technology.

-    When accessing sensitive sites, close all browser windows (not tabs, all windows), open a fresh new one and use it only to access the sensitive site. After you’re done, close it again and reopen it for further surfing. This should make exploitation a bit more difficult, but keep in mind that as of Java 6 Update 10 an attacker can potentially trick a victim into dragging applets out of browser windows so they continue running after the browser is closed (I’m not sure if this can be used to help BEAST).

-    If you are the owner of a sensitive server – keep an eye on errors on your server. The BEAST attack needs to issue quite a few requests (each guess has a 1/256 chance of being correct, so on average 128 blocks need to be appended per byte). One request is needed per byte, so if you see a lot of 404 requests with similar patterns (/AAAAA) that should raise some flags. Of course, you should always monitor and correlate your logs, not only now :)
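
As a rough illustration of what such log monitoring could look like (the log lines, addresses and threshold below are made up, and any real deployment would parse your actual access-log format):

```python
import re
from collections import Counter

# hypothetical access-log lines; the padding-style requests stand out
lines = [
    '10.0.0.1 - - "GET /index.html HTTP/1.1" 200 512',
    '10.0.0.9 - - "GET /AAAAAAA HTTP/1.1" 404 208',
    '10.0.0.9 - - "GET /AAAAAAAB HTTP/1.1" 404 208',
]

# flag request paths that are long runs of a single padding character
padded = re.compile(r'"GET /A{5,}')
hits = Counter(line.split()[0] for line in lines if padded.search(line))
# hits -> Counter({'10.0.0.9': 2})
```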

In the end, I must again admit I like the attack a lot - the idea is really cool, and it's amazing how they came up with everything. That being said, as you can see above, there are a lot of prerequisites for successful exploitation, so I don't think the resulting risk is very high at the moment.



Published: 2011-10-03

What are the 20 Critical Controls?

[the following is a guest diary contributed by Dr. Eric Cole]

One of the questions I often receive is: what are the twenty critical controls?  Details can be found at www.sans.org/cag, but the general approach of the controls is to begin the process of establishing a prioritized baseline of information security measures and controls that will lead to effective security. The consensus effort that produced the controls has identified 20 specific technical security controls that are viewed as effective at defending against the most common methods of attack. Fifteen of these controls can be monitored, at least in part, automatically and continuously. The consensus effort has also identified a second set of five controls that are essential but more difficult to monitor continuously or automatically with current technology and practices; however, they are critical to achieving an optimal level of security. Each of the 20 control areas includes multiple individual sub-controls, each specifying actions an organization can take to help improve its defenses.

Additionally, the controls are designed to support agencies and organizations that currently have different levels of information security capabilities.  To help organizations focus on achieving a sound baseline of security and then improve beyond that baseline, certain subcontrols have been categorized as follows:

  • Quick Wins: These fundamental aspects of information security can help an organization rapidly improve its security stance generally without major procedural, architectural, or technical changes to its environment.  It should be noted, however, that a Quick Win does not necessarily mean that these subcontrols provide comprehensive protection against the most critical attacks.  If they did provide such protection, there would be no need for any other type of subcontrol.  The intent of identifying Quick Win areas is to highlight where security can be improved rapidly. 
  • Improved Visibility and Attribution: These subcontrols focus on improving the process, architecture, and technical capabilities of organizations so that the organization can monitor their networks and computer systems, gaining better visibility into their IT operations.  Attribution is associated with determining which computer systems, and potentially which users, are generating specific events.  Such improved visibility and attribution support organizations in detecting attack attempts, locating the points of entry for successful attacks, identifying already-compromised machines, interrupting infiltrated attackers' activities, and gaining information about the sources of an attack.  In other words, these controls help to increase an organization’s situational awareness of their environment. 
  • Hardened Configuration and Improved Information Security Hygiene: These aspects of various controls are designed to improve the information security stance of an organization by reducing the number and magnitude of potential security vulnerabilities as well as improving the operations of networked computer systems.  This type of control focuses on protecting against poor security practices by system administrators and end users that could give an adversary an advantage in attacking target systems.  Control guidelines in this category are formulated with the understanding that a well-managed network is typically a much harder target for computer attackers to exploit.
  • Advanced: These items are designed to further improve the security of an organization beyond the other three categories. Organizations already following all of the other controls should focus on this category.   

For additional details on the controls, please go to www.sans.org/cag.  Portions of the above are taken from version 2.0 of The Twenty Critical Controls. 

Dr. Eric Cole
twitter: drericcole
ecole .at. secure-anchor.com



Published: 2011-10-03

Critical Control 1 - Inventory of Authorized and Unauthorized Devices

Control 1

How many servers are in your DMZ?
How many Servers do you have in total?
How many workstations are connected to the network?
How many printers?
Switches/routers/firewalls/access points?

If you can answer all the questions above for your organisation accurately, well done. Unfortunately the reality is that many people will not be able to answer them at all.  

Knowing what you have in your environment is critical to the security of the environment. We know that many attackers use automated processes to identify and attack machines on the internet.  If you are not aware of what internet facing systems you have, or they are not controlled, then it is likely that they will be discovered and compromised quickly.  So it is quite important to know what is actually there.

How can you achieve that? You need to be able to control what is plugged in.  Failing that, you will need to know when something has been plugged in.  802.1x controls or other forms of Network Access Control will help you achieve the first, but these may not be suitable for all areas of your environment, or you may not get around to implementing them for a while.

Detecting what is plugged in can be achieved in a number of ways.  Tools like arpwatch will detect when something is plugged in.  You could scan the network segment on a regular basis using something like nmap and use ndiff to compare the results.  This will let you know when something new is connected to your network.  You may be able to watch DHCP allocations and detect or prevent unauthorised allocations.  For any of this to be effective you will need some sort of inventory: if you don't know what you have, then you will not know what should or should not be there.  Document the operating systems in use, the types of hardware used, switch types, printer types, etc.
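
As one possible sketch of the scan-and-compare approach, you could parse nmap's greppable (-oG) output from two consecutive scans and diff the sets of live hosts (the sample lines and addresses below are made up; normally you would read them from the saved scan files):

```python
def up_hosts(gnmap_lines):
    # collect addresses of hosts that nmap -oG reported as "Status: Up"
    hosts = set()
    for line in gnmap_lines:
        if line.startswith("Host:") and "Status: Up" in line:
            hosts.add(line.split()[1])
    return hosts

# yesterday's and today's scan results (illustrative sample lines)
old = up_hosts(["Host: 10.0.0.5 (fileserver)\tStatus: Up"])
new = up_hosts(["Host: 10.0.0.5 (fileserver)\tStatus: Up",
                "Host: 10.0.0.23 ()\tStatus: Up"])

unexpected = new - old   # hosts that appeared since the last scan
```

Anything in `unexpected` that is not in your inventory deserves a closer look.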

There are of course other tools that will help in this scenario. Many management tools have inventory capabilities, some patching tools have the capability, and some AV solutions will now detect "unknown" devices on the network.

What do you do to identify and control what is on your network?

Mark - Shearwater


Published: 2011-10-02

Cyber Security Awareness Month Day 1/2 - Introduction to the controls

Information security is a vast field and it can be difficult to determine where your efforts will do the most good. Even when controls are implemented, it is often difficult to determine whether they are working as expected or achieving their objective.  The 20 critical controls have been built to provide guidance and address those areas that will improve the overall security of the organisation.  They won't solve all your problems, but they have the potential to solve many of them.

The controls were built by a wide group of professionals and were designed with some guiding principles in place.

  • Defenses should address the attacks that are actually occurring today.
  • Automated - We all have limited resources and by automating tasks we can achieve more.
  • Root Causes - The controls attempt to fix the root causes of the issues resulting in a compromise.
  • Metrics - A mechanism by which the effectiveness of each control can be measured.


The controls are divided into two groups. Controls 1 through 15 can be automated; controls 16 through 20 are broader and typically cannot be fully automated.  The idea behind the implementation is certainly not to start with control 1 and work your way up to control 20.  The controls are designed to be implemented on their own merit and based on the risk profile of the organisation.  Some of the controls overlap a little. For example, if you are implementing control 11 "Account Monitoring and Control" then you will likely have touched most if not all aspects of control 8. The idea is to look at the controls and what they can achieve, and implement those that will do your organisation the most good first, before working on the others. If you decide that some do not apply in your organisation, that is also fine. So please do not get stuck thinking you have to implement control 1 before 2, etc.  Implement those you can: each one will be one more control than is currently in place and will therefore help.

Each control will have some quick wins that will help you get over the line quickly, but if you already have things in place, there is the advanced component.  Something to aim for in future plans.  When implementing the controls make sure you do not skimp on the metrics or audit component of the control.  Knowing whether a control is functioning as expected is almost as valuable as having it in place in the first place. Regarding the metrics, each control will have a suggested time period, e.g. check every 24 hours or have a detection target of x hours.  Again, this is a guide: while aiming for the suggested time is the idea, if you can only check for new devices once per week, that is not ideal, but still better than what is likely being done right now.

Over the next few weeks, we'll go through the controls and outline what has worked for us. As always we'd like you all to contribute via comments or the contact forms.




Published: 2011-10-02

Cyber Security Awareness Month Day 1/2 - Schedule

This year for Cyber Security awareness month we are going to go through the 20 critical controls.  Because there are 20 controls we have decided that we will publish controls during the week days and a summary, expansion and/or some guest diaries on the weekends. So the schedule for the month looks roughly as follows:

  1 & 2/10 introduction 
  oct 3  Critical Control 1: Inventory of Authorized and Unauthorized Devices
  oct 4  Critical Control 2: Inventory of Authorized and Unauthorized Software
  oct 5  Critical Control 3: Secure Configurations for Hardware and Software on Laptops, Workstations, and Servers
  oct 6  Critical Control 4: Secure Configurations for Network Devices such as Firewalls, Routers, and Switches
  oct 7  Critical Control 5: Boundary Defense

  8 & 9/10 Summary/free form/tie in/elaboration/Guest diary 

  oct 10  Critical Control 6: Maintenance, Monitoring, and Analysis of Audit Logs
  oct 11  Critical Control 7: Application Software Security
  oct 12  Critical Control 8: Controlled Use of Administrative Privileges
  oct 13  Critical Control 9: Controlled Access Based on the Need to Know
  oct 14  Critical Control 10: Continuous Vulnerability Assessment and Remediation

  15 & 16/10 Summary/free form/tie in/elaboration/Guest diary

  oct 17  Critical Control 11: Account Monitoring and Control
  oct 18  Critical Control 12: Malware Defenses 
  oct 19  Critical Control 13: Limitation and Control of Network Ports, Protocols, and Services
  oct 20  Critical Control 14: Wireless Device Control
  oct 21  Critical Control 15: Data Loss Prevention

  22 & 23/10 Summary/free form/tie in/elaboration/Guest diary

The following sections identify additional controls that are important but cannot be monitored fully automatically or continuously to the same degree as the controls covered earlier in this document.

  oct 24  Critical Control 16: Secure Network Engineering
  oct 25  Critical Control 17: Penetration Tests and Red Team Exercises
  oct 26  Critical Control 18: Incident Response Capability
  oct 27  Critical Control 19: Data Recovery Capability
  oct 28  Critical Control 20: Security Skills Assessment and Appropriate Training to Fill Gaps

  29 &30 /10 Summary/free form/tie in/elaboration/Guest diary

  31 Overview of the month.

 If you click on the link you will be taken to the appropriate control. Each control is divided into several sections.

  • How do attackers exploit the control,
  • how can it be implemented, automated and measured,
  • Links to NIST and other documents, procedures and tools for implementing and automating the control.
  • Example metrics and Example tests



Published: 2011-10-01

Adobe Photoshop Elements for Windows Vulnerability (CVE-2011-2443)

Adobe has just released a vulnerability advisory for Photoshop Elements for Windows (http://www.adobe.com/support/security/advisories/apsa11-03.html). The vulnerability is in older versions of the product (8 and earlier) and will not be fixed. The advice from Adobe is to upgrade to version 10, or to avoid opening .grd or .abr files.

It actually poses an interesting question: what should vendors do in cases where an issue is identified in a product that is no longer supported, especially products that are likely still in use by quite a number of people?  In this particular case I think they have probably gone down the right path; sure, the upgrade advice stings, but at least there is a workaround available.