Critical Control 7 - Application Software Security

Published: 2011-10-11
Last Updated: 2011-10-11 19:58:02 UTC
by Swa Frantzen (Version: 1)
2 comment(s)

[the following is a guest diary contributed by Russ McRee]

Given the extraordinary burst of headlines over the last six months relating to "hacktivist" exploitation of web application vulnerabilities, Critical Control 7: Application Software Security deserves some extra attention.

The control describes WAF (Web Application Firewall) use, input validation, testing, backend data system hardening, and other well-defined practices. Not until the 6th suggested step does the control state: “Organizations should verify that security considerations are taken into account throughout the requirements, design, implementation, testing, and other phases of the software development life cycle of all applications.”
For your consideration: it can be argued that, as a canonical principle, strong SDL/SDLC practices woven into the entire development and deployment process lead to a reduction in attack vectors. Reduce those vectors, and the mitigations provided by enhanced controls become less of a primary dependency. Long story short, moving SDL/SDLC practices to the front of the line, while not a “quick win,” can be a big win. That’s not to say that SDL/SDLC replaces or supplants controls, but a reduction in risk throughout the development process puts the onus on secure code, where controls become an additional layer of defense rather than the only layer of defense.
One of the advantages of a strong SDL/SDLC practice is the prescription of threat modeling, where classification schemes such as STRIDE or DREAD help identify issues early, as part of the development lifecycle, rather than reactively or as part of controls-based activity.
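
As a rough illustration of what that looks like in practice, the sketch below records DREAD-style ratings for a couple of hypothetical threats and ranks them for triage. The Threat class, the 1-10 scale, and the example threats are illustrative assumptions, not a prescribed format.

# Minimal sketch of recording DREAD scores for threats identified during
# threat modeling; the class, scale, and examples are assumptions.
from dataclasses import dataclass

@dataclass
class Threat:
    name: str
    damage: int            # D: how bad would an exploit be?
    reproducibility: int   # R: how easy is it to reproduce?
    exploitability: int    # E: how much work to launch the attack?
    affected_users: int    # A: how many users are impacted?
    discoverability: int   # D: how easy is it to find?

    def dread_score(self) -> float:
        """Average the five ratings to rank threats for triage."""
        return (self.damage + self.reproducibility + self.exploitability
                + self.affected_users + self.discoverability) / 5

threats = [
    Threat("SQL injection in login form", 9, 9, 7, 10, 8),
    Threat("Verbose stack trace on error page", 4, 10, 9, 6, 9),
]

# Highest-scoring threats get design attention first, before any code ships.
for t in sorted(threats, key=lambda t: t.dread_score(), reverse=True):
    print(f"{t.dread_score():.1f}  {t.name}")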

OWASP offers excellent resources to help with SDL/SDLC efforts.

As you take a look at testing “in-house-developed and third-party-procured web applications for common security weaknesses using automated remote web application scanners,” don’t fall victim to vendor hype. Test a number of tools before settling on one, as some tools handle scale and application depth and breadth very differently. If you’re considering monthly or ongoing scans of applications that may serve thousands of unique “pages” built from very uniform code, you’ll want a scanning platform that can be configured to remove duplicate items (same URL and parameters) as well as items with media responses or certain extensions.
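
To show the kind of deduplication described above, here is a minimal Python sketch that collapses crawled URLs differing only in parameter values and skips media extensions. It is not any particular scanner's configuration syntax; the extension list and example URLs are assumptions.

# Illustrative sketch of scan-queue deduplication: treat URLs with the same
# path and parameter names as duplicates, and skip media/static extensions.
from urllib.parse import urlsplit, parse_qsl

MEDIA_EXTENSIONS = {".jpg", ".png", ".gif", ".css", ".js", ".pdf", ".mp4"}

def scan_key(url: str):
    """Key that ignores parameter values, or None to skip the URL."""
    parts = urlsplit(url)
    path = parts.path.lower()
    if any(path.endswith(ext) for ext in MEDIA_EXTENSIONS):
        return None  # media responses add noise, not new attack surface
    param_names = tuple(sorted(name for name, _ in parse_qsl(parts.query)))
    return (parts.netloc.lower(), path, param_names)

def dedupe(urls):
    seen, unique = set(), []
    for url in urls:
        key = scan_key(url)
        if key is None or key in seen:
            continue
        seen.add(key)
        unique.append(url)
    return unique

crawl = [
    "https://example.com/item?id=1",
    "https://example.com/item?id=2",   # same URL and parameter name: duplicate
    "https://example.com/logo.png",    # media: skipped
    "https://example.com/search?q=waf",
]
print(dedupe(crawl))  # two unique entries worth scanning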
There is a wide array of offerings, commercial and free/open source, so test thoroughly and consider using more than one, particularly if you’re leaning toward inexpensive or free options. Static code analysis tools are more often commercial, but there are some free/open source offerings there as well. Plenty of search results will get you pointed in the right direction, but again, test more than one. The diversity of results you’ll receive from different tools, for both dynamic and static testing, will surprise you.
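
To see what that diversity of results means concretely, the rough sketch below normalizes findings from two hypothetical tools and shows the overlap and the items unique to each. The (issue, url) tuple format is an assumed normalization of whatever each tool actually exports.

# Rough sketch of comparing findings from two scanners to see how much
# their results diverge; the finding format is an assumption.
def normalize(findings):
    return {(f["issue"].strip().lower(), f["url"].rstrip("/")) for f in findings}

tool_a = [{"issue": "Reflected XSS", "url": "https://example.com/search"},
          {"issue": "SQL Injection", "url": "https://example.com/item"}]
tool_b = [{"issue": "SQL injection", "url": "https://example.com/item/"},
          {"issue": "Missing HttpOnly flag", "url": "https://example.com/login"}]

a, b = normalize(tool_a), normalize(tool_b)
print("Found by both:  ", a & b)
print("Only in tool A: ", a - b)
print("Only in tool B: ", b - a)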
I’m always glad to share experience with some of the tools in these categories; should you have questions, reach me via russ at holisticinfosec dot org.

Takeaways:

  • A strong SDL/SDLC program reduces dependencies on controls.
  • Test a variety of dynamic and static web application testing tools.

Comments

I'd much rather have new projects embrace a bottom-up security framework such as ESAPI (Enterprise Security API) from OWASP:
https://www.owasp.org/index.php/ESAPI
than tick a checkbox behind "have a web application firewall". These firewalls are terrible to configure manually, and if you let them "learn": how do you know all legitimate traffic was seen, and how do you know no attacks were seen during the learning phase?
I could not agree more. That touches a nerve for me, as I have spent a good portion of my IT career writing software and watching all the problems we have with buggy software. Most of it can be boiled down to a lack of proper input validation: even when the conditions that would indicate a problem are present in the code, they are often just not checked and acted upon. I can't tell you how many times I have seen sample code that is completely devoid of any error checking used in book after book. Even though the books usually say you need to do error checking in production code, you still end up seeing this stuff pasted straight into prod code.
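
To make that concrete, here is a small Python sketch of the difference; the function and field names are made up for illustration.

# Illustration of the point above: the information needed to reject bad
# input is right there, but sample code rarely checks it.

def transfer_amount_unchecked(form):
    # Typical sample-code style: assume the field exists and parses cleanly.
    return float(form["amount"])

def transfer_amount_validated(form):
    # Production style: check the conditions and act on them.
    raw = form.get("amount")
    if raw is None:
        raise ValueError("amount is required")
    try:
        amount = float(raw)
    except ValueError:
        raise ValueError("amount must be numeric")
    if not (0 < amount <= 10_000):
        raise ValueError("amount out of allowed range")
    return amount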

The result of this situation is random function failures, crashes, and of course lots of exploitable holes that we have to work around with things like WAF tools. If you need a WAF for a program, then in my 20-plus years of experience, you have a program that is not viable as a production tool, especially if it is internet facing. A WAF is only going to know about the attack vectors that someone else has already identified, just like anti-virus software. Relying on a WAF, even with defense in depth, is conceding that you will always be behind the bad guys and never caught up, let alone actually ahead. The vendor should be forced back to the drawing board to fix it, even if they have to rewrite it. We let vendors get away with far too low a level of quality by not demanding that they shore up their products to at least make attacking them very difficult. Or, of course, a more stable product can be chosen instead. It may cost more, but the total cost of ownership will still most likely be less.

Occasional bugs are going to happen; we are all human. But there is a big gap between an occasional issue and recurring problems of a common theme. Recurring issues are a bad sign that the design or implementation was sub-par.

BC
