Are All Networks Vulnerable?

Published: 2011-06-27
Last Updated: 2011-06-27 23:10:26 UTC
by Johannes Ullrich (Version: 1)
3 comment(s)

One of the assertions made during the recent run of high profile attacks was that all networks are vulnerable, and that the groups behind these attacks either had, or could have gained, access to many more systems if they wished.

Several articles expanded on this assertion and, using the recent compromises as evidence, treated it as an established fact and a failure of information security. I would like to question the conclusion that the recent attacks prove that all networks are vulnerable, or that they prove a large scale failure of information security.

First of all, let me state my philosophy of information security: I don't believe it is the goal of information security to prevent every single breach, any more than it is the goal of a guard at a bank to prevent every single bank heist.

As an information security professional, your goal should be to mitigate risks to a level small enough to be acceptable to the business. It is much more about risk management than about avoiding every single risk.

With that focus on risk management, information security itself becomes a solvable problem.

But back to Lulzsec. What did Lulzsec prove? Lulzsec proved that there are insecure networks. They did not prove that all networks are insecure. Lulzsec picked very large targets ("the government", "banks", "on-line gaming") and rattled doors until they found an open one.

How do you protect yourself against that? First of all, you don't. Let's get back to the basics of risk: "the probable frequency and probable magnitude of future loss" [1]. We can address risk in two ways:

- Reduce the probable frequency of a loss

This comes down to reducing your attack surface and hardening the remaining castle. Most organizations suffer from the diffusion of confidential information. The better you are able to compartmentalize and limit access to confidential information, the less likely it is that some of this information will leak. The tricky part, in my opinion, is the labeling or classification of information. This can be difficult and labor intensive, and classifications may also change over time.

- Reduce the probable magnitude of a loss

Limit the information you store to what the business needs. Consider information a liability, not just an asset. Storing credit card numbers may lead to more purchases, but will that be enough to justify the risk?
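The two levers above can be illustrated with a back-of-the-envelope calculation of annualized loss. This is only a sketch; the breach frequencies and dollar figures below are made-up assumptions, not data from the diary:

```python
def annualized_loss(frequency_per_year, magnitude):
    """Risk as 'probable frequency x probable magnitude of future loss'."""
    return frequency_per_year * magnitude

# Hypothetical baseline: one breach every two years, $100k per breach.
baseline = annualized_loss(0.5, 100_000)

# Lever 1: hardening and a smaller attack surface cut the frequency.
hardened = annualized_loss(0.1, 100_000)

# Lever 2: storing less confidential data cuts the magnitude instead.
less_data = annualized_loss(0.5, 20_000)
```

Either lever brings the expected annual loss down; risk management is deciding which reduction is cheaper to buy than the residual risk is to carry.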

In the end, doing business on-line is to a large extent about trust. The difficult part is that trust is asymmetric: it is much more easily lost than gained. Last week, when someone announced that Lulzsec may have compromised UK census data, the overall sentiment was to assume the announcement was true, even though there was no evidence to prove it, and Lulzsec later stated that the claim was wrong.

In the end, it is not your job to prevent every single breach. It is your job to build trust in your systems so suppliers and customers will use them. A well written privacy policy, and being open and transparent, may be as important in achieving this trust as the firewall, the IDS, and the DLP appliance used to enforce it.



Johannes B. Ullrich, Ph.D.
SANS Technology Institute



I've been following these attacks via this CNET online spreadsheet document.

In most cases it seems like the 'low-hanging fruit' of high-value targets was hit -- for example a NATO... book store. I think this represents the simple business logic of applying security that is proportional to the perceived risk.

But in some cases maybe the risk is higher than anticipated -- for example if the leaked credentials were re-used for something more important.

And a breach of some 'expendable' service is likely to affect the reputation of other services your company offers, where security and privacy are of great concern. This seems to be the motivation behind these attacks, anyway -- to harm reputation, to make headlines even if the consequences aren't so serious in operational terms.

I'm sure many of the victims of these attacks will, in hindsight, wish they'd given more attention to information security.

But where it no longer seems viable to do that, it would be better to tie-off properly -- to shut down services, decommission servers and destroy leftover data rather than wait around for the inevitable to happen. Many of the hacked sites or data were old, defunct, and hardly producing any real value any more.
As the article states, one of the risks is a loss of trust. A service isn't really "expendable" if its breach would result in reduced trust in the organization. The password reuse issue fits in here, and I will write more about password reuse on Wednesday (I hope). A site hardly ever needs to know the actual password, so with sufficiently strong hashing, the risk of a stolen password being usable may be mitigated.
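As a minimal sketch of that idea, a site can store only a salted, slow hash of the password rather than the password itself. This example uses PBKDF2 from Python's standard library; the function names are illustrative, not from any particular site's code:

```python
import hashlib
import hmac
import os

ITERATIONS = 200_000  # deliberately slow to make offline guessing expensive

def hash_password(password, salt=None):
    """Return (salt, digest); a fresh random salt is generated if none given."""
    if salt is None:
        salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password, salt, stored_digest):
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored_digest)
```

If the stored salts and digests leak, the attacker still has to guess each password; the site never needed to keep the plaintext at all.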
Of course, password reuse is a shared responsibility. There is only so much the site can do in this case. In part, as you state, the responsibility lies with the user not to reuse passwords.
