Last Updated: 2008-09-07 03:23:16 UTC
by Marcus Sachs (Version: 3)
Last Saturday we asked for "leap ahead" ideas that could change the rules of the game to favor the Good Guys™. We received lots of responses, some really good, and some... well... interesting. All of the responses are below, with attribution removed. There's no need to send more ideas, but we'd like to see your comments on them. Use the COMMENT feature of the Diary to add your thoughts. That way, others can see what everybody is saying, including the Bad Guys™.
Here's an idea: start teaching "secure thinking" in early grammar school. Things like shredding credit card applications, not giving out personal information, etc. Start with simple concepts that young children can understand, and reinforce the message and make it more sophisticated throughout schooling.
Develop a new Framework for Cybersecurity
Topic: Game-Changing Ideas in Cyber Security
- New concepts with accompanying strategy
- Technology Driven Mechanisms or
- Non-Technical Mechanisms supported through technology
- Deployed over the next decade
Mission: Change the cyber game into one which the good guys can win
As backend systems, end-user systems, etc. increase (or are continually expected to increase) in "power," applications grow proportionally in complexity. This creates an environment ripe for unintended consequences that cannot be fully tested within the constraints of limited time, people, and budgets.
Now let's apply Murphy's Law to this complexity.
Software is man-made, and therefore imperfect. If vulnerabilities exist in software, they will invariably be discovered. Depending on the discoverer's motivations, the discovery may be stored, sold, disclosed irresponsibly, or disclosed responsibly, at which point a target's attack landscape is altered to varying degrees.
Now let's accept the fact that our software drives not only consumer and business applications but also backend systems and the systems that implement our "defense in depth" strategy.
Thus we can assume we are always at a varying degree of vulnerability:
- Always vulnerable to a few potential attackers (can't control motivation)
- At particular times in a vulnerable state to some potential attackers (can control opportunity to varying degrees)
- Protected from vulnerabilities by technical controls (can raise the bar so that a potential attacker no longer has the means)
So from this perspective, the shift in mentality should recognize that bad guys can and will penetrate one's defenses, no matter the depth of defense or strength of tools deployed.
- Employ a framework for a bottom-up approach to enterprise infrastructures that includes whitelisting, blocklisting, deny-all/allow-by-exception, and user/application profiling
- unintended application behavior
- unauthorized end-user system behavior
- monitored honeytoken activity (inbound requests/outbound requests, inbound response/outbound response)
- fingerprint the normal application or its intended behavior
- location-based authentication mechanism
- network GPS tagging
- a national "IR Team" responsive to regional incidents where intermediary networks are used as hop points
- an international alliance "IR Team" responsive to incidents where international intermediary networks are used as hop points
Prevention is needed but will never offer 100% protection. So the timeliness of detection/reporting is critical to response. The better this part of the industry gets, the greater the cost to the attacker.
Just as an automated tool has a structure, a dedicated attacker has a process/workflow, whether or not he or she is conscious of it. Let them offer you lessons learned and best practices so you can:
- affirm technical controls that offered protection
- identify technical gaps in a quasi-automated fashion
- create an attacker profile based on what was targeted; you cannot control the motivation of the attacker (or those sponsoring it) but you should be aware of it
- have triggers (such as those you encounter in a forensics/incident response report) based on the assumption that breaches are just around the corner.
These brainstorming ideas do not start from the premise that the bad guys are winning. Rather, they aim to change the cyber game into one where the good guys can be realistic about the landscape and manage it accordingly, with the goal of having the upper hand or the ability to respond and remediate effectively.
Scoring System for IDSs
This may already exist, but here it goes anyway.
Today's IDSs use signatures (like Snort's) that target specific worms/exploits/shellcode, but what if detection became more heuristic, with a scoring system like those used by anti-spam systems?
For example, router X sees that IP 188.8.131.52 is trying to access port 80 on an IP address in 192.168.0.0/24. Nothing wrong with that, but then the router sees that within several minutes it tries to access port 80 on 50 different IPs in the same block, and it gives the source a score of +2.
Now, when the score is higher than 1 (for example), deep packet inspection kicks in, and if it finds obfuscated JavaScript/SQL strings in HTTP requests, the score goes up further.
Finally, if the score reaches 5+, the IP gets reported/blocked/nuked with a Tsar Bomba.
Of course there should be many more heuristic detection rules, but I hope you'll catch my drift :)
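A minimal sketch of such a cumulative scoring scheme; all weights, thresholds, and payload markers here are hypothetical, not from any real IDS:

```python
from collections import defaultdict

# Hypothetical score weights and thresholds
SCAN_WEIGHT = 2         # many port-80 probes across a subnet
OBFUSCATION_WEIGHT = 3  # suspicious strings found by deep packet inspection
INSPECT_THRESHOLD = 1   # start deep packet inspection above this score
BLOCK_THRESHOLD = 5     # block the source at or above this score

scores = defaultdict(int)

def record_scan(src_ip, distinct_targets):
    """Score a burst of connections to port 80 on many distinct hosts."""
    if distinct_targets >= 50:
        scores[src_ip] += SCAN_WEIGHT

def inspect_payload(src_ip, payload):
    """Only inspect deeply once the source is already suspicious."""
    if scores[src_ip] > INSPECT_THRESHOLD:
        for marker in ("eval(unescape(", "UNION SELECT"):
            if marker in payload:
                scores[src_ip] += OBFUSCATION_WEIGHT

def verdict(src_ip):
    return "block" if scores[src_ip] >= BLOCK_THRESHOLD else "allow"
```

A scanning source that then sends an injection-looking request crosses the block threshold, while quiet sources never even get deep-inspected; that staging is what keeps the heuristic cheap.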
Enforce Least Privileges
At the desktop level, combining least-privilege user accounts with Software Restriction Policy is a very profound change of the gameboard. Having been available since 2001, it's not exactly "new," but certainly underappreciated.
Result: a process running with the privilege level of the non-Admin user cannot execute most potential payload types unless the file was placed in a location that only the Admin account has write privileges to. Catch-22. That's arbitrary protection against infected CDs, USB drives, and picture frames, as well as the next new exploits for _________ (QuickTime Player, Flash Player, Word, OpenOffice, whatever). It also arbitrarily blocks execution of, say, napster_setup.exe and such. ;) Whatever the method of payload delivery (exploit, user action, infected media), even if it succeeds, the payload is going to be arbitrarily blocked from execution. I guess this could be a form of process whitelisting. Anyway, I've probably submitted this link before, but my page on the subject is at mechbgon.com/srp (geared toward the power user more than the I.T. crew, but possibly informative).
Obstacles to this approach are fairly obvious: some software will not work correctly when run as a non-Admin, some users will not like being deprived of Admin powers, and some versions of Windows don't do SRP (although there are alternative anti-execution tools out there). But if it can be implemented, I consider this combo very powerful, and I use it on my systems with great results.
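The core rule, "execute only from locations a non-Admin cannot write to", can be sketched in a few lines; this is a simplified illustration of the idea, not how Windows SRP is actually implemented, and the allowed roots are assumptions:

```python
import os

# Hypothetical allow-list: directories a non-Admin user cannot write to.
ALLOWED_ROOTS = (r"C:\Windows", r"C:\Program Files")

def execution_allowed(exe_path):
    """Deny-all / allow-by-exception: run only from trusted roots."""
    normalized = os.path.normpath(exe_path)
    return any(normalized.lower().startswith(root.lower())
               for root in ALLOWED_ROOTS)
```

A payload dropped into the user's own profile (Downloads, Temp, a USB stick) fails the check even though the user double-clicked it, which is exactly the Catch-22 described above.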
“Best By” Date
Here's an idea: similar to a firewall, a software program that recognizes ALL file types coming into the system and, if the source is not a registered vendor, alerts the end user to a phone-homer installed on the PC, giving the user the option of stopping the process immediately as well as reporting it to the ISC for immediate analysis. It's a big ask, but an expiry date for any file not approved by the global security community would be cool. Any malware writer could still do the dirty (human nature), but if he's not a registered specialist his stuff would "expire" after a set time limit, enforced by file-recognition software installed on the end user's system. Everything else has a shelf life these days; why not in cyberspace? If it's good, it's gold; if it's unregistered and unapproved, it expires 24 hours after release. I think this is possible no matter what file type or program: paid approved global registration, or even free categorized registration for those software writers who agree to submit their stuff for testing. Anything else, let it have its day, but have something in Windows or Linux/Mac that limits malware files to a short lifespan.
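The expiry rule itself is simple to express; here is a toy sketch in which the approved-hash registry and the example hash are entirely hypothetical:

```python
import time

EXPIRY_SECONDS = 24 * 60 * 60  # the hypothetical 24-hour shelf life

# Hypothetical registry of hashes approved by the security community.
APPROVED_HASHES = {"deadbeefcafef00d"}

def may_run(file_hash, first_seen, now=None):
    """Approved files run indefinitely; everything else expires."""
    if file_hash in APPROVED_HASHES:
        return True
    now = time.time() if now is None else now
    return (now - first_seen) < EXPIRY_SECONDS
```

An unregistered file still gets "its day" (it runs for the first 24 hours after it is first seen) but is refused afterward, which is the shelf-life behavior the idea describes.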
Develop a new Operating System for the Masses
re: Leap Ahead.
I suspect you're on a losing game. Fundamentally it is because we now have something like a billion people connected on this thing we call the Internet, and some of them (for perfectly rational reasons) have mutually-incompatible agendas.
In the old days, when we had a dumb terminal in the showroom connected to a mainframe at 'head office', if there was something which did not work quite right we could tell the salesperson 'Do not exploit it' because all participants were under the same management control. In the modern world, with the billion people connected, we're not under the same management control. Nobody can possibly employ all billion of us, and threaten to terminate us if we misbehave.
My understanding is that IBM scrapped OS/2 (and recommended Linux) because the cost of providing warranty service rose above the revenue from software sales; thereby turning a 'software' business into a 'service' business.
So I think the next step is for Microsoft to scrap Windows. They won't recommend Linux; maybe FreeBSD would be a viable candidate. They would effectively be doing the same thing: washing their hands of the (impossible) obligation to provide warranty service for a billion people worldwide, and accepting that they and their partners can only do it for businesses and consumers under an explicit warranty service contract. 'Microsoft FreeBSD OneCare', hypothetically, would be a good name for the business venture.
Where will we be then? No 'cheap commercial' operating system for deployment on the public Internet; only 'free, unwarranted' ones (Linux and BSD) and 'expensive commercial' ones (AIX and MVS). But a much clearer basis to build the future on, whatever that future might be.
Use Concepts from Genetic Engineering
If you treat viruses, worms, and other malware as biological attack vectors, you have the following presumptions:
1) They need a point to attach to the DNA of the program or system.
2) All program DNA has a start and stop sequence, which creates the whole environment that is used.
3) If you can change the start sequence on the fly, the attacking system's attack vector will be useless, because it will not find the system DNA where it should be.
4) If you change the stop sequence, the attack vector can try to inject code and change the system DNA, but in a growing DNA this becomes useless clutter and gets ignored.
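Changing the "start sequence" on the fly is loosely what address-space layout randomization already does for program memory. A toy illustration (purely illustrative numbers, not real memory management): an exploit aimed at a fixed address only works if it guesses the randomized base.

```python
import random

def load_program(code_size, rng=None):
    """Pick a fresh pseudo-random, page-aligned base at every 'load'."""
    rng = rng or random.Random()
    return rng.randrange(0x10000, 0x7FFFFFFF - code_size, 0x1000)

def hardcoded_exploit(target_address, base, code_size):
    """An attack aimed at a fixed address only lands if the base matches."""
    return base <= target_address < base + code_size
```

Each "load" moves the start of the program, so an attack vector that worked against one copy misses the next, which is the point of presumption 3.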
Mandate a User Test to get Online
ISPs should require each of their household customers to pass a test (via telephone) demonstrating their knowledge of basic internet security practices; or else deny them service. Educational pamphlets should be freely provided by the ISP.
Extend the DNS to Include New Features
I saw the post regarding ways to allow the good guys to win and have been thinking about the same sort of thing for a while now.
My thought was to use an adapted version of DNS (a proven service that gets attacked frequently and stands up well) to implement another similar service that goes beyond what DNS is chartered to do and implements a trust score for IPs and Domains.
Because the DNS service uses a distributed hierarchical structure, it is quite robust and withstands attack reasonably well. One way to strengthen the trust system is to let it use DNS as the root for the original requests, but then implement the trust service at a domain/IP level (like DNS but not DNS) to provide the answers to RR-like queries about a given IP or domain. The answers could be local and could also be provided to a root set of servers, which could aggregate them in such a way as to get a true threat picture on a given domain or IP/IP range.
The flow for an IP would be something like:
1) request IP Address via DNS (goes to .arpa root to find who has the zone on the IP)
2) Request authority information from the "TrustedZone" root server (because if we can't trust the answers from the service, we probably should not bother going there).
3A) If the answer is that the "TrustedZone" service for the IP address in question is bad, query the "TrustedZone" root designee to get direct information on the IP.
3B) If the answer is that the "TrustedZone" service for the IP address is good, then go there to ask the question regarding the IP and possibly report a problem via another RR like query.
4) Act upon whatever is returned.
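As a concrete (and entirely hypothetical) sketch of steps 2 through 4, with the root and zone servers simulated by in-memory tables rather than real DNS traffic:

```python
# Simulated "TrustedZone" hierarchy: the root knows which zone server is
# authoritative for an IP range and whether that server is itself trusted.
TRUST_ROOT = {
    "203.0.113.0/24": {"server": "tz.example.net", "server_trusted": True},
}

# Per-zone trust records, keyed by IP (hypothetical RR-like answers).
ZONE_RECORDS = {
    "tz.example.net": {"203.0.113.9": {"score": 15, "flags": ["scanning"]}},
}

# Root-held fallback data for zones whose own server is not trusted (step 3A).
ROOT_FALLBACK = {"198.51.100.7": {"score": 80, "flags": ["ddos-source"]}}

def trust_lookup(ip, zone_key):
    """Check the zone's authority first, then query the right source."""
    authority = TRUST_ROOT.get(zone_key)
    if authority and authority["server_trusted"]:
        # Step 3B: the zone's own service is good; ask it directly.
        return ZONE_RECORDS[authority["server"]].get(ip, {"score": 0, "flags": []})
    # Step 3A: zone service is bad or unknown; ask the root designee.
    return ROOT_FALLBACK.get(ip, {"score": 0, "flags": []})
```

A caller then "acts upon whatever is returned" (step 4), for example refusing to connect when the score crosses some local threshold.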
In practice, the system would be able to capture such things as data on a DoS from an IP or the spread of an exploit. The data could be used by whatever takes advantage of the system to avoid going to the site, etc.
The system should be as easy to work with as DNS and serve as a complementary optional layered service, working with DNS to protect the infrastructure.
As I mentioned, I have been thinking about this for a while and have some thoughts on paper someplace. If anyone is interested in discussing what I have in mind, I'd be happy to chat further.
Consider Everything Insecure Unless an Authority Deems it to be Safe
I've thought about such "game changing" a lot of times. I guess some of you have ideas similar to mine, so let's discuss them here and see where we can get to.
You know, the biggest problem in security is that some "software parts" (libs, protocols, etc.) are said to be secure, which in fact is just a method to make money with security certificates, and it's just a nice dream (http://www.neowin.net/news/main/08/08/08/vista39s-security-rendered-completely-useless-by-new-exploit). Every piece of software has some bits and pieces that can be used to break in, take over, or whatever. Even Kerberos was found to be insecure just at the beginning of this year.
So, why not turn everything around and instead call "stuff" insecure and just block it? Sure, that's not a brand-new idea; these techniques are already widely used, but only as an additional piece of security software, which in the end can also be hacked or disabled by a virus.
You guys at ISC do a good job, but someone is needed to act on that information and change firewall rules, block URLs in proxies, etc.
Why not keep it simple and use a technique that already exists? Just set up a forwarding DNS that is controlled by the ISC. You could easily redirect some hosts, and set up a web server that uses virtual hosts to show people information about why a site or IP was blocked.
That also means you could compile stats about different worms, as long as they use hostnames rather than IPs to get updates, send stolen data, and so on.
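The blocking forwarder described above can be sketched in a few lines; the blocklist contents, the sinkhole address, and the upstream-resolver interface are all assumptions for illustration:

```python
# Hypothetical blocklist distributed by a trusted party such as the ISC.
BLOCKED_HOSTS = {"malware-update.example.com", "phish.example.org"}

# Address of a local web server that explains why the site was blocked.
SINKHOLE_IP = "192.0.2.53"

def resolve(hostname, upstream):
    """Forwarding resolver: redirect blocked names, pass the rest upstream."""
    if hostname in BLOCKED_HOSTS:
        return SINKHOLE_IP
    return upstream(hostname)
```

Every lookup of a blocked name lands on the sinkhole web server, which doubles as the measurement point for worm statistics, since infected hosts keep asking for their update hostnames.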
Apply "products liability" to commercial web site owners.
Find Two New Factors for Two-Factor Authentication
I don't know if this is what you're looking for but here it goes...
"Here's an idea. Could what I know (my questions & answers) combined with something I have (my friends) fulfill the two-factor authentication previously mentioned? When we visit a website and enter information on a secure form to open an account (any account), could we create three questions with corresponding answers to replace entering useless information such as our mother's maiden name?"
Build Hardware Firewalls into Personal Computers
Many ADSL routers and modems have hardware firewalls included as standard. In theory at least, it should be possible for new computers to also have hardware firewalls included as standard. So, even if the technically challenged new-user doesn't install a software firewall, there will be some protection via the hardware one -- this would be especially relevant if a dial-up connection is being used.
Seriously, the best approach would be to find a way to train/educate users.
Humorously: an electrified 'dog' collar that jolts users when they (a) browse insecurely, (b) click without reading, or (c) do stupid (in a security sense) things. This should reinforce all the training lessons we give them, so that in a very short time the Internet and our private networks will be secure.
Introduce Entropy into all Software
The bad guys go after software that is pervasive and known. But what if that software was modified using the same sorts of tools that the hackers use to obfuscate their own code? What if every copy of Windows was just a little bit different than the next, even though the base code was the same? Wouldn't that make it seriously more difficult to write malware that could infect enough machines to make it financially viable for the bad guys?
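A toy illustration of per-copy diversity: pad the same "binary" differently for each install so byte-level signatures stop matching while behavior stays identical. This is purely illustrative (the padding byte, rates, and base bytes are assumptions), not a real binary-rewriting scheme:

```python
import hashlib
import random

def diversified_copy(base_code, seed):
    """Insert harmless no-op padding at pseudo-random points per install."""
    rng = random.Random(seed)
    out = bytearray()
    for byte in base_code:
        out.append(byte)
        if rng.random() < 0.3:
            out.append(0x90)  # x86 NOP: changes the bytes, not the behavior
    return bytes(out)

def signature(code):
    """What a byte-pattern-matching worm would key on."""
    return hashlib.sha256(code).hexdigest()
```

Two installs seeded differently carry the same logic but different bytes, so one static signature no longer matches "every copy of Windows", which is exactly the economic argument above.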
Update Internet Protocols Like BGP to Include Historical Metrics
With the advent of the recent cyber-conflict landscape, BGP route-injection, and DNS vulnerabilities, one of the long-term concerns should involve the area of Exterior Gateway Protocols (EGP), or in reality BGP.
Having the ability to perform historical analysis on these legacy protocols would go a long way to identify problems in these areas.
Unfortunately, given that these legacy protocols have very little in the way of embedded metrics capabilities, any effort to perform historical analysis is deficient to begin with.
From a pure business perspective one has to develop some sort of metrics in order to establish the standard business case with which to initiate any future action(s).
The ability to detect dynamic traffic levels between two dissimilar BGP ASNs isn't unrealistic. But the ability to then analyze that information for an individual protocol, and ultimately to flag an entire ASN as potentially malicious, might go a long way once RBN-related ASNs such as the Atrivo ASN are considered.
There has to be a minimum level of responsibility for an ISP or an ASN; why not evaluate this type of methodology to help an ASN identify and then clean up malicious traffic within its administrative boundaries?
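One way to bootstrap such historical metrics is a simple baseline-and-deviation check on inter-ASN traffic volumes. A hypothetical sketch (a real deployment would sample flow data at the ASN boundary; the numbers below are invented):

```python
from statistics import mean, stdev

def flag_anomalous(history, current, sigma=3.0):
    """Flag a traffic volume that departs sharply from its own history."""
    if len(history) < 2:
        return False  # not enough baseline collected yet
    mu, s = mean(history), stdev(history)
    if s == 0:
        return current != mu
    return abs(current - mu) > sigma * s

# Hypothetical daily byte counts (in GB) between two ASNs.
baseline = [100, 110, 95, 105, 98, 102]
```

Even this crude metric gives the "standard business case" a number to act on: a sudden spike between two ASNs that historically exchange little traffic becomes a documented, repeatable trigger rather than an analyst's hunch.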
Spoof the Spoofers
I am reminded, as a result of reading your note on new cyber strategies, of a place I used to work that constructed aerospace components for the many companies that offered their goods to the USA military machine. One of these devices was a housing machined to contain the electronics capable of jamming the navigation signal of the French Exocet missile.
When the target, usually a ship, is acquired by the jet fighter's radar, navigation is transferred to the missile's gyro system so that once released from under the wing of the fighter, the missile is on its own to reach its designated target. Raytheon, a Canadian designer of anti-aircraft and anti-missile electronics, developed a signal to send to the incoming missile that would instruct it to turn around and return to the jet from whence it came. Needless to say, the pilot would have a sudden digestive problem once it became known that the missile he just launched was coming back for a visit.
My point here is that I suggest IT storm centers such as yours look at creating a dummy gateway to attract and subsequently divert spam/malware away from our mailboxes and Internet browsing. I don't think this has been done yet, but spoofing the spoofers by creating servers dedicated to ghosting our image on the web would be akin to scientists developing pheromones to attract undesirable insects into a trap and terminate their existence. Firewall-filtering the inundation of spam will eventually lead to congestion of network bandwidth. Diverting the traffic, however, so that the culprits make a detour into a black hole, should do a lot to keep our network highways cleaner of much of the spam dirt floating around, or better yet, send it back where it came from...
A note on submitting to the diary:
The drawback to that is that the cyberpunks will read your new strategies and thwart your progress, so I think it best to keep your secrets secret and collect the new strategies in stealth.
New Backup/Recovery Procedures for Average Users
Ideas for "changing the nature of the game": behind these ideas is the premise that the Microsoft, Apple, and *nix environments are still too technically complex for the average user to operate safely. At the base of any substantial changes will be user interfaces that allow a non-engineer to make intelligent choices. Anyone who thinks that Vista has brought us closer to this is deceiving themselves. My particular suggestions all pertain to backup/recovery, which is still a task too burdensome for all but trained engineers. When the average user notices that unintended changes have occurred to his system, he should be able to quickly recover back to an earlier state:
1) System Restore in XP and Vista was a step forward, but finding and understanding it is still something engineers have to explain to average users. System Restore should work consistently (it doesn't) and be more intuitive, and there should be equivalent tools for other OSes as they become part of the non-engineer experience.
2) Along with System Restore, or instead of it, the average user needs the option to easily recover EVERYTHING back to a safe point in the recent past. Symantec GoBack does this well but is still too complicated for the average user and does not play nicely with systems that have multiple boot partitions (such as Acer builds), full disk encryption, or multiple OS partitions. Make an all-data, journaling system-restore facility available as a clearly understood basic component of the operating system. Upon logging out or shutting down, the user should see a pop-up that clearly summarizes changes to the system that may not have been intended. If the user isn't comfortable with the changes, a non-privileged user can be prompted to restore back to the beginning of his login session, and a privileged user can be prompted to go back to an earlier restore point. Antivirus infection alerts would be a clear indication to revert to an earlier state.
3) Vista removed the NT backup tool and replaced it with file-only backups, unless the user was savvy enough to know he had to shell out extra bucks for a Vista edition with image backup. At minimum, an easy-to-use image backup tool needs to be built into every system. Inexpensive external storage drives should be the target of the image backups, or an inexpensive second drive can be included in the hardware.
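The journaling idea in point 2 can be sketched minimally: snapshot file states at login, summarize what changed at logout, and offer to revert. This is a hypothetical in-memory illustration (a real tool would hash files on disk and journal block-level changes):

```python
def snapshot(files):
    """Capture a point-in-time view of file contents (here, a dict copy)."""
    return dict(files)

def changes_since(snap, files):
    """Summarize additions, deletions, and modifications for the user."""
    added = sorted(set(files) - set(snap))
    removed = sorted(set(snap) - set(files))
    modified = sorted(f for f in files if f in snap and files[f] != snap[f])
    return {"added": added, "removed": removed, "modified": modified}

def restore(snap):
    """Revert to the safe point: simply reinstate the snapshot."""
    return dict(snap)
```

The logout pop-up would render the `changes_since` summary in plain language ("1 file was modified, 1 new program appeared"), and "I'm not comfortable with this" maps directly onto `restore`.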
Marcus H. Sachs
Director, SANS Internet Storm Center