FireEye takes on Ozdok and Recovery Ideas
The folks over at FireEye report (http://blog.fireeye.com/research/2009/11/smashing-the-ozdok.html) on one of their takedown efforts against the Ozdok (aka Mega-D) botnet. Victims of this infection have pop-up advertisements pushed to their systems, and their machines are used to send a significant amount of spam, according to M86 Security (http://www.m86security.com/TRACE/traceitem.asp?article=510). More information is available from Joe Stewart: http://www.secureworks.com/research/threats/ozdok/.
This is good news. A major spam source has been disrupted. Unfortunately, we're still left with thousands of machines that have been infected. In many cases of adware/spyware infection the malware will disable or impede anti-virus programs, leaving these machines unprotected against follow-on infections. Taking down Command and Control servers and registering the future/fallback domains is time-consuming and expensive. Yet compared to the effort required to clean up all of the infected systems, it's only the tip of the iceberg.
A centralized plan or organization to drive such an effort is doomed to fail. The response needs to be community-driven, decentralized, and personalized. Organizations may be able to support an incident response team, but individuals cannot. Law enforcement treats this as the individual's problem, individuals think law enforcement should act, and ISPs are stuck in the middle. There are opportunities there, but it's risky. I'm fond of the idea of walled-garden services, and even more fond of optional walled gardens (which bring more expense to the ISP).
Although the information about what is a bad URL and what isn't can be centralized, its delivery has to be decentralized. Services like OpenDNS are attractive and I have hopes that they will be successful. Web filtering services can have a big impact not only on malware but also on phishing attacks. There's one feature that I haven't found in the web filtering services yet (I hope they're reading this). I would like to have the option to block access to all domains that are younger than X days. For some folks, 1 or 2 days is fine; other organizations might like 7 or more. This shouldn't be too hard to implement with some whois lookups, right? Or better yet, allow the new domain in if it's been categorized by the filtering service, but block it if nobody has evaluated it yet. Perhaps someone could write a Firefox plugin to implement this block for folks who can't afford a web-filtering service?
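To show how little the domain-age idea requires, here is a minimal sketch using nothing but the system whois client. The function names and the 7-day threshold are my own illustration, and it assumes the registry's whois output contains a "Creation Date:" line (output formats vary widely by TLD, so a real filtering service would want a proper domain-age feed and could fail closed on uncategorized domains, as suggested above).

```python
#!/usr/bin/env python3
"""Sketch of the "block domains younger than X days" idea using whois."""
import re
import subprocess
from datetime import datetime, timezone

MIN_AGE_DAYS = 7  # some folks might use 1 or 2, others 7 or more


def domain_creation_date(domain):
    """Run the system whois command and try to pull out a creation date."""
    try:
        out = subprocess.run(["whois", domain], capture_output=True,
                             text=True, timeout=10).stdout
    except (OSError, subprocess.TimeoutExpired):
        return None
    # Assumes an ISO-style "Creation Date: YYYY-MM-DD..." line; not all
    # registries format their output this way.
    m = re.search(r"Creation Date:\s*(\d{4}-\d{2}-\d{2})", out, re.IGNORECASE)
    if not m:
        return None
    return datetime.strptime(m.group(1), "%Y-%m-%d").replace(tzinfo=timezone.utc)


def should_block(domain, min_age_days=MIN_AGE_DAYS):
    """Block only if the domain is provably younger than the threshold.

    If whois gives no usable date, this sketch fails open (allows the domain);
    the stricter policy described above would block it until it's been evaluated.
    """
    created = domain_creation_date(domain)
    if created is None:
        return False
    age_days = (datetime.now(timezone.utc) - created).days
    return age_days < min_age_days


if __name__ == "__main__":
    for d in ["example.com", "some-brand-new-domain.example"]:
        print(d, "-> block" if should_block(d) else "-> allow")
```

In practice whois queries are rate-limited and slow, so anything like this would need caching, but it illustrates that the building blocks already exist.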
Comments
The link suggests that this botnet may have approximately 264K members, which may underestimate the population if a lot of the infected machines are behind firewalls doing NAT.
I would tend to agree that the walled-garden approach probably makes sense. It cuts the infected machines off from any other sources of malware until they can be reimaged or cleaned.
There is a market out there for people to do cleanups/reinstalls/virus removal from machines. Average users really aren't going to have enough experience with this sort of thing. If someone came to me and I didn't feel like doing it myself (usually the case), I don't know who I would recommend. I see horror stories about the "services" offered at some stores.
Jack Russell
Nov 8th 2009