XOR DDOS Trojan Trouble
Basic MO

The victim server was a CentOS 6.5 system with a basic LAMP setup that offered SSH and VSFTP services. Iptables was in use, but NOT SELinux. It is my untested claim that SELinux likely would have prevented this trojan from taking hold. I am not an SELinux user/expert, so I was unable to take the time to add it to this environment.

Mitigation

The following steps were taken for mitigation. The only thing that prevented the recreation of the malware was the use of the chattr command. Adding the immutable bit to the /etc/init.d and /lib directories was helpful in preventing the malware from repopulating. I put together the following for loop script and added the following IP addresses to iptables to drop all communication. The for loop cleans up four running processes; I used ls and top to determine the for loop arguments and the PIDs used in the kill command. I threw the following into a script called runit.sh and executed it.

Prevention

I now keep the immutable bit set on /lib on a clean system. I turn it off before patching and software installs, in the event the /lib directory is needed for updating.
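The actual runit.sh contents and the dropped IP addresses are not included here, so the following is only a minimal sketch of what such a cleanup could look like, assuming hypothetical process names and documentation-range placeholder addresses in place of the real ones:

    #!/bin/bash
    # Hypothetical reconstruction of a runit.sh-style cleanup.

    # Lock the directories the trojan rewrites so killed processes
    # cannot be re-dropped (recursive, as used during mitigation).
    chattr -R +i /etc/init.d
    chattr -R +i /lib

    # Kill the four running trojan processes; the names below are
    # placeholders for the random strings found with ls and top.
    for name in jwvxuhtmib qpzlnrdkfy swgyaehcuo tbkmfjxqzd; do
        pids=$(pidof "$name") && kill -9 $pids
    done

    # Drop all traffic to and from the C2 hosts (placeholder addresses).
    for ip in 198.51.100.10 198.51.100.11; do
        iptables -A INPUT  -s "$ip" -j DROP
        iptables -A OUTPUT -d "$ip" -j DROP
    done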
Kevin Shortt 85 Posts ISC Handler Jun 23rd 2015
Jun 23rd 2015
Usually it's easier to just send the process a SIGSTOP instead of a SIGKILL (this does not require the chattr approach). Then you can clean up and send a kill to the process afterwards.
This DDoS trojan has been around for quite some time; IIRC there are also Stack Exchange discussions about it. I also failed to narrow down the initial attack vector. We suspected some outdated PHP application plus some local root exploit at the time.
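In shell terms, that approach is roughly the following (the process name is a placeholder):

    # Freeze the process instead of killing it, so any respawn/watchdog
    # logic never sees it exit (placeholder process name).
    kill -STOP $(pidof jwvxuhtmib)

    # ...remove init scripts, cron entries and dropped binaries while it is frozen...

    # Then terminate it once the persistence is gone.
    kill -KILL $(pidof jwvxuhtmib)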
Anonymous |
Jun 23rd 2015
This has been around quite some time. My first encounter with it was Aug 2014.
I did find that a SIGSTOP was only effective if you never killed the process (i.e., it was left in perpetual pause). Once the process was killed, however, a new spawn would occur (for my variants).
Kevin Shortt 85 Posts ISC Handler |
Jun 23rd 2015
The comments in the third reference have the chattr -i fix. It seems there is more than one way to skin this cat.
jbmoore 11 Posts |
Jun 23rd 2015
Thanks for chiming in, jbmoore. For clarification purposes, my prevention method after the rebuild/reinstall of the server only included the directory itself; the recursive flag was used during mitigation and triage.

chattr +i /lib

This has proven effective and cleaner from a management perspective.
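Concretely, that prevention workflow amounts to something like the following (non-recursive, directory only):

    # On the clean system, set the immutable bit on the directory itself.
    chattr +i /lib
    lsattr -d /lib        # verify the 'i' attribute is set

    # Temporarily clear it before patching or software installs...
    chattr -i /lib
    yum update            # ...or whatever install/patch step is needed...

    # ...then set it again afterwards.
    chattr +i /lib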
Kevin Shortt 85 Posts ISC Handler |
Jun 23rd 2015
"Usually it's easier to just sent the process a SIGSTOP instead of a SIGKILL (does not require the chattr approach). Then you can clean up and send a kill to the process afterwards."
Doesn't work as well as one might think. Many of these make a copy before executing. The process goes like this. 1. Load binary_1 in memory. 2. Copy binary_1 to binary_2 3. Remove all references to binary_1 and replace with references to binary_2 4. Lather rinse repeat. The best way I've found (been doing an insane amount of work on the actors behind this) is to reboot into single user mode and remove the stuff from the /etc/init.d and all related binaries. Zach W. |
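A rough outline of that single-user-mode cleanup on a SysV-init CentOS 6 box, with a placeholder name standing in for the trojan's randomly named files:

    # Boot with 'single' (or '1') appended to the kernel line in GRUB,
    # so the trojan's init scripts never start.

    # Remove the persistence: chkconfig entry, init script, runlevel links
    # (the name is a placeholder for the random string the trojan uses).
    chkconfig --del jwvxuhtmib 2>/dev/null
    rm -f /etc/init.d/jwvxuhtmib /etc/rc*.d/*jwvxuhtmib

    # Remove the related binaries (locations vary by variant; examples only).
    rm -f /usr/bin/jwvxuhtmib /bin/jwvxuhtmib /tmp/jwvxuhtmib

    # Check cron for anything that would re-download the malware before rebooting.
    grep -r jwvxuhtmib /etc/cron* /var/spool/cron 2>/dev/null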
Zach W 10 Posts |
Jun 23rd 2015
Good write up for those that want a summary of what's happening... Well done.
I recently investigated a system that was compromised via a dictionary attack on the root account via the sshd service. The M.O. was basically:
* Dictionary attack that ended with a successful login on the root account
* The session was ended immediately after the successful login
* ~19.5 hours later a new IP logged into the root account in a single attempt (Questions: [1] was the password shared between attackers, or [2] is the attacker in control of a bigger set of infrastructure than we'd like? Who knows...)
* After the second login, the standard M.O. was followed with regard to malware installation
I would agree with the /lib and /lib64 immutable bit, but for me this is a stage too late: the attacker already has root on your system. It's only a matter of time until they google what is breaking their install script and alter it to remove the immutable bit before installation...
fail2ban is a good way to mitigate many different types of brute-force/DoS attacks, but I don't believe it's a silver bullet for this situation either...
In this case, a best-practice approach would have saved my client:
* Don't let root SSH in. Period.
* Normal users must only be able to SSH in with a certificate and never with a password.
* If you must have a root password (for console access, for example), use a generated one of significant length (16 characters at least) and save it in a trusted password safe (LastPass, for example). It needs to be huge, and it should never actually be used except in an emergency.
In our case it was easy to destroy and rebuild all infected systems, so this is the route we went.
Public indicators:
* Dictionary attack source - 59.63.188.44
* Infection source - 175.126.82.235
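Those recommendations map to a few standard sshd_config directives; a sketch of the relevant settings (restart sshd after editing):

    # /etc/ssh/sshd_config -- hardening along the lines suggested above
    PermitRootLogin no             # root never logs in over SSH
    PasswordAuthentication no      # key/certificate authentication only
    PubkeyAuthentication yes

    # Apply on CentOS 6:
    #   service sshd restart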
colinvanniekerk 1 Posts |
Nov 8th 2015