This is a guest diary submitted by Xavier Mertens.

Writing documentation is a pain for most of us, but it is mandatory! Pentesters and auditors don't like writing their reports once the fun stuff has been completed, and it is the same for developers: writing code and building new products is fun, but good documentation is often missing. By documentation, I mean "network" documentation.

Why? Because today more and more devices are connected (think of the IoT buzz, the "Internet of Things"). These devices are manufactured to use any available network connectivity automatically: configure a wireless network and they are good to go. Classic home networks are based on xDSL or cable modems which provide only basic network services (DHCP, DNS). This is not the best way to protect your data: such networks lack egress filters, so any connected device gets full network connectivity and can potentially exfiltrate juicy data. That's why I argue in favor of a documentation template describing the resources required to operate such "smart" devices smoothly. Here is a good example: I have a Nest thermostat installed at home, and it constantly connects to the following destinations:
Johannes, 4067 Posts, ISC Handler, Mar 16th 2015
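The documentation template Xavier calls for could be sketched as a simple per-device sheet. Everything below is illustrative: the fields, hostnames, and ports are placeholders of my own, not taken from the diary or from Nest's actual endpoints.

```
Device:        <vendor / model / firmware version>
Segment:       <VLAN or subnet the device lives on>
Addressing:    DHCP (reserved lease) or static IP
Required outbound flows:
  - <vendor cloud API hostname or CIDR>   tcp/443   (control traffic)
  - <vendor or pool NTP server>           udp/123   (time sync)
  - <internal DNS resolver only>          udp+tcp/53
Everything else: deny and log
```

With a sheet like this per device, writing the matching egress firewall rules becomes a mechanical exercise rather than guesswork.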
Thanks Xavier and Dr. B. Nice diary, and food for thought.
One additional thing I've done is force internal hosts at home to use my internal DNS as well. You alluded to this, but I thought I'd point it out in more detail. I don't allow udp/tcp 53 outbound from my house, except for allowing my DNS server to look up queries upstream. I've seen several software applications, and recently a hardware device, start attempting to use external DNS hosts first regardless of my DNS and DHCP settings. I realize that some of these applications do this to make sure there's no hokey DNS stuff going on with my network, but I think I deserve to decide that first. ;) I added a SecurityOnion installation after Dr. J's diary regarding data exfiltration in February, and I'm really enjoying the visibility it provides.
Jack G., 6 Posts
Mar 16th 2015
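The egress policy Jack G. describes could be sketched in iptables terms like this. This is an illustrative fragment, not his actual ruleset: it assumes the internal resolver sits at 192.168.1.53 and that LAN traffic traverses the router's FORWARD chain; adjust addresses and chains for your own network.

```shell
# Only the internal resolver may talk to upstream DNS servers
iptables -A FORWARD -p udp --dport 53 -s 192.168.1.53 -j ACCEPT
iptables -A FORWARD -p tcp --dport 53 -s 192.168.1.53 -j ACCEPT

# Log, then drop, any other host trying to reach an external resolver
iptables -A FORWARD -p udp --dport 53 -j LOG --log-prefix "DNS-EGRESS-BLOCK: "
iptables -A FORWARD -p tcp --dport 53 -j LOG --log-prefix "DNS-EGRESS-BLOCK: "
iptables -A FORWARD -p udp --dport 53 -j DROP
iptables -A FORWARD -p tcp --dport 53 -j DROP
```

The LOG rules before the DROPs are what make this useful for detection, not just prevention: every blocked lookup shows up in the firewall log with the offending source address.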
> You alluded to this but thought I'd point it out in more detail. I don't allow udp/tcp 53 outbound from my house, except for allowing my DNS server to look up queries upstream.
I guess I could never really put up with that... Being able to troubleshoot DNS issues is very important, and a 'dig' trace is one of the most useful tools in my toolbox. I have dig installed on all my workstations, and it ought to be in the toolbox of any sysadmin or network op who needs to do further diagnosis after it's determined that accessing a URL gives apparently inconsistent results or inconsistent load times. Forward and reverse DNS query response is vital; the tool might be more important than ping. For example, if a DNS response is wrong, I need to determine whether it's still wrong or whether it's bad data in a cache. I occasionally have applications that need to query all of a target domain's authoritative servers to verify that every server returns the same results for a certain query; this is also a basic DNS monitoring function.
Mysid, 146 Posts
Mar 16th 2015
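For reference, the kind of dig usage Mysid is describing looks like this (standard BIND dig options; example.com stands in for whatever domain is under investigation, and these commands need network access to run):

```shell
# Follow the delegation chain from the root servers, bypassing any local cache
dig +trace www.example.com A

# Query one specific authoritative server directly, with recursion disabled
dig @ns1.example.com www.example.com A +norecurse

# Compare answers across all of the domain's authoritative servers
for ns in $(dig +short example.com NS); do
    echo "== $ns =="
    dig @"$ns" www.example.com A +short +norecurse
done
```

If the loop prints different answers per server, either a zone transfer is lagging or something is serving bad data, which is exactly the cached-vs-authoritative distinction Mysid mentions.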
I'd submit that monitoring all outbound traffic is useful, and not just in a home environment. I have a love/hate relationship with a few custom Snort rules I dropped onto our Snort servers at $DAYJOB$. Basically, they alert on any traffic between certain server subnets and any outside IP address, except for a list of IPs/networks I have exempted.
I hate these rules because they can be noisy. It's ridiculous how many applications want to phone home to some IP or network: a certain backup client that phones home every time backups are run (how creepy is that? I'm told it's just date/time/start/stop/volume sort of info, but still), all the Windows servers downloading various certificate revocation lists, or some app that downloads its updates from an Akamai IP that never seems to be the same one twice (sigh). On the other hand, I also love these rules because they uncover "stupid" (tm), like vendors/contractors surfing to myfacespacebook.com from systems they're supposed to be working on, not surfing the interwobble. They also uncover badly configured hardware/software, like an app determined to use a public NTP server instead of the internal ones we told it to use. The original intent of these rules was to help watch for data exfiltration and to get a handle on what these servers were talking to (in theory, we thought, they shouldn't be talking to ANYONE outside our IP space). It's been useful, but it's also been a bit of a headache occasionally...
Brent, 123 Posts
Mar 16th 2015
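The custom rules Brent describes might look something like this in Snort syntax. This is a sketch under assumptions, not his actual configuration: the subnet and exemption addresses are placeholders, and the variable names are my own.

```
# snort.conf: the watched server subnets and the allowed outside destinations
ipvar SERVER_SUBNETS [10.1.2.0/24,10.1.3.0/24]
ipvar EXEMPT_DST [203.0.113.10,198.51.100.0/24]

# Alert on any traffic from the server subnets to an outside, non-exempt IP
alert ip $SERVER_SUBNETS any -> ![$HOME_NET,$EXEMPT_DST] any \
    (msg:"Server subnet talking to unexpected external host"; sid:1000001; rev:1;)
```

Negating the combined list ![$HOME_NET,$EXEMPT_DST] is what keeps internal traffic and the approved destinations quiet; everything else from those subnets fires an alert, which is where both the noise and the value come from.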
Oh yeah, one other note... blocking outbound DNS, except from DNS proxies you control, is a VERY good idea. I'd go so far as to say it oughta be on everyone's mandatory checklist.
Once upon a time, I had oak (my swiss-army-knife tool for watching logs) alert me any time it saw ANY outbound DNS being blocked, and we caught quite a few infected systems that way. These days, however, I've had to turn that off because of all the newer Linuxes that (stupidly, IMHO) use dnsmasq as a caching DNS client. They're always trying to contact the root servers or other DNS servers on their own instead of using the DNS servers we tell 'em to use via DHCP (sigh; whoever thought using dnsmasq was a good idea deserves a slap). I DO still keep a watchlist of IPs known to be hosting, or to have once hosted, malicious DNS servers, though, and we do still watch our firewall logs for any blocked outbound DNS packets going to those IPs.
Brent, 123 Posts
Mar 16th 2015
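Brent's watchlist check can be sketched as a one-liner over iptables-style firewall logs. The log format, file names, and watchlist IPs below are made up for illustration; the point is matching the destination of each blocked DNS packet against a file of known-bad resolver addresses.

```shell
# bad_dns.txt holds one known-malicious DNS server IP per line (sample data)
printf '198.51.100.7\n203.0.113.99\n' > bad_dns.txt

# A couple of sample blocked-outbound-DNS log lines (iptables LOG style)
cat > fw.log <<'EOF'
kernel: DNS-BLOCK: SRC=10.0.0.12 DST=198.51.100.7 PROTO=UDP DPT=53
kernel: DNS-BLOCK: SRC=10.0.0.44 DST=8.8.8.8 PROTO=UDP DPT=53
EOF

# Keep only blocked DNS packets, pull out the destination IP,
# and report the ones that appear on the watchlist
grep 'DPT=53' fw.log | grep -oE 'DST=[0-9.]+' | cut -d= -f2 | grep -Fx -f bad_dns.txt
# -> 198.51.100.7
```

The grep -Fx flags matter: -F treats the watchlist entries as fixed strings and -x requires a whole-line match, so 8.8.8.8 does not accidentally match a watchlist entry like 198.8.8.80.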