Published: 2024-05-22

NMAP Scanning without Scanning (Part 2) - The ipinfo API

Going back a year or so, I wrote a story on passive recon, specifically the ipinfo API (https://isc.sans.edu/diary/28596).  This API returns various information on an IP address: the registered owning organization and ASN, and a (usually reasonably accurate) approximation of where that IP might reside.
Looking at yesterday's story, I thought to myself, why not port my script from last year to an NMAP NSE script?  So I did!

Using the shodan-api nmap script as a template, I updated the following lines:

The actual API call of course is different:
  local response = http.get("ipinfo.io", 443, "/".. target .."/json?token=" .. registry.apiKey, {any_af = true})

This was a simple change - since the API key is still represented as a parameter in the URI, this was just plug-n-play.

Also, because of differing return formats, in that same function I removed all the error checking of the returned values and replaced it with a simple return:
  return response.body

Note that there is a line
-- local apikey =""
If you want to embed your own API key into this script, remove the "--" (comment characters) and put your key in that line.
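For reference, here is what that call looks like outside of NSE - a minimal Python sketch that builds the same URL shape the http.get line above uses, and parses a sample response body (the sample JSON is abbreviated from the output shown below; no network call is made):

```python
import json

def ipinfo_url(target, api_key):
    # Mirrors the NSE script's http.get: https://ipinfo.io/<target>/json?token=<key>
    return "https://ipinfo.io/" + target + "/json?token=" + api_key

# Abbreviated sample of an ipinfo response body (the script returns it as-is)
sample_body = '{"ip": "8.8.8.8", "org": "AS15169 Google LLC", "country": "US"}'

info = json.loads(sample_body)
print(ipinfo_url("8.8.8.8", "MYKEY"))
print(info["org"], "-", info["country"])
```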

As with the Shodan script, you can tack IPINFO on to an existing active scan, or you can run it passively with "-sn -Pn -n" as:

nmap -Pn -sn -n --script ipinfo.nse --script-args "ipinfo.apikey=<your apikey goes here>"
Starting Nmap 7.92 ( https://nmap.org ) at 2024-05-21 11:34 Eastern Daylight Time
Nmap scan report for
Host is up.

Host script results:
| ipinfo: {
|   "ip": "",
|   "hostname": "dns.google",
|   "anycast": true,
|   "city": "Mountain View",
|   "region": "California",
|   "country": "US",
|   "loc": "37.4056,-122.0775",
|   "org": "AS15169 Google LLC",
|   "postal": "94043",
|   "timezone": "America/Los_Angeles"
|_}

Post-scan script results:
|_ipinfo: IPInfo done: 0 hosts up.
Nmap done: 1 IP address (1 host up) scanned in 0.41 seconds

Simple as that!  Now I can return host ownership and location info with my nmap scans, or if I'm in a hurry, instead of the normal nmap scan!  This information can be pretty handy when analyzing potential attacks - for instance, looking at a failed authentication to see if the geography matches where that person could conceivably be.  You only have to go back a few days to my post on VPN credential stuffing attacks for an example of this.
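As a trivial sketch of that geography check (the allowed-country list and login records here are entirely made up for illustration):

```python
# Hypothetical triage: flag failed logins whose ipinfo country code falls
# outside of where the user could plausibly be.
allowed_countries = {"US", "CA"}   # assumption: a North American workforce

failed_logins = [
    {"user": "jsmith", "country": "US"},
    {"user": "jsmith", "country": "UA"},   # the "impossible geography" case
]

suspicious = [f for f in failed_logins if f["country"] not in allowed_countries]
print(suspicious)
```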

I've got one more of these APIs in the hopper - if you have another recon API you'd like to see in an nmap script, by all means let me know in our comment form!

All of my recon scripts (both the command-line and the nmap scripts) are posted in my github: https://github.com/robvandenbrink/recon_scripts

Rob VandenBrink


Published: 2024-05-21

Scanning without Scanning with NMAP (APIs FTW)

A year ago I wrote up using Shodan's API to collect info on open ports and services without actually scanning for them (Shodan's API for the (Recon) Win!).  This past week I was trolling through the NMAP scripts directory, and imagine my surprise when I stumbled on shodan-api.nse.
So the network scanner we all use daily can be used to scan without actually scanning?  Apparently yes!

First the syntax:
nmap <target> --script shodan-api --script-args 'shodan-api.apikey=SHODANAPIKEY'
(note: use double quotes for script-args if you are doing this in Windows)

This still does a basic scan of the target host though.  To do this without scanning, without even sending any packets to your host, add:

-sn do a ping scan (ie we're not doing a port scan)
-Pn Don't ping the host, just assume that it's online

Neat trick there eh?  This essentially tells nmap to do nothing for each host in the target list, but don't forget that script we asked you to run!

This also has the advantage of doing the "scan" even if the host is down (or doesn't respond to a ping).

Plus, just to be complete:
-n  Don't even do DNS resolution
This way NMAP isn't sending anything to the host or even to hosts under the client's control (for instance if they happen to host their own DNS).

If you're doing a whole subnet, or the output is large enough to scroll past your buffer, or if you want much (much) more useful output, add shodan-api.outfile=<filename> to your script-args clause.

Let's put this all together:

nmap -sn -Pn -n www.cisco.com --script shodan-api --script-args "shodan-api.outfile=out.csv,shodan-api.apikey=<my-api-key-not-yours>"
Starting Nmap 7.92 ( https://nmap.org ) at 2024-05-17 09:53 Eastern Daylight Time
Nmap scan report for www.cisco.com (
Host is up.

Host script results:
| shodan-api: Report for (www.static-cisco.com, www.cisco.com, www.mediafiles-cisco.com, www-cloud-cdn.cisco.com, a184-26-152-97.deploy.static.akamaitechnologies.com)
| 80    tcp    AkamaiGHost
|_443   tcp    AkamaiGHost

Post-scan script results:
| shodan-api: Shodan done: 1 hosts up.
|_Wrote Shodan output to: out.csv
Nmap done: 1 IP address (1 host up) scanned in 1.20 seconds

Neat eh?  It collects the product and version info (when it can get it).  The CSV file looks like this:


This file format imports directly into a usable format in PowerShell, Python or just about any tool you might desire - even Excel :-)
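For instance, a few lines of stdlib Python will pull the CSV into a list of dicts (the column names below are placeholders - use whatever header line the script actually writes):

```python
import csv
import io

# Stand-in for out.csv; the real column names come from the script's header line
sample = "ip,port,proto,product\n1.2.3.4,80,tcp,AkamaiGHost\n1.2.3.4,443,tcp,AkamaiGHost\n"

rows = list(csv.DictReader(io.StringIO(sample)))
print(len(rows), "ports")          # one row per port, header excluded
print(rows[0]["port"], rows[0]["product"])
```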

Looking at a more "challenging" scan target:

nmap -sn -Pn -n isc.sans.edu --script shodan-api --script-args "shodan-api.outfile=out.csv,shodan-api.apikey=<my-api-key-not-yours>"

.. and so on.

Look at line 4!  If you've ever done a UDP scan, you know that it can take for-e-ver!  Since this is just an API call, it collects both tcp and udp info from Shodan.

How many ports are in the output?
type out.csv | wc -l

159 ports, that's how many! (subtract one for the header line)  This would have taken a while with a regular port scan, but with a shodan query it finishes in how long?

Post-scan script results:
| shodan-api: Shodan done: 1 hosts up.
|_Wrote Shodan output to: out.csv
Nmap done: 1 IP address (1 host up) scanned in 1.20 seconds

Yup, 1.2 seconds!

This script is a great addition to nmap - it allows you to do a quick and dirty scan for what ports and services have been available recently, with a bit of rudimentary info attached.

Did you catch that last hint?  If you're doing a pentest, it's well worth digging into that word "recently".  Looking at ports that are in the Shodan list but aren't in a real portscan (what you'd get from nmap -sT or -sU) can be very interesting.  These are services that the client has recently disabled, maybe just for the duration of the pentest.  For instance, that FTP server or totally vulnerable web or application server that they have open "only when they need it" (translation: always, except during the annual pentest).  If you can pull a diff report between what's in the Shodan output and what's actually there now - using archive.org, for instance - that's well worth looking into.  If you do find something good, my bet is that it falls into your scope!  If not, you should update your scope to "services found during the test in the target IP ranges or DNS scopes" or similar.  You don't want something like this excluded simply because it's (kinda) not there during the actual assessment :-)
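Once you have both port lists, that diff is a one-liner; a sketch with invented port numbers:

```python
# Ports Shodan saw recently vs. ports open in the live scan (made-up data)
shodan_ports = {21, 22, 80, 443, 8080}
live_ports = {22, 80, 443}

# Services that were visible recently but are "gone" during the assessment -
# exactly the ones worth asking the client about
recently_closed = shodan_ports - live_ports
print(sorted(recently_closed))
```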

Got another API you'd like to see used in NMAP?  Please use our comment form.  Stay tuned - I have a list, but if you've got one I haven't thought of, I'm happy to add another one!

Rob VandenBrink


Published: 2024-05-20

Analyzing MSG Files

.msg email files are OLE files and can be analyzed with my tool oledump.py.

They have a lot of streams, so finding the information you need (body, headers, attachments, ...) can take some time searching.

That's why I have a plugin that summarizes important information from .msg files: plugin_msg.summary.py.

This is what its output looks like when I run it on a .msg file with a malicious attachment:

While showing a friend my plugin's features, I got the idea to make some updates to this plugin.

First, when attachments are inline and/or hidden, that information is added to the attachment overview, as can be seen in the screenshot above for attachment 0.

Inline attachments are typically pictures that have been pasted into the email's body, and do not appear as separate attachments when opened in Outlook, for example.

If you are analyzing an email for malicious attachments, you can first focus on attachments that are not inline.
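In script form that triage step is just a filter; the attachment records below are a hypothetical stand-in for the plugin's attachment overview:

```python
# Hypothetical attachment summaries, modeled on the plugin's overview output
attachments = [
    {"index": 0, "name": "image001.png", "inline": True,  "hidden": True},
    {"index": 1, "name": "invoice.doc",  "inline": False, "hidden": False},
]

# Focus first on attachments that are NOT inline - those are the ones a
# recipient would actually see and open
to_review = [a for a in attachments if not a["inline"]]
print([a["name"] for a in to_review])
```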

This information also appears when outputting JSON information for the analyzed .msg file:

Second, I added a new option to output JSON information for the attachments with their contents, so that these attachments can be analyzed as I explained in recent diary entries "Analyzing PDF Streams" and "Another PDF Streams Example: Extracting JPEGs".

This JSON output can be piped into my other tools that support this JSON format, like file-magic.py (to identify the file type based on its content):

If an attachment is inline and/or hidden, this -J output option prefixes the attachment name in the JSON output:

And I also updated my plugin plugin_msg.py to parse property streams. This is where information like inline and hidden is stored:

As can be seen in the screenshots, there are also properties for timestamps like creation time and last modification time.
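Those timestamp properties are stored as 64-bit FILETIME values (100-nanosecond intervals since January 1, 1601 UTC), so decoding one yourself looks like this:

```python
from datetime import datetime, timedelta

def filetime_to_datetime(ft):
    # FILETIME counts 100-nanosecond intervals since 1601-01-01 00:00:00 UTC
    return datetime(1601, 1, 1) + timedelta(microseconds=ft // 10)

# 116444736000000000 is the Unix epoch (1970-01-01) expressed as a FILETIME
print(filetime_to_datetime(116444736000000000))
```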


Didier Stevens
Senior handler


Published: 2024-05-18

Wireshark 4.2.5 Released

Wireshark release 4.2.5 fixes 3 vulnerabilities (%%cve:2024-4853%%, %%cve:2024-4854%% and %%cve:2024-4855%%) and 19 bugs.

Didier Stevens

Senior handler



Published: 2024-05-17

Another PDF Streams Example: Extracting JPEGs

In my diary entry "Analyzing PDF Streams" I showed how to use my tools file-magic.py and myjson-filter.py together with my PDF analysis tool pdf-parser.py to analyze PDF streams en masse.

In this diary entry, I will show how file-magic.py can augment JSON data produced by pdf-parser.py with file-type information that can then be used by myjson-filter.py to filter out files you are interested in. As an example, I will extract all JPEGs from a PDF document.

First, let's produce statistics with pdf-parser.py's option -a:

This confirms that there are many "Indirect objects with a stream" in this document.

Next, I let pdf-parser.py produce JSON output (--jsonoutput) with the content of the unfiltered streams, and I let file-magic.py consume this JSON output (--jsoninput) to try to identify the file type of each stream based on its content (since streams don't have a filename, there is no filename extension and we need to look at the content):

If we use option -t to let file-magic.py just output the file type (and not the file/stream name), we can make statistics with my tool count.py and see that the PDF document contains many JPEG files:

Now we want to write all of these JPEG images to disk. We use file-magic.py again in JSON mode, but let it also output the same JSON data augmented with file-type information (--jsonoutput):

Next, this JSON data is consumed by myjson-filter.py and filtered on the file type with the (case-sensitive) regular expression JPEG: -t JPEG.

Finally, we write the JPEG images to disk with -W hashext:jpeg: this writes each JPEG stream to disk with a filename consisting of the sha256 of the file's content and extension .jpeg.

By using the hash of the content as filename, there are no duplicate pictures:
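The dedup trick is simply content-addressing; a minimal sketch of the same idea (the byte strings are fabricated stand-ins for extracted streams):

```python
import hashlib

# Three extracted streams; two have identical content. JPEG data starts
# with the magic bytes FF D8 FF, which is what content-based detection keys on.
streams = [b"\xff\xd8\xff\xe0AAAA", b"\xff\xd8\xff\xe0AAAA", b"\xff\xd8\xff\xe0BBBB"]

written = {}
for data in streams:
    if data.startswith(b"\xff\xd8\xff"):              # crude JPEG check
        name = hashlib.sha256(data).hexdigest() + ".jpeg"
        written[name] = data                          # same content -> same name

print(len(written), "unique JPEGs")
```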

Should you want to reproduce the commands in these diary entries with the exact same PDF files I used, my old ebook on PDF analysis can be found here and the analysis on TLS backdoors done by a colleague can be found here.

Didier Stevens
Senior handler


Published: 2024-05-16

Why yq? Adventures in XML

I was recently asked to <ahem> "recover" a RADIUS key from a Microsoft NPS server.  No problem, I think - just export the config and it's all there in clear text, right?

... yes, sort of ...

The XML file that gets output is of course perfectly valid XML, but that doesn't mean it's easy for a human to read - it's as scrambled as my weekend eggs.  I got my answer, but then of course started looking for a way to get the answer more easily, something I could document and hand off to my client.  In other words, I started on the quest for a "jq"-like tool for XML.  Security work often involves taking input in one text format and converting it to something that's human readable, or more easily parsed by the next tool in the pipeline (see below).

XMLLint is a pretty standard one that's in Linux, you can get it by installing libxml2.  Kali has it installed by default - usage is very straightforward:

xmllint - < file.xml

or

cat file.xml | xmllint -

There are a bunch of output options, but because it's not so Windows-friendly I didn't dig too far - run man xmllint or browse here: https://gnome.pages.gitlab.gnome.org/libxml2/xmllint.html if you need more than the basics on this.

However, finding something like this for Windows turned into an adventure - there's a port of xmllint for Windows, but it's in that 10-year age range that makes me a bit leery to install it.  With a bit of googling I found yq.

This is a standalone install on most Linux distros (sudo apt-get install yq or whatever), and has a few standard install methods for Windows:

  • you can just download the binary and put it in your path
  • choco install yq
  • winget install --id MikeFarah.yq

yq is written to mimic jq, as you'd expect from the name, but will take JSON, YAML, XML, CSV and TSV files.  It's not as feature-heavy as jq, but it's got enough - and let's face it, most of us use these for pretty-printed output so that we can grep against it anyway.
I especially liked it for today's problem because I can adjust the indent - since the NPS XML export has a fairly deep hierarchy, I went with an indent of 1:

type nps-export.xml | yq --input-format xml --output-format xml --indent 1 > pretty.xml

A quick peek at the file found me my answer in the pretty (and grep-able) format that I wanted.  A typical RADIUS Client section looks like:

 <Clients name="Clients">
     <Client_ _Template_Guid="_Template_Guid" xmlns:dt="urn:schemas-microsoft-com:datatypes" dt:dt="string">{00000000-0000-0000-0000-000000000000}</Client_>
     <IP_Address xmlns:dt="urn:schemas-microsoft-com:datatypes" dt:dt="string">IP.Address.Goes.Here</IP_Address>
     <NAS_Manufacturer xmlns:dt="urn:schemas-microsoft-com:datatypes" dt:dt="int">0</NAS_Manufacturer>
     <Opaque_Data xmlns:dt="urn:schemas-microsoft-com:datatypes" dt:dt="string"></Opaque_Data>
     <Radius_Client_Enabled xmlns:dt="urn:schemas-microsoft-com:datatypes" dt:dt="boolean">1</Radius_Client_Enabled>
     <Require_Signature xmlns:dt="urn:schemas-microsoft-com:datatypes" dt:dt="boolean">0</Require_Signature>
     <Shared_Secret xmlns:dt="urn:schemas-microsoft-com:datatypes" dt:dt="string">SuperSecretSharedKeyGoesHere</Shared_Secret>
     <Template_Guid xmlns:dt="urn:schemas-microsoft-com:datatypes" dt:dt="string">{1A1917B8-D2C0-43B3-8144-FAE167CE9316}</Template_Guid>

Or I could dump all the shared secrets with the associated IP Addresses with:

type pretty.xml | findstr "IP_Address Shared_Secret"

or

cat pretty.xml | grep 'IP_Address\|Shared_Secret'
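If you'd rather skip the pretty-print step entirely, Python's stdlib XML parser can pull those pairs directly; a sketch against a trimmed-down stand-in for the export (the real file nests much deeper and carries the dt: namespace attributes):

```python
import xml.etree.ElementTree as ET

# Trimmed-down stand-in for one Clients section of the NPS export
xml_snippet = """
<Clients name="Clients">
  <Client1>
    <IP_Address>192.0.2.10</IP_Address>
    <Shared_Secret>SuperSecretSharedKeyGoesHere</Shared_Secret>
  </Client1>
</Clients>
"""

root = ET.fromstring(xml_snippet)
pairs = [(c.findtext("IP_Address"), c.findtext("Shared_Secret")) for c in root]
print(pairs)
```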

After all that, I think I've found my go-to for text file conversions - in particular XML or YAML, especially in Windows.

Full details on these two tools discussed:

If you've got a different text formatter (or un-formatter), or if you've used xmllint or yq in an interesting way, please let us know about it in our comment form!

Rob VandenBrink


Published: 2024-05-15

Got MFA? If not, Now is the Time!

I had an interesting call from a client recently - they had a number of "net use" and "psexec" commands pop up on a domain controller, all called from PSEXEC (thank goodness for a good EDR deployed across the board!!).  The source IP was a VPN session.

Anyway, we almost immediately declared an incident, and the VPN that was in use - which had just userid/password authentication - was the ingress point.  We found a US employee with an active VPN session from Europe (the classic "impossible geography session"), so the standard "kill the session, deactivate the account / change the password" action ensued.
This was followed by a serious conversation - really, your userid/password-protected VPN is only as strong as your weakest password.  And you KNOW that some folks have kept their "Welcome123" password that they got at their last "I forgot my password" helpdesk call.  Also, your userid/password VPN is only as strong as the weakest other site where your folks have used their work credentials.

Anyway, the actions and discussion above were followed by the "who would want to target us?" conversation, so off to the logs we went.

The standard Cisco VPN rejected login syslog message looks like this:

Local4.Info     <fw.ip.add.ress>    %ASA-6-113005: AAA user authentication Rejected : reason = AAA failure : server = <rad.ius.server.ip> : user = ***** : user IP = <att.ack.er.ip>

So, we started by dumping all the Rejected logins for the day (note that this client has syslog in Windows):

type fw.ip.add.ress.txt | find "Rejected" > aaafail.txt

Now let's see how many events we have in a day:

type aaafail.txt | wc -l

Let's look at a representative timeslice.  We'll look at:

  • 5pm-6pm (so the time is 17:xx)
  • remove any repeating space characters (tr -s " ")
  • field 24 is the source IP address; extract that with "cut"
  • sort | uniq -c gives just unique addresses with counts; sort /r then sorts them in descending order
  • After that, I'm just looking (manually) at the attacking hosts with a 10 count or higher
type aaafail.txt | find " 17:" | tr -s " " | cut -d " " -f 24 | sort | uniq -c | sort /r
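The same slice-and-count works in a few lines of Python if you'd rather not chain shell tools (the reject lines below are fabricated, in the ASA syslog shape shown above):

```python
from collections import Counter

# Fabricated AAA-reject lines in the %ASA-6-113005 shape
lines = [
    "%ASA-6-113005: AAA user authentication Rejected : ... : user = ***** : user IP = 203.0.113.5",
    "%ASA-6-113005: AAA user authentication Rejected : ... : user = ***** : user IP = 203.0.113.5",
    "%ASA-6-113005: AAA user authentication Rejected : ... : user = ***** : user IP = 198.51.100.7",
]

# The source IP is the last "= "-delimited field on each line
ips = Counter(line.rsplit("= ", 1)[1] for line in lines)
for ip, count in ips.most_common():
    print(count, ip)
```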

The first thing we notice is that the first IP stands out, so let's block that.
Now we'll look at those IP's a bit closer using ipinfo, see my story on this utility here: https://isc.sans.edu/diary/Using+Passive+DNS+sources+for+Reconnaissance+and+Enumeration/28596

  "ip": "",
  "hostname": "cp.srv.plusdatacenter.com",
  "city": "Frankfurt am Main",
  "region": "Hesse",
  "country": "DE",
  "loc": "50.1155,8.6842",
  "org": "AS51167 Contabo GmbH",
  "postal": "60306",
  "timezone": "Europe/Berlin"

Next we note that these "top 20" hosts generate 953 of the requests for the hour, so this really does look like one outlier plus ~19-20 hosts in a managed cluster.

OK, let's look at those other two subnets that are over-represented in this top 20 list:

  "ip": "",
  "city": "Sandnes",
  "region": "Rogaland",
  "country": "NO",
  "loc": "58.8524,5.7352",
  "org": "AS201454 UPHEADS AS",
  "postal": "4301",
  "timezone": "Europe/Oslo"

  "ip": "",
  "city": "Kyiv",
  "region": "Kyiv City",
  "country": "UA",
  "loc": "50.4547,30.5238",
  "org": "AS208467 MagicService LLC",
  "postal": "03027",
  "timezone": "Europe/Kyiv"

Note that we're 3 searches in and we still haven't found any of the traditional "boogeymen".  No Russia, no DPRK, no Iran (yet).

So, keeping in mind that we're just playing with part of the attack, I started blocking subnets, ASN's, and countries.

We blocked the subnets above, and the attack shifted within seconds to ramp up from a Cloud Service Provider in Germany.  We blocked their address space, and it shifted to a CSP in France.  Two more CSP's later, and we finally cut the "top 20" volumes down, and our high volume hosts were down to 5 hosts in Russia.
Blocking Russia shifted the attack to India, then South America.  

You see the patterns here, and have hopefully drawn the same conclusions.

The attackers have pre-built "malicious assets" wherever they can spin up legitimate free or low-cost cloud hosts.  The attacker is not attacking from their own IP space or even their own country.  The entire thing is automated - over the course of that day we saw malicious attacks from roughly 1100 IP addresses as we blocked various subnets and ASNs for various (mostly legitimate) cloud providers.

Looking at the other half of the equation, this attacker in particular was using account names that were not related to the organization being attacked - the userids being used were a mix of all formats, plus favourites such as admin, administrator and root.  So it looks like they were using a combination of standard password dumps as input.  I'd have been more concerned - and wouldn't have played around so much - if the attacker had harvested user account info from LinkedIn and similar sources.  If the credential stuffing attack contains mostly legitimate people names, the chances of success are WAY higher - they're most likely combining legit names with password dump data that matches those names.  Normally we see this sort of thing with red team or more targeted malicious activity, where the per-company costs are a bit higher, but because things are targeted the attack tends to succeed sooner.  In that situation we'd have likely shut down the VPN and implemented MFA offline.  Password dump files such as this are easy to come by, and are generally free - though you can certainly purchase targeted lists, or even purchase access to particular companies from "Access as a Service" companies in the criminal supply chain.  Don't believe folks who use phrases like "the dark web" when describing these (though that does exist too).

Needless to say, we did a crash migration to MFA for their VPN, we had them over within a couple of hours of making the decision.  Since their email was already using MFA, this was free and no fuss at all (thanks Microsoft!) - the MFA prompts were already familiar to the user base.

Lessons learned?

  • Nobody is targeting you, they are targeting everyone that hasn't implemented MFA.  Or possibly targeting even the MFA sites, since it's tough to tell either way until you get to the MFA prompt.
  • Also from the connection volumes, the attackers were very careful not to lock out accounts.  Each IP address has roughly 15 attempts max in an hour, so that's once every ~4 minutes.
  • With just one hit every 4 minutes, just this one example cluster has gobs of capacity to scale up and easily target hundreds of other organizations.
  • They are targeting VPNs.  None of this "pivot through a website" gymnastics, then "pivot out of the DMZ" heartache, they're after the front door and full access to the network.  Though they're still targeting websites too.  Anything with a login prompt is fair game.
  • This entire thing is automated.  As I shut down each cluster of addresses, a new cluster would pop up elsewhere in the world within seconds.  The old cluster is still chugging along against its other targets.
  • The cluster of hosts at any given time likely are not the actual attacking hosts.  Remember, the attackers have a significant application and data management challenge here.  They need to centrally store the "prospective credentials" and actual compromised credentials, as well as keep track of hundreds of targets and where each target is in the campaign.  So these clusters of hosts are likely proxy servers being used by a central cluster of actual servers backed by a database and a decent application, or at least a pretty good script.  This means my estimate of hundreds of targets is likely on the light side.
  • Geo-blocking is getting less useful over time - while we still do see attacks from the countries you might expect, anyone who is any good has automation to source their attack from almost anywhere, and once they see you blocking them, from anywhere else.  This argues further for central control and data management, since I'd bet only our attacker's proxy servers were jumping from datacenter to datacenter - there's no profit in moving the attack against any given organization unless you have to.  This means that claims like "we are being targeted by country X" are very difficult to attribute (this is not new).
  • The "foreign influence attacks" headlines (check the media lately in Canada) aren't worth the headline - OF COURSE the attack is coming from outside of your country.  Nobody is going to mount an attack where their local constabulary can roll up and knock down their door.  These folks take great pains (usually) to operate in countries that their government doesn't play so nice with.  Good luck finding the actual servers though - unless you compromise a proxy host, that is (this may not be legal in your jurisdiction, and this was NOT advice).
  • Attack styles do tend to come in waves.  The classic "drop powershell to download the malware" email attacks have declined somewhat since Microsoft blocked most scripting in Office.  We had "whale phishing" attacks that drove MFA for email a couple of years back.  In more recent times we've seen attacks against vulnerabilities in everyone's edge appliances (firewalls, VPNs, file transfer, terminal service proxies etc).  Credential stuffing against userid/password logins seems to be seeing an uptick lately.  But guess what - they're all in play, all the time.  None of these are new, and just because one hits the headlines doesn't mean the others are not just as active as they were last month or last year.  Credential stuffing attacks very much like this one have been a part of the landscape since the 90's (or before) - they're just too cheap and easy to set and forget for the attacker, especially these days.

The "moral of the story"?

  • If you haven't implemented MFA, now is the time.
  • If you have just userid/password protection on your VPN and are not compromised, you likely will be soon.
  • If you think you're not compromised, that doesn't mean that you're not.  The attacker in this incident is likely not the only one, and is likely not even the only one from today.  They'll likely sell any compromised credentials to the real attacker that's in it for extortion of one kind or another.  That real attacker likely is purchasing the credentials from one provider, the malware from another and so on - the bad guys are just as much focused on "As A Service" as regular IT teams. In fact, the attackers are regular IT teams, just (mostly) operating outside of their target jurisdictions.
  • So you might be compromised a good long time before you see anything obvious in your daily operations that will tell you that you have a problem.
  • If you are still running simple antivirus, you need to look at better options.  
  • If you don't have a SIEM that will alert you to attacks that show up in your logs, then you won't know about your attacks until they succeed.

Anyway, this story went on longer than I had planned.  Long story short, if you have anything (VPN, Website, SSH, application, whatever) facing the internet that has a simple userid / password login, then you should probably rethink that decision in 2024.

Rob VandenBrink


Published: 2024-05-14

Microsoft May 2024 Patch Tuesday

This month we got patches for 67 vulnerabilities. Of these, one is critical, and one is already being exploited according to Microsoft.

The critical vulnerability is a Remote Code Execution (RCE) affecting Microsoft SharePoint Server (CVE-2024-30044). According to the advisory, an authenticated attacker with Site Owner permissions or higher could upload a specially crafted file to the targeted SharePoint Server and craft specialized API requests to trigger deserialization of the file's parameters. This would enable the attacker to perform remote code execution in the context of the SharePoint Server. The CVSS for the vulnerability is 8.8.

The zero-day vulnerability is an elevation of privilege in the Windows DWM (Desktop Window Manager) Core Library (CVE-2024-30051). According to the advisory, an attacker who successfully exploited this vulnerability could gain SYSTEM privileges. The CVSS for the vulnerability is 7.8.

There is an important vulnerability affecting MinGit software (CVE-2024-32002), used by Microsoft Visual Studio, caused by an improper limitation of a pathname to a restricted directory ('Path Traversal') making it susceptible to Remote Code Execution. It is being documented in the Security Update Guide to announce that the latest builds of Visual Studio are no longer vulnerable. Please see Security Update Guide Supports CVEs Assigned by Industry Partners for more information. The CVSS for the vulnerability is 9.0 – the highest for this month.

See the full list of patches:

CVE Disclosed Exploited Exploitability (old versions) current version Severity CVSS Base (AVG) CVSS Temporal (AVG)
.NET and Visual Studio Remote Code Execution Vulnerability
%%cve:2024-30045%% No No - - Important 6.3 5.5
Azure Migrate Cross-Site Scripting Vulnerability
%%cve:2024-30053%% No No - - Important 6.5 5.9
CVE-2024-32002 Recursive clones on case-insensitive filesystems that support symlinks are susceptible to Remote Code Execution
%%cve:2024-32002%% No No - - Important 9.0 7.8
Chromium: CVE-2024-4331 Use after free in Picture In Picture
%%cve:2024-4331%% No No - - -    
Chromium: CVE-2024-4368 Use after free in Dawn
%%cve:2024-4368%% No No - - -    
Chromium: CVE-2024-4558 Use after free in ANGLE
%%cve:2024-4558%% No No - - -    
Chromium: CVE-2024-4559 Heap buffer overflow in WebAudio
%%cve:2024-4559%% No No - - -    
Chromium: CVE-2024-4671 Use after free in Visuals
%%cve:2024-4671%% No No - - -    
DHCP Server Service Denial of Service Vulnerability
%%cve:2024-30019%% No No - - Important 6.5 5.7
Dynamics 365 Customer Insights Spoofing Vulnerability
%%cve:2024-30047%% No No - - Important 7.6 6.6
%%cve:2024-30048%% No No - - Important 7.6 6.6
GitHub: CVE-2024-32004 Remote Code Execution while cloning special-crafted local repositories
%%cve:2024-32004%% No No - - Important 8.1 7.1
Microsoft Bing Search Spoofing Vulnerability
%%cve:2024-30041%% No No - - Important 5.4 4.7
Microsoft Brokering File System Elevation of Privilege Vulnerability
%%cve:2024-30007%% No No - - Important 8.8 7.7
Microsoft Edge (Chromium-based) Spoofing Vulnerability
%%cve:2024-30055%% No No Less Likely Less Likely Low 5.4 4.7
Microsoft Excel Remote Code Execution Vulnerability
%%cve:2024-30042%% No No - - Important 7.8 6.8
Microsoft Intune for Android Mobile Application Management Tampering Vulnerability
%%cve:2024-30059%% No No - - Important 6.1 5.8
Microsoft PLUGScheduler Scheduled Task Elevation of Privilege Vulnerability
%%cve:2024-26238%% No No - - Important 7.8 6.8
Microsoft Power BI Client JavaScript SDK Information Disclosure Vulnerability
%%cve:2024-30054%% No No - - Important 6.5 5.7
Microsoft SharePoint Server Information Disclosure Vulnerability
%%cve:2024-30043%% No No - - Important 6.5 5.7
Microsoft SharePoint Server Remote Code Execution Vulnerability
%%cve:2024-30044%% No No - - Critical 8.8 7.7
Microsoft WDAC OLE DB provider for SQL Server Remote Code Execution Vulnerability
%%cve:2024-30006%% No No - - Important 8.8 7.7
Microsoft Windows SCSI Class System File Elevation of Privilege Vulnerability
%%cve:2024-29994%% No No - - Important 7.8 6.8
NTFS Elevation of Privilege Vulnerability
%%cve:2024-30027%% No No - - Important 7.8 6.8
Visual Studio Denial of Service Vulnerability
%%cve:2024-30046%% Yes No - - Important 5.9 5.2
Win32k Elevation of Privilege Vulnerability
%%cve:2024-30028%% No No - - Important 7.8 6.8
%%cve:2024-30030%% No No - - Important 7.8 6.8
%%cve:2024-30038%% No No - - Important 7.8 6.8
Windows CNG Key Isolation Service Elevation of Privilege Vulnerability
%%cve:2024-30031%% No No - - Important 7.8 6.8
Windows Cloud Files Mini Filter Driver Information Disclosure Vulnerability
%%cve:2024-30034%% No No - - Important 5.5 4.8
Windows Common Log File System Driver Elevation of Privilege Vulnerability
%%cve:2024-29996%% No No - - Important 7.8 6.8
%%cve:2024-30025%% No No - - Important 7.8 6.8
%%cve:2024-30037%% No No - - Important 7.5 6.5
Windows Cryptographic Services Information Disclosure Vulnerability
%%cve:2024-30016%% No No - - Important 5.5 4.8
Windows Cryptographic Services Remote Code Execution Vulnerability
%%cve:2024-30020%% No No - - Important 8.1 7.1
Windows DWM Core Library Elevation of Privilege Vulnerability
%%cve:2024-30032%% No No - - Important 7.8 6.8
%%cve:2024-30035%% No No - - Important 7.8 6.8
%%cve:2024-30051%% Yes Yes - - Important 7.8 7.2
Windows DWM Core Library Information Disclosure Vulnerability
%%cve:2024-30008%% No No - - Important 5.5 4.8
Windows Deployment Services Information Disclosure Vulnerability
%%cve:2024-30036%% No No - - Important 6.5 5.7
Windows Hyper-V Denial of Service Vulnerability
%%cve:2024-30011%% No No - - Important 6.5 5.7
Windows Hyper-V Remote Code Execution Vulnerability
%%cve:2024-30010%% No No - - Important 8.8 7.7
%%cve:2024-30017%% No No - - Important 8.8 7.7
Windows Kernel Elevation of Privilege Vulnerability
%%cve:2024-30018%% No No - - Important 7.8 6.8
Windows MSHTML Platform Security Feature Bypass Vulnerability
%%cve:2024-30040%% No Yes - - Important 8.8 8.2
Windows Mark of the Web Security Feature Bypass Vulnerability
%%cve:2024-30050%% No No - - Moderate 5.4 5.0
Windows Mobile Broadband Driver Remote Code Execution Vulnerability
%%cve:2024-29997%% No No - - Important 6.8 5.9
%%cve:2024-29998%% No No - - Important 6.8 5.9
%%cve:2024-29999%% No No - - Important 6.8 5.9
%%cve:2024-30000%% No No - - Important 6.8 5.9
%%cve:2024-30001%% No No - - Important 6.8 5.9
%%cve:2024-30002%% No No - - Important 6.8 5.9
%%cve:2024-30003%% No No - - Important 6.8 5.9
%%cve:2024-30004%% No No - - Important 6.8 5.9
%%cve:2024-30005%% No No - - Important 6.8 5.9
%%cve:2024-30012%% No No - - Important 6.8 5.9
%%cve:2024-30021%% No No - - Important 6.8 5.9
Windows Remote Access Connection Manager Information Disclosure Vulnerability
%%cve:2024-30039%% No No - - Important 5.5 4.8
Windows Routing and Remote Access Service (RRAS) Remote Code Execution Vulnerability
%%cve:2024-30009%% No No - - Important 8.8 7.7
%%cve:2024-30014%% No No - - Important 7.5 6.6
%%cve:2024-30015%% No No - - Important 7.5 6.5
%%cve:2024-30022%% No No - - Important 7.5 6.5
%%cve:2024-30023%% No No - - Important 7.5 6.5
%%cve:2024-30024%% No No - - Important 7.5 6.5
%%cve:2024-30029%% No No - - Important 7.5 6.5
Windows Search Service Elevation of Privilege Vulnerability
%%cve:2024-30033%% No No - - Important 7.0 6.1
Windows Win32 Kernel Subsystem Elevation of Privilege Vulnerability
%%cve:2024-30049%% No No - - Important 7.8 6.8


Renato Marinho
Morphus Labs


Published: 2024-05-14

Apple Patches Everything: macOS, iOS, iPadOS, watchOS, tvOS updated.

Apple today released updates for its various operating systems. The updates cover iOS, iPadOS, macOS, watchOS and tvOS. A standalone update for Safari was released for older versions of macOS. One already exploited vulnerability, CVE-2024-23296, is patched for older versions of macOS and iOS. In March, Apple patched this vulnerability for more recent versions of iOS and macOS.


Safari 17.5 iOS 17.5 and iPadOS 17.5 iOS 16.7.8 and iPadOS 16.7.8 macOS Sonoma 14.5 macOS Ventura 13.6.7 macOS Monterey 12.7.5 watchOS 10.5 tvOS 17.5
CVE-2024-27834 [moderate] WebKit
The issue was addressed with improved checks.
An attacker with arbitrary read and write capability may be able to bypass Pointer Authentication
x x   x     x x
CVE-2024-27804 [important] AppleAVD
The issue was addressed with improved memory handling.
An app may be able to execute arbitrary code with kernel privileges
  x   x     x x
CVE-2024-27816 [moderate] RemoteViewServices
A logic issue was addressed with improved checks.
An attacker may be able to access user data
  x   x     x x
CVE-2024-27841 [important] AVEVideoEncoder
The issue was addressed with improved memory handling.
An app may be able to disclose kernel memory
  x   x        
CVE-2024-27839 [moderate] Find My
A privacy issue was addressed by moving sensitive data to a more secure location.
A malicious application may be able to determine a user's current location
CVE-2024-27818 [moderate] Kernel
The issue was addressed with improved memory handling.
An attacker may be able to cause unexpected app termination or arbitrary code execution
  x   x        
CVE-2023-42893 [moderate] Libsystem
A permissions issue was addressed by removing vulnerable code and adding additional checks.
An app may be able to access protected user data
  x   x        
CVE-2024-27810 [important] Maps
A path handling issue was addressed with improved validation.
An app may be able to read sensitive location information
  x   x     x x
CVE-2024-27852 [moderate] MarketplaceKit
A privacy issue was addressed with improved client ID handling for alternative app marketplaces.
A maliciously crafted webpage may be able to distribute a script that tracks users on other webpages
CVE-2024-27835 [moderate] Notes
This issue was addressed through improved state management.
An attacker with physical access to an iOS device may be able to access notes from the lock screen
CVE-2024-27803 [moderate] Screenshots
A permissions issue was addressed with improved validation.
An attacker with physical access may be able to share items from the lock screen
CVE-2024-27821 [moderate] Shortcuts
A path handling issue was addressed with improved validation.
A shortcut may output sensitive user data without consent
  x   x     x  
CVE-2024-27847 [important] Sync Services
This issue was addressed with improved checks
An app may be able to bypass Privacy preferences
  x   x        
CVE-2024-27796 [moderate] Voice Control
The issue was addressed with improved checks.
An attacker may be able to elevate privileges
  x   x        
CVE-2024-27789 [important] Foundation
A logic issue was addressed with improved checks.
An app may be able to access user-sensitive data
    x   x x    
CVE-2024-23296 [moderate] *** EXPLOITED *** RTKit
A memory corruption issue was addressed with improved validation.
An attacker with arbitrary kernel read and write capability may be able to bypass kernel memory protections. Apple is aware of a report that this issue may have been exploited.
    x   x      
CVE-2024-27837 [moderate] AppleMobileFileIntegrity
A downgrade issue was addressed with additional code-signing restrictions.
A local attacker may gain access to Keychain items
CVE-2024-27825 [moderate] AppleMobileFileIntegrity
A downgrade issue affecting Intel-based Mac computers was addressed with additional code-signing restrictions.
An app may be able to bypass certain Privacy preferences
CVE-2024-27829 [moderate] AppleVA
The issue was addressed with improved memory handling.
Processing a file may lead to unexpected app termination or arbitrary code execution
CVE-2024-23236 [moderate] CFNetwork
A correctness issue was addressed with improved checks.
An app may be able to read arbitrary files
CVE-2024-27827 [moderate] Finder
This issue was addressed through improved state management.
An app may be able to read arbitrary files
CVE-2024-27822 [important] PackageKit
A logic issue was addressed with improved restrictions.
An app may be able to gain root privileges
CVE-2024-27824 [moderate] PackageKit
This issue was addressed by removing the vulnerable code.
An app may be able to elevate privileges
CVE-2024-27813 [moderate] PrintCenter
The issue was addressed with improved checks.
An app may be able to execute arbitrary code out of its sandbox or with certain elevated privileges
CVE-2024-27843 [moderate] SharedFileList
A logic issue was addressed with improved checks.
An app may be able to elevate privileges
CVE-2024-27798 [moderate] StorageKit
An authorization issue was addressed with improved state management.
An attacker may be able to elevate privileges
CVE-2024-27842 [important] udf
The issue was addressed with improved checks.
An app may be able to execute arbitrary code with kernel privileges
CVE-2023-42861 [moderate] Login Window
A logic issue was addressed with improved state management.
An attacker with knowledge of a standard user's credentials can unlock another standard user's locked screen on the same Mac
CVE-2024-23229 [moderate] Find My
This issue was addressed with improved redaction of sensitive information.
A malicious application may be able to access Find My data


Johannes B. Ullrich, Ph.D. , Dean of Research, SANS.edu


Published: 2024-05-12

DNS Suffixes on Windows

I was asked if I could provide more details on the following sentence from my diary entry "nslookup's Debug Options":

     (notice that in my nslookup query, I terminated the FQDN with a dot: "example.com.", I do that to prevent Windows from adding suffixes)

A DNS suffix is a configuration of the Windows DNS client (locally, via DHCP, ...) to have it append suffixes when doing domain lookups.

For example, if a DNS suffix "local" is configured, then Windows' DNS client will not only do a DNS lookup for example.com, but also for example.com.local.
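As a rough Python model of this behavior (a sketch only: it ignores Windows' real suffix search list ordering and name devolution rules):

```python
def candidate_names(name: str, suffixes: list[str]) -> list[str]:
    """Simplified model of DNS suffix handling (hypothetical helper).

    A trailing dot marks the name as fully qualified: no suffixes
    are appended. Otherwise, suffixed variants are tried as well.
    """
    if name.endswith("."):
        return [name]
    return [name] + [f"{name}.{suffix}" for suffix in suffixes]

print(candidate_names("example.com", ["mylocalnetwork"]))
# looks up both example.com and example.com.mylocalnetwork
print(candidate_names("example.com.", ["mylocalnetwork"]))
# looks up only example.com.
```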

As an example, let me configure mylocalnetwork as a suffix on a Windows machine:

With DNS suffix mylocalnetwork configured, nslookup will use this suffix. For example, when I perform a lookup for "example.com", nslookup will also do a lookup for "example.com.mylocalnetwork".

I can show this with nslookup's debug option d2:

You can see in these screenshots DNS type A and AAAA resolutions for example.com.mylocalnetwork and example.com.

One of the ideas behind DNS suffixes is to reduce typing. If you have a NAS, for example, named mynas, you can just access it with https://mynas/login. No need to type the fully qualified domain name (FQDN) https://mynas.mylocalnetwork/login.

Notice that the suffix also applies for AAAA queries, while in the screenshots above I only configured it for IPv4. That's because the DNS suffix setting applies both to IPv4 and IPv6:

Before I show the results with "example.com." (notice the dot character at the end), let me show how I can summarize the lookups by grepping for "example" (findstr):

If I terminate my DNS query with a dot character (.), suffixes will not be appended:

Notice that there are no resolutions for mylocalnetwork in this last example. That's because the trailing dot instructs Windows' DNS client to start resolving from the DNS root zone.

A domain name consists of domain labels separated by dots:

If you are adding a trailing dot, you are actually adding an empty domain label:

The empty label represents the DNS root zone, and no suffixes are appended to the DNS root zone, as it is the top-level (root) DNS zone.
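Python's split makes the empty label visible:

```python
# Splitting on the dot separator shows the labels of a domain name.
print("example.com".split("."))   # ['example', 'com']
# A trailing dot produces an empty final label: the DNS root zone.
print("example.com.".split("."))  # ['example', 'com', '']
```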

A small tip: if you want to restrict nslookup's resolutions to A records, for example, there is an option for that.

If you use nslookup's help option /?, you will see that you can provide options, but the actual options are not listed:

To see the available options, start nslookup, and then type "?" at its prompt, like this:

Now you can see that option "type" allows you to specify which type of records to query. Here is an example for A records:


Didier Stevens
Senior handler


Published: 2024-05-09

Analyzing PDF Streams

Occasionally, Xavier and Jim will relay specific questions from their students about my tools when they teach FOR610: Reverse-Engineering Malware.

Recently, a student wanted to know if my pdf-parser.py tool can extract all the PDF streams with a single command.

Since version 0.7.9, it can.

A stream is (binary) data, optionally part of an object, that can be compressed or otherwise transformed. To view a single stream with pdf-parser, one selects the object of interest and uses option -f to apply the filters (like zlib decompression) to the stream:
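For illustration: the most common stream filter, FlateDecode, is plain zlib compression, so the decompression that option -f performs on such streams can be sketched like this:

```python
import zlib

# Content stream data as it might appear inside a PDF object
# (a tiny, hypothetical page content stream).
content = b"BT /F1 12 Tf 72 712 Td (Hello) Tj ET"

compressed = zlib.compress(content)  # what is stored in the PDF file
recovered = zlib.decompress(compressed)  # what applying the filter recovers
print(recovered)
```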


I added a feature that is present in several of my tools, like oledump.py and zipdump.py: extract all of the "stored items" into a single JSON document.

When you use pdf-parser's option -j (--jsonoutput), all objects with a stream will have their raw data (i.e., unfiltered) extracted and put into a JSON document that is sent to stdout:

To get the filtered (i.e., decompressed) data, use option -f together with option -j:
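I have not reproduced the exact JSON schema here; assuming a structure along the lines of {"items": [{"name": ..., "content": <base64>}]} (verify against the output of your pdf-parser version), a consumer could look like this sketch:

```python
import base64
import json

def iter_streams(json_text: str):
    """Yield (name, raw bytes) pairs from the JSON document.

    Hypothetical schema: adjust the field names to match the actual
    output of your pdf-parser version.
    """
    for item in json.loads(json_text)["items"]:
        yield item["name"], base64.b64decode(item["content"])

# Demo with an inline sample document (object 143, JPEG magic bytes)
sample = json.dumps(
    {"items": [{"name": "143", "content": base64.b64encode(b"\xff\xd8\xff").decode()}]}
)
for name, data in iter_streams(sample):
    print(name, len(data), "bytes")
```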

What can you do with this JSON data? It depends on what your goals are. I have several tools that can take this JSON data as input, like file-magic.py and strings.py.

Here I use file-magic.py to identify the type of each raw data stream:

From this we can learn, for example, that object 143's stream contains a JPEG image.

And here I use file-magic.py to identify the type of each filtered data stream:

From this we can learn, for example, that object 881's stream contains a compressed TrueType Font file.

What if you want to write all stream data to disk, in individual files, for further analysis (that's what the student wanted to do, I guess)?

Then you can use my tool myjson-filter.py. It's a tool designed to filter JSON data produced by my tools, but it can also write items to disk.

When you use option -l, this tool will just produce a listing of the items contained in the JSON data:

And you can use option -W to write the streams to disk. -W takes a value that specifies what naming convention must be used to write the files to disk. vir will write items to disk with their sanitized name and extension .vir:

hashvir will write items to disk with their sha256 value as name and extension .vir:
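The hashvir convention is easy to reproduce; here is a sketch of what I assume it does (a hypothetical helper, not the tool's actual code):

```python
import hashlib
import tempfile
from pathlib import Path

def write_hashvir(data: bytes, directory: str) -> Path:
    """Write data to <sha256 hex>.vir, mirroring the hashvir naming convention."""
    path = Path(directory) / (hashlib.sha256(data).hexdigest() + ".vir")
    path.write_bytes(data)
    return path

# Demo: write a sample stream to a temporary directory
outdir = tempfile.mkdtemp()
written = write_hashvir(b"%PDF-1.7 stream data", outdir)
print(written.name)
```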

Didier Stevens
Senior handler
Microsoft MVP


Published: 2024-05-08

Analyzing Synology Disks on Linux

Synology NAS solutions are popular devices. They are also used in many organizations. Their product range goes from small boxes with two disks (I’m not sure they still sell a single-disk enclosure today) up to monsters, rackable with plenty of disks. They offer multiple disk management options but rely on a lot of open-source software (like most appliances). For example, there are no expensive hardware RAID controllers in the box. They use the good old “MD” (“multiple devices”) technology, managed with the well-known mdadm tool[1]. Synology NAS run a Linux distribution called DSM. This operating system has plenty of third-party tools but lacks pure forensics tools.

In a recent investigation, I had to investigate a NAS that was involved in a ransomware attack. Many files (backups) were deleted. The attacker just deleted some shared folders. The device had two drives configured in RAID0 (not the best solution, I know, but they lacked storage capacity). The idea was to mount the file system (or at least have the block device) on a Linux host and run forensic tools, for example, photorec.

In such a situation, the biggest challenge will be to connect all the drives to the analysis host! Here, I had only two drives but imagine that you are facing a bigger model with 5+ disks. In my case, I used two USB-C/SATA adapters to connect the drives. Besides the software RAID, Synology volumes also rely on LVM2 (“Logical Volume Manager”)[2]. In most distributions, the packages mdadm and lvm2 are available (for example on SIFT Workstation). Otherwise, just install them:

# apt install mdadm lvm2

Once you connect the disks to the analysis host (tip: label them so you can put them back in the right order), verify that they are properly detected:

# lsblk
sda       8:0    0 465.8G  0 disk
|-sda1    8:1    0 464.8G  0 part  /
|-sda2    8:2    0     1K  0 part
`-sda5    8:5    0   975M  0 part  [SWAP]
sdb       8:16   0   3.6T  0 disk
|-sdb1    8:17   0     8G  0 part
|-sdb2    8:18   0     2G  0 part
`-sdb3    8:19   0   3.6T  0 part
sdc       8:32   0   3.6T  0 disk
|-sdc1    8:33   0   2.4G  0 part
|-sdc2    8:34   0     2G  0 part
`-sdc3    8:35   0   3.6T  0 part
sr0      11:0    1  1024M  0 rom

"sdb3" and "sdc3" are the NAS partitions used to store data (2 x 4TB in RAID0). The good news, the kernel will detect that these disks are part of a software RAID! You just need to rescan them and "re-assemble" the RAID:

# mdadm --assemble --readonly --scan --force --run 

Then, your data should be available via a /dev/md? device:

# cat /proc/mdstat
Personalities : [raid0]
md0 : active (read-only) raid0 sdb3[0] sdc3[1]
      7792588416 blocks super 1.2 64k chunks

unused devices: <none>

The next step is to detect how data are managed by the NAS. Synology provides a technology called SHR[3] that uses LVM:

# lvdisplay
  WARNING: PV /dev/md0 in VG vg1 is using an old PV header, modify the VG to update.
  --- Logical volume ---
  LV Path                /dev/vg1/syno_vg_reserved_area
  LV Name                syno_vg_reserved_area
  VG Name                vg1
  LV UUID                08g9nN-Etde-JFN9-tn3D-JPHS-pyoC-LkVZAI
  LV Write Access        read/write
  LV Creation host, time ,
  LV Status              NOT available
  LV Size                12.00 MiB
  Current LE             3
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto

  --- Logical volume ---
  LV Path                /dev/vg1/volume_1
  LV Name                volume_1
  VG Name                vg1
  LV UUID                fgjC0Y-mvx5-J5Qd-Us2k-Ppaz-KG5X-tgLxaX
  LV Write Access        read/write
  LV Creation host, time ,
  LV Status              NOT available
  LV Size                <7.26 TiB
  Current LE             1902336
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto

You can see that the NAS has only one volume created ("volume_1" is the default name in DSM).

From now on, you can use /dev/vg1/volume_1 in your investigations (if lvdisplay reports the LV status as "NOT available", activate the volume group first with "vgchange -ay vg1"). Mount it, scan it, image it, etc...

[1] https://en.wikipedia.org/wiki/Mdadm
[2] https://en.wikipedia.org/wiki/Logical_Volume_Manager_(Linux)
[3] https://kb.synology.com/en-br/DSM/tutorial/What_is_Synology_Hybrid_RAID_SHR

Xavier Mertens (@xme)
Senior ISC Handler - Freelance Cyber Security Consultant


Published: 2024-05-06

Detecting XFinity/Comcast DNS Spoofing

ISPs have a history of intercepting DNS. Often, DNS interception is done as part of a "value add" feature to block access to known malicious websites. Sometimes, users are directed to advertisements if they attempt to access a site that doesn't exist. Two techniques are commonly used for DNS spoofing/interception:

  1. The ISP provides a recommended DNS server. This DNS server will filter requests to known malicious sites.
  2. The ISP intercepts all DNS requests, not just requests directed at the ISP's DNS server.

The first method is what I would consider a "recommended" or "best practice" method. The customer can use the ISP's DNS server, but traffic is left untouched if a customer selects a different recursive resolver. The problem with this approach is that malware sometimes alters the user's DNS settings.

Comcast, as part of its "Business Class" offer, provides a tool called "Security Edge". It is typically included for free as part of the service. Security Edge is supposed to interface with the customer's modem but can only do so for specific configurations. Part of the service is provided by DNS interception. Even if "Security Edge" is disabled in the customer's dashboard, DNS interception may still be active.

One issue with any filtering based on blocklists is false positives. In some cases, what constitutes a "malicious" hostname may not even be well defined. I could not find a definition on Comcast's website. But Bleeping Computer (www.bleepingcomputer.com) recently ended up on Comcast's "naughty list". I know all too well that it is easy for a website that covers security topics to end up on these lists. The Internet Storm Center website has been on lists like this before. Usually, sloppy signature-based checks will flag a site as malicious. An article may discuss a specific attack and quote strings triggering these signatures.

Comcast offers recursive resolvers to its customers:,, 2001:558:feed:1 and 2001:558:feed:2. There are advantages to using your ISP's DNS servers. They are often faster as they are physically closer to your network, and you profit from responses cached by other users. My internal resolver is configured as a forwarding resolver, spreading queries among different well-performing resolvers like Quad9, Cloudflare and Google.

So what happened to bleepingcomputer.com? When I wasn't able to resolve bleepingcomputer.com, I checked my DNS logs, and this entry stuck out:

broken trust chain resolving 'bleepingcomputer.com/A/IN': 

My resolver verifies DNSSEC. Suddenly, I could not verify DNSSEC, which is a good indication that either DNSSEC was misconfigured or someone was modifying DNS responses. Note that the response appeared to come from Google's name server (

My first step in debugging this problem was dnsviz.net, a website operated by Sandia National Laboratory. The site does a good job of visualizing DNSSEC and identifying configuration issues. Bleepingcomputer.com looked fine. Bleepingcomputer didn't use DNSSEC. So why the error? There was another error in my resolver's logs that shed some light on the issue:

no valid RRSIG resolving 'bleepingcomputer.com/DS/IN':

DNSSEC has to establish somehow if a particular site supports DNSSEC or not. The parent zone should offer an "NSEC3" record to identify which zones are signed and which are not. DS records, also offered by the parent zone, verify the keys you may receive for a zone. If DNS is intercepted, the requests for these records may fail, indicating that something odd is happening.

So, someone was "playing" with DNS. And it affected various DNS servers I tried, not just Comcast or Google. Using "dig" to query the name servers directly, and skipping DNSSEC, I received a response: > 35148 2/0/1 www.bleepingcomputer.com. A, www.bleepingcomputer.com. A (85)

Usually, www.bleepingcomputer.com resolved to:

% dig +short www.bleepingcomputer.com

It took a bit of convincing, but I was able to pull up the web page at the wrong IP address:

screen shot of Comcast block page.

The problem with these warning pages is that you usually never see them. Even if you resolve the IP address, TLS will break the connection, and many sites employ strict transport security. As part of my Comcast business account, I can "brand" the page, but by default, it is hard to tell that this page was delivered by Comcast.

But how do we know if someone is interfering with DNS traffic? A simple check I am employing is to look for the DNS timing and compare the TTL values for different name servers.

(1) Check timing

Send the same query to multiple public recursive DNS servers. For example:

% dig www.bleepingcomputer.com @

; <<>> DiG 9.10.6 <<>> www.bleepingcomputer.com @
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 8432
;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 1

; EDNS: version: 0, flags:; udp: 512
;www.bleepingcomputer.com.    IN    A

www.bleepingcomputer.com. 89    IN    A
www.bleepingcomputer.com. 89    IN    A
www.bleepingcomputer.com. 89    IN    A

;; Query time: 59 msec
;; WHEN: Tue May 07 20:00:05 EDT 2024
;; MSG SIZE  rcvd: 101

Dig includes the "Query time" in its output. In this case, it was 59 msec. We expect a speedy time like this for Comcast's DNS server while connected to Comcast's network. But let's compare this to other servers: 59 msec 59 msec 64 msec 68 msec 69 msec

The results are very consistent. In particular, the last one is interesting. This server is located in China. 

(2) check TTLs

A recursive resolver will add a response it receives from an authoritative DNS server to its cache. The TTL for records pulled from the cache will decrease with the time the response sits in the resolver's cache. If all responses come from the same resolver, the TTL should decrement consistently. This test is a bit less telling. Often, several servers are used, and with anycast, it is not always easy to tell which server the response comes from. These servers do not always have a consistent cache.
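Neither check proves interception by itself, but both are easy to automate. Here is a sketch of the two heuristics in Python (the threshold and the sample values are my own assumptions, not measured constants):

```python
def timing_suspicious(latencies_ms: list[float], max_spread_ms: float = 15.0) -> bool:
    """Geographically diverse resolvers all answering within a few ms of
    each other suggests a local device is answering for all of them."""
    return max(latencies_ms) - min(latencies_ms) <= max_spread_ms

def ttl_consistent(ttls: list[int]) -> bool:
    """Repeated queries to one resolver should show non-increasing TTLs
    as the record ages in that resolver's cache."""
    return all(later <= earlier for earlier, later in zip(ttls, ttls[1:]))

# Five resolvers, including one overseas, all answering in ~60 ms:
print(timing_suspicious([59, 59, 64, 68, 69]))  # True - suspicious
# TTLs from three queries in a row to the same resolver:
print(ttl_consistent([89, 85, 80]))             # True - normal cache behavior
```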

Final Words

DNS interception, even if well-meaning, does undermine some of the basic trust relationships on the internet. Even if it is used to block users from malicious sites, it needs to be properly declared to the user, and switches to turn it off must actually work. This could be a particular problem if queries to other DNS filtering services are intercepted. I have yet to test this for Comcast and, for example, OpenDNS.




Johannes B. Ullrich, Ph.D. , Dean of Research, SANS.edu


Published: 2024-05-05

nslookup's Debug Options

A friend was having unexpected results with DNS queries on a Windows machine. I told him to use nslookup's debug options.

When you execute a simple DNS query like "nslookup example.com.", you get an answer like this (notice that in my nslookup query, I terminated the FQDN with a dot: "example.com.", I do that to prevent Windows from adding suffixes):

You see the result of a reverse DNS lookup ( is dns.google) and you get 2 IP addresses for example.com in your answer: an IPv6 address and an IPv4 address.

If my friend would have been able to run packet capture on the machine, he would have seen 3 DNS queries and answers:

A PTR query to do a reverse DNS lookup for, an A query to lookup IPv4 addresses for example.com, and an AAAA query to lookup IPv6 addresses for example.com.
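The A-plus-AAAA behavior can also be reproduced from Python: getaddrinfo goes through the OS resolver, so the same suffix rules apply. I use "localhost" here so the sketch runs offline; substitute "example.com." to reproduce the lookups described above:

```python
import socket

# Ask for both IPv4 (A-style) and IPv6 (AAAA-style) addresses.
# Replace "localhost" with "example.com." to see real DNS answers;
# the trailing dot is passed through to the resolver as-is.
for family, _, _, _, sockaddr in socket.getaddrinfo("localhost", None, proto=socket.IPPROTO_TCP):
    label = "AAAA" if family == socket.AF_INET6 else "A"
    print(label, sockaddr[0])
```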

One can use nslookup's debug options to obtain equivalent information, without doing a packet capture.

Debug option -d displays extra information for each DNS response packet:

Here is nslookup's parsed DNS response packet for the PTR query:

Here is Wireshark's dissection of this packet:

You can see that the debug output contains the same packet information as Wireshark's, but presented in another form.

The same applies for the A query:

And the AAAA query:

If you also want to see the DNS query packets, you can use debug option -d2:

Besides the parsed DNS query, you now also see the length in bytes of each DNS packet (the UDP payload).

Here is the A query:

And here is the AAAA query:

Didier Stevens
Senior handler


Published: 2024-05-02

Scans Probing for LB-Link and Vinga WR-AC1200 routers CVE-2023-24796

Before diving into the vulnerability, a bit about the affected devices. LB-Link, the maker of the devices affected by this vulnerability, produces various wireless equipment that is sometimes sold under different brands and labels. This makes it difficult to identify affected devices. These devices are often low-cost "no name" solutions or, in some cases, may even be embedded, which makes it even more difficult to find firmware updates.

Before buying any IoT device, WiFi router, or similar piece of equipment, please make sure the vendor does:

  1. Offer firmware updates for download from an easy-to-find location.
  2. Provide an "end of life" policy stating how long a particular device will receive updates.

Alternatively, you may want to verify if the device can be "re-flashed" using an open source firmware.

But let us go back to this vulnerability. There are two URLs affected, one of which showed up in our "First Seen URLs":


The second one has been used more in the past; the first is relatively new in our logs. The graph below shows that "set_LimitClient.cfg" is much more popular. We only saw a significant number of scans for "sysTools" on May 1st.

The full requests we are seeing:

POST /goform/set_LimitClient_cfg HTTP/1.1
Cookie: user=admin

And yes, the vulnerability revolves around the "user=admin" cookie and a command injection in the password parameter. This is too stupid to waste any more time on, but it is common enough to just give up and call it a day. The NVD entry for the vulnerability was updated last week, adding an older PoC exploit to it. Maybe that got some kids interested in this vulnerability again.
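If you want to hunt for these attempts in your own web server logs, the pattern is distinctive. A minimal Python check (matching on the "sysTools" path fragment is an assumption based on the URL names above; adjust the patterns to what you actually see in your logs):

```python
def looks_like_cve_2023_24796(method: str, path: str, cookie: str) -> bool:
    """Flag requests matching the scan pattern described above: a POST to
    one of the reported endpoints carrying the hardcoded admin cookie."""
    targets = ("set_LimitClient_cfg", "sysTools")  # path fragments (assumed)
    return method == "POST" and any(t in path for t in targets) and "user=admin" in cookie

# The request shown above would be flagged:
print(looks_like_cve_2023_24796("POST", "/goform/set_LimitClient_cfg", "user=admin"))  # True
```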

Johannes B. Ullrich, Ph.D. , Dean of Research, SANS.edu