Podcast Detail

SANS Stormcast Friday, May 1st, 2026: Libredtail; FreeBSD dhclient vuln; Linux Copy-Fail; @sans_edu Detecting AI Pickling

If you are not able to play the podcast using the player below: Use this direct link to the audio file: https://traffic.libsyn.com/securitypodcast/9914.mp3


My Next Class

Upcoming classes Johannes is teaching for SANS:
Application Security: Securing Web Apps, APIs, and Microservices | San Diego | May 11th - May 16th 2026
Network Monitoring and Threat Detection In-Depth | Online (Arabian Standard Time) | Jun 20th - Jun 25th 2026
Network Monitoring and Threat Detection In-Depth | Riyadh | Jun 20th - Jun 25th 2026
Application Security: Securing Web Apps, APIs, and Microservices | Washington | Jul 13th - Jul 18th 2026
Application Security: Securing Web Apps, APIs, and Microservices | Online (British Summer Time) | Jul 27th - Aug 1st 2026
Application Security: Securing Web Apps, APIs, and Microservices | Las Vegas | Sep 21st - Sep 26th 2026

Podcast Transcript

 Hello and welcome to the Friday, May 1st, 2026 edition of the SANS Internet Storm Center's Stormcast. My name is Johannes Ullrich, recording today from Jacksonville, Florida. This episode is brought to you by the SANS.edu Graduate Certificate Program in Cybersecurity Leadership.
 Well, and we do have another diary by one of our undergraduate interns: James Roberts is writing about Redtail. Redtail is best known as malware that installs cryptocoin miners via SSH: typical password brute forcing, trying weak passwords to figure out what gets them in, then taking over servers and other devices to install a crypto miner. But what James covers is not so much the SSH part of this malware. Redtail is also attempting to exploit web application vulnerabilities, in particular some older ones like the PHPUnit flaw, or older PHP directory traversal and remote code execution flaws that apply when PHP runs as a CGI on Windows. A lot of these flaws are older, or often only affect home Windows systems, experimental systems, dev systems and the like. That may in part be the strategy here: they also go after these older vulnerabilities because those typically exist on less monitored systems, so their malware is more likely to survive. The problem with this, of course, is that they are probably not the first ones to find these vulnerable systems. James summarizes some of his findings: where these attacks are coming from and which particular vulnerabilities they are exploiting. If you do find a cryptocoin miner of any kind on your system, remember that it typically indicates you have an easily exploitable vulnerability, and it is hardly ever the only malware you will find. Quite often you have multiple cryptocoin miners fighting for dominance, but of course there may be other, less visible things running as well. Then I want to point out a vulnerability that I
 think may not have gotten quite the attention it deserves, and that is a vulnerability in dhclient for FreeBSD. dhclient is the DHCP client in FreeBSD. It is a default component, and this vulnerability allows remote code execution. As part of DHCP, a server can transmit a boot file name that is used to bootstrap the operating system, and that value is then written to a lease file. The next time the system starts, before it actually gets a DHCP lease, it can use that lease file to get networking started. The problem is that this boot file name is not escaped properly, so commands can be injected into the lease file that will execute once that file is read. This vulnerability, of course, is only exploitable if you are on the same subnet, because you have to be able to spoof the DHCP server. But once you are on the same subnet, it is not all that difficult to exploit. One reason I think this is more severe than it may look: FreeBSD is not one of the most used operating systems out there, but it is often used in, for example, firewall systems, routers and the like, and those devices are of course often quite vulnerable to spoofed DHCP messages. That is why I think this deserves some attention, and as usual, check once in a while that you are still running the latest firmware for your favorite firewall or router. Well, let's move on to
 a vulnerability a lot of people are talking about: the Copy-Fail vulnerability in Linux. This is a recently discovered privilege escalation flaw, and what makes it so interesting is that, first, it is easy and reliable to exploit, and second, it affects all recent Linux kernels and, with that, all recently published Linux distributions. On the other hand, it is just a privilege escalation vulnerability, and privilege escalation vulnerabilities tend to always be around: any pentester, and in particular more advanced attackers, typically has a couple on hand. Many configuration errors made on systems can also lead to privilege escalation. So I would not overrate this vulnerability. Where it does become a much bigger deal is if you are, for example, running a shared hosting environment and have unrelated or, worse, competing users on one system; then privilege escalation may matter a lot more. There is a working exploit out that takes advantage of this vulnerability: XINT has published details about the vulnerability, including a proof of concept that works quite reliably. At this point I have not seen a lot of patches from major distributions yet, but I assume they will be coming shortly. A patch has been committed to the Linux kernel, but of course not everybody compiles their own kernel, and those who don't definitely should not start now; wait a couple of days, and hopefully it will not be much longer before updated kernels are offered for your favorite Linux distribution. The root cause is a problem where an attacker is able to overwrite some cryptographic results. The vulnerable code is in the crypto primitives for Linux, which are of course used to do things like verify where requests come from and make authentication and access control decisions; that is how this becomes a privilege escalation vulnerability. Well, today is Friday, and on Fridays we often have SANS.edu students talk about their research. With me today I have Brian. Brian, could you
 introduce yourself please? Brian, my name is Brian Nice.
 I'm a technical leader who has been working in the healthcare and life sciences space for over a decade, focused on security engineering and, with respect to AI, on how innovation is brought into those kinds of organizations and how to do it safely without creating adverse events that may impact patients. Can you tell us a little bit about what your paper was about? So my paper
 was focused on how organizations can pull AI models from model hubs in order to reuse them internally, where data science teams are trying to innovate, for example on interventions or treatments for patients. One of the problematic aspects, as with any kind of open source, is the supply chain: an attacker may have pushed code or something malicious into an AI model that an organization, such as a healthcare organization, inadvertently brought in, exposing a vulnerability that may cause an adverse event. One of the other reasons that prompted the work is that Python is the predominantly used language in this space, and it is also known for having that kind of attack surface. After all the supply chain events we have had recently, in particular around Python, this is of course a current topic. You, dealing with healthcare, are of course in an especially challenging environment. Why Python? Why did you select Python here as
 the focus of your research? Well, for one, everything has a trade-off, but Python in general is open source, it is an interpreted language, and there are a lot of libraries produced for it. It is also one of the standards predominantly used by data scientists who build AI models, using the pickle-based format. The pickle-based format has some vulnerabilities around what are called opcodes: an attacker can hook in malicious code, like command and control, or fire off something else. So it is not necessarily about the model weights themselves, but about the way some backdoor process can get fired off, something that may not be detectable. One of the reasons is that pickle files use what is called a pickle virtual machine, and as a file is instantiated, it starts to fire all of these things off. So even if you inspect a pickle-based file, without really digging into it from a forensics perspective, you may actually have malicious code inside it that runs as soon as you spin the model up as a service to do inference. So in some ways, it really comes down to the good old deserialization issue: an object isn't just data, it's also code. Similarly for pickle files. Now, I have seen that some of these machine learning frameworks let you import a model using a weights-only option. Does this help? Is it sufficient to
 prevent any of these attacks? Well, that is exactly the issue: the weights are still stored in a pickle-based file. Pickle was originally made to help make artifacts reproducible and runnable, and the model weights are packaged inside it, which also compresses them down. So even if you separate the weights from the algorithm, the code, and the configuration, you still have to consider how the weights themselves are stored, and pickle-based formats have hooks where code can be placed that gets fired off. So is this why we sometimes see vulnerabilities in libraries like PyTorch, where even if you use weights only, code execution is still possible? And are there also models where weights only simply won't work, because they legitimately use some of the code? Attackers are taking advantage of the fact that they can inject code that runs because, again, deserialization goes through the pickle virtual machine, which fires things off. So you have to treat pickle-based files carefully: even though the format was intended as a way of storing data, or the weights of a model, it carries code as well. So did your research look into detecting malicious code in pickle files, or what aspect of this problem did you tackle? So what I was more
 interested in was: OK, the whole ecosystem is there and we are aware of the problem, but how well do scanning tools actually perform from a static code analysis perspective? The idea is that if I am bringing a model into an organization, I want to run a sort of certification process. What are the current scanning tools, and what is their performance in terms of true positives versus false positives, and true negatives versus false negatives, across these tools from a static perspective? I want to be able to interrogate a model without instantiating it, and see whether, from a gate control perspective, there is a way to detect and prevent a model with malicious code from coming into an organization. Well, and of course the obvious next question is: did it work, and what did you find? Well, across the four tools, at least in my experiment, there were divergent results, so there was inconsistency. The tools I looked at were Fickling, ModelScan, pickletools, and a fourth one I am not recalling right now, and they had divergent results. Some of them flagged false positives on models that were safe, because I created baseline models; and others let the poisoned models I generated, after I altered them, pass as clean: false negatives.
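As a rough illustration of the kind of static check these scanners perform, here is a minimal sketch, using only Python's standard-library pickletools, that walks a pickle's opcode stream without ever deserializing it. This is not any of the tools Brian tested, just a toy example of the approach, and the opcode list is a simplification.

```python
import pickle
import pickletools

# Opcodes that can lead to code execution when a pickle is loaded:
# GLOBAL/STACK_GLOBAL resolve arbitrary importable callables, and
# REDUCE/INST/OBJ/NEWOBJ invoke them. (Simplified list.)
SUSPICIOUS = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ", "NEWOBJ_EX"}

def scan_pickle(data: bytes) -> list[str]:
    """Walk the opcode stream statically (nothing is deserialized)
    and report any opcodes that could trigger code execution."""
    findings = []
    for opcode, arg, pos in pickletools.genops(data):
        if opcode.name in SUSPICIOUS:
            findings.append(f"{opcode.name} at offset {pos}")
    return findings

class Poisoned:
    # __reduce__ is the classic hook: unpickling this object
    # calls os.system with an attacker-chosen argument.
    def __reduce__(self):
        import os
        return (os.system, ("echo pwned",))

benign = pickle.dumps({"weights": [0.1, 0.2, 0.3]})
poisoned = pickle.dumps(Poisoned())

print(scan_pickle(benign))    # []
print(scan_pickle(poisoned))  # flags STACK_GLOBAL and REDUCE
```

A check this naive also shows where false positives come from: many legitimate model files pickle framework classes, which use the same STACK_GLOBAL/REDUCE machinery, so a flagged opcode is evidence, not proof, of malice.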
 So the thing to be mindful of is that you can't just implement a tool.
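One complementary gate control, separate from the scanners discussed here, is the restricted-Unpickler pattern from the Python documentation: override find_class to control which globals a pickle is allowed to resolve. A minimal sketch follows; it blocks all globals, which only suits pure-data pickles.

```python
import io
import pickle

class RestrictedUnpickler(pickle.Unpickler):
    """Refuse to resolve any global. Pickles that try to import a
    callable (the usual code-execution vector) fail to load."""
    def find_class(self, module, name):
        raise pickle.UnpicklingError(f"global blocked: {module}.{name}")

def safe_loads(data: bytes):
    return RestrictedUnpickler(io.BytesIO(data)).load()

class Poisoned:
    def __reduce__(self):
        import os
        return (os.system, ("echo pwned",))

# Pure data loads fine; the poisoned pickle is rejected before
# os.system can even be resolved, let alone called.
print(safe_loads(pickle.dumps({"weights": [0.1, 0.2]})))
try:
    safe_loads(pickle.dumps(Poisoned()))
except pickle.UnpicklingError as e:
    print("rejected:", e)
```

In practice you would allowlist specific safe modules rather than blocking everything; real model files usually need framework classes, which is exactly why weights-only loaders and scanners exist in the first place.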
 You have to verify that it is actually catching those things when you scan statically. It also suggests that you may have to do multi-tool verification, to triangulate and make sure you have good coverage. So it sounds like you implemented a little bit of the VirusTotal approach by combining the output of multiple scanning tools. Yeah. Are you implementing any of this
 in your day job right now? Yes. It is a reproducible framework for some of the clients we work with in public health and healthcare, especially given the rise of, or the desire for, responsible AI; I think of this as one of the more concrete controls associated with that. Again, in healthcare, the thing we want to be mindful of is: yes, we want innovation, because it ultimately drives better outcomes for people, but when we leverage innovation, we have to make sure the right kinds of controls are in place, so that an adverse event prompted by an attacker does not actually manifest. Excellent talking to you,
 Brian. And thanks for joining me here. We will add a link to the paper to the show notes; please refer to the full paper for all the details Brian discussed. Thanks, and talk to you again on Monday. Bye.