Podcast Detail

SANS Stormcast Friday, May 8th, 2026: AI Generated Dashboard; Ivanti Patches; Redis Vuln; @sans_edu Marcio Enriquez

If you are not able to play the podcast using the player below: Use this direct link to the audio file: https://traffic.libsyn.com/securitypodcast/9924.mp3


My Next Class

Click HERE to learn more about classes Johannes is teaching for SANS

An Adaptive Cyber Analytics UI for Web Honeypot Logs
https://isc.sans.edu/diary/An%20Adaptive%20Cyber%20Analytics%20UI%20for%20Web%20Honeypot%20Logs%20%5BGuest%20Diary%5D/32962

Ivanti May Patchday
https://hub.ivanti.com/s/article/May-2026-Security-Advisory-Ivanti-Endpoint-Manager-Mobile-EPMM-Multiple-CVEs

Redis Security advisory: [CVE-2026-23479] [CVE-2026-25243] [CVE-2026-25588] [CVE-2026-25589] [CVE-2026-23631]
https://redis.io/blog/security-advisory-cve202623479-cve202625243-cve-2026-25588-cve202625589-cve-2026-23631/

@sans_edu research paper: Marcio Enriquez
[link will be added once the paper has been published]

Podcast Transcript

 Hello and welcome to the Friday, May 8, 2026 edition of
 the SANS Internet Storm Center's Stormcast. My name is
 Johannes Ullrich, recording today from Jacksonville,
 Florida. And this episode is brought to you by the SANS.edu
 Graduate Certificate Program in Cybersecurity Engineering. In
 diaries today, we had another diary by one of our
 undergraduate interns; in this case, Eric Roldan wrote about
 how to create better analytics UIs to analyze
 honeypot logs. And of course, this can be adapted to other
 types of logs too. A problem with all these dashboards and
 such is often that they're too static. You may have created
 them years ago or such for a particular problem you had
 back then, but they often aren't sort of adjusted very
 frequently to really sort of point out what's new and
 what's interesting in the data. So what Eric did here is
 Eric wrote a Python script that will first of all
 summarize all the data and create a summarized version in
 a standardized format. And then, well, Eric, as the kids
 do it these days, sent AI after the data and had AI,
 Claude in this case, create a dashboard to display the data.
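A minimal sketch of that first summarization step, assuming a hypothetical list-of-dicts log format with src_ip and path fields; the diary's actual field names and script are not shown here:

```python
import json
from collections import Counter

# Hypothetical summarizer in the spirit of the diary's approach: reduce raw
# web honeypot log records to a compact, standardized summary that an LLM
# (Claude, in this case) can turn into a dashboard. Field names are assumed.
def summarize(records):
    ips = Counter(r["src_ip"] for r in records)
    paths = Counter(r["path"] for r in records)
    return {
        "total_requests": len(records),
        "unique_sources": len(ips),
        "top_sources": ips.most_common(5),
        "top_paths": paths.most_common(5),
    }

if __name__ == "__main__":
    # illustrative records; real honeypot logs would be read from disk
    sample = [
        {"src_ip": "203.0.113.5", "path": "/cgi-bin/luci"},
        {"src_ip": "203.0.113.5", "path": "/boaform/admin/formLogin"},
        {"src_ip": "198.51.100.7", "path": "/cgi-bin/luci"},
    ]
    print(json.dumps(summarize(sample), indent=2))
```

The point of the standardized summary is that the LLM prompt stays small and stable even as the raw logs grow.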
 And I'm actually quite impressed with the different
 dashboards that Claude came up with here. So there are some
 generic security dashboards that have things that you
 would sort of expect. Things like, you know, how many ports
 scanned, how many requests we had, different time series and
 the like. But then there are also sort of some things like,
 you know, what are, for example, the big actors and
 then what are those actors doing? So essentially a
 dashboard that will just summarize the traffic for a
 particular actor or for a particular vulnerability
 that's being exploited. So yes, lots of classic
 dashboards have these kind of features, but of course having
 the simplicity of creating, essentially vibe coding these
 dashboards, I think makes a lot of sense. And like I said,
 kind of impressed by the output that was generated
 using this particular approach. Let's continue with
 some vulnerabilities. Ivanti had its monthly patch day
 for May, and it fixed a number of vulnerabilities. One I want
 to point out here is a vulnerability that's
 already being exploited in the wild. However, in order to exploit
 it, an attacker must first have admin credentials. And as
 Ivanti points out here, they patched some
 vulnerabilities in their January update that could be used to gain
 access to those credentials. And back then they rightfully
 recommended that you rotate your credentials. So if
 you haven't done that yet, well, this puts you at more
 risk of more persistent exploitation of your devices.
 So two things you should do here. Number one, apply the
 patches. And then of course, double check that you recently
 rotated your credentials on your Ivanti endpoint manager
 mobile devices. That's what's being affected here. Ivanti
 also published a blog post that sort of went with this
 security bulletin and stated that they're now, like
 everybody else, using more AI tools in order to find new
 vulnerabilities, and they're having some initial success.
 And basically they're just highlighting that yes, in the
 next couple of months, you may see more vulnerabilities being
 patched. But in some ways, it's actually a good thing
 that Ivanti is using these tools and not just waiting
 for researchers to report the vulnerabilities
 to them, but instead is proactively going out and
 trying to find as many vulnerabilities as they can
 in-house. I talked earlier this week about Google no longer
 really disclosing all the vulnerabilities. Sounds like
 Ivanti is going to disclose all vulnerabilities they find,
 even the ones they find internally using these AI
 tools. Well, users of Redis, be aware there is a patch for
 you. And this patch also fixes a remote code execution
 vulnerability. So definitely something that you want to pay
 attention to. You must have credentials in order to
 exploit it. So it's not a pre-authentication vulnerability.
 Still, in particular with Redis, I often see it exposed
 and sort of credentials being given to various users. So
 something that you should pay attention to as a result. They
 also have a couple of other recommendations which I think
 are really important for many of these NoSQL databases:
 not allowing direct network access, and using the strongest
 authentication method that you can for the particular
 database. So also read these footnotes to the
 advisory in addition to patching the vulnerability.
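As a sketch of those recommendations, here are illustrative redis.conf and ACL directives; the values are placeholders, not taken from the advisory:

```conf
# redis.conf: don't expose Redis directly to the network
bind 127.0.0.1 -::1
protected-mode yes
port 6379

# prefer per-user ACLs over a single shared password
aclfile /etc/redis/users.acl

# users.acl (separate file): disable the default user and
# grant a least-privilege user only the commands and keys it needs
# user default off
# user appuser on >replace-with-strong-password ~app:* +get +set +del
```

The key pattern (`~app:*`) and command allow-list keep a compromised credential from reaching the rest of the keyspace.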
 Well, and it's Friday again and we do have another
 SANS.edu student to talk about a research project. Marcio,
 could you introduce yourself please? I'm Marcio Enriquez
 and currently a lead service manager over exposure
 management at a Fortune 500 company. I've been
 doing cybersecurity for more than a decade now, and IT even
 longer. So very excited to be here. So lots of changes over
 a decade and you sort of covered a cutting edge kind of
 topic, you know, AI and hacks around it. So can you explain
 a little bit what the paper was about? So I wanted to
 focus on something that was relevant and was important to
 what we're experiencing and seeing every day. And so we
 were having a meeting, actually, it was pretty
 interesting. We had a phishing alert come in from one of our
 SOC analyst teams and they're reviewing it and going
 through, but it quickly escalated when they noticed
 that, for the user that clicked on the phishing email, they were
 starting to see that user's identity across all the machines within
 the enterprise. That ultimately led up to the
 manager level. We got on a very important call with
 everybody, including VPs and directors. And we discovered
 that what occurred was Microsoft's automatic attack
 disruption tool. It's an autonomous AI tool that was
 enabled and took these actions without any of us knowing. We
 didn't even know it was enabled. We didn't even know
 that it was running. And we started to dig deeper and we
 noticed that the fidelity was actually pretty good, because
 immediately the question from our VPs and directors and us
 was, should we turn it off? Right. Because it had taken
 actions like really disabling a user account. And
 it added the user across the enterprise under deny
 interactive logon. And it did all this in a short span
 of time, which was really great. And one of our VPs had a
 question. He said, well, could this be used against us? And
 that question really provoked my thought process and thus
 the research topic. I really wanted to dig into that
 question. You know, from a defense perspective, we're
 leveraging AI almost everywhere. Right. Artificial
 intelligence is like the key word to use in all your
 defense mechanism tools, especially autonomously:
 taking automated action improves your mean time
 to respond, mean time to closure, you name it. It
 speeds everything up because everything is
 functioning at machine speed. And so my research delved
 into, let's look at these defensive tools and see, could
 they be manipulated in a way that would cause a disruption?
 And thus the topic or the title of my paper, right, is
 introducing the autonomous defense induced disruption,
 ADID, kind of like a mouthful, and how AI driven automated
 response can be manipulated to disrupt enterprise operations.
 And so the experiment focused on that. Really, I worked hard
 on building out a really good lab as close to real as
 possible. E5 trial license, real domain connected to it, a
 bunch of identities, ran a ton of scripts to create personas,
 like one that mimics a user who's barely logging onto the
 machine, one that mimics a user that does normal logins, web
 traffic and whatnot, and really made sure that the E5
 license had a lot of telemetry to kind of consume web
 analysis, traffic analysis, created virtual machines, and
 then created an attack network. So just a Kali Linux
 box completely isolated and segmented. And the whole
 purpose was not to check the fidelity of the autonomous
 actions that were taken, but more importantly, can we hit
 the triggers that cause the automatic actions to fire? Can
 we simulate that to force user containment? And
 how far would it go? Yeah, so that reminds me of the
 good old account lockout after five missed logins. Exactly.
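The lockout risk they're recalling can be modeled in a few lines; the threshold and account names here are made up for illustration:

```python
# Toy model of the classic lockout problem: a defensive policy
# (lock after 5 failed logins) becomes a denial-of-service lever
# when an attacker deliberately sprays bad passwords at every account.
class LockoutPolicy:
    def __init__(self, threshold=5):
        self.threshold = threshold
        self.failures = {}   # account -> failed-login count
        self.locked = set()  # accounts the policy has locked out

    def failed_login(self, account):
        self.failures[account] = self.failures.get(account, 0) + 1
        if self.failures[account] >= self.threshold:
            self.locked.add(account)

accounts = [f"user{i}" for i in range(18)]  # 18, like the lab's test accounts
policy = LockoutPolicy()
for account in accounts:        # attacker sprays wrong passwords
    for _ in range(5):
        policy.failed_login(account)
print(len(policy.locked))  # prints 18: every account locked
```

The defender's own automation does the damage; the attacker never needs a valid credential.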
 Password spray is locking everybody up. I think Cisco
 had that feature where it would detect port scans
 internal to internal. Oh, yeah. And then automatically
 block systems. And one of the problems there was if you
 received an email with like lots of image tags that point
 back to an internal IP address with like colon 80, colon 81,
 colon 82, and so on. Yeah, you got it. You could trigger
 that. Oops. That's exactly right. So just by opening the
 email, you would trigger that because now your email client
 goes out to all these ports and that internal IP address
 and triggers it. So in the research, again, you know, the
 lab environment wasn't focused on, like, let's fully
 compromise a system and how to do that. It was more, once an
 attacker gets a foothold into like one machine and, you
 know, through open source intelligence, they get a
 handle on usernames or are targeting a few users.
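The mail-client trick Johannes described a moment ago, image tags pointed at internal IP:port pairs, can be sketched like this; the address and port range are illustrative:

```python
# Sketch of the old trick discussed above: an HTML email whose image tags
# point at internal-IP:port pairs, so a mail client that renders images
# generates what looks like an internal port scan and trips the automated
# blocking. The IP and ports below are made up.
def scan_bait_html(internal_ip="10.0.0.5", ports=range(80, 90)):
    imgs = "\n".join(
        f'<img src="http://{internal_ip}:{p}/x.gif" width="1" height="1">'
        for p in ports
    )
    return f"<html><body>{imgs}</body></html>"

html = scan_bait_html()
print(html.count("<img"))  # prints 10: one request per port when rendered
```

Each rendered image tag becomes an outbound connection attempt from the victim's machine, which is exactly the trigger pattern the detection was watching for.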
 Ultimately, attack disruption, with our about 18 test accounts,
 disabled all 18 accounts. And once I enabled the password
 writeback tool for the hybrid environment, it also locked
 out my domain admin account on the local environment, which
 was a lot of fun to get back into. So this showed us
 one of the big points, you know, about the
 current form of artificial intelligence that we leverage
 today. And IBM did a really good job of kind of
 breaking this down: it's artificial narrow
 intelligence, meaning that all the
 large language models that we're using, ChatGPTs, Claudes,
 all that, every AI that currently exists is really
 good at what it's specifically coded or designed to do.
 Meaning contextually, it may not fully comprehend the
 ramifications of its actions. Case in point, a normal
 analyst seeing that the domain admin account is doing
 something malicious may not so quickly disable that account
 without first making sure that we have access back into our
 network. So that contextual knowledge appears to be
 missing. And that was one of the biggest points, right? A
 call-out to that, a call to action, so to speak, right?
 We have these tools that we're using, but let's
 make sure that we have the proper guardrails, break glass
 accounts, all the traditional things that we do for any type
 of runbooks that have automatic actions, just to
 ensure we don't get locked out. Yeah. Is there any kind
 of configurability in the tool where you can say, okay, don't
 lock out these accounts or? It was interesting. In the
 beginning of the research, we noticed that there was an
 option by Microsoft where you can input user identities that
 you did not want any actions taken on automatically. But an
 interesting call out is that it's not a requirement to fill
 in a user to have this enabled. So, you know, if you
 don't know to look for it, you would never know that it
 existed. And as my research increased later down the line,
 I noticed that they added a new feature, which was
 interesting, for machines: add machines here that you don't
 want containment actions to be taken against, like your crown
 jewels and things like that. So it's interesting that
 these options are starting to exist, which is great. But if you
 don't know to configure them, they're not a requirement for
 having any automatic actions taken. And there's no easy
 undo button or anything like that. Actually, what I had to
 do in order to get back into the infrastructure, right,
 into my own lab environment was this: luckily, I had a
 controlled cloud account in Azure. I had to go in there,
 log in through that and leverage a live session to
 undo and disable all the actions that were taken by the
 autonomous agents. So the rollback can occur. But if I'm
 targeting anybody just for disruption, I might just
 target the main IT admin you have. What about sort of, you
 know, visibility into what happened? Is there some decent
 logging that will tell you? All the logging is extremely
 well documented. Everything within the XDR is really,
 really great. You'll be able to see all the actions that
 were taken specifically by the autonomous agent. And you can
 roll those back if you see them. And so the bigger point
 of the paper that I wanted to bring out was that you can see
 everything happening, right? You can even control things
 like, you know, let's say you want to roll a
 few actions back. Those are possible. But the bigger
 danger is that if an adversary were to know what, let's say,
 modern equipment you're using, imagine a Palo Alto or a
 Fortinet with some sort of autonomous blocking on the
 edge, right, and they know how the system works.
 Documentation is great on how the AI works out there. They
 no longer are trying to hide themselves to cause
 disruption. They're more like, let's purposefully find these
 high-yield triggers to force a
 containment action that maybe a human would not take, one that causes
 high-severity disruption. Yeah, thanks for joining me
 here. And there should be a link then in the show notes if
 anybody's interested in the full paper. And yeah, thanks
 everyone for listening and talk to you again on Monday.
 Bye.