Podcast Detail

SANS Stormcast Friday, November 21st, 2025: Oracle Idendity Manager Scans; SonicWall DoS Vuln; Adam Wilson (@sans_edu) reducing prompt injection.

If you are not able to play the podcast using the player below: Use this direct link to the audio file: https://traffic.libsyn.com/securitypodcast/9710.mp3

Podcast Logo
Oracle Idendity Manager Scans; SonicWall DoS Vuln; Adam Wilson (@sans_edu) reducing prompt injection.
00:00

My Next Class

Application Security: Securing Web Apps, APIs, and MicroservicesDallasDec 1st - Dec 6th 2025
Network Monitoring and Threat Detection In-DepthOnline | Central European TimeDec 15th - Dec 20th 2025

… more classes


Oracle Identity Manager Exploit Observation from September (CVE-2025-61757)
We observed some exploit attempts in September against an Oracle Identity Manager vulnerability that was patched in October, indicating that exploitation may have occurred prior to the patch being released.
https://isc.sans.edu/diary/Oracle%20Identity%20Manager%20Exploit%20Observation%20from%20September%20%28CVE-2025-61757%29/32506
https://slcyber.io/research-center/breaking-oracles-identity-manager-pre-auth-rce/

DigitStealer: a JXA-based infostealer that leaves little footprint
https://www.jamf.com/blog/jtl-digitstealer-macos-infostealer-analysis/

SonicWall DoS Vulnerability
Sonicwall patched a DoS vulnerability in SonicOS
https://psirt.global.sonicwall.com/vuln-detail/SNWLID-2025-0016

Adam Wilson: Automating Generative AI Guidelines: Reducing Prompt Injection Risk with 'Shift-Left' MITRE ATLAS Mitigation Testing

Podcast Transcript

 Hello and welcome to the Friday, November 21st, 2025
 edition of the SANS Internet Storm Center's Stormcast. My
 name is Johannes Ullrich, recording today from
 Jacksonville, Florida. And this episode is brought to you
 by the SANS.edu Bachelor's Degree Program in Applied
 Cybersecurity. Well, in diaries today, let's start
 with Oracle Identity Manager. In October, as part of its
 critical patch update that Oracle releases once a
 quarter, one critical vulnerability was patched in
 Oracle Identity Manager that not only allows authentication
 bypass, but also remote code execution as part of that
 authentication bypass. And with Oracle Identity Manager
 being sort of a critical part of the entire Oracle
 ecosystem, this is certainly a big deal. And Oracle Identity
 Manager was also sort of one of the issues behind the
 breach of Oracle's cloud earlier this year. Now, today
 Searchlight Cyber did release some details regarding this
 vulnerability. And it turns out exploitation is pretty
 straightforward, pretty simple for this vulnerability.
 Essentially a bug in the Oracle Identity Manager
 authentication logic that any URL that adds with .wadl will
 bypass authentication. Now, typically, if you just add
 .wadl, you end up with a different file that it points
 to and you get a 404 error. So not much happens there. But if
 you do a semicolon .wadl, the authentication bypass still
 works, and you're not pointing to a different URL. So this is
 basically what Searchlight Cyber found and then reported
 to Oracle. Now seeing that particular proof of concept
 URL that Searchlight Cyber published, I went back through
 our honeypot logs to see if you already see any
 exploitation for this vulnerability. Well, I didn't
 see anything for today or the last couple days. But I did
 see some exploitation for the first week of September. So
 well before the vulnerability was actually patched and
 publicly known. The URL that's being exploited there against
 our honeypots is slightly different but still use that
 same semicolon .wadl pattern. So maybe a different group
 found the same vulnerability. And given that they hit our
 honeypots, it's well fair to assume that they did internet
 -wide scans and as a result probably hit a lot of these
 Oracle Identity Managers that were exposed at the time and
 may have exploited some of them. So this kind of changes
 the equation a little bit. If you're running Oracle Identity
 Manager, if you patched as the patch was released in October,
 you still may have to go back and check for compromise prior
 to patching if this particular exploit hit your particular
 system before you applied the patch. And Apple security
 company YAMF has published a blog post with details
 regarding a new info stealer that they spotted. This
 particular info stealer tricks users into installing it by
 claiming to be Dynamic Lake. Dynamic Lake is software that
 implements something similar to the iOS dynamic island for
 macOS. So it's legitimate software, but in this case the
 attacker emulates or mimics this legitimate software in
 order to trick the victim into installing malware. Kind of
 interesting malware. First of all, it's very specific on
 what systems it installs on. It does avoid virtual
 machines. It does avoid the M1 processor interestingly, but
 maybe because that's often being emulated sort of in
 virtual machines. So that's probably why it's avoiding
 them. And then it installs one out of four sort of info
 stealer components that do steal the usual things like
 your keychain database, things like telegram information, and
 of course crypto wallet details. And it wouldn't be a
 podcast episode if we wouldn't talk about some kind of
 permanent device security issue. This time it's not
 actually all that severe. It's well just a denial of service.
 In SonicWall we have a Sonic OS SSL VPN vulnerability. It's
 a buffer overflow but only leads to crashing the service.
 And while we're talking about SL VPNs, I just saw that
 Fortinet did announce that in their latest version of the 7
 track of FortiOS they're actually going to move away
 from SL VPN and they're urging users to switch to IPSec
 instead. Well then since it's Friday again we do have
 another science study.edu student here to talk about
 their research project. Adam, could you introduce yourself
 please? Hi Dr. Ullrich, it's great to be on the program.
 Thanks for having me on the show. Yeah, my name is Adam
 Wilson. I'm a senior manager of DevSecOps at a company R1
 RCM. I've been working in application security and
 DevSecOps for several years with a software engineer
 before that. Your topic was well one of everybody's
 favorite topic these days. It was AI related. Tell us a
 little bit about what the topic was. Yeah, the topic it
 came out of MITRE Atlas and trying to mitigate prompt
 injection with something that they prescribe and they call
 it generative AI guidelines. And MITRE with the Atlas
 framework they're not necessarily trying to
 prescribe details on how to implement this. So this was an
 experiment in implementing generative AI guidelines,
 testing to see how effective are they, how can we use these
 in build system automation and shift application security
 left for the AI engineering lifecycle. So the MITRE Atlas
 framework, that's something of course people may not be that
 familiar with. People usually the MITRE ATT&CK framework I
 think is what everybody probably has heard of who is
 listening in here. How is Atlas different and what does
 it sort of do for you here? Yeah, I would almost describe
 Atlas as like an extension of the ATT&CK framework. And as
 you mentioned, you know, most of us in cyber are very
 familiar with the ATT&CK framework. We use that a lot
 in threat intel and pen testing, red team operations.
 But Atlas is sort of this extension where they are
 focusing on how do threat actors attack and abuse AI
 systems and AI applications. And then of course, they're
 adding in a knowledge base of mitigations prescribed for
 defending against those tactics and techniques. Yeah.
 So can you walk us through an example like how that applies?
 Yes. So one of the tactics that they discuss is something
 that OWASP has ranked as the number one risk against
 generative AI applications, which is prompt injection. And
 we hear about prompt injection all the time. MITRE prescribes
 four different mitigation techniques for prompt
 injection. They talk about guidelines, guardrails, model
 alignment, and AI telemetry logging. And this experiment
 focused very specifically on the guidelines mitigations.
 Yeah, so what would be a guideline basically adjusting
 the prompt here to prevent prompt injection? Or what's
 one of those guidelines? Yeah, yeah. And it really the
 experiment keyed off of how MITRE defines these
 guidelines, they talk about, and I'll just quote part of
 the definition here, that guidelines can be implemented
 as instructions appended to all user prompts or as part of
 the instructions in the system prompt. So that sounds an
 awful lot like prompt engineering, right? And we
 know that that prompt injection really is a
 malicious form of prompt engineering. And so what
 they're prescribing here with guidelines is almost like
 fighting fire with fire. You're going to use prompt
 engineering best practices to help mitigate the effects of
 prompt injection. So one example of that, if we look at
 the landscape of prompt engineering best practices, we
 see things like chain of thought. We see few-shot
 learning. And this experiment focused on what if we automate
 chain of thought prompt engineering within this
 guidelines mechanism. What if we automate few-shot learning?
 And then what if we combine the two and use a defense in
 -depth approach? Yeah. And what did you find there? So I
 think that one of the really interesting things about the
 experiment was that it underscored the importance of
 one of the first principles of cybersecurity, which we teach
 and we try to implement in our controls, which is defense in
 depth. And that in this experiment, no single control,
 whether it was just chain of thought or it was just few
 -shot learning, we saw incremental improvements in
 the system's ability to defend itself against prompt
 injection. But when the defenses were layered, when
 there was few-shot learning and chain of thought, when
 we're using both of those prompt engineering best
 practices, that showed the most
 dramatic effect. That was the best approach for mitigation.
 Yeah, which makes sense. And now, when you introduce
 yourself, you mentioned that you're sort of in DevOps and
 can you automate these defenses as part of a DevOps
 tool, Jane? Yeah. Once you have these guidelines
 mechanisms implemented as an automated control within the
 system, within the application, we can use, just
 like we use unit testing frameworks for testing
 different parts of an application, we can automate
 that, especially in the build system. You know, one of the
 DevOps tenets is, you know, we want to shift any kind of
 testing, automated testing, as early as possible in the
 process. So we can do the same thing here, where we are,
 we're testing these generative AI guidelines mechanisms early
 in the process.
 So one example is like using GitHub actions, where you can,
 you can have these policy as code tests, early in the build
 stage.
 Yeah, so the good old shift left, that we sort of always
 preach for software security still applies, that we want to
 make as many of these tests early in the process and
 automate it. Yeah, absolutely. Yeah. And again, another one
 of these first principles of cyber security is that we want
 to test early, we want to automate, we want to use
 defense in depth. And using GitHub actions or Jenkins or
 whatever build system is in your environment is going to
 give you a lot of insight very early in the process on, are
 these defensive mechanisms effective? Are they something
 that I need to pause my deployment process so I can go
 back and improve? Yeah. Yeah. So that's great. The paper
 will be in the show notes, there will be linked to the
 paper in the show notes. Any final words, anything that
 people should consider or should focus on when they read
 your paper? Anything that you want to... Yeah, one thing
 that I mentioned is that there's a lot of possibility
 here, a lot of opportunity for continued research. We can
 look at other mitigations that MITRE Atlas prescribes, such
 as the generative AI guardrails, which would be
 validating the output before it goes to the end user or
 before it goes to the next, you know, agentic AI system in
 the chain. And so what if we layered these, continued to
 build our defense in depth layers and added guardrails to
 guidelines? So I would want readers to continue to
 innovate and look at new ways to increase that defense in
 depth approach. Yeah, that's great. Thanks for joining me
 here, Adam. And thanks, everyone for listening and
 talk to you again on Monday. Thanks so much