Podcast Detail

SANS ISC Stormcast, Jan 24, 2025: XSS in Email, SonicWall Exploited; Cisco Vulnerabilities; AI and SOAR (@sans_edu research paper by Anthony Russo)

If you are not able to play the podcast using the player below: Use this direct link to the audio file: https://traffic.libsyn.com/securitypodcast/9294.mp3


In today's episode, learn how an attacker attempted to exploit webmail XSS vulnerabilities against us. SonicWall released a critical patch fixing an already exploited vulnerability in its SMA 1000 appliance. Cisco fixed vulnerabilities in ClamAV and its Meeting Management REST API. Learn from SANS.edu student Anthony Russo how to take advantage of AI for SOAR.

XSS Attempts via E-Mail
https://isc.sans.edu/diary/XSS%20Attempts%20via%20E-Mail/31620
An analysis of a recent surge in email-based XSS attack attempts targeting users and organizations. Learn the implications and mitigation techniques.
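As a rough illustration of what hunting for these payloads can look like, here is a minimal Python sketch that flags messages carrying likely XSS payloads in the Subject header or body; the mbox path and the pattern list are illustrative assumptions, not taken from the diary:

```python
# Sketch: flag messages whose Subject or body carries likely XSS payloads.
# The mbox path and the pattern list are illustrative assumptions.
import mailbox
import re

SUSPICIOUS = re.compile(
    r"<\s*script|javascript:|onerror\s*=|onload\s*=|srcdoc\s*=",
    re.IGNORECASE,
)

def body_text(msg):
    # Concatenate all text/* parts, decoded best-effort.
    parts = msg.walk() if msg.is_multipart() else [msg]
    chunks = []
    for part in parts:
        if part.get_content_maintype() == "text":
            payload = part.get_payload(decode=True) or b""
            chunks.append(payload.decode("utf-8", errors="replace"))
    return "\n".join(chunks)

for key, msg in mailbox.mbox("/var/mail/incoming.mbox").items():
    subject = msg.get("Subject", "")  # RFC 2047-encoded subjects would
    # need decoding before matching; omitted here for brevity
    if SUSPICIOUS.search(subject) or SUSPICIOUS.search(body_text(msg)):
        print(f"possible XSS attempt: key={key} subject={subject!r}")
```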

SonicWall PSIRT Advisory: CVE-2025-23006
https://psirt.global.sonicwall.com/vuln-detail/SNWLID-2025-0002
Details of a critical vulnerability in SonicWall appliances (SNWLID-2025-0002) and what you need to do to secure your systems.

Cisco ClamAV Advisory: OLE2 Parsing Vulnerability
https://sec.cloudapps.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-clamav-ole2-H549rphA
A DoS vulnerability in ClamAV, the popular open-source antivirus engine.
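If ClamAV is embedded somewhere in your stack, a quick local check along these lines may help; the fixed-version table below is a placeholder to verify against the advisory:

```python
# Sketch: compare the local ClamAV engine version against the first
# fixed release on each branch. The version numbers below are
# placeholders -- verify them against the advisory before relying on this.
import re
import subprocess

FIRST_FIXED = {(1, 4): (1, 4, 2), (1, 0): (1, 0, 8)}  # assumed, verify

out = subprocess.run(["clamscan", "--version"], capture_output=True, text=True)
match = re.search(r"ClamAV (\d+)\.(\d+)\.(\d+)", out.stdout)
if match:
    version = tuple(int(x) for x in match.groups())
    fixed = FIRST_FIXED.get(version[:2])
    if fixed and version < fixed:
        print("vulnerable engine, update:", version)
    else:
        print("engine", version, "- confirm against the advisory")
else:
    print("could not parse clamscan --version output")
```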

Cisco CMM Privilege Escalation Vulnerability
https://sec.cloudapps.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-cmm-privesc-uy2Vf8pc
A patch of a privilege escalation flaw in Cisco’s CMM module.
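Per Cisco, the root cause is improper authorization on REST endpoints. A generic sketch of the kind of control that goes missing in this vulnerability class, using Flask purely for illustration (get_current_user() is a hypothetical helper, not Cisco's code):

```python
# Generic sketch of the control that was evidently missing: every REST
# endpoint re-checks the caller's role server-side instead of trusting
# the client. Flask is used purely for illustration.
from functools import wraps
from flask import Flask, abort, request

app = Flask(__name__)

def get_current_user(req):
    # Hypothetical stand-in: resolve the session or token on `req`
    # to a user record with a `roles` set.
    return None

def require_role(role):
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            user = get_current_user(request)
            if user is None or role not in user.roles:
                abort(403)  # authenticated is not the same as authorized
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@app.route("/api/v1/admin/users", methods=["POST"])
@require_role("admin")  # low-privileged callers never reach the handler
def create_user():
    return {"status": "created"}
```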

Podcast Transcript

 Hello and welcome to the Friday, January 24th, 2025
 edition of the SANS Internet Storm Center's Stormcast. My
 name is Johannes Ullrich and today I'm recording from
 Jacksonville, Florida. In diaries today we got something
 a little bit different. It's for a change, not honeypot
 data, but emails that actually went to one of our Internet
 StormCenter email addresses that attempted to exploit
 cross-site scripting. It's not clear exactly which cross-site
 scripting vulnerability they tried to exploit. They embedded
 JavaScript in particular in the subject and then also in
 the body of the email. But more likely than not they went
 after some kind of webmail system vulnerability. I
 always point out when we're covering cross-site scripting
 in the Defending Web Applications class that webmail
 systems are probably among the most difficult systems to
 write when it comes to cross-site scripting, given that
 email these days is usually expressed in HTML.
 There are some tricks that you can use these days in modern
 browsers like sandboxed iframes and the like, but
 still recently we had, for example, this interesting
 vulnerability in ProtonMail. It's not easy to get this
 right. So no surprise that attackers are going after it.
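A minimal sketch of the sandboxed-iframe trick mentioned here, assuming a Python webmail backend (the function name is illustrative). An empty sandbox attribute applies all restrictions, so scripts in the message never run in the webmail origin:

```python
# Sketch: render untrusted email HTML inside a fully sandboxed iframe.
# html.escape(..., quote=True) keeps the payload from breaking out of
# the srcdoc attribute; the empty sandbox attribute disables scripts.
import html

def render_email(untrusted_html: str) -> str:
    return (
        '<iframe sandbox srcdoc="{}" referrerpolicy="no-referrer">'
        '</iframe>'
    ).format(html.escape(untrusted_html, quote=True))

print(render_email('<b>hi</b><script>alert(document.cookie)</script>'))
```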
 And in the past, if some open source systems had
 vulnerabilities like this, well, they were often very
 quickly exploited. So if anybody can help me out and
 figure out what exact vulnerability they're trying
 to exploit here, let me know. Also interesting, in order to
 detect whether or not the exploit worked, they used a
 website called XSS, so cross-site scripting, dot report.
 That website, it's a free website that you can then
 basically use to deliver additional JavaScript to the
 victim and record things like browser settings, cookies, and
 the like. Interesting website too. Something that you
 probably do want to keep an eye on. Maybe block access to
 the website. At the very least, record any DNS lookups
 for that fairly unique host name. So in case you do have a
 cross-site scripting vulnerability in one of your
 sites and someone uses this particular tool to exploit it,
 well, you'll be alerted.
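A sketch of that alerting idea, watching resolver logs for the callback hostname. The log path and line format are assumptions; adapt the parsing to whatever your resolver (dnsmasq, Zeek, Windows DNS, ...) writes:

```python
# Sketch: alert when resolver logs mention the xss.report callback host.
# Path and format are assumptions; shown here for dnsmasq-style
# "query[A] example.com from 10.0.0.5" lines with log-queries enabled.
WATCHED_HOSTS = ("xss.report",)

with open("/var/log/dnsmasq.log", encoding="utf-8", errors="replace") as log:
    for line in log:
        if "query[" in line and any(h in line for h in WATCHED_HOSTS):
            print("possible XSS callback lookup:", line.strip())
```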

 And SonicWall released a critical update for its SMA 1000 Appliance Management Console,
 or short AMC, and the Central Management Console, CMC. These
 appliances are already being exploited. The vulnerability
 is a deserialization vulnerability that allows
 arbitrary code execution without authentication. Quick
 check on Shodan showed only a handful of exposed appliances.
 There's really no good reason to expose these because
 they're really sort of more internal management
 appliances, not something your users would necessarily need
 to connect to. So this is not the SonicWall firewall or
 anything like this. Please patch quickly. Again, this is
 already being exploited according to SonicWall.
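The kind of quick exposure check mentioned above could look like this with the official shodan package; the query string is an assumption, not a verified SMA 1000 fingerprint, so tune it to how these consoles actually present themselves:

```python
# Sketch: count Internet-exposed management consoles via the Shodan API.
# Requires the official `shodan` package and an API key; the query
# string is an assumption, not a verified SMA 1000 fingerprint.
import shodan

api = shodan.Shodan("YOUR_API_KEY")
result = api.count('http.title:"Appliance Management Console"')
print("exposed matches:", result["total"])
```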
 And then we got a couple of updates from Cisco. First one
 I covered because, well, it's ClamAV, which is very
 popular. It's only a denial-of-service vulnerability, but
 ClamAV can often be found embedded in other open-source
 products or even some commercial products. The OLE2
 file format decryption routine suffers from this denial-of-service
 vulnerability. And, of course, as you probably have heard me
 mention a few times, OLE, that file format is still important
 and something that anti-malware tools have to parse.
 So definitely try to patch this vulnerability. Next, we
 do have a critical vulnerability. This one
 affects the Cisco Meeting Management REST API. This is a
 privilege escalation vulnerability, but Cisco still
 rates it as critical. So definitely make sure that you
 get this addressed. It's basically improper
 authorization in the REST API, which could allow any user to
 then escalate their privileges and send administrative
 requests to the API. Of course, a very common issue in
 APIs that they don't always treat authorization
 as carefully as some of the traditional web
 apps, being a little bit more hidden from view.

 Hello, it's
 Friday again, and I have yet another Sans.edu student with
 me here, Anthony. Anthony, could you introduce yourself,
 please? Hi there. Yeah, my name is Anthony Russo. I'm
 joining y'all. I'm actually currently working for
 Atlassian. I'm our U.S. team lead for the security
 operations team. So a pleasure to be here. I'm trying to keep
 developers safe there as well. Yeah, developers are always close
 to my heart. Can you talk a little bit about your research
 paper? Like what it is about? Yeah, sure. I mean, I think it
 goes without saying that AI has been a big talk for
 cybersecurity as of late, especially with the recent
 rollout of ChatGPT and all the ways that AI has kind of
 seemed to make its way into our world. And so I, as a
 security operations person, have been particularly
 interested to see how we can incorporate that into our
 work. And one of the big things that we utilize for our
 team is a SOAR platform, which is just a way to make fairly
 straightforward automations without having to write full-blown
 applications or get into in-depth software engineering.
 So what I found was that ChatGPT, and particularly the
 custom GPTs and the ability to connect
 them to APIs, provided us a pretty intuitive and useful
 way to make automations as opposed to having to manually
 do this in the SOAR platform. So that was primarily the
 focus of my paper to explore that and to see what these
 LLMs are capable of doing. Yeah, I thought that was a
 really great use of LLMs. And SOAR has been sort
 of a big thing, I would say like three, four years ago
 when people sort of first discovered it. And it's a
 fairly intuitive thing. Hey, we have the same incidents over
 and over. Let's automate what we're doing. But I think,
 like with a lot of these technologies, people got
 a little bit disappointed by it, that it's so much work to
 set it up and get these automations working reliably.
 Do you find that these ChatGPTs or LLMs made this not
 just faster, but also led to some better results? Yeah,
 definitely. I mean, I don't think that we are in a state
 where they're perfectly right every time. It's not the
 silver bullet. But, you know, with SOARs, I mean, it's
 similar to writing code, right? And you have to account
 for many different factors. So if, let's say, we were writing
 a SOAR play to handle phishing, you know, we write
 it to handle a certain set of headers in a certain format.
 And then let's say somebody uses a different email
 platform that changes the headers. Well, then we now
 have to go back and change it to account for that. What I have
 found with LLMs and ChatGPT is they're very good at intuiting,
 you know, these different changes in these different
 formats so that a lot of these smaller scale bugs are handled
 on the fly without having to go in and to fix every little
 thing. So, yeah, I definitely think it's been great for that
 at that point. Yeah, I use a lot like Copilot and such for
 coding. And I think I have a similar experience where the
 prompts that some people sort of show as a demo, I think are
 overselling it a little bit where it says, you know, write
 the next great iOS game and it writes the game for you. That
 may work, but that's not really sort of how it works.
 It sort of helps a somewhat skilled developer to really
 code faster. And that's sort of what you found with the
 SOAR application. Yeah, exactly. Yeah, it's kind of
 like just that added benefit. It really leads the way for
 you and as opposed to you having to, you know, go and
 search Stack Overflow, you know, a lot of the answers are
 already there presented to you real time. Yeah, and I think
 that comparison is good. I actually did a little
 experiment a while ago where I sort of compared Copilot to
 just Googling for the result. And yeah, it's faster. In some
 cases, you actually end up with verbatim the same
 solution. ChatGPT, or Copilot, which I used in that case, was
 actually a little bit more secure. I think it had one
 security vulnerability less than what I got just from Stack
 Overflow. Now with these SOAR scripts, anything that just
 didn't work or where it sort of could have potentially
 caused damage if you just would have used it blindly?
 Yeah, I mean, I think you can't be 100% reliant on these
 for, like you can't just write an automation with ChatGPT and
 just say, handle all of my security operations triage,
 right? You need to be specific and you need to
 compartmentalize it as much as you can. So maybe you have a
 custom LLM handling phishing-specific use cases and you
 have prompts specifically set up to handle those type of
 events. And that way you can avoid having any of these,
 what we're calling hallucinations for LLMs to
 just kind of go off and completely miss the mark,
 which can definitely happen. So that's one way
 I've found to help reduce any sort of issues. But yeah, I
 mean, if you really were just to throw it at it in a
 broad sense, I think it's pretty inconsistent, which, I
 mean, would be right for many things. Like, you can't just
 throw it at a SOAR play; you'd have to write a SOAR
 automation to handle it. So in that sense, I don't think it's
 a bad thing. But yeah, like I was mentioning earlier, I
 don't think it's perfect in any other sense. It still has
 its quirks. Yeah. And I've seen people compare it to
 outsourcing software development where a lot of it
 depends on getting the specifications right and
 similar here. Oh, yeah, that's a good... ...with the prompt
 kind of, yeah. That's a good metaphor, yeah. Because even
 if you have skilled developers, if they don't know
 what you really want to do, then you end up with bad code
 usually. And you just have to explain it to ChatGPT, what
 you actually want. Yeah, and the benefit... Oh, sorry.
 Yeah, not to cut you off, but I think the benefit with
 ChatGPT is you get that real -time response, right? Where
 it doesn't do something right, and then so you go back and
 you just modify that prompt a bit or you tell it, oh, I
 didn't like that, and then it alters itself. How do you test
 those scripts, whether or not they're actually working? Do
 you have some test cases that you run them against or just
 try them out and see what happens? Yeah. In my paper, I
 used phishing and malware as two of the main test use
 cases. Fortunately for us, we live in a world where a lot of
 this stuff is on the internet, so you can find many sources
 of samples of phishing cases. There's a lot on GitHub where
 you can just find emails and then throw that at your LLM
 and see how it handles those. And same thing for malware.
 You can find many different examples of malware. So, yeah,
 that's kind of the way I approached it, was using a lot
 of that open source data that was available to us to run it
 through these cases and then see how it performed. Yeah,
 great. And are you using this now in your day-to-day work?
 Yeah, I will say we aren't using ChatGPT directly. We are
 using a SOAR platform internally that does have
 integrations to LLMs, but we are hosting them locally. And
 a lot of that has to do with privacy concerns. You know,
 with these LLMs, especially when you're operating in a
 public space, like that ChatGPT, and you're not paying
 for any sort of private license, you know, your data
 is kind of out there. So when you're dealing with customer
 data and production data, it's probably smart to consider
 that whenever you're building out your automations. So,
 yeah, we're using a flavor of it. Atlassian actually has
 just recently released Rovo, which is one of the AI agents
 that we've been using, which has our own flavor of LLMs,
 which is pretty cool because you can write custom agents,
 which effectively is similar to just prompting it and building
 out instructions for it to handle specific use cases. And
 so that's been one of the ways we've been using it, you know,
 dogfooding our own work, our own product. So, yeah. Yeah,
 excellent. There will be a link to the paper in the show
 notes. Any final words, anything people should look for in
 particular in the paper? No, yeah. I would just like to
 take a moment to say thank you for having me on here. I love
 the show and just love how consistently you have been
 uploading and providing information to the community.
 It's been great to be on here. Yeah, if you have any feedback
 or thoughts, you can find me on LinkedIn and just shout out
 the paper and I'd love to have a back and forth with you.
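To make the compartmentalized approach from the interview concrete, here is a minimal sketch of a phishing-specific triage step as it might sit inside a SOAR playbook. The OpenAI client stands in for whatever LLM integration your platform offers; the model name and the JSON schema are assumptions:

```python
# Sketch: one narrow system prompt, one email in, a structured verdict
# out. The model only classifies; it never triggers containment actions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You triage suspected phishing emails. Reply with JSON: "
    '{"verdict": "phishing|benign|unsure", "indicators": [...]}. '
    "Do not take any action; only classify."
)

def triage_email(raw_email: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; pick per your deployment
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": raw_email[:20000]},  # cap input
        ],
    )
    return response.choices[0].message.content

# A human, or a narrowly scoped playbook step, acts on the verdict.
```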
 Well, and that's it for today. Thanks for listening. By the
 way, if anybody's interested in any classes I teach, I'll
 be teaching in Baltimore in March, our Intrusion
 Detection class, SEC503. A link is always below in the show notes
 for any upcoming classes I teach. Thanks and talk to you
 again on Monday. Bye.