
SANS Stormcast Tuesday, April 22nd: Phishing via Google; ChatGPT Fingerprint; Asus AI Cloud Vuln; PyTorch RCE

If you are not able to play the podcast using the player below, use this direct link to the audio file: https://traffic.libsyn.com/securitypodcast/9418.mp3

It's 2025, so why are malicious advertising URLs still going strong?
Phishing attacks continue to take advantage of Google’s advertising services. Sadly, this is still the case for obviously malicious links, even after various anti-phishing services flag the URL.
https://isc.sans.edu/diary/It%27s%202025...%20so%20why%20are%20obviously%20malicious%20advertising%20URLs%20still%20going%20strong%3F/31880
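
As a rough illustration of the self-help approach discussed in the episode (simply treating doubleclick.net redirects as suspect), here is a minimal sketch that triages links found in email by hostname. The dynamic DNS domain list and function names are illustrative assumptions, not taken from the diary.

```python
from urllib.parse import urlparse

# doubleclick.net is the Google ad redirector discussed in the episode;
# the dynamic DNS domains are illustrative assumptions only.
AD_REDIRECTORS = {"doubleclick.net"}
DYNAMIC_DNS = {"duckdns.org", "no-ip.com", "dyndns.org"}


def _under_domain(host: str, domain: str) -> bool:
    """True if host is the domain itself or one of its subdomains."""
    return host == domain or host.endswith("." + domain)


def classify_link(url: str) -> str:
    """Very coarse triage of a URL found in a suspicious email."""
    host = (urlparse(url).hostname or "").lower()
    if any(_under_domain(host, d) for d in AD_REDIRECTORS):
        return "ad-redirector"      # e.g. googleads.g.doubleclick.net click links
    if any(_under_domain(host, d) for d in DYNAMIC_DNS):
        return "dynamic-dns host"   # often fronts short-lived phishing pages
    return "unclassified"


if __name__ == "__main__":
    for link in [
        "https://googleads.g.doubleclick.net/pagead/aclk?adurl=...",
        "https://mailbox-update.duckdns.org/login",
    ]:
        print(classify_link(link), "-", link)
```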

ChatGPT Fingerprinting Documents via Unicode
ChatGPT has apparently started leaving fingerprints in the text it creates by adding invisible Unicode characters such as non-breaking spaces.
https://www.rumidocs.com/newsroom/new-chatgpt-models-seem-to-leave-watermarks-on-text
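
If you want to check a document yourself, a minimal sketch along these lines (assuming a small, hand-picked set of invisible code points based on the article's mention of non-breaking spaces and similar characters) could look like this:

```python
import unicodedata

# Candidate code points; this set is an assumption based on the article's
# description ("non-breaking spaces and the like"), not an official list.
SPACE_LIKE = {"\u00a0", "\u202f"}            # no-break space variants -> plain space
ZERO_WIDTH = {"\u200b", "\u200c", "\u2060"}  # zero-width characters -> removed
SUSPECT = SPACE_LIKE | ZERO_WIDTH


def report_invisible(text: str) -> list[tuple[int, str]]:
    """Return (index, Unicode name) for every suspect code point found."""
    return [
        (i, unicodedata.name(ch, f"U+{ord(ch):04X}"))
        for i, ch in enumerate(text)
        if ch in SUSPECT
    ]


def strip_invisible(text: str) -> str:
    """Normalize space-like characters to ASCII spaces and drop zero-width ones."""
    out = []
    for ch in text:
        if ch in SPACE_LIKE:
            out.append(" ")
        elif ch in ZERO_WIDTH:
            continue
        else:
            out.append(ch)
    return "".join(out)


if __name__ == "__main__":
    sample = "This sentence\u202fcontains a narrow no-break space."
    print(report_invisible(sample))
    print(strip_invisible(sample))
```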

Asus AI Cloud Security Advisory
Asus warns of a remote code execution vulnerability in its routers. The vulnerability is related to the AI Cloud feature. If your router is end-of-life, disabling the feature will mitigate the vulnerability.
https://www.asus.com/content/asus-product-security-advisory/

PyTorch Vulnerability
PyTorch fixed a remote code execution vulnerability exploitable if a malicious model is loaded. The issue was exploitable even with the "weights_only=True" setting selected.
https://github.com/pytorch/pytorch/security/advisories/GHSA-53q9-r3pm-6pq6
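
For context, weights_only=True is the torch.load option intended to restrict deserialization to plain tensor data; per the advisory, it did not fully prevent code execution before PyTorch 2.6. Below is a minimal sketch of the defensive pattern (version check plus restricted load); the checkpoint path and the simple version parsing are illustrative assumptions.

```python
import torch

MIN_SAFE = (2, 6)  # per the advisory, the weights_only bypass is fixed in 2.6


def _major_minor(v: str) -> tuple[int, int]:
    """Turn a version string like '2.6.0+cu121' into a (major, minor) tuple."""
    parts = v.split("+")[0].split(".")
    return int(parts[0]), int(parts[1])


def load_untrusted_checkpoint(path: str):
    """Load a checkpoint from an untrusted source as data only.

    weights_only=True tells torch.load to refuse arbitrary pickle objects,
    but older releases did not enforce this fully, so refuse to load on
    anything older than 2.6.
    """
    if _major_minor(torch.__version__) < MIN_SAFE:
        raise RuntimeError(
            f"PyTorch {torch.__version__} is older than 2.6; "
            "update before loading untrusted checkpoints."
        )
    return torch.load(path, map_location="cpu", weights_only=True)


if __name__ == "__main__":
    # 'model.pt' is a placeholder path for an externally obtained checkpoint.
    state = load_untrusted_checkpoint("model.pt")
    print(type(state))
```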

Podcast Transcript

 Hello and welcome to the Tuesday, April 22nd, 2025
 edition of the SANS Internet Storm Center's Stormcast. My name is
 Johannes Ullrich and today I'm recording from Jacksonville,
 Florida. And no, I won't mention it every single
 episode going forward, but remember SANSFire, July 14th
 through July 19th. And to register, just go to SANSFire.us.
 Well, in today's diaries, Jan is writing about his favorite
 topic, phishing. And in part, well, why is it still so easy?
 In the particular case that Jan is presenting here, it's a
 very straightforward phishing attack. It implements one of
 those webmail forms that we see very often being used to
 phish email credentials. And then, well, it's advertised
 via sort of a simple email failure notice. Again, a very
 common scheme being used to lure users into clicking on
 phishing links. However, it's then being directed to a
 dynamic IP address. Essentially, it uses one of
 those dynamic IP forwarding systems to host this
 particular website. It is forwarded to this dynamic IP
 by Google's own doubleclick.net system. And that's sort
 of really where Jan has an issue with Google making it
 just too easy for attackers. This particular site has been
 flagged for about a week now by VirusTotal, which also is
 run by Google. So the data is certainly there to prove that
 this is not a good site to direct users to. Well, I guess
 you can also do the same thing yourself: block doubleclick.net.
 I've actually been doing this for a while. Sort of a
 very simple ad blocker as well. And it seems to be
 working well. Of course, it will break some real ads that
 you may be interested in, but you can always go to the
 company's website directly. And then if everybody currently
 taking a writing class in college could just skip for a
 minute, the next story is about ChatGPT and rumidocs.com.
 They discovered that ChatGPT apparently now keeps
 inserting Unicode characters like non-breaking
 spaces and the like that could potentially be used to
 discover if a particular text was created with ChatGPT.
 These characters appear to be added somewhat randomly. Of
 course, that could also then be used for some kind of
 fingerprinting or figuring out which account created a
 particular document. But they may be preserved when you're
 doing a simple copy-paste out of ChatGPT. Well, I hope
 you're at least smart enough to figure out how to remove
 these non-printing Unicode characters. Again, they're
 essentially spaces, so they're not visible in your normal
 Word or other editors like this. But they definitely
 should show up in like a simple ASCII type editor.
 Rumidocs also suggests that this may be due to ChatGPT now
 offering free accounts for students. So somewhat
 balancing here the temptation, of course, for students to
 cheat using ChatGPT. And ASUS released a security update for
 its routers fixing a critical vulnerability in the AI cloud
 functionality. This vulnerability apparently can
 be used to execute arbitrary code in the router without
 authentication. The exact nature of the vulnerability
 isn't clear. However, one important piece of advice here: if
 there is no firmware update for your router, because it's
 either not released yet or maybe your router is at its
 end of life, then you have the option to disable the AI cloud
 functionality. I don't have an ASUS router right now, so I
 have no idea what you get with AI cloud. Maybe a good idea to
 shut it off anyway, but for now, apply the update and,
 yeah, if possible, disable the feature. And we have a
 vulnerability in PyTorch. This actually sounds like a
 vulnerability that we already covered, but appears to be at
 least a new version, new variety of it. And it's just a
 good old problem that if you are loading AI models, you may
 be executing code. And if you're loading these models
 with the weights only equals true feature, which is
 supposed to be safe, supposed to only load the data, not
 execute any code, well, that actually is not safe either or
 was not safe either in versions of PyTorch prior to 2.6.
 So definitely get PyTorch updated because that setting,
 weights underscore only equals true, used to be the fix for
 this remote code execution issue, but apparently isn't in
 prior versions of PyTorch. Well, this is it for today. So
 thanks again for listening. Thanks everybody for
 subscribing, for liking this podcast and for recommending
 it, as well as for leaving good reviews. Thanks and talk
 to you again tomorrow. Bye.