Handler on Duty: Jan Kopriva
Threat Level: green
Podcast Detail
SANS Stormcast Monday, February 23rd, 2026: Japanese Phishing; AI Agents Ignoring Instructions; Starkiller MFA Phishing
If you are not able to play the podcast using the player below: Use this direct link to the audio file: https://traffic.libsyn.com/securitypodcast/9820.mp3
My Next Class
| Application Security: Securing Web Apps, APIs, and Microservices | Orlando | Mar 29th - Apr 3rd 2026 |
| Network Monitoring and Threat Detection In-Depth | Amsterdam | Apr 20th - Apr 25th 2026 |
Japanese-Language Phishing Emails
https://isc.sans.edu/diary/Japanese-Language%20Phishing%20Emails/32734
'God-Like' Attack Machines: AI Agents Ignore Security Policies
https://www.darkreading.com/application-security/ai-agents-ignore-security-policies
Starkiller: New Phishing Framework Proxies Real Login Pages to Bypass MFA
https://abnormal.ai/blog/starkiller-phishing-kit
| Application Security: Securing Web Apps, APIs, and Microservices | San Diego | May 11th - May 16th 2026 |
| Network Monitoring and Threat Detection In-Depth | Online | Arabian Standard Time | Jun 20th - Jun 25th 2026 |
| Network Monitoring and Threat Detection In-Depth | Riyadh | Jun 20th - Jun 25th 2026 |
| Application Security: Securing Web Apps, APIs, and Microservices | Washington | Jul 13th - Jul 18th 2026 |
| Application Security: Securing Web Apps, APIs, and Microservices | Online | British Summer Time | Jul 27th - Aug 1st 2026 |
| Application Security: Securing Web Apps, APIs, and Microservices | Las Vegas | Sep 21st - Sep 26th 2026 |
Podcast Transcript
Hello and welcome to the Monday, February 23rd, 2026 edition of the SANS Internet Storm Center's Stormcast. My name is Johannes Ullrich, recording today from Jacksonville, Florida. And this episode is brought to you by the SANS.edu Graduate Certificate Program in Penetration Testing and Ethical Hacking. Brad on Friday published a diary talking about some Japanese phishing emails. Now Brad doesn't speak Japanese, and he does not reside in Japan, but still for some reason got on a mailing list of a particular threat actor that's sending out phishing emails in Japanese. Brad talks a little bit about why he believes that all of the emails he received as part of these campaigns come from the same group, the same threat actor. But the real lesson here, I think, is that threat actors are not just sending emails in English. This can particularly be a problem for multinational companies where your normal business language is English. As a result, when you're doing phishing tests, you're often sending these tests in English. A few years back, we had a sans.edu student who looked into, for example, how to figure out what languages are actually being used in your environment based on the emails being sent, and then tailoring some of the phishing tests accordingly. I know phishing testing is a controversial subject in itself, but if you're doing it, you may as well try to do it as well as possible, and part of that should be looking at phishing emails in different languages. Also, when you're looking at your spam and phishing filters, make sure they don't have similar biases and will capture these non-English phishing emails, which could be missed if your phishing filter is basically only looking at English emails and only considers those as potential phishing.
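[Editor's note: One way to get a rough sense of which languages show up in your users' mail is a simple script-detection heuristic. The sketch below is purely illustrative, assuming you already have message bodies as text; it is not Brad's tooling or the student's actual method, and real deployments would use a proper language-detection library.]

```python
# Rough heuristic: flag an email body as likely Japanese by counting
# characters in the hiragana, katakana, and CJK Unicode ranges.
# Illustrative sketch only, not production language detection.

JAPANESE_RANGES = [
    (0x3040, 0x309F),  # hiragana
    (0x30A0, 0x30FF),  # katakana
    (0x4E00, 0x9FFF),  # CJK unified ideographs (shared with Chinese)
]

def japanese_ratio(text: str) -> float:
    """Fraction of non-space characters that fall in Japanese script ranges."""
    chars = [c for c in text if not c.isspace()]
    if not chars:
        return 0.0
    hits = sum(
        1 for c in chars
        if any(lo <= ord(c) <= hi for lo, hi in JAPANESE_RANGES)
    )
    return hits / len(chars)

def looks_japanese(text: str, threshold: float = 0.3) -> bool:
    return japanese_ratio(text) >= threshold

print(looks_japanese("Please review the attached invoice."))       # False
print(looks_japanese("請求書を添付いたしましたのでご確認ください。"))  # True
```

Running something like this over a sample of inbound or outbound mail would show whether Japanese (or any other script) is common enough in your environment to warrant non-English phishing tests and filter tuning.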
Over the last couple of weeks, we had a couple of incidents where security was breached by AI tools not following instructions, in particular when it comes to security guardrails that the AI was supposed to obey. I think what's happening here is very similar to what you see in humans, where humans often try to get work done and in the process ignore things like code freezes, or rules about not using certain data or certain tools. As AI becomes more intelligent, it's picking up some of these behaviors that, of course, are often associated with intelligence. On the other hand, it's not quite that intelligent yet, so sometimes it doesn't make the right decision. There have been a number of these incidents, and I'm going to link to an article by Robert Lemos on Dark Reading that summarizes some of them. For example, and that was one I almost included last week in the podcast, Microsoft's Copilot apparently indexed some confidential emails, even though it was told not to. There were other issues, like I mentioned, where AI agents made changes even though they were told not to make any changes, and the like. So this is definitely a recurring problem. In the end, I believe the only way you're going to safely use some of these tools is by actually preventing access: not providing them with the credentials to, for example, make changes to your code, unless you actually want them to make changes to your code. There's also a story, and I haven't 100% verified it yet, but it looks like it came from Amazon itself, where Amazon stated that they had a couple of outages caused by AI tools essentially overstepping their bounds and making changes they weren't supposed to make. It wasn't the big Amazon outage, but some smaller tools within the AWS ecosystem were down for multiple hours as a result.
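[Editor's note: The mitigation described above, withholding credentials and tools rather than relying on the agent to obey instructions, can be sketched as a deny-by-default gate around tool calls. All names here (ToolGate, the tool functions) are hypothetical; this is not any real agent framework's API.]

```python
# Deny-by-default gate around an AI agent's tool calls: the agent can only
# invoke tools it was explicitly granted, so "ignoring instructions" fails
# closed instead of silently succeeding. Hypothetical sketch, not a real
# agent framework's API.

class ToolDenied(Exception):
    pass

class ToolGate:
    def __init__(self, allowed: set[str]):
        self.allowed = allowed      # tools this agent may call
        self.audit_log = []         # record every attempt, allowed or not

    def call(self, tool_name: str, func, *args, **kwargs):
        permitted = tool_name in self.allowed
        self.audit_log.append((tool_name, permitted))
        if not permitted:
            # Fail closed: no credentials, no side effects.
            raise ToolDenied(f"agent is not granted the tool {tool_name!r}")
        return func(*args, **kwargs)

# Hypothetical tools the agent might try to use
def read_file(path):
    return f"contents of {path}"

def deploy_change(service):
    return f"deployed to {service}"

gate = ToolGate(allowed={"read_file"})   # read-only agent

print(gate.call("read_file", read_file, "config.yaml"))
try:
    gate.call("deploy_change", deploy_change, "prod")
except ToolDenied as e:
    print("blocked:", e)
```

The point of the design: whether the model "decides" to make a change is irrelevant, because the write-capable tool and its credentials were never handed over in the first place, and every attempt lands in an audit log.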
And going back to phishing for another story: Starkiller. That is a new phishing framework that Abnormal documented in one of their blog posts. It's yet another improvement on phishing frameworks that let you play machine-in-the-middle attacks with multi-factor authentication interception. The real point I try to get across, and I teach about this in class as well, is that not all multi-factor authentication schemes are the same. There are phishing-resistant ones, and there are ones that are not phishing resistant. The vast majority currently being implemented is not phishing resistant. If you're relying on some kind of one-time password, like the famous Google Authenticator, or even some of the slightly more sophisticated varieties like Microsoft Authenticator with the number that you need to acknowledge, well, pretty much anything like this, where the user decides whether or not to enter a particular credential, whether that's a one-time password, acknowledging a number, or a regular password: if the user is in charge of deciding what credentials to submit, then your authentication is not phishing resistant. To be phishing resistant, the machine needs to decide what credential to send. And that pretty much comes down to things like passkeys and other FIDO2 variants, which are phishing resistant. That's really what you should try to implement these days. Well, and that's it for today. So thanks for listening. Thanks for liking. Thanks for subscribing to this podcast and talk to you again tomorrow. Bye.
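[Editor's note: A toy model of why "the machine decides" defeats proxying kits like Starkiller: a FIDO2/passkey authenticator scopes each credential to the origin it was registered with, so a challenge relayed through a look-alike domain finds no matching credential. This sketch uses an HMAC for simplicity; real WebAuthn uses public-key signatures, but the origin-binding logic is the same idea.]

```python
import hmac, hashlib

# Toy model of origin binding in FIDO2/passkeys: the authenticator stores a
# per-origin key and only answers challenges for the origin a credential was
# registered with. A phishing proxy serving the real login page from a
# look-alike domain presents a different origin, so no credential matches.
# Simplified sketch; real WebAuthn uses asymmetric signatures.

class Authenticator:
    def __init__(self):
        self._keys = {}  # origin -> secret key

    def register(self, origin: str) -> None:
        self._keys[origin] = hashlib.sha256(origin.encode()).digest()

    def sign(self, origin: str, challenge: bytes):
        """The *machine* selects the credential by origin; the user cannot
        be tricked into spending the real credential on a phishing site."""
        key = self._keys.get(origin)
        if key is None:
            return None  # no credential exists for this origin
        return hmac.new(key, challenge, hashlib.sha256).digest()

auth = Authenticator()
auth.register("https://login.example.com")

challenge = b"server-nonce-123"
print(auth.sign("https://login.example.com", challenge) is not None)  # True
print(auth.sign("https://login.examp1e.com", challenge))              # None
```

Contrast this with a one-time password: the user can read the code off their phone and type it into any page, including the attacker's proxy, because nothing binds the code to the legitimate origin.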





