Handler on Duty: Guy Bruneau
Threat Level: green
Podcast Detail
SANS Stormcast Friday, August 8th, 2025: ASN43350 Mass Scans; HTTP/1.1 Must Die; Hybrid Exchange Vuln; SonicWall Update; SANS.edu Research: OSS Security and Shifting Left
If you are not able to play the podcast using the player below: Use this direct link to the audio file: https://traffic.libsyn.com/securitypodcast/9562.mp3

My Next Class
Application Security: Securing Web Apps, APIs, and Microservices | Las Vegas | Sep 22nd - Sep 27th 2025 |
Application Security: Securing Web Apps, APIs, and Microservices | Denver | Oct 4th - Oct 9th 2025 |
Mass Internet Scanning from ASN 43350
Our undergraduate intern Duncan Woosley wrote up the aggressive scans he observed from ASN 43350
https://isc.sans.edu/diary/Mass+Internet+Scanning+from+ASN+43350+Guest+Diary/32180/#comments
HTTP/1.1 Desync Attacks
PortSwigger released details about new types of HTTP/1.1 desync attacks it uncovered. These attacks are particularly critical for organizations using middleboxes to translate from HTTP/2 to HTTP/1.1.
https://portswigger.net/research/http1-must-die
Microsoft Warns of Exchange Server Vulnerability
An attacker with admin access to an Exchange Server in a hybrid configuration can use this vulnerability to gain full domain access. The issue is mitigated by an April hotfix, but the vulnerability was not called out when that hotfix was released.
https://msrc.microsoft.com/update-guide/vulnerability/CVE-2025-53786
SonicWall Update
SonicWall no longer believes that a new vulnerability was used in the recent compromises.
https://www.sonicwall.com/support/notices/gen-7-and-newer-sonicwall-firewalls-sslvpn-recent-threat-activity/250804095336430
SANS.edu Research: Wellington Rampazo, Shift Left the Awareness and Detection of Developers Using Vulnerable Open-Source Software Components
https://www.sans.edu/cyber-research/shift-left-awareness-detection-developers-using-vulnerable-open-source-software-components/
Podcast Transcript
Hello and welcome to the Friday, August 8, 2025 edition of the SANS Internet Storm Center's Stormcast. My name is Johannes Ullrich, recording today from Jacksonville, Florida. This episode is brought to you by the SANS.edu Graduate Certificate Program in Incident Response.

In diaries today we have yet again one of our SANS.edu undergraduate interns writing up a little observation from their own honeypot. Duncan Woosley observed, all of a sudden, a big influx of scans from Panama. Looking at it closer, it turned out that this traffic was associated with ASN 43350. ASNs, or Autonomous System Numbers, identify the different networks connected to the internet, and ASN 43350, which is assigned to a company called NForce Entertainment, has a bit of a habit of renting out its IP address space. That of course opens it up to more suspicious and sometimes malicious uses. The traffic spiked over a couple of days in April and then again in July.

The next question that always comes up here is block lists. Last time I mentioned block lists in a diary, the question came up why I don't like them. So should you block this particular ASN? Well, maybe. It really all depends on your own network. Blocking a big scanner like this can certainly reduce the noise in your network and in your network sensors. Does it protect you from attacks? That's the part I find a little questionable as far as successful attacks go. Many of these scans are more or less random scans that are also coming from many, many other IP addresses, so just blocking that one ASN does not prevent actual exploitation, and it may block some legitimate traffic as well. That's where your own network comes in: how you feel about potentially blocking non-malicious traffic by putting in these relatively large blocks. Overall, just blocking the top 10 IP addresses seen by your own network is often not that helpful, because, as you can see in the data collected here, you basically have a spike for a day and the next day it goes away again. You're usually just chasing the peaks; you're not actually blocking any of the scans that happen after you put the block in place.
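If, after weighing those trade-offs, you do decide to block an ASN like this one, the first practical step is turning the ASN into a reviewable list of prefixes. The sketch below is one minimal way to do that; it assumes the public RIPEstat "announced-prefixes" endpoint and its current JSON layout, and the output is meant for human review, not for dropping straight into a firewall.

# Sketch: list prefixes currently announced by an ASN so they can be
# reviewed (and only then, maybe, fed into a block list).
# Assumes the RIPEstat "announced-prefixes" data API and its JSON layout.
import json
import urllib.request

ASN = "AS43350"
URL = f"https://stat.ripe.net/data/announced-prefixes/data.json?resource={ASN}"

with urllib.request.urlopen(URL, timeout=30) as resp:
    data = json.load(resp)

prefixes = [entry["prefix"] for entry in data["data"]["prefixes"]]

print(f"{ASN} currently announces {len(prefixes)} prefixes:")
for prefix in sorted(prefixes):
    print(prefix)

Keep in mind the caveat above: a list like this reduces noise at best, and the same scans will keep arriving from other networks.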
And of course this week, well, it's Hacker Summer Camp in Vegas. I'm not participating this year, but one of the papers that I really anticipated was this one about HTTP/1.1 desync attacks. It was published by PortSwigger, the people behind the famous Burp proxy, and I think the paper has some interesting insights into why you probably don't want to use HTTP/1.1 anymore, or why at the very least your end servers should support HTTP/2, and you shouldn't just rely on middleboxes like Cloudflare and other proxies to translate HTTP/2 to HTTP/1.1. That's at least the lesson I took away from this paper.

What they are really talking about is HTTP request smuggling, which refers to the fact that it can sometimes be unclear where one request ends and the next request starts. It becomes a problem in particular between middleboxes like proxies and your server, because requests from multiple users are bundled together in one connection, and then things like small modifications to the Content-Length header, where you add a space for example, can confuse systems as to how long a request actually is and where it ends.

Now, many of these attacks have been talked about quite a bit in the past, and PortSwigger shows that some of the literature goes back to the early 2000s, but they outline a number of new attacks here. One that I thought was particularly novel, or at least interesting to me, was the use of the Expect: 100-continue header. 100 Continue is a status code that you usually don't see, but in this case you can trigger it, and again, these rarely used features often cause problems, confusing servers and middleboxes as to the beginning and the end of requests. In this particular case, this can then lead to the leakage of private information like session IDs, account IDs and the like, at least in the example of Netlify that they showed.

So the real problem here is that modern architectures often set up these proxy bucket brigades, as I refer to them, where you have things like load balancers, web application firewalls and other devices forwarding requests from one to the other. The problem with HTTP/1.1 is that it has this fairly fragile system of ASCII strings and colons and such to delineate different fields. HTTP/2 is a lot more modern: it uses a binary encoding with a length and then a value, which makes it much easier to write robust parsers for HTTP/2 than for HTTP/1.1. In the translation from HTTP/2 to HTTP/1.1, some of those details sometimes get mixed up and lost, and that's what this paper is really about. They also released new tooling to detect these vulnerabilities, so if you're working with these kinds of infrastructures, particularly cloud-based systems, it is certainly something to investigate. They specifically say that web application firewalls may help with some of the impact here but are probably insufficient, and that's why they recommend enabling HTTP/2, so you stay HTTP/2 end-to-end and avoid the translation from HTTP/2 to HTTP/1.1.
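To make the framing ambiguity concrete, here is a small, purely illustrative sketch (not PortSwigger's tooling, and it never touches the network) that just prints one HTTP/1.1 request carrying both a Content-Length and a Transfer-Encoding header; the host and paths are made up.

# Illustration only: one HTTP/1.1 request with two contradictory framing
# headers. A hop that honors Content-Length thinks the body is len(body)
# bytes long; a hop that honors Transfer-Encoding sees the chunked body end
# at the empty "0" chunk and treats the trailing "GET /admin ..." bytes as
# the start of the NEXT request on the same connection.

body = (
    b"0\r\n"                      # empty chunk: end of body for a TE parser
    b"\r\n"
    b"GET /admin HTTP/1.1\r\n"    # left over on the connection -> desync
    b"X: "
)

request = (
    b"POST /search HTTP/1.1\r\n"
    b"Host: shop.example\r\n"
    b"Content-Length: " + str(len(body)).encode() + b"\r\n"
    b"Transfer-Encoding: chunked\r\n"
    b"\r\n"
) + body

print(request.decode())

Two hops that pick different framing rules will disagree about where this request ends, which is exactly the disagreement that HTTP/2's explicit, binary length fields remove when you keep HTTP/2 end to end.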
And Microsoft released an interesting bulletin today outlining a vulnerability in Microsoft Exchange Server when operated in a hybrid deployment. The issue here is that it's possible for an attacker who has admin access to the Microsoft Exchange server to pivot to your cloud environment and basically get a complete compromise of your domain. This vulnerability was addressed with a hotfix in April. However, back then the vulnerability wasn't really clearly spelled out, and the way I read the advisory, Microsoft didn't actually know about this vulnerability at the time; the April hotfix just happened to fix it. So they're now making clear that you definitely do want to apply the April update to address this particular problem. But remember, in order to exploit this vulnerability, an attacker first needs admin access to your Exchange server, which is a bad thing anyway; the escalation just makes things even worse.

And then we got some updated guidance from SonicWall regarding these presumed possible zero-day attacks that Arctic Wolf reported about. SonicWall now states that they do not believe any new vulnerability is being exploited here; this is mostly just the reuse of credentials that may have leaked via other means. They do provide basic guidance on how to secure your SonicWall setup, and they no longer recommend that you disable the SSL VPN. Just to put it in context, we have written about this a couple of times over the last month or so: we have really seen a huge spike in brute-force attempts against SonicWall in our honeypots. So it's very possible that this is just regular brute forcing, credential stuffing, and, well, basically careless configurations to some extent. However, Arctic Wolf specifically stated that they believe the credentials used in the cases they investigated were secure, meaning that multi-factor authentication was used and that the credentials were rotated after prior compromises. So it's still a little bit open here, but I tend to believe the SonicWall side of things at this point. Be careful, and definitely review your credentials, review your logs, and check the SonicWall guidance for any updates.

Well, and then it's Friday today, and with me today is another SANS.edu student to talk about their research project. Could you introduce yourself, please?

Yeah, my name is Wellington Rampazo. I recently graduated with a master's degree in information security engineering. I have been in the computer science industry for around 20 years now, working as a developer for many years and, more recently, the last five years in the information security field. For almost all of those years I have been working at Microsoft, not only in the field with customers but also on the engineering side, creating our products.

Yeah, so exciting work there, and an exciting research paper. Actually, a topic that is very close to my heart, doing a lot of web development myself and considering myself still somewhat a developer. You talked about shifting left and vulnerable open source software. What was this about?

Yeah, so the open source world is amazing, right? It helps a single individual or groups or companies to create amazing software, amazing libraries that are consumed by God knows how many products. Newtonsoft.Json is used everywhere; it's an amazing library, as an example. Usually when developers start referencing a component in their application, they just download it, start using it, and create their product, and they don't pay attention to the security of that library, what it is doing, and whether it has a security vulnerability or not. But the key part here is not only this, but the shift-left part, right? It's very common for companies to start creating their software, make the reference to the open source component, start using it, and never pay attention to updating it again. They write the whole software, deploy it as a service or ship it as a product, and vulnerabilities are found later in that component. And they don't update for many different reasons, and even if they try to update, it's kind of too late and it has a high cost for the teams and for the components. We all know about the software development lifecycle, and there is a lot of research showing that if you need to do a fix and an update after the software is ready, the cost is way higher than if you do it at the beginning of the development phase, right? So it's common in many companies: you have a product, you have a vulnerability, and you ask the developer to update to a fixed version where that vulnerability does not exist. And you're going to hear answers like, oh, I cannot do this now because I have a new feature that I have to develop in my product, so this is not a priority.
Or, no, I cannot do it now because we have an incompatibility, and it's going to have a high cost for us to adapt and be compatible with this new version. And in the end, we have services or products with security vulnerabilities impacting many, many users. So it would be amazing if a developer, when building their software or starting development, were aware right away, and not only the individual, but their team and their leadership, to help them with priorities, cost, and everything else that is part of the software development lifecycle.

Yeah, and your paper goes more into technical details here, but let's stick a little bit with the cultural part, because I think that's a really big part of what you mentioned here. What I always find, again from a developer's point of view with a little bit of security background, is that security often gets in the way. I've seen companies that utterly fail at the open source security game and spend a lot of money on it, so they basically spend a lot of effort at failing. And one problem they sometimes have is that they're too restrictive, where they say, okay, developer, you're not allowed to use open source components unless they're specifically approved, which sounds really great. And if you want to have an open source component approved, our security team will take care of that, and three or four months from now they'll tell you if you can use it. I find that just doesn't work for me as a developer. Developers, I would say, are fundamentally nice people; they want to get work done. So what they end up doing is actually something much more dangerous: they copy and paste code from an open source library. Now you have the vulnerability, but you don't know that you have the library. Where do you try to strike a balance here? How much scrutiny should you apply to an open source library before you allow your developers to use it? There has to be some kind of control around this. Is there anything that works in your experience?

Yeah, I mean, we do have great products available in the market that people and companies in general could be using. And by the way, I'm not doing any marketing for those products; I just mention them because they are actually very useful: not only GitHub with GitHub Advanced Security and Dependabot, but also Azure DevOps and even Snyk. And I'm pretty sure there are others that, when a developer downloads, references, and checks an open source component into their code, will still check whether that component has a security vulnerability and alert in the repository: hey, you have a security vulnerability, you should not be using this. Also, many package managers let you have your own private feed, so companies could be creating those private feeds and doing their own checks. I understand it has a high cost; it's not easy, so small software companies probably would not be able to run their own private feeds and check for security vulnerabilities and so on. However, using those products in the market wouldn't be bad. It probably wouldn't be perfect, but it would be very reasonable and would help them a lot.

But one problem that I see with this is that when it happens, the code is probably already written, at least some of it, and checked in. So most teams are going to look at that and say, yeah, but we have this super tight deadline. Let's deal with this later.
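As one concrete flavor of the alerting Wellington describes, the sketch below asks the public OSV.dev API whether a specific package version has known advisories; the package name and version are just examples, and a real pipeline would run this over the whole dependency tree rather than a single component.

# Sketch: ask OSV.dev whether a given open source component has known
# vulnerabilities -- the kind of check a CI step or pre-commit hook could
# run whenever a new dependency is referenced. Package/version are examples.
import json
import urllib.request

def known_vulns(name: str, version: str, ecosystem: str = "PyPI") -> list:
    query = {"version": version, "package": {"name": name, "ecosystem": ecosystem}}
    req = urllib.request.Request(
        "https://api.osv.dev/v1/query",
        data=json.dumps(query).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp).get("vulns", [])

# Example: an old requests release with published advisories.
for vuln in known_vulns("requests", "2.19.1"):
    print(vuln["id"], "-", vuln.get("summary", "(no summary)"))

A non-empty answer at check-in time is exactly the kind of signal that, as discussed next, should reach the team's leadership and not just the individual developer.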
So it would be great if not only the individual developer downloading that code at that time got the alert from the package manager, as happens in most of those cases, but also leadership. For companies that have security teams, they have their blue team, their monitoring team, and they are monitoring a lot of stuff, but they are not monitoring components being downloaded, right? So if they could monitor that and alert the leadership, hey, this dev team just downloaded a component with a vulnerability, the leadership would be aware and say, hey, hold on, we have a very tight deadline, but we cannot ship this with a security vulnerability, right? And to your point, which is not part of my research but would be a great topic: when people just copy and paste code because they cannot make a reference, as you mentioned, I'm pretty sure today we could have an AI agent checking for this and saying, hey, by the way, I see this code, it's very similar to this library, why are you doing that? Maybe give that one another look. So for people who are listening to us, if you are looking for an idea for research, you may have just gotten a nice one.

I have heard about people looking into stuff like this, particularly outdated encryption algorithms, which is relatively easy: if you have an outdated OpenSSL library, or whatever encryption library you're using, you just update the library. But on the topic of updating, another problem that I have run into is that you do have a vulnerable library, but then the developer tells you, I'm not actually using the vulnerable feature, so should I spend the effort updating? What's your take on that?

I would say yes, you should update regardless, because you may not be using it right now, but you may use it later. And I'm not a malware developer or anything like that, but I'm pretty sure that even if your code flow does not execute that line of code, someone may find a way to run that code. Also, again, I'm not doing any marketing, but we do have other tools, like CodeQL from GitHub, that people could take advantage of to detect the code flow and maybe not update right away, but put it in their backlog: hey, we need to update this as soon as possible, because we're not using that function, but that function has a vulnerability and someone can probably exploit it.
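On the question of whether the vulnerable function is actually reachable, a very rough first pass, nowhere near the data-flow analysis a tool like CodeQL performs, is simply to scan the codebase's syntax trees for references to that function; the function name and source directory below are hypothetical.

# Very rough sketch: walk a Python codebase and report any call sites that
# reference a named function from a vulnerable dependency. This is a textual
# approximation, not real reachability analysis (no data flow, no aliasing).
import ast
import pathlib

SUSPECT = "load_unsafe"        # hypothetical vulnerable function name
ROOT = pathlib.Path("src")     # hypothetical project source directory

for path in ROOT.rglob("*.py"):
    tree = ast.parse(path.read_text(encoding="utf-8"), filename=str(path))
    for node in ast.walk(tree):
        if isinstance(node, ast.Call):
            func = node.func
            name = getattr(func, "attr", None) or getattr(func, "id", None)
            if name == SUSPECT:
                print(f"{path}:{node.lineno}: call to {SUSPECT}")

Even an empty result only says the function isn't called directly, which is why the update still belongs in the backlog.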
My personal takeaway on this is: if something is vulnerable, do not use it. And if you don't know what you are downloading and don't want to even check it, don't do it. Not in your production systems, not in your library, not in your code. Maybe in a sandbox environment if you want to run your own tests and things like that, but not on your day-to-day machine, and not on your work machine for sure.

Yeah. Like I always say, don't underestimate the creativity of the attacker. They sometimes find ways to exploit things that may not be obvious initially. Any final words for developers?

Don't give up on OSS. Don't give up on your components. Keep using them, keep contributing. The software world is really amazing nowadays because of so many open source contributions. But be very careful, right? When I was in college, at the university, security was not even mentioned during my computer science degree. I believe today there should be classes, not totally focused on security, but at least something. I believe so, I hope so. If not, there is SANS, and there are so many other places where you can learn about it. But be very careful. You may think, hey, this software is not harmful, it's not doing anything. Oh, really? Are you creating software that's going to run in a hospital? That's going to run in a grocery store? Then maybe you're going to impact people's ability to get what they need. You may stop a bank so that people cannot get money, and that is harmful. The hospital, I think, would be the worst-case scenario, right? Imagine you are in an ER and you cannot receive your treatment because the software that someone created is not working because of a security vulnerability. So I always think about that: what I'm doing is impacting the world. And I'm pretty sure what you are doing is also having a big impact. So be conscious about that.

Thanks a lot. And I always think that there is no commercial versus open source software, because any commercial software you buy is probably 90% open source components just repackaged.

I can't agree more, I can't agree more. If you spend some time on GitHub, you're going to be surprised by how much there is. And one thing that I was surprised by, by the way: go to NPM, NuGet, or any package manager that gives you download statistics. You're going to be surprised by the number of people downloading that open source component, with vulnerabilities or without vulnerabilities, in a very short period of like a week or so. And you're going to be like, wow, there are a lot of people using this.

Yeah. Thank you for joining me here. Thanks, everybody, for listening. And talk to you again on Monday. Bye. Bye. Thank you.