Podcast Detail

SANS Stormcast Friday Mar 7th: Chrome vs Extensions; Kibana Update; PrePw0n3d Android TV Sticks; Identifying APTs (@sans_edu, Eric LeBlanc)

If you are not able to play the podcast using the player below: Use this direct link to the audio file: https://traffic.libsyn.com/securitypodcast/9354.mp3

Latest Google Chrome Update Encourages uBlock Origin Removal
The latest update to Google Chrome not only disables the uBlock Origin ad blocker, but also guides users to uninstall the extension instead of re-enabling it.
https://chromereleases.googleblog.com/2025/03/stable-channel-update-for-desktop.html
https://www.reddit.com/r/youtube/comments/1j2ec76/ublock_origin_is_gone/

Critical Kibana Update
Elastic published a critical Kibana update patching a prototype pollution vulnerability that would allow arbitrary code execution for users with the "Viewer" role.
https://discuss.elastic.co/t/kibana-8-17-3-security-update-esa-2025-06/375441

Certified PrePw0n3d Android TV Sticks
Wired is reporting on over a million Android TV sticks that were found to be pre-infected with adware.
https://www.wired.com/story/android-tv-streaming-boxes-china-backdoor/

SANS.edu Research Paper
Advanced Persistent Threats (APTs) are among the most challenging to detect in enterprise environments, often mimicking authorized privileged access prior to their actions on objectives.
https://www.sans.edu/cyber-research/identifying-advanced-persistent-threat-activity-through-threat-informed-detection-engineering-enhancing-alert-visibility-enterprises/

Podcast Transcript

 Hello and welcome to the Friday, March 7th, 2025
 edition of the SANS Internet Storm Center's Stormcast. My
 name is Johannes Ullrich and today I'm recording from
 Baltimore, Maryland. Well, we all know it's important to
 keep your browsers up to date. Last week, Google did release
 a new update for Google Chrome. Unfortunately, they're
 sort of doubling down on getting rid of older
 extensions that are still using version 2 of their
 manifest. The problem here is that these older extensions
 had more privileges to interact with Chrome, which
 Google no longer wants to allow. However, there are also
 some beneficial extensions that took advantage of this
 access. One that's very vocal here is uBlock Origin. uBlock
 Origin, now in this latest update, is automatically being
 deactivated. And then if a user is trying to manage their
 extensions, well, they're kind of pushed in the direction of
 actually uninstalling and removing this extension. The
 problem is, well, you don't actually have to remove it.
 You are able to reactivate it for now. Just that Google
 doesn't make that very obvious. This has been an
 ongoing battle between sort of uBlock Origin and Google. Not
 sure if uBlock Origin could come up with other
 ways to do its work and block advertisements. Of
 course, one of the suspicions here is that Google's reliance
 on advertisement revenue makes them more likely to actually
 prevent users from running these types of extensions in
 their browsers. And we have some critical updates to talk
 about. First of all, Kibana. Kibana, of course, is also
 part of our Honeypot scene. It's the popular dashboard for
 Elasticsearch. And it suffers from a prototype pollution
 vulnerability that could allow arbitrary code execution. In
 order to exploit the vulnerability, an attacker
 would have to have access as viewer to the dashboard. Now,
 this is, of course, a low-privilege account usually and
 often provided without a password or with a well-known
 password just to allow users to, for example, review a
 particular public dashboard. So, update. There's also a
 quick workaround that you can enable. I'll link to the
 advisory from Kibana in the show notes. And Wired has a
 story, well, that we have sadly seen before. And that's
 Android TV devices that are showing up with
 pre-installed backdoors. These are sort of commonly known
 as TV stick devices. Usually, just a little HDMI
 plug that you plug into your TV. Maybe some USB for a power
 supply. But, yes, these devices come with adware and
 the like pre-installed. And apparently, just another batch
 has been found with about a million or so compromised
 devices. Not too much, really, you can do about this other
 than, well, be careful where you're buying these devices
 from. And also probably not necessarily going to the
 cheapest device out there from any kind of no-name supplier.
 Well, and it's Friday. And I do have another SANS.edu
 student to interview here. Eric, could you introduce
 yourself, please? Sure. My name is Eric LeBlanc. I'm a
 senior cybersecurity engineer at the U.S. Strategic
 Petroleum Reserve. And my paper was on a new technique
 that I developed for attempting to detect advanced
 persistent threat actors within an environment. Yeah.
 So advanced persistent threat actors, of course, one thing
 they try to do to stay persistent is not to get
 detected. So what was the new technique that you came up
 with there? So here I created a thing that I call meta
 detection. So it examines your detections over a longer
 period of time than you might traditionally look at for
 detections. It works similar to how risk-based alerting
 works in that you're looking at things over time. However,
 specifically what I was looking at was tracking things
 using the MITRE ATT&CK framework and specifically
 known TTPs that have been used by specific threat actor
 groups. So effectively trying to identify known
 actors that might be specifically looking for a
 given enterprise. So it requires you to go through a
 good threat modeling process to know what actors might
 actually be targeting you, as well as understanding their
 tradecraft and what they're doing. But specifically, it's
 looking at how they have acted in historic intrusions and
 trying to connect dots within your environment. Okay, cool.
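
 As a rough, hypothetical sketch of the meta-detection idea
 as described here (not the paper's actual implementation),
 the correlation logic might look like the following. The
 detection record fields, the one-year window, the overlap
 threshold, and the technique IDs in the actor profile are
 all assumptions made for the example.

# Sketch: correlate previously fired detections, annotated with
# MITRE ATT&CK technique IDs, against a known actor's TTP profile.
# Each detection record is assumed to carry a host, a technique ID,
# and a datetime timestamp.
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical TTP profile for a tracked actor (illustrative IDs only).
ACTOR_PROFILES = {
    "APT29-like": {"T1059.001", "T1021.006", "T1087.002", "T1018"},
}

def meta_detect(detections, window_days=365, min_overlap=3):
    """Flag hosts whose accumulated detections overlap an actor profile."""
    cutoff = datetime.utcnow() - timedelta(days=window_days)
    observed = defaultdict(set)  # host -> technique IDs seen in the window
    for d in detections:
        if d["time"] >= cutoff:
            observed[d["host"]].add(d["technique_id"])
    alerts = []
    for host, techniques in observed.items():
        for actor, profile in ACTOR_PROFILES.items():
            matched = techniques & profile
            if len(matched) >= min_overlap:
                alerts.append({"host": host, "actor": actor,
                               "matched": sorted(matched)})
    return alerts
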
 So let's make it a little bit more specific. What was one of
 the threat actors that you selected there? Yeah, so
 specifically, I was looking at APT29 for the experiment that
 I conducted because they're a fairly well-known and
 well-researched entity already. And we had a very large
 history to pull from and look at for their potential
 techniques. And it was interesting. In my
 experiments, it proved to be as effective as risk-based
 alerting, which was a bit of a surprise to me. Yeah, and so
 what was one of the indicators or TTPs that you sort of
 looked for here? Yeah, so specifically, we were looking
 at things that they'd done historically. So we were
 looking at some of the same lateral movement techniques
 that they were using, like via remote management or Windows
 administration tools, things like that. We were looking for
 all of these various sub-techniques that they had done
 in the past, but we were looking to connect them within
 the environment over a history. So basically, one of
 the problems we had found with risk-based alerting is that
 you need to set that threshold because there's going to be
 some amount of noise. Administration tools look a
 lot like hacker tools many times. So basically, a problem
 can be filtering the noise from what is normal behavior.
 One of the things that I found was that normal behavior
 doesn't strictly follow the known plan of an APT actor in
 many cases. So it was easier to say, okay, well, I see this
 execution activity. And then later, weeks later, okay,
 well, this host I saw execution activity on ended up
 attempting lateral movement or attempting to do reconnaissance
 or something like that by unknown method. So like say it
 was, they were doing SMB enumeration or they were doing
 something more specific that is not routinely done within
 the environment. And linking those things together through
 a series of detections that are looking at historical data
 from within the environment. Yeah, so basically, not them
 exploiting it or doing lateral movement right away, but
 waiting then for this. And yeah, that's certainly
 something that APTs sometimes tend to do as they sort of
 figure out what they have and don't have. The one challenge
 that I can sort of think of, the alarm bell that sort of
 goes off right away is how do you deal with all the data
 over that amount of time? So in this instance, I am both
 gifted and cursed by working in a federal environment. So
 one of the big OMB memos that came out in recent years
 involved logging and log retention periods. So by OMB
 rule, we have to maintain a minimum of one year of logs in
 hot storage so that we have that available to look
 over, as well as an additional 18 months in archive storage.
 So at any given time within the environment that I was
 testing in, we have a whole year of logs to look at. So
 it's very pricey. It's a luxury that not a lot of
 entities have. However, per regulatory orders, we have to
 have that. So within federal environments, at least, you're
 going to have those logs. Yeah. And also the ability to
 search then, because that's the other cost factor here.
 And storage is cheap to some extent, but fast storage that
 you can actually search with queries like this. Were you
 able to do some sort of pre -filtering or such to speed
 that up or any sort of data management procedures that
 helped here? Sure. So in this case, we were querying across
 the previous detections specifically. So we're not
 looking at all of the underlying log data. We're
 looking at the records generated from previously
 fired detections. So all of that data is pretty small and
 pretty easily searched through. Specifically, we were
 annotating within the detection field saying, okay,
 using MITRE technique IDs and also the kill chain phase that
 a given detection corresponds to. So from there, you can
 say, show me all of the detections that fired with
 actions on objectives as the kill chain phase. And it would
 then filter out based on whatever your query was,
 whether you were looking at an individual host, you were
 looking at the whole environment, things like that.
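
 As a concrete (and hypothetical) illustration of that kind
 of query over previously fired detections, a minimal sketch
 might look like this; the field names such as
 kill_chain_phase and the sample record are assumptions, not
 the actual Splunk Enterprise Security schema.

# Sketch: filter previously fired detections by their annotations.
def query_detections(detections, kill_chain_phase=None, host=None):
    """Return detections matching the annotated kill chain phase and/or host."""
    results = []
    for d in detections:
        if kill_chain_phase and d.get("kill_chain_phase") != kill_chain_phase:
            continue
        if host and d.get("host") != host:
            continue
        results.append(d)
    return results

# Example: all detections annotated with "Actions on Objectives",
# across the whole environment (no host filter).
sample = [{"host": "srv01", "technique_id": "T1021.006",
           "kill_chain_phase": "Actions on Objectives"}]
print(query_detections(sample, kill_chain_phase="Actions on Objectives"))
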
 So in this case specifically, I was working within Splunk
 Enterprise Security. You can do the same thing with other
 SIEMs as well. I know with SOF-ELK, you can have annotations for
 your detections. And yeah, it's not unique to that
 specific tool, but it was one that we're using in our
 environment. Yeah. Give us a little bit of an idea of the
 scalability and practicality of this. Can you say
 approximately like, you know, how many logs or how many
 endpoints or such you're monitoring there? Yeah. So for
 this environment, we were looking at approximately, I
 think it was around 3,000 to 4,000 endpoints total between
 end user devices, networking equipment, servers, all that.
 So it's not a small environment. But I would say I
 think our total number of users is around 1,200 to 1,300
 end users. So not a small business for sure. But it's
 not necessarily the size of some, you know, worldwide
 enterprises. But in theory, the logic should scale because
 you're looking at previously detected events. So you're not
 having to pore through, you know, terabytes of actual log
 files. You're really only looking at previous
 detections. So what might be considered either an event of
 interest or, you know, choose your favorite vocabulary for
 that. But yeah, so you're only looking at sort of the
 summarized events. And what's the detection delay then?
 Like, you know, basically after the event happened, how
 long would it take? Like, do you run these queries daily,
 hourly, or? Yeah, so as part of the experiment, I was
 running these queries every 15 minutes.
 So they run very quickly. It only took, I think, maybe a
 couple seconds for the actual query to run through. So I was
 doing them periodically. Over a large enterprise, you may
 want to scale that up some. It depends on the sensitivity
 that you're looking for. But because you're going over
 summarized data, it's much faster. Yeah, and I guess in a
 large enterprise, you would also not run it necessarily
 over the entire enterprise, but some department, some
 particular enclave or something like that. Yeah,
 something that you've identified as part of your,
 you know, crown jewels analysis or something like
 that. Yeah, so that's really cool. You're using this right
 now in your day-to-day job? Yes. Yeah, so great. The link
 to the paper will be added to the show notes. So if
 anybody's interested in any more details here and how this
 all exactly worked, any final words, Eric, anything to give
 people on their way to implement this? Sure. So don't
 be daunted by how much goes into it, like how much needs to
 be done beforehand, because this really isn't something
 that I would expect to work early on in the maturity
 process for an environment. You need a whole lot of base
 level skills to make this work. You need to have an
 active threat intelligence practice, both consuming and
 producing. You need to have very mature detection
 engineering that you know that these individual detections
 that are firing have good fidelity and have already been
 tuned well. It takes a lot, and it is a journey. So
 there's a lot of underlying assumptions there that are
 required, and the only way to get there is to actually do
 the work. Yeah, great, and thanks for being here. Thanks
 for everybody listening, and talk to you again on Monday.
 Bye.