This Week in Security: OpenSSH, JumbledPath, and RANsacked
Hackaday, Fri, 21 Feb 2025

OpenSSH has a newly fixed pair of vulnerabilities, and while neither of them are lighting the Internet on fire, these are each fairly important.

The central observation made by the Qualys Threat Research Unit (TRU) was that OpenSSH contains a code paradigm that could easily hide a logic bug. It’s similar to Apple’s infamous goto fail; SSL vulnerability. The setup is this: an integer, r, is initialized to a negative value, indicating a generic error code. Multiple functions are called, with r often, but not always, set to the return value of each function. On success, that may set r to 0 to indicate no error. And when one of those functions does fail, it often runs a goto statement that short-circuits the rest of the checks. At the end of this string of checks is a return r; statement, using the last value of r as the result of the whole function.

1387 int
1388 sshkey_to_base64(const struct sshkey *key, char **b64p)
1389 {
1390         int r = SSH_ERR_INTERNAL_ERROR;
....
1398         if ((r = sshkey_putb(key, b)) != 0)
1399                 goto out;
1400         if ((uu = sshbuf_dtob64_string(b, 0)) == NULL) {
1401                 r = SSH_ERR_ALLOC_FAIL;
1402                 goto out;
1403         }
....
1409         r = 0;
1410  out:
....
1413         return r;
1414 }

The potential bug? What if line 1401 were missing? That would mean setting r to the success return code of one function (line 1398), then using a different variable in the next check (line 1400), without re-initializing r to a generic error value. If that second check fails at line 1400, execution jumps to the return statement at the end, but instead of an error code, the success code from the intermediate check is returned. The TRU researchers arrived at this theoretical scenario purely through the code smell of this particular goto use, and used the CodeQL code analysis tool to look for any instances of this flaw in the OpenSSH codebase.
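To make the hypothetical concrete, here is a minimal Python analogue of the pattern. The stub names (put_key, encode) and error constants are invented for illustration; the real code is the C above.

```python
OK, ERR_INTERNAL, ERR_ALLOC = 0, -1, -2

def put_key(key):
    return OK              # first step succeeds, so r becomes 0

def encode(key):
    return None            # second step fails (simulated allocation failure)

def to_base64_buggy(key):
    r = ERR_INTERNAL
    r = put_key(key)
    if r != OK:
        return r
    uu = encode(key)
    if uu is None:
        # BUG: the `r = ERR_ALLOC` reset is missing here
        return r           # r is still 0 from put_key: failure reported as success
    return OK

print(to_base64_buggy("key"))   # -> 0, i.e. "success" despite the failed step
```

The function reports success precisely because the intermediate success value of r was never overwritten before the early return.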

The tool found 50 results, 37 of which turned out to be false positives, and the other 13 were minor issues that were not vulnerabilities. Seems like a dead end, but while manually auditing how well their CodeQL rules did at finding the potentially problematic code, the TRU team found a very similar case, in the VerifyHostKeyDNS handling, that could present a problem. The burning question on my mind when reaching this point of the write-up was what exactly VerifyHostKeyDNS was.

SSH uses public key cryptography to prevent Man in the Middle (MitM) attacks. Without this, it would be rather trivial to intercept an outgoing SSH connection, and pretend to be the target server. This is why SSH will warn you The authenticity of host 'xyz' can't be established. upon first connecting to a new SSH server. And why it so strongly warns that IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY! when a connection to a known machine doesn’t verify properly. VerifyHostKeyDNS is an alternative to trusting a server’s key on first connection, instead getting the cryptographic fingerprint in a DNS lookup.

So back to the vulnerability. TRU found one of these goto out; cases in the VerifyHostKeyDNS handling that returned the error code from a function on failure, but the code a layer up only checked for a -1 value. On one layer of code, only a 0 was considered a success, and on the other layer, only a -1 was considered a failure. Manage to find a way to return an error other than -1, and host key verification automatically succeeds. That seems very simple, but it turns out the only other practical error that can be returned is an out of memory error. This leads to the second vulnerability that was discovered.
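A Python sketch of that layered mismatch (the function names and error values here are illustrative, not OpenSSH’s actual ones):

```python
OK, MISMATCH, OOM = 0, -1, -2

def verify_host_key_dns(records):
    # inner layer: 0 only on a verified match, -1 on mismatch,
    # other negative values for internal errors like out-of-memory
    if records is None:
        return OOM
    return OK if records == "good-fingerprint" else MISMATCH

def host_key_trusted(records):
    # outer layer: the flawed pattern treats only -1 as failure
    return verify_host_key_dns(records) != MISMATCH

print(host_key_trusted("good-fingerprint"))  # True, as intended
print(host_key_trusted("bad"))               # False, as intended
print(host_key_trusted(None))                # True: an OOM error counts as success!
```

Any error path other than the expected -1 falls through the outer check, which is exactly the gap the out-of-memory trick exploits.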

OpenSSH has its own PING mechanism to determine whether a server is reachable, and what the latency is. When it receives a PING, it sends a PONG message back. During normal operation, that’s perfectly fine. The messages are sent and the memory used is freed. But during key exchange, those PONG packets are simply queued. There are no control mechanisms on how many messages to queue, and a malicious server can keep a client in the key exchange process indefinitely. On its own, this is a denial of service vulnerability for both the client and server side, as it can eat up ridiculous amounts of memory. But when combined with the VerifyHostKeyDNS flaw explained above, it’s a way to trigger the out of memory error, and bypass server verification.
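The unbounded queueing can be sketched in a few lines of Python (class and method names are hypothetical; the cap value in the comment is an assumption about the style of fix, not the actual patch):

```python
class SshPeer:
    def __init__(self):
        self.in_kex = True      # a malicious server keeps the peer stuck here
        self.pong_queue = []

    def on_ping(self, payload):
        if self.in_kex:
            # unpatched behavior: queue the reply with no upper bound
            self.pong_queue.append(payload)
        # else: a PONG would be sent immediately and the memory freed

peer = SshPeer()
for _ in range(10_000):
    peer.on_ping(b"x" * 1024)   # roughly 10 MB queued, and nothing stops it

print(len(peer.pong_queue))
```

The fix amounts to bounding the queue (or refusing PINGs during key exchange), so a hostile peer can no longer convert latency probes into memory pressure.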

The vulnerabilities were fixed in the 9.9p2 release of OpenSSH. The client attack (the more serious of the two) is only exploitable if your client has the VerifyHostKeyDNS option set to “yes” or “ask”. Many systems default this value to “no”, and are thus unaffected.

JumbledPath

We now have a bit more insight into how Salt Typhoon recently breached multiple US telecom providers, and deployed the JumbledPath malware. Hopefully you weren’t expecting some sophisticated chain of zero-day vulnerabilities, because so far the answer seems to be simple credential stealing.

Cisco Talos has released their report on the attacks, and the interesting parts are what the attackers did after they managed to access target infrastructure. The JumbledPath malware is a Go binary, running on x86-64 Linux machines. Lateral movement was pulled off using some clever tricks, like changing the loopback address to an allowed IP, to bypass Access Control Lists (ACLs). Multiple protocols were abused for data gathering and further attacks, like SNMP, RADIUS, FTP, and SSH. There’s certainly more to this story, like where the captured credentials actually came from, and whose conversations were actually targeted, but so far those answers are not available.

Ivanti Warp-Speed Audit

The preferred method of rediscovering vulnerabilities is patch diffing. Vendors will often announce vulnerabilities, and even release updates to correct them, without ever really diving into the details of what went wrong with the old code. Patch diffing is looking at the difference between the vulnerable release and the fixed one, figuring out what changed, and trying to track that back to the root cause. Researchers at Horizon3.ai knew there were vulnerabilities in Ivanti’s Endpoint Manager, but didn’t have patches to reverse engineer. That seems like a bummer, but it was actually serendipity, as the high-speed code audit looking for the known vulnerability actually turned up four new ones!

They are all the same problem, spread across four API endpoints, and all reachable by an unauthenticated user. The code is designed to look at files on the local filesystem, and generate hashes for the files that are found. The problem is that the attacker can supply a file name that actually resolves to an external Universal Naming Convention (UNC) path. The appliance will happily reach out and attempt to authenticate with a remote server, and this exposes the system to credential relay attacks.
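One plausible server-side guard is simply refusing UNC-style names before touching the filesystem. This is a hypothetical sketch of such a check, not Ivanti’s actual fix:

```python
def looks_like_unc(name: str) -> bool:
    # Windows accepts both \\host\share\file and //host/share/file,
    # so normalize slashes before testing for the leading double separator
    normalized = name.replace("/", "\\")
    return normalized.startswith("\\\\")

print(looks_like_unc(r"\\evil.example\share\f"))   # True: would trigger outbound auth
print(looks_like_unc("//evil.example/share/f"))    # True: same path, forward slashes
print(looks_like_unc(r"C:\logs\app.log"))          # False: an ordinary local path
```

Rejecting these names (and requiring authentication on the endpoint in the first place) removes the credential-relay lever.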

RANsacked

The Florida Institute for Cybersecurity Research has published a post and paper (PDF) about RANsacked, their research into various LTE and 5G systems. This is a challenging area to research, as most of us don’t have any spare LTE routing hardware lying around to research on. The obvious solution was to build their own, using open source software like Open5GS, OpenAirInterface, etc. The approach was to harness a fuzzer to find interesting vulnerabilities in these open implementations, and then apply that approach to closed solutions. Serious vulnerabilities were found in every target the fuzzing system was run against.

Their findings break down into three primary categories of vulnerabilities. The first is untrusted Non-Access Stratum (NAS) control messages getting handled by the “core”, the authentication, routing, and processing part of the cellular system. These messages aren’t properly sanitized before processing, leading to the expected crashes and exploits we see in every other insufficiently hardened system that processes untrusted data. The second category is the uncertainty in the protocol specifications, and the mismatch between what those specifications seem to indicate and the reality of cellular traffic. And finally, parsing of ASN.1 data is itself subject to deserialization attacks. In total, the group found a staggering 119 vulnerabilities.

Bits and Bytes

[RyotaK] at GMO Flatt Security found an interesting vulnerability in Chatwork, a popular messaging application in Japan. The desktop version of this tool is an Electron app, and it makes use of webviewTag, an obsolete Electron feature. This quirk can be combined with a dangerous method in the preload context, allowing for arbitrary remote code execution when a user clicks a malicious link in the application.

Once upon a time, Microsoft published Virtual Machines for developers to use for testing websites inside Edge and IE. Those VM images had the puppet admin engine installed, but no configuration set. And that’s not great, because in this state puppet will look for a machine named puppet on the local network, and attempt to download a configuration from there. And because puppet is explicitly designed to administer machines, this automatically results in arbitrary code execution. The VMs are no longer offered, so we’re past the expiration date on this particular trick, but what an interesting quirk of these once-official images.

[Anurag] has an analysis of the Arechclient2 Remote Access Trojan (RAT). It’s a bit of .NET malware, aggressively obfuscated, that collects and exfiltrates data and credentials. There’s a browser element, in the form of a Chrome extension that reports itself as Google Docs. This is more data collection, looking for passwords and other form fills.

Signal users are getting hacked by good old fashioned social engineering. The trick is to generate a QR code from Signal that will permit the account scanning the code to log in on another device. It’s advice some of us have learned the hard way, but QR codes are just physical manifestations of URLs, and we really shouldn’t trust them lightly. Don’t click that link, and don’t scan that QR code.

This Week in Security: The UK Wants Your iCloud, Libarchive Wasn’t Ready, and AWS
Hackaday, Fri, 14 Feb 2025

There’s a constant tension between governments looking for easier ways to catch criminals, companies looking to actually protect their users’ privacy, and individuals who just want their data to be truly private. The UK government has issued an order that threatens to drastically change this landscape, at least when it comes to Apple’s iCloud backups. The order was issued in secret, and instructed Apple to provide a capability for the UK officials to access iCloud backups that use the Advanced Data Protection (ADP) system. ADP is Apple’s relatively new end-to-end encryption scheme that users can opt-into to make their backups more secure. The key feature here is that with ADP turned on, Apple themselves don’t have access to decrypted user data.

If this order wasn’t onerous enough, it seems to explicitly include all ADP-protected data, regardless of the country of origin. This should ring alarm bells. The UK government is attempting to force a US company to add an encryption backdoor to give them access to US customer data. Cryptographer [Matthew Green] has thoughts on this situation. One of the slightly conspiratorial theories he entertains is that portions of the US government are quietly encouraging this new order because the UK has weaker protections against unreasonable search and seizure of data. The implication here is that those elements in the US would use this newfound UK data access capability to sidestep Fourth Amendment protections of citizens’ data. This doesn’t seem like much of a stretch.

[Matthew] does have a couple of suggestions. The first is passing laws that would make it illegal for a US company to add backdoors to their systems, specifically at the request of foreign nations. We’ve seen first-hand how such backdoors can backfire once accessed by less-friendly forces. In an ironic turn of fate, US agencies have even started recommending that users use end-to-end encrypted services to be safe against such backdoors. Technically, if this capability is added, the only recourse will be to disable iCloud backups altogether. Thankfully Apple has pushed back rather forcefully against this order, threatening to simply turn off ADP for UK users, rather than backdoor the rest of the world. Either way, it’s a scary bit of overreach.

GitHub Actions Can Be Dangerous

This is a bit of tag-team research between [Lupin] and [Snorlhax]. The pair went from competitors on the French HackerOne leaderboard to co-conspirators looking for bugs. And this is the story of finding the big one. The pair went searching for flaws at a specific unnamed company, and found a Docker image that contained an entire copy of some proprietary server-side code. That was certainly worthy of a bug bounty, but there was more. The .git folder hadn’t been properly scrubbed, and contained a token from a GitHub Actions run. That probably shouldn’t be a problem, as these tokens expire at the end of the run. But our protagonists found something interesting: a race condition where the Docker image gets uploaded before the action completes. (Here is Palo Alto’s independent discovery and coverage.) And that wasn’t even the big find from this research.

The big find was a quirk of Docker images. The build process creates a .npmrc file in the Docker image, which contains an npm token for publishing packages. That file is deleted as part of the Docker image finalization. But Docker images are more complicated than simple file archives. They are made up of layers, and “Each instruction in a Dockerfile (such as FROM, COPY, RUN, etc.) creates a new layer.” That’s an incredibly important concept, because Docker images are like onions: You can peel back the layers, and they can make you cry.

The build process for this Docker image did delete the .npmrc file before publishing it, but only after triggering the creation of another filesystem layer. It wasn’t obvious, but the Docker image did contain this critical npm secret, and anyone with access could publish arbitrary libraries to the company’s npm repository. That is definitely an exceptional find, and resulted in a well-earned $50,000 bounty.
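The layering pitfall can be sketched in a Dockerfile. The .npmrc detail follows the article; the base image and the BuildKit mitigation in the comment are illustrative assumptions:

```dockerfile
FROM node:20 AS build
WORKDIR /app

# Anti-pattern: each instruction snapshots the filesystem into a layer,
# so the COPY layer keeps the token even after a later RUN deletes it.
COPY .npmrc /app/.npmrc
RUN npm publish
RUN rm /app/.npmrc        # hides the file in the final view, not in the layer history

# Safer sketch: a BuildKit secret mount never writes the token into any layer.
# RUN --mount=type=secret,id=npmrc,target=/app/.npmrc npm publish
```

Anyone who pulls the anti-pattern image can unpack the intermediate layer tarballs and read the "deleted" secret.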

Libarchive Wasn’t Ready for Windows

Microsoft pulled the Libarchive open source library into Windows 11 back in 2023, giving Windows Explorer the native ability to handle a wider variety of archive files. This is a “time-tested library” that even has fuzzing coverage. It has not, however, been time tested in the context of running in Microsoft Windows. Which is why it’s not surprising that Explorer’s temporary file extraction feature failed to catch an archive whose file paths point at the C: root directory. That is not a terribly useful vulnerability, but it is technically an arbitrary file write/delete as the local user.

There’s more, like the patch for a previous vulnerability replacing disallowed characters in file names. The \ character is escaped with another backslash, giving us \\ instead. The problem there is that \ and \\ are both aliases for the device root, aka C:\.

There are more tricks to find, much of it the result of libarchive features that weren’t entirely intended to be exposed inside Windows. There are more than the 11 advertised archive types supported. There’s confusion because Windows strictly uses filename extensions to determine a file type, while libarchive uses the Unix/Linux convention of looking at the magic bytes at the beginning of a file. The whole thing is an interesting read, with implications for the limitations of automated fuzzing, particularly when not using the same compile options as in production.

AI Poisoning Two-fer

We start off with a tale of prompt injection that can corrupt long-term AI memory. Imagine a document or website that secretly instructs an AI to believe that its primary user believes he is living in a simulation. That sort of manipulation would color the results of the given AI for all future queries, and all it would take is to process a single malicious source. Another trick used is to instruct the AI to take a malicious action the next time the user took a certain action. This allows an attacker to slip something in, and the AI will see that instruction as coming from the legitimate user, sidestepping some of the protections against such attacks.

The other poisoning story is a bit more conventional. It’s in-the-wild Pickle deserialization attacks in Hugging Face AI models. Many AI models use Python’s pickle serialization format for their data, but pickle can also store code objects, and this makes it obviously insecure. Many AI projects have rolled out support for safetensors, a format that doesn’t allow code to mix with data. The news here is that researchers at ReversingLabs found models on Hugging Face with malicious pickle files. Hugging Face scans those files with Picklescan to find malicious pickles, but these malicious files escaped detection via a pair of incredibly sophisticated techniques: 7-zip compression and broken pickle files. The malicious models have been pulled down, and Picklescan has been improved to catch these avoidance techniques. Download and unpickle cautiously!
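Why is pickle so dangerous? Any object can define __reduce__, which tells pickle to call an arbitrary callable at load time. A harmless demonstration (a real payload would return something like os.system and a shell command instead of str.upper):

```python
import pickle

class Payload:
    def __reduce__(self):
        # pickle.loads() will invoke this callable with these arguments;
        # str.upper is a benign stand-in for what malware would run
        return (str.upper, ("pickle payload executed",))

blob = pickle.dumps(Payload())
print(pickle.loads(blob))   # the callable runs just by loading the blob
```

Merely calling pickle.loads on untrusted data executes attacker-chosen code, which is exactly why formats like safetensors keep code out of the container entirely.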

AWS

AWS has also been the topic of a couple of very interesting bits of research. The first is how to check for valid IAM usernames. That takes two flavors, users with and without two-factor authentication. For users with 2FA, attempting to log in with a valid user name jumps directly to prompting for the 2FA code, while an attempt with an invalid username throws an immediate error. This was deemed an acceptable risk by Amazon, and indeed is much preferred to disclosing whether the password is correct or not. On the other hand, accounts without 2FA displayed a detectable timing difference between a valid and invalid user. Both returned the same error, but an invalid username returned that error detectably faster. This was deemed an actual vulnerability and assigned a CVE.
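The timing oracle comes from doing expensive work only for accounts that exist. A self-contained simulation (the account database and iteration count are invented for the demo; this is not AWS’s code):

```python
import hashlib
import time

USERS = {"alice"}   # hypothetical registered accounts

def login(user: str, password: str) -> str:
    if user not in USERS:
        return "bad credentials"     # early return: the fast path
    # only real accounts pay for a slow password-hash comparison
    hashlib.pbkdf2_hmac("sha256", password.encode(), b"salt", 200_000)
    return "bad credentials"         # identical message, different timing

def timed(user: str) -> float:
    start = time.perf_counter()
    login(user, "guess")
    return time.perf_counter() - start

print(timed("alice") > timed("nobody"))   # the valid user is measurably slower
```

The standard fix is to run the same hash computation against a dummy value for unknown users, so both paths take comparable time.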

AWS has a public repository of virtual machine images, indexed by Amazon Machine IDs (AMIs). AMIs are plain-text identifiers, set by whoever uploads the image, and AMIs can be duplicated by different users. If that sounds like the recipe for some sort of name confusion attack, you’d be exactly correct. Any API call that references an image using the AMI without specifying the owner can be hijacked by creating another image with the same AMI. Surely this is theoretical, and never happens in practice, right? The authors wondered that, too. And to find out, they created an image using an internal Amazon AMI, just to see if it would actually get used. And it did, confirmed by Amazon itself.

Bits and Bytes

The zkLend money lending service lost $9.5 million in an interesting cryptocurrency heist. This decentralized finance system uses Ethereum smart contracts to deposit and lend money. A rounding error in one of those smart contracts allowed an attacker to siphon money off of multiple transactions. zkLend has made a public offer to the attacker that they can keep 10% of the total as a bounty, if they return the other 90%. This would be in exchange for not considering the action theft and informing law enforcement.

At least one Swatting as a Service (SaaS?) offering is finally offline. Alan Filion ran a swatting service for nearly two years, committing the potentially deadly crime a staggering 375 times. While it’s a good thing that his reign of terror has finally ended, the paltry 48-month prison sentence is shamefully short in my opinion.

Sitevision is the content management system that seems to run most of the Swedish government. And for more than two years, it’s had a really nasty footgun in the intersection between WebDav, SAML, and Java keystores. To put it simply, the https:///webdav/files/saml-keystore endpoint on multiple Sitevision sites contained the public and private keys for SAML authn requests, encrypted with a random 8-character alphanumeric password. It’s not quite the entire keys to the kingdom, but still not something you really want to leak.

I warn new Linux users all the time not to copy instructions from the Internet into their bash terminals without understanding what the commands actually do. It turns out that this is not just a Linux problem, as that’s the exact attack that North Korean attackers used against a handful of targets. “To register your device, please paste the commands below into an admin PowerShell prompt.” No thank you.

This Week in Security: Medical Backdoors, Strings, and Changes at Let’s Encrypt
Hackaday, Fri, 07 Feb 2025

There are some interesting questions afoot, with the news that the Contec CMS8000 medical monitoring system has a backdoor. And this isn’t the normal debug port accidentally left in the firmware. The CISA PDF has all the details, and it’s weird. The device firmware attempts to mount an NFS share from an IP address owned by an undisclosed university. If that mount command succeeds, binary files would be copied to the local filesystem and executed.

Additionally, the firmware sends patient and sensor data to this same hard-coded IP address. This backdoor also includes a system call to enable the eth0 network before attempting to access the hardcoded IP address, meaning that simply disabling the Ethernet connection in the device options is not sufficient to prevent the backdoor from triggering. This is a stark reminder that in the firmware world, workarounds and mitigations are often inadequate. For instance, you could set the gateway address to a bogus value, but a slightly more sophisticated firmware could trivially enable a bridge or alias approach, completely bypassing those settings. There is no fix at this time, and the guidance is pretty straightforward — unplug the affected devices.

Reverse Engineering Using… Strings

The Include Security team found a particularly terrifying “smart” device to tear apart: the GoveeLife Smart Space Heater Lite. “Smart Space Heater” should probably be terrifying on its own. It doesn’t get much better from there, when the team found checks for firmware updates happening over unencrypted HTTP connections. Or when the UART password was reverse engineered from the readily available update. It’s not a standard Unix password, just a string comparison with a hardcoded value, and as such readily visible in the strings output.

Now on to the firmware update itself. It turns out that, yes, the device will happily take a firmware update over that unencrypted HTTP connection. The first attempt at running modified firmware failed, with complaints about checksum failures. Turns out it’s just a simple checksum appended to the firmware image. The device has absolutely no protection against running custom firmware. So this leads to the natural question, what could an attacker actually do with access to a device like this?
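To see why an appended checksum is not an integrity protection, here is a toy Python model. The exact checksum algorithm on the heater is not stated in the source, so a simple byte-sum stands in:

```python
def checksum(fw: bytes) -> int:
    return sum(fw) & 0xFF            # toy stand-in for the simple appended checksum

def package(fw: bytes) -> bytes:
    return fw + bytes([checksum(fw)])

def device_accepts(image: bytes) -> bool:
    # the device only verifies that the trailing byte matches the payload
    return checksum(image[:-1]) == image[-1]

official = package(b"OFFICIAL FIRMWARE v1.2")
tampered = package(b"ATTACKER FIRMWARE")      # attacker recomputes the checksum

print(device_accepts(official), device_accepts(tampered))   # both pass
```

A checksum only detects accidental corruption; defeating tampering requires a cryptographic signature verified against a key the attacker doesn’t hold.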

The proof of concept attack was to toggle the heat control relay for every log message. In a system like this, one would hope there would be hardware failsafes that turn off the heating element in an overheat incident. Considering that this unit has been formally recalled for over 100 reports of overheating, and at least seven fires caused by the device, that hope seems to be in vain.

AMD Releases

We wrote about the mysterious AMD vulnerability a couple weeks ago, and the time has finally come for the full release. It’s officially CVE-2024-56161, “Improper signature verification in AMD CPU ROM microcode patch loader”. The primary danger seems to be malicious microcode that could be used to defeat AMD’s Secure Encrypted Virtualization-Secure Nested Paging (SEV-SNP) technology. In essence, an attacker with root access on a hypervisor could defeat this VM encryption guarantee and compromise the VMs on that system.

This issue was found by the Google Security Team, and there is a PoC published that demonstrates the attack with benign effects.

The Mirai Two-fer

The Mirai botnet seems to have picked up a couple new tricks, with separate strains now attacking Zyxel CPE devices and Mitel SIP phones. Both attacks are actively being exploited, and the Zyxel CPE flaw seems to be limited to an older, out-of-support family of devices. So if you’re running one of the approximately 1,500 “legacy DSL CPE” devices, it’s time to pull the plug. Mitel has published an advisory as well, and is offering firmware updates to address the vulnerability.

Let’s Encrypt Changes

A service many of us depend on is making some changes. Let’s Encrypt is no longer going to email you when your certificate is about to expire. The top reason is simple. It’s getting to be a lot of emails to send, and sending emails can get expensive when you measure them in the millions.

Relatedly, Let’s Encrypt is also about to roll out new six-day certificates. Sending out email reminders for such short lifetimes just doesn’t make much sense. Finally from Let’s Encrypt is a very useful new feature, the IP Address certificate. If you’ve ever found yourself wishing you didn’t have to mess with DNS just to get an HTTPS certificate, Let’s Encrypt is about to have you covered.

Bits and Bytes

There’s a Linux vulnerability in the USB Video Class driver, and CISA has issued an active exploit warning for it. And it’s interesting, because it’s been around for a very long time, and it was disclosed in a Google Android Security Bulletin. It’s been suggested that this was a known vulnerability, and was used in forensic tools for Android, in the vein of Cellebrite.

Pretty much no matter what program you’re using, it’s important to never load untrusted files. The latest application to prove this truism is GarageBand. The details are scarce, but know that versions before 10.4.12 can run arbitrary code when loading malicious images.

Ever wonder how many apps Google blocks and pulls from the app store? Apparently better than two million in 2024. The way Google stays mostly on top of that pile of malware is the use of automated tools, which now includes AI tools. Which, yes, is a bit terrifying, and has caused problems in other Google services. YouTube in particular comes to mind, where channels get content strikes for seemingly no reason, and have trouble finding real human beings at Google to take notice and fix what the automated system has mucked up.

And finally, echoing what Kee had to say on the subject, cryptocurrency fraud really is just fraud. And [Andean Medjedovic] of Canada found that out the hard way, after his $65 million theft landed him in jail on charges of wire fraud, computer hacking, and attempted extortion.

This Week in Security: DeepSeek’s Oopsie, AI Tarpits, And Apple’s Leaks
Hackaday, Fri, 31 Jan 2025

DeepSeek has captured the world’s attention this week, with an unexpected release of the more-open AI model from China, for a reported mere $5 million training cost. While there’s lots of buzz about DeepSeek, here we’re interested in security. And DeepSeek has made waves there, in the form of a ClickHouse database unintentionally opened to the world, discovered by the folks from Wiz research. That database contained chat history and log streams, and API keys and other secrets by extension.

Finding this database wasn’t exactly rocket science — it reminds me of my biggest bug bounty win, which was little more than running a traceroute and a port scan. In this case it was domain and subdomain mapping, and a port scan. The trick here was knowing to try this, and then understanding what the open ports represented. And the ClickHouse database was completely accessible, leaking all sorts of sensitive data.

AI Tarpit

Does it really grind your gears that big AI companies are training their models on your content? Is an AI crawler ignoring your robots.txt? You might need help from Nepenthes. Now before you get too excited, let’s be clear: this is a malicious software project. It will take lots of CPU cycles, and it’s explicitly intended to waste the time of AI crawlers, while also feeding gibberish into their training models.

The project takes the form of a website that loads slowly, generates gibberish text from a Markov chain, and then generates a handful of unique links to other “pages” on the site. It forms the web equivalent of an infinite “maze of twisty little passages, all alike”.
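The core idea, gibberish text from a Markov chain plus a stream of deterministic "unique" links, fits in a few lines of Python. This is a minimal sketch of the technique, not Nepenthes itself; the corpus, page length, and link scheme are invented:

```python
import hashlib
import random

CORPUS = "the quick brown fox jumps over the lazy dog and the fox runs".split()

def build_chain(words):
    # map each word to the list of words observed after it
    chain = {}
    for a, b in zip(words, words[1:]):
        chain.setdefault(a, []).append(b)
    return chain

def babble(chain, length, seed):
    rng = random.Random(seed)          # seeded, so each "page" is reproducible
    word = rng.choice(list(chain))
    out = [word]
    for _ in range(length - 1):
        word = rng.choice(chain.get(word) or list(chain))
        out.append(word)
    return " ".join(out)

def page(seed):
    text = babble(build_chain(CORPUS), 40, seed)
    # hashed link slugs look unique to a crawler, so it never runs out of pages
    links = [hashlib.sha1(f"{seed}/{i}".encode()).hexdigest()[:12] for i in range(5)]
    return text, links

text, links = page(1)
print(len(text.split()), len(links))
```

Each fetched page hands the crawler five more never-before-seen URLs, so the walk through the maze never terminates; a deliberate response delay per page multiplies the wasted crawler time.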

While the project has been a success, confirmed by the amount of time various web crawlers have spent lost inside, AI companies are aware of this style of attack, and mitigations are coming.

Check out the demo, but don’t lose too much time in there. Ars Technica has more coverage: https://arstechnica.com/tech-policy/2025/01/ai-haters-build-tarpits-to-trap-and-trick-ai-scrapers-that-ignore-robots-txt/

Is The QR Code Blue and Black?

Or is it White and Gold?

This is a really interesting bit of research happening on a Mastodon thread. The initial hack was a trio of QR codes, pointing to three different news sites, interleaved beneath a lenticular lens. Depending on the angle from which it was viewed, this arrangement led to a different site. That provoked [Christian Walther] to question whether the lens was necessary, or if some old-school dithering could pull off the same trick. Turns out that it sure can. One image, two URLs. We’d love to see this extended to QR codes that register differently under different lighting, or other fun tricks. Head over to Elliot’s coverage for more on this one.

SLAPing and FLOPing Apple

Apple’s A and M chips have a pair of recently discovered speculative execution flaws, FLOP and SLAP. That’s False Load Output Predictions and Speculation in Load Address Predictions. FLOP uses mispredicted memory contents to access data, and SLAP uses mispredicted memory addresses. The takeaway is that JavaScript running on one page can leak bytes from another web page.

Both of these attacks have their own wrinkles and complexities. SLAP has only been demonstrated in Safari, and is triggered by training the address prediction on an address layout pattern that leads into memory outside the real buffer. By manipulating Safari into loading another page in the same process as the attacker page, this can be used to leak buffer data from that other page.

FLOP is much more powerful, works in both Safari and Chrome, and is triggered by training the CPU to expect that a given load instruction will return the same data each time. This can be used in Safari to pull off a type confusion speculation issue, leading to arbitrary data leakage from any memory address on the system. In Chrome the details are a bit different, but the result is still an arbitrary memory read primitive.

The worst case scenario is that a compromised site in one tab can pull data from the rest of the system. There’s an impressive demo where a compromised tab reads data from ProtonMail running in a different tab. Apple’s security team is aware of this work, and has stated that it does not consider these attacks to be immediately exploitable as real world attacks.

Bits and Bytes

watchTowr is back with the details on another Fortigate vulnerability, and this time it’s a race condition in the jsconsole management interface, resulting in an authentication bypass and a jump straight to super_admin on the system.

Unicode continues to cause security problems, to no great surprise. Windows has a “Best-Fit” character conversion facility, which attempts to convert Unicode characters to their nearest ASCII neighbors. That causes all sorts of problems, in the usual divergent-parser-behavior way. When a security check happens on the Unicode text, but the Best-Fit conversion happens before the text is actually used, the check is neatly bypassed by the text being Best-Fit into ASCII.
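The check-then-convert pattern is easy to demonstrate. The sketch below uses Python’s NFKC normalization as a stand-in for Windows Best-Fit conversion — the mapping tables aren’t identical, but both fold fullwidth Unicode characters down to plain ASCII, and the bypass works the same way.

```python
import unicodedata

def is_blocked(text):
    # Naive security check performed on the raw Unicode input
    return "<script" in text.lower()

# U+FF1C / U+FF1E are fullwidth angle brackets. NFKC maps them to plain
# ASCII '<' and '>'; Windows Best-Fit conversions behave similarly, but
# happen inside the OS string-conversion routines.
payload = "\uFF1Cscript\uFF1Ealert(1)\uFF1C/script\uFF1E"

assert not is_blocked(payload)                     # the check passes...
converted = unicodedata.normalize("NFKC", payload)
assert is_blocked(converted)                       # ...but the sink sees ASCII
```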

And finally, Google’s Project Zero has an in-depth treatment of COM object exploitation with IDispatch. COM objects can sometimes be accessed across security boundaries, and sometimes those remote objects can be used to execute code. This coverage dives into the details of how the IDispatch interface can be used to trigger this behavior. Nifty!

This Week in Security: ClamAV, The AMD Leak, and The Unencrypted Power Grid
https://hackaday.com/2025/01/24/this-week-in-security-clamav-the-amd-leak-and-the-unencrypted-power-grid/
Fri, 24 Jan 2025 15:00:05 +0000

Cisco’s ClamAV has a heap-based buffer overflow in its OLE2 file scanning. That’s a big deal, because ClamAV is used to scan file attachments on incoming emails. All it takes to trigger the vulnerability is to send a malicious file through an email system that uses ClamAV.

The exact vulnerability is a string termination check that can fail to trigger, leading to a buffer over-read. That’s a lot better than a buffer overflow while writing to memory. That detail is why this vulnerability is strictly a Denial of Service problem. The memory read results in process termination, presumably a segfault for reading protected memory. There are Proof of Concepts (PoCs) available, but so far no reports of the vulnerability being used in the wild.

AMD Vulnerability Leaks

AMD has identified a security problem in how some of its processors verify the signature of microcode updates. That’s basically all we know about the issue, because the security embargo still isn’t up. Instead of an official announcement, we know about this issue via an Asus beta BIOS release that included a bit too much information.

I Read the Docs

There’s nothing quite as fun as winning a Capture The Flag (CTF) challenge the wrong way. The setup for this challenge was a simple banking application, with the challenge being to steal some money from the bank’s website. The intended solution was to exploit the way large floating point numbers round away small values. Deposit 1e10 dollars into the bank, and a $1000 withdrawal is literally just a rounding error.
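The rounding half of the trick is plain IEEE 754 behavior, easy to reproduce in Python. The magnitudes below are illustrative rather than the challenge’s exact numbers: at 1e20, adjacent float64 values are 16384 apart, so a $1000 withdrawal rounds away entirely.

```python
balance = 0.0
balance += 1e20           # deposit a huge (hypothetical) amount
before = balance
balance -= 1000.0         # withdraw $1000
assert balance == before  # the withdrawal is literally a rounding error:
                          # 1000 is below half the gap between adjacent
                          # float64 values at this magnitude
```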

The unintended solution was to deposit NaN dollars. In JavaScript-speak that’s the special Not a Number value used for oddball results like zero divided by zero. NaN has some other strange behaviors, like always comparing false. NaN > 0? False. NaN < 0? False. NaN == NaN? Yep, also false. And when the fake bank web app checks whether a requested withdrawal amount is greater than the amount in the account? Since the account is set to NaN, that’s false too, which totally defeats the internal bank logic. How did the student find this unintended solution? “I read the docs.” Legendary.
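Python floats follow the same IEEE 754 rules as JavaScript numbers, so the whole NaN bypass fits in a few lines (the check below is a simplified stand-in for the challenge app’s logic):

```python
nan = float("nan")

# NaN compares false against everything, including itself
assert not (nan > 0)
assert not (nan < 0)
assert nan != nan

def withdraw_allowed(amount, balance):
    # The bank's check: refuse only if the request exceeds the balance
    return not (amount > balance)

# With a NaN balance, 'amount > balance' is always False, so any
# withdrawal sails through
assert withdraw_allowed(1_000_000.0, float("nan"))
```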

Another Prompt Injection Tool

[Utku Sen] has a story and a revamped tool, and it leads to an interesting question about LLMs. The story starts with a novel LLM prompt that gives more natural-sounding responses from AI tools. LLMs have a unique problem: there is no inherent difference between pre-loaded system prompts and user-generated prompts. This leads to an attack, where a creative user prompt can reveal the system prompt. And in a case like [Utku]’s, the system prompt is the special sauce that makes the service work. He knew this, and attempted to protect against such attacks. Within an hour of releasing the tool to the public, [Utku] got a direct message on X containing his system prompts.

There’s a really interesting detail here: the prompt injection attack only worked 1 out of 11 times. This sent me down an LLM rabbit hole, asking whether LLMs are deterministic, and if not, why not. The simple answer is that the “temperature” control knob on LLMs adds some random noise to the output text. There seems to be randomness even when the LLM temperature is turned to zero, caused either by floating point errors or as a byproduct of doing batched inference. Regardless, prompt injection attacks may only work after several tries.
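For the curious, temperature is just a divisor applied to the model’s logits before softmax sampling. A toy sketch with made-up logits shows why zero temperature is greedy and higher temperatures wander (real inference stacks add further nondeterminism on top, as noted above):

```python
import math
import random

def sample(logits, temperature, rng):
    """Temperature-scaled softmax sampling over token logits."""
    if temperature == 0:
        # Zero temperature degenerates to greedy argmax
        return max(range(len(logits)), key=logits.__getitem__)
    scaled = [l / temperature for l in logits]
    m = max(scaled)                          # subtract max for stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

logits = [2.0, 1.0, 0.1]                     # made-up token scores
rng = random.Random(0)
greedy = [sample(logits, 0, rng) for _ in range(5)]    # always token 0
warm = {sample(logits, 1.5, rng) for _ in range(200)}  # several tokens appear
```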

And that brings us to the promptmap tool. It is intended to evaluate a system prompt, and launch multiple attempts to poison or otherwise inject malicious user prompts into the system. And of course, it is now capable of using the approach that successfully revealed [Utku]’s system prompt.

Cloudflare’s Unintentional GPS

There’s a really interesting unintended side effect of using Cloudflare’s CDN network: users load data from the nearest datacenter. Unique data can be served to a target user, and then the cache can be checked to leak coarse location information. This is novel research, but ultimately not all that important from a security perspective. The primary reason is that the same sort of attack has always existed and can be used to extract a much more valuable piece of user-identifying data: the user’s IP address.
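For reference, Cloudflare responses carry a CF-Ray header whose suffix is a three-letter code naming the datacenter that handled the request — the kind of signal this research builds on. A trivial parser (the ray ID below is an invented example):

```python
def colo_from_cf_ray(cf_ray_header):
    """Extract the IATA-style datacenter code from a CF-Ray header value,
    e.g. '8f2a1bc9d3e4abcd-FRA' -> 'FRA' (Frankfurt)."""
    return cf_ray_header.rsplit("-", 1)[-1]
```

Serve a unique cached resource to a target, observe which colos end up holding it in cache, and you have a rough idea of where the target is.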

The Unauthenticated, Unencrypted Radios That Control the German Power Grid

[Fabian Bräunlein] and [Luca Melette] were just looking for radio-controlled light switches, to pull off a modern take on Project Blinkenlights. What they found was the Radio Ripple Control protocol, an unauthenticated, unencrypted radio control protocol. That just happens to control about 40 Gigawatts of power generation across Germany, not to mention street lamps and other bits of hardware.

The worst-case scenario for an attacker is to turn on all of the devices that use grid power, while turning off all of the connected devices that generate power. Too much of an imbalance might even be capable of resulting in the dreaded grid-down scenario, where all the connected power generation facilities lose sync with each other, and everything has to be disconnected. Recovery from such a state would be slow and tedious. And thankfully not actually very likely. But even if this worst-case scenario isn’t very realistic, it’s still a severe vulnerability in how the German grid is managed. And fixes don’t seem to be coming any time soon.

Bits and Bytes

The Brave browser had a bit of a dishonest downloads issue, where the warning text about a download would show the URL from the referrer header. The danger is that a download may be considered trustworthy, even when it’s actually being served from an arbitrary URL.

If JavaScript in general or next.js in particular is in your security strike zone, you’ll want to check out the write-up from [Rachid.A] about cache poisoning in this particular framework, and the nice cache of security bounties it netted.

Zoom has a weird security disclosure for one of their Linux applications, and it contains a description I’ve never seen before: The bug “may allow an authorized user to conduct an escalation of privilege via network access.” Given the CVSS score of 8.8 with an attack vector of network, this should probably be called a Remote Code Execution vulnerability.

Subaru had a problem with STARLINK. No, not the satellite Internet provider, the other STARLINK. That’s Subaru’s vehicle technology platform that includes remote start and vehicle tracking features. That platform had a pair of flaws, the first allowing an attacker to reset any admin’s password. The second is that the Two Factor Authentication protection can be bypassed simply by hiding the pop-up element in the HTML DOM. Whoops! Subaru had the issues fixed in under 24 hours, which is impressive.

And finally, Silent Signal has the intriguing story of IBM’s i platform and a compatibility issue with Windows 11. That compatibility issue was Microsoft cracking down on apps sniffing Windows passwords. And yes, IBM i was grabbing Windows passwords and storing them in the Windows registry. What a trip.

This Week in Security: Rsync, SSO, and Pentesting Mushrooms
https://hackaday.com/2025/01/17/this-week-in-security-rsync-sso-and-pentesting-mushrooms/
Fri, 17 Jan 2025 15:00:56 +0000

Up first, go check your machines for the rsync version, and your servers for an exposed rsync instance. While there are some security fixes for clients in release 3.4.0, the buffer overflow in the server-side rsync daemon is the definite standout. The disclosure text includes this bit of nightmare fuel: “an attacker only requires anonymous read access to a rsync server, such as a public mirror, to execute arbitrary code on the machine the server is running on.”

A naive search on Shodan shows a whopping 664,955 results for rsync servers on the Internet. Red Hat’s analysis gives us a bit more information. The checksum length is specified by the remote client, and an invalid length isn’t properly rejected by the server. The effect is that an attacker can write up to 48 bytes into the heap beyond the normal checksum buffer space. The particularly dangerous case is also the default: anonymous access for file retrieval. Red Hat has not identified a mitigation beyond blocking access.

If you run servers or forward ports, it’s time to look at ports 873 and 8873 for anything listening. And since that’s not the only problem fixed, it’s really just time to update to rsync 3.4.0 everywhere you can. While there aren’t any reports of this being exploited in the wild, it seems like attempts are inevitable. As rsync is sometimes used in embedded systems and shipped as part of appliances, this particular bug threatens to have quite the long tail.
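An rsync daemon announces itself with an @RSYNCD: greeting as soon as you connect, which makes exposure easy to check. A quick Python probe (the hostname is a placeholder; note the greeting carries the wire protocol version, not the rsync release version, so it confirms an exposed daemon but you still need to check the release on the host itself):

```python
import socket

def parse_rsyncd_banner(banner):
    """Parse an '@RSYNCD: <version>' greeting into a version tuple,
    or return None if this isn't an rsync daemon."""
    if not banner.startswith("@RSYNCD:"):
        return None
    ver = banner.split(":", 1)[1].strip().split()[0]
    return tuple(int(p) for p in ver.split(".") if p.isdigit())

def probe(host, port=873, timeout=3):
    """Connect to a host and read the rsync daemon greeting (sketch)."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        return parse_rsyncd_banner(s.recv(64).decode("ascii", "replace"))

# Usage sketch: probe("mirror.example.org") returns a protocol version
# tuple if an rsync daemon answers, None if something else is listening.
```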

My Gmail is My Passport, Verify Me

Here’s an interesting question. What happens to those “Log In With Google” accounts that we all have all over the Internet when a domain changes hands? And no, we’re not talking about gmail.com. We’re talking about myfailedbusiness.biz, or any custom domain that has been integrated with a Google Workspace. The business fails, the domain lapses back to unclaimed, someone else purchases it, and re-adds the admin@myfailedbusiness.biz Google Workspace account. Surely that doesn’t register as the same account for the purpose of Google SSO, right?

To answer that question, look at what actually happens when a user logs in with Google OAuth. The service sends a message to Google, asking Google to identify the user. Google asks the user for confirmation, and if granted, sends an ID token back to the service. That token contains three fields that are interesting for this purpose. The domain and email are straightforward, and importantly make no distinction between the original and new users. So when the domain and email change hands, so does ownership of the token.

OAuth does provide a sub (subject) field, which is a unique identifier for a given user/service combination. Seems like that solves the issue, right? The problem is that while that identifier is guaranteed to be unique, it’s not guaranteed to be consistent, and thus isn’t widely used as a persistent user identifier. Google is aware of the issue, and while they initially closed it as a “Won’t fix” issue, the concept did eventually earn [Dylan Ayrey] a nifty $1337 bounty and a promise that Google is working on unspecified fixes. There is no immediate solution, and it’s not entirely clear that this is strictly a Google problem. Other SSO solutions may have the same quirk.
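The failure mode is easy to see with mocked-up token claims. Everything below is invented for illustration (the sub values in particular are made up); the point is only which fields a service keys its accounts on.

```python
# Hypothetical decoded Google ID-token claims for two different people
# who held the same domain at different times
old_owner = {"hd": "myfailedbusiness.biz",
             "email": "admin@myfailedbusiness.biz",
             "sub": "110169484474386276334"}
new_owner = {"hd": "myfailedbusiness.biz",
             "email": "admin@myfailedbusiness.biz",
             "sub": "214365879021436587902"}

def user_key_fragile(claims):
    # Common practice: key accounts on domain + email. This key survives
    # the domain changing hands, so the new owner inherits the account.
    return (claims["hd"], claims["email"])

def user_key_robust(claims):
    # Keying on 'sub' would distinguish the two owners -- if services
    # trusted it to be stable, which per the write-up they largely don't.
    return claims["sub"]

assert user_key_fragile(old_owner) == user_key_fragile(new_owner)
assert user_key_robust(old_owner) != user_key_robust(new_owner)
```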

Fortigate Under Attack

FortiGuard has reported that a vulnerability in FortiOS and FortiProxy is under active exploitation. FortiGuard lists quite a few Indicators of Compromise (IoCs), but as far as the nature of the vulnerability goes, all we know is that it is an authentication bypass in a Node.js websocket module that allows a remote attacker to gain super-admin privileges. Yoiks.

Arctic Wolf has more details on the exploit campaign, which was first found back in early December, but appears to have begun with widespread scanning for the vulnerability as early as November 16. Attackers moved slowly, with the goal of establishing VPN access into the networks protected behind the vulnerable devices. Arctic Wolf has provided additional IoCs, so it’s time to go hunting.

Ivanti Connect, Too

There’s another security device under attack this week, as watchTowr labs has yet another fun romp through vendor mis-security. This time it’s a two-part series on Ivanti Connect Secure, and the two buffer overflows being used in the wild.

Ivanti has already released a patch, so the researchers ran a diff on the strings output of the patched and unpatched binary of interest. Three new error messages are in the new version, complaining about client data exceeding a size limit. The diaphora binary diffing tool found some interesting debugging data, like Too late for IFT_PREAUTH_INIT. “IF-T” turns out to be an open VPN standard, and that term led to a statement about backwards compatibility in Ivanti code that had terrible “code smell”.

The IF-T protocol includes the optional clientCapabilities field, and Ivanti’s implementation used a fixed-length buffer to store it when parsing incoming connections. The parsing code almost gets it right, using a strlen() check on the data and strncpy() to ensure the right number of bytes are copied. Except both of those best practices are completely useless when the result from strlen() is fed directly into strncpy() as the maximum byte count, without ever checking it against the size of the destination buffer.
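The flawed pattern translates to any language. Here’s a Python simulation of the logic error — bounding the copy by the *source* length instead of the destination capacity — where Python’s bounds checking raises an exception at exactly the point C’s strncpy() would scribble past the heap buffer:

```python
BUF_SIZE = 16

def copy_flawed(dest, src):
    # Mirrors strncpy(dest, src, strlen(src)): the bound is the source
    # length, which says nothing about the destination's capacity
    n = len(src)
    for i in range(n):
        dest[i] = src[i]        # IndexError here = the heap overflow in C

def copy_fixed(dest, src):
    # Clamp to the destination capacity, leaving room for a NUL terminator
    n = min(len(src), len(dest) - 1)
    for i in range(n):
        dest[i] = src[i]

buf = bytearray(BUF_SIZE)
copy_fixed(buf, b"A" * 100)     # silently truncates: safe

try:
    copy_flawed(bytearray(BUF_SIZE), b"A" * 100)
    overflowed = False
except IndexError:              # C would corrupt adjacent memory instead
    overflowed = True
```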

The second watchTowr article goes through the steps of turning the vulnerability into a real exploit, but doesn’t actually give away any exploit code. Which hasn’t really mattered, as Proof of Concepts (PoCs) are now available. The takeaway is that Ivanti still has security problems with their code, and this particular exploit is both fully known, and being used in the wild.

Pentesting Mushrooms

The folks at Silent Signal have an off-the-beaten-path write-up for us: How to get hired as a pentester. Or alternatively, the story of hacking Mushroom Inc. See, they built an intentionally vulnerable web application, and invited potential hires to find flaws. This application included cross-site scripting potential, SQL injection, and bad password handling, among other problems. The test was to take 72 hours, and find and document problems.
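Taking SQL injection as an example from that list: it comes down to concatenating user input into a query instead of parameterizing it. A self-contained sqlite3 sketch (the table and credentials are hypothetical, not from the Mushroom Inc. app):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, password TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login_vulnerable(name, password):
    # String concatenation: attacker-controlled input becomes SQL
    q = (f"SELECT COUNT(*) FROM users WHERE name = '{name}' "
         f"AND password = '{password}'")
    return db.execute(q).fetchone()[0] > 0

def login_safe(name, password):
    # Parameterized query: input is data, never SQL
    q = "SELECT COUNT(*) FROM users WHERE name = ? AND password = ?"
    return db.execute(q, (name, password)).fetchone()[0] > 0

payload = "' OR '1'='1"                 # classic bypass payload
assert login_vulnerable("alice", payload)   # in without a password
assert not login_safe("alice", payload)     # parameterized query refuses
```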

Part of the test was to present the findings, categorize each vulnerability’s severity, and even make recommendations for how the fictional business could roll out fixes. Along the way, we get insights on how to get your job application dismissed, and what they’re really looking for in a hire. Useful stuff.

Bits and Bytes

Secure Boot continues to be a bit of a problem. Microsoft signed a UEFI application that in turn doesn’t actually do any of the Secure Boot validation checks. This is only an issue after an attacker has admin access to a machine, but it does completely defeat the point of Secure Boot. Microsoft is finally rolling out fixes, revoking the signature on the application.

And if compromising Windows 11 is of interest to you, HN Security has just wrapped a four-part series that covers finding a vulnerability in an old Windows kernel driver, and turning it into a real read/write exploit that bypasses all of Microsoft’s modern security hardening.

Do you have a website, and are you interested in how your API is getting probed? Want to mess with attackers a bit? You might be interested in the new baitroute tool. Put simply, it’s a honeypot for web APIs.

And finally, the minds behind Top10VPN have disclosed another vulnerability, this time in tunneling protocols like IPIP, GRE, and 6in4. The problem is a lack of validation on incoming tunnel packets. This allows for easy traffic injection, and for using the tunnel servers as easy proxies. One of the worst cases is where this flaw allows accessing an internal network protected behind a consumer router.

This Week in Security: Backdoored Backdoors, Leaking Cameras, and The Safety Label
https://hackaday.com/2025/01/10/this-week-in-security-backdoored-backdoors-leaking-cameras-and-the-safety-label/
Fri, 10 Jan 2025 15:00:18 +0000

The mad lads at watchTowr are back with their unique blend of zany humor and impressive security research. And this time, it’s the curious case of backdoors within popular backdoors, and the list of unclaimed domains that malicious software would just love to contact.

OK, that needs some explanation. We’re mainly talking about web shells here. Those are the bits of code that get uploaded to a web server to provide remote access to the machine. The typical example is a web application that allows unrestricted uploads. If an attacker can upload a PHP file to a folder where .php files are used to serve web pages, accessing that endpoint runs the arbitrary PHP code. Upload a web shell, and accessing that endpoint gives a command line interface into the machine.
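The standard defense against that upload path, sketched here in Python and purely illustrative, is to allowlist file extensions and reject double extensions like cat.php.png outright (real deployments also store uploads outside the web root and serve them without execute handling):

```python
from pathlib import PurePosixPath

# Only extensions the application actually expects; an allowlist is far
# safer than trying to blocklist every executable type
ALLOWED = {".png", ".jpg", ".jpeg", ".gif", ".pdf"}

def upload_permitted(filename):
    """Reject any filename carrying a non-allowlisted extension,
    including double extensions like 'shell.php.png'."""
    suffixes = PurePosixPath(filename.lower()).suffixes
    return bool(suffixes) and all(s in ALLOWED for s in suffixes)

assert upload_permitted("cat.png")
assert not upload_permitted("shell.php")
assert not upload_permitted("shell.php.png")
```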

The quirk here is that most attackers don’t write their own tools. And oftentimes those tools have special, undocumented features, like loading a zero-size image from a .ru domain. The webshell developer couldn’t be bothered to actually do the legwork of breaking into servers, so instead added this little dial-home feature to report on where to find all those newly backdoored machines. Yes, many of the popular backdoors are themselves backdoored.

This brings us to what watchTowr researchers discovered — many of those backdoor domains were either never registered, or the registration has been allowed to expire. So they did what any team of researchers would do: buy up all the available backdoor domains, set up a logging server, and just see what happens. And what happened was thousands of compromised machines checking in at these old domains. Among the 4,000+ unique systems were a total of four .gov domains from governments in Bangladesh, Nigeria, and China. It’s an interesting romp through old backdoors, and a good look at the state of still-compromised machines.

The Cameras are Leaking

One of the fun things to do on the Internet is to pull up some of the online video feeds around the world. Want to see what Times Square looks like right now? There’s a website for that. Curious how much snow is on the ground in Hokkaido? Easy to check. But it turns out that there are quite a few cameras on the Internet that probably shouldn’t be. In this case, the focus is on about 150 license plate readers around the United States that expose both the live video stream and the database of captured vehicle data to anyone on the Internet who knows where and how to look.

This discovery was spurred by [Matt Brown] purchasing one of these devices, finding how easy they were to access, and then checking a service like Shodan for matching 404 pages. This specific device was obviously intended to be located on a private network, protected by a firewall or VPN, and not exposed to the open Internet. This isn’t the first time we’ve covered this sort of situation, and it suggests an extension to Murphy’s Law. Maybe I’ll refer to it as Bennett’s Law: if a device can be put on the public Internet, someone somewhere inevitably will do so.

Some related research is available from RedHunt Labs, who did a recent Internet scan on port 80, and the results are a bit scary. 42,000,000 IP addresses, about 1% of the IPv4 address space, are listening on port 80. There are 2.1 million unique favicons, and 87% of those IPs actually serve plain HTTP connections without automatically redirecting to an HTTPS port. The single most common favicon is from a Hikvision IP camera, with 674,901 IPs exposed.

The Big Extension Compromise

One of the relatively new ways to deploy malicious code is to compromise a browser extension. Users of the Cyberhaven browser extension received a really nasty present, as a malicious update was pushed this Christmas. The Cyberhaven extension is intended to detect and block data exfiltration attempts in the browser, and as such it has very wide permissions to read page content. The malicious addition looked for API keys in the browser session, and uploaded cookies for visited sites to the attacker. Interestingly, the attack seemed to be targeted specifically at OpenAI credentials and tokens.

This started with an OAuth phishing attack, where an email claimed the extension was in danger of removal: just log in with your Chrome developer account for details. A Cyberhaven employee clicked through the email, and accidentally gave the attackers permission to push updates to the extension. This isn’t the only extension that was targeted, and there are other reports of similar phishing emails. This appears to be a broader campaign, with the first observed instance in May of 2024, and some of the affected extensions used similar techniques. So far, just over 30 extensions have been discovered to be compromised in this way.

And while we’re on the topic of browser extensions, [Wladimir Palant] discovered the i18n trick that sketchy browser extensions use to show up in searches like this one for Wireguard.

The trick here is internationalization, or i18n. Every extension has the option to translate its name and description into 50+ languages, and when anyone searches the extension store, the search term can match on any of those languages. So unscrupulous extension developers fill the less common languages with search terms like “wireguard”. Google has indicated to Ars Technica that it is aware of this problem, and plans to take action.
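For context, a Chrome extension ships its translations as _locales/&lt;locale&gt;/messages.json files, and the store search indexes every locale. A hypothetical stuffed description entry for an uncommon locale (the key name here is illustrative) would look something like:

```json
{
  "extensionDescription": {
    "message": "wireguard vpn wireguard config free wireguard client"
  }
}
```

The English-language listing stays clean, while the stuffed locale quietly matches searches for the impersonated product.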

Safety Labels

The US has announced the U.S. Cyber Trust Mark, a safety label that indicates that “connected devices are cybersecure”. Part of the label is a QR code that can be scanned to find information about the support timeline of the product, as well as information on automatic updates. Some elements of this program are obviously good ideas, like doing away with well-known default passwords. Will the Cyber Trust Mark actually make headway toward more secure devices, or will it be just another bit of visual clutter on our device boxes? Time will tell.

Bits and Bytes

SecureLayer7 has published a great little tutorial on using Metasploit to automatically deploy known exploits against discovered vulnerabilities. If Metasploit isn’t in your bag of tricks yet, maybe it’s time to grab a copy of Kali Linux and try it out.

Amazon, apparently, never learns, as Giraffe Security scores a hat trick. The vulnerability is Python pip’s “extra-index-url” option preferring to pull packages from PyPI rather than the specified URL. It’s the footgun that Amazon just can’t seem to avoid baking right into its documentation. Giraffe has found this issue twice before in Amazon’s documentation and package management, and in 2024 found it a third time for the hat trick.

It seems that there’s yet another way to fingerprint web browsers, in the form of dynamic CSS features. This is particularly interesting in the context of the Tor Browser, which turns off JavaScript support in an effort to be fully anonymous.

And finally, there seems to be a serious new SonicWall vulnerability that has just been fixed. It’s an authentication bypass in the SSLVPN interface, and SonicWall sent out an email indicating that this issue is considered likely to be exploited in the wild.
